path: root/rts/Messages.c
* rts/Messages: Refactor (Ben Gamari, 2022-12-16, 1 file, -27/+20)

  This doesn't change behavior but makes the code a bit easier to follow.

* Corrected types of thread ids obtained from the RTS (Mann mit Hut, 2021-10-06, 1 file, -9/+9)

  While the thread ids had been changed to 64-bit words in
  e57b7cc6d8b1222e0939d19c265b51d2c3c2b4c0, the return type of the foreign
  import function used to retrieve these ids - namely
  'GHC.Conc.Sync.getThreadId' - was never updated accordingly. To fix that,
  this function now returns a 'CULLong'. In addition, the types used in the
  thread labeling subsystem were adjusted as well, and several format strings
  were modified throughout the whole RTS to display thread ids in a
  consistent and correct way.

  Fixes #16761

* Introduce stack snapshotting / cloning (#18741) (Sven Tennie, 2021-09-23, 1 file, -1/+7)

  Add `StackSnapshot#` primitive type that represents a cloned stack
  (StgStack). The cloning interface consists of two functions that clone
  either the thread's own stack (cloneMyStack) or another thread's stack
  (cloneThreadStack).

  The stack snapshot is offline/cold, i.e. it isn't evaluated any further.
  This is useful for analyses as it prevents concurrent modifications.

  For technical details, please see Note [Stack Cloning].

  Co-authored-by: Ben Gamari <bgamari.foss@gmail.com>
  Co-authored-by: Matthew Pickering <matthewtpickering@gmail.com>

* rts/Messages: Add missing write barrier in THROWTO message update (GHC GitLab CI, 2020-11-29, 1 file, -1/+1)

  After a THROWTO message has been handled, the message closure is
  overwritten by a NULL message. We must ensure that the original closure's
  pointers continue to be visible to the nonmoving GC.

* rts: Make write of to_cap->inbox atomic [wip/tsan/sched] (Ben Gamari, 2020-10-24, 1 file, -1/+1)

  This is necessary since emptyInbox may read from to_cap->inbox without
  taking cap->lock.

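  As a rough illustration of why an unsynchronized store here is a data race
  and how an atomic store avoids it, here is a minimal C11 sketch; the struct
  layout and helper names are simplified assumptions, not the RTS's actual
  definitions (the RTS expresses this with its own RELAXED_STORE/ACQUIRE_LOAD
  style macros).

  ```
  #include <stdatomic.h>
  #include <stddef.h>

  /* Simplified stand-ins for Message and Capability. */
  typedef struct Message_ { struct Message_ *link; } Message;
  typedef struct { _Atomic(Message *) inbox; } Capability;

  /* Sender side: runs under to_cap->lock, but because the reader below does
     not take the lock, the store itself must still be atomic. */
  static void pushInbox(Capability *to_cap, Message *msg)
  {
      atomic_store_explicit(&to_cap->inbox, msg, memory_order_relaxed);
  }

  /* Receiver side: an emptyInbox-style check that takes no lock. */
  static int inboxIsEmpty(Capability *cap)
  {
      return atomic_load_explicit(&cap->inbox, memory_order_relaxed) == NULL;
  }
  ```
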
* rts/Messages: Annotate benign race (Ben Gamari, 2020-10-24, 1 file, -0/+3)

* rts/Messages: Drop incredibly fishy write barrier (Ben Gamari, 2020-10-24, 1 file, -2/+0)

  executeMessage previously had a write barrier at the beginning of its loop,
  apparently in an attempt to synchronize with another thread's writes to the
  Message. I would guess that the author had intended to use a load barrier
  here, given that there are no globally-visible writes done in
  executeMessage.

  I've removed the redundant barrier since the necessary load barrier is now
  provided by the ACQUIRE_LOAD.

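  A hedged sketch of the acquire-load pattern referred to above, in C11
  terms; the message layout and tag values are invented for illustration
  only.

  ```
  #include <stdatomic.h>
  #include <stddef.h>

  typedef struct {
      _Atomic(unsigned) tag;   /* message kind, written last by the sender */
      void *payload;           /* message contents, written first          */
  } MessageSketch;

  /* Sender: fill the payload, then publish the tag with release ordering. */
  static void sendSketch(MessageSketch *m, void *payload, unsigned tag)
  {
      m->payload = payload;
      atomic_store_explicit(&m->tag, tag, memory_order_release);
  }

  /* Receiver (executeMessage-style): an acquire load of the tag orders the
     subsequent payload read after the sender's writes.  A write barrier here
     would only order our own stores, of which there are none. */
  static void *receiveSketch(MessageSketch *m)
  {
      unsigned tag = atomic_load_explicit(&m->tag, memory_order_acquire);
      return (tag != 0) ? m->payload : NULL;
  }
  ```
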
* rts: Avoid data races in message handling (Ben Gamari, 2020-10-24, 1 file, -21/+15)

* rts: Remove incorrect assertions around MSG_THROWTO messages (Ben Gamari, 2020-02-11, 1 file, -3/+0)

  Previously we would assert that threads which are sending a `MSG_THROWTO`
  message must have their blocking status set to blocked on the message. In
  the usual case of a thread throwing to another thread this is guaranteed by
  `stg_killThreadzh`. However, `throwToSelf`, used by the GC to kill threads
  which ran out of heap, failed to guarantee this.

  Noted while debugging #17785.

* Nonmoving: Ensure write barrier vanishes in non-threaded RTS (Ben Gamari, 2019-10-21, 1 file, -2/+2)

* rts: Implement concurrent collection in the nonmoving collector (Ben Gamari, 2019-10-20, 1 file, -1/+9)

  This extends the non-moving collector to allow concurrent collection.

  The full design of the collector implemented here is described in detail in
  a technical note:

    B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
    Compiler" (2018)

  This extension involves the introduction of a capability-local remembered
  set, known as the /update remembered set/, which tracks objects which may
  no longer be visible to the collector due to mutation. To maintain this
  remembered set we introduce a write barrier on mutations which is enabled
  while a concurrent mark is underway.

  The update remembered set representation is similar to that of the
  nonmoving mark queue, being a chunked array of `MarkEntry`s. Each
  `Capability` maintains a single accumulator chunk, which it flushes when
  (a) it is filled, or (b) the nonmoving collector enters its post-mark
  synchronization phase.

  While the write barrier touches a significant amount of code it is
  conceptually straightforward: the mutator must ensure that the referee of
  any pointer it overwrites is added to the update remembered set. However,
  there are a few details:

  * In the case of objects with a dirty flag (e.g. `MVar`s) we can exploit
    the fact that only the *first* mutation requires a write barrier.

  * Weak references, as usual, complicate things. In particular, we must
    ensure that the referee of a weak object is marked if dereferenced by
    the mutator. For this we (unfortunately) must introduce a read barrier,
    as described in Note [Concurrent read barrier on deRefWeak#] (in
    `NonMovingMark.c`).

  * Stable names are also a bit tricky, as described in Note [Sweeping stable
    names in the concurrent collector] (`NonMovingSweep.c`).

  We take quite some pains to ensure that the high thread count often seen in
  parallel Haskell applications doesn't affect pause times. To this end we
  allow thread stacks to be marked either by the thread itself (when it is
  executed or stack-underflows) or by the concurrent mark thread (if the
  thread owning the stack is never scheduled). There is a non-trivial
  handshake to ensure that this happens without racing, which is described in
  Note [StgStack dirtiness flags and concurrent marking].

  Co-Authored-by: Ömer Sinan Ağacan <omer@well-typed.com>

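  A minimal sketch of the mutator-side write barrier described above,
  assuming invented helper names; the real machinery lives in the RTS's
  nonmoving-collector sources and differs in detail.

  ```
  #include <stddef.h>

  typedef struct StgClosureSketch_ StgClosureSketch;

  /* True only while a concurrent mark is in progress. */
  extern _Bool nonmoving_write_barrier_enabled;

  /* Record an object in the capability-local update remembered set. */
  extern void updRemSetPush(StgClosureSketch *referee);

  /* Overwrite a pointer field, first remembering its old referee so the
     concurrent mark cannot lose it because of this mutation. */
  static inline void writeClosureField(StgClosureSketch **field,
                                       StgClosureSketch *new_value)
  {
      if (nonmoving_write_barrier_enabled && *field != NULL) {
          updRemSetPush(*field);
      }
      *field = new_value;
  }
  ```
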
* Correct closure observation, construction, and mutation on weak memory machines (Travis Whitaker, 2019-06-28, 1 file, -2/+12)

  Here the following changes are introduced:

  - A read barrier machine op is added to Cmm.
  - The order in which a closure's fields are read and written is changed.
  - Memory barriers are added to RTS code to ensure correctness on
    out-of-order machines with weak memory ordering.

  Cmm has a new CallishMachOp called MO_ReadBarrier. On weak memory machines,
  this is lowered to an instruction that ensures memory reads that occur
  after said instruction in program order are not performed before reads
  coming before said instruction in program order. On machines with strong
  memory ordering properties (e.g. X86, SPARC in TSO mode) no such
  instruction is necessary, so MO_ReadBarrier is simply erased. However, such
  an instruction is necessary on weakly ordered machines, e.g. ARM and
  PowerPC.

  Weak memory ordering has consequences for how closures are observed and
  mutated. For example, consider a closure that needs to be updated to an
  indirection. In order for the indirection to be safe for concurrent
  observers to enter, said observers must read the indirection's info table
  before they read the indirectee. Furthermore, the entering observer makes
  assumptions about the closure based on its info table contents, e.g. an
  INFO_TYPE of IND implies the closure has an indirectee pointer that is safe
  to follow.

  When a closure is updated with an indirection, both its info table and its
  indirectee must be written. With weak memory ordering, these two writes can
  be arbitrarily reordered, and perhaps even interleaved with other threads'
  reads and writes (in the absence of memory barrier instructions). Consider
  this example of a bad reordering:

  - An updater writes to a closure's info table (INFO_TYPE is now IND).
  - A concurrent observer branches upon reading the closure's INFO_TYPE as IND.
  - A concurrent observer reads the closure's indirectee and enters it. (!!!)
  - An updater writes the closure's indirectee.

  Here the update to the indirectee comes too late and the concurrent
  observer has jumped off into the abyss. Speculative execution can also
  cause us issues, consider:

  - An observer is about to case on a value in closure's info table.
  - The observer speculatively reads one or more of closure's fields.
  - An updater writes to closure's info table.
  - The observer takes a branch based on the new info table value, but with
    the old closure fields!
  - The updater writes to the closure's other fields, but it's too late.

  Because of these effects, reads and writes to a closure's info table must
  be ordered carefully with respect to reads and writes to the closure's
  other fields, and memory barriers must be placed to ensure that reads and
  writes occur in program order. Specifically, updates to a closure must
  follow the following pattern:

  - Update the closure's (non-info table) fields.
  - Write barrier.
  - Update the closure's info table.

  Observing a closure's fields must follow the following pattern:

  - Read the closure's info pointer.
  - Read barrier.
  - Read the closure's (non-info table) fields.

  This patch updates RTS code to obey this pattern. This should fix
  long-standing SMP bugs on ARM (specifically newer aarch64
  microarchitectures supporting out-of-order execution) and PowerPC. This
  fixes issue #15449.

  Co-Authored-By: Ben Gamari <ben@well-typed.com>

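  The update/observation pattern spelled out above, as a hedged C11 sketch;
  the RTS itself expresses this with its own barrier macros and Cmm MachOps,
  and the closure layout here is deliberately simplified.

  ```
  #include <stdatomic.h>
  #include <stddef.h>

  typedef struct {
      _Atomic(const void *) info;   /* info table pointer      */
      void *indirectee;             /* payload field of an IND */
  } ClosureSketch;

  /* Updater: write the closure's fields, then its info table; the release
     store plays the role of the write barrier before the info-table write. */
  static void updateWithIndirection(ClosureSketch *c, void *value,
                                    const void *ind_info)
  {
      c->indirectee = value;
      atomic_store_explicit(&c->info, ind_info, memory_order_release);
  }

  /* Observer: read the info pointer, then (after the acquire, i.e. the read
     barrier) the fields whose existence that info table implies. */
  static void *readIndirectee(ClosureSketch *c, const void *ind_info)
  {
      const void *info = atomic_load_explicit(&c->info, memory_order_acquire);
      return (info == ind_info) ? c->indirectee : NULL;
  }
  ```
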
* Add Note [BLACKHOLE points to IND] (Simon Marlow, 2018-03-25, 1 file, -0/+1)

  Test Plan: ci
  Reviewers: osa1, bgamari, erikd
  Reviewed By: osa1
  Subscribers: rwbarton, thomie, carter
  Differential Revision: https://phabricator.haskell.org/D4517

* rts: Add --internal-counters RTS flag and several counters (Douglas Wilson, 2018-03-19, 1 file, -0/+3)

  The existing internal counters:

  * gc_alloc_block_sync
  * whitehole_spin
  * gen[g].sync
  * gen[1].sync

  are now not shown in the -s report unless --internal-counters is also
  passed. If --internal-counters is passed we now show the counters above,
  reformatted, as well as several other counters. In particular, we now count
  the yieldThread() calls that SpinLocks do as well as their spins.

  The added counters are:

  * gc_spin (spin and yield)
  * mut_spin (spin and yield)
  * whitehole_threadPaused (spin only)
  * whitehole_executeMessage (spin only)
  * whitehole_lockClosure (spin only)
  * waitForGcThreads (spin and yield)

  As well as the following, which are not SpinLock-like things:

  * any_work
  * no_work
  * scav_find_work

  See the Note for descriptions of what these counters are. We add
  busy_wait_nops in these loops along with the counter increment where it was
  absent.

  Old internal counters output:
  ```
  gc_alloc_block_sync: 0
  whitehole_gc_spin: 0
  gen[0].sync: 0
  gen[1].sync: 0
  ```

  New internal counters output:
  ```
  Internal Counters:
                                    Spins      Yields
      gc_alloc_block_sync             323           0
      gc_spin                     9016713         752
      mut_spin                   57360944       47716
      whitehole_gc                      0         n/a
      whitehole_threadPaused            0         n/a
      whitehole_executeMessage          0         n/a
      whitehole_lockClosure             0           0
      waitForGcThreads                  2         415
      gen[0].sync                       6           0
      gen[1].sync                       1           0

      any_work                       2017
      no_work                        2014
      scav_find_work                 1004
  ```

  Test Plan:
  ./validate
  Check it builds with #define PROF_SPIN removed from includes/rts/Config.h

  Reviewers: bgamari, erikd, simonmar, hvr
  Reviewed By: simonmar
  Subscribers: rwbarton, thomie, carter
  GHC Trac Issues: #3553, #9221
  Differential Revision: https://phabricator.haskell.org/D4302

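  A hedged sketch of the spin/yield accounting these counters report, for a
  SpinLock-style acquire loop; the names, types, and spin budget are
  illustrative and not the RTS's exact implementation.

  ```
  #include <stdatomic.h>
  #include <stdint.h>
  #include <sched.h>

  typedef struct {
      atomic_uint_fast32_t lock;   /* 0 = free, 1 = held           */
      uint64_t spin;               /* failed acquisition attempts  */
      uint64_t yield;              /* yields after the spin budget */
  } SpinLockSketch;

  #define SPIN_BUDGET 1000

  static void spinLockAcquire(SpinLockSketch *l)
  {
      for (;;) {
          for (int i = 0; i < SPIN_BUDGET; i++) {
              uint_fast32_t expected = 0;
              if (atomic_compare_exchange_weak(&l->lock, &expected, 1))
                  return;          /* acquired */
              l->spin++;           /* approximate: only kept with spin profiling */
          }
          l->yield++;              /* budget exhausted: give up the CPU */
          sched_yield();           /* stand-in for the RTS's yieldThread() */
      }
  }
  ```
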
* Fix a lost-wakeup bug in BLACKHOLE handling (#13751) (Simon Marlow, 2017-06-08, 1 file, -1/+3)

  Summary: The problem occurred when:

  * Threads A & B evaluate the same thunk
  * Thread A context-switches, so the thunk gets blackholed
  * Thread C enters the blackhole, creates a BLOCKING_QUEUE attached to the
    blackhole and thread A's `tso->bq` queue
  * Thread B updates the blackhole with a value, overwriting the
    BLOCKING_QUEUE
  * We GC, replacing A's update frame with stg_enter_checkbh
  * Throw an exception in A, which ignores the stg_enter_checkbh frame

  Now we have C blocked on A's tso->bq queue, but we forgot to check the
  queue because the stg_enter_checkbh frame has been thrown away by the
  exception. The solution and alternative designs are discussed in
  Note [upd-black-hole].

  This also exposed a bug in the interpreter, whereby we were sometimes
  context-switching without calling `threadPaused()`. I've fixed this and
  added some Notes.

  Test Plan:
  * `cd testsuite/tests/concurrent && make slow`
  * validate

  Reviewers: niteria, bgamari, austin, erikd
  Reviewed By: erikd
  Subscribers: rwbarton, thomie
  GHC Trac Issues: #13751
  Differential Revision: https://phabricator.haskell.org/D3630

* Prefer #if defined to #ifdef (Ben Gamari, 2017-04-28, 1 file, -5/+5)

  Our new CPP linter enforces this.

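  For example (THREADED_RTS is a real RTS CPP flag; the guard style is the
  only point being made here):

  ```
  /* Preferred: */
  #if defined(THREADED_RTS)
  /* threaded-only code */
  #endif

  /* Rather than: */
  #ifdef THREADED_RTS
  /* threaded-only code */
  #endif
  ```
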
* Use C99's bool (Ben Gamari, 2016-11-29, 1 file, -1/+1)

  Test Plan: Validate on lots of platforms
  Reviewers: erikd, simonmar, austin
  Reviewed By: erikd, simonmar
  Subscribers: michalt, thomie
  Differential Revision: https://phabricator.haskell.org/D2699

* rts: Replace `nat` with `uint32_t` (Erik de Castro Lopo, 2016-05-05, 1 file, -3/+3)

  The `nat` type was an alias for `unsigned int` with a comment saying it was
  at least 32 bits. We keep the typedef in case client code is using it, but
  mark it as deprecated.

  Test Plan: Validated on Linux, OS X and Windows
  Reviewers: simonmar, austin, thomie, hvr, bgamari, hsyl20
  Differential Revision: https://phabricator.haskell.org/D2166

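  A sketch of what keeping the deprecated alias looks like; the exact wording
  and placement of the comment in the real header is an assumption.

  ```
  #include <stdint.h>

  /* Deprecated: 'nat' was documented as "at least 32 bits"; new code should
     say uint32_t explicitly.  The alias is kept so client code still builds. */
  typedef unsigned int nat;

  static uint32_t pendingMessages = 0;   /* new-style declaration */
  ```
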
* Revert "rts: add Emacs 'Local Variables' to every .c file"Simon Marlow2014-09-291-8/+0
| | | | This reverts commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd.
* rts: add Emacs 'Local Variables' to every .c file (Austin Seipp, 2014-07-28, 1 file, -0/+8)

  This will hopefully help ensure some basic consistency going forward by
  overriding buffer variables. In particular, it sets the wrap length, the
  offset to 4, and turns off tabs.

  Signed-off-by: Austin Seipp <austin@well-typed.com>

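  A sketch of the kind of file-local variable block this commit appends to
  each .c file; the exact variables and values are assumptions based on the
  description above (wrap length, offset 4, no tabs).

  ```
  // Local Variables:
  // mode: C
  // fill-column: 80
  // indent-tabs-mode: nil
  // c-basic-offset: 4
  // buffer-file-coding-system: utf-8-unix
  // End:
  ```
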
* rts: delint Messages.c (Austin Seipp, 2014-07-28, 1 file, -4/+6)

  Signed-off-by: Austin Seipp <austin@well-typed.com>

* rts: detab/dewhitespace Messages.c (Austin Seipp, 2014-07-28, 1 file, -21/+19)

  Signed-off-by: Austin Seipp <austin@well-typed.com>

* Better abstraction over run queues. (Edward Z. Yang, 2013-01-16, 1 file, -4/+2)

  This adds some new functions: peekRunQueue, promoteInRunQueue,
  singletonRunQueue and truncateRunQueue, which help abstract away manual
  linked list manipulation, making it easier to swap in a new queue
  implementation.

  Signed-off-by: Edward Z. Yang <ezyang@mit.edu>

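  Hedged prototypes for the helpers named above, to show their intent; the
  real signatures live in the scheduler headers (and use the RTS's own
  boolean type), so these are approximations.

  ```
  typedef struct Capability_ Capability;
  typedef struct StgTSO_ StgTSO;

  StgTSO *peekRunQueue(Capability *cap);                    /* inspect the head without popping    */
  void    promoteInRunQueue(Capability *cap, StgTSO *tso);  /* move a queued TSO towards the front */
  int     singletonRunQueue(Capability *cap);               /* exactly one runnable thread?        */
  void    truncateRunQueue(Capability *cap);                /* drop everything queued              */
  ```
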
* Deprecate lnat, and use StgWord instead (Simon Marlow, 2012-09-07, 1 file, -5/+5)

  lnat was originally "long unsigned int" but we were using it when we wanted
  a 64-bit type on a 64-bit machine. This broke on Windows x64, where
  long == int == 32 bits. Using types of unspecified size is bad, but what we
  really wanted was a type with N bits on an N-bit machine. StgWord is
  exactly that.

  lnat was mentioned in some APIs that clients might be using
  (e.g. StackOverflowHook()), so we leave it defined but with a comment to
  say that it's deprecated.

* Fix a scheduling bug in the threaded RTS (Simon Marlow, 2011-12-01, 1 file, -2/+2)

  The parallel GC was using setContextSwitches() to stop all the other
  threads, which sets the context_switch flag on every Capability. That had
  the side effect of causing every Capability to also switch threads, and
  since GCs can be much more frequent than context switches, this increased
  the context switch frequency. When context switches are expensive (because
  the switch is between two bound threads or a bound and unbound thread), the
  difference is quite noticeable.

  The fix is to have a separate flag to indicate that a Capability should
  stop and return to the scheduler, but not switch threads. I've called this
  the "interrupt" flag.

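  A hedged sketch of the two flags described above; the field names follow
  the commit message, but the surrounding code is invented for illustration.

  ```
  typedef struct {
      volatile int context_switch;   /* return to the scheduler and pick another thread */
      volatile int interrupt;        /* return to the scheduler, keep the same thread   */
  } CapabilityFlagsSketch;

  /* Parallel GC: stop every Capability without forcing a thread switch. */
  static void interruptAllCapabilitiesSketch(CapabilityFlagsSketch *caps, int n_caps)
  {
      for (int i = 0; i < n_caps; i++)
          caps[i].interrupt = 1;
  }
  ```
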
* Annotate thread stop events with the owner of the black hole (Simon Marlow, 2011-01-27, 1 file, -0/+43)

  So we can now get these in ThreadScope:

    19487000: cap 1: stopping thread 6 (blocked on black hole owned by thread 4)

  Note: needs an update to ghc-events. Older ThreadScopes will just ignore
  the new information.

* Implement stack chunks and separate TSO/STACK objects (Simon Marlow, 2010-12-15, 1 file, -5/+7)

  This patch makes two changes to the way stacks are managed:

  1. The stack is now stored in a separate object from the TSO.

     This means that it is easier to replace the stack object for a thread
     when the stack overflows or underflows; we don't have to leave behind
     the old TSO as an indirection any more. Consequently, we can remove
     ThreadRelocated and deRefTSO(), which were a pain.

     This is obviously the right thing, but the last time I tried to do it
     it made performance worse. This time I seem to have cracked it.

  2. Stacks are now represented as a chain of chunks, rather than a single
     monolithic object.

     The big advantage here is that individual chunks are marked clean or
     dirty according to whether they contain pointers to the young
     generation, and the GC can avoid traversing clean stack chunks during a
     young-generation collection. This means that programs with deep stacks
     will see a big saving in GC overhead when using the default GC settings.

     A secondary advantage is that there is much less copying involved as
     the stack grows. Programs that quickly grow a deep stack will see big
     improvements.

     In some ways the implementation is simpler, as nothing special needs to
     be done to reclaim stack as the stack shrinks (the GC just recovers the
     dead stack chunks). On the other hand, we have to manage stack underflow
     between chunks, so there's a new stack frame (UNDERFLOW_FRAME), and we
     now have separate TSO and STACK objects. The total amount of code is
     probably about the same as before.

  There are new RTS flags:

    -ki<size>  Sets the initial thread stack size (default 1k)
               Egs: -ki4k -ki2m
    -kc<size>  Sets the stack chunk size (default 32k)
    -kb<size>  Sets the stack chunk buffer size (default 1k)

  -ki was previously called just -k, and the old name is still accepted for
  backwards compatibility. These new options are documented.

* Add a couple of missing tests for EAGER_BLACKHOLE (Simon Marlow, 2010-08-23, 1 file, -0/+1)

  This was leading to looping and excessive allocation, when the computation
  should have just blocked on the black hole.

  Reported by Christian Höner zu Siederdissen <choener@tbi.univie.ac.at> on
  glasgow-haskell-users.

* Fix #4131 missing UNTAG_CLOSURE in messageBlackHole() (benl@ouroborus.net, 2010-06-11, 1 file, -1/+1)

* messageBlackHole: fix deadlock bug caused by a missing 'volatile' (Simon Marlow, 2010-06-10, 1 file, -1/+3)

* Add a missing UNTAG_CLOSURE, causing bus errors on Sparc (Simon Marlow, 2010-05-24, 1 file, -2/+3)

  We just about got away with this on x86, which isn't alignment-sensitive.
  The result of the memory load is compared against a few different values,
  but there is a fallback case that happened to be the right thing when the
  pointer was tagged. A good bug to find, nonetheless.

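  A hedged sketch of what the missing untag does: closure pointers carry tag
  bits in their low bits, which must be masked off before dereferencing. The
  3-bit mask assumes a 64-bit word, and the macro shape is illustrative only.

  ```
  #include <stdint.h>

  typedef uintptr_t StgWordSketch;
  typedef struct StgClosureSketch_ StgClosureSketch;

  /* Strip the pointer tag before dereferencing; without this, an
     alignment-sensitive target (like Sparc) takes a bus error. */
  #define TAG_MASK_SKETCH   ((StgWordSketch)7)
  #define UNTAG_CLOSURE_SKETCH(p) \
      ((StgClosureSketch *)((StgWordSketch)(p) & ~TAG_MASK_SKETCH))
  ```
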
* Change the representation of the MVar blocked queue (Simon Marlow, 2010-04-01, 1 file, -17/+2)

  The list of threads blocked on an MVar is now represented as a list of
  separately allocated objects rather than being linked through the TSOs
  themselves. This lets us remove a TSO from the list in O(1) time rather
  than O(n) time, by marking the list object. Removing this linear component
  fixes some pathological performance cases where many threads were blocked
  on an MVar and became unreachable simultaneously (nofib/smp/threads007), or
  when sending an asynchronous exception to a TSO in a long list of threads
  blocked on an MVar. MVar performance has actually improved by a few percent
  as a result of this change, slightly to my surprise.

  This is the final cleanup in the sequence, which let me remove the old way
  of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the new way
  (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent). It is now the
  case that only the Capability that owns a TSO may modify its state (well,
  almost), and this simplifies various things. More of the RTS is based on
  message-passing between Capabilities now.

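  A hedged sketch of the separately allocated queue entries and the O(1)
  "remove by marking" idea; the struct is illustrative and not the RTS's
  actual MVar queue representation.

  ```
  #include <stddef.h>

  typedef struct TSOSketch_ TSOSketch;

  typedef struct MVarQueueEntrySketch_ {
      struct MVarQueueEntrySketch_ *link;
      TSOSketch *tso;        /* NULL once the entry has been invalidated */
  } MVarQueueEntrySketch;

  /* O(1) removal: mark the entry dead; later traversals and the GC skip it,
     instead of walking the whole list to unlink it now. */
  static void removeFromMVarQueueSketch(MVarQueueEntrySketch *entry)
  {
      entry->tso = NULL;
  }
  ```
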
* Move a thread to the front of the run queue when another thread blocks on it (Simon Marlow, 2010-03-29, 1 file, -2/+22)

  This fixes #3838, and was made possible by the new BLACKHOLE
  infrastructure. To allow reordering of the run queue I had to make it
  doubly-linked, which entails some extra trickiness with regard to GC write
  barriers and suchlike.

* change throwTo to use tryWakeupThread rather than unblockOne (Simon Marlow, 2010-03-29, 1 file, -5/+3)

* New implementation of BLACKHOLEs (Simon Marlow, 2010-03-29, 1 file, -0/+296)

  This replaces the global blackhole_queue with a clever scheme that enables
  us to queue up blocked threads on the closure that they are blocked on,
  while still avoiding atomic instructions in the common case.

  Advantages:

  - gets rid of a locked global data structure and some tricky GC code
    (replacing it with some per-thread data structures and different tricky
    GC code :)

  - wakeups are more prompt: parallel/concurrent performance should benefit.
    I haven't seen anything dramatic in the parallel benchmarks so far, but a
    couple of threading benchmarks do improve a bit.

  - waking up a thread blocked on a blackhole is now O(1) (e.g. if it is the
    target of throwTo).

  - less sharing and better separation of Capabilities: communication is done
    with messages, the data structures are strictly owned by a Capability and
    cannot be modified except by sending messages.

  - this change will ultimately enable us to do more intelligent scheduling
    when threads block on each other. This is what started off the whole
    thing, but it isn't done yet (#3838). I'll be documenting all this on the
    wiki in due course.

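  A hedged sketch of the message-passing shape this introduces: a thread that
  blocks on a black hole owned by another Capability sends that Capability a
  message rather than touching its data structures directly. The struct
  mirrors the commit's description; the helper functions are invented for
  illustration.

  ```
  typedef struct Capability_ Capability;
  typedef struct StgTSO_ StgTSO;
  typedef struct StgClosure_ StgClosure;

  typedef struct MessageBlackHoleSketch_ {
      struct MessageBlackHoleSketch_ *link;
      StgTSO     *tso;   /* the thread that blocked      */
      StgClosure *bh;    /* the black hole it blocked on */
  } MessageBlackHoleSketch;

  extern Capability *blackHoleOwnerSketch(StgClosure *bh);
  extern void sendMessageSketch(Capability *from, Capability *to,
                                MessageBlackHoleSketch *msg);

  /* Block 'tso' on 'bh': queue locally if we own the black hole, otherwise
     hand the request to the owning Capability via its inbox. */
  static void blockOnBlackHoleSketch(Capability *cap, StgTSO *tso,
                                     StgClosure *bh,
                                     MessageBlackHoleSketch *msg)
  {
      msg->tso = tso;
      msg->bh  = bh;
      Capability *owner = blackHoleOwnerSketch(bh);
      if (owner != cap)
          sendMessageSketch(cap, owner, msg);
      /* else: attach to the local blocking queue (elided) */
  }
  ```
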