path: root/rts/Updates.h
Commit log (newest first). Each entry: commit message (author, date; files changed, -lines/+lines).
* Updates: Don't zero slop until closure has been pushed (GHC GitLab CI, 2020-11-29; 1 file, -1/+1)

  Ensure that the free variables have been pushed to the update
  remembered set before we zero the slop.

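  A minimal C sketch of the ordering constraint this commit enforces;
  the names (nonmoving_write_barrier_enabled, upd_rem_set_push,
  zero-filling via memset, the two-word Thunk) are illustrative
  stand-ins, not the RTS's real definitions:

      #include <stdbool.h>
      #include <string.h>

      typedef struct { void *payload[2]; } Thunk;  /* stand-in closure  */

      extern bool nonmoving_write_barrier_enabled; /* set during mark   */
      extern void upd_rem_set_push(void *p);       /* record for marker */

      static void update_and_zero_slop(Thunk *t)
      {
          if (nonmoving_write_barrier_enabled) {
              /* Push the free variables while they are still intact... */
              upd_rem_set_push(t->payload[0]);
              upd_rem_set_push(t->payload[1]);
          }
          /* ...and only then overwrite them with zeroed slop. Zeroing
           * first would hand the concurrent marker zeroed payload. */
          memset(t->payload, 0, sizeof t->payload);
      }
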
* rts/Updates: Use proper atomic operations (Ben Gamari, 2020-10-24; 1 file, -4/+2)
* nonmoving: Optimise the write barrier (Ben Gamari, 2020-05-21; 1 file, -21/+18)
* Nonmoving: Ensure write barrier vanishes in non-threaded RTS (Ben Gamari, 2019-10-21; 1 file, -2/+2)
* rts: Implement concurrent collection in the nonmoving collector (Ben Gamari, 2019-10-20; 1 file, -1/+7)

  This extends the non-moving collector to allow concurrent collection.

  The full design of the collector implemented here is described in
  detail in a technical note:

      B. Gamari. "A Concurrent Garbage Collector For the Glasgow
      Haskell Compiler" (2018)

  This extension involves the introduction of a capability-local
  remembered set, known as the /update remembered set/, which tracks
  objects which may no longer be visible to the collector due to
  mutation. To maintain this remembered set we introduce a write
  barrier on mutations which is enabled while a concurrent mark is
  underway.

  The update remembered set representation is similar to that of the
  nonmoving mark queue, being a chunked array of `MarkEntry`s. Each
  `Capability` maintains a single accumulator chunk, which it flushes
  when it (a) is filled, or (b) when the nonmoving collector enters its
  post-mark synchronization phase.

  While the write barrier touches a significant amount of code it is
  conceptually straightforward: the mutator must ensure that the
  referent of any pointer it overwrites is added to the update
  remembered set. However, there are a few details:

   * In the case of objects with a dirty flag (e.g. `MVar`s) we can
     exploit the fact that only the *first* mutation requires a write
     barrier.

   * Weak references, as usual, complicate things. In particular, we
     must ensure that the referent of a weak object is marked if
     dereferenced by the mutator. For this we (unfortunately) must
     introduce a read barrier, as described in Note [Concurrent read
     barrier on deRefWeak#] (in `NonMovingMark.c`).

   * Stable names are also a bit tricky as described in Note [Sweeping
     stable names in the concurrent collector] (`NonMovingSweep.c`).

  We take quite some pains to ensure that the high thread count often
  seen in parallel Haskell applications doesn't affect pause times. To
  this end we allow thread stacks to be marked either by the thread
  itself (when it is executed or stack-underflows) or the concurrent
  mark thread (if the thread owning the stack is never scheduled).
  There is a non-trivial handshake to ensure that this happens without
  racing which is described in Note [StgStack dirtiness flags and
  concurrent marking].

  Co-Authored-by: Ömer Sinan Ağacan <omer@well-typed.com>

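  An illustrative C sketch of a capability-local update remembered set
  with a chunked accumulator, as described above; every name here is
  invented for exposition rather than taken from the RTS:

      #include <stdbool.h>
      #include <stddef.h>

      #define CHUNK_ENTRIES 256

      typedef struct Chunk {
          size_t used;
          void  *entries[CHUNK_ENTRIES];
      } Chunk;

      typedef struct {
          Chunk *accum;               /* one accumulator per capability */
      } Capability;

      extern bool concurrent_mark_running;
      extern void flush_to_mark_queue(Chunk *c);
      extern Chunk *alloc_chunk(void);

      /* Write barrier: before overwriting *slot, remember its old
       * referent so the concurrent marker can still find it. */
      static inline void barriered_store(Capability *cap, void **slot,
                                         void *new_val)
      {
          if (concurrent_mark_running) {
              Chunk *c = cap->accum;
              c->entries[c->used++] = *slot;    /* the old referent */
              if (c->used == CHUNK_ENTRIES) {   /* flush when filled */
                  flush_to_mark_queue(c);
                  cap->accum = alloc_chunk();
              }
          }
          *slot = new_val;
      }
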
* Correct closure observation, construction, and mutation on weak memory machines. (Travis Whitaker, 2019-06-28; 1 file, -1/+9)

  Here the following changes are introduced:

   - A read barrier machine op is added to Cmm.
   - The order in which a closure's fields are read and written is
     changed.
   - Memory barriers are added to RTS code to ensure correctness on
     out-of-order machines with weak memory ordering.

  Cmm has a new CallishMachOp called MO_ReadBarrier. On weak memory
  machines, this is lowered to an instruction that ensures memory reads
  that occur after said instruction in program order are not performed
  before reads coming before said instruction in program order. On
  machines with strong memory ordering properties (e.g. X86, SPARC in
  TSO mode) no such instruction is necessary, so MO_ReadBarrier is
  simply erased. However, such an instruction is necessary on weakly
  ordered machines, e.g. ARM and PowerPC.

  Weak memory ordering has consequences for how closures are observed
  and mutated. For example, consider a closure that needs to be updated
  to an indirection. In order for the indirection to be safe for
  concurrent observers to enter, said observers must read the
  indirection's info table before they read the indirectee.
  Furthermore, the entering observer makes assumptions about the
  closure based on its info table contents, e.g. an INFO_TYPE of IND
  implies the closure has an indirectee pointer that is safe to follow.

  When a closure is updated with an indirection, both its info table
  and its indirectee must be written. With weak memory ordering, these
  two writes can be arbitrarily reordered, and perhaps even interleaved
  with other threads' reads and writes (in the absence of memory
  barrier instructions). Consider this example of a bad reordering:

   - An updater writes to a closure's info table (INFO_TYPE is now
     IND).
   - A concurrent observer branches upon reading the closure's
     INFO_TYPE as IND.
   - A concurrent observer reads the closure's indirectee and enters
     it. (!!!)
   - An updater writes the closure's indirectee.

  Here the update to the indirectee comes too late and the concurrent
  observer has jumped off into the abyss. Speculative execution can
  also cause us issues, consider:

   - An observer is about to case on a value in closure's info table.
   - The observer speculatively reads one or more of closure's fields.
   - An updater writes to closure's info table.
   - The observer takes a branch based on the new info table value, but
     with the old closure fields!
   - The updater writes to the closure's other fields, but it's too
     late.

  Because of these effects, reads and writes to a closure's info table
  must be ordered carefully with respect to reads and writes to the
  closure's other fields, and memory barriers must be placed to ensure
  that reads and writes occur in program order. Specifically, updates
  to a closure must follow the following pattern:

   - Update the closure's (non-info table) fields.
   - Write barrier.
   - Update the closure's info table.

  Observing a closure's fields must follow the following pattern:

   - Read the closure's info pointer.
   - Read barrier.
   - Read the closure's (non-info table) fields.

  This patch updates RTS code to obey this pattern. This should fix
  long-standing SMP bugs on ARM (specifically newer aarch64
  microarchitectures supporting out-of-order execution) and PowerPC.
  This fixes issue #15449.

  Co-Authored-By: Ben Gamari <ben@well-typed.com>

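  The two patterns just stated can be rendered with C11 atomics; this
  is a hedged sketch of the idea only, since the RTS expresses these
  barriers in Cmm, and the Closure type, IND_info, and function names
  are illustrative assumptions:

      #include <stdatomic.h>
      #include <stddef.h>

      typedef struct {
          _Atomic(const void *) info; /* info pointer: written last   */
          void *indirectee;           /* payload field: written first */
      } Closure;

      extern const void *IND_info;

      void update_to_indirection(Closure *c, void *target)
      {
          c->indirectee = target;     /* non-info fields first...     */
          /* ...then a write barrier, then the info table: */
          atomic_store_explicit(&c->info, IND_info,
                                memory_order_release);
      }

      void *observe(Closure *c)
      {
          /* info pointer first, then a read barrier, then the fields: */
          const void *info =
              atomic_load_explicit(&c->info, memory_order_acquire);
          if (info == IND_info)
              return c->indirectee; /* ordered after the info read */
          return NULL;              /* not an indirection */
      }
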
* Prefer #if defined to #ifdef (Ben Gamari, 2017-04-28; 1 file, -3/+3)

  Our new CPP linter enforces this.

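  The two spellings, for the record (THREADED_RTS is just an example
  macro here):

      #if defined(THREADED_RTS)   /* preferred, enforced by the linter */
      /* ... */
      #endif

      #ifdef THREADED_RTS         /* rejected spelling */
      /* ... */
      #endif
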
* cpp: Use #pragma once instead of #ifndef guards (Ben Gamari, 2017-04-23; 1 file, -4/+1)

  This both says what we mean and silences a bunch of spurious CPP
  linting warnings. This pragma is supported by all CPP implementations
  which we support.

  Reviewers: austin, erikd, simonmar, hvr

  Reviewed By: simonmar

  Subscribers: rwbarton, thomie

  Differential Revision: https://phabricator.haskell.org/D3482

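  The before/after shape of the change, with an illustrative guard
  macro name:

      /* Before: a classic include guard. */
      #ifndef UPDATES_H
      #define UPDATES_H
      /* declarations ... */
      #endif /* UPDATES_H */

      /* After: one line, no macro name to keep unique. */
      #pragma once
      /* declarations ... */
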
* Revert "rts: add Emacs 'Local Variables' to every .c file"Simon Marlow2014-09-291-8/+0
| | | | This reverts commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd.
* rts: detabify/dewhitespace Updates.h (Austin Seipp, 2014-08-20; 1 file, -12/+12)

  Signed-off-by: Austin Seipp <austin@well-typed.com>

* rts: add Emacs 'Local Variables' to every .c file (Austin Seipp, 2014-07-28; 1 file, -0/+8)

  This will hopefully help ensure some basic consistency going forward
  by overriding buffer variables. In particular, it sets the wrap
  length, the offset to 4, and turns off tabs.

  Signed-off-by: Austin Seipp <austin@well-typed.com>

* Only emit %write_barrier primitive for THREADED_RTS (Gabor Greif, 2013-02-26; 1 file, -2/+2)
* profiling fixes (Simon Marlow, 2012-10-19; 1 file, -2/+2)
* profiling fixes (Simon Marlow, 2012-10-09; 1 file, -2/+2)
* Produce new-style Cmm from the Cmm parser (Simon Marlow, 2012-10-08; 1 file, -6/+11)

  The main change here is that the Cmm parser now allows high-level cmm
  code with argument-passing and function calls. For example:

      foo ( gcptr a, bits32 b )
      {
        if (b > 0) {
           // we can make tail calls passing arguments:
           jump stg_ap_0_fast(a);
        }
        return (x,y);
      }

  More details on the new cmm syntax are in Note [Syntax of .cmm files]
  in CmmParse.y.

  The old syntax is still more-or-less supported for those occasional
  code fragments that really need to explicitly manipulate the stack.
  However there are a couple of differences: it is now obligatory to
  give a list of live GlobalRegs on every jump, e.g.

      jump %ENTRY_CODE(Sp(0)) [R1];

  Again, more details in Note [Syntax of .cmm files].

  I have rewritten most of the .cmm files in the RTS into the new
  syntax, except for AutoApply.cmm which is generated by the genapply
  program: this file could be generated in the new syntax instead and
  would probably be better off for it, but I ran out of enthusiasm.

  Some other changes in this batch:

   - The PrimOp calling convention is gone, primops now use the
     ordinary NativeNodeCall convention. This means that primops and
     "foreign import prim" code must be written in high-level cmm, but
     they can now take more than 10 arguments.

   - CmmSink now does constant-folding (should fix #7219)

   - .cmm files now go through the cmmPipeline, and as a result we
     generate better code in many cases. All the object files generated
     for the RTS .cmm files are now smaller. Performance should be
     better too, but I haven't measured it yet.

   - RET_DYN frames are removed from the RTS, lots of code goes away

   - we now have some more canned GC points to cover unboxed-tuples
     with 2-4 pointers, which will reduce code size a little.

* Implement stack chunks and separate TSO/STACK objects (Simon Marlow, 2010-12-15; 1 file, -94/+3)

  This patch makes two changes to the way stacks are managed:

  1. The stack is now stored in a separate object from the TSO.

     This means that it is easier to replace the stack object for a
     thread when the stack overflows or underflows; we don't have to
     leave behind the old TSO as an indirection any more. Consequently,
     we can remove ThreadRelocated and deRefTSO(), which were a pain.

     This is obviously the right thing, but the last time I tried to do
     it it made performance worse. This time I seem to have cracked it.

  2. Stacks are now represented as a chain of chunks, rather than a
     single monolithic object.

     The big advantage here is that individual chunks are marked clean
     or dirty according to whether they contain pointers to the young
     generation, and the GC can avoid traversing clean stack chunks
     during a young-generation collection. This means that programs
     with deep stacks will see a big saving in GC overhead when using
     the default GC settings.

     A secondary advantage is that there is much less copying involved
     as the stack grows. Programs that quickly grow a deep stack will
     see big improvements.

     In some ways the implementation is simpler, as nothing special
     needs to be done to reclaim stack as the stack shrinks (the GC
     just recovers the dead stack chunks). On the other hand, we have
     to manage stack underflow between chunks, so there's a new stack
     frame (UNDERFLOW_FRAME), and we now have separate TSO and STACK
     objects. The total amount of code is probably about the same as
     before.

  There are new RTS flags:

      -ki<size>  Sets the initial thread stack size (default 1k)
                 Egs: -ki4k -ki2m
      -kc<size>  Sets the stack chunk size (default 32k)
      -kb<size>  Sets the stack chunk buffer size (default 1k)

  -ki was previously called just -k, and the old name is still accepted
  for backwards compatibility. These new options are documented.

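  A rough C sketch of the chunked-stack shape this describes; the
  fields approximate the RTS's separate STACK object but are not a
  verbatim copy of its definition:

      #include <stdint.h>

      typedef uintptr_t StgWord;

      typedef struct StackChunk_ {
          uint32_t stack_size;  /* size of this chunk, in words        */
          uint8_t  dirty;       /* points into the young generation?   */
          StgWord *sp;          /* stack pointer within this chunk     */
          StgWord  stack[];     /* the chunk's stack words; the bottom
                                   holds an UNDERFLOW_FRAME linking to
                                   the next (older) chunk              */
      } StackChunk;

  A clean (dirty == 0) chunk can be skipped entirely during a
  young-generation collection, which is where the GC saving comes from.
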
* Fix the symbol visibility pragmas (Simon Marlow, 2010-06-17; 1 file, -2/+2)
* Remove the IND_OLDGEN and IND_OLDGEN_PERM closure types (Simon Marlow, 2010-04-01; 1 file, -2/+2)

  These are no longer used: once upon a time they used to have
  different layout from IND and IND_PERM respectively, but that is no
  longer the case since we changed the remembered set to be an array of
  addresses instead of a linked list of closures.

* New implementation of BLACKHOLEs (Simon Marlow, 2010-03-29; 1 file, -17/+6)

  This replaces the global blackhole_queue with a clever scheme that
  enables us to queue up blocked threads on the closure that they are
  blocked on, while still avoiding atomic instructions in the common
  case.

  Advantages:

   - gets rid of a locked global data structure and some tricky GC code
     (replacing it with some per-thread data structures and different
     tricky GC code :)

   - wakeups are more prompt: parallel/concurrent performance should
     benefit. I haven't seen anything dramatic in the parallel
     benchmarks so far, but a couple of threading benchmarks do improve
     a bit.

   - waking up a thread blocked on a blackhole is now O(1) (e.g. if it
     is the target of throwTo).

   - less sharing and better separation of Capabilities: communication
     is done with messages, the data structures are strictly owned by a
     Capability and cannot be modified except by sending messages.

   - this change will ultimately enable us to do more intelligent
     scheduling when threads block on each other. This is what started
     off the whole thing, but it isn't done yet (#3838).

  I'll be documenting all this on the wiki in due course.

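  A conceptual C sketch of queuing waiters on the blackhole itself;
  these types are illustrative and are not the RTS's message machinery
  verbatim:

      typedef struct Thread Thread;

      typedef struct Waiter {
          struct Waiter *link;    /* next waiter on this same closure */
          Thread        *thread;  /* the blocked thread               */
      } Waiter;

      typedef struct {
          const void *info;       /* BLACKHOLE info pointer           */
          void       *indirectee; /* owning thread, or waiter queue   */
      } BlackHole;

  Because the waiters hang off the closure rather than a global queue,
  waking the threads blocked on a particular blackhole requires no
  global traversal.
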
* Use local mut lists in UPD_IND(), also clean up Updates.h (Simon Marlow, 2009-12-31; 1 file, -82/+35)
* Store a destination step in the block descriptor (Simon Marlow, 2009-11-29; 1 file, -1/+1)

  At the moment, this just saves a memory reference in the GC inner
  loop (worth a percent or two of GC time). Later, it will hopefully
  let me experiment with partial steps, and simplifying the
  generation/step infrastructure.

* Omit visibility pragmas on Windows (fixes warnings/validate failures) (Simon Marlow, 2009-09-09; 1 file, -2/+6)
* Declare RTS-private prototypes with __attribute__((visibility("hidden"))) (Simon Marlow, 2009-08-05; 1 file, -0/+5)

  This has no effect with static libraries, but when the RTS is in a
  shared library it does two things:

   - it prevents the function from being exposed by the shared library

   - internal calls to the function can use the faster non-PLT calls,
     because the function cannot be overridden at link time.

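  A sketch of how such a declaration might look, folding in the Windows
  exception from the 2009-09-09 commit above; the RTS_PRIVATE macro
  name is an assumption used for illustration:

      #if defined(_WIN32)
      #define RTS_PRIVATE  /* visibility pragmas omitted on Windows */
      #else
      #define RTS_PRIVATE __attribute__((visibility("hidden")))
      #endif

      RTS_PRIVATE void someInternalRtsFunction(void);
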
* Remove old GUM/GranSim code (Simon Marlow, 2009-06-02; 1 file, -55/+0)
* Keep the remembered sets local to each thread during parallel GC (Simon Marlow, 2009-01-12; 1 file, -1/+1)

  This turns out to be quite vital for parallel programs:

   - The way we discover which threads to traverse is by finding dirty
     threads via the remembered sets (aka mutable lists).

   - A dirty thread will be on the remembered set of the capability
     that was running it, and we really want to traverse that thread's
     stack using the GC thread for the capability, because it is in
     that CPU's cache. If we get this wrong, we get penalised badly by
     the memory system.

  Previously we had per-capability mutable lists but they were
  aggregated before GC and traversed by just one of the GC threads.
  This resulted in very poor performance particularly for parallel
  programs with deep stacks.

  Now we keep per-capability remembered sets throughout GC, which also
  removes a lock (recordMutableGen_sync).

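  Sketched in C, the per-capability arrangement looks something like
  this; the names are invented for illustration:

      enum { MAX_GENERATIONS = 3 };

      typedef struct MutListEntry {
          struct MutListEntry *next;
          void                *closure;  /* a dirty (mutated) object */
      } MutListEntry;

      typedef struct {
          /* one remembered set per generation, owned by this
           * capability alone */
          MutListEntry *mut_list[MAX_GENERATIONS];
      } Capability;

      /* No lock needed: only the owning capability (and, during GC,
       * the GC thread bound to it) touches these lists. */
      static void record_mutable(Capability *cap, unsigned gen,
                                 MutListEntry *e, void *closure)
      {
          e->closure = closure;
          e->next = cap->mut_list[gen];
          cap->mut_list[gen] = e;
      }
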
* cruft removal (Simon Marlow, 2009-01-06; 1 file, -21/+0)
* Updating a thunk in raiseAsync might encounter an IND; cope (simonmar@microsoft.com, 2008-02-28; 1 file, -2/+3)

  There was already a check to avoid updating an IND, but it was
  originally there to avoid a bug which doesn't exist now. Furthermore
  the test and update are not atomic, so another thread could be
  updating this thunk while we are. We have to just go ahead and update
  anyway - it might waste a little work, but this is a very rare case.

* Lightweight ticky-ticky profiling (Kirsten Chevalier, 2007-02-07; 1 file, -73/+7)

  The following changes restore ticky-ticky profiling to functionality
  from its formerly bit-rotted state. Sort of. (It got bit-rotted as
  part of the switch to the C-- back-end.)

  The way that ticky-ticky is supposed to work is documented in Section
  5.7 of the GHC manual (though the manual doesn't mention that it
  hasn't worked since sometime around 6.0, alas). Changes from this are
  as follows (which I'll document on the wiki):

   * In the past, you had to build all of the libraries with way=t in
     order to use ticky-ticky, because it entailed a different closure
     layout. No longer. You still need to do make way=t in rts/ in
     order to build the ticky RTS, but you should now be able to mix
     ticky and non-ticky modules.

   * Some of the counters that worked in the past aren't implemented
     yet. I was originally just trying to get entry counts to work, so
     those should be correct. The list of counters was never documented
     in the first place, so I hope it's not too much of a disaster that
     some don't appear anymore. Someday, someone (perhaps me) should
     document all the counters and what they do. For now, all of the
     counters are either accurate (or at least as accurate as they
     always were), zero, or missing from the ticky profiling report
     altogether.

  This hasn't been particularly well-tested, but these changes
  shouldn't affect anything except when compiling with -fticky-ticky
  (famous last words...)

  Implementation details: I got rid of StgTicky.h, which in the past
  had the macros and declarations for all of the ticky counters. Now,
  those macros are defined in Cmm.h. StgTicky.h was still there for
  inclusion in C code. Now, any remaining C code simply cannot call the
  ticky macros -- or rather, they do call those macros, but from the
  perspective of C code, they're defined as no-ops. (This shouldn't be
  too big a problem.)

  I added a new file TickyCounter.h that has all the declarations for
  ticky counters, as well as dummy macros for use in C code. Someday,
  these declarations should really be automatically generated, since
  they need to be kept consistent with the macros defined in Cmm.h.

  Other changes include getting rid of the header that was getting
  added to closures before, and getting rid of various code having to
  do with eager blackholing and permanent indirections (the changes
  under compiler/ and rts/Updates.*).

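  The C-side arrangement described above amounts to defining the ticky
  macros away when compiling C rather than Cmm; the macro names below
  are representative examples, not an exact list from TickyCounter.h:

      /* In C compilation units the counters expand to nothing, so C
       * code can mention them freely whether or not the ticky RTS is
       * being built. */
      #define TICK_ENT_DYN_IND()   /* no-op in C */
      #define TICK_UPDF_PUSHED()   /* no-op in C */
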
* FILL_SLOP: don't fill slop for BLACKHOLE/CAF_BLACKHOLE (Simon Marlow, 2006-08-30; 1 file, -0/+6)

  This affects -debug only, avoids crash with test conc012.

* Replace inline C functions with C-- macros in .cmm code (Simon Marlow, 2006-06-29; 1 file, -3/+1)

  So that we can build the RTS with the NCG.

* use the new "prim %write_barrier()" in .cmm instead of calls to wb()Simon Marlow2006-06-291-23/+23
|
* fix up slop-overwriting for THUNK_SELECTORS in DEBUG mode (Simon Marlow, 2006-06-27; 1 file, -10/+14)
* Reorganisation of the source tree (Simon Marlow, 2006-04-07; 1 file, -0/+361)

  Most of the other users of the fptools build system have migrated to
  Cabal, and with the move to darcs we can now flatten the source tree
  without losing history, so here goes.

  The main change is that the ghc/ subdir is gone, and most of what it
  contained is now at the top level. The build system now makes no
  pretense at being multi-project, it is just the GHC build system.

  No doubt this will break many things, and there will be a period of
  instability while we fix the dependencies. A straightforward build
  should work, but I haven't yet fixed binary/source distributions.
  Changes to the Building Guide will follow, too.