author     Ben Gamari <ben@well-typed.com>     2019-02-05 11:51:14 -0500
committer  Ben Gamari <ben@smart-cactus.org>   2019-10-20 21:15:52 -0400
commit     bd8e3ff43b64a72ed1c820e89691d0a83a1c6e96 (patch)
tree       8b07778e3c09460edce24750ae6da4d487eb5774 /rts/Threads.c
parent     f8f77a070f4a9a93944dff0b7270162a40931c58 (diff)
download   haskell-bd8e3ff43b64a72ed1c820e89691d0a83a1c6e96.tar.gz
rts: Implement concurrent collection in the nonmoving collector
This extends the non-moving collector to allow concurrent collection.
The full design of the collector implemented here is described in detail
in a technical note:

    B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
    Compiler" (2018)
This extension involves the introduction of a capability-local
remembered set, known as the /update remembered set/, which tracks
objects which may no longer be visible to the collector due to mutation.
To maintain this remembered set we introduce a write barrier on
mutations which is enabled while a concurrent mark is underway.
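As a rough sketch of the barrier's shape (the `overwrite_ptr` helper below is purely
illustrative; only `nonmoving_write_barrier_enabled`, `updateRemembSetPushClosure`,
and `RTS_UNLIKELY` are taken from the diff below):

    // Illustrative sketch only, assuming the usual RTS headers.
    #include "Rts.h"

    static inline void
    overwrite_ptr (Capability *cap, StgClosure **slot, StgClosure *new_val)
    {
        if (RTS_UNLIKELY(nonmoving_write_barrier_enabled)) {
            // Record the old referent so a concurrent mark can still reach it
            // even though the mutator is about to drop this reference.
            updateRemembSetPushClosure(cap, *slot);
        }
        *slot = new_val;
    }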
The update remembered set representation is similar to that of the
nonmoving mark queue, being a chunked array of `MarkEntry`s. Each
`Capability` maintains a single accumulator chunk, which is flushed
when (a) it is filled, or (b) the nonmoving collector enters its
post-mark synchronization phase.
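A standalone sketch of the accumulator-chunk idea (the chunk size, type, and
function names here are illustrative, not the RTS's actual structures):

    #include <stddef.h>

    #define CHUNK_ENTRIES 256                /* illustrative size only */

    typedef struct Chunk {
        struct Chunk *next;                  /* chunks already handed off */
        size_t        used;
        void         *entries[CHUNK_ENTRIES];
    } Chunk;

    extern Chunk *fresh_chunk(void);
    extern void   hand_chunk_to_collector(Chunk *c);

    /* One accumulator per Capability: record an entry, handing the chunk
     * to the collector when it fills.  The collector additionally forces
     * a flush when it enters its post-mark synchronization phase. */
    static void push_entry (Chunk **accumulator, void *p)
    {
        Chunk *c = *accumulator;
        c->entries[c->used++] = p;
        if (c->used == CHUNK_ENTRIES) {
            hand_chunk_to_collector(c);
            *accumulator = fresh_chunk();
        }
    }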
While the write barrier touches a significant amount of code it is
conceptually straightforward: the mutator must ensure that the referent
of any pointer it overwrites is added to the update remembered set.
However, there are a few details:
* In the case of objects with a dirty flag (e.g. `MVar`s) we can
exploit the fact that only the *first* mutation requires a write
barrier (see the sketch after this list).
* Weak references, as usual, complicate things. In particular, we must
ensure that the referent of a weak object is marked if dereferenced by
the mutator. For this we (unfortunately) must introduce a read
barrier, as described in Note [Concurrent read barrier on deRefWeak#]
(in `NonMovingMark.c`).
* Stable names are also a bit tricky as described in Note [Sweeping
stable names in the concurrent collector] (`NonMovingSweep.c`).
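Returning to the dirty-flag point above, a minimal sketch (ignoring locking;
`write_mvar_value` is hypothetical, while `dirty_MVAR`'s extra old-value argument
mirrors the change in the diff below):

    // Sketch only, assuming the usual RTS headers.
    #include "Rts.h"

    static void
    write_mvar_value (Capability *cap, StgMVar *mvar, StgClosure *new_value)
    {
        const StgInfoTable *info = ((StgClosure *) mvar)->header.info;
        if (info == &stg_MVAR_CLEAN_info) {
            // First mutation since the object was last cleaned: dirtying it
            // runs the write barrier, passing the value being overwritten.
            dirty_MVAR(&cap->r, (StgClosure *) mvar, mvar->value);
        }
        // Writes to an already-dirty MVar need no further barrier.
        mvar->value = new_value;
    }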
We take quite some pains to ensure that the high thread count often seen
in parallel Haskell applications doesn't affect pause times. To this end
we allow thread stacks to be marked either by the thread itself (when it
runs or when its stack underflows) or by the concurrent mark thread (if
the thread owning the stack is never scheduled). There is a non-trivial
handshake to ensure that this happens without racing, which is described
in Note [StgStack dirtiness flags and concurrent marking].
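A very rough sketch of the mutator's side of that handshake (the
`nonmovingMarkEpoch` variable and `updateRemembSetPushStack` helper are
assumptions here; the authoritative description is the Note cited above):

    // Sketch only, assuming the usual RTS headers.
    #include "Rts.h"

    /* Before running on (and thereby mutating) its stack, the thread checks
     * whether the stack has been marked in the current mark epoch; if not,
     * the mutator pushes the stack's frames itself, so the concurrent mark
     * thread never has to chase a stack that is in use. */
    static void mutator_claims_stack (Capability *cap, StgStack *stack)
    {
        if (RTS_UNLIKELY(nonmoving_write_barrier_enabled)
            && stack->marking != nonmovingMarkEpoch) {     // assumed epoch variable
            updateRemembSetPushStack(cap, stack);          // assumed helper
            stack->marking = nonmovingMarkEpoch;
        }
        stack->dirty = STACK_DIRTY;
    }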
Co-Authored-by: Ömer Sinan Ağacan <omer@well-typed.com>
Diffstat (limited to 'rts/Threads.c')
-rw-r--r--  rts/Threads.c  22
1 file changed, 18 insertions, 4 deletions
diff --git a/rts/Threads.c b/rts/Threads.c
index 3d5b463051..2b11a1eb90 100644
--- a/rts/Threads.c
+++ b/rts/Threads.c
@@ -86,6 +86,7 @@ createThread(Capability *cap, W_ size)
     stack->stack_size = stack_size - sizeofW(StgStack);
     stack->sp = stack->stack + stack->stack_size;
     stack->dirty = STACK_DIRTY;
+    stack->marking = 0;

     tso = (StgTSO *)allocate(cap, sizeofW(StgTSO));
     TICK_ALLOC_TSO();
@@ -611,6 +612,7 @@ threadStackOverflow (Capability *cap, StgTSO *tso)
     TICK_ALLOC_STACK(chunk_size);

     new_stack->dirty = 0; // begin clean, we'll mark it dirty below
+    new_stack->marking = 0;
     new_stack->stack_size = chunk_size - sizeofW(StgStack);
     new_stack->sp = new_stack->stack + new_stack->stack_size;

@@ -721,9 +723,17 @@ threadStackUnderflow (Capability *cap, StgTSO *tso)
             barf("threadStackUnderflow: not enough space for return values");
         }

-        new_stack->sp -= retvals;
+        if (RTS_UNLIKELY(nonmoving_write_barrier_enabled)) {
+            // ensure that values that we copy into the new stack are marked
+            // for the nonmoving collector. Note that these values won't
+            // necessarily form a full closure so we need to handle them
+            // specially.
+            for (unsigned int i = 0; i < retvals; i++) {
+                updateRemembSetPushClosure(cap, (StgClosure *) old_stack->sp[i]);
+            }
+        }

-        memcpy(/* dest */ new_stack->sp,
+        memcpy(/* dest */ new_stack->sp - retvals,
               /* src  */ old_stack->sp,
               /* size */ retvals * sizeof(W_));
     }
@@ -735,8 +745,12 @@ threadStackUnderflow (Capability *cap, StgTSO *tso)
     // restore the stack parameters, and update tot_stack_size
     tso->tot_stack_size -= old_stack->stack_size;

-    // we're about to run it, better mark it dirty
+    // we're about to run it, better mark it dirty.
+    //
+    // N.B. the nonmoving collector may mark the stack, meaning that sp must
+    //      point at a valid stack frame.
     dirty_STACK(cap, new_stack);
+    new_stack->sp -= retvals;

     return retvals;
 }
@@ -768,7 +782,7 @@ loop:
     if (q == (StgMVarTSOQueue*)&stg_END_TSO_QUEUE_closure) {
         /* No further takes, the MVar is now full. */
         if (info == &stg_MVAR_CLEAN_info) {
-            dirty_MVAR(&cap->r, (StgClosure*)mvar);
+            dirty_MVAR(&cap->r, (StgClosure*)mvar, mvar->value);
         }

         mvar->value = value;