| author | Simon Marlow <marlowsd@gmail.com> | 2010-12-15 12:08:43 +0000 |
|---|---|---|
| committer | Simon Marlow <marlowsd@gmail.com> | 2010-12-15 12:08:43 +0000 |
| commit | f30d527344db528618f64a25250a3be557d9f287 | |
| tree | 5b827afed254139a197cbdcdd37bebe8fa859d67 /rts/ThreadPaused.c | |
| parent | 99b6e6ac44c6c610b0d60e3b70a2341c83d23106 | |
| download | haskell-f30d527344db528618f64a25250a3be557d9f287.tar.gz | |
Implement stack chunks and separate TSO/STACK objects
This patch makes two changes to the way stacks are managed:
1. The stack is now stored in a separate object from the TSO.
This means that it is easier to replace the stack object for a thread
when the stack overflows or underflows; we don't have to leave behind
the old TSO as an indirection any more. Consequently, we can remove
ThreadRelocated and deRefTSO(), which were a pain.
This is obviously the right thing, but the last time I tried to do it
it made performance worse. This time I seem to have cracked it.
2. Stacks are now represented as a chain of chunks, rather than
a single monolithic object.
The big advantage here is that individual chunks are marked clean or
dirty according to whether they contain pointers to the young
generation, and the GC can avoid traversing clean stack chunks during
a young-generation collection. This means that programs with deep
stacks will see a big saving in GC overhead when using the default GC
settings.
A secondary advantage is that there is much less copying involved as
the stack grows. Programs that quickly grow a deep stack will see big
improvements.
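
To see the GC advantage concretely, here is a minimal standalone C sketch of the idea. The StackChunk type, the words_scanned helper and the numbers are hypothetical illustrations, not GHC's actual scavenging code:

```c
#include <stddef.h>
#include <stdio.h>

/* hypothetical stand-in for a stack chunk; not GHC's StgStack */
typedef struct StackChunk_ {
    struct StackChunk_ *next;   /* older chunk underneath this one */
    int                 dirty;  /* set when a young-gen pointer was written here */
    size_t              nwords; /* stack slots in use in this chunk */
} StackChunk;

/* Walk the chain the way a minor collection conceptually would:
 * dirty chunks are scanned, clean chunks are skipped outright. */
static size_t words_scanned(StackChunk *chunk)
{
    size_t n = 0;
    for (; chunk != NULL; chunk = chunk->next) {
        if (!chunk->dirty)
            continue;          /* clean: cannot point into the young generation */
        n += chunk->nwords;    /* stand-in for scavenging the frames */
        chunk->dirty = 0;      /* clean again until the next write */
    }
    return n;
}

int main(void)
{
    StackChunk deep = { NULL, 0, 8192 };  /* old, untouched part of the stack */
    StackChunk top  = { &deep, 1, 512 };  /* only the top chunk has been written */
    printf("scanned %zu of %zu words\n",
           words_scanned(&top), deep.nwords + top.nwords);
    return 0;
}
```

With a single monolithic stack the collector would have to walk all 8704 words on every minor GC; with chunked stacks it only walks the one dirty chunk.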
In some ways the implementation is simpler, as nothing special needs
to be done to reclaim stack as the stack shrinks (the GC just recovers
the dead stack chunks). On the other hand, we have to manage stack
underflow between chunks, so there's a new stack frame
(UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
The total amount of code is probably about the same as before.
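
For readers who don't live in the RTS, a simplified sketch of the new layout may help. It is modelled on the names used in this patch (stackobj, sp, stack_size, dirty, UNDERFLOW_FRAME) but it is not a verbatim copy of the real declarations in the RTS headers, and the current_sp helper is purely illustrative:

```c
#include <stdint.h>

typedef uintptr_t  StgWord;
typedef StgWord   *StgPtr;

/* One stack chunk. A thread's stack is now a chain of these
 * instead of a single array living inside the TSO. */
typedef struct StgStack_ {
    uint32_t  stack_size;   /* size of stack[] in words */
    uint32_t  dirty;        /* does this chunk need scanning in a minor GC? */
    StgPtr    sp;           /* current stack pointer within this chunk */
    StgWord   stack[];      /* the frames themselves */
} StgStack;

/* The TSO no longer contains the stack; it just points at the topmost
 * chunk, so replacing the stack on overflow or underflow is a pointer
 * swap rather than a relocation of the whole thread object. */
typedef struct StgTSO_ {
    StgStack *stackobj;
    /* ... scheduling fields elided ... */
} StgTSO;

/* The frame sitting at the bottom of every chunk except the oldest:
 * when execution returns into it, the RTS carries on in the next
 * chunk down (this is the new UNDERFLOW_FRAME). */
typedef struct {
    const void *info;        /* the frame's info pointer */
    StgStack   *next_chunk;  /* the older chunk underneath */
} StgUnderflowFrame;

/* the access pattern seen throughout the diff below:
 * what used to be tso->sp is now tso->stackobj->sp */
StgPtr current_sp(StgTSO *tso) { return tso->stackobj->sp; }
```

Because the TSO and its stack are now separate heap objects, growing or shrinking the stack is just a matter of pointing stackobj at a different STACK object; the TSO itself never moves, which is what lets ThreadRelocated and deRefTSO() go away.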
There are new RTS flags:
-ki<size> Sets the initial thread stack size (default 1k). Examples: -ki4k, -ki2m
-kc<size> Sets the stack chunk size (default 32k)
-kb<size> Sets the stack chunk buffer size (default 1k)
-ki was previously called just -k, and the old name is still accepted
for backwards compatibility. These new options are documented.
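
The <size> arguments use the usual RTS convention of a number with an optional k/m/g suffix. As a rough standalone illustration only (parse_size below is a hypothetical helper, not GHC's actual flag parser), the examples above decode like this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

/* interpret "4k", "2m", ... as a byte count */
static unsigned long parse_size(const char *s)
{
    char *end;
    unsigned long n = strtoul(s, &end, 10);
    switch (tolower((unsigned char)*end)) {
        case 'g': n *= 1024;   /* fall through */
        case 'm': n *= 1024;   /* fall through */
        case 'k': n *= 1024;   break;
        default:  break;       /* bare number: bytes */
    }
    return n;
}

int main(void)
{
    printf("-ki4k  -> %lu bytes\n", parse_size("4k"));
    printf("-kc32k -> %lu bytes\n", parse_size("32k"));
    printf("-ki2m  -> %lu bytes\n", parse_size("2m"));
    return 0;
}
```

So -ki4k asks for a 4096-byte initial stack and -kc32k for 32768-byte chunks.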
Diffstat (limited to 'rts/ThreadPaused.c')
-rw-r--r-- | rts/ThreadPaused.c | 58 |
1 file changed, 28 insertions, 30 deletions
diff --git a/rts/ThreadPaused.c b/rts/ThreadPaused.c
index 94a5a15f46..aeae1d4128 100644
--- a/rts/ThreadPaused.c
+++ b/rts/ThreadPaused.c
@@ -44,13 +44,13 @@ stackSqueeze(Capability *cap, StgTSO *tso, StgPtr bottom)
     // contains two values: the size of the gap, and the distance
     // to the next gap (or the stack top).

-    frame = tso->sp;
+    frame = tso->stackobj->sp;
     ASSERT(frame < bottom);

     prev_was_update_frame = rtsFalse;
     current_gap_size = 0;
-    gap = (struct stack_gap *) (tso->sp - sizeofW(StgUpdateFrame));
+    gap = (struct stack_gap *) (frame - sizeofW(StgUpdateFrame));

     while (frame <= bottom)
     {
@@ -150,7 +150,7 @@ stackSqueeze(Capability *cap, StgTSO *tso, StgPtr bottom)
         next_gap_start = (StgWord8*)gap + sizeof(StgUpdateFrame);
         sp = next_gap_start;

-        while ((StgPtr)gap > tso->sp) {
+        while ((StgPtr)gap > tso->stackobj->sp) {

             // we're working in *bytes* now...
             gap_start = next_gap_start;
@@ -164,7 +164,7 @@ stackSqueeze(Capability *cap, StgTSO *tso, StgPtr bottom)
             memmove(sp, next_gap_start, chunk_size);
         }

-        tso->sp = (StgPtr)sp;
+        tso->stackobj->sp = (StgPtr)sp;
     }
 }

@@ -201,27 +201,27 @@ threadPaused(Capability *cap, StgTSO *tso)
     // blackholing, or eager blackholing consistently. See Note
     // [upd-black-hole] in sm/Scav.c.

-    stack_end = &tso->stack[tso->stack_size];
+    stack_end = tso->stackobj->stack + tso->stackobj->stack_size;

-    frame = (StgClosure *)tso->sp;
+    frame = (StgClosure *)tso->stackobj->sp;

-    while (1) {
-        // If we've already marked this frame, then stop here.
-        if (frame->header.info == (StgInfoTable *)&stg_marked_upd_frame_info) {
-            if (prev_was_update_frame) {
-                words_to_squeeze += sizeofW(StgUpdateFrame);
-                weight += weight_pending;
-                weight_pending = 0;
-            }
-            goto end;
-        }
-
-        info = get_ret_itbl(frame);
+    while ((P_)frame < stack_end) {
+        info = get_ret_itbl(frame);

         switch (info->i.type) {
-
+
         case UPDATE_FRAME:
+            // If we've already marked this frame, then stop here.
+            if (frame->header.info == (StgInfoTable *)&stg_marked_upd_frame_info) {
+                if (prev_was_update_frame) {
+                    words_to_squeeze += sizeofW(StgUpdateFrame);
+                    weight += weight_pending;
+                    weight_pending = 0;
+                }
+                goto end;
+            }
+
             SET_INFO(frame, (StgInfoTable *)&stg_marked_upd_frame_info);

             bh = ((StgUpdateFrame *)frame)->updatee;
@@ -235,7 +235,7 @@ threadPaused(Capability *cap, StgTSO *tso)
             {
                 debugTrace(DEBUG_squeeze,
                            "suspending duplicate work: %ld words of stack",
-                           (long)((StgPtr)frame - tso->sp));
+                           (long)((StgPtr)frame - tso->stackobj->sp));

                 // If this closure is already an indirection, then
                 // suspend the computation up to this point.
@@ -245,25 +245,22 @@ threadPaused(Capability *cap, StgTSO *tso)
                 // Now drop the update frame, and arrange to return
                 // the value to the frame underneath:
-                tso->sp = (StgPtr)frame + sizeofW(StgUpdateFrame) - 2;
-                tso->sp[1] = (StgWord)bh;
+                tso->stackobj->sp = (StgPtr)frame + sizeofW(StgUpdateFrame) - 2;
+                tso->stackobj->sp[1] = (StgWord)bh;
                 ASSERT(bh->header.info != &stg_TSO_info);
-                tso->sp[0] = (W_)&stg_enter_info;
+                tso->stackobj->sp[0] = (W_)&stg_enter_info;

                 // And continue with threadPaused; there might be
                 // yet more computation to suspend.
-                frame = (StgClosure *)(tso->sp + 2);
+                frame = (StgClosure *)(tso->stackobj->sp + 2);
                 prev_was_update_frame = rtsFalse;
                 continue;
             }

+            // zero out the slop so that the sanity checker can tell
+            // where the next closure is.
-            DEBUG_FILL_SLOP(bh);
-
-            // @LDV profiling
-            // We pretend that bh is now dead.
-            LDV_RECORD_DEAD_FILL_SLOP_DYNAMIC((StgClosure *)bh);
+            OVERWRITING_CLOSURE(bh);

             // an EAGER_BLACKHOLE or CAF_BLACKHOLE gets turned into a
             // BLACKHOLE here.
@@ -301,7 +298,8 @@ threadPaused(Capability *cap, StgTSO *tso)
             prev_was_update_frame = rtsTrue;
             break;

-        case STOP_FRAME:
+        case UNDERFLOW_FRAME:
+        case STOP_FRAME:
             goto end;

             // normal stack frames; do nothing except advance the pointer