| author | Ben Gamari <ben@smart-cactus.org> | 2019-10-23 14:01:45 -0400 |
|---|---|---|
| committer | Ben Gamari <ben@smart-cactus.org> | 2019-10-23 14:56:46 -0400 |
| commit | 7f72b540288bbdb32a6750dd64b9d366501ed10c (patch) | |
| tree | 438203c9c0b052fb65210b5e89acfa7b1d44d5b8 /rts/eventlog | |
| parent | 8abddac870d4b49f77b5ce56bfeb68328dd0d651 (diff) | |
| parent | 984745b074c186f6058730087a4fc8156240ec76 (diff) | |
| download | haskell-7f72b540288bbdb32a6750dd64b9d366501ed10c.tar.gz | |
Merge non-moving garbage collector
This introduces a concurrent mark & sweep garbage collector to manage the old
generation. The concurrent nature of this collector typically results in
significantly reduced maximum and mean pause times in applications with large
working sets.
Due to the large and intricate nature of the change, I have opted to
preserve the fully-buildable history, including merge commits; it is
described in the "Branch overview" section below.
Collector design
================
The full design of the collector implemented here is described in detail
in a technical note
> B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
> Compiler" (2018)
This document can be requested from @bgamari.
The basic heap structure used in this design is heavily inspired by
> K. Ueno & A. Ohori. "A fully concurrent garbage collector for
> functional programs on multicore processors." /ACM SIGPLAN Notices/
> Vol. 51. No. 9 (presented at ICFP 2016)
This design is intended to allow both marking and sweeping to proceed
concurrently with the execution of a multi-core mutator. Unlike the Ueno
design, which requires no global synchronization pauses, the collector
introduced here requires a stop-the-world pause at the beginning and end
of the mark phase.
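Schematically, a major collection therefore alternates between brief
global pauses and long concurrent phases. The sketch below illustrates
just that phase structure; every function name in it is invented for
exposition and does not correspond to GHC's internals:
```
/* Phase structure of the non-moving collector described above.
 * All identifiers here are illustrative assumptions, not GHC API. */
extern void stop_the_world(void);
extern void resume_the_world(void);
extern void prepare_snapshot(void);  /* record roots; set up snapshot invariant */
extern void concurrent_mark(void);   /* runs alongside the mutator */
extern void final_sync_mark(void);   /* drain remaining mark work in the pause */
extern void concurrent_sweep(void);  /* also overlaps mutator execution */

static void nonmoving_major_gc(void)
{
    stop_the_world();      /* pause 1: begin the mark phase */
    prepare_snapshot();
    resume_the_world();

    concurrent_mark();     /* mutator keeps running; the write barrier
                              (sketched later) preserves the snapshot */

    stop_the_world();      /* pause 2: end the mark phase */
    final_sync_mark();
    resume_the_world();

    concurrent_sweep();    /* reclaim unmarked blocks concurrently */
}
```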
To avoid heap fragmentation, the allocator consists of a number of
fixed-size /sub-allocators/. Each of these sub-allocators allocates into
its own set of /segments/, themselves allocated from the block
allocator. Each segment is broken into a set of fixed-size allocation
blocks (which back allocations) in addition to a bitmap (used to track
the liveness of blocks) and some additional metadata (also used to
track liveness).
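To make that layout concrete, here is a simplified C sketch of a segment
and its sub-allocator. The field names, widths, and segment size are
assumptions made for exposition; see `rts/Nonmoving.c` for the real
definitions:
```
/* Illustrative sketch of the heap structure described above; not
 * GHC's actual definitions. */
#include <stddef.h>
#include <stdint.h>

#define SEGMENT_SIZE (32 * 1024)        /* assumed segment size */

struct nonmoving_segment {
    struct nonmoving_segment *link;     /* next segment of the same sub-allocator */
    uint16_t next_free;                 /* index where the block scan resumes */
    uint8_t  block_size_log2;           /* all blocks in a segment share one size */
    uint8_t  bitmap[];                  /* one liveness byte per block; the
                                           fixed-size blocks follow the bitmap */
};

/* One fixed-size sub-allocator: owns every segment of one block size. */
struct nonmoving_allocator {
    struct nonmoving_segment *filled;   /* segments with no free blocks left */
    struct nonmoving_segment *active;   /* segments with free blocks */
    struct nonmoving_segment *current;  /* segment currently allocated into */
};

/* Allocate one block: scan the bitmap for a dead (unmarked) block.
 * Because blocks never move, a free block is simply one whose bitmap
 * entry is clear after the last collection. */
static void *segment_allocate(struct nonmoving_segment *seg, unsigned n_blocks)
{
    for (unsigned i = seg->next_free; i < n_blocks; i++) {
        if (seg->bitmap[i] == 0) {
            seg->next_free = (uint16_t)(i + 1);
            uint8_t *blocks = seg->bitmap + n_blocks;  /* blocks follow the bitmap */
            return blocks + ((size_t)i << seg->block_size_log2);
        }
    }
    return NULL;  /* segment full: caller moves it to the filled list */
}
```
Keeping every block in a segment the same size is what lets a single
bitmap entry per block describe liveness, and why sweeping never needs
to move objects.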
This heap structure enables collection via mark-and-sweep, which can be
performed concurrently via a snapshot-at-the-beginning scheme (although
concurrent collection is not implemented in this patch).
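Snapshot-at-the-beginning hinges on a deletion barrier: when the mutator
overwrites a pointer field while marking is in progress, the old
referent must be recorded so that everything reachable at snapshot time
still gets traced. A minimal conceptual sketch, with invented names (in
GHC the analogous mechanism is the update remembered set in
`rts/NonmovingMark.c`):
```
/* Conceptual SATB deletion barrier; all identifiers are illustrative. */
typedef struct Closure Closure;

extern _Bool concurrent_mark_running;      /* true during the mark phase */
extern void  upd_rem_set_push(Closure *c); /* feeds the collector's mark queue */

static inline void write_field(Closure **field, Closure *new_val)
{
    if (concurrent_mark_running) {
        Closure *old = *field;
        if (old != NULL)
            upd_rem_set_push(old);  /* keep the snapshot-time referent alive */
    }
    *field = new_val;               /* the actual mutation */
}
```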
Implementation structure
========================
The majority of the collector is implemented in a handful of files:
* `rts/Nonmoving.c` is the heart of the beast. It implements the entry-point
to the nonmoving collector (`nonmoving_collect`), as well as the allocator
(`nonmoving_allocate`) and a number of utilities for manipulating the heap.
* `rts/NonmovingMark.c` implements the mark queue, the update remembered
set, and the mark loop (sketched after this list).
* `rts/NonmovingSweep.c` implements the sweep loop.
* `rts/NonmovingScav.c` implements the logic necessary to scavenge the
nonmoving heap.
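As a rough illustration of how the mark-queue pieces fit together, a
mark loop of the following shape pops grey objects, marks them in the
segment bitmap, and pushes their children. All types and helpers below
are assumptions for exposition, not GHC's API:
```
/* Sketch of a mark loop over an explicit queue of grey closures. */
#include <stddef.h>

typedef struct Closure Closure;

typedef struct {
    Closure **entries;  /* stack of closures awaiting marking */
    size_t    top;
} MarkQueue;

extern _Bool mark_bit_is_set(const Closure *c);       /* consult segment bitmap */
extern void  set_mark_bit(Closure *c);
extern void  push_children(MarkQueue *q, Closure *c); /* enqueue referents */

static void mark_loop(MarkQueue *q)
{
    while (q->top > 0) {
        Closure *c = q->entries[--q->top];
        if (mark_bit_is_set(c))
            continue;            /* already reached via another path */
        set_mark_bit(c);         /* recorded in the segment's bitmap */
        push_children(q, c);     /* greys everything c points to */
    }
}
```
Sweeping then needs only the bitmaps: any block whose mark bit is clear
is dead and can be reused in place.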
Branch overview
===============
```
* wip/gc/opt-pause:
| A variety of small optimisations to further reduce pause times.
|
* wip/gc/compact-nfdata:
| Introduce support for compact regions into the non-moving
|\ collector
| \
| \
| | * wip/gc/segment-header-to-bdescr:
| | | Another optimization that we are considering, pushing
| | | some segment metadata into the segment descriptor for
| | | the sake of locality during mark
| | |
| * | wip/gc/shortcutting:
| | | Support for indirection shortcutting and the selector optimization
| | | in the non-moving heap.
| | |
* | | wip/gc/docs:
| |/ Work on implementation documentation.
| /
|/
* wip/gc/everything:
| A roll-up of everything below.
|\
| \
| |\
| | \
| | * wip/gc/optimize:
| | | A variety of optimizations, primarily to the mark loop.
| | | Some of these are microoptimizations but a few are quite
| | | significant. In particular, the prefetch patches have
| | | produced a nontrivial improvement in mark performance.
| | |
| | * wip/gc/aging:
| | | Enable support for aging in major collections.
| | |
| * | wip/gc/test:
| | | Fix up the testsuite to more or less pass.
| | |
* | | wip/gc/instrumentation:
| | | A variety of runtime instrumentation including statistics
| | / support, the nonmoving census, and eventlog support.
| |/
| /
|/
* wip/gc/nonmoving-concurrent:
| The concurrent write barriers.
|
* wip/gc/nonmoving-nonconcurrent:
| The nonmoving collector without the write barriers necessary
| for concurrent collection.
|
* wip/gc/preparation:
| A merge of the various preparatory patches that aren't directly
| implementing the GC.
|
|
* GHC HEAD
.
.
.
```
Diffstat (limited to 'rts/eventlog')
-rw-r--r-- | rts/eventlog/EventLog.c | 75
-rw-r--r-- | rts/eventlog/EventLog.h | 10
2 files changed, 82 insertions, 3 deletions
```
diff --git a/rts/eventlog/EventLog.c b/rts/eventlog/EventLog.c
index 6d7b487152..5f22af5bfc 100644
--- a/rts/eventlog/EventLog.c
+++ b/rts/eventlog/EventLog.c
@@ -109,7 +109,15 @@ char *EventDesc[] = {
   [EVENT_HEAP_PROF_SAMPLE_COST_CENTRE] = "Heap profile cost-centre sample",
   [EVENT_PROF_SAMPLE_COST_CENTRE] = "Time profile cost-centre stack",
   [EVENT_PROF_BEGIN] = "Start of a time profile",
-  [EVENT_USER_BINARY_MSG] = "User binary message"
+  [EVENT_USER_BINARY_MSG] = "User binary message",
+  [EVENT_CONC_MARK_BEGIN] = "Begin concurrent mark phase",
+  [EVENT_CONC_MARK_END] = "End concurrent mark phase",
+  [EVENT_CONC_SYNC_BEGIN] = "Begin concurrent GC synchronisation",
+  [EVENT_CONC_SYNC_END] = "End concurrent GC synchronisation",
+  [EVENT_CONC_SWEEP_BEGIN] = "Begin concurrent sweep",
+  [EVENT_CONC_SWEEP_END] = "End concurrent sweep",
+  [EVENT_CONC_UPD_REM_SET_FLUSH] = "Update remembered set flushed",
+  [EVENT_NONMOVING_HEAP_CENSUS] = "Nonmoving heap census"
 };
 
 // Event type.
@@ -456,6 +464,27 @@ init_event_types(void)
             eventTypes[t].size = EVENT_SIZE_DYNAMIC;
             break;
 
+        case EVENT_CONC_MARK_BEGIN:
+        case EVENT_CONC_SYNC_BEGIN:
+        case EVENT_CONC_SYNC_END:
+        case EVENT_CONC_SWEEP_BEGIN:
+        case EVENT_CONC_SWEEP_END:
+            eventTypes[t].size = 0;
+            break;
+
+        case EVENT_CONC_MARK_END:
+            eventTypes[t].size = 4;
+            break;
+
+        case EVENT_CONC_UPD_REM_SET_FLUSH: // (cap)
+            eventTypes[t].size =
+                sizeof(EventCapNo);
+            break;
+
+        case EVENT_NONMOVING_HEAP_CENSUS: // (cap, blk_size, active_segs, filled_segs, live_blks)
+            eventTypes[t].size = 13;
+            break;
+
         default:
             continue; /* ignore deprecated events */
         }
@@ -497,8 +526,10 @@ initEventLogging(const EventLogWriter *ev_writer)
     event_log_writer = ev_writer;
     initEventLogWriter();
 
-    if (sizeof(EventDesc) / sizeof(char*) != NUM_GHC_EVENT_TAGS) {
-        barf("EventDesc array has the wrong number of elements");
+    int num_descs = sizeof(EventDesc) / sizeof(char*);
+    if (num_descs != NUM_GHC_EVENT_TAGS) {
+        barf("EventDesc array has the wrong number of elements (%d, NUM_GHC_EVENT_TAGS=%d)",
+             num_descs, NUM_GHC_EVENT_TAGS);
     }
 
     /*
@@ -1015,6 +1046,15 @@ void postTaskDeleteEvent (EventTaskId taskId)
 }
 
 void
+postEventNoCap (EventTypeNum tag)
+{
+    ACQUIRE_LOCK(&eventBufMutex);
+    ensureRoomForEvent(&eventBuf, tag);
+    postEventHeader(&eventBuf, tag);
+    RELEASE_LOCK(&eventBufMutex);
+}
+
+void
 postEvent (Capability *cap, EventTypeNum tag)
 {
     EventsBuf *eb = &capEventBuf[cap->no];
@@ -1140,6 +1180,35 @@ void postThreadLabel(Capability *cap,
     postBuf(eb, (StgWord8*) label, strsize);
 }
 
+void postConcUpdRemSetFlush(Capability *cap)
+{
+    EventsBuf *eb = &capEventBuf[cap->no];
+    ensureRoomForEvent(eb, EVENT_CONC_UPD_REM_SET_FLUSH);
+    postEventHeader(eb, EVENT_CONC_UPD_REM_SET_FLUSH);
+    postCapNo(eb, cap->no);
+}
+
+void postConcMarkEnd(StgWord32 marked_obj_count)
+{
+    ACQUIRE_LOCK(&eventBufMutex);
+    ensureRoomForEvent(&eventBuf, EVENT_CONC_MARK_END);
+    postEventHeader(&eventBuf, EVENT_CONC_MARK_END);
+    postWord32(&eventBuf, marked_obj_count);
+    RELEASE_LOCK(&eventBufMutex);
+}
+
+void postNonmovingHeapCensus(int log_blk_size,
+                             const struct NonmovingAllocCensus *census)
+{
+    ACQUIRE_LOCK(&eventBufMutex);
+    postEventHeader(&eventBuf, EVENT_NONMOVING_HEAP_CENSUS);
+    postWord8(&eventBuf, log_blk_size);
+    postWord32(&eventBuf, census->n_active_segs);
+    postWord32(&eventBuf, census->n_filled_segs);
+    postWord32(&eventBuf, census->n_live_blocks);
+    RELEASE_LOCK(&eventBufMutex);
+}
+
 void closeBlockMarker (EventsBuf *ebuf)
 {
     if (ebuf->marker)
diff --git a/rts/eventlog/EventLog.h b/rts/eventlog/EventLog.h
index ec9a5f34e3..5bd3b5dadb 100644
--- a/rts/eventlog/EventLog.h
+++ b/rts/eventlog/EventLog.h
@@ -11,6 +11,7 @@
 #include "rts/EventLogFormat.h"
 #include "rts/EventLogWriter.h"
 #include "Capability.h"
+#include "sm/NonMovingCensus.h"
 
 #include "BeginPrivate.h"
 
@@ -39,6 +40,7 @@ void postSchedEvent(Capability *cap, EventTypeNum tag,
  * Post a nullary event.
  */
 void postEvent(Capability *cap, EventTypeNum tag);
+void postEventNoCap(EventTypeNum tag);
 
 void postEventAtTimestamp (Capability *cap, EventTimestamp ts,
                            EventTypeNum tag);
@@ -164,6 +166,11 @@ void postProfSampleCostCentre(Capability *cap,
 void postProfBegin(void);
 #endif /* PROFILING */
 
+void postConcUpdRemSetFlush(Capability *cap);
+void postConcMarkEnd(StgWord32 marked_obj_count);
+void postNonmovingHeapCensus(int log_blk_size,
+                             const struct NonmovingAllocCensus *census);
+
 #else /* !TRACING */
 
 INLINE_HEADER void postSchedEvent (Capability *cap STG_UNUSED,
@@ -177,6 +184,9 @@ INLINE_HEADER void postEvent (Capability *cap STG_UNUSED,
                                   EventTypeNum tag STG_UNUSED)
 { /* nothing */ }
 
+INLINE_HEADER void postEventNoCap (EventTypeNum tag STG_UNUSED)
+{ /* nothing */ }
+
 INLINE_HEADER void postMsg (char *msg STG_UNUSED, va_list ap STG_UNUSED)
 { /* nothing */ }
```
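A note on the fixed sizes registered in `init_event_types` above: each
one is simply the byte count of the payload its posting function writes.
`EVENT_CONC_MARK_END` is 4 bytes (the single `StgWord32` marked-object
count), `EVENT_CONC_UPD_REM_SET_FLUSH` is `sizeof(EventCapNo)`, and the
13 bytes of `EVENT_NONMOVING_HEAP_CENSUS` decompose as one byte plus
three 32-bit words. A standalone sanity check (a sketch, independent of
GHC's headers):
```
#include <stdint.h>

/* EVENT_NONMOVING_HEAP_CENSUS payload, per postNonmovingHeapCensus:
 *   postWord8  -> log_blk_size    (1 byte)
 *   postWord32 -> n_active_segs   (4 bytes)
 *   postWord32 -> n_filled_segs   (4 bytes)
 *   postWord32 -> n_live_blocks   (4 bytes)
 */
enum { CENSUS_PAYLOAD_SIZE = sizeof(uint8_t) + 3 * sizeof(uint32_t) };

_Static_assert(CENSUS_PAYLOAD_SIZE == 13,
               "must match eventTypes[EVENT_NONMOVING_HEAP_CENSUS].size");
```
Note also that `postEventNoCap` serialises through the global `eventBuf`
under `eventBufMutex` rather than a capability-local buffer, presumably
because these events can originate outside any capability, e.g. on the
concurrent mark thread.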