The nonmoving GC doesn't support `+RTS -G1`, which this test insists on.
This uses the nonmoving collector when compiling the testcases.
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Flush the update remembered set periodically. The goal is to avoid a
situation where a thread pushes its stack to its local update remembered
set but doesn't flush until the nonmoving sync period, as this would
result in a large fraction of the heap being marked during the sync
pause.
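
As a rough illustration of the flushing scheme (all types and names below are illustrative, not GHC's exact definitions):

```c
#include <stddef.h>

/* Minimal sketch: each capability accumulates entries in a local chunk,
 * and flushing hands that chunk to the concurrent marker. */
typedef struct Chunk { struct Chunk *next; /* entries ... */ } Chunk;
typedef struct { Chunk *accum; } UpdRemSet;
typedef struct { Chunk *head; } GlobalMarkQueue;

static void flushUpdRemSet(UpdRemSet *rs, GlobalMarkQueue *gq)
{
    if (rs->accum == NULL) return;      // nothing to flush
    rs->accum->next = gq->head;         // push the local chunk onto the
    gq->head = rs->accum;               // global queue (locking elided)
    rs->accum = NULL;                   // start with a fresh accumulator
}
```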
Previously we would perform a preparatory moving collection, resulting
in many things being added to the mark queue. Only when we finished
would we realize in nonmovingCollect that a collection was already
running, in which case we would simply not run the nonmoving collector.
However, it was very easy to end up in a "treadmilling" situation: every
GC following the first failed major GC would be scheduled as a major GC.
Consequently we would continuously feed the concurrent collector with
more mark queue entries and it would never finish.
This patch aborts the major collection far earlier, meaning that we
avoid adding nonmoving objects to the mark queue and allow the
concurrent collector to finish.
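
A sketch of the revised control flow, with illustrative names (the real decision happens inside the RTS's GC entry point):

```c
#include <stdbool.h>

/* Decide up front whether a requested major GC must be downgraded
 * because the concurrent collector is still running, before any
 * preparatory work adds entries to the mark queue. */
static unsigned int chooseCollectGen(unsigned int requested_gen,
                                     unsigned int oldest_gen,
                                     bool conc_mark_running)
{
    if (requested_gen == oldest_gen && conc_mark_running) {
        // Treadmill avoidance: don't feed the still-running concurrent
        // mark more queue entries; do a minor collection instead.
        return oldest_gen - 1;
    }
    return requested_gen;
}
```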
Previously we would look at the segment header to determine the block
size despite the fact that we already had the block size at hand.
This improved overall runtime on nofib's constraints test by nearly 10%.
Ensure that the bitmap of the segment that we will clear next is in
cache by the time we reach it.
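
For illustration, a minimal sketch of the technique using the GCC/Clang `__builtin_prefetch` builtin (the segment layout here is invented):

```c
#include <string.h>

/* While clearing the current segment's mark bitmap, prefetch the next
 * segment's bitmap so it is already in cache when we reach it. */
struct Segment { struct Segment *link; unsigned char bitmap[512]; };

static void clearBitmaps(struct Segment *seg)
{
    for (; seg != NULL; seg = seg->link) {
        if (seg->link != NULL)
            __builtin_prefetch(seg->link->bitmap);
        memset(seg->bitmap, 0, sizeof seg->bitmap);
    }
}
```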
Use memchr instead of an open-coded loop. This is nearly twice as fast
in a synthetic benchmark.
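
A sketch of what such a search looks like, assuming the bitmap uses one byte per block with zero meaning free (the layout details are an assumption):

```c
#include <string.h>

/* Finding the next free block is a byte search, which memchr performs
 * with a well-tuned (often vectorized) implementation. */
static long nextFreeBlock(const unsigned char *bitmap, size_t from, size_t len)
{
    const unsigned char *p = memchr(bitmap + from, 0, len - from);
    return p != NULL ? (long)(p - bitmap) : -1;   // -1: segment full
}
```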
|
| | | |
| | | |
| | | |
| | | | |
This shortens MarkQueueEntry by 30% (one word).
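
The message doesn't spell out the encoding; one standard way to drop a word from such an entry is to pack a small type tag into the unused low bits of a word-aligned pointer. A generic sketch of that technique, not necessarily this commit's exact change:

```c
#include <stdint.h>

/* Closure pointers are word-aligned, so their low bits are free to
 * carry the entry's type tag instead of a separate word. */
enum EntryType { MARK_CLOSURE = 0, MARK_ARRAY = 1 };

static inline uintptr_t mkEntry(void *ptr, enum EntryType ty)
{
    return (uintptr_t)ptr | (uintptr_t)ty;              // tag in bit 0
}

static inline enum EntryType entryType(uintptr_t e)
{
    return (enum EntryType)(e & 1);
}

static inline void *entryPtr(uintptr_t e)
{
    return (void *)(e & ~(uintptr_t)1);                 // strip the tag
}
```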
Perf showed that this single div was capturing up to 10% of samples in
nonmovingMark. However, the overwhelming majority of cases involve small
block sizes, which we can easily handle explicitly, allowing the
compiler to turn the division into a significantly more efficient
division-by-constant.
While the increase in source code looks scary, this all optimises down
to very nice-looking assembler. At this point the only remaining
hotspots in nonmovingBlockCount are due to memory access.
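
The shape of the resulting code, sketched with illustrative constants (the +1 in the divisor reflects the one bitmap byte each block costs in this design):

```c
/* Handling the common small block sizes with explicit constant divisors
 * lets the compiler replace the hardware div instruction with cheap
 * multiply-and-shift sequences. */
static unsigned int blockCount(unsigned int data_size,
                               unsigned int log_block_size)
{
    switch (log_block_size) {
    case 3: return data_size / ((1u << 3) + 1);
    case 4: return data_size / ((1u << 4) + 1);
    case 5: return data_size / ((1u << 5) + 1);
    case 6: return data_size / ((1u << 6) + 1);
    case 7: return data_size / ((1u << 7) + 1);
    default:
        // Large blocks are rare; fall back to a true division.
        return data_size / ((1u << log_block_size) + 1);
    }
}
```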
This commit does two things:
* Allow aging of objects during the preparatory minor GC
* Refactor handling of static objects to avoid the use of a hashtable
Otherwise the census is unsafe when mutators are running due to
concurrent mutation.
This introduces a simple census of the non-moving heap (not to be
confused with the heap census used by the heap profiler). This
collects basic heap usage information (number of allocated and free
blocks) which is useful when characterising fragmentation of the
nonmoving heap.
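
A sketch of what such a census amounts to (types are illustrative): walk each segment's bitmap and tally live versus free blocks:

```c
/* Accumulate per-size-class usage counts by scanning mark bitmaps. */
struct CensusEntry { unsigned long n_live; unsigned long n_free; };

static void censusSegment(const unsigned char *bitmap,
                          unsigned int n_blocks,
                          struct CensusEntry *out)
{
    for (unsigned int i = 0; i < n_blocks; i++) {
        if (bitmap[i] != 0) out->n_live++;   // block is allocated
        else                out->n_free++;   // block is free
    }
}
```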
This introduces a few events to mark key points in the nonmoving
garbage collection cycle. These include:
* `EVENT_CONC_MARK_BEGIN`, denoting the beginning of a round of
marking. This may happen more than once in a single major collection
since the major collector iterates until it reaches a fixed point.
* `EVENT_CONC_MARK_END`, denoting the end of a round of marking.
* `EVENT_CONC_SYNC_BEGIN`, denoting the beginning of the post-mark
synchronization phase.
* `EVENT_CONC_UPD_REM_SET_FLUSH`, indicating that a capability has
flushed its update remembered set.
* `EVENT_CONC_SYNC_END`, denoting that all mutators have flushed their
update remembered sets.
* `EVENT_CONC_SWEEP_BEGIN`, denoting the beginning of the sweep portion
of the major collection.
* `EVENT_CONC_SWEEP_END`, denoting the end of the sweep portion of the
major collection.
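
For orientation, a sketch of where these events fire during a cycle; `postEvent` is a stand-in for the RTS tracing machinery, not its real API:

```c
/* Stand-in for the eventlog writer. */
static void postEvent(const char *name) { (void)name; }

static void concMarkCycle(int rounds_needed)
{
    for (int i = 0; i < rounds_needed; i++) {   // fixed-point iteration
        postEvent("CONC_MARK_BEGIN");
        /* ... drain the mark queue ... */
        postEvent("CONC_MARK_END");
    }
    postEvent("CONC_SYNC_BEGIN");
    /* each capability flushes here, posting CONC_UPD_REM_SET_FLUSH */
    postEvent("CONC_SYNC_END");
    postEvent("CONC_SWEEP_BEGIN");
    /* ... sweep ... */
    postEvent("CONC_SWEEP_END");
}
```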
This required some fiddling with the location of forward declarations,
since the C sources generated by GHC's C backend include only Stg.h.
This requires that we break nonmovingExit into two pieces: we must
first stop the collector so that it relinquishes any capabilities it
holds, then shut down the scheduler, and only then free the nonmoving
allocators.
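
The required ordering, sketched with names paraphrased from the description above (not exact GHC signatures):

```c
void nonmovingStop(void);   // piece 1: halt the concurrent collector,
                            //          releasing any held capabilities
void exitScheduler(void);
void nonmovingExit(void);   // piece 2: free the nonmoving allocators

void shutdownRts(void)
{
    nonmovingStop();        // 1. mark thread halts, capabilities freed
    exitScheduler();        // 2. scheduler can now shut down safely
    nonmovingExit();        // 3. finally, free the allocators
}
```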
This extends the non-moving collector to allow concurrent collection.

The full design of the collector implemented here is described in detail
in a technical note:

B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
Compiler" (2018)

This extension involves the introduction of a capability-local
remembered set, known as the /update remembered set/, which tracks
objects which may no longer be visible to the collector due to mutation.
To maintain this remembered set we introduce a write barrier on
mutations which is enabled while a concurrent mark is underway.

The update remembered set representation is similar to that of the
nonmoving mark queue, being a chunked array of `MarkEntry`s. Each
`Capability` maintains a single accumulator chunk, which it flushes
when (a) it is filled, or (b) the nonmoving collector enters its
post-mark synchronization phase.

While the write barrier touches a significant amount of code it is
conceptually straightforward: the mutator must ensure that the referent
of any pointer it overwrites is added to the update remembered set.
However, there are a few details:

* In the case of objects with a dirty flag (e.g. `MVar`s) we can
exploit the fact that only the *first* mutation requires a write
barrier.
* Weak references, as usual, complicate things. In particular, we must
ensure that the referent of a weak object is marked if dereferenced by
the mutator. For this we (unfortunately) must introduce a read
barrier, as described in Note [Concurrent read barrier on deRefWeak#]
(in `NonMovingMark.c`).
* Stable names are also a bit tricky, as described in Note [Sweeping
stable names in the concurrent collector] (`NonMovingSweep.c`).

We take quite some pains to ensure that the high thread count often seen
in parallel Haskell applications doesn't affect pause times. To this end
we allow thread stacks to be marked either by the thread itself (when it
runs or when its stack underflows) or by the concurrent mark thread (if
the thread owning the stack is never scheduled). There is a non-trivial
handshake to ensure that this happens without racing, which is described
in Note [StgStack dirtiness flags and concurrent marking].

Co-Authored-by: Ömer Sinan Ağacan <omer@well-typed.com>
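
The shape of the write barrier, sketched with illustrative names (the real barrier is spread across many RTS entry points):

```c
#include <stdbool.h>
#include <stddef.h>

/* Set while a concurrent mark is underway. */
extern bool nonmoving_write_barrier_enabled;

/* Stand-in: append to the capability-local chunk, flushing if full. */
static void updRemSetPush(void *cap, void *referent)
{
    (void)cap; (void)referent;
}

static void writeBarrier(void *cap, void **slot, void *new_value)
{
    if (nonmoving_write_barrier_enabled && *slot != NULL)
        updRemSetPush(cap, *slot);   // preserve the snapshot: the old
                                     // referent might otherwise become
                                     // invisible to the marker
    *slot = new_value;
}
```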
This simply runs the compile_and_run tests with `-xn`, enabling the
nonmoving oldest generation.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This implements the core heap structure and a serial mark/sweep
collector which can be used to manage the oldest-generation heap.
This is the first step towards a concurrent mark-and-sweep collector
aimed at low-latency applications.

The full design of the collector implemented here is described in detail
in a technical note:

B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
Compiler" (2018)

The basic heap structure used in this design is heavily inspired by:

K. Ueno & A. Ohori. "A fully concurrent garbage collector for
functional programs on multicore processors." /ACM SIGPLAN Notices/
Vol. 51. No. 9 (presented at ICFP 2016)

This design is intended to allow both marking and sweeping
concurrent with execution of a multi-core mutator. Unlike the Ueno
design, which requires no global synchronization pauses, the collector
introduced here requires a stop-the-world pause at the beginning and end
of the mark phase.

To avoid heap fragmentation, the allocator consists of a number of
fixed-size /sub-allocators/. Each of these sub-allocators allocates into
its own set of /segments/, themselves allocated from the block
allocator. Each segment is broken into a set of fixed-size allocation
blocks (which back allocations) in addition to a bitmap (used to track
the liveness of blocks) and some additional metadata (also used to
track liveness).

This heap structure enables collection via mark-and-sweep, which can be
performed concurrently via a snapshot-at-the-beginning scheme (although
concurrent collection is not implemented in this patch).

The mark queue is a fairly straightforward chunked-array structure.
The representation is a bit more verbose than a typical mark queue to
accommodate a combination of two features:

* a mark FIFO, which improves the locality of marking, reducing one of
the major overheads seen in mark/sweep collectors (see [1] for
details)
* the selector optimization and indirection shortcutting, which
require that we track where we found each reference to an object
in case we need to update the reference at a later point (e.g. when
we find that it is an indirection). See Note [Origin references in
the nonmoving collector] (in `NonMovingMark.h`) for details.

Beyond this the mark/sweep is fairly run-of-the-mill.

[1] R. Garner, S.M. Blackburn, D. Frampton. "Effective Prefetch for
Mark-Sweep Garbage Collection." ISMM 2007.

Co-Authored-By: Ben Gamari <ben@well-typed.com>
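
An illustrative sketch of the segment layout described above (field names and sizes are assumptions, not GHC's exact definitions):

```c
#include <stdint.h>

/* Per-segment metadata and the liveness bitmap sit at the start of the
 * segment; the fixed-size allocation blocks fill the remainder. */
struct Segment {
    struct Segment *link;      // segment lists kept by the sub-allocator
    uint16_t next_free;        // index of the next unallocated block
    uint8_t  log_block_size;   // all blocks in a segment share one size
    uint8_t  bitmap[];         // one liveness byte per block; the blocks
                               // themselves follow after the bitmap
};
```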
This flag will enable the use of a non-moving oldest generation.
To keep the non-moving collector nicely separated from the moving
collector, its scavenging phase will live in another file,
`NonMovingScav.c`. However, it will need to use these functions, so
let's expose them.
This warning is a bit of a relic; there is little reason to avoid
aggregate return values in 2019.
These will be needed when we implement sweeping in the nonmoving
collector.
Merge branch 'wip/gc/aligned-block-allocation' into wip/gc/preparation
This implements support for block group allocations which are aligned to
an integral number of blocks.
This will be used by the nonmoving garbage collector, which uses the
block allocator to allocate the segments which back its heap. These
segments are a fixed number of blocks in size, with each segment being
aligned to the segment size boundary. This allows us to easily find the
segment metadata stored at the beginning of the segment.
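
A sketch of the payoff (the segment size constant is illustrative): with each segment aligned to its power-of-two size, the metadata for any interior pointer falls out of a mask:

```c
#include <stdint.h>

#define SEGMENT_SIZE ((uintptr_t)(32 * 1024))

struct Segment;   // metadata lives at the segment's first bytes

/* Recover the segment holding p by masking off the low bits. */
static inline struct Segment *segmentOf(void *p)
{
    return (struct Segment *)((uintptr_t)p & ~(SEGMENT_SIZE - 1));
}
```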
The concurrent mark-and-sweep will be performed by a GHC task which will
not hold a capability. This is necessary to prevent the concurrent mark
from interfering with minor generation collections.

However, the major collector must synchronize with the mutators at the
end of marking to flush their update remembered sets. This patch extends
the `requestSync` mechanism used to synchronize garbage collectors to
allow synchronization without holding a capability.

This change is fairly straightforward, as the capability was previously
only required for two reasons:

1. to ensure that we don't try to re-acquire a capability that the
sync requestor already holds.
2. to provide a way to suspend and later resume the sync request if
there is already a sync pending.

When synchronizing without holding a capability we needn't worry about
consideration (1) at all.

(2) is slightly trickier and may happen, for instance, when a capability
requests a minor collection and shortly thereafter the non-moving mark
thread requests a post-mark synchronization. In this case we need to
ensure that the non-moving mark thread suspends its request until after
the minor GC has concluded to avoid deadlocking. For this we introduce
a condition variable, `sync_finished_cond`, which a
non-capability-bearing requestor will wait on and which is signalled
after a synchronization or GC has finished.
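
A sketch of the waiting side using POSIX primitives (the RTS has its own portability layer; `sync_pending` here stands in for the scheduler's pending-sync state):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t sync_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  sync_finished_cond = PTHREAD_COND_INITIALIZER;
static bool sync_pending = false;

/* Called by the non-capability-bearing requestor (the mark thread). */
static void requestSyncWithoutCap(void)
{
    pthread_mutex_lock(&sync_lock);
    while (sync_pending) {
        // Another sync (e.g. a minor GC) is in flight: sleep until it
        // signals completion, avoiding the deadlock described above.
        pthread_cond_wait(&sync_finished_cond, &sync_lock);
    }
    sync_pending = true;     // our request is now the pending one
    pthread_mutex_unlock(&sync_lock);
}
```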