| Commit message | Author | Age | Files | Lines |
with equality constraints
In #17304, Richard and Simon discovered that using `-XFlexibleInstances`
for `Outputable` instances of AST data types means users can provide orphan
`Outputable` instances for passes other than `GhcPass`.
Type inference doesn't currently suffer, and Richard gave an example
in #17304 showing how rare a case would be in which the slightly worse
type inference would matter.
So I went ahead with the refactoring, attempting to fix #17304.
We were using `pprIfaceAppArgs` instead of `pprParendIfaceAppArgs`
in `pprIfaceConDecl`. Oops.
Fixes #17384.
See #16873.
This introduces a concurrent mark & sweep garbage collector to manage the old
generation. The concurrent nature of this collector typically results in
significantly reduced maximum and mean pause times in applications with large
working sets.
Due to the large and intricate nature of the change I have opted to
preserve the fully-buildable history, including merge commits, which is
described in the "Branch overview" section below.
Collector design
================
The full design of the collector implemented here is described in detail
in a technical note
> B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
> Compiler" (2018)
This document can be requested from @bgamari.
The basic heap structure used in this design is heavily inspired by
> K. Ueno & A. Ohori. "A fully concurrent garbage collector for
> functional programs on multicore processors." /ACM SIGPLAN Notices/
> Vol. 51. No. 9 (presented at ICFP 2016)
This design is intended to allow both marking and sweeping to proceed
concurrently with the execution of a multi-core mutator. Unlike the Ueno design,
which requires no global synchronization pauses, the collector
introduced here requires a stop-the-world pause at the beginning and end
of the mark phase.
To avoid heap fragmentation, the allocator consists of a number of
fixed-size /sub-allocators/. Each of these sub-allocators allocates into
its own set of /segments/, themselves allocated from the block
allocator. Each segment is broken into a set of fixed-size allocation
blocks (which back allocations) in addition to a bitmap (used to track
the liveness of blocks) and some additional metadata (also used
to track liveness).
This heap structure enables collection via mark-and-sweep, which can be
performed concurrently via a snapshot-at-the-beginning scheme (although
concurrent collection is not implemented in this patch).
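To make the layout concrete, here is a minimal sketch of such a segment. All
names, types, and sizes below are invented for illustration; they are not the
actual RTS definitions.
```
/* Simplified sketch of the segment layout described above. */
#include <stddef.h>
#include <stdint.h>

#define SEGMENT_SIZE (32 * 1024)     /* hypothetical segment size */

struct segment {
    struct segment *link;            /* next segment of this sub-allocator    */
    uint16_t        block_size;      /* fixed allocation size in this segment */
    uint16_t        next_free;       /* index of the next block to hand out   */
    uint8_t         bitmap[];        /* one liveness/mark byte per block;     */
                                     /* the blocks themselves follow          */
};

/* How many fixed-size blocks fit once the header and the one-byte-per-block
 * bitmap are accounted for. */
static inline size_t segment_block_count(size_t block_size)
{
    return (SEGMENT_SIZE - sizeof(struct segment)) / (block_size + 1);
}
```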
Implementation structure
========================
The majority of the collector is implemented in a handful of files:
* `rts/Nonmoving.c` is the heart of the beast. It implements the entry-point
to the nonmoving collector (`nonmoving_collect`), as well as the allocator
(`nonmoving_allocate`) and a number of utilities for manipulating the heap.
* `rts/NonmovingMark.c` implements the mark queue functionality, update
remembered set, and mark loop.
* `rts/NonmovingSweep.c` implements the sweep loop.
* `rts/NonmovingScav.c` implements the logic necessary to scavenge the
nonmoving heap.
Branch overview
===============
```
* wip/gc/opt-pause:
| A variety of small optimisations to further reduce pause times.
|
* wip/gc/compact-nfdata:
| Introduce support for compact regions into the non-moving
|\ collector
| \
| \
| | * wip/gc/segment-header-to-bdescr:
| | | Another optimization that we are considering, pushing
| | | some segment metadata into the segment descriptor for
| | | the sake of locality during mark
| | |
| * | wip/gc/shortcutting:
| | | Support for indirection shortcutting and the selector optimization
| | | in the non-moving heap.
| | |
* | | wip/gc/docs:
| |/ Work on implementation documentation.
| /
|/
* wip/gc/everything:
| A roll-up of everything below.
|\
| \
| |\
| | \
| | * wip/gc/optimize:
| | | A variety of optimizations, primarily to the mark loop.
| | | Some of these are microoptimizations but a few are quite
| | | significant. In particular, the prefetch patches have
| | | produced a nontrivial improvement in mark performance.
| | |
| | * wip/gc/aging:
| | | Enable support for aging in major collections.
| | |
| * | wip/gc/test:
| | | Fix up the testsuite to more or less pass.
| | |
* | | wip/gc/instrumentation:
| | | A variety of runtime instrumentation including statistics
| | / support, the nonmoving census, and eventlog support.
| |/
| /
|/
* wip/gc/nonmoving-concurrent:
| The concurrent write barriers.
|
* wip/gc/nonmoving-nonconcurrent:
| The nonmoving collector without the write barriers necessary
| for concurrent collection.
|
* wip/gc/preparation:
| A merge of the various preparatory patches that aren't directly
| implementing the GC.
|
|
* GHC HEAD
.
.
.
```
Previously we would first move the new objects to their appropriate
non-moving GC list, then do another pass over that list to clear their
mark bits. This is needlessly expensive. Instead, first clear the mark
bits of the existing objects, then add the newly evacuated objects and
clear their mark bits at the same time.
This cuts the preparatory GC time in half for the Pusher benchmark with
a large queue size.
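As a rough illustration of the change (the node type and field names here are
hypothetical, not the RTS's own representation), the two passes collapse into
one traversal of each list:
```
/* Hypothetical object list node; not the RTS's real representation. */
struct obj { struct obj *link; unsigned mark; };

/* Clear the marks of the objects already on the list, then prepend each
 * newly evacuated object, clearing its mark as it is added. */
static void add_evacuated(struct obj **list, struct obj *new_objs)
{
    for (struct obj *o = *list; o != NULL; o = o->link)
        o->mark = 0;                      /* clear existing objects' marks */

    while (new_objs != NULL) {
        struct obj *next = new_objs->link;
        new_objs->mark = 0;               /* clear mark while adding */
        new_objs->link = *list;
        *list = new_objs;
        new_objs = next;
    }
}
```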
The expectation here is that the nonmoving GC is latency-centric,
whereas the moving GC emphasizes throughput. Therefore we give the
latter the benefit of better static branch prediction.
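For example, using GCC's `__builtin_expect` (the macro name, flag, and call
site below are a sketch, not the actual RTS code):
```
/* Give the moving-GC path the benefit of static branch prediction;
 * the nonmoving path is the "unlikely" branch. */
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

extern int use_nonmoving_gc;          /* hypothetical flag */

void scavenge_object(void *p)
{
    if (UNLIKELY(use_nonmoving_gc)) {
        /* latency-centric nonmoving path */
    } else {
        /* throughput-centric moving (copying) path */
    }
    (void)p;
}
```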
This largely follows the model used for large objects, with appropriate
adjustments made to account for references in the sharing deduplication
hashtable.
wip/gc/everything2
This will allow us to easily move the block size elsewhere.
This allows indirection chains residing in the non-moving heap to be
shorted out.
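Conceptually, shortcutting follows a chain of indirections down to its final
target so that references can be updated to point at the target directly. A
minimal sketch, with a made-up closure representation rather than the RTS's:
```
/* Hypothetical closure with a tag and an indirectee pointer. */
struct closure { int is_indirection; struct closure *indirectee; };

/* Follow a chain of indirections and return the ultimate target; the
 * referring field can then be overwritten with this pointer. */
static struct closure *shortcut(struct closure *c)
{
    while (c->is_indirection)
        c = c->indirectee;
    return c;
}
```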
This is consistent with the other unoptimized ways.
The nonmoving way finalizes things in a different order.
The nonmoving collector doesn't support -G1
The debug RTS initializes the heap with 0xaa, which breaks the
(admittedly rather fragile) assumption that uninitialized fields are set
to 0x00:
```
Wrong exit code for heap_all(nonmoving)(expected 0 , actual 1 )
Stderr ( heap_all ):
heap_all: user error (assertClosuresEq: Closures do not match
Expected: FunClosure {info = StgInfoTable {entry = Nothing, ptrs = 0, nptrs = 1, tipe = FUN_0_1, srtlen = 0, code = Nothing}, ptrArgs = [], dataArgs = [0]}
Actual: FunClosure {info = StgInfoTable {entry = Nothing, ptrs = 0, nptrs = 1, tipe = FUN_0_1, srtlen = 1032832, code = Nothing}, ptrArgs = [], dataArgs = [12297829382473034410]}
CallStack (from HasCallStack):
assertClosuresEq, called at heap_all.hs:230:9 in main:Main
)
```
The nonmoving GC doesn't support `+RTS -G1`, which this test insists on.
This uses the nonmoving collector when compiling the testcases.
Flush the update remembered set. The goal here is to flush periodically
to ensure that we don't end up with a thread that marks its stack onto
its local update remembered set and doesn't flush until the nonmoving
sync period, as this would result in a large fraction of the heap being
marked during the sync pause.
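A sketch of the kind of threshold-based flushing described above; the
threshold, structure, and function names are invented for illustration:
```
/* Flush a thread's local update remembered set once it grows past a
 * threshold, rather than waiting for the nonmoving sync pause. */
#define UPD_REM_SET_FLUSH_THRESHOLD 4096

struct upd_rem_set { size_t n_entries; /* ... queued references ... */ };

extern void flush_upd_rem_set(struct upd_rem_set *s);   /* hypothetical */

static void maybe_flush(struct upd_rem_set *s)
{
    if (s->n_entries >= UPD_REM_SET_FLUSH_THRESHOLD) {
        flush_upd_rem_set(s);    /* hand the entries to the mark queue */
        s->n_entries = 0;
    }
}
```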
Previously we would perform a preparatory moving collection, resulting
in many things being added to the mark queue. When we finished with this
we would realize in nonmovingCollect that there was already a collection
running, in which case we would simply not run the nonmoving collector.
However, it was very easy to end up in a "treadmilling" situation: all
subsequent GCs following the first failed major GC would be scheduled as
major GCs. Consequently we would continuously feed the concurrent
collector with more mark queue entries and it would never finish.
This patch aborts the major collection far earlier, meaning that we
avoid adding nonmoving objects to the mark queue, allowing the
concurrent collector to finish.
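Schematically, the early abort amounts to demoting the requested major GC
before the preparatory moving collection runs; the names and control flow
below are purely illustrative, not the actual RTS code:
```
extern int concurrent_mark_running;    /* hypothetical flag */

/* If a concurrent nonmoving collection is still in progress, collect only
 * the younger generations so no new entries reach the mark queue. */
static unsigned choose_collect_gen(unsigned requested_gen, unsigned oldest_gen)
{
    if (requested_gen == oldest_gen && oldest_gen > 0
        && concurrent_mark_running) {
        return oldest_gen - 1;   /* abort the major collection early */
    }
    return requested_gen;
}
```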
Previously we would look at the segment header to determine the block
size despite the fact that we already had the block size at hand.
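Illustratively (hypothetical signature), the block size is now threaded
through as a parameter instead of being re-read from the segment header:
```
#include <stddef.h>

/* Sketch only: the caller already knows the segment's block size, so it
 * is passed in rather than re-read from the (possibly cold) header. */
static void *block_at(char *segment_data, size_t block_size, size_t idx)
{
    return segment_data + idx * block_size;
}
```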
This improved overall runtime on nofib's constraints test by nearly 10%.
Ensure that the bitmap of the segment that we will clear next is in
cache by the time we reach it.
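A sketch using GCC's `__builtin_prefetch`; the segment structure and the
surrounding sweep loop are illustrative only:
```
/* While sweeping the current segment, prefetch the next segment's bitmap
 * so it is already in cache when we get to it. */
struct seg { struct seg *link; unsigned char bitmap[128]; /* ... */ };

static void sweep_all(struct seg *s)
{
    for (; s != NULL; s = s->link) {
        if (s->link != NULL)
            __builtin_prefetch(s->link->bitmap, /*write=*/1);
        /* ... sweep and clear s->bitmap ... */
    }
}
```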
Use memchr instead of an open-coded loop. This is nearly twice as fast in
a synthetic benchmark.
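For instance, scanning a one-byte-per-block mark bitmap for the next free
(zero) byte can be done with `memchr` rather than a hand-written loop. The
bitmap layout and use case below are illustrative, not the actual RTS code:
```
#include <stddef.h>
#include <string.h>   /* memchr */

/* Index of the first unmarked (zero) byte in the bitmap, or -1 if none. */
static ptrdiff_t first_free(const unsigned char *bitmap, size_t len)
{
    const unsigned char *p = memchr(bitmap, 0, len);
    return p ? (p - bitmap) : -1;
}
```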
This shortens MarkQueueEntry by 30% (one word)
Perf showed that this single div was capturing up to 10% of samples
in nonmovingMark. However, the overwhelming majority of cases involve
small block sizes. These cases we can easily handle explicitly,
allowing the compiler to turn the division into a significantly more
efficient division-by-constant.
While the increase in source code looks scary, this all optimises down
to very nice-looking assembly. At this point the only remaining
hotspots in nonmovingBlockCount are due to memory access.
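The idea, roughly; the constants, sizes, and names below are illustrative
rather than the actual RTS code:
```
#include <stddef.h>

/* bytes: usable bytes in the segment (a run-time value).
 * Handling the common small block sizes explicitly lets the compiler
 * replace the generic division with cheap division-by-constant code
 * (shifts for powers of two, multiply-by-reciprocal otherwise). */
static size_t seg_block_count(size_t bytes, size_t block_size)
{
    switch (block_size) {
        case 16:  return bytes / 16;
        case 32:  return bytes / 32;
        case 64:  return bytes / 64;
        case 120: return bytes / 120;
        default:  return bytes / block_size;   /* generic (slow) division */
    }
}
```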
This commit does two things:
* Allow aging of objects during the preparatory minor GC
* Refactor handling of static objects to avoid the use of a hashtable