Commit messages
We compare it to n_gc_idle_threads, which is unsigned as well.
So make both signed to avoid a warning.
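As a generic illustration of this class of warning (not the commit's actual code): when a comparison mixes signed and unsigned operands of the same rank, the signed side is silently converted to unsigned, which is what -Wsign-compare flags.
```c
#include <stdio.h>

int main(void)
{
    unsigned int n_threads = 4;
    int idle = -1;                 /* imagine a transiently negative count */

    /* -Wsign-compare: `idle` is converted to unsigned, so -1 becomes
     * UINT_MAX and the comparison is false, not true. */
    if (idle < n_threads)
        puts("taken");
    else
        puts("not taken: -1 compared as a huge unsigned value");

    /* With both counts signed, the comparison means what it says. */
    long n = 4, n_idle = -1;
    if (n_idle < n)
        puts("taken, as expected");
    return 0;
}
```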
Use a timed wait on a condition variable in waitForGcThreads, and fix a dodgy timespec calculation.
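A minimal sketch (not GHC's actual code) of a timed condition-variable wait with a correctly normalized timespec. The classic "dodgy" mistake is adding nanoseconds without carrying into tv_sec, which can leave tv_nsec >= 10^9 and make pthread_cond_timedwait fail with EINVAL.
```c
#include <pthread.h>
#include <time.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool gc_ready = false;

int wait_for_gc(long timeout_ns)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec  += timeout_ns / 1000000000L;
    deadline.tv_nsec += timeout_ns % 1000000000L;
    if (deadline.tv_nsec >= 1000000000L) {   /* carry the overflow */
        deadline.tv_nsec -= 1000000000L;
        deadline.tv_sec  += 1;
    }

    int ret = 0;
    pthread_mutex_lock(&lock);
    while (!gc_ready && ret == 0)
        ret = pthread_cond_timedwait(&cond, &lock, &deadline);
    pthread_mutex_unlock(&lock);
    return ret;                               /* 0 on wakeup, ETIMEDOUT on timeout */
}

void announce_gc(void)
{
    pthread_mutex_lock(&lock);
    gc_ready = true;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}
```
Note that pthread_cond_timedwait interprets the deadline against CLOCK_REALTIME unless the condition variable was created with a different clock attribute.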
I've never observed this counter taking a non-zero value, but I do
think its existence is justified by the comment in grab_local_todo_block.
I've not added it to RTSStats in GHC.Stats, as it doesn't seem worth the
API churn.
We are no longer busy-waiting, so this is no longer meaningful.
Here we remove the schedYield loop in scavenge_until_all_done and any_work,
replacing it with a single mutex plus condition variable.
Previously any_work would check todo_large_objects, todo_q, and
todo_overflow of each gen for work. Comments explained that this was
checking for global work in any gen. However, these must have been out of
date, because all of these locations are local to a GC thread.
We've eliminated any_work entirely, instead simply looping back into
scavenge_loop, which will return quickly if there is no work.
shutdown_gc_threads is called slightly earlier than before. This ensures
that n_gc_threads can never be observed to increase from 0 by a worker thread.
startup_gc_threads is removed. It consisted of a single variable
assignment, which is moved inline to its single call site.
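In outline, and with invented names, the shape of this change is the standard replacement of a yield loop by a condition variable: threads sleep instead of repeatedly polling and yielding.
```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

static pthread_mutex_t gc_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gc_has_work = PTHREAD_COND_INITIALIZER;
static int pending_work = 0;               /* protected by gc_lock */
static atomic_int spin_flag = 0;           /* used only by the old pattern */

void old_wait_for_work(void)               /* the pattern being removed */
{
    while (atomic_load(&spin_flag) == 0)
        sched_yield();                     /* burns CPU and trashes caches */
}

void push_work(void)                       /* producer side */
{
    pthread_mutex_lock(&gc_lock);
    pending_work++;
    pthread_cond_signal(&gc_has_work);     /* wake one sleeping thread */
    pthread_mutex_unlock(&gc_lock);
}

void wait_for_work(void)                   /* the replacement */
{
    pthread_mutex_lock(&gc_lock);
    while (pending_work == 0)
        pthread_cond_wait(&gc_has_work, &gc_lock);   /* sleeps, no spinning */
    pending_work--;
    pthread_mutex_unlock(&gc_lock);
}
```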
These are the two remaining non-atomic accesses to `wakeup` which were
missed by the original TSAN patch.
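For illustration, in plain C11 atomics the fix for such a racy flag looks roughly like the sketch below; the RTS wraps these operations in its own load/store macros rather than using <stdatomic.h> directly, so this is the idea, not the actual patch.
```c
#include <stdatomic.h>

static atomic_int wakeup = 0;

void signal_wakeup(void)
{
    /* store visible to (and race-free with) concurrent readers */
    atomic_store_explicit(&wakeup, 1, memory_order_seq_cst);
}

int check_wakeup(void)
{
    /* a non-atomic read here is a data race that TSAN would report */
    return atomic_load_explicit(&wakeup, memory_order_seq_cst);
}
```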
Solves #19147. When n_capabilities > 1 we were not correctly accounting
for GC time for sequential collections. In this case par_n_gcthreads == 1;
however, it is not guaranteed that the single GC thread is capability 0.
A similar issue for copied is addressed as well.
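A sketch of the robust accounting, with invented field names: aggregate over all GC threads rather than reading only the stats slot of capability 0, which is only correct when capability 0 happens to be the one that ran the collection.
```c
/* Hypothetical stats record; the real gc_thread struct differs. */
typedef struct {
    long gc_elapsed_ns;
    long copied;
} gc_thread_stats;

long total_gc_elapsed(const gc_thread_stats *threads, int n_threads)
{
    long total = 0;
    for (int i = 0; i < n_threads; i++)
        total += threads[i].gc_elapsed_ns;   /* not just threads[0] */
    return total;
}
```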
The TSAN rework (specifically aad1f803) introduced a subtle regression
in GC.c, swapping `g0` in place of `gen`. Whoops!
Fixes #18997.
We currently only post the entry counters, not the other global
counters, as in my experience the former are more useful. We use the heap
profiler's census period to decide when to dump.
Also spruces up the documentation surrounding ticky-ticky a bit.
Fixes #16525 by tracking dependencies between object file symbols and
marking symbol liveness during garbage collection.
See Note [Object unloading] in CheckUnload.c for details.
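A hypothetical sketch of the idea (none of these names are GHC's): each symbol records which object code defines it; during GC, every live symbol marks its owning object, and only objects with no live symbols may be unlinked.
```c
#include <stdbool.h>
#include <stddef.h>

typedef struct ObjectCode_ {
    bool referenced;                  /* set during GC marking        */
    struct ObjectCode_ *next;
} ObjectCode;

typedef struct {
    ObjectCode *owner;                /* object defining this symbol  */
} Symbol;

void markSymbolLive(Symbol *sym)      /* called for each live symbol  */
{
    if (sym->owner)
        sym->owner->referenced = true;
}

ObjectCode *unloadDeadObjects(ObjectCode *objects)
{
    ObjectCode **prev = &objects, *oc;
    while ((oc = *prev) != NULL) {
        if (!oc->referenced) {
            *prev = oc->next;         /* unlink; real code would free it */
        } else {
            oc->referenced = false;   /* reset for the next GC        */
            prev = &oc->next;
        }
    }
    return objects;
}
```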
Ensure that the GC leader synchronizes with workers before calling
stat_endGC.
(cherry picked from commit 2fa79119570b358a4db61446396889b8260d7957)
Previously sparks living in the non-moving heap would be promptly GC'd
by the minor collector since pruneSparkQueue uses the BF_EVACUATED flag,
which non-moving heap blocks do not have set.
Fix this by implementing proper support in pruneSparkQueue for
determining reachability in the non-moving heap. The story is told in
Note [Spark management in the nonmoving heap].
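A hypothetical sketch of the reachability test the pruner needs. BF_EVACUATED, BF_NONMOVING, and Bdescr exist in the RTS, but the stubs and the exact logic here are illustrative, not GHC's code.
```c
#include <stdbool.h>

#define BF_EVACUATED 1u
#define BF_NONMOVING 2u

typedef struct { unsigned flags; } bdescr;

/* Stand-ins for the RTS's block-descriptor lookup and the nonmoving
 * collector's mark query. */
static bdescr *Bdescr(void *p)        { (void)p; static bdescr b; return &b; }
static bool nonmovingIsAlive(void *p) { (void)p; return true; }

bool sparkIsReachable(void *spark)
{
    bdescr *bd = Bdescr(spark);
    if (bd->flags & BF_NONMOVING)
        return nonmovingIsAlive(spark);       /* nonmoving: use mark state */
    return (bd->flags & BF_EVACUATED) != 0;   /* moving heap: evacuated?   */
}
```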
Flush the update remembered set. The goal here is to flush periodically, to
ensure that we don't end up with a thread that marks its stack on its
local update remembered set and doesn't flush until the nonmoving sync
period, as this would result in a large fraction of the heap being marked
during the sync pause.
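A sketch, with invented names and a made-up bound, of this flushing policy: each capability accumulates entries locally and hands them to the global set once a threshold is crossed, bounding the work left for the sync pause.
```c
#include <stddef.h>

#define FLUSH_THRESHOLD 1024          /* illustrative bound */

typedef struct {
    void  *entries[FLUSH_THRESHOLD];
    size_t n;
} UpdRemSet;

/* Stand-in for handing entries to the concurrent mark queue. */
static void flushToGlobalRemSet(UpdRemSet *rs) { rs->n = 0; }

void updRemSetPush(UpdRemSet *rs, void *closure)
{
    rs->entries[rs->n++] = closure;
    if (rs->n == FLUSH_THRESHOLD)
        flushToGlobalRemSet(rs);      /* flush eagerly, not at sync time */
}
```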
Previously we would perform a preparatory moving collection, resulting
in many things being added to the mark queue. When we finished with this
we would realize in nonmovingCollect that there was already a collection
running, in which case we would simply not run the nonmoving collector.
However, it was very easy to end up in a "treadmilling" situation: all
GCs following the first failed major GC would be scheduled as major GCs.
Consequently we would continuously feed the concurrent collector with
more mark queue entries and it would never finish.
This patch aborts the major collection far earlier, meaning that we
avoid adding nonmoving objects to the mark queue, allowing the
concurrent collector to finish.
This commit does two things:
* Allow aging of objects during the preparatory minor GC
* Refactor handling of static objects to avoid the use of a hashtable
This implements the core heap structure and a serial mark/sweep
collector which can be used to manage the oldest-generation heap.
This is the first step towards a concurrent mark-and-sweep collector
aimed at low-latency applications.
The full design of the collector implemented here is described in detail
in a technical note:
B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
Compiler" (2018)
The basic heap structure used in this design is heavily inspired by
K. Ueno & A. Ohori. "A fully concurrent garbage collector for
functional programs on multicore processors." /ACM SIGPLAN Notices/
Vol. 51. No. 9 (presented at ICFP 2016)
This design is intended to allow both marking and sweeping to proceed
concurrently with the execution of a multi-core mutator. Unlike the Ueno
design, which requires no global synchronization pauses, the collector
introduced here requires a stop-the-world pause at the beginning and end
of the mark phase.
To avoid heap fragmentation, the allocator consists of a number of
fixed-size /sub-allocators/. Each of these sub-allocators allocates into
its own set of /segments/, themselves allocated from the block
allocator. Each segment is broken into a set of fixed-size allocation
blocks (which back allocations) in addition to a bitmap (used to track
the liveness of blocks) and some additional metadata (also used to
track liveness).
This heap structure enables collection via mark-and-sweep, which can be
performed concurrently via a snapshot-at-the-beginning scheme (although
concurrent collection is not implemented in this patch).
The mark queue is a fairly straightforward chunked-array structure.
The representation is a bit more verbose than a typical mark queue to
accommodate a combination of two features:
* a mark FIFO, which improves the locality of marking, reducing one of
  the major overheads seen in mark/sweep allocators (see [1] for
  details)
* the selector optimization and indirection shortcutting, which
  requires that we track where we found each reference to an object
  in case we need to update the reference at a later point (e.g. when
  we find that it is an indirection). See Note [Origin references in
  the nonmoving collector] (in `NonMovingMark.h`) for details.
Beyond this the mark/sweep is fairly run-of-the-mill.
[1] R. Garner, S.M. Blackburn, D. Frampton. "Effective Prefetch for
Mark-Sweep Garbage Collection." ISMM 2007.
Co-Authored-By: Ben Gamari <ben@well-typed.com>
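As a rough model of the segment-and-bitmap structure described above (field names and sizes are illustrative, not GHC's), each segment serves one size class, carries one liveness byte per block, and allocates by scanning the bitmap from a cursor:
```c
#include <stdint.h>

#define SEGMENT_BLOCKS 512            /* blocks per segment (illustrative) */

struct NonmovingSegment {
    struct NonmovingSegment *link;    /* segment lists (active, filled)   */
    uint8_t  block_size_log2;         /* this segment's size class        */
    uint16_t next_free;               /* allocation cursor                */
    uint8_t  bitmap[SEGMENT_BLOCKS];  /* liveness/mark state, 1 per block */
    /* block payload follows the header in a real segment */
};

/* Allocation scans the bitmap from the cursor for the next free block;
 * sweeping is just clearing unmarked bitmap entries, with no copying. */
int nonmoving_alloc_block(struct NonmovingSegment *seg)
{
    for (int i = seg->next_free; i < SEGMENT_BLOCKS; i++) {
        if (seg->bitmap[i] == 0) {
            seg->bitmap[i] = 1;               /* mark allocated/live */
            seg->next_free = (uint16_t)(i + 1);
            return i;                         /* block index         */
        }
    }
    return -1;                                /* segment full        */
}
```
Because objects never move, liveness lives entirely in the bitmap, which is what makes a snapshot-at-the-beginning concurrent mark feasible later.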
This gets all remaining functions in line with the new 'traverse' prefix
and module name.
Here the following changes are introduced:
- A read barrier machine op is added to Cmm.
- The order in which a closure's fields are read and written is changed.
- Memory barriers are added to RTS code to ensure correctness on
  out-of-order machines with weak memory ordering.
Cmm has a new CallishMachOp called MO_ReadBarrier. On weak memory machines, this
is lowered to an instruction that ensures memory reads that occur after said
instruction in program order are not performed before reads coming before said
instruction in program order. On machines with strong memory ordering properties
(e.g. X86, SPARC in TSO mode) no such instruction is necessary, so
MO_ReadBarrier is simply erased. However, such an instruction is necessary on
weakly ordered machines, e.g. ARM and PowerPC.
Weak memory ordering has consequences for how closures are observed and mutated.
For example, consider a closure that needs to be updated to an indirection. In
order for the indirection to be safe for concurrent observers to enter, said
observers must read the indirection's info table before they read the
indirectee. Furthermore, the entering observer makes assumptions about the
closure based on its info table contents, e.g. an INFO_TYPE of IND implies the
closure has an indirectee pointer that is safe to follow.
When a closure is updated with an indirection, both its info table and its
indirectee must be written. With weak memory ordering, these two writes can be
arbitrarily reordered, and perhaps even interleaved with other threads' reads
and writes (in the absence of memory barrier instructions). Consider this
example of a bad reordering:
- An updater writes to a closure's info table (INFO_TYPE is now IND).
- A concurrent observer branches upon reading the closure's INFO_TYPE as IND.
- A concurrent observer reads the closure's indirectee and enters it. (!!!)
- An updater writes the closure's indirectee.
Here the update to the indirectee comes too late and the concurrent observer has
jumped off into the abyss. Speculative execution can also cause us issues,
consider:
- An observer is about to case on a value in closure's info table.
- The observer speculatively reads one or more of closure's fields.
- An updater writes to closure's info table.
- The observer takes a branch based on the new info table value, but with the
  old closure fields!
- The updater writes to the closure's other fields, but it's too late.
Because of these effects, reads and writes to a closure's info table must be
ordered carefully with respect to reads and writes to the closure's other
fields, and memory barriers must be placed to ensure that reads and writes occur
in program order. Specifically, updates to a closure must follow the following
pattern:
- Update the closure's (non-info table) fields.
- Write barrier.
- Update the closure's info table.
Observing a closure's fields must follow the following pattern:
- Read the closure's info pointer.
- Read barrier.
- Read the closure's (non-info table) fields.
This patch updates RTS code to obey this pattern. This should fix long-standing
SMP bugs on ARM (specifically newer aarch64 microarchitectures supporting
out-of-order execution) and PowerPC. This fixes issue #15449.
Co-Authored-By: Ben Gamari <ben@well-typed.com>
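The two patterns above can be rendered schematically in C11 atomics rather than Cmm (names invented; the real closures and info tables are of course richer). The release store orders the indirectee write before the info-pointer write, and the acquire load orders the info-pointer read before the field reads:
```c
#include <stdatomic.h>

typedef struct {
    _Atomic(const void *) info;    /* info table pointer */
    void *indirectee;              /* payload field      */
} Closure;

static const char IND_info;        /* stand-in info table */

void update_with_indirection(Closure *c, void *target)
{
    c->indirectee = target;                          /* 1. fields first   */
    atomic_store_explicit(&c->info, &IND_info,
                          memory_order_release);     /* 2. then info ptr  */
}

void *observe(Closure *c)
{
    const void *info = atomic_load_explicit(&c->info,
                          memory_order_acquire);     /* 1. info ptr first */
    if (info == &IND_info)
        return c->indirectee;                        /* 2. safe to follow */
    return NULL;                                     /* not an indirection */
}
```
On x86 these release/acquire operations compile to plain loads and stores, matching the commit's point that MO_ReadBarrier erases to nothing under TSO.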
- Remove redundant casting in evacuate_static_object
- Remove redundant parens in STATIC_LINK
- Fix a typo in GC.c
This moves all URL references to Trac Wiki to their corresponding
GitLab counterparts.
This substitution is classified as follows:
1. Automated substitution using sed with Ben's mapping rule [1]
Old: ghc.haskell.org/trac/ghc/wiki/XxxYyy...
New: gitlab.haskell.org/ghc/ghc/wikis/xxx-yyy...
2. Manual substitution for URLs containing `#` index
Old: ghc.haskell.org/trac/ghc/wiki/XxxYyy...#Zzz
New: gitlab.haskell.org/ghc/ghc/wikis/xxx-yyy...#zzz
3. Manual substitution for strings starting with `Commentary`
Old: Commentary/XxxYyy...
New: commentary/xxx-yyy...
See also !539
[1]: https://gitlab.haskell.org/bgamari/gitlab-migration/blob/master/wiki-mapping.json
In the concurrent nonmoving collector we will need the ability to call
`traverseWeakPtrList` concurrently with minor generation collections.
This global state stands in the way of that. However, refactoring it
away is straightforward, since this list only persists for the length of
a single GC.
Long ago, the stable name table and stable pointer tables were one.
Now, they are separate, and have significantly different
implementations. I believe the time has come to finish the split
that began in #7674.
* Divide `rts/Stable` into `rts/StableName` and `rts/StablePtr`.
* Give each table its own mutex.
* Add FFI functions `hs_lock_stable_ptr_table` and
`hs_unlock_stable_ptr_table` and document them.
These are intended to replace the previously undocumented
`hs_lock_stable_tables` and `hs_unlock_stable_tables`,
which are now documented as deprecated synonyms.
* Make `eqStableName#` use pointer equality instead of unnecessarily
comparing stable name table indices.
Reviewers: simonmar, bgamari, erikd
Reviewed By: bgamari
Subscribers: rwbarton, carter
GHC Trac Issues: #15555
Differential Revision: https://phabricator.haskell.org/D5084
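The FFI entry points named above might look roughly like this sketch, using plain pthread mutexes for illustration (the RTS has its own lock type); only the `hs_lock_stable_ptr_table`/`hs_unlock_stable_ptr_table` names come from the commit, and the separate name-table lock is shown as internal state:
```c
#include <pthread.h>

/* One mutex per table, so stable-pointer clients no longer contend
 * with stable-name traffic. */
static pthread_mutex_t stable_ptr_mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t stable_name_mutex = PTHREAD_MUTEX_INITIALIZER;

void hs_lock_stable_ptr_table(void)   { pthread_mutex_lock(&stable_ptr_mutex); }
void hs_unlock_stable_ptr_table(void) { pthread_mutex_unlock(&stable_ptr_mutex); }

/* Internal helpers for the stable name table (illustrative only). */
static void lockStableNameTable(void)   { pthread_mutex_lock(&stable_name_mutex); }
static void unlockStableNameTable(void) { pthread_mutex_unlock(&stable_name_mutex); }
```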
In a previous attempt (c6cc93bca69abc258513af8cf2370b14e70fd8fb) I had
tried aligning to 8 bytes under the assumption that the problem was that
the_gc_thread, a StgWord8[], wasn't being aligned to 8 bytes as the
gc_thread struct would expect. However, we actually need even stronger
alignment due to the alignment attribute attached to gen_workspace,
which claims it should be aligned to a 64-byte boundary.
This fixes #15482.
Reviewers: erikd, simonmar
Reviewed By: simonmar
Subscribers: rwbarton, carter
GHC Trac Issues: #15482
Differential Revision: https://phabricator.haskell.org/D5052
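A reduced, standalone illustration of the fix, using C11 alignas (GHC's actual code uses its own alignment attribute macro, and the real gen_workspace is much larger): a raw byte buffer cast to a struct type must carry the strictest alignment any embedded member demands, here a 64-byte (cache line) boundary.
```c
#include <stdalign.h>
#include <stdint.h>

typedef struct {
    alignas(64) uint64_t scan_bd;   /* member demanding 64-byte alignment */
    uint64_t todo_free;
} gen_workspace_like;

/* Without the alignas on the buffer, casting it to gen_workspace_like*
 * would violate the member's alignment demand on platforms that honour it. */
static alignas(64) uint8_t the_gc_thread[sizeof(gen_workspace_like) * 8];

gen_workspace_like *workspace(int i)
{
    return &((gen_workspace_like *)the_gc_thread)[i];   /* stays aligned */
}
```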
This caused segmentation faults on Darwin.
This reverts commit c6cc93bca69abc258513af8cf2370b14e70fd8fb.
Since we cast this to a gc_thread the compiler may assume that it's aligned.
Make sure that this is so. Fixes #15482.
The test here should have been changed after D1106. It was harmless
but we caught fewer GC'd CAFs than we should have.
Test Plan:
Using `nofib/imaginary/primes` compiled with `-debug`.
Before:
```
> ./primes 100 +RTS -G1 -A32k -DG
CAF gc'd at 0x0x7b0960
CAF gc'd at 0x0x788728
CAF gc'd at 0x0x790db0
CAF gc'd at 0x0x790de0
12 CAFs live
CAF gc'd at 0x0x788880
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
547
CAF gc'd at 0x0x7995c8
13 CAFs live
```
After:
```
> ./primes 100 +RTS -G1 -A32k -DG
CAF gc'd at 0x0x7b0960
CAF gc'd at 0x0x788728
CAF gc'd at 0x0x790db0
CAF gc'd at 0x0x790de0
12 CAFs live
CAF gc'd at 0x0x788880
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
12 CAFs live
547
CAF gc'd at 0x0x7995c8
CAF gc'd at 0x0x790ea0
12 CAFs live
```
Reviewers: bgamari, osa1, erikd, noamz
Reviewed By: bgamari
Subscribers: rwbarton, thomie, carter
Differential Revision: https://phabricator.haskell.org/D4963
This feature has some very serious correctness issues (#14310),
introduces a great deal of complexity, and hasn't seen wide usage.
Consequently we are removing it, as proposed in Proposal #77 [1]. This
is heavily based on a patch from fryguybob.
Updates stm submodule.
[1] https://github.com/ghc-proposals/ghc-proposals/pull/77
Test Plan: Validate
Reviewers: erikd, simonmar, hvr
Reviewed By: simonmar
Subscribers: rwbarton, thomie, carter
GHC Trac Issues: #14310
Differential Revision: https://phabricator.haskell.org/D4760
It's a no-op on all platforms.
Reviewers: bgamari, simonmar, erikd, dfeuer
Reviewed By: dfeuer
Subscribers: dfeuer, thomie, carter
Differential Revision: https://phabricator.haskell.org/D4588
[skip ci]
With a large heap it's possible to build up a lot of finalizers
between GCs. We've observed GC spending up to 50% of its time running
finalizers. But there's no reason we have to run finalizers during
GC, and especially no reason we have to block *all* the mutator
threads while *one* GC thread runs finalizers one by one.
I thought about a bunch of alternative ways to handle this, which are
documented along with runSomeFinalizers() in Weak.c. The approach I
settled on is to have a capability run finalizers if it is idle. So
running finalizers is like a low-priority background thread. This
requires some minor scheduler changes, but not much. In the future we
might be able to move more GC work into here (I have my eye on freeing
large blocks, for example).
Test Plan:
* validate
* tested on our system and saw reductions in GC pauses of 40-50%.
Reviewers: bgamari, niteria, osa1, erikd
Reviewed By: bgamari, osa1
Subscribers: rwbarton, thomie, carter
Differential Revision: https://phabricator.haskell.org/D4521
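A schematic of the scheduling idea with invented types; only the name runSomeFinalizers comes from the commit text, and its real signature differs. An idle capability runs a bounded batch of finalizers before sleeping, so finalization proceeds as low-priority background work instead of stalling the GC:
```c
#include <stdbool.h>

typedef struct Finalizer_ {
    void (*run)(void);
    struct Finalizer_ *next;
} Finalizer;

static Finalizer *pending_finalizers;   /* filled by the GC */

/* Run at most `budget` finalizers; returns true if any work was done,
 * so the idle loop can come back for more before blocking. */
bool runSomeFinalizers(int budget)
{
    bool did_work = false;
    while (budget-- > 0 && pending_finalizers) {
        Finalizer *f = pending_finalizers;
        pending_finalizers = f->next;
        f->run();                       /* background, not inside a pause */
        did_work = true;
    }
    return did_work;
}
```
Bounding the batch is the key design point: it keeps the capability responsive if new mutator work arrives while still draining the queue over successive idle periods.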
The existing internal counters:
* gc_alloc_block_sync
* whitehole_spin
* gen[g].sync
* gen[1].sync
are now not shown in the -s report unless --internal-counters is also passed.
If --internal-counters is passed we now show the counters above, reformatted, as
well as several other counters. In particular, we now count the yieldThread()
calls that SpinLocks do as well as their spins.
The added counters are:
* gc_spin (spin and yield)
* mut_spin (spin and yield)
* whitehole_threadPaused (spin only)
* whitehole_executeMessage (spin only)
* whitehole_lockClosure (spin only)
* waitForGcThreads (spin and yield)
As well as the following, which are not SpinLock-like things:
* any_work
* no_work
* scav_find_work
See the Note for descriptions of what these counters are.
We add busy_wait_nops in these loops along with the counter increment where it
was absent.
Old internal counters output:
```
gc_alloc_block_sync: 0
whitehole_gc_spin: 0
gen[0].sync: 0
gen[1].sync: 0
```
New internal counters output:
```
Internal Counters:
Spins Yields
gc_alloc_block_sync 323 0
gc_spin 9016713 752
mut_spin 57360944 47716
whitehole_gc 0 n/a
whitehole_threadPaused 0 n/a
whitehole_executeMessage 0 n/a
whitehole_lockClosure 0 0
waitForGcThreads 2 415
gen[0].sync 6 0
gen[1].sync 1 0
any_work 2017
no_work 2014
scav_find_work 1004
```
Test Plan:
./validate
Check it builds with #define PROF_SPIN removed from includes/rts/Config.h
Reviewers: bgamari, erikd, simonmar, hvr
Reviewed By: simonmar
Subscribers: rwbarton, thomie, carter
GHC Trac Issues: #3553, #9221
Differential Revision: https://phabricator.haskell.org/D4302
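A sketch of what such an instrumented spin-lock acquire loop can look like (simplified; field names are illustrative and the real PROF_SPIN code differs): every failed test-and-set counts as a spin, and after SPIN_COUNT failures the thread yields the CPU, which is counted separately, matching the Spins/Yields columns above.
```c
#include <stdatomic.h>
#include <sched.h>
#include <stdint.h>

#define SPIN_COUNT 1000

typedef struct {
    atomic_flag lock;   /* initialize with ATOMIC_FLAG_INIT        */
    uint64_t spin;      /* failed acquisition attempts             */
    uint64_t yield;     /* times we gave up the CPU (sched_yield)  */
} SpinLock;

void spin_lock_acquire(SpinLock *sl)
{
    for (;;) {
        for (int i = 0; i < SPIN_COUNT; i++) {
            if (!atomic_flag_test_and_set_explicit(&sl->lock,
                                                   memory_order_acquire))
                return;                 /* acquired                  */
            sl->spin++;                 /* count the failed attempt  */
        }
        sl->yield++;                    /* reported separately in -s */
        sched_yield();                  /* stop burning the core     */
    }
}

void spin_lock_release(SpinLock *sl)
{
    atomic_flag_clear_explicit(&sl->lock, memory_order_release);
}
```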