path: root/rts/Schedule.c
Commit message (Author, Date, Files, Lines)
* rts: trigger a major collection if megablock usage exceeds maxHeapSize (Teo Camarasu, 2022-10-15, 1 file, -1/+5)
    When the heap is suffering from block fragmentation, live bytes might
    be low while megablock usage is high. If megablock usage exceeds
    maxHeapSize, we want to trigger a major GC to try to recover some
    memory; otherwise we will die from a heapOverflow at the end of the GC.

    Fixes #21927
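    A minimal sketch of the check this describes, under assumed names
    (the real code lives in the RTS GC entry path; MBLOCK_SIZE and
    mblocks_allocated stand in for their RTS counterparts):

    ```
    #include <stdbool.h>
    #include <stdint.h>

    #define MBLOCK_SIZE (1UL << 20)   /* 1 MiB megablocks */

    /* Live bytes may be low while fragmentation keeps whole megablocks
     * pinned, so compare megablock usage, not live data, to the limit. */
    static bool megablock_usage_exceeds_limit(uint64_t mblocks_allocated,
                                              uint64_t max_heap_bytes)
    {
        return max_heap_bytes != 0   /* 0 means "no limit" */
            && mblocks_allocated * MBLOCK_SIZE > max_heap_bytes;
    }
    ```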
* rts: Don't hint inlining of appendToRunQueue (Ben Gamari, 2022-10-12, 1 file, -0/+55)
    These hints have resulted in compile-time warnings due to failed
    inlinings for quite some time. Moreover, it's quite unlikely that
    inlining them is all that beneficial given that they are rather
    sizeable functions.

    Resolves #22280.
* rts: Move thread labels into TSO (Ben Gamari, 2022-08-06, 1 file, -3/+0)
    This eliminates the thread label HashTable and instead tracks this
    information in the TSO, allowing us to use proper StgArrBytes arrays
    for backing the label and greatly simplifying management of object
    lifetimes when we expose them to the user with the coming
    `threadLabel#` primop.
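    A conceptual sketch of the layout change, with an assumed field name
    (see rts/include/rts/storage/TSO.h for the real definition):

    ```
    typedef struct StgArrBytes_ StgArrBytes;

    typedef struct StgTSO_ {
        /* ... existing TSO fields ... */
        StgArrBytes *label;   /* NULL while the thread is unlabelled;   */
                              /* GC-managed, so it lives and dies with  */
                              /* the TSO instead of a side hash table   */
    } StgTSO;
    ```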
* rts: forkOn context switches the target capability (Douglas Wilson, 2022-07-16, 1 file, -0/+5)
    Fixes #21824
* typos (Eric Lindblad, 2022-06-01, 1 file, -1/+1)
* rts: Refactor handling of dead threads' stacks (Ben Gamari, 2022-04-29, 1 file, -1/+3)
    This fixes a bug that @JunmingZhao42 and I noticed while working on
    her MMTK port. Specifically, in stg_stop_thread we used stg_enter_info
    as a sentinel at the tail of a stack after a thread has completed.
    However, stg_enter_info expects to have a two-field payload, which we
    do not push. Consequently, if the GC somehow ends up walking the
    stack, it will attempt to interpret data past the end of the stack as
    the frame's fields, resulting in unsound behavior.

    To fix this I eliminate this hacky use of `stg_stop_thread` and
    instead introduce a new stack frame type, `stg_dead_thread_info`. Not
    only does this eliminate the potential for the previously mentioned
    memory unsoundness, but it also more clearly captures the intended
    structure of the dead threads' stacks.
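    A comment-only sketch of the two stack layouts being contrasted (not
    the actual Cmm):

    ```
    /* Before: the sentinel frame lied about its payload.
     *
     *   ...earlier frames... | stg_enter_info   <- claims payload fields
     *                                              that were never pushed
     *
     * After: a dedicated frame type marks a finished thread.
     *
     *   ...earlier frames... | stg_dead_thread_info   <- nothing for the
     *                                                    GC to misread
     */
    ```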
* Revert "rts: Refactor handling of dead threads' stacks" (Matthew Pickering, 2022-04-28, 1 file, -3/+1)
    This reverts commit e09afbf2a998beea7783e3de5dce5dd3c6ff23db.
* rts: Refactor handling of dead threads' stacks (Ben Gamari, 2022-04-25, 1 file, -1/+3)
    This fixes a bug that @JunmingZhao42 and I noticed while working on
    her MMTK port. Specifically, in stg_stop_thread we used stg_enter_info
    as a sentinel at the tail of a stack after a thread has completed.
    However, stg_enter_info expects to have a two-field payload, which we
    do not push. Consequently, if the GC somehow ends up walking the
    stack, it will attempt to interpret data past the end of the stack as
    the frame's fields, resulting in unsound behavior.

    To fix this I eliminate this hacky use of `stg_stop_thread` and
    instead introduce a new stack frame type, `stg_dead_thread_info`. Not
    only does this eliminate the potential for the previously mentioned
    memory unsoundness, but it also more clearly captures the intended
    structure of the dead threads' stacks.
* Fix a few Note inconsistencies (Ben Gamari, 2022-02-01, 1 file, -1/+2)
* rts/winio: Fix #18382 (Ben Gamari, 2022-01-18, 1 file, -31/+0)
    Here we refactor WinIO's IO completion scheme, squashing a memory
    leak and fixing #18382.

    To fix #18382 we drop the special thread status introduced for IoPort
    blocking, BlockedOnIoCompletion, as well as the non-threaded RTS's
    special deadlock detection logic (which is redundant to the GC's
    deadlock detection logic), as proposed in #20947.

    Previously WinIO relied on foreign import ccall "wrapper" to create
    an adjustor thunk which can be attached to the OVERLAPPED structure
    passed to the operating system. It would then use foreign import
    ccall "dynamic" to back out the original continuation from the
    adjustor. This roundtrip is significantly more expensive than the
    alternative, using a StablePtr. Furthermore, the implementation let
    the adjustor leak, meaning that every IO request would leak a page
    of memory.

    Fixes T18382.
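    A minimal sketch of the StablePtr scheme (structure and helper names
    are assumptions; the real code lives in the WinIO event loop and
    needs the RTS build environment for Rts.h):

    ```
    #include "Rts.h"   /* getStablePtr, deRefStablePtr, freeStablePtr */

    typedef struct {
        /* OVERLAPPED overlapped;  -- first field in the real structure,
         * so the OS-visible pointer can be cast back to this wrapper */
        StgStablePtr completion;   /* stable reference to the Haskell
                                    * continuation; no adjustor page */
    } CompletionData;

    static void completeRequest(CompletionData *cd)
    {
        StgPtr cont = deRefStablePtr(cd->completion); /* recover closure */
        freeStablePtr(cd->completion);                /* no leak */
        /* ... arrange for `cont` to be run ... */
        (void)cont;
    }
    ```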
* Corrected types of thread ids obtained from the RTS (Mann mit Hut, 2021-10-06, 1 file, -4/+4)
    While the thread ids had been changed to 64-bit words in
    e57b7cc6d8b1222e0939d19c265b51d2c3c2b4c0, the return type of the
    foreign import function used to retrieve these ids - namely
    'GHC.Conc.Sync.getThreadId' - was never updated accordingly. In order
    to fix that, this function now returns a 'CULLong'.

    In addition, the types used in the thread labeling subsystem were
    adjusted as well, and several format strings were modified throughout
    the whole RTS to display thread ids in a consistent and correct way.

    Fixes #16761
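    For illustration, the kind of format-string fix this implies, using
    the portable C99 macro (the RTS itself uses its own FMT_Word64 from
    Stg.h):

    ```
    #include <inttypes.h>
    #include <stdio.h>

    static void show_tid(uint64_t tid)
    {
        /* "%d" or "%lu" would truncate or misread a 64-bit id */
        printf("thread %" PRIu64 "\n", tid);
    }
    ```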
* Move `/includes` to `/rts/include`, sort per package better (John Ericson, 2021-08-09, 1 file, -1/+1)
    In order to make the packages in this repo "reinstallable", we need
    to associate source code with a specific package. Having a top-level
    `/includes` dir that mixes concerns (which packages' includes?) gets
    in the way of this.

    To start, I have moved everything to `rts/`, which is mostly correct.
    There are a few things, however, that really don't belong in the rts
    (like the generated constants Haskell type, `CodeGen.Platform.h`).
    Those needed to be manually adjusted.

    Things of note:

    - No symlinking for sake of windows, so we hard-link at configure
      time.

    - `CodeGen.Platform.h` no longer has a `.hs` extension (in addition
      to being moved to `compiler/`) so as not to confuse anyone, since
      it is next to Haskell files.

    - Blanket `-Iincludes` is gone in both build systems; include paths
      now more strictly respect per-package dependencies.

    - `deriveConstants` has been taught to not require a `--target-os`
      flag when generating the platform-agnostic Haskell type. Make takes
      advantage of this, but Hadrian has yet to.
* Make `PosixSource.h` installed and under `rts/` (John Ericson, 2021-08-09, 1 file, -1/+1)
    It is used outside of the rts, so we do this rather than just fishing
    it out of the repo in an ad-hoc way, in order to make the packages in
    this repo more self-contained.
* rts: Allow building with ASSERTs on in non-DEBUG way (Daniel Gröber, 2021-07-29, 1 file, -2/+0)
    We have a couple of places where the conditions in asserts depend on
    code ifdefed out when DEBUG is off. I'd like to allow compiling
    assertions into non-DEBUG RTSen, so that won't do. Currently, if we
    remove the conditional around the definition of ASSERT(), the build
    does not actually work: because DEBUG is off, initMutex does not
    initialize mutexes with PTHREAD_MUTEX_ERRORCHECK, which leads to a
    deadlock.
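    A sketch of the initMutex behaviour the message describes:
    error-checking mutexes only under DEBUG, so an ASSERT-enabled
    non-DEBUG build must not rely on that mutex type:

    ```
    #include <pthread.h>

    typedef pthread_mutex_t Mutex;

    static void initMutex(Mutex *m)
    {
    #if defined(DEBUG)
        /* error-checking mutexes catch recursive locking and similar */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
    #else
        pthread_mutex_init(m, NULL);   /* default (fast) mutexes */
    #endif
    }
    ```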
* RTS: try to fix timer races (Sylvain Henry, 2021-07-26, 1 file, -1/+4)
    * The pthread-based timer was initialized in the "started" state,
      while some other parts of the RTS assume it is initialized in the
      "stopped" state, e.g. in hs_init_ghc:

        /* Start the "ticker" and profiling timer but don't start until
         * the scheduler is up. However, the ticker itself needs to be
         * initialized before the scheduler to ensure that the ticker
         * mutex is initialized as moreCapabilities will attempt to
         * acquire it. */

    * After a fork, don't start the timer before the IOManager is
      initialized: the timer handler (handle_tick) might call wakeUpRts
      to perform an idle GC, which calls wakeupIOManager/ioManagerWakeup.

    Found while debugging #18033/#20132, but I couldn't confirm whether
    it fixes them.
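    A sketch of the ordering being enforced (initTimer, initScheduler and
    startTimer are real RTS entry points; bodies elided here):

    ```
    void initTimer(void);
    void initScheduler(void);
    void startTimer(void);

    static void hs_init_sketch(void)
    {
        initTimer();      /* ticker mutex initialized; ticker stopped    */
        initScheduler();  /* moreCapabilities may take the ticker mutex  */
        startTimer();     /* start ticking only once the scheduler is up */
    }
    ```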
* rts: Gradually return retained memory to the OS (Matthew Pickering, 2021-03-10, 1 file, -10/+10)
    Related to #19381 #19359 #14702

    After a spike in memory usage we have been conservative about
    returning allocated blocks to the OS in case we are still allocating
    a lot and would end up just reallocating them. The result of this was
    that up to 4 * live_bytes of blocks would be retained once they were
    allocated, even if memory usage ended up a lot lower.

    For a heap of size ~1.5G, this would result in OS memory reporting
    6G, which is both misleading and worrying for users. In long-lived
    server applications this results in consistently high memory usage
    when the live data size is much more reasonable (for example ghcide).

    Therefore we have a new (2021) strategy which starts by retaining up
    to 4 * live_bytes of blocks before gradually returning unneeded
    memory back to the OS on subsequent major GCs which are NOT caused by
    a heap overflow.

    Each major GC which is NOT caused by a heap overflow increases the
    consec_idle_gcs counter, and the amount of memory which is retained
    is inversely proportional to this number. By default the excess
    memory retained is

        oldGenFactor (controlled by -F) / 2 ^ (consec_idle_gcs / returnDecayFactor)

    On a major GC caused by a heap overflow, the `consec_idle_gcs`
    variable is reset to 0 (as we could continue to allocate more, so
    retaining all the memory might make sense).

    Therefore setting bigger values for `-Fd` makes the rate at which
    memory is returned slower. Smaller values make it get returned
    faster. Setting `-Fd0` disables the memory return completely, which
    is the behaviour of older GHC versions.

    The default is `-Fd4`, which results in the following scaling:

        > mapM print [(x, 1/ (2**(x / 4))) | x <- [1 :: Double ..20]]
        (1.0,0.8408964152537146)
        (2.0,0.7071067811865475)
        (3.0,0.5946035575013605)
        (4.0,0.5)
        (5.0,0.4204482076268573)
        (6.0,0.35355339059327373)
        (7.0,0.29730177875068026)
        (8.0,0.25)
        (9.0,0.21022410381342865)
        (10.0,0.17677669529663687)
        (11.0,0.14865088937534013)
        (12.0,0.125)
        (13.0,0.10511205190671433)
        (14.0,8.838834764831843e-2)
        (15.0,7.432544468767006e-2)
        (16.0,6.25e-2)
        (17.0,5.255602595335716e-2)
        (18.0,4.4194173824159216e-2)
        (19.0,3.716272234383503e-2)
        (20.0,3.125e-2)

    So after 13 consecutive GCs only 0.1 of the maximum memory used will
    be retained.

    Further to this decay factor, the amount of memory we attempt to
    retain is also influenced by the GC strategy for the oldest
    generation. If we are using a copying strategy then we will need at
    least 2 * live_bytes for copying to take place, so we always keep
    that much. If using compacting or nonmoving then we need a lower
    number, so we just retain at least `1.2 * live_bytes` for some
    protection.

    In future we might want to make this behaviour more aggressive; some
    relevant literature is

    > Ulan Degenbaev, Jochen Eisinger, Manfred Ernst, Ross McIlroy, and
    > Hannes Payer. 2016. Idle time garbage collection scheduling.
    > SIGPLAN Not. 51, 6 (June 2016), 570–583.
    > DOI: https://doi.org/10.1145/2980983.2908106

    which describes the "memory reducer" in the V8 javascript engine,
    which on an idle collection immediately returns as much memory as
    possible.
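    A worked sketch of the retention schedule (function and parameter
    names assumed): with -F2 -Fd4, after 13 consecutive idle major GCs
    the retained factor is 2 / 2^(13/4), roughly 0.21:

    ```
    #include <math.h>
    #include <stdio.h>

    static double retained_factor(double oldGenFactor,      /* -F  */
                                  double returnDecayFactor, /* -Fd */
                                  unsigned consec_idle_gcs)
    {
        if (returnDecayFactor == 0)   /* -Fd0: never decay (old behaviour) */
            return oldGenFactor;
        return oldGenFactor
             / pow(2.0, (double)consec_idle_gcs / returnDecayFactor);
    }

    int main(void)
    {
        for (unsigned gcs = 1; gcs <= 20; gcs++)
            printf("%2u %.4f\n", gcs, retained_factor(2.0, 4.0, gcs));
        return 0;
    }
    ```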
* Profiling: Allow heap profiling to be controlled dynamically. (Matthew Pickering, 2021-03-03, 1 file, -2/+2)
    This patch exposes three new functions in `GHC.Profiling` which allow
    heap profiling to be enabled and disabled dynamically.

    1. startHeapProfTimer - Starts heap profiling with the given RTS
       options
    2. stopHeapProfTimer - Stops heap profiling
    3. requestHeapCensus - Perform a heap census on the next context
       switch, regardless of whether the timer is enabled or not.
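    A sketch of how the census request can work (flag name assumed; the
    scheduler polls it at the next safe point):

    ```
    #include <stdbool.h>

    static volatile bool heap_census_requested = false;

    /* reached from Haskell via GHC.Profiling.requestHeapCensus */
    void requestHeapCensus(void) { heap_census_requested = true; }

    /* called by the scheduler at a context switch */
    static void maybeRunCensus(void)
    {
        if (heap_census_requested) {
            heap_census_requested = false;
            /* ... perform the heap census, timer enabled or not ... */
        }
    }
    ```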
* Add a common wakeupIOManager hook (Duncan Coutts, 2021-01-25, 1 file, -1/+1)
    Use it in the scheduler in threaded mode. This replaces the direct
    calls to ioManagerWakeup, which are part of specific I/O manager
    implementations.
* Replace a direct call to ioManagerStartCap with a new hook (Duncan Coutts, 2021-01-25, 1 file, -3/+1)
    Replace a direct call to ioManagerStartCap in forkProcess in
    Schedule.c with a new hook initIOManagerAfterFork in IOManager.

    This replaces a direct hook in the scheduler from a single I/O
    manager implementation (the threaded unix one) with a generic hook.

    Add some commentary on opportunities for future rationalisation.
* Move ioManager{Start,Wakeup,Die} to internal IOManager.h (Duncan Coutts, 2021-01-25, 1 file, -0/+1)
    Move them from the external IOInterface.h to the internal IOManager.h.
    The functions are all in fact internal. They are not used from the
    base library at all.

    Remove ioManagerWakeup as an exported symbol. It is not used
    elsewhere.
* Move win32/IOManager to win32/MIOManager (Duncan Coutts, 2021-01-25, 1 file, -1/+1)
    It is only for MIO, and we want to use the generic name IOManager for
    the common parts of the interface and dispatch.
* rts: enable thread label table in all RTS flavours #17972 (Adam Sandberg Ericsson, 2020-12-20, 1 file, -2/+1)
* rts: Flush eventlog buffers from flushEventLog (Ben Gamari, 2020-11-24, 1 file, -1/+1)
    As noted in #18043, flushTrace failed to flush anything beyond the
    writer. This means that a significant amount of data sitting in
    capability-local event buffers may never get flushed, despite the
    users' pleas for us to flush.

    Fix this by making flushEventLog flush all of the event buffers
    before flushing the writer.

    Fixes #18043.
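    A sketch of the fix (helper names assumed): drain every
    capability-local buffer into the writer before flushing the writer
    itself:

    ```
    typedef struct Capability_ Capability;

    extern Capability **capabilities;
    extern unsigned n_capabilities;

    void flushCapEventsBuf(Capability *cap);  /* per-cap buffer -> writer */
    void flushWriter(void);                   /* writer -> file/pipe      */

    void flushEventLog(void)
    {
        for (unsigned i = 0; i < n_capabilities; i++)
            flushCapEventsBuf(capabilities[i]);  /* previously skipped */
        flushWriter();
    }
    ```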
* Merge remote-tracking branch 'origin/wip/tsan/all' (Ben Gamari, 2020-11-08, 1 file, -39/+63)
| * Merge branch 'wip/tsan/timer' into wip/tsan/all (Ben Gamari, 2020-11-01, 1 file, -0/+8)
| | * rts: Pause timer while changing capability count (Ben Gamari, 2020-10-24, 1 file, -0/+8)
    This avoids #17289.
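    A sketch of the shape of the fix (stopTimer/startTimer are real RTS
    entry points; the resize helper is hypothetical):

    ```
    void stopTimer(void);
    void startTimer(void);
    void resizeCapabilityArray(unsigned new_n);   /* hypothetical */

    static void setNumCapabilitiesSketch(unsigned new_n)
    {
        stopTimer();                   /* ticker can't sample mid-update */
        resizeCapabilityArray(new_n);  /* grow/shrink capability count   */
        startTimer();
    }
    ```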
| * | Merge branch 'wip/tsan/misc' into wip/tsan/all (Ben Gamari, 2020-11-01, 1 file, -1/+3)
| | * | rts: Avoid lock order inversion during fork (Ben Gamari, 2020-10-24, 1 file, -1/+3)
    Fixes #17275.
| * | Merge branch 'wip/tsan/storage' into wip/tsan/all (Ben Gamari, 2020-11-01, 1 file, -2/+2)
| | * | rts/GC: Use atomics (Ben Gamari, 2020-10-30, 1 file, -2/+2)
| * | Capability: Properly fix data race on n_returning_tasks (Ben Gamari, 2020-10-24, 1 file, -2/+7)
    There is a real data race here, but it can be made safe by using
    proper atomic (though relaxed) accesses.
| * | Document schedulePushWork race (Ben Gamari, 2020-10-24, 1 file, -7/+2)
| * | rts/Schedule: Eliminate data races on recent_activity (Ben Gamari, 2020-10-24, 1 file, -7/+7)
    We cannot safely use relaxed atomics here.
| * | rts: Eliminate data races on pending_sync (Ben Gamari, 2020-10-24, 1 file, -3/+3)
| * | rts: Accept data race in work-stealing implementation (Ben Gamari, 2020-10-24, 1 file, -1/+5)
    This race is okay since the task is owned by the capability pushing
    it. By Note [Ownership of Task] this means that the capability is
    free to write to `task->cap` without taking `task->lock`.

    Fixes #17276.
| * | rts/Schedule: Use relaxed operations for sched_state (Ben Gamari, 2020-10-24, 1 file, -17/+21)
| * | rts: Mitigate races in capability interruption logic (Ben Gamari, 2020-10-24, 1 file, -6/+6)
| * | rts: Use relaxed atomics on n_returning_tasks (Ben Gamari, 2020-10-24, 1 file, -2/+5)
    This mitigates the warning of a benign race on n_returning_tasks in
    shouldYieldCapability. See #17261.
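    The C11 equivalent of such a relaxed read (the RTS uses its own
    RELAXED_LOAD macro; shouldYieldCapability's real logic is elided):

    ```
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        _Atomic unsigned n_returning_tasks;
        /* ... */
    } Capability;

    static bool shouldYieldSketch(Capability *cap)
    {
        /* the value may be momentarily stale, which is fine for a
         * heuristic, but the access itself is no longer a data race */
        unsigned n = atomic_load_explicit(&cap->n_returning_tasks,
                                          memory_order_relaxed);
        return n != 0;
    }
    ```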
| * | rts: Add assertions for task ownership of capabilities (Ben Gamari, 2020-10-24, 1 file, -1/+4)
* | RtsAPI: pause and resume the RTS (David Eichmann, 2020-11-02, 1 file, -8/+25)
    The `rts_pause` and `rts_resume` functions have been added to
    `RtsAPI.h` and allow an external process to completely pause and
    resume the RTS.

    Co-authored-by: Sven Tennie <sven.tennie@gmail.com>
    Co-authored-by: Matthew Pickering <matthewtpickering@gmail.com>
    Co-authored-by: Ben Gamari <bgamari.foss@gmail.com>
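    A usage sketch (check RtsAPI.h in your GHC version for the exact
    signatures; PauseToken is the handle the pause returns):

    ```
    #include "RtsAPI.h"

    static void inspect_heap_while_paused(void)
    {
        PauseToken *token = rts_pause();  /* stop all capabilities        */
        /* ... walk the heap, snapshot state, attach a debugger ...       */
        rts_resume(token);                /* let mutator threads continue */
    }
    ```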
* rts: fix race condition in StgCRun (Tamar Christina, 2020-10-09, 1 file, -17/+0)
    On Windows the stack has to be allocated 4k at a time; otherwise we
    get a segfault. This is done by using a helper ___chkstk_ms that is
    provided by libgcc. The Haskell side already knows how to handle
    this, but we need to do the same from STG. Previously we would drop
    the stack in StgRun but would only make it valid whenever the
    scheduler loop ran.

    This approach was fundamentally broken in that it falls apart when
    you take a signal from the OS. We see it less often because you
    initially get allocated a 1MB stack block which you have to blow past
    first. Concretely this means we must always keep the stack valid.

    Fixes #18601.
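    A hand-written C analogue of what ___chkstk_ms guarantees (the RTS
    does this in assembly in StgCRun; shown only to illustrate the
    probing):

    ```
    #include <stddef.h>

    /* Touch one byte per 4K page, top-down, so Windows commits each
     * guard page before the page is actually used. */
    static void probe_stack(volatile char *sp, size_t bytes)
    {
        for (size_t off = 0; off < bytes; off += 4096)
            sp[-(ptrdiff_t)off] = 0;
    }
    ```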
* winio: Queue IO processing threads at the front of the queue. (Andreas Klebinger, 2020-07-15, 1 file, -0/+8)
    This will unblock the IO thread sooner, hopefully leading to higher
    throughput in some situations.
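    A sketch of the change (pushOnRunQueue and appendToRunQueue are the
    RTS's front/back run-queue helpers; surrounding code assumed):

    ```
    typedef struct Capability_ Capability;
    typedef struct StgTSO_ StgTSO;

    void pushOnRunQueue(Capability *cap, StgTSO *tso);    /* front */
    void appendToRunQueue(Capability *cap, StgTSO *tso);  /* back  */

    static void queueIOProcessingThread(Capability *cap, StgTSO *io_tso)
    {
        /* front of the queue: runs at the next schedule, instead of
         * after everything already queued */
        pushOnRunQueue(cap, io_tso);
    }
    ```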
* winio: nonthreaded: Create io processing threads in main thread. (Andreas Klebinger, 2020-07-15, 1 file, -0/+4)
    We now set a flag in the IO thread. The scheduler, when looking for
    work, will check the flag and create/queue threads accordingly.

    We used to create these in the IO thread. This improved performance
    but caused frequent segfaults. Thread creation/allocation is only
    safe to do if nothing currently accesses the storage manager.
    However, without locks in the non-threaded runtime this can't be
    guaranteed.

    This shouldn't change performance all too much. In the past we had:

    * IO: Create/Queue thread.
    * Scheduler: Runs a few times. Eventually picks up IO processing
      thread.

    Now it's:

    * IO: Set flag to queue thread.
    * Scheduler: Pick up flag, if set create/queue thread. Eventually
      picks up IO processing thread.
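    A sketch of the described handshake (flag and helper names
    hypothetical): the IO side only sets a flag, and the scheduler,
    which may safely touch the storage manager, does the allocation:

    ```
    #include <stdbool.h>

    static volatile bool io_thread_requested = false;

    /* IO side: request only, no allocation here */
    void requestIOProcessingThread(void) { io_thread_requested = true; }

    /* scheduler loop, while looking for work */
    static void checkIOThreadRequest(void)
    {
        if (io_thread_requested) {
            io_thread_requested = false;
            /* createIOThread(...); pushOnRunQueue(...);  -- safe here */
        }
    }
    ```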
* winio: Properly check for the tso of an incall to be zero. (Andreas Klebinger, 2020-07-15, 1 file, -1/+1)
* winio: Fix a scheduler bug with the threaded-runtime. (Tamar Christina, 2020-07-15, 1 file, -13/+6)
* winio: Add IOPort synchronization primitive (Tamar Christina, 2020-07-15, 1 file, -3/+34)
* Fix more typos, via an improved Levenshtein-style corrector (Brian Wignall, 2020-01-12, 1 file, -2/+2)
* Merge non-moving garbage collector (Ben Gamari, 2019-10-23, 1 file, -34/+90)
    This introduces a concurrent mark & sweep garbage collector to manage
    the old generation. The concurrent nature of this collector typically
    results in significantly reduced maximum and mean pause times in
    applications with large working sets.

    Due to the large and intricate nature of the change I have opted to
    preserve the fully-buildable history, including merge commits, which
    is described in the "Branch overview" section below.

    Collector design
    ================

    The full design of the collector implemented here is described in
    detail in a technical note

    > B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
    > Compiler" (2018)

    This document can be requested from @bgamari.

    The basic heap structure used in this design is heavily inspired by

    > K. Ueno & A. Ohori. "A fully concurrent garbage collector for
    > functional programs on multicore processors." /ACM SIGPLAN Notices/
    > Vol. 51. No. 9 (presented at ICFP 2016)

    This design is intended to allow both marking and sweeping concurrent
    to execution of a multi-core mutator. Unlike the Ueno design, which
    requires no global synchronization pauses, the collector introduced
    here requires a stop-the-world pause at the beginning and end of the
    mark phase.

    To avoid heap fragmentation, the allocator consists of a number of
    fixed-size /sub-allocators/. Each of these sub-allocators allocates
    into its own set of /segments/, themselves allocated from the block
    allocator. Each segment is broken into a set of fixed-size allocation
    blocks (which back allocations), in addition to a bitmap (used to
    track the liveness of blocks) and some additional metadata (also used
    to track liveness).

    This heap structure enables collection via mark-and-sweep, which can
    be performed concurrently via a snapshot-at-the-beginning scheme
    (although concurrent collection is not implemented in this patch).

    Implementation structure
    ========================

    The majority of the collector is implemented in a handful of files:

    * `rts/Nonmoving.c` is the heart of the beast. It implements the
      entry-point to the nonmoving collector (`nonmoving_collect`), as
      well as the allocator (`nonmoving_allocate`) and a number of
      utilities for manipulating the heap.

    * `rts/NonmovingMark.c` implements the mark queue functionality,
      update remembered set, and mark loop.

    * `rts/NonmovingSweep.c` implements the sweep loop.

    * `rts/NonmovingScav.c` implements the logic necessary to scavenge
      the nonmoving heap.

    Branch overview
    ===============

    ```
    * wip/gc/opt-pause:
    |   A variety of small optimisations to further reduce pause times.
    |
    * wip/gc/compact-nfdata:
    |   Introduce support for compact regions into the non-moving
    |\  collector
    | \
    |  \
    | | * wip/gc/segment-header-to-bdescr:
    | | |   Another optimization that we are considering, pushing
    | | |   some segment metadata into the segment descriptor for
    | | |   the sake of locality during mark
    | | |
    | * | wip/gc/shortcutting:
    | | |   Support for indirection shortcutting and the selector
    | | |   optimization in the non-moving heap.
    | | |
    * | | wip/gc/docs:
    | |/    Work on implementation documentation.
    | /
    |/
    * wip/gc/everything:
    |   A roll-up of everything below.
    |\
    | \
    | |\
    | | \
    | | * wip/gc/optimize:
    | | |   A variety of optimizations, primarily to the mark loop.
    | | |   Some of these are microoptimizations but a few are quite
    | | |   significant. In particular, the prefetch patches have
    | | |   produced a nontrivial improvement in mark performance.
    | | |
    | | * wip/gc/aging:
    | | |   Enable support for aging in major collections.
    | | |
    | * | wip/gc/test:
    | | |   Fix up the testsuite to more or less pass.
    | | |
    * | | wip/gc/instrumentation:
    | | |   A variety of runtime instrumentation including statistics
    | |/    support, the nonmoving census, and eventlog support.
    | /
    |/
    * wip/gc/nonmoving-concurrent:
    |   The concurrent write barriers.
    |
    * wip/gc/nonmoving-nonconcurrent:
    |   The nonmoving collector without the write barriers necessary
    |   for concurrent collection.
    |
    * wip/gc/preparation:
    |   A merge of the various preparatory patches that aren't directly
    |   implementing the GC.
    |
    * GHC HEAD . . .
    ```
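    A conceptual sketch of the segment layout described above (field
    names approximate; see rts/sm/NonMoving.h for the real definition):

    ```
    #include <stdint.h>

    struct NonmovingSegment {
        struct NonmovingSegment *link;      /* segment list linkage     */
        struct NonmovingSegment *todo_link; /* segments awaiting sweep  */
        uint32_t next_free;                 /* index of next free block */
        uint8_t  bitmap[];                  /* one mark byte per block; */
                                            /* blocks follow the bitmap */
    };
    ```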
| * Disable aging when doing deadlock detection GC (Ben Gamari, 2019-10-22, 1 file, -10/+13)
| * Nonmoving: Ensure write barrier vanishes in non-threaded RTS (Ben Gamari, 2019-10-21, 1 file, -1/+1)