path: root/rts/Capability.c
Entries: commit message (author, date; files changed, lines -removed/+added)
* rts: Use a separate free block list for allocatePinned (Matthew Pickering, 2021-03-08; 1 file, -0/+1)
    The way in which allocatePinned took blocks out of the nursery was leading to
    horrible fragmentation in some workloads. The strategy now is that a separate
    free block list is reserved for each capability and blocks are taken from
    there. When it's empty, the global SM lock is taken and a fresh block of size
    PINNED_EMPTY_SIZE is allocated. Fixes #19481.
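
A minimal sketch of the strategy described above, assuming simplified stand-in
types: each capability keeps its own free list of pinned blocks and only takes
the global SM lock when that list is empty. The names Block, Capability,
sm_lock and allocGroupUnderSMLock are illustrative, not the RTS's real
identifiers.

```c
#include <stdlib.h>
#include <pthread.h>

typedef struct Block {
    struct Block *link;
} Block;

typedef struct {
    Block *pinned_free_list;     /* capability-local: no locking needed */
} Capability;

static pthread_mutex_t sm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for allocating a fresh run of blocks from the global block
 * allocator (the commit mentions a PINNED_EMPTY_SIZE-sized allocation). */
static Block *allocGroupUnderSMLock(void)
{
    return calloc(1, sizeof(Block));
}

Block *allocPinnedBlock(Capability *cap)
{
    Block *b = cap->pinned_free_list;
    if (b != NULL) {
        /* Fast path: reuse a block from the capability's own free list. */
        cap->pinned_free_list = b->link;
        return b;
    }
    /* Slow path: the local list is empty, so take the global SM lock and
     * fetch a fresh block. */
    pthread_mutex_lock(&sm_lock);
    b = allocGroupUnderSMLock();
    pthread_mutex_unlock(&sm_lock);
    return b;
}
```
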
* rts/eventlog: Ensure that all capability buffers are flushed (Ben Gamari, 2021-03-01; 1 file, -1/+1)
    The previous approach performed the flush in yieldCapability. However, as
    pointed out in #19435, this is wrong: idle capabilities will not go through
    that code path. The fix is simple: undo the optimisation, flushing in
    `flushEventLog` by calling `flushAllCapsEventsBufs` after acquiring all
    capabilities. Fixes #19435.
* Replace an ioManagerDie call with stopIOManager (Duncan Coutts, 2021-01-25; 1 file, -1/+9)
    The latter is the proper hook defined in IOManager.h. The former is part of a
    specific I/O manager implementation (the threaded unix one).
* Move ioManager{Start,Wakeup,Die} to internal IOManager.h (Duncan Coutts, 2021-01-25; 1 file, -0/+1)
    Move them from the external IOInterface.h to the internal IOManager.h. The
    functions are all in fact internal. They are not used from the base library at
    all. Remove ioManagerWakeup as an exported symbol. It is not used elsewhere.
* Move setIOManagerControlFd from Capability.c to IOManager.c (Duncan Coutts, 2021-01-25; 1 file, -17/+0)
    This is a better home for it. It is not really an aspect of capabilities. It
    is specific to one of the I/O manager impls.
* Rename includes/rts/IOManager.h to IOInterface.h (Duncan Coutts, 2021-01-25; 1 file, -1/+1)
    Naming is hard. Where we want to get to is to have a clear internal and
    external API for the IO manager within the RTS. What we have right now is just
    the external API (used in base for the Haskell side of the threaded IO manager
    impls) living in includes/rts/IOManager.h. We want to add a clear RTS internal
    API, which really ought to live in rts/IOManager.h. Several people think it's
    too confusing to have both:
      * includes/rts/IOManager.h for the external API
      * rts/IOManager.h for the internal API
    So the plan is to add rts/IOManager.{h,c} as the internal parts, and rename
    the external part to be includes/rts/IOInterface.h. It is admittedly not great
    to have .h files in includes/rts/ called "interface", since by definition
    every .h file under includes/ is an interface! Alternative naming scheme
    suggestions welcome!
* rts/Capability: Use relaxed load in findSpark (Ben Gamari, 2021-01-09; 1 file, -1/+2)
    When checking n_returning_tasks.
* rts: Flush eventlog buffers from flushEventLog (Ben Gamari, 2020-11-24; 1 file, -0/+5)
    As noted in #18043, flushTrace failed to flush anything beyond the writer.
    This means that a significant amount of data sitting in capability-local event
    buffers may never get flushed, despite the user's pleas for us to flush. Fix
    this by making flushEventLog flush all of the event buffers before flushing
    the writer. Fixes #18043.
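
A small sketch of the flush ordering this commit establishes, with simplified
stand-in types (EventBuf, capEventBuf, event_log_file): drain every
capability-local event buffer first, then flush the underlying writer. The
real code lives in rts/eventlog/EventLog.c.

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

#define N_CAPABILITIES 4

typedef struct {
    uint8_t data[4096];
    size_t  used;
} EventBuf;

static EventBuf capEventBuf[N_CAPABILITIES];
static FILE    *event_log_file;

static void flushCapEventBuf(EventBuf *eb)
{
    fwrite(eb->data, 1, eb->used, event_log_file);
    eb->used = 0;
}

void flushEventLogSketch(void)
{
    /* Step 1: flush every capability buffer (flushAllCapsEventsBufs in the
     * RTS does this after acquiring all capabilities). */
    for (int i = 0; i < N_CAPABILITIES; i++) {
        flushCapEventBuf(&capEventBuf[i]);
    }
    /* Step 2: only now flush the writer, so nothing stays buffered. */
    fflush(event_log_file);
}
```
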
* Merge remote-tracking branch 'origin/wip/tsan/all' (Ben Gamari, 2020-11-08; 1 file, -77/+177)
* Merge branch 'wip/tsan/timer' into wip/tsan/all (Ben Gamari, 2020-11-01; 1 file, -9/+17)
* Fix #17289 (Ben Gamari, 2020-10-24; 1 file, -9/+17)
* Merge branch 'wip/tsan/event-mgr' into wip/tsan/all (Ben Gamari, 2020-11-01; 1 file, -1/+1)
* Mitigate data races in event manager startup/shutdown [wip/tsan/event-mgr] (Ben Gamari, 2020-10-24; 1 file, -1/+1)
* Capability: Properly fix data race on n_returning_tasks (Ben Gamari, 2020-10-24; 1 file, -2/+8)
    There is a real data race here, but it can be made safe by using proper atomic
    (but relaxed) accesses.
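
The same idea expressed with C11 atomics rather than the RTS's
RELAXED_LOAD/RELAXED_STORE wrappers; the Capability struct here is a
simplified stand-in. The field is written under cap->lock but read without
it, so the accesses must at least be atomic, and relaxed ordering is enough.

```c
#include <stdatomic.h>

typedef struct {
    atomic_uint n_returning_tasks;   /* read without holding cap->lock */
} Capability;

/* Unlocked reader: a relaxed load gives an un-torn, reasonably fresh value
 * without imposing any ordering on surrounding memory operations. */
static inline unsigned peekReturningTasks(Capability *cap)
{
    return atomic_load_explicit(&cap->n_returning_tasks,
                                memory_order_relaxed);
}

/* Writer (the real RTS holds cap->lock here): a relaxed store keeps the
 * update atomic so the unlocked readers never observe a torn value. */
static inline void setReturningTasks(Capability *cap, unsigned n)
{
    atomic_store_explicit(&cap->n_returning_tasks, n,
                          memory_order_relaxed);
}
```
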
* Document schedulePushWork race (Ben Gamari, 2020-10-24; 1 file, -27/+68)
* rts/Schedule: Eliminate data races on recent_activity (Ben Gamari, 2020-10-24; 1 file, -1/+1)
    We cannot safely use relaxed atomics here.
* rts: Eliminate data races on pending_sync (Ben Gamari, 2020-10-24; 1 file, -2/+2)
* rts: Accept data race in work-stealing implementation (Ben Gamari, 2020-10-24; 1 file, -0/+3)
    This race is okay since the task is owned by the capability pushing it. By
    Note [Ownership of Task] this means that the capability is free to write to
    `task->cap` without taking `task->lock`. Fixes #17276.
* rts/Schedule: Use relaxed operations for sched_state (Ben Gamari, 2020-10-24; 1 file, -2/+2)
* rts: Use relaxed operations for cap->running_task (TODO) (Ben Gamari, 2020-10-24; 1 file, -9/+11)
    This shouldn't be necessary since only the owning thread of the capability
    should be touching this.
* rts/Capability: Use relaxed operations for last_free_capability (Ben Gamari, 2020-10-24; 1 file, -3/+3)
* rts: Clarify locking behavior of releaseCapability_ (Ben Gamari, 2020-10-24; 1 file, -0/+4)
* rts: Annotate benign race in waitForCapability (Ben Gamari, 2020-10-24; 1 file, -1/+21)
* rts: Factor out logic to identify a good capability for running a task (Ben Gamari, 2020-10-24; 1 file, -26/+41)
    Not only does this make the control flow a bit clearer, but it also allows us
    to add a TSAN suppression on this logic, which requires (harmless) data races.
* rts/Capability: Initialize interrupt field (Ben Gamari, 2020-10-24; 1 file, -0/+1)
    Previously this was left uninitialized. Also clarify some comments.
* RtsAPI: pause and resume the RTS (David Eichmann, 2020-11-02; 1 file, -1/+9)
    The `rts_pause` and `rts_resume` functions have been added to `RtsAPI.h` and
    allow an external process to completely pause and resume the RTS.
    Co-authored-by: Sven Tennie <sven.tennie@gmail.com>
    Co-authored-by: Matthew Pickering <matthewtpickering@gmail.com>
    Co-authored-by: Ben Gamari <bgamari.foss@gmail.com>
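
A hedged usage sketch. The commit message does not spell out the signatures;
the PauseToken-based API below matches rts_pause/rts_resume as they appear in
RtsAPI.h of later GHC releases, so treat the exact types and header as an
assumption.

```c
#include "Rts.h"

/* Pause all capabilities, run an inspection callback, then resume. */
void with_rts_paused(void (*inspect)(void))
{
    PauseToken *token = rts_pause();   /* blocks until every capability stops */
    inspect();                         /* no Haskell code runs in here */
    rts_resume(token);                 /* hand the capabilities back */
}
```
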
* Fix typos, via a Levenshtein-style corrector (Brian Wignall, 2020-01-04; 1 file, -1/+1)
* rts: Fix --debug-numa mode under Docker (Ben Gamari, 2019-12-30; 1 file, -0/+2)
    As noted in #17606, Docker disallows the get_mempolicy syscall by default.
    This caused numerous tests to fail under CI in the `debug_numa` way. Avoid
    this by disabling the NUMA probing logic when --debug-numa is in use, instead
    setting n_numa_nodes in RtsFlags.c. Fixes #17606.
* Remove outdated comment (Sylvain Henry, 2019-12-24; 1 file, -4/+2)
* rts: Implement concurrent collection in the nonmoving collector (Ben Gamari, 2019-10-20; 1 file, -9/+25)
    This extends the non-moving collector to allow concurrent collection.

    The full design of the collector implemented here is described in detail in a
    technical note: B. Gamari, "A Concurrent Garbage Collector For the Glasgow
    Haskell Compiler" (2018).

    This extension involves the introduction of a capability-local remembered set,
    known as the /update remembered set/, which tracks objects which may no longer
    be visible to the collector due to mutation. To maintain this remembered set
    we introduce a write barrier on mutations which is enabled while a concurrent
    mark is underway. The update remembered set representation is similar to that
    of the nonmoving mark queue, being a chunked array of `MarkEntry`s. Each
    `Capability` maintains a single accumulator chunk, which it flushes when
    (a) it is filled, or (b) the nonmoving collector enters its post-mark
    synchronization phase.

    While the write barrier touches a significant amount of code it is
    conceptually straightforward: the mutator must ensure that the referee of any
    pointer it overwrites is added to the update remembered set. However, there
    are a few details:
      * In the case of objects with a dirty flag (e.g. `MVar`s) we can exploit the
        fact that only the *first* mutation requires a write barrier.
      * Weak references, as usual, complicate things. In particular, we must
        ensure that the referee of a weak object is marked if dereferenced by the
        mutator. For this we (unfortunately) must introduce a read barrier, as
        described in Note [Concurrent read barrier on deRefWeak#] (in
        `NonMovingMark.c`).
      * Stable names are also a bit tricky, as described in Note [Sweeping stable
        names in the concurrent collector] (`NonMovingSweep.c`).

    We take quite some pains to ensure that the high thread count often seen in
    parallel Haskell applications doesn't affect pause times. To this end we allow
    thread stacks to be marked either by the thread itself (when it is executed or
    stack-underflows) or the concurrent mark thread (if the thread owning the
    stack is never scheduled). There is a non-trivial handshake to ensure that
    this happens without racing, which is described in Note [StgStack dirtiness
    flags and concurrent marking].

    Co-Authored-by: Ömer Sinan Ağacan <omer@well-typed.com>
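
A conceptual sketch of the update remembered set write barrier described
above, using stand-in types: while a concurrent mark is running, any pointer
about to be overwritten is pushed into a capability-local accumulator so the
marker can still find it. The real implementation lives in
rts/sm/NonMovingMark.c.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct Closure Closure;

#define CHUNK_ENTRIES 128

typedef struct {
    Closure *entries[CHUNK_ENTRIES];  /* capability-local accumulator chunk */
    size_t   used;
} UpdRemSet;

/* Set while a concurrent mark phase is underway. */
bool nonmoving_write_barrier_enabled = false;

/* Stand-in for handing a full chunk over to the mark queue. */
static void flushUpdRemSetChunk(UpdRemSet *rs)
{
    rs->used = 0;
}

static void updRemSetPush(UpdRemSet *rs, Closure *overwritten)
{
    if (rs->used == CHUNK_ENTRIES) {
        flushUpdRemSetChunk(rs);
    }
    rs->entries[rs->used++] = overwritten;
}

/* The barrier itself: remember the old referee, then perform the store. */
static inline void writeBarrieredStore(UpdRemSet *rs, Closure **field,
                                       Closure *new_value)
{
    if (nonmoving_write_barrier_enabled && *field != NULL) {
        updRemSetPush(rs, *field);
    }
    *field = new_value;
}
```
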
* rts: Non-concurrent mark and sweep (Ömer Sinan Ağacan, 2019-10-20; 1 file, -0/+1)
    This implements the core heap structure and a serial mark/sweep collector
    which can be used to manage the oldest-generation heap. This is the first step
    towards a concurrent mark-and-sweep collector aimed at low-latency
    applications.

    The full design of the collector implemented here is described in detail in a
    technical note: B. Gamari, "A Concurrent Garbage Collector For the Glasgow
    Haskell Compiler" (2018).

    The basic heap structure used in this design is heavily inspired by
    K. Ueno & A. Ohori, "A fully concurrent garbage collector for functional
    programs on multicore processors", /ACM SIGPLAN Notices/ Vol. 51, No. 9
    (presented at ICFP 2016).

    This design is intended to allow both marking and sweeping concurrent to
    execution of a multi-core mutator. Unlike the Ueno design, which requires no
    global synchronization pauses, the collector introduced here requires a
    stop-the-world pause at the beginning and end of the mark phase.

    To avoid heap fragmentation, the allocator consists of a number of fixed-size
    /sub-allocators/. Each of these sub-allocators allocates into its own set of
    /segments/, themselves allocated from the block allocator. Each segment is
    broken into a set of fixed-size allocation blocks (which back allocations) in
    addition to a bitmap (used to track the liveness of blocks) and some
    additional metadata (also used to track liveness). This heap structure enables
    collection via mark-and-sweep, which can be performed concurrently via a
    snapshot-at-the-beginning scheme (although concurrent collection is not
    implemented in this patch).

    The mark queue is a fairly straightforward chunked-array structure. The
    representation is a bit more verbose than a typical mark queue to accommodate
    a combination of two features:
      * a mark FIFO, which improves the locality of marking, reducing one of the
        major overheads seen in mark/sweep allocators (see [1] for details)
      * the selector optimization and indirection shortcutting, which requires
        that we track where we found each reference to an object in case we need
        to update the reference at a later point (e.g. when we find that it is an
        indirection). See Note [Origin references in the nonmoving collector] (in
        `NonMovingMark.h`) for details.
    Beyond this the mark/sweep is fairly run-of-the-mill.

    [1] R. Garner, S.M. Blackburn, D. Frampton. "Effective Prefetch for Mark-Sweep
        Garbage Collection." ISMM 2007.

    Co-Authored-By: Ben Gamari <ben@well-typed.com>
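
A simplified picture of the heap structure described above, with illustrative
field names rather than GHC's: one sub-allocator per block size, each owning
segments that consist of a liveness bitmap plus fixed-size allocation blocks.

```c
#include <stdint.h>

#define BLOCKS_PER_SEGMENT 512   /* illustrative; real count depends on block size */

struct Segment {
    struct Segment *link;             /* next segment in the same list */
    uint16_t next_free;               /* index of the next free block */
    uint16_t block_size;              /* every block in a segment is this size */
    uint8_t  bitmap[BLOCKS_PER_SEGMENT]; /* one liveness/mark entry per block */
    /* the fixed-size allocation blocks follow this header in memory */
};

/* One sub-allocator per block size keeps sizes from mixing within a segment,
 * which is what bounds fragmentation. */
struct SubAllocator {
    struct Segment *current;          /* segment currently allocated into */
    struct Segment *filled;           /* full segments, waiting to be swept */
    struct Segment *free;             /* swept, empty segments */
};
```
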
* rts/Capability: A few documentation comments (Ben Gamari, 2019-10-18; 1 file, -0/+5)
* Expunge #ifdef and #ifndef from the codebase (John Ericson, 2019-07-14; 1 file, -3/+3)
    These are unexploded mines as far as the linter is concerned. I don't want to
    hit them in my MRs by mistake! I did this with `sed`, and then rolled back
    some changes in the docs, config.guess, and the linter itself.
* Typo fix, replace a foldl with foldl' (Ömer Sinan Ağacan, 2018-12-12; 1 file, -1/+1)
* rts: Rip out support for STM invariants (Ben Gamari, 2018-06-02; 1 file, -1/+0)
    This feature has some very serious correctness issues (#14310), introduces a
    great deal of complexity, and hasn't seen wide usage. Consequently we are
    removing it, as proposed in Proposal #77 [1]. This is heavily based on a patch
    from fryguybob. Updates stm submodule.
    [1] https://github.com/ghc-proposals/ghc-proposals/pull/77
    Test Plan: Validate
    Reviewers: erikd, simonmar, hvr
    Reviewed By: simonmar
    Subscribers: rwbarton, thomie, carter
    GHC Trac Issues: #14310
    Differential Revision: https://phabricator.haskell.org/D4760
* rts: Note functions which must take all_tasks_mutex. (Ben Gamari, 2018-03-02; 1 file, -0/+3)
* rts: Add format attribute to barf (Ben Gamari, 2018-02-06; 1 file, -1/+1)
    Test Plan: Validate
    Reviewers: erikd, simonmar
    Reviewed By: simonmar
    Subscribers: rwbarton, thomie, carter
    Differential Revision: https://phabricator.haskell.org/D4374
* A bunch of typofixes (Gabor Greif, 2017-09-26; 1 file, -1/+1)
* Prefer #if defined to #ifdef (Ben Gamari, 2017-04-28; 1 file, -5/+5)
    Our new CPP linter enforces this.
* Use C99's bool (Ben Gamari, 2016-11-29; 1 file, -37/+37)
    Test Plan: Validate on lots of platforms
    Reviewers: erikd, simonmar, austin
    Reviewed By: erikd, simonmar
    Subscribers: michalt, thomie
    Differential Revision: https://phabricator.haskell.org/D2699
* tryGrabCapability should be using TRY_ACQUIRE_LOCK (Simon Marlow, 2016-09-15; 1 file, -1/+3)
* Add hs_try_putmvar() (Simon Marlow, 2016-09-12; 1 file, -0/+1)
    Summary: This is a fast, non-blocking, asynchronous interface to tryPutMVar
    that can be called from C/C++. It's useful for callback-based C/C++ APIs: the
    idea is that the callback invokes hs_try_putmvar(), and the Haskell code waits
    for the callback to run by blocking in takeMVar. The callback doesn't block -
    this is often a requirement of callback-based APIs. The callback wakes up the
    Haskell thread with minimal overhead and no unnecessary context-switches.

    There are a couple of benchmarks in testsuite/tests/concurrent/should_run.
    Some example results comparing hs_try_putmvar() with using a standard foreign
    export:
      ./hs_try_putmvar003 1 64 16 100 +RTS -s -N4    0.49s
      ./hs_try_putmvar003 2 64 16 100 +RTS -s -N4    2.30s
    hs_try_putmvar() is 4x faster for this workload (see the source for
    hs_try_putmvar003.hs for details of the workload).

    An alternative solution is to use the IO Manager for this. We've tried it, but
    there are problems with that approach:
      * Need to create a new file descriptor for each callback
      * The IO Manager thread(s) become a bottleneck
      * More potential for things to go wrong, e.g. throwing an exception in an
        IO Manager callback kills the IO Manager thread.

    Test Plan: validate; new unit tests
    Reviewers: niteria, erikd, ezyang, bgamari, austin, hvr
    Subscribers: thomie
    Differential Revision: https://phabricator.haskell.org/D2501
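
A sketch of the C side of this pattern. It assumes the Haskell side created an
empty MVar, obtained a stable pointer to it, and handed that pointer plus its
capability number to the C library before blocking in takeMVar; callback_ctx
and on_completion are hypothetical names, and the exact header providing
hs_try_putmvar is an assumption here.

```c
#include "HsFFI.h"

struct callback_ctx {
    int         cap;      /* capability number supplied by the Haskell side */
    HsStablePtr mvar_sp;  /* stable pointer to the MVar to be filled */
};

/* Called by some C/C++ library when its asynchronous operation finishes.
 * hs_try_putmvar() does not block, which is exactly what callback-based
 * APIs require: it simply wakes the Haskell thread sitting in takeMVar. */
void on_completion(struct callback_ctx *ctx)
{
    hs_try_putmvar(ctx->cap, ctx->mvar_sp);
}
```
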
* Fix an assertion that could randomly fail (Simon Marlow, 2016-08-05; 1 file, -0/+3)
    Summary: ASSERT_THREADED_CAPABILITY_INVARIANTS was testing properties of the
    returning_tasks queue, but that requires cap->lock to access safely. This
    assertion would randomly fail if stressed enough. Instead I've removed it from
    the catch-all ASSERT_PARTIAL_CAPABILITY_INVARIANTS and made it a separate
    assertion only called under cap->lock.
    Test Plan:
    ```
    cd testsuite/tests/concurrent/should_run
    make TEST=setnumcapabilities001 WAY=threaded1 EXTRA_HC_OPTS=-with-rtsopts=-DS CLEANUP=0
    while true; do ./setnumcapabilities001.run/setnumcapabilities001 4 9 2000 || break; done
    ```
    Reviewers: niteria, bgamari, ezyang, austin, erikd
    Subscribers: thomie
    Differential Revision: https://phabricator.haskell.org/D2440
    GHC Trac Issues: #10860
* Track the lengths of the thread queues (Simon Marlow, 2016-08-03; 1 file, -2/+7)
    Summary: Knowing the length of the run queue in O(1) time is useful: for
    example we don't have to traverse the run queue to know how many threads we
    have to migrate in schedulePushWork().
    Test Plan: validate
    Reviewers: ezyang, erikd, bgamari, austin
    Subscribers: thomie
    Differential Revision: https://phabricator.haskell.org/D2437
* NUMA cleanups (Simon Marlow, 2016-06-17; 1 file, -3/+35)
    - Move the numaMap and nNumaNodes out of RtsFlags to Capability.c
    - Add a test to tests/rts
* NUMA support (Simon Marlow, 2016-06-10; 1 file, -15/+23)
    Summary: The aim here is to reduce the number of remote memory accesses on
    systems with a NUMA memory architecture, typically multi-socket servers.

    Linux provides a NUMA API for doing two things:
      * Allocating memory local to a particular node
      * Binding a thread to a particular node

    When given the +RTS --numa flag, the runtime will
      * Determine the number of NUMA nodes (N) by querying the OS
      * Assign capabilities to nodes, so cap C is on node C%N
      * Bind worker threads on a capability to the correct node
      * Keep a separate free list in the block layer for each node
      * Allocate the nursery for a capability from node-local memory
      * Allocate blocks in the GC from node-local memory

    For example, using nofib/parallel/queens on a 24-core 2-socket machine:
    ```
    $ ./Main 15 +RTS -N24 -s -A64m
      Total time  173.960s  (  7.467s elapsed)
    $ ./Main 15 +RTS -N24 -s -A64m --numa
      Total time  150.836s  (  6.423s elapsed)
    ```
    The biggest win here is expected to be allocating from node-local memory, so
    that means programs using a large -A value (as here). According to perf, on
    this program the number of remote memory accesses was reduced by more than
    50% by using `--numa`.

    Test Plan:
      * validate
      * There's a new flag --debug-numa=<n> that pretends to do NUMA without
        actually making the OS calls, which is useful for testing the code on
        non-NUMA systems.
      * TODO: I need to add some unit tests
    Reviewers: erikd, austin, rwbarton, ezyang, bgamari, hvr, niteria
    Subscribers: thomie
    Differential Revision: https://phabricator.haskell.org/D2199
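
A tiny illustration of the capability-to-node mapping stated above (cap C goes
to node C % N); the function name is illustrative, not the RTS's.

```c
#include <stdint.h>

static inline uint32_t capToNumaNode(uint32_t cap_no, uint32_t n_numa_nodes)
{
    return (n_numa_nodes == 0) ? 0 : cap_no % n_numa_nodes;
}

/* Example: on a 2-socket machine (2 nodes) with -N24, capabilities
 * 0,2,4,... land on node 0 and 1,3,5,... on node 1, so a capability's
 * nursery blocks can always be allocated from memory local to its socket. */
```
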
* rts: Replace `nat` with `uint32_t` (Erik de Castro Lopo, 2016-05-05; 1 file, -17/+18)
    The `nat` type was an alias for `unsigned int` with a comment saying it was at
    least 32 bits. We keep the typedef in case client code is using it, but mark
    it as deprecated.
    Test Plan: Validated on Linux, OS X and Windows
    Reviewers: simonmar, austin, thomie, hvr, bgamari, hsyl20
    Differential Revision: https://phabricator.haskell.org/D2166
* Don't STATIC_INLINE giveCapabilityToTask (Simon Marlow, 2016-05-04; 1 file, -1/+1)
    This causes errors with some versions of gcc (4.4.7 here).
* Allow limiting the number of GC threads (+RTS -qn<n>) (Simon Marlow, 2016-05-04; 1 file, -18/+30)
    This allows the GC to use fewer threads than the number of capabilities. At
    each GC, we choose some of the capabilities to be "idle", which means that the
    thread running on that capability (if any) will sleep for the duration of the
    GC, and the other threads will do its work. We choose capabilities that are
    already idle (if any) to be the idle capabilities.

    The idea is that this helps in the following situation:
      * We want to use a large -N value so as to make use of hyperthreaded cores
      * We use a large heap size, so GC is infrequent
      * But we don't want to use all -N threads in the GC, because that thrashes
        the memory too much.
    See docs for usage.
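
A rough sketch of the idle-capability selection described above, with stand-in
types and names: with +RTS -qnK only K capabilities take part in the GC, and
capabilities that are already idle are preferred as the ones to leave out, so
their threads just sleep for the duration of the collection.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool running_haskell;   /* does this capability currently have work? */
    bool idle_for_gc;       /* decided below, once per GC */
} Capability;

void chooseIdleCapabilities(Capability *caps, uint32_t n_caps,
                            uint32_t gc_threads)
{
    uint32_t n_idle = (gc_threads >= n_caps) ? 0 : n_caps - gc_threads;

    /* First pick capabilities that are idle anyway... */
    for (uint32_t i = 0; i < n_caps; i++) {
        caps[i].idle_for_gc = false;
        if (n_idle > 0 && !caps[i].running_haskell) {
            caps[i].idle_for_gc = true;
            n_idle--;
        }
    }
    /* ...and only then sideline busy ones, if we still have to. */
    for (uint32_t i = 0; i < n_caps && n_idle > 0; i++) {
        if (!caps[i].idle_for_gc) {
            caps[i].idle_for_gc = true;
            n_idle--;
        }
    }
}
```
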
* RTS: Add setInCallCapability() (Simon Marlow, 2016-04-26; 1 file, -14/+19)
    This allows an OS thread to specify which capability it should run on when it
    makes a call into Haskell. It is intended for a fairly specialised use case,
    when the client wants to have tighter control over the mapping between OS
    threads and Capabilities - perhaps 1:1 correspondence, for example.
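
A hedged sketch of how a client might use this hook. The two-argument
signature shown matches setInCallCapability() as found in recent versions of
RtsAPI.h, so treat the exact parameters (and the header) as an assumption;
hs_do_work is a hypothetical foreign export.

```c
#include "Rts.h"

extern void hs_do_work(void);   /* hypothetical foreign-exported function */

void worker_os_thread(void)
{
    /* Ask the RTS to run this OS thread's in-calls on capability 3, giving a
     * fixed OS-thread-to-Capability mapping. */
    setInCallCapability(3, 1);
    hs_do_work();
}
```
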