yieldCapability() was not prepared to be called by a Task that is
neither a worker nor a bound Task. This could happen if we ended up in
yieldCapability via this call stack:

    performGC()
    scheduleDoGC()
    requestSync()
    yieldCapability()
and there were a few other ways this could happen via requestSync.
The fix is to handle this case in yieldCapability(): when the Task is
not a worker or a bound Task, we put it on the returning_workers
queue, where it will be woken up again.
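A rough sketch of the shape of the fix, with stubbed-out types rather than
the actual RTS code (waitForReturnCapability and waitForWorkerCapability
are the subroutine names from the change summary below):

    typedef struct Capability_ Capability;
    typedef struct Task_ { int is_worker; void *bound; } Task;

    void waitForReturnCapability(Capability **cap, Task *task);
    void waitForWorkerCapability(Capability **cap, Task *task);

    void yieldCapability_sketch(Capability **cap, Task *task)
    {
        if (!task->is_worker && task->bound == NULL) {
            /* e.g. a Task that arrived here via performGC(): park it on
             * the returning_workers queue until it is woken again */
            waitForReturnCapability(cap, task);
        } else {
            waitForWorkerCapability(cap, task);
        }
    }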
Summary of changes:
* `yieldCapability`: factored out subroutine `waitForWorkerCapability`
* `waitForReturnCapability` renamed to `waitForCapability`, and
factored out subroutine `waitForReturnCapability`
* `releaseCapabilityAndQueueWorker` renamed to `enqueueWorker`; it no
longer takes a lock and no longer tests `!isBoundTask()`
* `yieldCapability` adjusted for the refactorings; the only change in
behavior is for a Task that is neither a worker nor a bound Task.
Test Plan:
* new test concurrent/should_run/performGC
* validate
Reviewers: niteria, austin, ezyang, bgamari
Subscribers: thomie, bgamari
Differential Revision: https://phabricator.haskell.org/D997
GHC Trac Issues: #10545
----------------------------------------
Summary:
clearNursery resets all the bd->free pointers of nursery blocks to
make the blocks empty. In profiles we've seen clearNursery taking
significant amounts of time particularly with large -N and -A values.
This patch moves the work of clearNursery to the point at which we
actually need the new block, thereby introducing an invariant that
blocks to the right of the CurrentNursery pointer still need their
bd->free pointer reset. This should make things faster overall,
because we don't need to clear blocks that we don't use.
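A sketch of the scheme, with a simplified block descriptor; the real change
lives in the RTS storage manager:

    typedef struct bdescr_ {
        void *start;              /* first byte of the block */
        void *free;               /* next free byte; free == start means empty */
        struct bdescr_ *link;     /* next nursery block */
    } bdescr;

    /* Instead of clearNursery resetting every bd->free up front, reset
     * lazily when CurrentNursery advances onto the block: */
    bdescr *getNextNurseryBlock_sketch(bdescr *current)
    {
        bdescr *bd = current->link;
        if (bd != NULL) {
            bd->free = bd->start;  /* blocks right of CurrentNursery still
                                      need this reset: the new invariant */
        }
        return bd;
    }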
Test Plan: validate
Reviewers: AndreasVoellmy, ezyang, austin
Subscribers: thomie, carter, ezyang, simonmar
Differential Revision: https://phabricator.haskell.org/D318
----------------------------------------
Signed-off-by: Austin Seipp <austin@well-typed.com>
----------------------------------------
This reverts commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd.
----------------------------------------
Summary:
This reverts commit 4748f5936fe72d96edfa17b153dbfd84f2c4c053. The fix for #9423
was reverted because this commit introduced a C function setIOManagerControlFd()
(defined in Schedule.c) defined for all OS types, while the prototype
(in includes/rts/IOManager.h) was only included when mingw32_HOST_OS is
not defined. This broke Windows builds.
This commit reverts the original commit and resolves the problem by defining
setIOManagerControlFd() only when mingw32_HOST_OS is not defined, matching
the guard on the prototype. Hence the missing-prototype error no longer
occurs on Windows.
In addition, since the io_manager_control_wr_fd field of the Capability
struct is only used by setIOManagerControlFd, this commit includes that
field in the Capability struct only when mingw32_HOST_OS is not defined.
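The resulting guard structure, as a sketch (the signature is simplified;
the real prototype lives in includes/rts/IOManager.h):

    /* includes/rts/IOManager.h */
    #if !defined(mingw32_HOST_OS)
    void setIOManagerControlFd(int fd);
    #endif

    /* rts/Capability.c */
    #if !defined(mingw32_HOST_OS)
    void setIOManagerControlFd(int fd)
    {
        /* record fd in the Capability's io_manager_control_wr_fd field */
        (void)fd;
    }
    #endif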
Test Plan: Try to compile successfully on all platforms.
Reviewers: austin
Reviewed By: austin
Subscribers: simonmar, ezyang, carter
Differential Revision: https://phabricator.haskell.org/D174
----------------------------------------
This should fix the Windows fallout; hopefully the original change can be
re-applied once that's sorted out.
This reverts commit f9f89b7884ccc8ee5047cf4fffdf2b36df6832df.
Signed-off-by: Austin Seipp <austin@well-typed.com>
----------------------------------------
Summary:
Fix #9423.
The problem in #9423 is caused when code invoked by `hs_exit()` waits
on all foreign calls to return, but some IO managers are in `safe` foreign
calls and do not return. The previous design signaled to the timer manager
(via its control pipe) that it should "die" and when the timer manager
returned to Haskell-land, the Haskell code in timer manager then signalled
to the IO manager threads that they should return from foreign calls and
`die`. Unfortunately, in the shutdown sequence the timer manager is unable
to return to Haskell-land fast enough and so the code that signals to the
IO manager threads (via their control pipes) is never executed and the IO
manager threads remain out in the foreign calls.
This patch solves this problem by having the RTS signal to all the IO
manager threads (via their control pipes; and in addition to signalling
to the timer manager thread) that they should shut down (see `ioManagerDie()`
in `rts/Signals.c`). To do this, we arrange for each IO manager thread to
register its control pipe with the RTS (in `GHC.Thread.startIOManagerThread`).
In addition, `GHC.Thread.startTimerManagerThread` registers its control pipe.
These are registered via the C functions `setTimerManagerControlFd` (in
`rts/Signals.c`) and `setIOManagerControlFd` (in `rts/Capability.c`). The IO
manager control pipe file descriptors are stored in a new field of the
`Capability_` struct.
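A sketch of the shutdown signalling; the command byte value and the fd
bookkeeping here are illustrative, not the real rts/Signals.c code:

    #include <unistd.h>

    #define IO_MANAGER_DIE 0xfe              /* illustrative value */

    static int timer_manager_control_wr_fd = -1;
    static int io_manager_control_wr_fds[64];    /* one per Capability */
    static int n_io_manager_fds = 0;

    void ioManagerDie_sketch(void)
    {
        unsigned char b = IO_MANAGER_DIE;
        if (timer_manager_control_wr_fd >= 0)
            (void)write(timer_manager_control_wr_fd, &b, 1);
        for (int i = 0; i < n_io_manager_fds; i++)
            (void)write(io_manager_control_wr_fds[i], &b, 1);
    }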
Test Plan: See the notes on #9423 to recreate the problem and to verify that it no longer occurs with the fix.
Auditors: simonmar
Reviewers: simonmar, edsko, ezyang, austin
Reviewed By: austin
Subscribers: phaskell, simonmar, ezyang, carter, relrod
Differential Revision: https://phabricator.haskell.org/D129
GHC Trac Issues: #9423, #9284
----------------------------------------
This will hopefully help ensure some basic consistency going forward by
overriding buffer variables. In particular, it sets the wrap length, sets
the indentation offset to 4, and turns off tabs.
Signed-off-by: Austin Seipp <austin@well-typed.com>
----------------------------------------
Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>
----------------------------------------
We have various problems with reallocating the array of Capabilities,
due to threads in waitForReturnCapability that are already holding a
pointer to a Capability.
Rather than add more locking to make this safer, I decided it would be
easier to ensure that we never move the Capabilities at all. The
capabilities array is now an array of pointers to Capability. There
are extra indirections, but it rarely matters: we don't often access
Capabilities via the array; normally we already have a pointer to
one. I ran the parallel benchmarks and didn't see any difference.
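A sketch of the change (simplified; the real code also initialises each
new Capability):

    #include <stdlib.h>

    typedef struct Capability_ { unsigned int no; /* ... */ } Capability;

    static Capability **capabilities;    /* was: Capability *capabilities */
    static unsigned int n_capabilities;

    void moreCapabilities_sketch(unsigned int new_n)
    {
        Capability **arr = malloc(new_n * sizeof(Capability *));
        for (unsigned int i = 0; i < new_n; i++) {
            arr[i] = (i < n_capabilities)
                   ? capabilities[i]                /* keeps its address */
                   : calloc(1, sizeof(Capability)); /* never moved again */
        }
        free(capabilities);   /* frees only the old spine, not the caps */
        capabilities = arr;
        n_capabilities = new_n;
    }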
----------------------------------------
lnat was originally "long unsigned int" but we were using it when we
wanted a 64-bit type on a 64-bit machine. This broke on Windows x64,
where long == int == 32 bits. Using types of unspecified size is bad,
but what we really wanted was a type with N bits on an N-bit machine.
StgWord is exactly that.
lnat was mentioned in some APIs that clients might be using
(e.g. StackOverflowHook()), so we leave it defined but with a comment
to say that it's deprecated.
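The idea, as a sketch (the real definition in the RTS headers is selected
by the configure machinery rather than <stdint.h>):

    #include <stdint.h>

    /* word-sized on every platform, including Windows x64 where
     * 'unsigned long' is only 32 bits */
    typedef uintptr_t StgWord;

    /* Deprecated: use StgWord.  Kept because client-facing APIs such as
     * StackOverflowHook() still mention it. */
    typedef StgWord lnat;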
----------------------------------------
If we are interrupted to do a GC, then we do not immediately do another
one. This avoids a starvation situation where one Capability keeps
forcing a GC and the other Capabilities make no progress at all.
----------------------------------------
In addition to the existing global method. For now we just do
it both ways and assert they give the same grand total. At some
stage we can simplify the global method to just take the sum of
the per-cap counters.
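A sketch of the cross-check (field and variable names are illustrative):

    #include <assert.h>

    typedef struct Capability_ { unsigned long total_allocated; } Capability;

    extern Capability *capabilities;        /* n_capabilities entries */
    extern unsigned int n_capabilities;
    extern unsigned long global_allocated;  /* the existing global counter */

    unsigned long calcTotalAllocated_sketch(void)
    {
        unsigned long tot = 0;
        for (unsigned int i = 0; i < n_capabilities; i++)
            tot += capabilities[i].total_allocated;
        assert(tot == global_allocated);    /* both methods must agree */
        return tot;
    }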
----------------------------------------
allocator.
Prompted by a benchmark posted to parallel-haskell@haskell.org by
Andreas Voellmy <andreas.voellmy@gmail.com>. This program exhibits
contention for the block allocator when run with -N2 and greater
without the fix:
    {-# LANGUAGE MagicHash, UnboxedTuples, BangPatterns #-}
    module Main where

    import Control.Monad
    import Control.Concurrent
    import System.Environment
    import GHC.IO
    import GHC.Exts
    import GHC.Conc

    main = do
      [m] <- fmap (fmap read) getArgs
      n <- getNumCapabilities
      ms <- replicateM n newEmptyMVar
      sequence [ forkIO $ busyWorkerB (m `quot` n) >> putMVar mv () | mv <- ms ]
      mapM takeMVar ms

    busyWorkerB :: Int -> IO ()
    busyWorkerB n_loops = go 0
      where go !n | n >= n_loops = return ()
                  | otherwise =
                     do p <- (IO $ \s ->
                               case newPinnedByteArray# 1024# s of
                                 { (# s', mbarr# #) ->
                                   (# s', () #)
                                 }
                             )
                        go (n+1)
----------------------------------------
This patch allows setNumCapabilities to /reduce/ the number of active
capabilities as well as increase it. This is particularly tricky to
do, because a Capability is a large data structure and ties into the
rest of the system in many ways. Trying to clean it all up would be
extremely error prone.
So instead, the solution is to mark the extra capabilities as
"disabled". This has the following consequences:
- threads on a disabled capability are migrated away by the
scheduler loop
- disabled capabilities do not participate in GC
(see scheduleDoGC())
- No spark threads are created on this capability
(see scheduleActivateSpark())
- We do not attempt to migrate threads *to* a disabled
capability (see schedulePushWork()).
So a disabled capability should do no work, and does not participate
in GC, although it remains alive in other respects. For example, a
blocked thread might wake up on a disabled capability, and it will get
quickly migrated to a live capability. A disabled capability can
still initiate GC if necessary. Indeed, it turns out to be hard to
migrate bound threads, so we wait until the next GC to do this (see
comments for details).
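A sketch of the marking step (the variable enabled_capabilities is
illustrative naming, and the array-of-pointers shape is assumed):

    typedef struct Capability_ { unsigned int no; int disabled; } Capability;

    extern Capability **capabilities;
    extern unsigned int n_capabilities;
    extern unsigned int enabled_capabilities;

    void setNumCapabilities_shrink_sketch(unsigned int new_n)
    {
        for (unsigned int i = 0; i < n_capabilities; i++)
            capabilities[i]->disabled = (i >= new_n);
        enabled_capabilities = new_n;
        /* from here on: the scheduler migrates threads away, GC skips
         * these caps, and no sparks or pushed work land on them */
    }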
----------------------------------------
This is an experimental tweak to the parallel GC that avoids waking up
a Capability to do parallel GC if we know that the capability has been
idle for a (tunable) number of GC cycles. The idea is that if you're
only using a few Capabilities, there's no point waking up the ones
that aren't busy.
e.g. +RTS -qi3
says "A Capability will participate in parallel GC if it was running
at all since the last 3 GC cycles."
Results are a bit hit and miss, and I don't completely understand why
yet. Hence, for now it is turned off by default, and also not
documented except in the +RTS -? output.
----------------------------------------
At present the number of capabilities can only be *increased*, not
decreased. The latter presents a few more challenges!
----------------------------------------
Consider this experimental for the time being. There are a lot of
things that could go wrong, but I've verified that at least it works
on the test cases we have.
I also did some API cleanups while I was here. Previously we had:
    Capability * rts_eval (Capability *cap, HaskellObj p, /*out*/ HaskellObj *ret);
but this API is particularly error-prone: if you forget to discard the
Capability * you passed in and use the return value instead, then
you're in for subtle bugs with +RTS -N later on. So I changed all
these functions to this form:
    void rts_eval (/* inout */ Capability **cap,
                   /* in    */ HaskellObj p,
                   /* out   */ HaskellObj *ret)
It's much harder to use this version incorrectly, because you have to
pass the Capability in by reference.
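For example, a sketch using the public RtsAPI (rts_lock/rts_unlock are the
existing entry points for acquiring and releasing a Capability):

    #include "Rts.h"
    #include "RtsAPI.h"

    void runHaskellAction(HaskellObj io_action)
    {
        Capability *cap = rts_lock();
        HaskellObj ret;
        rts_eval(&cap, io_action, &ret);  /* cap is updated in place */
        rts_unlock(cap);                  /* always the current cap */
    }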
----------------------------------------
The parallel GC was using setContextSwitches() to stop all the other
threads, which sets the context_switch flag on every Capability. That
had the side effect of causing every Capability to also switch
threads, and since GCs can be much more frequent than context
switches, this increased the context switch frequency. When context
switches are expensive (because the switch is between two bound
threads or a bound and unbound thread), the difference is quite
noticeable.
The fix is to have a separate flag to indicate that a Capability
should stop and return to the scheduler, but not switch threads. I've
called this the "interrupt" flag.
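A sketch of the distinction (struct simplified):

    typedef struct Capability_ {
        volatile int interrupt;       /* stop and return to the scheduler,
                                         then resume the same thread */
        volatile int context_switch;  /* stop and pick another thread */
    } Capability;

    /* parallel GC: stop every capability without forcing thread switches */
    void interruptAllCapabilities_sketch(Capability **caps, unsigned int n)
    {
        for (unsigned int i = 0; i < n; i++)
            caps[i]->interrupt = 1;
    }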
----------------------------------------
The invariant is: created = converted + remaining + gcd + fizzled
Since sparks move between capabilities, we have to aggregate the
counters over all capabilities. This in turn means we can only check
the invariant at stable points where all but one of the capabilities are
stopped. We can do this at shutdown time and before and after a global
synchronised GC.
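A sketch of the check (field names are illustrative):

    #include <assert.h>

    typedef struct SparkCounters_ {
        unsigned long created, converted, gcd, fizzled;
    } SparkCounters;

    typedef struct Capability_ {
        SparkCounters spark_stats;
        unsigned long sparks_remaining;   /* current pool size */
    } Capability;

    /* Only valid at a stable point (shutdown, or around a global GC),
     * because sparks migrate between capabilities. */
    void checkSparkInvariant_sketch(Capability **caps, unsigned int n)
    {
        unsigned long created = 0, converted = 0, remaining = 0,
                      gcd = 0, fizzled = 0;
        for (unsigned int i = 0; i < n; i++) {
            created   += caps[i]->spark_stats.created;
            converted += caps[i]->spark_stats.converted;
            remaining += caps[i]->sparks_remaining;
            gcd       += caps[i]->spark_stats.gcd;
            fizzled   += caps[i]->spark_stats.fizzled;
        }
        assert(created == converted + remaining + gcd + fizzled);
    }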
----------------------------------------
This is mostly for the benefit of having sensible places to put tracing
code later. We want a code path that has somewhere to trace (in order):
(1) starting up all capabilities;
(2) N * starting up an individual capability;
(3) N * shutting down an individual capability;
(4) shutting down all capabilities.
This has to work in both threaded and non-threaded modes.
Locations (1) and (2) are provided by initCapabilities and
initCapability respectively. Previously, there was no location for (4),
and while shutdownCapability should be usable for (3), it was only called
in the !THREADED_RTS case.
Now, shutdownCapability is called unconditionally (and its body is
conditional on THREADED_RTS), and there is a new shutdownCapabilities
that calls shutdownCapability in a loop.
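A sketch of the new shape, with the trace points marked (types stubbed):

    typedef struct Capability_ { unsigned int no; } Capability;
    typedef struct Task_ Task;

    void shutdownCapability(Capability *cap, Task *task, int safe);

    extern Capability *capabilities;
    extern unsigned int n_capabilities;

    void shutdownCapabilities_sketch(Task *task, int safe)
    {
        /* each iteration is a location (3), pairing with initCapability */
        for (unsigned int i = 0; i < n_capabilities; i++)
            shutdownCapability(&capabilities[i], task, safe);
        /* location (4): pairs with initCapabilities */
    }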
----------------------------------------
This is a port of some of the changes from my private local-GC branch
(which is still in darcs, I haven't converted it to git yet). There
are a couple of small functional differences in the GC stats: first,
per-thread GC timings should now be more accurate, and secondly we now
report average and maximum pause times. e.g. from minimax +RTS -N8 -s:
                                      Tot time (elapsed)  Avg pause  Max pause
    Gen  0     2755 colls,  2754 par   13.16s    0.93s     0.0003s    0.0150s
    Gen  1      769 colls,   769 par    3.71s    0.26s     0.0003s    0.0059s
----------------------------------------
The new strategies library (parallel-2.0+, preferably 2.2+) is now
required for parallel programming, otherwise parallelism will be lost.
----------------------------------------
The list of threads blocked on an MVar is now represented as a list of
separately allocated objects rather than being linked through the TSOs
themselves. This lets us remove a TSO from the list in O(1) time
rather than O(n) time, by marking the list object. Removing this
linear component fixes some pathological performance cases where many
threads were blocked on an MVar and became unreachable simultaneously
(nofib/smp/threads007), or when sending an asynchronous exception to a
TSO in a long list of threads blocked on an MVar.
MVar performance has actually improved by a few percent as a result of
this change, slightly to my surprise.
This is the final cleanup in the sequence, which let me remove the old
way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent). It
is now the case that only the Capability that owns a TSO may modify
its state (well, almost), and this simplifies various things. More of
the RTS is based on message-passing between Capabilities now.
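A sketch of the new representation (cf. StgMVarTSOQueue in the real RTS;
fields simplified):

    typedef struct StgTSO_ StgTSO;

    typedef struct MVarTSOQueue_ {
        struct MVarTSOQueue_ *link;  /* next blocked thread's cell */
        StgTSO *tso;                 /* the blocked thread */
    } MVarTSOQueue;

    typedef struct MVar_ {
        MVarTSOQueue *head;          /* oldest waiter */
        MVarTSOQueue *tail;          /* newest waiter */
        void *value;                 /* the contents, when full */
    } MVar;

    /* O(1) removal: mark the cell invalid (in the real RTS, by
     * overwriting its info pointer) instead of unlinking it */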
----------------------------------------
This replaces the global blackhole_queue with a clever scheme that
enables us to queue up blocked threads on the closure that they are
blocked on, while still avoiding atomic instructions in the common
case.
Advantages:
- gets rid of a locked global data structure and some tricky GC code
(replacing it with some per-thread data structures and different
tricky GC code :)
- wakeups are more prompt: parallel/concurrent performance should
benefit. I haven't seen anything dramatic in the parallel
benchmarks so far, but a couple of threading benchmarks do improve
a bit.
- waking up a thread blocked on a blackhole is now O(1) (e.g. if
it is the target of throwTo).
- less sharing and better separation of Capabilities: communication
is done with messages, the data structures are strictly owned by a
Capability and cannot be modified except by sending messages.
- this change will ultimately enable us to do more intelligent
scheduling when threads block on each other. This is what started
off the whole thing, but it isn't done yet (#3838).
I'll be documenting all this on the wiki in due course.
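A sketch of the message involved (cf. MessageBlackHole in the real RTS):
the blocked thread sends this to the Capability that owns the blackhole,
which queues it on the closure itself.

    typedef struct StgTSO_ StgTSO;
    typedef struct StgClosure_ StgClosure;

    typedef struct MessageBlackHole_ {
        struct MessageBlackHole_ *link;
        StgTSO     *tso;   /* the thread to wake up */
        StgClosure *bh;    /* the blackhole it is blocked on */
    } MessageBlackHole;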
----------------------------------------
This replaces some complicated locking schemes with message-passing
in the implementation of throwTo. The benefits are
- previously it was impossible to guarantee that a throwTo from
a thread running on one CPU to a thread running on another CPU
would be noticed, and we had to rely on the GC to pick up these
forgotten exceptions. This no longer happens.
- the locking regime is simpler (though the code is about the same
size)
- threads can be unblocked from a blocked_exceptions queue without
having to traverse the whole queue now. It's a rare case, but it
replaces an O(n) operation with an O(1) one.
- generally we move in the direction of sharing less between
Capabilities (aka HECs), which will become important with other
changes we have planned.
Also in this patch I replaced several STM-specific closure types with
a generic MUT_PRIM closure type, which allowed a lot of code in the GC
and other places to go away, hence the line-count reduction. The
message-passing changes resulted in about a net zero line-count
difference.
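A sketch of the message (cf. MessageThrowTo in the real RTS): if the
target runs on another Capability, the exception travels as a message
instead of being delivered under a lock, so it can no longer be lost.

    typedef struct StgTSO_ StgTSO;
    typedef struct StgClosure_ StgClosure;

    typedef struct MessageThrowTo_ {
        struct MessageThrowTo_ *link;
        StgTSO     *source;      /* thread executing throwTo */
        StgTSO     *target;      /* thread to receive the exception */
        StgClosure *exception;
    } MessageThrowTo;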
----------------------------------------
The idea is that this leaves Tasks and OSThread in one-to-one
correspondence. The part of a Task that represents a call into
Haskell from C is split into a separate struct InCall, pointed to by
the Task and the TSO bound to it. A given OSThread/Task thus always
uses the same mutex and condition variable, rather than getting a new
one for each callback. Conceptually it is simpler, although there are
more types and indirections in a few places now.
This improves callback performance by removing some of the locks that
we had to take when making in-calls. Now we also keep the current Task
in a thread-local variable if supported by the OS and gcc (currently
only Linux).
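A sketch of the split (cf. struct InCall_ in the real RTS; fields trimmed):
the per-call state moves out of Task, so one OS thread keeps one Task, and
one mutex/condition variable, across all of its callbacks.

    typedef struct StgTSO_ StgTSO;

    typedef struct InCall_ {
        StgTSO *tso;                 /* the bound Haskell thread, if any */
        struct Task_ *task;          /* the Task (== OS thread) making the call */
        struct InCall_ *prev_stack;  /* enclosing in-call, for nested callbacks */
    } InCall;

    typedef struct Task_ {
        InCall *incall;              /* current in-call, NULL if none */
        /* the mutex + condition variable live here, reused across calls */
    } Task;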
----------------------------------------
This is a batch of refactoring to remove some of the GC's global
state, as we move towards CPU-local GC.
- allocateLocal() now allocates large objects into the local
nursery, rather than taking a global lock and allocating
them in gen 0 step 0.
- allocatePinned() was still allocating from global storage and
taking a lock each time, now it uses local storage.
(mallocForeignPtrBytes should be faster with -threaded).
- We had a gen 0 step 0, distinct from the nurseries, which are
stored in a separate nurseries[] array. This is slightly strange.
I removed the g0s0 global that pointed to gen 0 step 0, and
removed all uses of it. I think now we don't use gen 0 step 0 at
all, except possibly when there is only one generation. Possibly
more tidying up is needed here.
- I removed the global allocate() function, and renamed
allocateLocal() to allocate().
- the alloc_blocks global is gone. MAYBE_GC() and
doYouWantToGC() now check the local nursery only.
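A sketch of the new entry point and a typical call (types stubbed):

    typedef struct Capability_ Capability;
    typedef unsigned long W_;        /* a machine word, as in the RTS */
    typedef W_ *StgPtr;

    /* was allocateLocal(); allocates from cap's own nursery, touching
     * global state only when a fresh block is needed */
    StgPtr allocate(Capability *cap, W_ n_words);

    StgPtr alloc3_sketch(Capability *cap)
    {
        return allocate(cap, 3);     /* e.g. a 3-word object */
    }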
----------------------------------------
This has no effect with static libraries, but when the RTS is in a
shared library it does two things:
- it prevents the function from being exposed by the shared library
- internal calls to the function can use the faster non-PLT calls,
because the function cannot be overridden at link time.
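A sketch of the attribute in use; GHC wraps it in a macro so that non-gcc
compilers get an empty definition (the function name below is
hypothetical):

    #if defined(__GNUC__)
    #define RTS_PRIVATE __attribute__((visibility("hidden")))
    #else
    #define RTS_PRIVATE /* nothing */
    #endif

    /* not exported from the shared library; internal callers can use
     * direct, non-PLT calls */
    RTS_PRIVATE void freeCapabilities_internal(void);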
----------------------------------------
The first phase of this tidyup is focussed on the header files, and in
particular making sure we are exposing publicly exactly what we need
to, and no more.
- Rts.h now includes everything that the RTS exposes publicly,
rather than a random subset of it.
- Most of the public header files have moved into subdirectories, and
many of them have been renamed. But clients should not need to
include any of the other headers directly, just #include the main
public headers: Rts.h, HsFFI.h, RtsAPI.h.
- All the headers needed for via-C compilation have moved into the
stg subdirectory, which is self-contained. Most of the headers for
the rest of the RTS APIs have moved into the rts subdirectory.
- I left MachDeps.h where it is, because it is so widely used in
Haskell code.
- I left a deprecated stub for RtsFlags.h in place. The flag
structures are now exposed by Rts.h.
- Various internal APIs are no longer exposed by public header files.
- Various bits of dead code and declarations have been removed
- More gcc warnings are turned on, and the RTS code is more
warning-clean.
- More source files #include "PosixSource.h", and hence only use
standard POSIX (1003.1c-1995) interfaces.
There is a lot more tidying up still to do, this is just the first
pass. I also intend to standardise the names for external RTS APIs
(e.g use the rts_ prefix consistently), and declare the internal APIs
as hidden for shared libraries.
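For example, an external client now needs only the main public headers
(sketch):

    #include "HsFFI.h"
    #include "Rts.h"
    #include "RtsAPI.h"

    int main(int argc, char *argv[])
    {
        hs_init(&argc, &argv);
        /* ... use the RtsAPI to call into Haskell ... */
        hs_exit();
        return 0;
    }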
----------------------------------------
This reduces the latency between a context-switch being triggered and
the thread returning to the scheduler, which in turn should reduce the
cost of the GC barrier when there are many cores.
We still retain the old context_switch flag which is checked at the
end of each block of allocation. The idea is that setting HpLim may
fail if the target thread is modifying HpLim at the same time; the
context_switch flag is a fallback. It also allows us to "context
switch soon" without forcing an immediate switch, which can be costly.
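A sketch of the mechanism (field names approximate the Capability's
register table):

    typedef struct Capability_ {
        void *rHpLim;                 /* heap-check limit seen by compiled code */
        volatile int context_switch;  /* fallback, checked per allocation block */
    } Capability;

    static void stopCapability_sketch(Capability *cap)
    {
        cap->rHpLim = 0;   /* next heap check fails immediately; this store
                              may race with the mutator, hence the fallback */
    }

    void contextSwitchCapability_sketch(Capability *cap)
    {
        stopCapability_sketch(cap);
        cap->context_switch = 1;
    }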
----------------------------------------
This turns out to be quite vital for parallel programs:
- The way we discover which threads to traverse is by finding
dirty threads via the remembered sets (aka mutable lists).
- A dirty thread will be on the remembered set of the capability
that was running it, and we really want to traverse that thread's
stack using the GC thread for the capability, because it is in
that CPU's cache. If we get this wrong, we get penalised badly by
the memory system.
Previously we had per-capability mutable lists but they were
aggregated before GC and traversed by just one of the GC threads.
This resulted in very poor performance particularly for parallel
programs with deep stacks.
Now we keep per-capability remembered sets throughout GC, which also
removes a lock (recordMutableGen_sync).
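A sketch of the per-capability traversal (function names illustrative):

    typedef struct bdescr_ bdescr;

    typedef struct Capability_ {
        bdescr **mut_lists;          /* one remembered set per generation */
    } Capability;

    void scavenge_mutable_list(bdescr *list, unsigned int gen);

    /* each GC thread runs this for its own Capability only, so dirty
     * TSOs are scanned by the CPU whose cache already holds them */
    void scavenge_cap_mut_lists_sketch(Capability *self, unsigned int n_gens)
    {
        for (unsigned int g = 1; g < n_gens; g++)
            scavenge_mutable_list(self->mut_lists[g], g);
    }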
----------------------------------------
Fixes crashes on Windows and Sparc
----------------------------------------
The problem is that the packing caused some unaligned loads, which
led to bus errors on SPARC (and presumably reduced performance
elsewhere).
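An illustration of the hazard (simplified; not the actual struct):

    /* With packing, 'w' can land on an odd address; loading it then
     * traps (bus error) on SPARC and is slow on other platforms. */
    struct packed_bad {
        char tag;
        long w;
    } __attribute__((packed));

    /* Without the attribute, natural alignment is restored: */
    struct aligned_good {
        char tag;     /* padding inserted after this field */
        long w;
    };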
----------------------------------------
Previously, the GC had its own pool of threads to use as workers when
doing parallel GC. There was a "leader", which was the mutator thread
that initiated the GC, and the other threads were taken from the pool.
This was simple and worked fine for sequential programs, where we did
most of the benchmarking for the parallel GC, but falls down for
parallel programs. When we have N mutator threads and N cores, at GC
time we would have to stop N-1 mutator threads and start up N-1 GC
threads, and hope that the OS schedules them all onto separate cores.
In practice it doesn't, as you might expect.
Now we use the mutator threads to do GC. This works quite nicely,
particularly for parallel programs, where each mutator thread scans
its own spark pool, which is probably in its cache anyway.
There are some flag changes:

  -g<n>    removed (-g1 is still accepted for backwards compat).
           There's no way to have a different number of GC threads
           than mutator threads now.
  -q1      Use one OS thread for GC (turns off parallel GC)
  -qg<n>   Use parallel GC for generations >= <n> (default: 1)
Using parallel GC only for generations >=1 works well for sequential
programs. Compiling an ordinary sequential program with -threaded and
running it with -N2 or more should help if you do a lot of GC. I've
found that adding -qg0 (do parallel GC for generation 0 too) speeds up
some parallel programs, but slows down some sequential programs.
Being conservative, I left the threshold at 1.
ToDo: document the new options.