path: root/rts/Schedule.c
Commit message | Author | Date | Files | Lines
* Make forkProcess work with +RTS -N | Simon Marlow | 2011-12-06 | 1 | -86/+182
  Consider this experimental for the time being. There are a lot of things
  that could go wrong, but I've verified that at least it works on the test
  cases we have.

  I also did some API cleanups while I was here. Previously we had:

      Capability * rts_eval (Capability *cap, HaskellObj p, /*out*/HaskellObj *ret);

  but this API is particularly error-prone: if you forget to discard the
  Capability * you passed in and use the return value instead, then you're
  in for subtle bugs with +RTS -N later on. So I changed all these functions
  to this form:

      void rts_eval (/* inout */ Capability **cap,
                     /* in    */ HaskellObj p,
                     /* out   */ HaskellObj *ret)

  It's much harder to use this version incorrectly, because you have to pass
  the Capability in by reference.
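  As a rough usage sketch of the new convention (rts_lock()/rts_unlock() are
  the usual RTS API entry points; the exported closure name below is
  invented purely for illustration):

      #include "Rts.h"

      static void runEval (void)
      {
          /* hypothetical closure exported from compiled Haskell code */
          extern StgClosure example_closure;

          Capability *cap = rts_lock();   /* acquire a Capability */
          HaskellObj ret;

          /* cap is passed by reference: rts_eval may hand us back a
           * different Capability under +RTS -N, and we must keep using
           * the one it gives us. */
          rts_eval(&cap, (HaskellObj)&example_closure, &ret);

          rts_unlock(cap);                /* release whichever cap we now hold */
      }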
* Fix a scheduling bug in the threaded RTS | Simon Marlow | 2011-12-01 | 1 | -7/+13
  The parallel GC was using setContextSwitches() to stop all the other
  threads, which sets the context_switch flag on every Capability. That had
  the side effect of causing every Capability to also switch threads, and
  since GCs can be much more frequent than context switches, this increased
  the context switch frequency. When context switches are expensive (because
  the switch is between two bound threads or a bound and unbound thread),
  the difference is quite noticeable.

  The fix is to have a separate flag to indicate that a Capability should
  stop and return to the scheduler, but not switch threads. I've called this
  the "interrupt" flag.
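  A minimal model of the two flags (invented struct and function names; the
  real fields live in the RTS's Capability struct):

      #include <stdbool.h>

      typedef struct {
          volatile bool context_switch;  /* return to scheduler AND pick a new thread */
          volatile bool interrupt;       /* return to scheduler, keep the same thread */
      } Cap;

      /* Before: stopping the world for GC forced a thread switch everywhere. */
      static void stopAllForGC_old (Cap *caps, int n) {
          for (int i = 0; i < n; i++) caps[i].context_switch = true;
      }

      /* After: GC merely interrupts; the running thread resumes afterwards,
       * so frequent GCs no longer inflate the context-switch rate. */
      static void stopAllForGC_new (Cap *caps, int n) {
          for (int i = 0; i < n; i++) caps[i].interrupt = true;
      }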
* Make profiling work with multiple capabilities (+RTS -N) | Simon Marlow | 2011-11-29 | 1 | -2/+2
  This means that both time and heap profiling work for parallel programs.
  Main internal changes:

   - CCCS is no longer a global variable; it is now another pseudo-register
     in the StgRegTable struct. Thus every Capability has its own CCCS.

   - There is a new built-in CCS called "IDLE", which records ticks for
     Capabilities in the idle state. If you profile a single-threaded
     program with +RTS -N2, you'll see about 50% of time in "IDLE".

   - There is appropriate locking in rts/Profiling.c to protect the shared
     cost-centre-stack data structures.

  This patch does enough to get it working; I have cut one big corner: the
  cost-centre-stack data structure is still shared amongst all Capabilities,
  which means that multiple Capabilities will race when updating the
  "allocations" and "entries" fields of a CCS. Not only does this give
  unpredictable results, but it runs very slowly due to cache line bouncing.

  It is strongly recommended that you use -fno-prof-count-entries to disable
  the "entries" count when profiling parallel programs. (I shall add a note
  to this effect to the docs.)
* Time handling overhaul | Simon Marlow | 2011-11-25 | 1 | -1/+1
  Terminology cleanup: the type "Ticks" has been renamed "Time", which is an
  StgWord64 in units of TIME_RESOLUTION (currently nanoseconds). The
  terminology "tick" is now used consistently to mean the interval between
  timer signals.

  The ticker now always ticks in realtime (actually CLOCK_MONOTONIC if we
  have it). Before it used CPU time in the non-threaded RTS and realtime in
  the threaded RTS, but I've discovered that the CPU timer has terrible
  resolution (at least on Linux) and isn't much use for profiling. So now we
  always use realtime. This should also fix

  The default tick interval is now 10ms, except when profiling where we drop
  it to 1ms. This gives more accurate profiles without affecting runtime too
  much (<1%).

  Lots of cleanups - the resolution of Time is now in one place only (Rts.h)
  rather than having calculations that depend on the resolution scattered
  all over the RTS. I hope I found them all.
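  A sketch of the resulting convention (the real definitions live in the RTS
  headers; the helper names below are invented):

      #include <stdint.h>

      typedef uint64_t Time;               /* StgWord64 in the RTS */
      #define TIME_RESOLUTION 1000000000   /* units per second: nanoseconds */

      /* Resolution-dependent arithmetic stays next to the definition,
       * instead of being scattered across the RTS. */
      static inline Time     SecondsToTime (uint64_t s) { return s * TIME_RESOLUTION; }
      static inline uint64_t TimeToSeconds (Time t)     { return t / TIME_RESOLUTION; }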
* fix occasional failure of numsparks001 test | Simon Marlow | 2011-08-14 | 1 | -5/+14
  During shutdown we discard all the sparks from each Capability, but we
  were forgetting to account for the discarded sparks in the stats, leading
  to a failure of the assertion that tests the spark invariant. I've moved
  the discarding of sparks to just before the GC, to avoid race conditions,
  and counted the discarded sparks as GC'd.
* Move the call to heapCensus() into GarbageCollect() (fixes #5314) | Simon Marlow | 2011-07-20 | 1 | -5/+4
  The call now happens just before calling resurrectThreads(). This avoids a
  lot of problems, because resurrectThreads() may overwrite some closures in
  the heap, leaving slop behind. The bug in #5314 was one instance of this
  problem; the fix avoids them all in one go.
* Add spark counter tracing | Duncan Coutts | 2011-07-18 | 1 | -0/+2
  A new eventlog event containing 7 spark counters/statistics: sparks
  created, dud, overflowed, converted, GC'd, fizzled and remaining. These
  are maintained and logged separately for each capability. We log them at
  startup, on each GC (minor and major) and on shutdown.
* Move allocation of spark pools into initCapability | Duncan Coutts | 2011-07-18 | 1 | -4/+0
  Rather than in a separate phase of initSparkPools. It means all the spark
  stuff for a capability is initialised at the same time, which then becomes
  a good place to stick an initial spark trace event.
* Add assertion of the invariant for the spark counters | Duncan Coutts | 2011-07-18 | 1 | -0/+10
  The invariant is:

      created = converted + remaining + gcd + fizzled

  Since sparks move between capabilities, we have to aggregate the counters
  over all capabilities. This in turn means we can only check the invariant
  at stable points where all but one capability is stopped. We can do this
  at shutdown time and before and after a global synchronised GC.
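  A sketch of what such an assertion looks like (invented counter struct;
  the real counters hang off each Capability's spark pool):

      #include <assert.h>
      #include <stdint.h>

      typedef struct {
          uint64_t created, dud, overflowed, converted, gcd, fizzled, remaining;
      } SparkCounters;

      /* Only valid at stable points (shutdown, around a global sync GC),
       * because sparks migrate between capabilities while they run. */
      static void assertSparkInvariant (const SparkCounters *caps, int n)
      {
          SparkCounters t = {0};
          for (int i = 0; i < n; i++) {
              t.created   += caps[i].created;
              t.converted += caps[i].converted;
              t.remaining += caps[i].remaining;
              t.gcd       += caps[i].gcd;
              t.fizzled   += caps[i].fizzled;
          }
          assert(t.created == t.converted + t.remaining + t.gcd + t.fizzled);
      }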
* Change tryStealSpark so it does not consume fizzled sparks | Duncan Coutts | 2011-07-18 | 1 | -0/+4
  We want to count fizzled sparks accurately. tryStealSpark now returns
  fizzled sparks too, and the callers update the fizzled spark count.
* Fix Windows breakage (#5322) | Simon Marlow | 2011-07-18 | 1 | -0/+4
  When I modified StgRun to use the pure assembly version as part of the fix
  for #5250, we inadvertently lost the Windows magic for extending the
  stack. Win32 requires that the stack is extended a page at a time,
  otherwise you get a segfault. The C compiler knows how to do this, so we
  now call a C stub to ensure there's enough stack space at each invocation
  of the scheduler.
* Fix gcc 4.6 warnings; fixes #5176 | Ian Lynagh | 2011-06-25 | 1 | -2/+8
  Based on a patch from David Terei. Some parts are a little ugly (e.g.
  defining things that only ASSERTs use only when DEBUG is defined), so we
  might want to tweak things a little. I've also turned off -Werror for
  didn't-inline warnings, as we now get a few such warnings.
* Rearrange shutdownCapability code slightly | Duncan Coutts | 2011-05-26 | 1 | -10/+1
  This is mostly for the benefit of having sensible places to put tracing
  code later. We want a code path that has somewhere to trace (in order):

    (1) starting up all capabilities;
    (2) N * starting up an individual capability;
    (3) N * shutting down an individual capability;
    (4) shutting down all capabilities.

  This has to work in both threaded and non-threaded modes. Locations (1)
  and (2) are provided by initCapabilities and initCapability respectively.
  Previously, there was no location for (4), and while shutdownCapability
  should be usable for (3) it was only called in the !THREADED_RTS case.

  Now, shutdownCapability is called unconditionally (and the body is
  conditional on THREADED_RTS) and there is a new shutdownCapabilities that
  calls shutdownCapability in a loop.
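  A minimal sketch of the resulting shape (simplified, invented signatures;
  the real functions also take the current Task and live in
  rts/Capability.c):

      typedef struct { int no; /* ... */ } Capability;

      static Capability capabilities[4];
      static int n_capabilities = 4;

      void shutdownCapability (Capability *cap)
      {
      #if defined(THREADED_RTS)
          /* wait for the cap's workers to exit, free resources, ... */
      #endif
          (void)cap;
      }

      /* One place that visits every capability, giving a natural spot to
       * trace "shutting down all capabilities" around the loop. */
      void shutdownCapabilities (void)
      {
          for (int i = 0; i < n_capabilities; i++) {
              shutdownCapability(&capabilities[i]);
          }
      }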
* Revert "Add capability sets to the event system. Contains code from Duncan ↵Duncan Coutts2011-05-231-8/+8
| | | | | | | | Coutts." This reverts commit 58532eb46041aec8d4cbb48b054cb5b001edb43c. Turns out it didn't work on Windows and it'll need some non-trivial changes to make it work on Windows. We'll get it in later once that's sorted out.
* Add capability sets to the event system. Contains code from Duncan Coutts. | Spencer Janssen | 2011-05-18 | 1 | -8/+8
* scheduleDoGC: if we're doing heapCensus(), do it *before* releasing the other mutator threads (#5127) | Simon Marlow | 2011-05-11 | 1 | -6/+6
* Refactoring and tidy up | Simon Marlow | 2011-04-11 | 1 | -0/+10
  This is a port of some of the changes from my private local-GC branch
  (which is still in darcs, I haven't converted it to git yet). There are a
  couple of small functional differences in the GC stats: first, per-thread
  GC timings should now be more accurate, and secondly we now report average
  and maximum pause times. e.g. from minimax +RTS -N8 -s:

                                          Tot time (elapsed)  Avg pause  Max pause
      Gen  0      2755 colls,  2754 par   13.16s      0.93s     0.0003s    0.0150s
      Gen  1       769 colls,   769 par    3.71s      0.26s     0.0003s    0.0059s
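  The avg/max pause bookkeeping amounts to something like this per
  generation (invented names; the per-collection elapsed times come from the
  RTS timer):

      typedef struct {
          double tot_elapsed;   /* seconds */
          double max_pause;     /* seconds */
          int    n_colls;
      } GenStats;

      static void recordPause (GenStats *g, double pause)
      {
          g->tot_elapsed += pause;
          g->n_colls++;
          if (pause > g->max_pause) g->max_pause = pause;
      }

      static double avgPause (const GenStats *g)
      {
          return g->n_colls == 0 ? 0.0 : g->tot_elapsed / g->n_colls;
      }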
* scheduleThreadOn: use TSO_LOCKED even on the non-threaded RTS | Simon Marlow | 2011-03-30 | 1 | -1/+1
* scheduleProcessInbox: use non-blocking acquire, and take the whole queue | Simon Marlow | 2011-02-02 | 1 | -4/+28
  This is an improvement from my GC branch that helps performance for
  intensive message-passing communication between Capabilities.
* Annotate thread stop events with the owner of the black hole | Simon Marlow | 2011-01-27 | 1 | -2/+12
  So we can now get these in ThreadScope:

      19487000: cap 1: stopping thread 6 (blocked on black hole owned by thread 4)

  Note: needs an update to ghc-events. Older ThreadScopes will just ignore
  the new information.
* raiseExceptionHelper: update tso->stackobj->sp before calling threadStackOverflow (#4845) | Simon Marlow | 2010-12-21 | 1 | -0/+1
* Implement stack chunks and separate TSO/STACK objects | Simon Marlow | 2010-12-15 | 1 | -247/+36
  This patch makes two changes to the way stacks are managed:

  1. The stack is now stored in a separate object from the TSO. This means
     that it is easier to replace the stack object for a thread when the
     stack overflows or underflows; we don't have to leave behind the old
     TSO as an indirection any more. Consequently, we can remove
     ThreadRelocated and deRefTSO(), which were a pain.

     This is obviously the right thing, but the last time I tried to do it
     it made performance worse. This time I seem to have cracked it.

  2. Stacks are now represented as a chain of chunks, rather than a single
     monolithic object.

     The big advantage here is that individual chunks are marked clean or
     dirty according to whether they contain pointers to the young
     generation, and the GC can avoid traversing clean stack chunks during
     a young-generation collection. This means that programs with deep
     stacks will see a big saving in GC overhead when using the default GC
     settings.

     A secondary advantage is that there is much less copying involved as
     the stack grows. Programs that quickly grow a deep stack will see big
     improvements.

     In some ways the implementation is simpler, as nothing special needs
     to be done to reclaim stack as the stack shrinks (the GC just recovers
     the dead stack chunks). On the other hand, we have to manage stack
     underflow between chunks, so there's a new stack frame
     (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects. The
     total amount of code is probably about the same as before.

  There are new RTS flags:

      -ki<size>  Sets the initial thread stack size (default 1k). Egs: -ki4k -ki2m
      -kc<size>  Sets the stack chunk size (default 32k)
      -kb<size>  Sets the stack chunk buffer size (default 1k)

  -ki was previously called just -k, and the old name is still accepted for
  backwards compatibility. These new options are documented.
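  A toy model of the chunk chain (invented types; the real StgStack and
  UNDERFLOW_FRAME are heap objects defined in the RTS headers, and the GC
  rather than free() reclaims dead chunks):

      #include <stdlib.h>

      typedef struct StackChunk_ {
          struct StackChunk_ *next;   /* older chunk we underflow into */
          int    dirty;               /* points into the young generation? */
          size_t size;
          void  *sp;                  /* stack pointer within this chunk */
      } StackChunk;

      /* On underflow, execution pops back to the previous chunk instead of
       * copying a whole monolithic stack; a young-gen GC can skip any chunk
       * whose dirty flag is 0. */
      static StackChunk *underflow (StackChunk *top)
      {
          StackChunk *prev = top->next;
          free(top);                  /* stand-in for GC reclamation */
          return prev;
      }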
* Only reset the event log if logging is turned on (addendum to #4512) | Simon Marlow | 2010-12-10 | 1 | -4/+4
* Catch too-large allocations and emit an error message (#4505) | Simon Marlow | 2010-12-09 | 1 | -0/+4
  This is a temporary measure until we fix the bug properly (which is
  somewhat tricky, and we think might be easier in the new code generator).
  For now we get:

      ghc-stage2: sorry! (unimplemented feature or known bug)
        (GHC version 7.1 for i386-unknown-linux):
         Trying to allocate more than 1040384 bytes.

      See: http://hackage.haskell.org/trac/ghc/ticket/4550
      Suggestion: read data from a file instead of having large static data
      structures in the code.
* Fixes for #4512 | Dmitry Astapov | 2010-12-03 | 1 | -0/+11
  EventLog.c provides the ability to terminate event logging; Schedule.c
  uses it in forkProcess.
* small tidyup | Simon Marlow | 2010-11-26 | 1 | -4/+2
* Keep a maximum of 6 spare worker threads per Capability (#4262) | Simon Marlow | 2010-11-25 | 1 | -1/+2
* Fix for interruptible FFI handling | Simon Marlow | 2010-09-25 | 1 | -3/+3
  Set tso->why_blocked before calling maybePerformBlockedException(), so
  that throwToSingleThreaded() doesn't try to unblock the current thread (it
  is already unblocked).
* Don't interrupt when task blocks exceptions, don't immediately start exception. | Edward Z. Yang | 2010-09-25 | 1 | -1/+1
* Interruptible FFI calls with pthread_kill and CancelSynchronousIO. v4 | Edward Z. Yang | 2010-09-19 | 1 | -11/+12
  This patch adds support for interruptible FFI calls in the form of a new
  foreign import keyword 'interruptible', which can be used instead of
  'safe' or 'unsafe'. Interruptible FFI calls act like safe FFI calls,
  except that the worker thread they run on may be interrupted.

  Internally, it replaces BlockedOnCCall_NoUnblockEx with
  BlockedOnCCall_Interruptible, and changes the behavior of the RTS to not
  modify the TSO_ flags on the event of an FFI call from a thread that was
  interruptible. It also modifies the bytecode format for foreign call,
  adding an extra Word16 to indicate interruptibility.

  The semantics of interruption vary from platform to platform, but the
  intent is that any blocking system calls are aborted with an error code.
  This is most useful for making function calls to system library functions
  that support interrupting. There is no support for pre-Vista Windows.

  There is a partner testsuite patch which adds several tests for this
  functionality.
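  The "blocking system calls are aborted with an error code" mechanism looks
  like this from the C side, sketched with plain POSIX threads (an
  illustration of the pthread_kill/EINTR idea, not RTS code):

      #include <errno.h>
      #include <pthread.h>
      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>

      static void wakeup (int sig) { (void)sig; }

      static void *worker (void *arg)
      {
          (void)arg;
          char buf[1];
          /* A blocking call; a signal from another thread aborts it. */
          if (read(STDIN_FILENO, buf, 1) < 0 && errno == EINTR)
              printf("blocking call interrupted\n");
          return NULL;
      }

      int main (void)
      {
          struct sigaction sa;
          sa.sa_handler = wakeup;
          sigemptyset(&sa.sa_mask);
          sa.sa_flags = 0;              /* no SA_RESTART: read() returns EINTR */
          sigaction(SIGUSR1, &sa, NULL);

          pthread_t t;
          pthread_create(&t, NULL, worker, NULL);
          sleep(1);                     /* let the worker block in read() */
          pthread_kill(t, SIGUSR1);     /* roughly what the RTS does for
                                         * an 'interruptible' import */
          pthread_join(t, NULL);
          return 0;
      }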
* Fix #4074 (I hope). | Simon Marlow | 2010-05-18 | 1 | -0/+4
  1. allow multiple threads to call startTimer()/stopTimer() pairs
  2. disable the timer around fork() in forkProcess()

  A corresponding change to the process package is required.
* Fix crash in nested callbacks (#4038) | Simon Marlow | 2010-05-07 | 1 | -11/+11
  Broken by "Split part of the Task struct into a separate struct InCall".
* tidyup; no functional changes | Simon Marlow | 2010-05-05 | 1 | -6/+1
* BlockedOnMsgThrowTo is possible in resurrectThreads (#4030) | Simon Marlow | 2010-05-05 | 1 | -0/+6
* Change the representation of the MVar blocked queue | Simon Marlow | 2010-04-01 | 1 | -33/+6
  The list of threads blocked on an MVar is now represented as a list of
  separately allocated objects rather than being linked through the TSOs
  themselves. This lets us remove a TSO from the list in O(1) time rather
  than O(n) time, by marking the list object. Removing this linear component
  fixes some pathological performance cases where many threads were blocked
  on an MVar and became unreachable simultaneously (nofib/smp/threads007),
  or when sending an asynchronous exception to a TSO in a long list of
  threads blocked on an MVar.

  MVar performance has actually improved by a few percent as a result of
  this change, slightly to my surprise.

  This is the final cleanup in the sequence, which let me remove the old way
  of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the new way
  (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent). It is now the
  case that only the Capability that owns a TSO may modify its state (well,
  almost), and this simplifies various things. More of the RTS is based on
  message-passing between Capabilities now.
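  A toy model of the O(1) removal-by-marking trick (invented node type; the
  real queue entries are heap-allocated closures that the GC discards once
  marked):

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct QNode_ {
          struct QNode_ *next;
          void *tso;          /* the blocked thread */
          bool  invalid;      /* 'marked': skip this entry when waking */
      } QNode;

      /* Removal no longer walks the list: we just mark the node. */
      static void removeFromQueue (QNode *node) { node->invalid = true; }

      /* The next wakeup pass skips marked nodes as it goes (in the real
       * RTS, the GC then reclaims them). */
      static QNode *popRunnable (QNode **head)
      {
          while (*head != NULL && (*head)->invalid)
              *head = (*head)->next;
          QNode *n = *head;
          if (n != NULL) *head = n->next;
          return n;
      }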
* Move a thread to the front of the run queue when another thread blocks on it | Simon Marlow | 2010-03-29 | 1 | -13/+30
  This fixes #3838, and was made possible by the new BLACKHOLE
  infrastructure. To allow reordering of the run queue I had to make it
  doubly-linked, which entails some extra trickiness with regard to GC write
  barriers and suchlike.
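  The core list manipulation, sketched with invented types (the real run
  queue hangs off the Capability, and every pointer update needs the GC
  write barriers this toy version omits):

      typedef struct TSO_ {
          struct TSO_ *prev, *next;
      } TSO;

      typedef struct {
          TSO *run_queue_hd, *run_queue_tl;
      } Cap;

      /* O(1) unlink + push-front; only possible now that the queue is
       * doubly-linked.  Assumes t is already somewhere in the queue. */
      static void promote (Cap *cap, TSO *t)
      {
          if (cap->run_queue_hd == t) return;       /* already at the front */
          t->prev->next = t->next;                  /* unlink */
          if (t->next != NULL) t->next->prev = t->prev;
          else cap->run_queue_tl = t->prev;         /* t was the tail */
          t->prev = NULL;                           /* push on the front */
          t->next = cap->run_queue_hd;
          cap->run_queue_hd->prev = t;
          cap->run_queue_hd = t;
      }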
* New implementation of BLACKHOLEs | Simon Marlow | 2010-03-29 | 1 | -213/+30
  This replaces the global blackhole_queue with a clever scheme that enables
  us to queue up blocked threads on the closure that they are blocked on,
  while still avoiding atomic instructions in the common case.

  Advantages:

   - gets rid of a locked global data structure and some tricky GC code
     (replacing it with some per-thread data structures and different tricky
     GC code :)

   - wakeups are more prompt: parallel/concurrent performance should
     benefit. I haven't seen anything dramatic in the parallel benchmarks so
     far, but a couple of threading benchmarks do improve a bit.

   - waking up a thread blocked on a blackhole is now O(1) (e.g. if it is
     the target of throwTo).

   - less sharing and better separation of Capabilities: communication is
     done with messages, the data structures are strictly owned by a
     Capability and cannot be modified except by sending messages.

   - this change will ultimately enable us to do more intelligent scheduling
     when threads block on each other. This is what started off the whole
     thing, but it isn't done yet (#3838). I'll be documenting all this on
     the wiki in due course.
* remove out of date comment | Simon Marlow | 2010-04-01 | 1 | -3/+0
* avoid a fork deadlock (see comments) | Simon Marlow | 2010-03-29 | 1 | -0/+7
* Use message-passing to implement throwTo in the RTS | Simon Marlow | 2010-03-11 | 1 | -94/+91
  This replaces some complicated locking schemes with message-passing in the
  implementation of throwTo. The benefits are:

   - previously it was impossible to guarantee that a throwTo from a thread
     running on one CPU to a thread running on another CPU would be noticed,
     and we had to rely on the GC to pick up these forgotten exceptions.
     This no longer happens.

   - the locking regime is simpler (though the code is about the same size)

   - threads can be unblocked from a blocked_exceptions queue without having
     to traverse the whole queue now. It's a rare case, but replaces an O(n)
     operation with an O(1).

   - generally we move in the direction of sharing less between Capabilities
     (aka HECs), which will become important with other changes we have
     planned.

  Also in this patch I replaced several STM-specific closure types with a
  generic MUT_PRIM closure type, which allowed a lot of code in the GC and
  other places to go away, hence the line-count reduction. The
  message-passing changes resulted in about a net zero line-count
  difference.
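  A minimal model of the message-passing idea (invented types; real messages
  are heap closures such as MessageThrowTo, and the real sender takes a lock
  or uses CAS where this sketch just writes):

      #include <stddef.h>

      typedef struct Msg_ {
          struct Msg_ *link;
          int kind;                /* e.g. a hypothetical MSG_THROWTO tag */
          void *payload;
      } Msg;

      typedef struct {
          Msg *inbox;              /* owned by exactly one Capability */
      } Cap;

      /* Sender: queue the message on the target's inbox and (in the real
       * RTS) interrupt the target so it processes its inbox promptly. */
      static void sendMessage (Cap *target, Msg *m)
      {
          m->link = target->inbox;
          target->inbox = m;
      }

      /* Receiver: only the owning Capability reads its own inbox, so a
       * throwTo can no longer be silently lost between CPUs. */
      static void processInbox (Cap *cap)
      {
          for (Msg *m = cap->inbox; m != NULL; m = m->link) {
              /* dispatch on m->kind: throwTo, wakeup, ... */
          }
          cap->inbox = NULL;
      }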
* Split part of the Task struct into a separate struct InCall | Simon Marlow | 2010-03-09 | 1 | -79/+65
  The idea is that this leaves Tasks and OSThreads in one-to-one
  correspondence. The part of a Task that represents a call into Haskell
  from C is split into a separate struct InCall, pointed to by the Task and
  the TSO bound to it. A given OSThread/Task thus always uses the same mutex
  and condition variable, rather than getting a new one for each callback.
  Conceptually it is simpler, although there are more types and indirections
  in a few places now.

  This improves callback performance by removing some of the locks that we
  had to take when making in-calls. Now we also keep the current Task in a
  thread-local variable if supported by the OS and gcc (currently only
  Linux).
* fix lost context switches in GHCi (fixes test 3429(ghci)) | Simon Marlow | 2010-02-15 | 1 | -17/+13
* Fix a deadlock, and possibly other problems | Simon Marlow | 2010-01-26 | 1 | -3/+18
  After a bound thread had completed, its TSO remains in the heap until it
  has been GC'd, although the associated Task is returned to the caller
  where it is freed and possibly re-used.

  The bug was that GC was following the pointer to the Task and updating the
  TSO field, meanwhile the Task had already been recycled (it was being used
  by exitScheduler()). Confusion ensued, leading to a very occasional
  deadlock at shutdown, but in principle it could result in other crashes
  too.

  The fix is to remove the link between the TSO and the Task when the TSO
  has completed and the call to schedule() has returned; see comments in
  Schedule.c.
* fix a comment | Simon Marlow | 2009-12-30 | 1 | -1/+1
* remove an unnecessary debug trace, duplicated by a traceSchedEvent | Simon Marlow | 2009-12-30 | 1 | -1/+0
* Use local mut lists in UPD_IND(), also clean up Updates.h | Simon Marlow | 2009-12-31 | 1 | -1/+2
* If ACTIVITY_INACTIVE is set, wait for GC before resetting it | Simon Marlow | 2009-12-13 | 1 | -1/+5
  I don't think this fixes any real bugs, but there's a small possibility
  that when the RTS is woken up for an idle-time GC, the IO manager thread
  might be pre-empted, which would prevent the idle GC from happening; this
  change ensures that the idle GC happens anyway.
* Expose all EventLog events as DTrace probes | Manuel M T Chakravarty | 2009-12-12 | 1 | -11/+11
  - Defines a DTrace provider, called 'HaskellEvent', that provides a probe
    for every event of the eventlog framework.

  - In contrast to the original eventlog, the DTrace probes are available in
    all flavours of the runtime system (DTrace probes have virtually no
    overhead if not enabled); when -DTRACING is defined both the regular
    event log as well as DTrace probes can be used.

  - Currently, Mac OS X only. User-space DTrace probes are implemented
    differently on Mac OS X than in the original DTrace implementation.
    Nevertheless, it shouldn't be too hard to enable these probes on other
    platforms, too.

  - Documentation is at http://hackage.haskell.org/trac/ghc/wiki/DTrace
* add a missing unlockTSO() | Simon Marlow | 2009-12-09 | 1 | -0/+1
* threadStackUnderflow: fix recently introduced bug (conc068(threaded1) failure) | Simon Marlow | 2009-12-07 | 1 | -1/+1
  Bug introduced by "threadStackUnderflow: put the new TSO on the mut list
  if necessary".