path: root/rts/Threads.c
Commit message (Author, Age, Files, Lines)
...
* Revert "Default to infinite stack size (#8189)"Austin Seipp2013-09-081-8/+2
| | | | This reverts commit d85044f6b201eae0a9e453b89c0433608e0778f0.
* Default to infinite stack size (#8189) (Austin Seipp, 2013-09-08, 1 file, -2/+8)
  When servicing a stack overflow, only throw an exception to the given
  thread if the user explicitly set a max stack size, using +RTS -K.
  Otherwise just service it normally and grow the stack.

  In case we actually run out of *heap* (stack chunks are allocated on
  the heap), we need to bail by calling the stackOverflow() hook and
  exiting immediately.

  Authored-by: Ben Gamari <bgamari.foss@gmail.com>
  Signed-off-by: Austin Seipp <aseipp@pobox.com>
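
  A minimal C sketch of the policy above, using hypothetical stand-ins
  for the RTS types and helpers (only the branch structure comes from
  the commit message):

      #include <stdint.h>

      extern uint64_t max_stack_size;          /* 0 = no +RTS -K limit */
      extern void raise_stack_overflow(void);  /* throw to the thread  */
      extern int  grow_stack(void);            /* 0 on success         */
      extern void stackOverflow_hook_and_exit(void);

      void service_stack_overflow(uint64_t current_stack_size)
      {
          if (max_stack_size != 0 && current_stack_size >= max_stack_size) {
              /* The user explicitly bounded the stack: deliver the
                 StackOverflow exception as before. */
              raise_stack_overflow();
          } else if (grow_stack() != 0) {
              /* Stack chunks live on the heap; if the heap itself is
                 exhausted, bail via the stackOverflow() hook. */
              stackOverflow_hook_and_exit();
          }
      }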
* Don't move Capabilities in setNumCapabilities (#8209) (Simon Marlow, 2013-09-04, 1 file, -1/+1)
  We have various problems with reallocating the array of Capabilities,
  due to threads in waitForReturnCapability that are already holding a
  pointer to a Capability.  Rather than add more locking to make this
  safer, I decided it would be easier to ensure that we never move the
  Capabilities at all.  The capabilities array is now an array of
  pointers to Capability.  There are extra indirections, but it rarely
  matters - we don't often access Capabilities via the array, normally
  we already have a pointer to one.  I ran the parallel benchmarks and
  didn't see any difference.
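
  The change is easy to see in miniature (names illustrative, error
  handling omitted): growing an array of pointers reallocates only the
  pointers, so a Capability* held by a blocked thread stays valid.

      #include <stdlib.h>

      typedef struct Capability_ { int no; /* ... */ } Capability;

      static Capability **capabilities;  /* was: Capability *capabilities */
      static int n_capabilities;

      void grow_capabilities(int new_n)
      {
          capabilities = realloc(capabilities, new_n * sizeof(Capability *));
          for (int i = n_capabilities; i < new_n; i++) {
              capabilities[i] = malloc(sizeof(Capability)); /* never moves */
              capabilities[i]->no = i;
          }
          n_capabilities = new_n;
      }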
* Implement atomicReadMVar, fixing #4001. (Edward Z. Yang, 2013-07-09, 1 file, -0/+4)
  We add the invariant to the MVar blocked-threads queue that threads
  blocked on an atomic read are always at the front of the queue.  This
  invariant is easy to maintain, since takers are only ever added to
  the end of the queue.
  Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
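
  The invariant falls out of where each kind of waiter is enqueued, as
  in this sketch (illustrative types, not the RTS's own):

      typedef struct QNode_ { struct QNode_ *link; void *tso; } QNode;
      typedef struct { QNode *head, *tail; } BlockedQueue;

      void enqueue_reader(BlockedQueue *q, QNode *n)  /* front: O(1) */
      {
          n->link = q->head;
          q->head = n;
          if (q->tail == NULL) q->tail = n;
      }

      void enqueue_taker(BlockedQueue *q, QNode *n)   /* back: O(1) */
      {
          n->link = NULL;
          if (q->tail != NULL) q->tail->link = n; else q->head = n;
          q->tail = n;
      }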
* More accurate cost attribution for stacks. Fixes #7818. (Edward Z. Yang, 2013-04-22, 1 file, -2/+2)
  Previously, stacks were always attributed to CCCS_SYSTEM.  Now, we
  attribute them to the CCS at the time the stack was allocated.  If a
  stack grows, new stack chunks inherit the CCS of the old stack.
  Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
* Lots of nat -> StgWord changes (Simon Marlow, 2012-09-07, 1 file, -4/+4)
* Deprecate lnat, and use StgWord instead (Simon Marlow, 2012-09-07, 1 file, -4/+4)
  lnat was originally "long unsigned int", but we were using it when we
  wanted a 64-bit type on a 64-bit machine.  This broke on Windows x64,
  where long == int == 32 bits.  Using types of unspecified size is
  bad, but what we really wanted was a type with N bits on an N-bit
  machine.  StgWord is exactly that.

  lnat was mentioned in some APIs that clients might be using
  (e.g. StackOverflowHook()), so we leave it defined, but with a
  comment to say that it's deprecated.
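
  The portability point in a few lines of C (a sketch; the RTS's real
  definitions live in the stg headers):

      #include <stdint.h>

      /* On Win64 (LLP64), long is 32 bits even though pointers are 64. */
      typedef unsigned long lnat;    /* 32 bits on Win64: broken        */

      /* What was actually wanted: N bits on an N-bit machine. */
      typedef uintptr_t StgWord;     /* word-sized on every target      */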
* Fix crash with tiny initial stack size (#5993) (Simon Marlow, 2012-04-12, 1 file, -2/+2)
* updateThunk: minor tweak to avoid an unnecessary call to checkBlockingQueues (Simon Marlow, 2012-04-12, 1 file, -4/+8)
* threadStackOverflow: Tweak to stack chunk sizing (Simon Marlow, 2012-03-28, 1 file, -1/+2)
  If the old stack is only half full, then the next chunk we allocate
  will be twice as large, to accommodate large requests for stack.  In
  that case, make sure that the chunk we allocate is at least as large
  as the usual chunk size - there's no point allocating a 2k chunk
  (double the default initial chunk size of 1k) if in the normal case
  we would allocate a 32k chunk.
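
  The tweak amounts to clamping the computed size from below (a sketch
  with made-up names; the real sizing logic has more inputs):

      #define CHUNK_SIZE_W (32 * 1024)   /* usual -kc chunk size */

      static unsigned long next_chunk_size(unsigned long needed)
      {
          unsigned long size = 2 * needed;   /* room for big requests  */
          if (size < CHUNK_SIZE_W)
              size = CHUNK_SIZE_W;           /* never below the usual  */
          return size;
      }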
* Fix a bug in threadStackOverflow() (maybe #5214) (Simon Marlow, 2012-03-28, 1 file, -13/+22)
  If we overflow the current stack chunk and copy its entire contents
  into the next stack chunk, we could end up with two UNDERFLOW_FRAMEs.
  We had a special case to catch this in the case when the old stack
  chunk was the last one (ending in STOP_FRAME), but it went wrong for
  other chunks.

  I found this bug while poking around in the core dump attached to
  #5214, and running the nofib suite: imaginary/wheel_sieve2 crashed
  with +RTS -kc600 -kb300.  I don't know if this is the cause of all
  the symptoms reported in #5214.
* Rename the CCCS field of StgTSO so as not to conflict with the CCCS pseudo-register (Simon Marlow, 2012-01-05, 1 file, -1/+1)
  Needed by #5357
* Time handling overhaul (Simon Marlow, 2011-11-25, 1 file, -5/+7)
  Terminology cleanup: the type "Ticks" has been renamed "Time", which
  is an StgWord64 in units of TIME_RESOLUTION (currently nanoseconds).
  The terminology "tick" is now used consistently to mean the interval
  between timer signals.

  The ticker now always ticks in realtime (actually CLOCK_MONOTONIC if
  we have it).  Before, it used CPU time in the non-threaded RTS and
  realtime in the threaded RTS, but I've discovered that the CPU timer
  has terrible resolution (at least on Linux) and isn't much use for
  profiling.  So now we always use realtime.  This should also fix

  The default tick interval is now 10ms, except when profiling, where
  we drop it to 1ms.  This gives more accurate profiles without
  affecting runtime too much (<1%).

  Lots of cleanups - the resolution of Time is now in one place only
  (Rts.h) rather than having calculations that depend on the resolution
  scattered all over the RTS.  I hope I found them all.
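
  A sketch of the scheme (the helper is illustrative; the real
  definitions are centralised in Rts.h):

      #include <stdint.h>

      typedef uint64_t Time;                 /* StgWord64 in the RTS */
      #define TIME_RESOLUTION 1000000000ULL  /* units per second: ns */

      /* Keep resolution-dependent arithmetic behind helpers like this
         rather than scattering conversions around the RTS. */
      static inline Time SecondsToTime(uint64_t s)
      {
          return s * TIME_RESOLUTION;
      }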
* GC refactoring and cleanup (Simon Marlow, 2011-02-02, 1 file, -1/+1)
  Now we keep any partially-full blocks in the gc_thread[] structs
  after each GC, rather than moving them to the generation.  This
  should give us slightly better locality (though I wasn't able to
  measure any difference).  Also in this patch: better sanity checking
  with THREADED.
* fix warning (Simon Marlow, 2010-12-16, 1 file, -1/+1)
* fix for large stack allocations (Simon Marlow, 2010-12-15, 1 file, -6/+26)
* Implement stack chunks and separate TSO/STACK objects (Simon Marlow, 2010-12-15, 1 file, -46/+242)
  This patch makes two changes to the way stacks are managed:

  1. The stack is now stored in a separate object from the TSO.  This
     means that it is easier to replace the stack object for a thread
     when the stack overflows or underflows; we don't have to leave
     behind the old TSO as an indirection any more.  Consequently, we
     can remove ThreadRelocated and deRefTSO(), which were a pain.

     This is obviously the right thing, but the last time I tried to do
     it it made performance worse.  This time I seem to have cracked it.

  2. Stacks are now represented as a chain of chunks, rather than a
     single monolithic object.

     The big advantage here is that individual chunks are marked clean
     or dirty according to whether they contain pointers to the young
     generation, and the GC can avoid traversing clean stack chunks
     during a young-generation collection.  This means that programs
     with deep stacks will see a big saving in GC overhead when using
     the default GC settings.

     A secondary advantage is that there is much less copying involved
     as the stack grows.  Programs that quickly grow a deep stack will
     see big improvements.

  In some ways the implementation is simpler, as nothing special needs
  to be done to reclaim stack as the stack shrinks (the GC just
  recovers the dead stack chunks).  On the other hand, we have to
  manage stack underflow between chunks, so there's a new stack frame
  (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
  The total amount of code is probably about the same as before.

  There are new RTS flags:

    -ki<size>  Sets the initial thread stack size (default 1k).
               E.g.: -ki4k -ki2m
    -kc<size>  Sets the stack chunk size (default 32k)
    -kb<size>  Sets the stack chunk buffer size (default 1k)

  -ki was previously called just -k, and the old name is still accepted
  for backwards compatibility.  These new options are documented.
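
  A rough sketch of the resulting representation (fields modelled on
  the RTS's StgStack/StgTSO but simplified, not authoritative):

      #include <stdint.h>
      typedef uintptr_t StgWord;

      typedef struct StgStack_ {
          StgWord  stack_size;   /* size of this chunk, in words        */
          StgWord  dirty;        /* pointers into the young generation? */
          StgWord *sp;           /* stack pointer within this chunk     */
          StgWord  stack[];      /* payload; an UNDERFLOW_FRAME at the
                                    bottom links to the next chunk      */
      } StgStack;

      typedef struct StgTSO_ {
          StgStack *stackobj;    /* the TSO no longer embeds its stack  */
          /* ... scheduling fields ... */
      } StgTSO;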
* removeThreadFromQueue: stub out the link field before returning (#4813) (Simon Marlow, 2010-12-02, 1 file, -1/+4)
* comment (Simon Marlow, 2010-10-13, 1 file, -0/+13)
* Interruptible FFI calls with pthread_kill and CancelSynchronousIO. v4 (Edward Z. Yang, 2010-09-19, 1 file, -2/+2)
  This patch adds support for interruptible FFI calls in the form of a
  new foreign import keyword 'interruptible', which can be used instead
  of 'safe' or 'unsafe'.  Interruptible FFI calls act like safe FFI
  calls, except that the worker thread they run on may be interrupted.

  Internally, it replaces BlockedOnCCall_NoUnblockEx with
  BlockedOnCCall_Interruptible, and changes the behavior of the RTS to
  not modify the TSO_ flags in the event of an FFI call from a thread
  that was interruptible.  It also modifies the bytecode format for
  foreign calls, adding an extra Word16 to indicate interruptibility.

  The semantics of interruption vary from platform to platform, but the
  intent is that any blocking system calls are aborted with an error
  code.  This is most useful for making calls to system library
  functions that support interruption.  There is no support for
  pre-Vista Windows.

  There is a partner testsuite patch which adds several tests for this
  functionality.
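
  The POSIX mechanism the commit builds on can be shown in isolation
  (the signal chosen here is illustrative): a signal delivered with
  pthread_kill makes a blocking system call fail with EINTR, provided
  the handler is installed without SA_RESTART.

      #include <signal.h>
      #include <string.h>
      #include <pthread.h>

      static void noop_handler(int sig) { (void)sig; }

      void install_interrupt_handler(void)
      {
          struct sigaction sa;
          memset(&sa, 0, sizeof(sa));
          sa.sa_handler = noop_handler;  /* no SA_RESTART: blocking    */
          sigemptyset(&sa.sa_mask);      /* calls return -1 / EINTR    */
          sigaction(SIGUSR2, &sa, NULL);
      }

      void interrupt_worker(pthread_t worker)
      {
          pthread_kill(worker, SIGUSR2); /* abort its blocking syscall */
      }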
* Add a couple of missing tests for EAGER_BLACKHOLE (Simon Marlow, 2010-08-23, 1 file, -0/+1)
  This was leading to looping and excessive allocation, when the
  computation should have just blocked on the black hole.  Reported by
  Christian Höner zu Siederdissen <choener@tbi.univie.ac.at> on
  glasgow-haskell-users.
* Fix for derefing ThreadRelocated TSOs in MVar operations (Simon Marlow, 2010-04-07, 1 file, -2/+6)
* don't forget to deRefTSO() in tryWakeupThread() (Simon Marlow, 2010-04-06, 1 file, -1/+2)
* fix bug in migrateThread() (Simon Marlow, 2010-04-01, 1 file, -1/+1)
* Change the representation of the MVar blocked queue (Simon Marlow, 2010-04-01, 1 file, -124/+49)
  The list of threads blocked on an MVar is now represented as a list
  of separately allocated objects rather than being linked through the
  TSOs themselves.  This lets us remove a TSO from the list in O(1)
  time rather than O(n) time, by marking the list object.  Removing
  this linear component fixes some pathological performance cases where
  many threads were blocked on an MVar and became unreachable
  simultaneously (nofib/smp/threads007), or when sending an
  asynchronous exception to a TSO in a long list of threads blocked on
  an MVar.  MVar performance has actually improved by a few percent as
  a result of this change, slightly to my surprise.

  This is the final cleanup in the sequence, which let me remove the
  old way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of
  the new way (tryWakeupThread and MSG_TRY_WAKEUP, which is
  idempotent).  It is now the case that only the Capability that owns a
  TSO may modify its state (well, almost), and this simplifies various
  things.  More of the RTS is based on message-passing between
  Capabilities now.
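
  The O(1) removal trick in miniature (illustrative types): each
  blocked thread gets its own queue cell, and removal just marks the
  cell dead so traversals skip it.

      typedef struct MVarQueueCell_ {
          struct MVarQueueCell_ *link;
          void *tso;               /* blocked thread, or NULL = dead   */
      } MVarQueueCell;

      void remove_blocked_thread(MVarQueueCell *cell)
      {
          cell->tso = NULL;        /* O(1): no list traversal, and no
                                      relinking through other TSOs     */
      }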
* change throwTo to use tryWakeupThread rather than unblockOne (Simon Marlow, 2010-03-29, 1 file, -0/+19)
* New implementation of BLACKHOLEs (Simon Marlow, 2010-03-29, 1 file, -9/+205)
  This replaces the global blackhole_queue with a clever scheme that
  enables us to queue up blocked threads on the closure that they are
  blocked on, while still avoiding atomic instructions in the common
  case.

  Advantages:

  - gets rid of a locked global data structure and some tricky GC code
    (replacing it with some per-thread data structures and different
    tricky GC code :)

  - wakeups are more prompt: parallel/concurrent performance should
    benefit.  I haven't seen anything dramatic in the parallel
    benchmarks so far, but a couple of threading benchmarks do improve
    a bit.

  - waking up a thread blocked on a blackhole is now O(1) (e.g. if it
    is the target of throwTo).

  - less sharing and better separation of Capabilities: communication
    is done with messages, the data structures are strictly owned by a
    Capability and cannot be modified except by sending messages.

  - this change will ultimately enable us to do more intelligent
    scheduling when threads block on each other.  This is what started
    off the whole thing, but it isn't done yet (#3838).

  I'll be documenting all this on the wiki in due course.
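
  The shape of the idea, heavily simplified (the RTS's actual closure
  layout differs): blocked threads hang off the BLACKHOLE itself, so
  wakeup is a local pointer walk rather than a scan of a global queue.

      typedef struct BlockedOnBH_ {
          struct BlockedOnBH_ *link;  /* next waiter                   */
          void *tso;                  /* the blocked thread            */
      } BlockedOnBH;

      typedef struct {
          const void *info;     /* BLACKHOLE info pointer              */
          void *indirectee;     /* the owning TSO, or a queue of       */
                                /* BlockedOnBH waiters                 */
      } BlackHole;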
* Use message-passing to implement throwTo in the RTS (Simon Marlow, 2010-03-11, 1 file, -6/+12)
  This replaces some complicated locking schemes with message-passing
  in the implementation of throwTo.  The benefits are:

  - previously it was impossible to guarantee that a throwTo from a
    thread running on one CPU to a thread running on another CPU would
    be noticed, and we had to rely on the GC to pick up these forgotten
    exceptions.  This no longer happens.

  - the locking regime is simpler (though the code is about the same
    size).

  - threads can be unblocked from a blocked_exceptions queue without
    having to traverse the whole queue now.  It's a rare case, but
    replaces an O(n) operation with an O(1).

  - generally we move in the direction of sharing less between
    Capabilities (aka HECs), which will become important with other
    changes we have planned.

  Also in this patch I replaced several STM-specific closure types with
  a generic MUT_PRIM closure type, which allowed a lot of code in the
  GC and other places to go away, hence the line-count reduction.  The
  message-passing changes resulted in about a net zero line-count
  difference.
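
  A sketch of a throwTo message (fields illustrative): the sender
  enqueues this on the target Capability's inbox rather than locking
  the target directly, and the target services it at a safe point.

      typedef struct MessageThrowTo_ {
          struct MessageThrowTo_ *link;  /* next message in the inbox */
          void *source;                  /* TSO raising the exception */
          void *target;                  /* TSO to receive it         */
          void *exception;               /* the exception closure     */
      } MessageThrowTo;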
* Split part of the Task struct into a separate struct InCall (Simon Marlow, 2010-03-09, 1 file, -2/+2)
  The idea is that this leaves Tasks and OSThreads in one-to-one
  correspondence.  The part of a Task that represents a call into
  Haskell from C is split into a separate struct InCall, pointed to by
  the Task and the TSO bound to it.  A given OSThread/Task thus always
  uses the same mutex and condition variable, rather than getting a new
  one for each callback.  Conceptually it is simpler, although there
  are more types and indirections in a few places now.

  This improves callback performance by removing some of the locks that
  we had to take when making in-calls.  Now we also keep the current
  Task in a thread-local variable if supported by the OS and gcc
  (currently only Linux).
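
  The split, roughly (fields illustrative): the Task keeps the
  per-OS-thread synchronisation objects, while each call into Haskell
  gets a small InCall record instead of a whole new Task.

      #include <pthread.h>

      typedef struct InCall_ {
          void *tso;                   /* the bound Haskell thread     */
          struct InCall_ *prev_stack;  /* nested in-calls (callbacks)  */
      } InCall;

      typedef struct Task_ {
          pthread_t       id;          /* 1:1 with an OS thread        */
          pthread_mutex_t lock;        /* reused across all callbacks  */
          pthread_cond_t  cond;
          InCall         *incall;      /* current call into Haskell    */
      } Task;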
* kill some old GRAN/PARALLEL_HASKELL code (Simon Marlow, 2009-12-17, 1 file, -14/+0)
* Expose all EventLog events as DTrace probes (Manuel M T Chakravarty, 2009-12-12, 1 file, -2/+2)
  - Defines a DTrace provider, called 'HaskellEvent', that provides a
    probe for every event of the eventlog framework.
  - In contrast to the original eventlog, the DTrace probes are
    available in all flavours of the runtime system (DTrace probes have
    virtually no overhead if not enabled); when -DTRACING is defined,
    both the regular event log as well as DTrace probes can be used.
  - Currently, Mac OS X only.  User-space DTrace probes are implemented
    differently on Mac OS X than in the original DTrace implementation.
    Nevertheless, it shouldn't be too hard to enable these probes on
    other platforms, too.
  - Documentation is at http://hackage.haskell.org/trac/ghc/wiki/DTrace
* GC refactoring, remove "steps" (Simon Marlow, 2009-12-03, 1 file, -5/+5)
  The GC had a two-level structure, G generations each of T steps.
  Steps are for aging within a generation, mostly to avoid premature
  promotion.

  Measurements show that more than 2 steps is almost never worthwhile,
  and 1 step is usually worse than 2.  In theory fractional steps are
  possible, so the ideal number of steps is somewhere between 1 and 3.
  GHC's default has always been 2.

  We can implement 2 steps quite straightforwardly by having each block
  point to the generation to which objects in that block should be
  promoted, so blocks in the nursery point to generation 0, and blocks
  in gen 0 point to gen 1, and so on.

  This commit removes the explicit step structures, merging generations
  with steps, thus simplifying a lot of code.  Performance is
  unaffected.  The tunable number of steps is now gone, although it may
  be replaced in the future by a way to tune the aging in generation 0.
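
  The per-block destination pointer that replaces steps, sketched
  (field names illustrative):

      typedef struct generation_ generation;

      typedef struct bdescr_ {
          generation *gen;   /* generation this block belongs to        */
          generation *dest;  /* where live objects in this block get
                                promoted: nursery blocks -> gen 0,
                                gen 0 blocks -> gen 1, oldest -> itself */
      } bdescr;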
* Make allocatePinned use local storage, and other refactorings (Simon Marlow, 2009-12-01, 1 file, -3/+3)
  This is a batch of refactoring to remove some of the GC's global
  state, as we move towards CPU-local GC.

  - allocateLocal() now allocates large objects into the local nursery,
    rather than taking a global lock and allocating them in gen 0 step
    0.

  - allocatePinned() was still allocating from global storage and
    taking a lock each time; now it uses local storage.
    (mallocForeignPtrBytes should be faster with -threaded.)

  - We had a gen 0 step 0, distinct from the nurseries, which are
    stored in a separate nurseries[] array.  This is slightly strange.
    I removed the g0s0 global that pointed to gen 0 step 0, and removed
    all uses of it.  I think now we don't use gen 0 step 0 at all,
    except possibly when there is only one generation.  Possibly more
    tidying up is needed here.

  - I removed the global allocate() function, and renamed
    allocateLocal() to allocate().

  - the alloc_blocks global is gone.  MAYBE_GC() and doYouWantToGC()
    now check the local nursery only.
* Unify event logging and debug tracing. (Simon Marlow, 2009-08-29, 1 file, -8/+4)
  - tracing facilities are now enabled with -DTRACING, and -DDEBUG
    additionally enables debug-tracing.  -DEVENTLOG has been removed.
  - -debug now implies -eventlog
  - events can be printed to stderr instead of being sent to the binary
    .eventlog file by adding +RTS -v (which is implied by the +RTS -Dx
    options).
  - -Dx debug messages can be sent to the binary .eventlog file by
    adding +RTS -l.  This should help debugging by reducing the impact
    of debug tracing on execution time.
  - Various debug messages that duplicated the information in events
    have been removed.
* Fix #3429: a tricky race condition (Simon Marlow, 2009-08-18, 1 file, -2/+3)
  There were two bugs, and had it not been for the first one we would
  not have noticed the second one, so this is quite fortunate.

  The first bug is in stg_unblockAsyncExceptionszh_ret: when we found a
  pending exception to raise, but didn't end up raising it, there was a
  missing adjustment to the stack pointer.

  The second bug was that this case was actually happening at all: it
  ought to be incredibly rare, because the pending-exception thread
  would have to be killed between us finding it and attempting to raise
  the exception.  This made me suspicious.  It turned out that there
  was a race condition on the tso->flags field; multiple threads were
  updating this bitmask field non-atomically (one of the bits is the
  dirty-bit for the generational GC).  The fix is to move the dirty bit
  into its own field of the TSO, making the TSO one word larger
  (sadly).
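
  The flags race in miniature: a read-modify-write on a shared bitmask
  is not atomic, so concurrent updates can be lost (types illustrative).

      typedef struct {
          unsigned int flags;   /* bitmask: racy when shared            */
          unsigned int dirty;   /* the fix: a separate word, written    */
                                /* only when marking this TSO dirty     */
      } TSO;

      #define TSO_SOME_FLAG 1u

      void set_flag_racy(TSO *tso)
      {
          tso->flags |= TSO_SOME_FLAG;  /* load, OR, store: an update by
                                           another thread between the
                                           load and the store is lost */
      }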
* RTS tidyup sweep, first phase (Simon Marlow, 2009-08-02, 1 file, -3/+1)
  The first phase of this tidyup is focussed on the header files, and
  in particular making sure we are exposing publicly exactly what we
  need to, and no more.

  - Rts.h now includes everything that the RTS exposes publicly, rather
    than a random subset of it.
  - Most of the public header files have moved into subdirectories, and
    many of them have been renamed.  But clients should not need to
    include any of the other headers directly, just #include the main
    public headers: Rts.h, HsFFI.h, RtsAPI.h.
  - All the headers needed for via-C compilation have moved into the
    stg subdirectory, which is self-contained.  Most of the headers for
    the rest of the RTS APIs have moved into the rts subdirectory.
  - I left MachDeps.h where it is, because it is so widely used in
    Haskell code.
  - I left a deprecated stub for RtsFlags.h in place.  The flag
    structures are now exposed by Rts.h.
  - Various internal APIs are no longer exposed by public header files.
  - Various bits of dead code and declarations have been removed.
  - More gcc warnings are turned on, and the RTS code is more
    warning-clean.
  - More source files #include "PosixSource.h", and hence only use
    standard POSIX (1003.1c-1995) interfaces.

  There is a lot more tidying up still to do; this is just the first
  pass.  I also intend to standardise the names for external RTS APIs
  (e.g. use the rts_ prefix consistently), and declare the internal
  APIs as hidden for shared libraries.
* Remove old GUM/GranSim code (Simon Marlow, 2009-06-02, 1 file, -544/+0)
* Fix bug in previous change: allocate the correct size (Simon Marlow, 2009-05-29, 1 file, -2/+3)
* Round stack size to a whole number of megablocks (Simon Marlow, 2009-05-28, 1 file, -1/+1)
  This is not a bug fix; it just makes better use of memory.
* Add fast event logging (Simon Marlow, 2009-03-17, 1 file, -2/+6)
  Generate binary log files from the RTS containing a log of runtime
  events with timestamps.  The log file can be visualised in various
  ways, for investigating runtime behaviour and debugging performance
  problems.  See for example the forthcoming ThreadScope viewer.

  New GHC option:

    -eventlog  (link-time option) Enables event logging.

    +RTS -l    (runtime option) Generates <prog>.eventlog with the
               binary event information.

  This replaces some of the tracing machinery we already had in the
  RTS: e.g. +RTS -vg for GC tracing (we should do this using the new
  event logging instead).

  Event logging has almost no runtime cost when it isn't enabled,
  though in the future we might add more fine-grained events and this
  might change; hence having a link-time option and compiling a
  separate version of the RTS for event logging.  There's a small
  runtime cost for enabling event-logging; for most programs it
  shouldn't make much difference.  (Todo: docs)
* Instead of a separate context-switch flag, set HpLim to zero (Simon Marlow, 2009-03-13, 1 file, -2/+8)
  This reduces the latency between a context-switch being triggered and
  the thread returning to the scheduler, which in turn should reduce
  the cost of the GC barrier when there are many cores.

  We still retain the old context_switch flag which is checked at the
  end of each block of allocation.  The idea is that setting HpLim may
  fail if the target thread is modifying HpLim at the same time; the
  context_switch flag is a fallback.  It also allows us to "context
  switch soon" without forcing an immediate switch, which can be
  costly.
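
  Why zeroing HpLim works, in outline: compiled code already performs a
  heap check on (nearly) every allocation, so forcing that check to
  fail drops the thread into the scheduler at its next allocation
  (names and shape illustrative).

      extern char *Hp, *HpLim;     /* heap pointer and heap limit */

      void *try_allocate(unsigned long bytes)
      {
          if (Hp + bytes > HpLim)  /* always fails once HpLim == 0 */
              return NULL;         /* heap-check failure: yield to
                                      the scheduler                 */
          void *p = Hp;
          Hp += bytes;
          return p;
      }

      void request_context_switch(void) { HpLim = 0; }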
* indicate which TSOs are dirty in the printAllThreads() output (Simon Marlow, 2009-01-07, 1 file, -0/+5)
* Move the context_switch flag into the Capability (Simon Marlow, 2008-09-19, 1 file, -2/+2)
  Fixes a long-standing bug that could in some cases cause sub-optimal
  scheduling behaviour.
* Fix debug message formatting on Windows (simonpj@microsoft.com, 2008-09-10, 1 file, -1/+1)
* Fix race condition in wakeupThreadOnCapability() (#2574) (Simon Marlow, 2008-09-09, 1 file, -1/+1)
  wakeupThreadOnCapability() is used to signal another capability that
  there is a thread waiting to be added to its run queue.  It adds the
  thread to the (locked) wakeup queue on the remote capability.  In
  order to do this, it has to modify the TSO's link field, which has a
  write barrier.  The write barrier might put the TSO on the mutable
  list, and the bug was that it was using the mutable list of the
  *target* capability, which we do not have exclusive access to.  We
  should be using the current Capability's mutable list in this case.
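
  The fix in outline (names illustrative): the write barrier must
  record the TSO on the mutable list of the Capability we are running
  on, which we own exclusively.

      typedef struct Capability_ Capability;
      typedef struct TSO_ { struct TSO_ *link; unsigned int dirty; } TSO;

      extern void record_on_mutable_list(Capability *cap, TSO *tso);

      void set_tso_link(Capability *my_cap, TSO *tso, TSO *target)
      {
          if (!tso->dirty) {
              tso->dirty = 1;
              /* Bug: this used the *target* Capability's mutable list.
                 Fix: use my_cap, the Capability running this code. */
              record_on_mutable_list(my_cap, tso);
          }
          tso->link = target;
      }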
* Don't traverse the entire list of threads on every GC (phase 1) (Simon Marlow, 2008-04-16, 1 file, -4/+6)
  Instead of keeping a single list of all threads, keep one per step
  and only look at the threads belonging to steps that we are
  collecting.
* Add a write barrier to the TSO link field (#1589) (Simon Marlow, 2008-04-16, 1 file, -16/+17)
* Fix C/Haskell type mismatches (Ian Lynagh, 2007-04-03, 1 file, -3/+3)
* Match format strings and arguments for printf-like functions (sven.panne@aedion.de, 2006-08-10, 1 file, -3/+3)
* comment out a non-true assertion (Simon Marlow, 2006-06-16, 1 file, -1/+1)