path: root/rts/PrimOps.cmm
Commit message  (author, date; files changed, lines -/+)
* Test USE_MINIINTERPRETER rather than GhcUnregisterised  (Ian Lynagh, 2012-05-27; 1 file, -1/+1)
* Set the context_switch flag in yield#  (Simon Marlow, 2012-05-16; 1 file, -0/+5)

    yieldThread hasn't been working for a while: unless we set the context_switch flag to indicate that the current time slice is over, the RTS scheduler just runs the same thread again. Spotted by Andreas Voellmy (thanks!).
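    A minimal sketch (assumed example, not from the patch) of the user-level behaviour this fixes: Control.Concurrent.yield should hand the capability to other runnable threads instead of resuming the same one.

        import Control.Concurrent
        import Control.Monad

        -- Two threads take turns; without a working yield#, the scheduler
        -- could keep re-running the same thread on the non-threaded RTS.
        main :: IO ()
        main = do
          done <- newEmptyMVar
          _ <- forkIO $ do
            replicateM_ 3 (putStrLn "worker" >> yield)
            putMVar done ()
          replicateM_ 3 (putStrLn "main" >> yield)
          takeMVar done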
* Add a new primop mkWeakNoFinalizer (#5879)  (Simon Marlow, 2012-04-27; 1 file, -5/+11)
* Improve the handling of threadDelay in the non-threaded RTS  (Simon Marlow, 2012-04-11; 1 file, -5/+1)

    Firstly, we were rounding up too much, such that the smallest delay was 20ms. Secondly, there is no need to use millisecond resolution on a 64-bit machine where we have room in the TSO to use the normal nanosecond resolution that we use elsewhere in the RTS.
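    A small usage sketch (assumed example, not from the patch): threadDelay takes microseconds, and with this change the non-threaded RTS can honour delays well below the old ~20ms floor.

        import Control.Concurrent (threadDelay)

        main :: IO ()
        main = do
          putStrLn "sleeping 5ms"
          threadDelay 5000   -- argument is in microseconds
          putStrLn "done"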
* Fixed for unregisterised Windows builds  (Ian Lynagh, 2012-03-18; 1 file, -1/+1)
* raiseAsync: cope with ATOMICALLY_FRAMES inside UPDATE_FRAMES (#5866)  (Simon Marlow, 2012-02-27; 1 file, -0/+10)
* Fix a crash in STM when unregisterised  (Simon Marlow, 2012-01-06; 1 file, -1/+1)

    Fixes several test failures:
      ../../libraries/stm/tests  2411    [bad exit code] (normal,hpc,profasm,ghci,optllvm)
      ../../libraries/stm/tests  stm046  [bad exit code] (normal,hpc,profasm,ghci,optllvm)
      ../../libraries/stm/tests  stm061  [bad exit code] (normal,hpc,profasm,ghci,optllvm)
* Fix silly bug in casMutVar#: I forgot the GC write barrier  (Simon Marlow, 2011-12-09; 1 file, -0/+3)
* Add new primtypes 'ArrayArray#' and 'MutableArrayArray#'  (Manuel M T Chakravarty, 2011-12-07; 1 file, -0/+39)

    The primitive array types, such as 'ByteArray#', have kind #, but are represented by pointers. They are boxed, but unpointed types (i.e., they cannot be 'undefined'). The two categories of array types, [Mutable]Array# and [Mutable]ByteArray#, are containers for unboxed (and unpointed) as well as for boxed and pointed types. So far, we lacked support for containers for boxed, unpointed types (i.e., containers for the primitive arrays themselves). This is what the new primtypes provide.

    Containers for boxed, unpointed types are crucial for the efficient implementation of scattered nested arrays, which are central to the new DPH backend library dph-lifted-vseg. Without such containers, we cannot eliminate all unboxing from the inner loops of traversals processing scattered nested arrays.
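    A rough sketch of how the new primtypes might be used from GHC.Exts; the primop names and signatures below (newArrayArray#, writeByteArrayArray#, unsafeFreezeArrayArray#) are my recollection of the interface this patch adds and should be treated as assumptions, not a definitive listing.

        {-# LANGUAGE MagicHash, UnboxedTuples #-}
        import GHC.Exts
        import GHC.IO (IO(..))

        -- Lifted wrapper so the unlifted ArrayArray# can be returned from IO.
        data AA = AA ArrayArray#

        -- Store one ByteArray# directly (no extra box) in a one-element
        -- MutableArrayArray#, then freeze it.
        singletonAA :: ByteArray# -> IO AA
        singletonAA ba = IO $ \s0 ->
          case newArrayArray# 1# s0 of
            (# s1, maa #) ->
              case writeByteArrayArray# maa 0# ba s1 of
                s2 -> case unsafeFreezeArrayArray# maa s2 of
                        (# s3, aa #) -> (# s3, AA aa #)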
* Make profiling work with multiple capabilities (+RTS -N)  (Simon Marlow, 2011-11-29; 1 file, -20/+20)

    This means that both time and heap profiling work for parallel programs. Main internal changes:

      - CCCS is no longer a global variable; it is now another pseudo-register in the StgRegTable struct. Thus every Capability has its own CCCS.
      - There is a new built-in CCS called "IDLE", which records ticks for Capabilities in the idle state. If you profile a single-threaded program with +RTS -N2, you'll see about 50% of time in "IDLE".
      - There is appropriate locking in rts/Profiling.c to protect the shared cost-centre-stack data structures.

    This patch does enough to get it working; I have cut one big corner: the cost-centre-stack data structure is still shared amongst all Capabilities, which means that multiple Capabilities will race when updating the "allocations" and "entries" fields of a CCS. Not only does this give unpredictable results, but it runs very slowly due to cache line bouncing.

    It is strongly recommended that you use -fno-prof-count-entries to disable the "entries" count when profiling parallel programs. (I shall add a note to this effect to the docs).
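    A sketch (assumed example, not from the patch) of profiling a parallel program with the per-capability cost-centre stacks described above:

        -- Par.hs: build with  ghc -threaded -prof -fprof-auto -rtsopts Par.hs
        -- and run with        ./Par +RTS -N2 -p -RTS
        -- Idle capabilities show up under the "IDLE" cost centre, and
        -- -fno-prof-count-entries is recommended for parallel profiles.
        import Control.Concurrent

        work :: Int -> Int
        work n = sum [1 .. n]

        main :: IO ()
        main = do
          m <- newEmptyMVar
          _ <- forkIO (putMVar m $! work 2000000)
          r <- takeMVar m
          print (r + work 3000000)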
* Time handling overhaul  (Simon Marlow, 2011-11-25; 1 file, -8/+5)

    Terminology cleanup: the type "Ticks" has been renamed "Time", which is an StgWord64 in units of TIME_RESOLUTION (currently nanoseconds). The terminology "tick" is now used consistently to mean the interval between timer signals.

    The ticker now always ticks in realtime (actually CLOCK_MONOTONIC if we have it). Before it used CPU time in the non-threaded RTS and realtime in the threaded RTS, but I've discovered that the CPU timer has terrible resolution (at least on Linux) and isn't much use for profiling. So now we always use realtime. This should also fix

    The default tick interval is now 10ms, except when profiling where we drop it to 1ms. This gives more accurate profiles without affecting runtime too much (<1%).

    Lots of cleanups - the resolution of Time is now in one place only (Rts.h) rather than having calculations that depend on the resolution scattered all over the RTS. I hope I found them all.
* Add eventlog event for thread labels  (Duncan Coutts, 2011-11-04; 1 file, -2/+2)

    The existing GHC.Conc.labelThread will now also emit the thread label into the eventlog. Profiling tools like ThreadScope could then use the thread labels rather than thread numbers.
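    A small usage sketch (assumed example): label threads so the names appear in the eventlog (build with -eventlog, run with +RTS -l, then open the .eventlog in ThreadScope).

        import Control.Concurrent
        import GHC.Conc (labelThread)

        main :: IO ()
        main = do
          myThreadId >>= \t -> labelThread t "main"
          done <- newEmptyMVar
          _ <- forkIO $ do
            myThreadId >>= \t -> labelThread t "worker"
            putMVar done ()
          takeMVar done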
* Overhaul of infrastructure for profiling, coverage (HPC) and breakpoints  (Simon Marlow, 2011-11-02; 1 file, -0/+4)

    User visible changes
    ====================

    Profiling
    ---------

    Flags renamed (the old ones are still accepted for now):

      OLD         NEW
      ---------   ------------
      -auto-all   -fprof-auto
      -auto       -fprof-exported
      -caf-all    -fprof-cafs

    New flags:

      -fprof-auto              Annotates all bindings (not just top-level ones) with SCCs
      -fprof-top               Annotates just top-level bindings with SCCs
      -fprof-exported          Annotates just exported bindings with SCCs
      -fprof-no-count-entries  Do not maintain entry counts when profiling (can make
                               profiled code go faster; useful with heap profiling
                               where entry counts are not used)

    Cost-centre stacks have a new semantics, which should in most cases result in more useful and intuitive profiles. If you find this not to be the case, please let me know. This is the area where I have been experimenting most, and the current solution is probably not the final version, however it does address all the outstanding bugs and seems to be better than GHC 7.2.

    Stack traces
    ------------

    +RTS -xc now gives more information. If the exception originates from a CAF (as is common, because GHC tends to lift exceptions out to the top-level), then the RTS walks up the stack and reports the stack in the enclosing update frame(s).

    Result: +RTS -xc is much more useful now - but you still have to compile for profiling to get it. I've played around a little with adding 'head []' to GHC itself, and +RTS -xc does pinpoint the problem quite accurately. I plan to add more facilities for stack tracing (e.g. in GHCi) in the future.

    Coverage (HPC)
    --------------

    * derived instances are now coloured yellow if they weren't used
    * likewise record field names
    * entry counts are more accurate (hpc --fun-entry-count)
    * tab width is now correct (markup was previously off in source with tabs)

    Internal changes
    ================

    In Core, the Note constructor has been replaced by Tick (Tickish b) (Expr b), which is used to represent all the kinds of source annotation we support: profiling SCCs, HPC ticks, and GHCi breakpoints. Depending on the properties of the Tickish, different transformations apply to Tick. See CoreUtils.mkTick for details.

    Tickets
    =======

    This commit closes the following tickets, test cases to follow:

      - Close #2552: not a bug, but the behaviour is now more intuitive (test is T2552)
      - Close #680 (test is T680)
      - Close #1531 (test is result001)
      - Close #949 (test is T949)
      - Close #2466: test case has bitrotted (doesn't compile against current version of vector-space package)
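    A sketch (assumed example, not from the patch) of the profiling workflow these flags describe, using an explicit SCC annotation alongside -fprof-auto:

        -- Build:  ghc -prof -fprof-auto -rtsopts Fib.hs
        -- Run:    ./Fib +RTS -p -xc -RTS   (-xc needs a profiling build)
        module Main where

        fib :: Int -> Integer
        fib n = {-# SCC "fib" #-}
                if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

        main :: IO ()
        main = print ({-# SCC "top" #-} fib 28)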
* fix #5381: the -debug RTS could crash with "internal error: MVAR_CLEAN on mutable list" after a call to tryPutMVar#  (Simon Marlow, 2011-08-08; 1 file, -4/+4)

    I don't think this leads to any problems without -debug.
* Make array copy primops inline  (Johan Tibell, 2011-05-19; 1 file, -105/+0)
* Add array copy/clone primops  (Daniel Peebles, 2011-05-19; 1 file, -0/+106)
* add casMutVar#  (Simon Marlow, 2011-04-11; 1 file, -0/+19)
* GHC.Prim.threadStatus# now returns the cap number, and the value of TSO_LOCKED  (Simon Marlow, 2011-03-01; 1 file, -2/+11)
* Enable DTrace on Solaris; based on a patch from Karel Gardas  (Ian Lynagh, 2011-02-10; 1 file, -1/+14)
* do a bit of by-hand CSE  (Simon Marlow, 2011-02-02; 1 file, -7/+11)
* Implement stack chunks and separate TSO/STACK objects  (Simon Marlow, 2010-12-15; 1 file, -51/+30)

    This patch makes two changes to the way stacks are managed:

    1. The stack is now stored in a separate object from the TSO. This means that it is easier to replace the stack object for a thread when the stack overflows or underflows; we don't have to leave behind the old TSO as an indirection any more. Consequently, we can remove ThreadRelocated and deRefTSO(), which were a pain. This is obviously the right thing, but the last time I tried to do it it made performance worse. This time I seem to have cracked it.

    2. Stacks are now represented as a chain of chunks, rather than a single monolithic object. The big advantage here is that individual chunks are marked clean or dirty according to whether they contain pointers to the young generation, and the GC can avoid traversing clean stack chunks during a young-generation collection. This means that programs with deep stacks will see a big saving in GC overhead when using the default GC settings. A secondary advantage is that there is much less copying involved as the stack grows. Programs that quickly grow a deep stack will see big improvements.

    In some ways the implementation is simpler, as nothing special needs to be done to reclaim stack as the stack shrinks (the GC just recovers the dead stack chunks). On the other hand, we have to manage stack underflow between chunks, so there's a new stack frame (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects. The total amount of code is probably about the same as before.

    There are new RTS flags:

      -ki<size>   Sets the initial thread stack size (default 1k).  Egs: -ki4k -ki2m
      -kc<size>   Sets the stack chunk size (default 32k)
      -kb<size>   Sets the stack chunk buffer size (default 1k)

    -ki was previously called just -k, and the old name is still accepted for backwards compatibility. These new options are documented.
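    A sketch (assumed example) of a program with a deep, non-tail-recursive stack, run with the chunked-stack RTS flags introduced here:

        -- Build:  ghc -rtsopts Deep.hs
        -- Run:    ./Deep +RTS -ki4k -kc32k -kb1k -RTS
        main :: IO ()
        main = print (go 100000)
          where
            go :: Int -> Int
            go 0 = 0
            go n = 1 + go (n - 1)   -- not tail recursive, so the stack grows deep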
* fix bugs in tryTakeMVar/tryPutMVar  (Simon Marlow, 2010-10-29; 1 file, -3/+3)

    I'm surprised that these haven't caused any problems (or maybe they have?)
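    For reference, a sketch (assumed example) of the non-blocking MVar operations whose primops are fixed here:

        import Control.Concurrent.MVar

        main :: IO ()
        main = do
          m  <- newMVar (42 :: Int)
          ok <- tryPutMVar m 0     -- False: the MVar is already full
          print ok
          v  <- tryTakeMVar m      -- Just 42: takes the value without blocking
          print v
          v' <- tryTakeMVar m      -- Nothing: the MVar is now empty
          print v'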
* Follow GHC.Bool/GHC.Types merge  (Ian Lynagh, 2010-10-23; 1 file, -3/+3)
* newAlignedPinnedByteArray#: avoid allocating an extra word sometimes  (Simon Marlow, 2010-09-09; 1 file, -0/+5)
* add numSparks# primop (#4167)  (Simon Marlow, 2010-07-20; 1 file, -0/+11)
* FIX #3800 Store StgArrWords payload size in bytes  (Antoine Latter, 2010-01-01; 1 file, -11/+14)
* Fix for derefing ThreadRelocated TSOs in MVar operations  (Simon Marlow, 2010-04-07; 1 file, -36/+62)
* get the reg liveness right in the putMVar# heap check  (Simon Marlow, 2010-04-07; 1 file, -1/+1)
* initialise the headers of MVAR_TSO_QUEUE objects properly  (Simon Marlow, 2010-04-07; 1 file, -2/+2)
* putMVar#: fix reg liveness in the heap check  (Simon Marlow, 2010-04-06; 1 file, -1/+1)
* Change the representation of the MVar blocked queue  (Simon Marlow, 2010-04-01; 1 file, -155/+195)

    The list of threads blocked on an MVar is now represented as a list of separately allocated objects rather than being linked through the TSOs themselves. This lets us remove a TSO from the list in O(1) time rather than O(n) time, by marking the list object. Removing this linear component fixes some pathological performance cases where many threads were blocked on an MVar and became unreachable simultaneously (nofib/smp/threads007), or when sending an asynchronous exception to a TSO in a long list of threads blocked on an MVar. MVar performance has actually improved by a few percent as a result of this change, slightly to my surprise.

    This is the final cleanup in the sequence, which let me remove the old way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent). It is now the case that only the Capability that owns a TSO may modify its state (well, almost), and this simplifies various things. More of the RTS is based on message-passing between Capabilities now.
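    A sketch (assumed example) of the pathological pattern described above: many threads all blocked on a single MVar, in the spirit of nofib/smp/threads007.

        import Control.Concurrent
        import Control.Monad

        main :: IO ()
        main = do
          gate <- newEmptyMVar
          done <- newEmptyMVar
          let n = 10000 :: Int
          replicateM_ n $ forkIO $ do
            takeMVar gate          -- thousands of threads queue up here
            putMVar gate ()        -- pass the token to the next blocked thread
            putMVar done ()
          putMVar gate ()          -- release the first thread
          replicateM_ n (takeMVar done)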
* New implementation of BLACKHOLEs  (Simon Marlow, 2010-03-29; 1 file, -32/+0)

    This replaces the global blackhole_queue with a clever scheme that enables us to queue up blocked threads on the closure that they are blocked on, while still avoiding atomic instructions in the common case.

    Advantages:

      - gets rid of a locked global data structure and some tricky GC code (replacing it with some per-thread data structures and different tricky GC code :)
      - wakeups are more prompt: parallel/concurrent performance should benefit. I haven't seen anything dramatic in the parallel benchmarks so far, but a couple of threading benchmarks do improve a bit.
      - waking up a thread blocked on a blackhole is now O(1) (e.g. if it is the target of throwTo).
      - less sharing and better separation of Capabilities: communication is done with messages, the data structures are strictly owned by a Capability and cannot be modified except by sending messages.
      - this change will ultimately enable us to do more intelligent scheduling when threads block on each other. This is what started off the whole thing, but it isn't done yet (#3838). I'll be documenting all this on the wiki in due course.
* Enable shared libraries on Windows; fixes trac #3879  (Ian Lynagh, 2010-03-20; 1 file, -0/+2)
* Use message-passing to implement throwTo in the RTS  (Simon Marlow, 2010-03-11; 1 file, -6/+6)

    This replaces some complicated locking schemes with message-passing in the implementation of throwTo. The benefits are

      - previously it was impossible to guarantee that a throwTo from a thread running on one CPU to a thread running on another CPU would be noticed, and we had to rely on the GC to pick up these forgotten exceptions. This no longer happens.
      - the locking regime is simpler (though the code is about the same size)
      - threads can be unblocked from a blocked_exceptions queue without having to traverse the whole queue now. It's a rare case, but replaces an O(n) operation with an O(1).
      - generally we move in the direction of sharing less between Capabilities (aka HECs), which will become important with other changes we have planned.

    Also in this patch I replaced several STM-specific closure types with a generic MUT_PRIM closure type, which allowed a lot of code in the GC and other places to go away, hence the line-count reduction. The message-passing changes resulted in about a net zero line-count difference.
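    For context, a sketch (assumed example) of the user-level throwTo that this message-passing machinery implements:

        import Control.Concurrent
        import Control.Exception

        main :: IO ()
        main = do
          done <- newEmptyMVar
          tid  <- forkIO $
            (threadDelay 10000000 >> putStrLn "never reached")
              `catch` \e -> do
                putStrLn ("worker got: " ++ show (e :: AsyncException))
                putMVar done ()
          threadDelay 100000        -- let the worker block first
          throwTo tid ThreadKilled  -- asynchronous exception across threads
          takeMVar done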
* Fix a bug that can lead to noDuplicate# not working sometimes.  (Simon Marlow, 2010-02-16; 1 file, -8/+71)

    The symptom is that under some rare conditions when running in parallel, an unsafePerformIO or unsafeInterleaveIO computation might be duplicated, so e.g. lazy I/O might give the wrong answer (the stream might appear to have duplicate parts or parts missing). I have a program that demonstrates it with -N3 or more, some lazy I/O, and a lot of shared mutable state. See the comment with stg_noDuplicatezh in PrimOps.cmm that explains the problem and the fix. This took me about a day to find :-(
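    A sketch (assumed example) of why noDuplicate# matters: unsafePerformIO inserts a noDuplicate barrier so its effect runs at most once even when two capabilities race to evaluate the same thunk.

        import Control.Concurrent
        import Data.IORef
        import System.IO.Unsafe (unsafePerformIO)

        counter :: IORef Int
        counter = unsafePerformIO (newIORef 0)
        {-# NOINLINE counter #-}

        -- Shared thunk whose effect must not be duplicated under +RTS -N.
        shared :: Int
        shared = unsafePerformIO (atomicModifyIORef counter (\n -> (n + 1, n + 1)))
        {-# NOINLINE shared #-}

        main :: IO ()
        main = do
          m1 <- newEmptyMVar
          m2 <- newEmptyMVar
          _  <- forkIO (putMVar m1 $! shared)
          _  <- forkIO (putMVar m2 $! shared)
          a  <- takeMVar m1
          b  <- takeMVar m2
          print (a, b)   -- with the barrier working, both threads see the same value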
* Add missing import sm_mutex, which fixes the -fvia-c build  (benl@cse.unsw.edu.au, 2010-02-02; 1 file, -0/+1)
* Fix #650: use a card table to mark dirty sections of mutable arrays  (Simon Marlow, 2009-12-17; 1 file, -5/+23)

    The card table is an array of bytes, placed directly following the actual array data. This means that array reading is unaffected, but array writing needs to read the array size from the header in order to find the card table. We use a bytemap rather than a bitmap, because updating the card table must be multi-thread safe. Each byte refers to 128 entries of the array, but this is tunable by changing the constant MUT_ARR_PTRS_CARD_BITS in includes/Constants.h.
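    A sketch (assumed example, not RTS code) of the card-table index arithmetic described above, with MUT_ARR_PTRS_CARD_BITS = 7 so each card byte covers 128 elements:

        import Data.Bits (shiftL, shiftR)

        mutArrPtrsCardBits :: Int
        mutArrPtrsCardBits = 7

        -- Which card byte must be marked dirty when element i is written.
        cardOf :: Int -> Int
        cardOf i = i `shiftR` mutArrPtrsCardBits

        -- How many card bytes an array of n elements needs (rounded up).
        cardTableSize :: Int -> Int
        cardTableSize n =
          (n + (1 `shiftL` mutArrPtrsCardBits) - 1) `shiftR` mutArrPtrsCardBits

        main :: IO ()
        main = print (cardOf 200, cardTableSize 1000)   -- (1, 8)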
* Expose all EventLog events as DTrace probes  (Manuel M T Chakravarty, 2009-12-12; 1 file, -0/+15)

      - Defines a DTrace provider, called 'HaskellEvent', that provides a probe for every event of the eventlog framework.
      - In contrast to the original eventlog, the DTrace probes are available in all flavours of the runtime system (DTrace probes have virtually no overhead if not enabled); when -DTRACING is defined both the regular event log as well as DTrace probes can be used.
      - Currently, Mac OS X only. User-space DTrace probes are implemented differently on Mac OS X than in the original DTrace implementation. Nevertheless, it shouldn't be too hard to enable these probes on other platforms, too.
      - Documentation is at http://hackage.haskell.org/trac/ghc/wiki/DTrace
* add locking in mkWeakForeignEnv#  (Simon Marlow, 2009-12-08; 1 file, -0/+2)
* need locking around use of weak_ptr_list in mkWeak#  (Simon Marlow, 2009-12-07; 1 file, -0/+2)
* Make allocatePinned use local storage, and other refactorings  (Simon Marlow, 2009-12-01; 1 file, -5/+5)

    This is a batch of refactoring to remove some of the GC's global state, as we move towards CPU-local GC.

      - allocateLocal() now allocates large objects into the local nursery, rather than taking a global lock and allocating them in gen 0 step 0.
      - allocatePinned() was still allocating from global storage and taking a lock each time, now it uses local storage. (mallocForeignPtrBytes should be faster with -threaded).
      - We had a gen 0 step 0, distinct from the nurseries, which are stored in a separate nurseries[] array. This is slightly strange. I removed the g0s0 global that pointed to gen 0 step 0, and removed all uses of it. I think now we don't use gen 0 step 0 at all, except possibly when there is only one generation. Possibly more tidying up is needed here.
      - I removed the global allocate() function, and renamed allocateLocal() to allocate().
      - the alloc_blocks global is gone. MAYBE_GC() and doYouWantToGC() now check the local nursery only.
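    A sketch (assumed example) of the user-level path that benefits: mallocForeignPtrBytes allocates pinned memory via allocatePinned, so it no longer takes a global lock on every call.

        import Foreign.ForeignPtr
        import Foreign.Storable (peekByteOff, pokeByteOff)
        import Data.Word (Word8)

        main :: IO ()
        main = do
          fp <- mallocForeignPtrBytes 64 :: IO (ForeignPtr Word8)
          withForeignPtr fp $ \p -> do
            pokeByteOff p 0 (0xab :: Word8)   -- write into the pinned buffer
            b <- peekByteOff p 0 :: IO Word8
            print b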
* micro-opt: replace stmGetEnclosingTRec() with a field access  (Simon Marlow, 2009-10-14; 1 file, -5/+5)

    While fixing #3578 I noticed that this function was just a field access to StgTRecHeader, so I inlined it manually.
* Add a way to generate tracing events programmatically  (Simon Marlow, 2009-09-25; 1 file, -0/+10)

    added:

        primop  TraceEventOp "traceEvent#" GenPrimOp
           Addr# -> State# s -> State# s
           { Emits an event via the RTS tracing framework. The contents
             of the event is the zero-terminated byte string passed as the
             first argument. The event will be emitted either to the .eventlog
             file, or to stderr, depending on the runtime RTS flags. }

    and added the required RTS functionality to support it. Also a bit of refactoring in the RTS tracing code.
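    A usage sketch (assumed example): later base releases expose this primop through Debug.Trace.traceEventIO; build with -eventlog and run with +RTS -l to capture the events.

        import Debug.Trace (traceEventIO)

        main :: IO ()
        main = do
          traceEventIO "START work"
          print (sum [1 .. 1000000 :: Int])
          traceEventIO "STOP work"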
* Fix #3429: a tricky race condition  (Simon Marlow, 2009-08-18; 1 file, -4/+4)

    There were two bugs, and had it not been for the first one we would not have noticed the second one, so this is quite fortunate. The first bug is in stg_unblockAsyncExceptionszh_ret: when we found a pending exception to raise but didn't end up raising it, there was a missing adjustment to the stack pointer.

    The second bug was that this case was actually happening at all: it ought to be incredibly rare, because the pending exception thread would have to be killed between us finding it and attempting to raise the exception. This made me suspicious. It turned out that there was a race condition on the tso->flags field; multiple threads were updating this bitmask field non-atomically (one of the bits is the dirty-bit for the generational GC). The fix is to move the dirty bit into its own field of the TSO, making the TSO one word larger (sadly).
* Rename primops from foozh_fast to stg_foozh  (Simon Marlow, 2009-08-03; 1 file, -84/+84)

    For consistency with other RTS exported symbols.
* propagate the result of atomically properly (fixes #3049)  (Simon Marlow, 2009-06-24; 1 file, -4/+8)
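    For context, a sketch (assumed example; requires the stm package) of the behaviour being fixed: the value computed inside atomically must be propagated back to the caller.

        import Control.Concurrent.STM

        main :: IO ()
        main = do
          tv <- newTVarIO (10 :: Int)
          r  <- atomically $ do
                  x <- readTVar tv
                  writeTVar tv (x + 1)
                  return (x * 2)        -- this result must survive the commit
          print r                       -- 20
          readTVarIO tv >>= print       -- 11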
* Remove the implementation of gmp primops from the rts  (Duncan Coutts, 2009-06-13; 1 file, -514/+1)
* Convert the gmp cmm primops to use local stack allocation  (Duncan Coutts, 2009-06-10; 1 file, -59/+56)

    Using global temp vars is really ugly and in the threaded case it needs slots in the StgRegTable. It'd also be pretty silly once we move the cmm primops out of the rts, into the integer-gmp package.
* Remove the unused remains of __decodeFloat  (Ian Lynagh, 2009-06-02; 1 file, -26/+0)
* fix cut-and-pasto in mkWeakForeignEnv#, causing random segfaults  (Simon Marlow, 2009-05-15; 1 file, -1/+1)