path: root/includes
* Add a header to all build system files:
  Simon Marlow, 2009-04-28 (2 files changed, -0/+24)

    # -----------------------------------------------------------------------------
    #
    # (c) 2009 The University of Glasgow
    #
    # This file is part of the GHC build system.
    #
    # To understand how the build system works and how to modify it, see
    #   http://hackage.haskell.org/trac/ghc/wiki/Building/Architecture
    #   http://hackage.haskell.org/trac/ghc/wiki/Building/Modifying
    #
    # -----------------------------------------------------------------------------
* GHC new build system megapatch
  Ian Lynagh, 2009-04-26 (4 files changed, -204/+184)
* add missing files (part of #3171 fix)
  Simon Marlow, 2009-04-24 (1 file changed, -0/+21)
* Add EVENT_CREATE_SPARK_THREAD to replace EVENT_SPARK_TO_THREAD
  Simon Marlow, 2009-04-23 (1 file changed, -19/+20)

    Also some tidyups and renaming.
* add getOrSetSignalHandlerStore, much like getOrSetTypeableStore
  Simon Marlow, 2009-04-23 (1 file changed, -17/+0)

    Part of the fix for #3171.
* Added new EventLog event: Spark to Thread.
  donnie@darthik.com, 2009-04-13 (1 file changed, -15/+16)
* Fixed ThreadID to be defined as StgThreadID, not StgWord64. Changed
  CapabilityNum to CapNo. Added helper functions postCapNo() and postThreadID().
  donnie@darthik.com, 2009-04-13 (1 file changed, -2/+2)

    ThreadID was StgWord64, but should have been StgThreadID, which is
    currently StgWord32.

    Changed the name from CapabilityNum to CapNo to better reflect naming
    in the Capability struct, where "no" is the capability number.

    Modified EventLog.c to use the helper functions postCapNo() and
    postThreadID() for CapNo and ThreadID.
* Eventlog support for new event type: create spark.
  donnie@darthik.com, 2009-04-03 (1 file changed, -1/+2)
* SPARC NCG: HpLim is now always stored on the stack, not in a register
  Ben.Lippmeier@anu.edu.au, 2009-03-31 (1 file changed, -1/+5)

    This fixes the out-of-memory errors we were getting on SPARC after the
    following patch:

      Fri Mar 13 03:45:16 PDT 2009  Simon Marlow <marlowsd@gmail.com>
        * Instead of a separate context-switch flag, set HpLim to zero
        Ignore-this: 6c5bbe1ce2c5ef551efe98f288483b0

        This reduces the latency between a context-switch being triggered
        and the thread returning to the scheduler, which in turn should
        reduce the cost of the GC barrier when there are many cores.
* Set thread affinity with +RTS -qa (only on Linux so far)
  Simon Marlow, 2009-03-18 (2 files changed, -2/+3)
* add missing case in ENTER() (fixes readwrite002(profasm) crash)
  Simon Marlow, 2009-03-19 (1 file changed, -0/+1)
* Add fast event logging
  Simon Marlow, 2009-03-17 (2 files changed, -7/+146)

    Generate binary log files from the RTS containing a log of runtime
    events with timestamps. The log file can be visualised in various ways,
    for investigating runtime behaviour and debugging performance problems.
    See for example the forthcoming ThreadScope viewer.

    New GHC option:

      -eventlog   (link-time option) Enables event logging.

      +RTS -l     (runtime option) Generates <prog>.eventlog with the
                  binary event information.

    This replaces some of the tracing machinery we already had in the RTS:
    e.g. +RTS -vg for GC tracing (we should do this using the new event
    logging instead).

    Event logging has almost no runtime cost when it isn't enabled, though
    in the future we might add more fine-grained events and this might
    change; hence having a link-time option and compiling a separate
    version of the RTS for event logging. There's a small runtime cost for
    enabling event logging; for most programs it shouldn't make much
    difference.

    (ToDo: docs)
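    The core of such a logger is just appending fixed-size timestamped
    records to a buffer. A minimal C sketch of that idea follows; the
    record layout, tag values, and postEvent() name here are invented for
    illustration and are not GHC's actual .eventlog format.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical event record: a tag, a timestamp, and one payload word.
       GHC's real eventlog format differs; this only illustrates the
       "append binary records with timestamps" idea. */
    typedef struct { uint16_t tag; uint64_t timestamp_ns; uint64_t payload; } Event;

    static Event log_buf[1024];
    static size_t log_len = 0;

    static void postEvent(uint16_t tag, uint64_t ts, uint64_t payload) {
        if (log_len < sizeof log_buf / sizeof log_buf[0]) {
            log_buf[log_len].tag = tag;
            log_buf[log_len].timestamp_ns = ts;
            log_buf[log_len].payload = payload;
            log_len++;
        }
    }

    int main(void) {
        postEvent(1 /* e.g. "create thread" */, 100, 42);
        postEvent(2 /* e.g. "run thread"    */, 250, 42);
        printf("logged %zu events, first tag %u\n", log_len, (unsigned)log_buf[0].tag);
        return 0;
    }
    ```

    A real implementation would flush the buffer to the <prog>.eventlog
    file when it fills, which keeps the per-event cost to a few stores.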
* FIX biographical profiling (#3039, probably #2297)
  Simon Marlow, 2009-03-17 (1 file changed, -4/+26)

    Since we introduced pointer tagging, we no longer always enter a
    closure to evaluate it. However, the biographical profiler relies on
    closures being entered in order to mark them as "used", so we were
    getting spurious amounts of data attributed to VOID. It turns out
    there are various places that need to be fixed, and I think at least
    one of them was also wrong before pointer tagging
    (CgCon.cgReturnDataCon).
* Add getNumberOfProcessors(), FIX MacOS X build problem (hopefully)
  Simon Marlow, 2009-03-17 (1 file changed, -0/+3)

    Somebody needs to implement getNumberOfProcessors() for MacOS X;
    currently it will return 1.
* Use work-stealing for load-balancing in the GC
  Simon Marlow, 2009-03-13 (2 files changed, -5/+1)

    New flag: "+RTS -qb" disables load-balancing in the parallel GC
    (though this is subject to change; I think we will probably want to do
    something more automatic before releasing this).

    To get the "PARGC3" configuration described in the "Runtime support
    for Multicore Haskell" paper, use "+RTS -qg0 -qb -RTS".

    The main advantage of this is that it allows us to easily disable
    load-balancing altogether, which turns out to be important in parallel
    programs. Maintaining locality is sometimes more important than
    spreading the work out in parallel GC. There is a side benefit in that
    the parallel GC should have improved locality even when
    load-balancing, because each processor prefers to take work from its
    own queue before stealing from others.
* Instead of a separate context-switch flag, set HpLim to zero
  Simon Marlow, 2009-03-13 (2 files changed, -30/+10)

    This reduces the latency between a context-switch being triggered and
    the thread returning to the scheduler, which in turn should reduce the
    cost of the GC barrier when there are many cores.

    We still retain the old context_switch flag, which is checked at the
    end of each block of allocation. The idea is that setting HpLim may
    fail if the target thread is modifying HpLim at the same time; the
    context_switch flag is a fallback. It also allows us to "context
    switch soon" without forcing an immediate switch, which can be costly.
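    The trick works because the mutator already compares Hp against HpLim
    at every heap check, so zeroing HpLim forces the very next check to
    fail and drops the thread into the scheduler. A toy C model of that
    mechanism (variable names mirror the RTS, but the logic is a
    single-threaded simplification):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified model: a heap check fails when Hp > HpLim, so setting
       HpLim = 0 makes the next heap check fail and yields the thread.
       context_switch is the retained fallback flag from the commit. */
    static uintptr_t Hp = 1000, HpLim = 4096;
    static int context_switch = 0;

    static int heap_check_fails(void) { return Hp > HpLim; }

    int main(void) {
        printf("before: %d\n", heap_check_fails());  /* room left: passes */
        HpLim = 0;               /* request a context switch */
        context_switch = 1;      /* fallback in case the HpLim write races */
        printf("after: %d\n", heap_check_fails());   /* now fails: thread yields */
        return 0;
    }
    ```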
* Partial fix for #2917
  Simon Marlow, 2009-03-06 (1 file changed, -0/+1)

    - add newAlignedPinnedByteArray# for allocating pinned BAs with
      arbitrary alignment
    - the old newPinnedByteArray# now aligns to 16 bytes

    Foreign.alloca will use newAlignedPinnedByteArray#, and so might end
    up wasting less space than before (we used to align to 8 by default).
    Foreign.allocaBytes and Foreign.mallocForeignPtrBytes will get 16-byte
    aligned memory, which is enough to avoid problems with SSE
    instructions on x86, for example.

    There was a bug in the old newPinnedByteArray#: it aligned to 8 bytes,
    but would have failed if the header was not a multiple of 8
    (fortunately it always was, even with profiling). Also, we
    occasionally wasted some space unnecessarily due to alignment in
    allocatePinned().

    I haven't done anything about Foreign.malloc/mallocBytes, which will
    give you the same alignment guarantees as malloc() (8 bytes on
    Linux/x86 here).
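    The arithmetic an aligned pinned allocator needs is just rounding an
    address up to a power-of-two boundary. A small self-contained C
    sketch (the round_up helper is illustrative, not the RTS's; 16
    matches the commit's new default):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Round address p up to the next multiple of align, where align is a
       power of two. This is the standard mask trick the allocator uses to
       place the payload on an aligned boundary after the object header. */
    static uintptr_t round_up(uintptr_t p, uintptr_t align) {
        return (p + align - 1) & ~(align - 1);
    }

    int main(void) {
        unsigned char raw[64];
        uintptr_t base = (uintptr_t)raw;
        /* Offset by 1 so the input is definitely misaligned. */
        uintptr_t aligned = round_up(base + 1, 16);
        printf("aligned mod 16 = %lu\n", (unsigned long)(aligned % 16));
        return 0;
    }
    ```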
* Rewrite of signal-handling (ghc patch; see also base and unix patches)
  Simon Marlow, 2009-02-19 (1 file changed, -1/+1)

    The API is the same (for now). The new implementation has the
    capability to define signal handlers that have access to the siginfo
    of the signal (#592), but this functionality is not exposed in this
    patch. #2451 is the ticket for the new API.

    The main purpose of bringing this in now is to fix race conditions in
    the old signal handling code (#2858). Later we can enable the new API
    in the HEAD.

    Implementation differences:

    - More of the signal-handling is moved into Haskell. We store the
      table of signal handlers in an MVar, rather than having a table of
      StablePtrs in the RTS.

    - In the threaded RTS, the siginfo of the signal is passed down the
      pipe to the IO manager thread, which manages the business of
      starting up new signal handler threads. In the non-threaded RTS,
      the siginfo of caught signals is stored in the RTS, and the
      scheduler starts new signal handler threads.
* update Sparc store/load barrier (#3019), and fix comments
  Simon Marlow, 2009-02-12 (1 file changed, -3/+2)
* comment wibbles
  Simon Marlow, 2009-02-11 (1 file changed, -2/+2)
* NCG: Use sync instead of msync for a memory barrier for powerpc
  Ben.Lippmeier@anu.edu.au, 2009-02-13 (1 file changed, -1/+1)

    Darwin 9.6.0 + GCC 4.0.1 doesn't understand "msync". I think "sync"
    means the same thing.
* one more bugfix: a load/load memory barrier is required in stealWSDeque_()
  Simon Marlow, 2009-02-11 (1 file changed, -16/+37)
* build fix: add -I../rts/parallel
  Simon Marlow, 2009-02-06 (1 file changed, -1/+1)
* add a single-threaded version of cas()
  Simon Marlow, 2009-02-06 (1 file changed, -0/+11)
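    A single-threaded cas() keeps the same interface as the atomic version
    but needs no hardware synchronisation, since the non-threaded RTS has
    only one mutator. A sketch of that shape in C (types are simplified;
    the real RTS uses StgWord and its own prototypes):

    ```c
    #include <stdio.h>

    /* Non-atomic compare-and-swap: safe only when there is a single
       thread, as in the non-THREADED RTS. Returns the old value; the
       caller checks result == expected to detect success, matching the
       convention of the atomic version. */
    static unsigned long cas(volatile unsigned long *p,
                             unsigned long expected, unsigned long value) {
        unsigned long result = *p;
        if (result == expected) *p = value;
        return result;
    }

    int main(void) {
        volatile unsigned long x = 5;
        unsigned long r = cas(&x, 5, 9);   /* matches: swap happens */
        printf("cas returned %lu, x = %lu\n", r, (unsigned long)x);
        return 0;
    }
    ```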
* add a store/load memory barrier
  Simon Marlow, 2009-02-06 (1 file changed, -0/+25)
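    A store/load barrier prevents a later load from being reordered before
    an earlier store, which is exactly the reordering that x86 otherwise
    permits. A minimal C illustration using GCC's __sync_synchronize()
    full-fence builtin (a stand-in for the RTS's own barrier macro):

    ```c
    #include <stdio.h>

    /* flag/data are volatile so the compiler keeps the accesses; the
       fence is what constrains the hardware ordering. In a real producer/
       consumer, the fence would sit between publishing data and reading
       the other side's flag. */
    static volatile int data = 0;
    static volatile int flag = 0;

    int main(void) {
        data = 42;               /* store */
        __sync_synchronize();    /* store/load barrier (mfence on x86) */
        int other_flag = flag;   /* load: cannot be hoisted above the store */
        printf("data=%d other_flag=%d\n", data, other_flag);
        return 0;
    }
    ```

    This single-threaded program cannot demonstrate the reordering itself;
    it only shows where the fence goes in the store-then-load pattern.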
* SPARC NCG: Give regs o0-o5 back to the allocator
  Ben.Lippmeier@anu.edu.au, 2009-02-03 (1 file changed, -11/+45)
* Implement #2191 (traceCcs# -- prints CCS of a value when available -- take 3)
  Samuel Bronson, 2009-01-27 (1 file changed, -0/+2)

    In this version, I untag R1 before using it, and even enter R2 at the
    end rather than simply returning it (which didn't work right when R2
    was a thunk).
* add comment for ASSERT_LOCK_HELD()
  Simon Marlow, 2009-01-26 (1 file changed, -0/+5)
* Reinstate: Always check the result of pthread_mutex_lock() and
  pthread_mutex_unlock().
  Ian Lynagh, 2009-01-17 (1 file changed, -30/+10)

    Sun Jan 4 19:24:43 GMT 2009  Matthias Kilian <kili@outback.escape.de>

      Don't check pthread_mutex_*lock() only on Linux and/or only if
      DEBUG is defined. The return values of those functions are well
      defined and should be supported on all operating systems with
      pthreads. The checks are cheap enough to do them even in the
      default build (without -DDEBUG). While here, recycle an unused
      macro ASSERT_LOCK_NOTHELD, and let the debugBelch part enabled
      with -DLOCK_DEBUG work independently of -DDEBUG.
* UNDO: Always check the result of pthread_mutex_lock() and
  pthread_mutex_unlock().
  Simon Marlow, 2009-01-16 (1 file changed, -10/+30)

    This patch caused problems on Mac OS X; undoing until we can do it
    better.

    rolling back:

    Sun Jan 4 19:24:43 GMT 2009  Matthias Kilian <kili@outback.escape.de>
      * Always check the result of pthread_mutex_lock() and
        pthread_mutex_unlock().

      Don't check pthread_mutex_*lock() only on Linux and/or only if
      DEBUG is defined. The return values of those functions are well
      defined and should be supported on all operating systems with
      pthreads. The checks are cheap enough to do them even in the
      default build (without -DDEBUG). While here, recycle an unused
      macro ASSERT_LOCK_NOTHELD, and let the debugBelch part enabled
      with -DLOCK_DEBUG work independently of -DDEBUG.

        M ./includes/OSThreads.h -30 +10
* Always check the result of pthread_mutex_lock() and pthread_mutex_unlock().
  Matthias Kilian, 2009-01-04 (1 file changed, -30/+10)

    Don't check pthread_mutex_*lock() only on Linux and/or only if DEBUG
    is defined. The return values of those functions are well defined and
    should be supported on all operating systems with pthreads. The
    checks are cheap enough to do them even in the default build (without
    -DDEBUG). While here, recycle an unused macro ASSERT_LOCK_NOTHELD,
    and let the debugBelch part enabled with -DLOCK_DEBUG work
    independently of -DDEBUG.
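    Checking those return values unconditionally amounts to wrapping each
    call in a cheap error check. A C sketch in the spirit of the patch
    (the CHECKED() macro name is invented here, not the RTS's macro):

    ```c
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* POSIX defines the return values of pthread_mutex_lock/unlock, so
       checking them is portable and cheap enough for non-DEBUG builds
       too, which is the point of the patch above. */
    #define CHECKED(call)                                          \
        do {                                                       \
            int r_ = (call);                                       \
            if (r_ != 0) {                                         \
                fprintf(stderr, "%s failed: %d\n", #call, r_);     \
                exit(1);                                           \
            }                                                      \
        } while (0)

    int main(void) {
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        CHECKED(pthread_mutex_lock(&m));
        CHECKED(pthread_mutex_unlock(&m));
        printf("lock/unlock ok\n");
        return 0;
    }
    ```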
* Keep the remembered sets local to each thread during parallel GC
  Simon Marlow, 2009-01-12 (1 file changed, -32/+10)

    This turns out to be quite vital for parallel programs:

    - The way we discover which threads to traverse is by finding dirty
      threads via the remembered sets (aka mutable lists).

    - A dirty thread will be on the remembered set of the capability that
      was running it, and we really want to traverse that thread's stack
      using the GC thread for the capability, because it is in that CPU's
      cache. If we get this wrong, we get penalised badly by the memory
      system.

    Previously we had per-capability mutable lists, but they were
    aggregated before GC and traversed by just one of the GC threads.
    This resulted in very poor performance, particularly for parallel
    programs with deep stacks.

    Now we keep per-capability remembered sets throughout GC, which also
    removes a lock (recordMutableGen_sync).
* FIX #1364: added support for C finalizers that run as soon as the value
  is no longer reachable.
  Simon Marlow, 2008-12-10 (3 files changed, -0/+4)

    Patch originally by Ivan Tomac <tomac@pacific.net.au>, amended by
    Simon Marlow:

    - mkWeakFinalizer# commoned up with mkWeakFinalizerEnv#
    - GC parameters to ALLOC_PRIM fixed
* Fix #2592: do an orderly shutdown when the heap is exhausted
  Simon Marlow, 2008-12-09 (1 file changed, -1/+2)

    Really we should be raising an exception in this case, but that's
    tricky (see comments). At least now we shut down the runtime correctly
    rather than just exiting.
* Fix more problems caused by padding in the Capability structure
  Simon Marlow, 2008-12-02 (2 files changed, -1/+5)

    Fixes crashes on Windows and Sparc.
* Merging in the new codegen branch
  dias@eecs.harvard.edu, 2008-08-14 (2 files changed, -14/+30)

    This merge does not turn on the new codegen (which only compiles a
    select few programs at this point), but it does introduce some changes
    to the old code generator.

    The high bits:

    1. The Rep Swamp patch is finally here. The highlight is that the
       representation of types at the machine level has changed.
       Consequently, this patch contains updates across several back ends.

    2. The new Stg -> Cmm path is here, although it appears to have a
       fair number of bugs lurking.

    3. Many improvements along the CmmCPSZ path, including:
       o stack layout
       o some code for infotables, half of which is right and half wrong
       o proc-point splitting
* Add a --machine-readable RTS flag
  Ian Lynagh, 2008-11-23 (1 file changed, -0/+1)

    Currently it only affects the -t flag output.
* Use mutator threads to do GC, instead of having a separate pool of GC threads
  Simon Marlow, 2008-11-21 (2 files changed, -2/+4)

    Previously, the GC had its own pool of threads to use as workers when
    doing parallel GC. There was a "leader", which was the mutator thread
    that initiated the GC, and the other threads were taken from the
    pool. This was simple and worked fine for sequential programs, where
    we did most of the benchmarking for the parallel GC, but falls down
    for parallel programs. When we have N mutator threads and N cores, at
    GC time we would have to stop N-1 mutator threads and start up N-1 GC
    threads, and hope that the OS schedules them all onto separate cores.
    In practice it doesn't, as you might expect.

    Now we use the mutator threads to do GC. This works quite nicely,
    particularly for parallel programs, where each mutator thread scans
    its own spark pool, which is probably in its cache anyway.

    There are some flag changes:

      -g<n> is removed (-g1 is still accepted for backwards compat).
      There's no way to have a different number of GC threads than
      mutator threads now.

      -q1      Use one OS thread for GC (turns off parallel GC)
      -qg<n>   Use parallel GC for generations >= <n> (default: 1)

    Using parallel GC only for generations >= 1 works well for sequential
    programs. Compiling an ordinary sequential program with -threaded and
    running it with -N2 or more should help if you do a lot of GC. I've
    found that adding -qg0 (do parallel GC for generation 0 too) speeds
    up some parallel programs, but slows down some sequential programs.
    Being conservative, I left the threshold at 1.

    ToDo: document the new options.
* Add optional eager black-holing, with new flag -feager-blackholing
  Simon Marlow, 2008-11-18 (5 files changed, -58/+35)

    Eager blackholing can improve parallel performance by reducing the
    chances that two threads perform the same computation. However, it
    has a cost: one extra memory write per thunk entry.

    To get the best results, any code which may be executed in parallel
    should be compiled with eager blackholing turned on. But since
    there's a cost for sequential code, we make it optional and turn it
    on for the parallel package only. It might be a good idea to compile
    applications (or modules) with parallel code in with
    -feager-blackholing.

    ToDo: document -feager-blackholing.
* Attempt to fix #2512 and #2063; add +RTS -xm<address> -RTS option
  Simon Marlow, 2008-11-17 (1 file changed, -0/+2)

    On x86_64, the RTS needs to allocate memory in the low 2Gb of the
    address space. On Linux we can do this with MAP_32BIT, but sometimes
    this doesn't work (#2512), and other OSs don't support it at all
    (#2063). So to work around this:

    - Try MAP_32BIT first, if available.

    - Otherwise, try allocating memory from a fixed address (by default
      1Gb).

    - We now provide an option to configure the address to allocate from.
      This allows a workaround on machines where the default breaks, and
      also provides a way for people to test workarounds that we can
      incorporate in future releases.
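    The fallback strategy above can be sketched with plain mmap(): try
    MAP_32BIT where the headers provide it, then fall back to hinting a
    fixed low address. This is an illustrative Linux-only sketch, not the
    RTS's allocator; the 0x40000000 hint mirrors the 1Gb default named in
    the commit.

    ```c
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1 << 20;          /* 1 MB */
        void *p = MAP_FAILED;
    #ifdef MAP_32BIT
        /* First choice: let the kernel pick an address below 2 Gb. */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
    #endif
        if (p == MAP_FAILED) {
            /* Fallback: hint an address in the low 2 Gb (not MAP_FIXED,
               so the kernel may still place it elsewhere). */
            p = mmap((void *)0x40000000, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        }
        printf("mapped %s\n", p != MAP_FAILED ? "ok" : "failed");
        if (p != MAP_FAILED) munmap(p, len);
        return 0;
    }
    ```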
* refactor: move unlockClosure() into SMPClosureOps() where it should be
  Simon Marlow, 2008-11-14 (3 files changed, -11/+11)
* Omit definitions of cas() and xchg() in .hc code
  Simon Marlow, 2008-11-14 (1 file changed, -0/+13)

    They cause compilation errors (correctly) with newer gccs. Shows up
    when compiling the RTS via C, which happens on Windows.
* Run sparks in batches, instead of creating a new thread for each one
  Simon Marlow, 2008-11-06 (1 file changed, -0/+1)

    Significantly reduces the overhead for par, which means that we can
    make use of parallelism at a much finer granularity.
* Refactoring and reorganisation of the scheduler
  Simon Marlow, 2008-10-22 (1 file changed, -39/+7)

    Change the way we look for work in the scheduler. Previously,
    checking to see whether there was anything to do was a
    non-side-effecting operation, but this has changed now that we do
    work-stealing. This led to a refactoring of the inner loop of the
    scheduler. Also, lots of cleanup in the new work-stealing code, but
    no functional changes.

    One new statistic is added to the +RTS -s output:

      SPARKS: 1430 (2 converted, 1427 pruned)

    lets you know something about the use of `par` in the program.
* Work stealing for sparks
  berthold@mathematik.uni-marburg.de, 2008-09-15 (2 files changed, -95/+35)

    Spark stealing support for PARALLEL_HASKELL and THREADED_RTS versions
    of the RTS. Spark pools are per capability, separately allocated and
    held in the Capability structure. The implementation uses double-ended
    queues (deques) and cas-protected access.

    The write end of the queue (position bottom) can only be used with
    mutual exclusion, i.e. by exactly one caller at a time. Multiple
    readers can steal()/findSpark() from the read end (position top), and
    are synchronised without a lock, based on a cas of the top position.
    One reader wins; the others return NULL for a failure.

    Work stealing is called when Capabilities find no other work (inside
    yieldCapability), and tries all capabilities 0..n-1 twice, unless a
    theft succeeds. Inside schedulePushWork, all considered cap.s (those
    which were idle and could be grabbed) are woken up. Future versions
    should wake up capabilities immediately when putting a new spark in
    the local pool, from newSpark().

    Patch has been re-recorded due to conflicting bugfixes in sparks.c,
    also fixing a (strange) conflict in the scheduler.
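    The steal path described above, where thieves race on the top index
    with a compare-and-swap while the owner pushes at bottom, can be
    modelled in a few lines of C. This is a deliberately simplified toy
    (no wraparound handling, no bottom/top conflict resolution, all names
    invented except the bottom/top roles from the commit message):

    ```c
    #include <stdio.h>

    /* Toy spark deque: one writer pushes at `bot` without a cas; any
       number of thieves race to advance `top` with a cas, so exactly one
       thief wins each element. */
    #define DEQUE_SIZE 16
    static long elems[DEQUE_SIZE];
    static volatile long top = 0;
    static volatile long bot = 0;

    static void pushBottom(long v) {     /* owner only: mutual exclusion */
        elems[bot % DEQUE_SIZE] = v;
        bot++;
    }

    static long steal(void) {            /* thieves: lock-free via cas */
        long t = top;
        if (t >= bot) return 0;          /* empty: NULL-style failure */
        long v = elems[t % DEQUE_SIZE];
        if (__sync_bool_compare_and_swap(&top, t, t + 1))
            return v;                    /* this reader won the race */
        return 0;                        /* lost the cas: report failure */
    }

    int main(void) {
        pushBottom(7);
        pushBottom(8);
        long a = steal();
        long b = steal();
        printf("stole %ld then %ld\n", a, b);
        return 0;
    }
    ```

    The real implementation also needs the load/load barrier fixed in a
    later commit in this log, since reading top and then the element must
    not be reordered.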
* add readTVarIO :: TVar a -> IO a
  Simon Marlow, 2008-10-10 (2 files changed, -0/+3)
* Remove #define _BSD_SOURCE from Stg.h
  Ian Lynagh, 2008-10-06 (1 file changed, -3/+0)

    It's no longer needed, as base no longer #includes it.
* On Linux use libffi for allocating executable memory (fixed #738)
  Simon Marlow, 2008-09-19 (2 files changed, -2/+2)
* Move the context_switch flag into the Capability
  Simon Marlow, 2008-09-19 (3 files changed, -2/+2)

    Fixes a long-standing bug that could in some cases cause sub-optimal
    scheduling behaviour.
* Fix MacOS X build: don't believe __GNUC_GNU_INLINE__ on MacOS X
  Simon Marlow, 2008-09-18 (1 file changed, -1/+5)