| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In addition to more const-correctness fixes, this patch fixes an
infelicity of the previous const-correctness patch (995cf0f356), which
left `UNTAG_CLOSURE` taking a `const StgClosure` pointer parameter
but returning a non-const pointer. Here we restore the original type
signature of `UNTAG_CLOSURE` and add a new function,
`UNTAG_CONST_CLOSURE`, which takes and returns a const `StgClosure`
pointer, and use it wherever possible.
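For illustration, a minimal sketch of the resulting pair of helpers, using
simplified stand-ins for the Stg types and the tag mask (the real
definitions live in the RTS headers):

    #include <stdint.h>

    typedef uintptr_t StgWord;                                  /* stand-in for the real StgWord */
    typedef struct StgClosure_ { StgWord header; } StgClosure;  /* heavily simplified            */
    #define TAG_MASK 7              /* illustrative; the real mask depends on the word size */

    /* Original signature restored: non-const in, non-const out. */
    static inline StgClosure *UNTAG_CLOSURE(StgClosure *p)
    {
        return (StgClosure *)((StgWord)p & ~TAG_MASK);
    }

    /* New const-preserving variant, usable wherever the caller only reads. */
    static inline const StgClosure *UNTAG_CONST_CLOSURE(const StgClosure *p)
    {
        return (const StgClosure *)((StgWord)p & ~TAG_MASK);
    }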
Test Plan: Validate on Linux, OS X and Windows
Reviewers: Phyx, hsyl20, bgamari, austin, simonmar, trofi
Reviewed By: simonmar, trofi
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2231
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If a function takes a pointer parameter and doesn't update what
the pointer points to, we can add `const` to the parameter
declaration to document that no updates occur.
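A tiny, hypothetical example of the pattern (not code from the patch):

    #include <stddef.h>

    /* The function only reads through `buf`, so `const` documents that
     * and lets the compiler reject accidental writes. */
    static size_t count_nonzero(const unsigned char *buf, size_t len)
    {
        size_t n = 0;
        for (size_t i = 0; i < len; i++) {
            if (buf[i] != 0) {
                n++;
            }
        }
        return n;
    }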
Test Plan: Validate on Linux, OS X and Windows
Reviewers: austin, Phyx, bgamari, simonmar, hsyl20
Reviewed By: bgamari, simonmar, hsyl20
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2200
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We can't define Stg{Int,Word} in terms of {,u}intptr_t because STG
depends on them being exactly the same size as void*, and {,u}intptr_t
does not make that guarantee. Furthermore, we also need to define
StgHalf{Int,Word}, so the preprocessor `#if` needs to stay. But we can at
least keep it in a single place instead of repeating it in various
files.
Also define STG_{INT,WORD}{8,16,32,64}_{MIN,MAX} and use them in HsFFI.h,
further reducing the need for CPP in other files.
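A hedged sketch of the idea; the macro names follow the commit's
description, but the exact spellings and the use of UINTPTR_MAX instead
of the configure-detected word size are mine:

    #include <stdint.h>

    #if UINTPTR_MAX == UINT64_MAX          /* 64-bit: word and void* are 8 bytes */
    typedef int64_t   StgInt;
    typedef uint64_t  StgWord;
    typedef int32_t   StgHalfInt;
    typedef uint32_t  StgHalfWord;
    #else                                  /* assume 32-bit otherwise */
    typedef int32_t   StgInt;
    typedef uint32_t  StgWord;
    typedef int16_t   StgHalfInt;
    typedef uint16_t  StgHalfWord;
    #endif

    /* Fixed-size limits, usable from HsFFI.h without further CPP: */
    #define STG_INT8_MIN   INT8_MIN
    #define STG_INT8_MAX   INT8_MAX
    #define STG_WORD8_MAX  UINT8_MAX
    #define STG_INT64_MIN  INT64_MIN
    #define STG_INT64_MAX  INT64_MAX
    #define STG_WORD64_MAX UINT64_MAX
    /* ... and similarly for the 16- and 32-bit variants ... */

    /* Compile-time check of the invariant STG relies on:
     * StgWord is exactly the size of a pointer. */
    typedef char stg_word_is_pointer_sized[sizeof(StgWord) == sizeof(void *) ? 1 : -1];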
Reviewers: austin, bgamari, simonmar, hvr, erikd
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2182
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On x64 Windows `sizeof long` is 4, which could easily overflow,
resulting in incorrect heap profiling results. This change does not
affect either Linux or OS X, where `sizeof long` == `sizeof ssize_t`
regardless of machine word size.
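A small probe program illustrating the difference (LLP64 Windows versus
LP64 Linux/OS X):

    #include <stdio.h>

    int main(void)
    {
        /* x64 Windows is LLP64: long stays 4 bytes while pointers are 8.
         * x64 Linux and OS X are LP64: long and pointers are both 8 bytes. */
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }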
Test Plan: Validate on Linux and Windows
Reviewers: hsyl20, bgamari, simonmar, austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2177
|
|
|
|
|
|
|
|
|
|
|
|
| |
The `nat` type was an alias for `unsigned int` with a comment saying
it was at least 32 bits. We keep the typedef in case client code is
using it but mark it as deprecated.
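Roughly the shape of the change, assuming uint32_t as the replacement;
whether the real header uses a deprecation attribute or just a comment is
my guess:

    #include <stdint.h>

    /* Old alias kept so existing client code still compiles. */
    typedef uint32_t nat __attribute__((deprecated)); /* use uint32_t instead */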
Test Plan: Validated on Linux, OS X and Windows
Reviewers: simonmar, austin, thomie, hvr, bgamari, hsyl20
Differential Revision: https://phabricator.haskell.org/D2166
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The heap census now handles large ARR_WORDS objects which have
been shrunk by shrinkMutableByteArray# or resizeMutableByteArray#.
Test Plan: ./validate && make test WAY=profasm
Reviewers: hvr, bgamari, austin, thomie
Reviewed By: thomie
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2005
GHC Trac Issues: #11627
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There are two ways to do retainer profiling. Quoting from the user's guide:
1. `+RTS -hr` "Breaks down the graph by retainer set"
2. `+RTS -hr<cc> -h<x>`, where `-h<x>` is one of the normal heap profiling
break-down options (e.g. `-hc`), and `-hr<cc>` means "Restrict the
profile to closures with retainer sets containing cost-centre
stacks with one of the specified cost centres at the top."
Retainer profiling writes to a .hp file, like the other heap profiling
options, but also to a .prof file. Therefore, when the .prof file is not
writeable for whatever reason, retainer profiling should be turned off
completely.
This worked ok when running the program with `+RTS -hr` (option 1), but a
segfault would occur when using `+RTS -hr<cc> -h<x>`, with `x!=r` (option 2).
This commit fixes that.
Reviewed by: bgamari
Differential Revision: https://phabricator.haskell.org/D1849
GHC Trac Issues: #11489
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It seems that this closure type has not been in use since 5d52d9, so all
of this is dead and untested code. This commit removes it. Some of the code might
be useful for a counting indirection as described in #10613, so when
implementing that, have a look at what this commit removes.
Test Plan: validate on harbormaster
Reviewers: austin, bgamari, simonmar
Reviewed By: simonmar
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1821
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Was broken by ce1f1607ed7f8fedd2f63c8610cafefd59baaf32. I've added a
test so that hopefully it won't break again.
Test Plan: validate & new test case
Reviewers: bgamari, austin, erikd
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1746
GHC Trac Issues: #11304
|
|
|
|
|
|
|
|
|
|
| |
Reviewers: austin
Reviewed By: austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1509
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Amazingly, there were zero changes to the byte code generator and very
few changes to the interpreter - mainly because we've used good
abstractions that hide the differences between profiling and
non-profiling. So that bit was pleasantly straightforward, but there
were a pile of other wibbles to get the whole test suite through.
Note that a compiler built with -prof is now like one built with
-dynamic, in that to use TH you have to build the code the same way.
For dynamic, we automatically enable -dynamic-too when TH is required,
but we don't have anything equivalent for profiling, so you have to
explicitly use -prof when building code that uses TH with a profiled
compiler. For this reason Cabal won't work with TH. We don't expect
to ship a profiled compiler, so I think that's OK.
Test Plan: validate with GhcProfiled=YES in validate.mk
Reviewers: goldfire, bgamari, rwbarton, austin, hvr, erikd, ezyang
Reviewed By: ezyang
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1407
GHC Trac Issues: #4837, #545
|
|
|
|
|
|
|
|
|
|
| |
Rename StgArrWords to StgArrBytes (see Trac #8552)
Reviewed By: austin
Differential Revision: https://phabricator.haskell.org/D1233
GHC Trac Issues: #8552
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Depends on D767
Setting this flag prevents the RTS from printing suggestions like "Use
`+RTS -Ksize -RTS' to increase it."
As @rwbarton commented in #9579, "+RTS" suggestions sometimes don't make
sense (e.g. when the program is precompiled and installed through a
package manager), so we can encourage people to distribute binaries with
either "-no-rtsopts-suggestions" or "-rtsopts".
Reviewed By: erikd, austin
Differential Revision: https://phabricator.haskell.org/D809
GHC Trac Issues: #9579
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch provides the user with a better hint when most RTS options
are not available (i.e. the program was not compiled with `-rtsopts`).
A new field "rtsOptsEnabled" is added to RtsFlags.MiscFlags to
indicate the availability of RTS options.
Some concerns:
* Unlike other flag fields in "libraries/base/GHC/RTS/Flags.hsc",
"RtsOptsEnabled" is defined in "includes/RtsAPI.h" and lacks
constant macros. Therefore, in "GHC.RTS", "RtsOptsEnabled" simply
derives an Enum instance and is read as a "CInt".
* There are other ways to change RTS options (e.g. `-with-rtsopts`),
but it might be too verbose to mention them all.
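Roughly the shape of the C side described above (simplified; the enum
values are from memory and should be treated as illustrative):

    /* includes/RtsAPI.h -- a plain C enum with no constant macros,
     * which is why the Haskell side derives Enum and reads a CInt. */
    typedef enum {
        RtsOptsNone,        /* +RTS is an error                      */
        RtsOptsSafeOnly,    /* only "safe" RTS options are accepted  */
        RtsOptsAll          /* all RTS options are accepted          */
    } RtsOptsEnabled;

    /* New field in RtsFlags.MiscFlags, consulted when printing the hint:
     *     RtsOptsEnabled rtsOptsEnabled;                               */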
Test Plan: validate
Reviewers: austin, hvr, thomie, simonmar
Reviewed By: thomie
Subscribers: thomie, rwbarton
Differential Revision: https://phabricator.haskell.org/D767
GHC Trac Issues: #9579
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary: Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>
Test Plan: validate
Reviewers: simonmar, austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D679
|
|
|
|
| |
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
|
|
|
| |
This reverts commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd.
|
|
|
|
|
|
|
|
| |
This will hopefully help ensure some basic consistency going forward by
overriding buffer variables. In particular, it sets the wrap length,
sets the offset to 4, and turns off tabs.
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
|
|
|
|
|
| |
Discovered by Coverity. CID 43166.
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These array types are smaller than Array# and MutableArray# and are
faster when the array size is small, as they don't have the overhead
of a card table. Having no card table reduces the closure size by two
words in the typical small-array case and leads to less work when
updating or garbage-collecting the array.
Reduces both the runtime and memory allocation by 8.8% on my insert
benchmark for the HashMap type in the unordered-containers package,
which makes use of lots of small arrays. With tuned GC settings
(i.e. `+RTS -A6M`) the runtime reduction is 15%.
Fixes #8923.
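A hedged sketch of where the two words go, using simplified stand-ins for
the real heap-object headers (struct names here are mine, not the RTS's):

    #include <stdint.h>

    typedef uintptr_t StgWord;                        /* stand-in            */
    typedef struct { StgWord info; } StgHeader;       /* heavily simplified  */
    typedef struct StgClosure_ StgClosure;

    /* Ordinary boxed array: a size field plus a card table (at least one
     * word for a non-empty array) accompany the element count. */
    typedef struct {
        StgHeader   header;
        StgWord     ptrs;        /* number of elements                   */
        StgWord     size;        /* elements plus card table, in words   */
        StgClosure *payload[];   /* elements, followed by the card table */
    } MutArrPtrs_sketch;

    /* Small array: no size field and no card table -- two words saved,
     * and no card marking on update or during GC. */
    typedef struct {
        StgHeader   header;
        StgWord     ptrs;
        StgClosure *payload[];
    } SmallMutArrPtrs_sketch;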
|
| |
|
| |
|
|
|
|
| |
Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
|
|
|
|
|
|
|
|
|
|
| |
This improves GC performance when there are a lot of TVars in the
heap. For instance, a TChan with a lot of elements causes a massive
GC drag without this patch.
There's more to do - several other STM closure types don't have write
barriers, so GC performance when there are a lot of threads blocked on
STM isn't great. But fixing the problem for TVar is a good start.
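A conceptual sketch of the write barrier being added (names and layout
here are hypothetical, not the RTS's actual TVar representation):

    #include <stdbool.h>

    struct tvar {
        void *current_value;
        bool  dirty;                /* already recorded this GC cycle?        */
    };

    /* Hypothetical remembered-set hook: record the TVar so the next minor
     * GC scans it even though it lives in an old generation. */
    static void record_in_mut_list(struct tvar *tv) { (void)tv; }

    static void write_tvar(struct tvar *tv, void *new_value)
    {
        if (!tv->dirty) {           /* the barrier: only the first write pays */
            tv->dirty = true;
            record_in_mut_list(tv);
        }
        tv->current_value = new_value;
    }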
|
|
|
|
|
|
|
|
|
|
|
|
| |
lnat was originally "long unsigned int" but we were using it when we
wanted a 64-bit type on a 64-bit machine. This broke on Windows x64,
where long == int == 32 bits. Using types of unspecified size is bad,
but what we really wanted was a type with N bits on an N-bit machine.
StgWord is exactly that.
lnat was mentioned in some APIs that clients might be using
(e.g. StackOverflowHook()), so we leave it defined but with a comment
to say that it's deprecated.
|
|
|
|
|
|
| |
Mostly this meant getting pointer<->int conversions to use the right
sizes. lnat is now size_t, rather than unsigned long, as that seems a
better match for how it's used.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Terminology cleanup: the type "Ticks" has been renamed "Time", which
is an StgWord64 in units of TIME_RESOLUTION (currently nanoseconds).
The terminology "tick" is now used consistently to mean the interval
between timer signals.
The ticker now always ticks in realtime (actually CLOCK_MONOTONIC if
we have it). Before it used CPU time in the non-threaded RTS and
realtime in the threaded RTS, but I've discovered that the CPU timer
has terrible resolution (at least on Linux) and isn't much use for
profiling. So now we always use realtime. This should also fix
The default tick interval is now 10ms, except when profiling where we
drop it to 1ms. This gives more accurate profiles without affecting
runtime too much (<1%).
Lots of cleanups - the resolution of Time is now in one place
only (Rts.h) rather than having calculations that depend on the
resolution scattered all over the RTS. I hope I found them all.
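A sketch of the resulting convention; the constant and helper names below
are illustrative rather than necessarily the ones in Rts.h:

    #include <stdint.h>

    typedef uint64_t Time;                      /* StgWord64 in the RTS            */
    #define TIME_RESOLUTION 1000000000ULL       /* Time is in nanoseconds, for now */

    /* Keep unit conversions in one place so the resolution can change later. */
    #define SecondsToTime(s)        ((Time)(s)  * TIME_RESOLUTION)
    #define MilliSecondsToTime(ms)  ((Time)(ms) * (TIME_RESOLUTION / 1000))
    #define TimeToNS(t)             (t)         /* valid only while resolution == ns */

    /* The tick intervals mentioned above: */
    static const Time default_tick_interval   = MilliSecondsToTime(10);
    static const Time profiling_tick_interval = MilliSecondsToTime(1);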
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
User visible changes
====================
Profilng
--------
Flags renamed (the old ones are still accepted for now):
  OLD          NEW
  ---------    ------------
  -auto-all    -fprof-auto
  -auto        -fprof-exported
  -caf-all     -fprof-cafs
New flags:
  -fprof-auto              Annotates all bindings (not just top-level
                           ones) with SCCs
  -fprof-top               Annotates just top-level bindings with SCCs
  -fprof-exported          Annotates just exported bindings with SCCs
  -fprof-no-count-entries  Do not maintain entry counts when profiling
                           (can make profiled code go faster; useful with
                           heap profiling where entry counts are not used)
Cost-centre stacks have a new semantics, which should in most cases
result in more useful and intuitive profiles. If you find this not to
be the case, please let me know. This is the area where I have been
experimenting most, and the current solution is probably not the
final version, however it does address all the outstanding bugs and
seems to be better than GHC 7.2.
Stack traces
------------
+RTS -xc now gives more information. If the exception originates from
a CAF (as is common, because GHC tends to lift exceptions out to the
top-level), then the RTS walks up the stack and reports the stack in
the enclosing update frame(s).
Result: +RTS -xc is much more useful now - but you still have to
compile for profiling to get it. I've played around a little with
adding 'head []' to GHC itself, and +RTS -xc does pinpoint the problem
quite accurately.
I plan to add more facilities for stack tracing (e.g. in GHCi) in the
future.
Coverage (HPC)
--------------
* derived instances are now coloured yellow if they weren't used
* likewise record field names
* entry counts are more accurate (hpc --fun-entry-count)
* tab width is now correct (markup was previously off in source with
tabs)
Internal changes
================
In Core, the Note constructor has been replaced by
Tick (Tickish b) (Expr b)
which is used to represent all the kinds of source annotation we
support: profiling SCCs, HPC ticks, and GHCi breakpoints.
Depending on the properties of the Tickish, different transformations
apply to Tick. See CoreUtils.mkTick for details.
Tickets
=======
This commit closes the following tickets, test cases to follow:
- Close #2552: not a bug, but the behaviour is now more intuitive
(test is T2552)
- Close #680 (test is T680)
- Close #1531 (test is result001)
- Close #949 (test is T949)
- Close #2466: test case has bitrotted (doesn't compile against current
version of vector-space package)
|
|
|
|
|
|
|
|
|
|
| |
Now that the heap census runs in the middle of garbage collections,
the "CPU time" it was calculating included any CPU time used so far
in the current GC. This could cause CPU time to appear to go down,
which means hp2ps complained about "samples out of sequence".
I'm not sure if this is the nicest way to solve this (maybe resurrecting
mut_user_time_during_GC would be better?) but it gets things working again.
|
|
|
|
|
|
|
|
| |
Previously we were completely ignoring these, due to the difficulties
of traversing the pinned blocks (the objects are not necessarily
end-to-end, so we can't tell how large the gaps are). Now we just count the
whole block as a big ARR_WORDS, so at least we're accounting for the
memory and it has the right type.
|
|
|
|
|
|
|
|
|
|
|
| |
Based on a patch from David Terei.
Some parts are a little ugly (e.g. defining things that only ASSERTs
use only when DEBUG is defined), so we might want to tweak things a
little.
I've also turned off -Werror for didn't-inline warnings, as we now
get a few such warnings.
|
|
|
|
| |
gen_workspace structures. Fixes heapprof001(prof_hc_hb).
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously the code generator generated small code fragments labelled
with __stginit_M for each module M, and these performed whatever
initialisation was necessary for that module and recursively invoked
the initialisation functions for imported modules. This approach had
drawbacks:
- FFI users had to call hs_add_root() to ensure the correct
initialisation routines were called. This is a non-standard,
and ugly, API.
- unless we were using -split-objs, the __stginit dependencies would
entail linking the whole transitive closure of modules imported,
whether they were actually used or not. In an extreme case (#4387,
#4417), a module from GHC might be imported for use in Template
Haskell or an annotation, and that would force the whole of GHC to
be needlessly linked into the final executable.
So now instead we do our initialisation with C functions marked with
__attribute__((constructor)), which are automatically invoked at
program startup time (or DSO load-time). The C initialisers are
emitted into the stub.c file. This means that every time we compile
with -prof or -hpc, we now get a stub file, but thanks to #3687 that
is now invisible to the user.
There are some refactorings in the RTS (particularly for HPC) to
handle the fact that initialisers now get run earlier than they did
before.
The __stginit symbols are still generated, and the hs_add_root()
function still exists (but does nothing), for backwards compatibility.
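A minimal standalone example of the mechanism; the registration function
is made up and just stands in for what the generated stub.c does:

    #include <stdio.h>

    static void register_module_for_profiling(const char *module)
    {
        printf("registering %s\n", module);   /* stand-in for real registration */
    }

    /* Runs automatically before main() (or at DSO load time), so no
     * hs_add_root()-style call is needed from the FFI client. */
    static void __attribute__((constructor)) stginit_example(void)
    {
        register_module_for_profiling("Main");
    }

    int main(void)
    {
        printf("main starts after all constructors have run\n");
        return 0;
    }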
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch makes two changes to the way stacks are managed:
1. The stack is now stored in a separate object from the TSO.
This means that it is easier to replace the stack object for a thread
when the stack overflows or underflows; we don't have to leave behind
the old TSO as an indirection any more. Consequently, we can remove
ThreadRelocated and deRefTSO(), which were a pain.
This is obviously the right thing, but the last time I tried to do it
it made performance worse. This time I seem to have cracked it.
2. Stacks are now represented as a chain of chunks, rather than
a single monolithic object.
The big advantage here is that individual chunks are marked clean or
dirty according to whether they contain pointers to the young
generation, and the GC can avoid traversing clean stack chunks during
a young-generation collection. This means that programs with deep
stacks will see a big saving in GC overhead when using the default GC
settings.
A secondary advantage is that there is much less copying involved as
the stack grows. Programs that quickly grow a deep stack will see big
improvements.
In some ways the implementation is simpler, as nothing special needs
to be done to reclaim stack as the stack shrinks (the GC just recovers
the dead stack chunks). On the other hand, we have to manage stack
underflow between chunks, so there's a new stack frame
(UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
The total amount of code is probably about the same as before.
There are new RTS flags:
  -ki<size>  Sets the initial thread stack size (default 1k).
             Examples: -ki4k -ki2m
  -kc<size>  Sets the stack chunk size (default 32k)
  -kb<size>  Sets the stack chunk buffer size (default 1k)
-ki was previously called just -k, and the old name is still accepted
for backwards compatibility. These new options are documented.
|
|
|
|
|
|
|
| |
These are no longer used: once upon a time they used to have different
layout from IND and IND_PERM respectively, but that is no longer the
case since we changed the remembered set to be an array of addresses
instead of a linked list of closures.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This replaces the global blackhole_queue with a clever scheme that
enables us to queue up blocked threads on the closure that they are
blocked on, while still avoiding atomic instructions in the common
case.
Advantages:
- gets rid of a locked global data structure and some tricky GC code
(replacing it with some per-thread data structures and different
tricky GC code :)
- wakeups are more prompt: parallel/concurrent performance should
benefit. I haven't seen anything dramatic in the parallel
benchmarks so far, but a couple of threading benchmarks do improve
a bit.
- waking up a thread blocked on a blackhole is now O(1) (e.g. if
it is the target of throwTo).
- less sharing and better separation of Capabilities: communication
is done with messages, the data structures are strictly owned by a
Capability and cannot be modified except by sending messages.
- this change will ultimately enable us to do more intelligent
scheduling when threads block on each other. This is what started
off the whole thing, but it isn't done yet (#3838).
I'll be documenting all this on the wiki in due course.
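A purely conceptual sketch of the shape of the per-closure queue (all
names here are hypothetical; the real structures and locking are more
involved):

    struct thread;     /* stand-in for a TSO          */
    struct closure;    /* stand-in for the blackhole  */

    /* A thread that enters someone else's blackhole sends one of these to
     * the Capability that owns the blackhole; the owner links it onto a
     * queue hanging off that closure.  Waking the blocked threads is then
     * a walk of exactly this queue -- no locked global list to scan. */
    struct blackhole_msg {
        struct blackhole_msg *link;   /* next thread blocked on the same closure */
        struct thread        *tso;    /* the blocked thread                      */
        struct closure       *bh;     /* the blackhole it is blocked on          */
    };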
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This replaces some complicated locking schemes with message-passing
in the implementation of throwTo. The benefits are
- previously it was impossible to guarantee that a throwTo from
a thread running on one CPU to a thread running on another CPU
would be noticed, and we had to rely on the GC to pick up these
forgotten exceptions. This no longer happens.
- the locking regime is simpler (though the code is about the same
size)
- threads can be unblocked from a blocked_exceptions queue without
having to traverse the whole queue now. It's a rare case, but
replaces an O(n) operation with an O(1).
- generally we move in the direction of sharing less between
Capabilities (aka HECs), which will become important with other
changes we have planned.
Also in this patch I replaced several STM-specific closure types with
a generic MUT_PRIM closure type, which allowed a lot of code in the GC
and other places to go away, hence the line-count reduction. The
message-passing changes resulted in about a net zero line-count
difference.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The GC had a two-level structure, G generations each of T steps.
Steps are for aging within a generation, mostly to avoid premature
promotion.
Measurements show that more than 2 steps is almost never worthwhile,
and 1 step is usually worse than 2. In theory fractional steps are
possible, so the ideal number of steps is somewhere between 1 and 3.
GHC's default has always been 2.
We can implement 2 steps quite straightforwardly by having each block
point to the generation to which objects in that block should be
promoted, so blocks in the nursery point to generation 0, and blocks
in gen 0 point to gen 1, and so on.
This commit removes the explicit step structures, merging generations
with steps, thus simplifying a lot of code. Performance is
unaffected. The tunable number of steps is now gone, although it may
be replaced in the future by a way to tune the aging in generation 0.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a batch of refactoring to remove some of the GC's global
state, as we move towards CPU-local GC.
- allocateLocal() now allocates large objects into the local
nursery, rather than taking a global lock and allocating
them in gen 0 step 0.
- allocatePinned() was still allocating from global storage and
taking a lock each time, now it uses local storage.
(mallocForeignPtrBytes should be faster with -threaded).
- We had a gen 0 step 0, distinct from the nurseries, which are
stored in a separate nurseries[] array. This is slightly strange.
I removed the g0s0 global that pointed to gen 0 step 0, and
removed all uses of it. I think now we don't use gen 0 step 0 at
all, except possibly when there is only one generation. Possibly
more tidying up is needed here.
- I removed the global allocate() function, and renamed
allocateLocal() to allocate().
- the alloc_blocks global is gone. MAYBE_GC() and
doYouWantToGC() now check the local nursery only.
|
| |
|
|
|
|
| |
Also, use C99-style array initialisers
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The first phase of this tidyup is focussed on the header files, and in
particular making sure we are exposing publicly exactly what we need
to, and no more.
- Rts.h now includes everything that the RTS exposes publicly,
rather than a random subset of it.
- Most of the public header files have moved into subdirectories, and
many of them have been renamed. But clients should not need to
include any of the other headers directly, just #include the main
public headers: Rts.h, HsFFI.h, RtsAPI.h.
- All the headers needed for via-C compilation have moved into the
stg subdirectory, which is self-contained. Most of the headers for
the rest of the RTS APIs have moved into the rts subdirectory.
- I left MachDeps.h where it is, because it is so widely used in
Haskell code.
- I left a deprecated stub for RtsFlags.h in place. The flag
structures are now exposed by Rts.h.
- Various internal APIs are no longer exposed by public header files.
- Various bits of dead code and declarations have been removed
- More gcc warnings are turned on, and the RTS code is more
warning-clean.
- More source files #include "PosixSource.h", and hence only use
standard POSIX (1003.1c-1995) interfaces.
There is a lot more tidying up still to do, this is just the first
pass. I also intend to standardise the names for external RTS APIs
(e.g use the rts_ prefix consistently), and declare the internal APIs
as hidden for shared libraries.
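For example, a C client embedding Haskell should now only need the main
public headers; hs_init()/hs_exit() come from HsFFI.h:

    #include "HsFFI.h"

    int main(int argc, char *argv[])
    {
        hs_init(&argc, &argv);          /* start the RTS                      */
        /* ... call foreign-exported Haskell functions here ... */
        hs_exit();                      /* shut the RTS down cleanly          */
        return 0;
    }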
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Eager blackholing can improve parallel performance by reducing the
chances that two threads perform the same computation. However, it
has a cost: one extra memory write per thunk entry.
To get the best results, any code which may be executed in parallel
should be compiled with eager blackholing turned on. But since
there's a cost for sequential code, we make it optional and turn it on
for the parallel package only. It might be a good idea to compile
applications (or modules) that contain parallel code with
-feager-blackholing.
ToDo: document -feager-blackholing.
|
| |
|
|
|
|
|
|
|
|
|
| |
gcc 4.3 emits warnings for static inline functions that its heuristics
decided not to inline. The workaround is to either mark appropriate
functions as "hot" (a new attribute in gcc 4.3), or sometimes to use
"extern inline" instead.
With this fix I can validate with gcc 4.3 on Fedora 9.
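A sketch of the first workaround (the function is made up; the point is
the attribute):

    /* Declaring the function "hot" -- new in gcc 4.3 -- makes the inlining
     * heuristics more willing to inline it, which silences the warning. */
    static inline int add_words(int a, int b) __attribute__((hot));

    static inline int add_words(int a, int b)
    {
        return a + b;
    }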
|
|
|
|
|
|
|
|
| |
Now allocate() is a synonym for allocateInGen().
I also made various cleanups: there is now less special-case code for
supporting -G1 (two-space collection), and -G1 now works with
-threaded.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously MVars were always on the mutable list of the old
generation, which meant every MVar was visited during every minor GC.
With lots of MVars hanging around, this gets expensive. We addressed
this problem for MUT_VARs (aka IORefs) a while ago; the solution is to
use a traditional GC write barrier when the object is modified. This
patch does the same thing for MVars.
TVars are still done the old way, they could probably benefit from the
same treatment too.
|