| Commit message | Author | Age | Files | Lines |
|
Our new CPP linter enforces this.
|
- Fixes #12636.
- Changes all the typecasts to _unsigned long long_ so that
the format specifiers work.
Reviewers: austin, bgamari, erikd, simonmar, Phyx
Reviewed By: erikd, Phyx
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D3129
|
Test Plan: Validate on lots of platforms
Reviewers: erikd, simonmar, austin
Reviewed By: erikd, simonmar
Subscribers: michalt, thomie
Differential Revision: https://phabricator.haskell.org/D2699
|
Summary:
We currently have two info tables for a constructor
* XXX_con_info: the info table for a heap-resident instance of the
constructor. It has type CONSTR, or one of the specialised types like
CONSTR_1_0.
* XXX_static_info: the info table for a static instance of this
constructor, which has type CONSTR_STATIC or CONSTR_STATIC_NOCAF.
I'm getting rid of the latter, and using the `con_info` info table for
both static and dynamic constructors. For rationale and more details
see Note [static constructors] in SMRep.hs.
I also removed these macros: `isSTATIC()`, `ip_STATIC()`,
`closure_STATIC()`, since they relied on the CONSTR/CONSTR_STATIC
distinction, and anyway HEAP_ALLOCED() does the same job.
Test Plan: validate
Reviewers: bgamari, simonpj, austin, gcampax, hvr, niteria, erikd
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2690
GHC Trac Issues: #12455
|
Biographical profiling is not thread-safe as documented in #12019. Throw
an error when it is used in this way.
Test Plan: Validate
Reviewers: simonmar, austin, erikd
Reviewed By: erikd
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2516
GHC Trac Issues: #12019
|
This brings in initial support for compact regions, as described in the
ICFP 2015 paper "Efficient Communication and Collection with Compact
Normal Forms" (Edward Z. Yang et al.) and implemented by Giovanni
Campagna.
Some things may change before the 8.2 release, but I (Simon M.) wanted
to get the main patch committed so that we can iterate.
What documentation there is is in the Data.Compact module in the new
compact package. We'll need to extend and polish the documentation
before the release.
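As a rough usage sketch (illustrative only: the API may still change before
8.2, and `compact`/`getCompact` here stand in for whatever Data.Compact
finally exposes):
```
import Data.Compact (compact, getCompact)  -- illustrative names; see Data.Compact for the real API

main :: IO ()
main = do
  let table = zip [1 :: Int ..] (map show [1 .. 100000 :: Int])
  -- Deep-evaluates the structure and copies it into a compact region;
  -- the GC then treats the region as a single unit and does not
  -- traverse its interior pointers again.
  c <- compact table
  print (length (getCompact c))
```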
Test Plan:
validate
(new test cases included)
Reviewers: ezyang, simonmar, hvr, bgamari, austin
Subscribers: vikraman, Yuras, RyanGlScott, qnikst, mboes, facundominguez, rrnewton, thomie, erikd
Differential Revision: https://phabricator.haskell.org/D1264
GHC Trac Issues: #11493
|
Test Plan: Try it
Reviewers: hvr, simonmar, austin, erikd
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1722
GHC Trac Issues: #11094
|
Summary:
The aim here is to reduce the number of remote memory accesses on
systems with a NUMA memory architecture, typically multi-socket servers.
Linux provides a NUMA API for doing two things:
* Allocating memory local to a particular node
* Binding a thread to a particular node
When given the +RTS --numa flag, the runtime will
* Determine the number of NUMA nodes (N) by querying the OS
* Assign capabilities to nodes, so cap C is on node C%N
* Bind worker threads on a capability to the correct node
* Keep separate free lists in the block layer for each node
* Allocate the nursery for a capability from node-local memory
* Allocate blocks in the GC from node-local memory
For example, using nofib/parallel/queens on a 24-core 2-socket machine:
```
$ ./Main 15 +RTS -N24 -s -A64m
Total time 173.960s ( 7.467s elapsed)
$ ./Main 15 +RTS -N24 -s -A64m --numa
Total time 150.836s ( 6.423s elapsed)
```
The biggest win here is expected to be allocating from node-local
memory, so that means programs using a large -A value (as here).
According to perf, on this program the number of remote memory accesses
was reduced by more than 50% by using `--numa`.
Test Plan:
* validate
* There's a new flag --debug-numa=<n> that pretends to do NUMA without
actually making the OS calls, which is useful for testing the code
on non-NUMA systems.
* TODO: I need to add some unit tests
Reviewers: erikd, austin, rwbarton, ezyang, bgamari, hvr, niteria
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2199
|
In addition to more const-correctness fixes this patch fixes an
infelicity of the previous const-correctness patch (995cf0f356) which
left `UNTAG_CLOSURE` taking a `const StgClosure` pointer parameter
but returning a non-const pointer. Here we restore the original type
signature of `UNTAG_CLOSURE` and add a new function
`UNTAG_CONST_CLOSURE` which takes and returns a const `StgClosure`
pointer and uses that wherever possible.
Test Plan: Validate on Linux, OS X and Windows
Reviewers: Phyx, hsyl20, bgamari, austin, simonmar, trofi
Reviewed By: simonmar, trofi
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2231
|
If a function takes a pointer parameter and doesn't update what
the pointer points to, we can add `const` to the parameter
declaration to document that no updates occur.
Test Plan: Validate on Linux, OS X and Windows
Reviewers: austin, Phyx, bgamari, simonmar, hsyl20
Reviewed By: bgamari, simonmar, hsyl20
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2200
|
We can't define Stg{Int,Word} in terms of {,u}intptr_t because STG
depends on them being the exact same size as void*, and {,u}intptr_t
does not make that guarantee. Furthermore, we also need to define
StgHalf{Int,Word}, so the preprocessor #if needs to stay. But we can at
least keep it in a single place instead of repeating it in various
files.
Also define STG_{INT,WORD}{8,16,32,64}_{MIN,MAX} and use them in HsFFI.h,
further reducing the need for CPP in other files.
Reviewers: austin, bgamari, simonmar, hvr, erikd
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2182
|
On x64 Windows `sizeof long` is 4 which could easily overflow
resulting in incorrect heap profiling results. This change does not
affect either Linux or OS X where `sizeof long` == `sizeof ssize_t`
regardless of machine word size.
Test Plan: Validate on Linux and Windows
Reviewers: hsyl20, bgamari, simonmar, austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2177
|
The `nat` type was an alias for `unsigned int` with a comment saying
it was at least 32 bits. We keep the typedef in case client code is
using it but mark it as deprecated.
Test Plan: Validated on Linux, OS X and Windows
Reviewers: simonmar, austin, thomie, hvr, bgamari, hsyl20
Differential Revision: https://phabricator.haskell.org/D2166
|
The heap census now handles large ARR_WORDS objects which have
been shrunk by shrinkMutableByteArray# or resizeMutableByteArray#.
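For illustration, a shrunk ARR_WORDS of the kind the census now has to
handle can be produced directly with the primops (a minimal sketch using
GHC.Exts):
```
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts
import GHC.ST (ST(..), runST)

-- Allocate 1024 bytes, then shrink the array in place to 16 bytes,
-- leaving an ARR_WORDS that is smaller than its original allocation.
shrunkSize :: Int
shrunkSize = runST (ST (\s0 ->
  case newByteArray# 1024# s0 of
    (# s1, mba #) ->
      case shrinkMutableByteArray# mba 16# s1 of
        s2 -> case getSizeofMutableByteArray# mba s2 of
          (# s3, n #) -> (# s3, I# n #)))

main :: IO ()
main = print shrunkSize  -- 16
```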
Test Plan: ./validate && make test WAY=profasm
Reviewers: hvr, bgamari, austin, thomie
Reviewed By: thomie
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D2005
GHC Trac Issues: #11627
|
There are two ways to do retainer profiling. Quoting from the user's guide:
1. `+RTS -hr` "Breaks down the graph by retainer set"
2. `+RTS -hr<cc> -h<x>`, where `-h<x>` is one of normal heap profiling
break-down options (e.g. `-hc`), and `-hr<cc>` means "Restrict the
profile to closures with retainer sets containing cost-centre
stacks with one of the specified cost centres at the top."
Retainer profiling writes to a .hp file, like the other heap profiling
options, but also to a .prof file. Therefore, when the .prof file is not
writeable for whatever reason, retainer profiling should be turned off
completely.
This worked ok when running the program with `+RTS -hr` (option 1), but a
segfault would occur when using `+RTS -hr<cc> -h<x>`, with `x!=r` (option 2).
This commit fixes that.
Reviewed by: bgamari
Differential Revision: https://phabricator.haskell.org/D1849
GHC Trac Issues: #11489
|
It seems that this closure type has not been in use since 5d52d9, so all
this is dead and untested code. This removes it. Some of the code might
be useful for a counting indirection as described in #10613, so when
implementing that, have a look at what this commit removes.
Test Plan: validate on harbormaster
Reviewers: austin, bgamari, simonmar
Reviewed By: simonmar
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1821
|
Summary:
Was broken by ce1f1607ed7f8fedd2f63c8610cafefd59baaf32. I've added a
test so that hopefully it won't break again.
Test Plan: validate & new test case
Reviewers: bgamari, austin, erikd
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1746
GHC Trac Issues: #11304
|
Reviewers: austin
Reviewed By: austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1509
|
Summary:
Amazingly, there were zero changes to the byte code generator and very
few changes to the interpreter - mainly because we've used good
abstractions that hide the differences between profiling and
non-profiling. So that bit was pleasantly straightforward, but there
were a pile of other wibbles to get the whole test suite through.
Note that a compiler built with -prof is now like one built with
-dynamic, in that to use TH you have to build the code the same way.
For dynamic, we automatically enable -dynamic-too when TH is required,
but we don't have anything equivalent for profiling, so you have to
explicitly use -prof when building code that uses TH with a profiled
compiler. For this reason Cabal won't work with TH. We don't expect
to ship a profiled compiler, so I think that's OK.
Test Plan: validate with GhcProfiled=YES in validate.mk
Reviewers: goldfire, bgamari, rwbarton, austin, hvr, erikd, ezyang
Reviewed By: ezyang
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1407
GHC Trac Issues: #4837, #545
|
Rename StgArrWords to StgArrBytes (see Trac #8552)
Reviewed By: austin
Differential Revision: https://phabricator.haskell.org/D1233
GHC Trac Issues: #8552
|
Depends on D767
Setting this flag prevents the RTS from giving suggestions like "Use
`+RTS -Ksize -RTS' to increase it."
According to the comment @rwbarton made in #9579, sometimes "+RTS"
suggestions don't make sense (e.g. when the program is precompiled and
installed through package managers), so we can encourage people to
distribute binaries with either "-no-rtsopts-suggestions" or "-rtsopts".
Reviewed By: erikd, austin
Differential Revision: https://phabricator.haskell.org/D809
GHC Trac Issues: #9579
|
This patch provides the user with a better hint when most RTS options
are not available (not compiled with `-rtsopts`).
A new field "rtsOptsEnabled" is added to RtsFlags.MiscFlags to
indicate the availability of RTS options.
Some concerns:
* Unlike other flag fields in "libraries/base/GHC/RTS/Flags.hsc",
"RtsOptsEnabled" is defined in "includes/RtsAPI.h" and lacks
constant macros. Therefore, in "GHC.RTS", "RtsOptsEnabled" simply
derives an Enum instance and is read as a "CInt".
* There are other ways to change RTS options (e.g. `-with-rtsopts`),
but mentioning them all in the hint might be too verbose.
Test Plan: validate
Reviewers: austin, hvr, thomie, simonmar
Reviewed By: thomie
Subscribers: thomie, rwbarton
Differential Revision: https://phabricator.haskell.org/D767
GHC Trac Issues: #9579
|
Summary: Signed-off-by: Edward Z. Yang <ezyang@cs.stanford.edu>
Test Plan: validate
Reviewers: simonmar, austin
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D679
|
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
This reverts commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd.
|
This will hopefully help ensure some basic consistency going forward by
overriding buffer variables. In particular, it sets the wrap length, sets the
offset to 4, and turns off tabs.
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
Discovered by Coverity. CID 43166.
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
These array types are smaller than Array# and MutableArray# and are
faster when the array size is small, as they don't have the overhead
of a card table. Having no card table reduces the closure size by 2
words in the typical small array case and leads to less work when
updating or garbage-collecting the array.
Reduces both the runtime and memory allocation by 8.8% on my insert
benchmark for the HashMap type in the unordered-containers package,
which makes use of lots of small arrays. With tuned GC settings
(i.e. `+RTS -A6M`) the runtime reduction is 15%.
Fixes #8923.
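A minimal usage sketch (not from the patch), via the primops as exposed
through GHC.Exts: allocate a small mutable array, write one slot, and read
it back.
```
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts
import GHC.ST (ST(..), runST)

-- A 3-element SmallMutableArray#: no card table, so the closure is two
-- words smaller than the MutableArray# equivalent for small sizes.
demo :: Int
demo = runST (ST (\s0 ->
  case newSmallArray# 3# 0 s0 of           -- three slots, all initialised to 0
    (# s1, marr #) ->
      case writeSmallArray# marr 1# 42 s1 of
        s2 -> case readSmallArray# marr 1# s2 of
          (# s3, x #) -> (# s3, x #)))

main :: IO ()
main = print demo  -- 42
```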
|
Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
|
This improves GC performance when there are a lot of TVars in the
heap. For instance, a TChan with a lot of elements causes a massive
GC drag without this patch.
There's more to do - several other STM closure types don't have write
barriers, so GC performance when there are a lot of threads blocked on
STM isn't great. But fixing the problem for TVar is a good start.
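A small reproduction of that workload (illustrative only): a channel with
many queued elements is a long chain of TVars of the kind that caused the
GC drag described above.
```
import Control.Concurrent.STM
import Control.Monad (forM_, replicateM)

main :: IO ()
main = do
  ch <- newTChanIO :: IO (TChan Int)
  -- A million queued elements: a long chain of TVars for the GC to deal with.
  forM_ [1 .. 1000000] $ \i -> atomically (writeTChan ch i)
  xs <- atomically (replicateM 10 (readTChan ch))
  print (sum xs)
```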
|
lnat was originally "long unsigned int" but we were using it when we
wanted a 64-bit type on a 64-bit machine. This broke on Windows x64,
where long == int == 32 bits. Using types of unspecified size is bad,
but what we really wanted was a type with N bits on an N-bit machine.
StgWord is exactly that.
lnat was mentioned in some APIs that clients might be using
(e.g. StackOverflowHook()), so we leave it defined but with a comment
to say that it's deprecated.
|
Mostly this meant getting pointer<->int conversions to use the right
sizes. lnat is now size_t, rather than unsigned long, as that seems a
better match for how it's used.
|
Terminology cleanup: the type "Ticks" has been renamed "Time", which
is an StgWord64 in units of TIME_RESOLUTION (currently nanoseconds).
The terminology "tick" is now used consistently to mean the interval
between timer signals.
The ticker now always ticks in realtime (actually CLOCK_MONOTONIC if
we have it). Before it used CPU time in the non-threaded RTS and
realtime in the threaded RTS, but I've discovered that the CPU timer
has terrible resolution (at least on Linux) and isn't much use for
profiling. So now we always use realtime. This should also fix
The default tick interval is now 10ms, except when profiling where we
drop it to 1ms. This gives more accurate profiles without affecting
runtime too much (<1%).
Lots of cleanups - the resolution of Time is now in one place
only (Rts.h) rather than having calculations that depend on the
resolution scattered all over the RTS. I hope I found them all.
|
User visible changes
====================
Profiling
---------
Flags renamed (the old ones are still accepted for now):
OLD NEW
--------- ------------
-auto-all -fprof-auto
-auto -fprof-exported
-caf-all -fprof-cafs
New flags:
-fprof-auto Annotates all bindings (not just top-level
ones) with SCCs
-fprof-top Annotates just top-level bindings with SCCs
-fprof-exported Annotates just exported bindings with SCCs
-fprof-no-count-entries Do not maintain entry counts when profiling
(can make profiled code go faster; useful with
heap profiling where entry counts are not used)
Cost-centre stacks have a new semantics, which should in most cases
result in more useful and intuitive profiles. If you find this not to
be the case, please let me know. This is the area where I have been
experimenting most, and the current solution is probably not the
final version, however it does address all the outstanding bugs and
seems to be better than GHC 7.2.
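For illustration, an explicit SCC annotation of the kind that -fprof-auto
now inserts on every binding; built with -prof, its entries show up in the
cost-centre stack profile:
```
-- Build: ghc -prof -fprof-auto -rtsopts Fib.hs
-- Run:   ./Fib +RTS -p     (writes a cost-centre stack profile to Fib.prof)
fib :: Int -> Integer
fib n = {-# SCC "fib" #-}
        if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (fib 30)
```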
Stack traces
------------
+RTS -xc now gives more information. If the exception originates from
a CAF (as is common, because GHC tends to lift exceptions out to the
top-level), then the RTS walks up the stack and reports the stack in
the enclosing update frame(s).
Result: +RTS -xc is much more useful now - but you still have to
compile for profiling to get it. I've played around a little with
adding 'head []' to GHC itself, and +RTS -xc does pinpoint the problem
quite accurately.
I plan to add more facilities for stack tracing (e.g. in GHCi) in the
future.
Coverage (HPC)
--------------
* derived instances are now coloured yellow if they weren't used
* likewise record field names
* entry counts are more accurate (hpc --fun-entry-count)
* tab width is now correct (markup was previously off in source with
tabs)
Internal changes
================
In Core, the Note constructor has been replaced by
Tick (Tickish b) (Expr b)
which is used to represent all the kinds of source annotation we
support: profiling SCCs, HPC ticks, and GHCi breakpoints.
Depending on the properties of the Tickish, different transformations
apply to Tick. See CoreUtils.mkTick for details.
Tickets
=======
This commit closes the following tickets, test cases to follow:
- Close #2552: not a bug, but the behaviour is now more intuitive
(test is T2552)
- Close #680 (test is T680)
- Close #1531 (test is result001)
- Close #949 (test is T949)
- Close #2466: test case has bitrotted (doesn't compile against current
version of vector-space package)
|
Now that the heap census runs in the middle of garbage collections,
the "CPU time" it was calculating included any CPU time used so far
in the current GC. This could cause CPU time to appear to go down,
which means hp2ps complained about "samples out of sequence".
I'm not sure if this is the nicest way to solve this (maybe resurrecting
mut_user_time_during_GC would be better?) but it gets things working again.
|
Previously we were completely ignoring these, due to the difficulties
of traversing the pinned blocks (the objects are not necessarily
end-to-end, we can't tell how large the gaps are). Now just count the
whole block as a big ARR_WORDS, so at least we're accounting for the
memory and it has the right type.
|
Based on a patch from David Terei.
Some parts are a little ugly (e.g. defining things that only ASSERTs
use, and only when DEBUG is defined), so we might want to tweak things a
little.
I've also turned off -Werror for didn't-inline warnings, as we now
get a few such warnings.
|
gen_workspace structures. Fixes heapprof001(prof_hc_hb).
|
Previously the code generator generated small code fragments labelled
with __stginit_M for each module M, and these performed whatever
initialisation was necessary for that module and recursively invoked
the initialisation functions for imported modules. This approach had
drawbacks:
- FFI users had to call hs_add_root() to ensure the correct
initialisation routines were called. This is a non-standard,
and ugly, API.
- unless we were using -split-objs, the __stginit dependencies would
entail linking the whole transitive closure of modules imported,
whether they were actually used or not. In an extreme case (#4387,
#4417), a module from GHC might be imported for use in Template
Haskell or an annotation, and that would force the whole of GHC to
be needlessly linked into the final executable.
So now instead we do our initialisation with C functions marked with
__attribute__((constructor)), which are automatically invoked at
program startup time (or DSO load-time). The C initialisers are
emitted into the stub.c file. This means that every time we compile
with -prof or -hpc, we now get a stub file, but thanks to #3687 that
is now invisible to the user.
There are some refactorings in the RTS (particularly for HPC) to
handle the fact that initialisers now get run earlier than they did
before.
The __stginit symbols are still generated, and the hs_add_root()
function still exists (but does nothing), for backwards compatibility.
|
This patch makes two changes to the way stacks are managed:
1. The stack is now stored in a separate object from the TSO.
This means that it is easier to replace the stack object for a thread
when the stack overflows or underflows; we don't have to leave behind
the old TSO as an indirection any more. Consequently, we can remove
ThreadRelocated and deRefTSO(), which were a pain.
This is obviously the right thing, but the last time I tried to do it
it made performance worse. This time I seem to have cracked it.
2. Stacks are now represented as a chain of chunks, rather than
a single monolithic object.
The big advantage here is that individual chunks are marked clean or
dirty according to whether they contain pointers to the young
generation, and the GC can avoid traversing clean stack chunks during
a young-generation collection. This means that programs with deep
stacks will see a big saving in GC overhead when using the default GC
settings.
A secondary advantage is that there is much less copying involved as
the stack grows. Programs that quickly grow a deep stack will see big
improvements.
In some ways the implementation is simpler, as nothing special needs
to be done to reclaim stack as the stack shrinks (the GC just recovers
the dead stack chunks). On the other hand, we have to manage stack
underflow between chunks, so there's a new stack frame
(UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
The total amount of code is probably about the same as before.
There are new RTS flags:
-ki<size> Sets the initial thread stack size (default 1k) Egs: -ki4k -ki2m
-kc<size> Sets the stack chunk size (default 32k)
-kb<size> Sets the stack chunk buffer size (default 1k)
-ki was previously called just -k, and the old name is still accepted
for backwards compatibility. These new options are documented.
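As an illustration (not part of the patch), a deliberately deep,
non-tail-recursive computation is the kind of workload that benefits, and
the chunk size can be tuned with the new flags:
```
-- Link with -rtsopts, then try e.g.:  ./Deep +RTS -kc64k -s
deepSum :: Int -> Integer
deepSum 0 = 0
deepSum n = fromIntegral n + deepSum (n - 1)  -- one stack frame per element

main :: IO ()
main = print (deepSum 100000)
```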
|
These are no longer used: once upon a time they used to have different
layout from IND and IND_PERM respectively, but that is no longer the
case since we changed the remembered set to be an array of addresses
instead of a linked list of closures.
|
This replaces the global blackhole_queue with a clever scheme that
enables us to queue up blocked threads on the closure that they are
blocked on, while still avoiding atomic instructions in the common
case.
Advantages:
- gets rid of a locked global data structure and some tricky GC code
(replacing it with some per-thread data structures and different
tricky GC code :)
- wakeups are more prompt: parallel/concurrent performance should
benefit. I haven't seen anything dramatic in the parallel
benchmarks so far, but a couple of threading benchmarks do improve
a bit.
- waking up a thread blocked on a blackhole is now O(1) (e.g. if
it is the target of throwTo).
- less sharing and better separation of Capabilities: communication
is done with messages, the data structures are strictly owned by a
Capability and cannot be modified except by sending messages.
- this change will ultimately enable us to do more intelligent
scheduling when threads block on each other. This is what started
off the whole thing, but it isn't done yet (#3838).
I'll be documenting all this on the wiki in due course.
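Illustrative scenario (not code from the patch): two threads force the same
shared thunk; whichever arrives second may block on the blackhole left by
the first, and with this change it is queued on that closure rather than on
a global blackhole queue.
```
import Control.Concurrent

-- Compile with -threaded -rtsopts and run with +RTS -N2 so both threads really run at once.
main :: IO ()
main = do
  let shared = length (filter odd [1 .. 20000000 :: Int])  -- one shared, unevaluated thunk
  done <- newEmptyMVar
  _ <- forkIO (shared `seq` putMVar done ())
  _ <- forkIO (shared `seq` putMVar done ())
  -- One thread blackholes the thunk while evaluating it; the other may
  -- block on that blackhole until the result is written back.
  takeMVar done
  takeMVar done
  print shared
```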
|
This replaces some complicated locking schemes with message-passing
in the implementation of throwTo. The benefits are
- previously it was impossible to guarantee that a throwTo from
a thread running on one CPU to a thread running on another CPU
would be noticed, and we had to rely on the GC to pick up these
forgotten exceptions. This no longer happens.
- the locking regime is simpler (though the code is about the same
size)
- threads can be unblocked from a blocked_exceptions queue without
having to traverse the whole queue now. It's a rare case, but
replaces an O(n) operation with an O(1) one.
- generally we move in the direction of sharing less between
Capabilities (aka HECs), which will become important with other
changes we have planned.
Also in this patch I replaced several STM-specific closure types with
a generic MUT_PRIM closure type, which allowed a lot of code in the GC
and other places to go away, hence the line-count reduction. The
message-passing changes resulted in about a net zero line-count
difference.
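For context (illustrative only), the user-level operation whose delivery the
patch turns into message-passing between Capabilities:
```
import Control.Concurrent
import Control.Exception

main :: IO ()
main = do
  done <- newEmptyMVar
  tid  <- forkIO $
            (threadDelay 10000000
               `catch` \e -> putStrLn ("interrupted: " ++ show (e :: SomeException)))
            `finally` putMVar done ()
  threadDelay 100000
  throwTo tid (userError "stop")  -- now delivered as a message to the target's Capability
  takeMVar done
```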
|
The GC had a two-level structure, G generations each of T steps.
Steps are for aging within a generation, mostly to avoid premature
promotion.
Measurements show that more than 2 steps is almost never worthwhile,
and 1 step is usually worse than 2. In theory fractional steps are
possible, so the ideal number of steps is somewhere between 1 and 3.
GHC's default has always been 2.
We can implement 2 steps quite straightforwardly by having each block
point to the generation to which objects in that block should be
promoted, so blocks in the nursery point to generation 0, and blocks
in gen 0 point to gen 1, and so on.
This commit removes the explicit step structures, merging generations
with steps, thus simplifying a lot of code. Performance is
unaffected. The tunable number of steps is now gone, although it may
be replaced in the future by a way to tune the aging in generation 0.
|