It was possible to read non-existent memory if we tried to read the
srt_offset field of an info table when there was no SRT and the info
table was right at the start of the text section.
This actually happened to me; I'm not sure why it never happened
before.
Test Plan: validate
Reviewers: rwbarton, ezyang, austin, bgamari
Reviewed By: austin, bgamari
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1401
|
Rename StgArrWords to StgArrBytes (see Trac #8552)
Reviewed By: austin
Differential Revision: https://phabricator.haskell.org/D1233
GHC Trac Issues: #8552
|
Summary:
[Revised version of D1076 that was committed and then backed out]
In a workload with a large amount of code, zero_static_objects_list()
takes a significant amount of time, and furthermore it is in the
single-threaded part of the GC.
This patch uses a slightly fiddly scheme for marking objects on the
static object lists, using a flag in the low 2 bits that flips between
two states to indicate whether an object has been visited during this
GC or not. We also have to take into account objects that have not
been visited yet, which might appear at any time due to runtime linking.
Test Plan: validate
Reviewers: austin, ezyang, rwbarton, bgamari, thomie
Reviewed By: bgamari, thomie
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1106
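A rough sketch of the marking scheme described above, with illustrative
names (the actual patch threads the flag through the objects'
static-link fields; this is not the patch itself):
    /* Illustrative only: the flag alternates between two non-zero
     * values, so marks left over from the previous GC read as
     * "not visited" without any list walk to clear them. */
    #define STATIC_FLAG_A 1
    #define STATIC_FLAG_B 2

    static StgWord current_static_flag = STATIC_FLAG_A;

    /* The low 2 bits of an object's static-link field hold its mark.
     * 0 means "never visited", which covers objects added by the
     * runtime linker since the last GC. */
    static int visited_this_gc (StgWord static_link_bits)
    {
        return (static_link_bits & 3) == current_static_flag;
    }

    /* At the start of each GC, flip the flag instead of zeroing the
     * whole static-object list. */
    static void flip_static_flag (void)
    {
        current_static_flag = (current_static_flag == STATIC_FLAG_A)
                                  ? STATIC_FLAG_B : STATIC_FLAG_A;
    }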
|
This reverts commit b949c96b4960168a3b399fe14485b24a2167b982.
|
Summary:
In a workload with a large amount of code, zero_static_objects_list()
takes a significant amount of time, and furthermore it is in the
single-threaded part of the GC.
This patch uses a slightly fiddly scheme for marking objects on the
static object lists, using a flag in the low 2 bits that flips between
two states to indicate whether an object has been visited during this
GC or not. We also have to take into account objects that have not
been visited yet, which might appear at any time due to runtime linking.
Test Plan: validate
Reviewers: austin, bgamari, ezyang, rwbarton
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1076
|
Test Plan: validate
Reviewers: austin, bgamari
Subscribers: thomie
Differential Revision: https://phabricator.haskell.org/D1047
|
Very large modules can sometimes contain very large SRT bitmaps (this
is a separate problem that I need to look into). The large bitmaps
often contain a lot of zeros, so this patch skips over empty words in
the bitmap.
It makes a dramatic difference in the particular example I saw,
where an old-generation GC took 0.5s before this change and 0.07s
after it.
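A sketch of the shape of the change, reconstructed from the description
above (names and loop details are illustrative): a whole bitmap word
that is zero has no SRT entries to follow, so it can be stepped over in
one go.
    /* Illustrative: scan a large SRT bitmap, skipping all-zero words. */
    StgClosure **entry = srt;              /* first SRT entry */
    StgWord w, b, bits;
    for (w = 0; w < bitmap_size_in_words; w++) {
        bits = large_bitmap->bitmap[w];
        if (bits == 0) {                   /* nothing live here: skip */
            entry += BITS_IN(StgWord);
            continue;
        }
        for (b = 0; b < BITS_IN(StgWord); b++, entry++) {
            if (bits & 1) {
                evacuate(entry);
            }
            bits >>= 1;
        }
    }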
|
When there's a conflict between two threads evacuating the same TSO,
in some cases we would update the incall->tso pointer to point to the
wrong copy of the TSO. This would get fixed during the next GC, but
if the thread completed in the meantime, it would likely crash. We're
seeing this about once per day on a heavily loaded machine (it varies
a lot though).
|
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
This reverts commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd.
|
This will hopefully help ensure some basic consistency going forward
by overriding buffer variables. In particular, it sets the wrap
length, sets the indentation offset to 4, and turns off tabs.
Signed-off-by: Austin Seipp <austin@well-typed.com>
|
The function was already inlined at two places, and since it carries
the STATIC_INLINE annotation, the assembly output should be the same.
To convince myself, I diffed the object files before and after the
patch, and they matched on my 64-bit Ubuntu 13.10 machine running gcc
4.8.1-10ubuntu9.
Also, I had to move scavenge_small_bitmap up a bit, since it's not in
any .h file.
While I was at it, I also applied the analogous patch to Compact.c,
though there I had to write `thread_small_bitmap` instead of just
moving it.
|
There is a long debate in issue #8742, but the main motivation is that
this allows a later patch to reuse the function scavenge_small_bitmap
without changing the .o-file output.
Similarly, I changed the types in rts/sm/Compact.c, so I can create
a STATIC_INLINE function for the redundant code block:
    while (size > 0) {
        if ((bitmap & 1) == 0) {
            thread((StgClosure **)p);
        }
        p++;
        bitmap = bitmap >> 1;
        size--;
    }
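For reference, the shared helper then looks something like this (the
name thread_small_bitmap comes from the patch description above; the
exact signature here is an assumption):
    STATIC_INLINE StgPtr
    thread_small_bitmap (StgPtr p, StgWord size, StgWord bitmap)
    {
        while (size > 0) {
            if ((bitmap & 1) == 0) {
                thread((StgClosure **)p);
            }
            p++;
            bitmap = bitmap >> 1;
            size--;
        }
        return p;
    }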
|
These array types are smaller than Array# and MutableArray# and are
faster when the array size is small, as they don't have the overhead
of a card table. Having no card table reduces the closure size by 2
words in the typical small-array case and leads to less work when
updating or GCing the array.
Reduces both the runtime and memory allocation by 8.8% on my insert
benchmark for the HashMap type in the unordered-containers package,
which makes use of lots of small arrays. With tuned GC settings
(i.e. `+RTS -A6M`) the runtime reduction is 15%.
Fixes #8923.
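A condensed view of why the closures shrink (layout simplified from the
RTS headers; flexible-array and alignment details elided):
    /* Ordinary boxed array: elements followed by a card table, plus a
     * size field recording the total payload size. */
    typedef struct {
        StgHeader   header;
        StgWord     ptrs;       /* number of elements */
        StgWord     size;       /* elements plus card-table words */
        StgClosure *payload[];  /* elements, then the card table */
    } StgMutArrPtrs;

    /* Small array: no card table, and hence no size field either; two
     * words saved in the typical small case, and no cards to update
     * on writes or to scan during GC. */
    typedef struct {
        StgHeader   header;
        StgWord     ptrs;
        StgClosure *payload[];
    } StgSmallMutArrPtrs;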
|
We add the invariant to the MVar blocked threads queue that
threads blocked on an atomic read are always at the front of
the queue. This invariant is easy to maintain, since takers
are only ever added to the end of the queue.
Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
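Schematically, the two enqueue cases look like this (casts, info-table
updates, and write barriers elided; END stands in for the end-of-queue
marker, and the queue cells are the MVar's head/tail chain):
    /* Blocked takeMVar: append to the tail, as before. */
    q->link = END;
    if (mvar->head == END) { mvar->head = q; }
    else                   { mvar->tail->link = q; }
    mvar->tail = q;

    /* Blocked atomic read: push on the front, so readers always
     * precede takers and the invariant is preserved. */
    q->link = mvar->head;
    mvar->head = q;
    if (mvar->tail == END) { mvar->tail = q; }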
|
The bug where TSOs were unconditionally kept on the mutable list was #1589
which was fixed in 04cddd339c000df6d02c90ce59dbffa58d2fe166.
Curiously enough, the commit that changed this comment
0417404f5d1230c9d291ea9f73e2831121c8ec99 occurred *after* this
change was made; I can only assume Simon Marlow accidentally forgot
that he had fixed this bug. :-)
Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
|
This improves GC performance when there are a lot of TVars in the
heap. For instance, a TChan with a lot of elements causes a massive
GC drag without this patch.
There's more to do - several other STM closure types don't have write
barriers, so GC performance when there are a lot of threads blocked on
STM isn't great. But fixing the problem for TVar is a good start.
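The barrier itself is the usual clean/dirty trick used for other
mutable closures in the RTS; roughly (a sketch, not the verbatim
patch):
    /* Mark a TVar dirty on its first mutation since the last GC,
     * adding it to the mutable list so that minor GCs will scan it. */
    void dirty_TVar (Capability *cap, StgTVar *p)
    {
        if (p->header.info == &stg_TVAR_CLEAN_info) {
            p->header.info = &stg_TVAR_DIRTY_info;
            recordClosureMutated(cap, (StgClosure *)p);
        }
    }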
|
The main change here is that the Cmm parser now allows high-level cmm
code with argument-passing and function calls. For example:
foo ( gcptr a, bits32 b )
{
    if (b > 0) {
        // we can make tail calls passing arguments:
        jump stg_ap_0_fast(a);
    }
    return (a, b);
}
More details on the new cmm syntax are in Note [Syntax of .cmm files]
in CmmParse.y.
The old syntax is still more-or-less supported for those occasional
code fragments that really need to explicitly manipulate the stack.
However there are a couple of differences: it is now obligatory to
give a list of live GlobalRegs on every jump, e.g.
jump %ENTRY_CODE(Sp(0)) [R1];
Again, more details in Note [Syntax of .cmm files].
I have rewritten most of the .cmm files in the RTS into the new
syntax, except for AutoApply.cmm which is generated by the genapply
program: this file could be generated in the new syntax instead and
would probably be better off for it, but I ran out of enthusiasm.
Some other changes in this batch:
- The PrimOp calling convention is gone, primops now use the ordinary
NativeNodeCall convention. This means that primops and "foreign
import prim" code must be written in high-level cmm, but they can
now take more than 10 arguments.
- CmmSink now does constant-folding (should fix #7219)
- .cmm files now go through the cmmPipeline, and as a result we
generate better code in many cases. All the object files generated
for the RTS .cmm files are now smaller. Performance should be
better too, but I haven't measured it yet.
- RET_DYN frames are removed from the RTS, lots of code goes away
- we now have some more canned GC points to cover unboxed-tuples with
2-4 pointers, which will reduce code size a little.
|
lnat was originally "long unsigned int" but we were using it when we
wanted a 64-bit type on a 64-bit machine. This broke on Windows x64,
where long == int == 32 bits. Using types of unspecified size is bad,
but what we really wanted was a type with N bits on an N-bit machine.
StgWord is exactly that.
lnat was mentioned in some APIs that clients might be using
(e.g. StackOverflowHook()), so we leave it defined but with a comment
to say that it's deprecated.
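So the headers end up with something like:
    /* StgWord always has exactly as many bits as a machine word. */
    typedef StgWord lnat;   /* deprecated: use StgWord instead */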
|
Mostly this meant getting pointer<->int conversions to use the right
sizes. lnat is now size_t, rather than unsigned long, as that seems a
better match for how it's used.
|
Now we keep any partially-full blocks in the gc_thread[] structs after
each GC, rather than moving them to the generation. This should give
us slightly better locality (though I wasn't able to measure any
difference).
Also in this patch: better sanity checking with THREADED.
|
Store the *number* of the destination generation in the Bdescr struct,
so that in evacuate() we don't have to deref gen to get it.
This is another improvement ported over from my GC branch.
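Schematically (field name dest_no is illustrative, not necessarily the
patch's), evacuate() can test a number stored directly in the block
descriptor instead of chasing the generation pointer:
    bdescr *bd = Bdescr((StgPtr)q);

    /* before: two dependent loads */
    if (bd->gen->no < evac_gen_no) { /* copy q to the destination */ }

    /* after: one load from the block descriptor itself
     * (dest_no: illustrative name for the cached number) */
    if (bd->dest_no < evac_gen_no) { /* copy q to the destination */ }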
|
This patch makes two changes to the way stacks are managed:
1. The stack is now stored in a separate object from the TSO.
This means that it is easier to replace the stack object for a thread
when the stack overflows or underflows; we don't have to leave behind
the old TSO as an indirection any more. Consequently, we can remove
ThreadRelocated and deRefTSO(), which were a pain.
This is obviously the right thing, but the last time I tried to do it,
it made performance worse. This time I seem to have cracked it.
2. Stacks are now represented as a chain of chunks, rather than
a single monolithic object.
The big advantage here is that individual chunks are marked clean or
dirty according to whether they contain pointers to the young
generation, and the GC can avoid traversing clean stack chunks during
a young-generation collection. This means that programs with deep
stacks will see a big saving in GC overhead when using the default GC
settings.
A secondary advantage is that there is much less copying involved as
the stack grows. Programs that quickly grow a deep stack will see big
improvements.
In some ways the implementation is simpler, as nothing special needs
to be done to reclaim stack as the stack shrinks (the GC just recovers
the dead stack chunks). On the other hand, we have to manage stack
underflow between chunks, so there's a new stack frame
(UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
The total amount of code is probably about the same as before.
There are new RTS flags:
    -ki<size>  Sets the initial thread stack size (default 1k).  E.g.: -ki4k, -ki2m
    -kc<size>  Sets the stack chunk size (default 32k).
    -kb<size>  Sets the stack chunk buffer size (default 1k).
-ki was previously called just -k, and the old name is still accepted
for backwards compatibility. These new options are documented.
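A condensed view of the new stack object (fields simplified from the
RTS headers):
    /* A thread's stack is now a chain of STACK objects; the TSO points
     * at the chunk currently in use. */
    typedef struct StgStack_ {
        StgHeader  header;
        StgWord32  stack_size;   /* chunk size, in words */
        StgWord32  dirty;        /* clean chunks are skipped in minor GC */
        StgPtr     sp;           /* stack pointer within this chunk */
        StgWord    stack[];      /* frames; an UNDERFLOW_FRAME at the
                                    bottom links to the next chunk */
    } StgStack;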
|
When a BCO with a zero-length bitmap was right at the edge of
allocated memory, we were reading a word of non-existent memory.
This showed up as a segfault in T789(ghci) for me, but the crash was
extremely sensitive and went away with most changes.
Also, optimised scavenge_large_bitmap a bit while I was in there.
|
Which was being used seemed to be random
|
These are no longer used: once upon a time they used to have different
layout from IND and IND_PERM respectively, but that is no longer the
case since we changed the remembered set to be an array of addresses
instead of a linked list of closures.
|
The list of threads blocked on an MVar is now represented as a list of
separately allocated objects rather than being linked through the TSOs
themselves. This lets us remove a TSO from the list in O(1) time
rather than O(n) time, by marking the list object. Removing this
linear component fixes some pathological performance cases where many
threads were blocked on an MVar and became unreachable simultaneously
(nofib/smp/threads007), or when sending an asynchronous exception to a
TSO in a long list of threads blocked on an MVar.
MVar performance has actually improved by a few percent as a result of
this change, slightly to my surprise.
This is the final cleanup in the sequence, which let me remove the old
way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent). It
is now the case that only the Capability that owns a TSO may modify
its state (well, almost), and this simplifies various things. More of
the RTS is based on message-passing between Capabilities now.
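The separately allocated queue cell is small (layout condensed):
    /* One cell per blocked thread; removing a TSO just marks its cell,
     * so removal is O(1) rather than a walk of the whole queue. */
    typedef struct StgMVarTSOQueue_ {
        StgHeader                 header;
        struct StgMVarTSOQueue_  *link;   /* next cell in the queue */
        struct StgTSO_           *tso;    /* the blocked thread */
    } StgMVarTSOQueue;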
|
This fixes #3838, and was made possible by the new BLACKHOLE
infrastructure. To allow reordering of the run queue I had to make it
doubly-linked, which entails some extra trickiness with regard to
GC write barriers and suchlike.
|
This replaces the global blackhole_queue with a clever scheme that
enables us to queue up blocked threads on the closure that they are
blocked on, while still avoiding atomic instructions in the common
case.
Advantages:
- gets rid of a locked global data structure and some tricky GC code
(replacing it with some per-thread data structures and different
tricky GC code :)
- wakeups are more prompt: parallel/concurrent performance should
benefit. I haven't seen anything dramatic in the parallel
benchmarks so far, but a couple of threading benchmarks do improve
a bit.
- waking up a thread blocked on a blackhole is now O(1) (e.g. if
it is the target of throwTo).
- less sharing and better separation of Capabilities: communication
is done with messages, the data structures are strictly owned by a
Capability and cannot be modified except by sending messages.
- this change will ultimately enable us to do more intelligent
scheduling when threads block on each other. This is what started
off the whole thing, but it isn't done yet (#3838).
I'll be documenting all this on the wiki in due course.
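Schematically, blocking on a blackhole now means sending its owner a
message rather than joining a global queue (layout condensed and
simplified):
    typedef struct MessageBlackHole_ {
        StgHeader                  header;
        struct MessageBlackHole_  *link;  /* next queued message */
        StgTSO                    *tso;   /* the blocked thread */
        StgClosure                *bh;    /* the blackhole blocked on */
    } MessageBlackHole;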
|
Accidentally pushed this patch which, while it validates, isn't
correct.
rolling back:
Fri Mar 19 11:21:27 GMT 2010 Simon Marlow <marlowsd@gmail.com>
* slight improvement to scavenging of update frames when a collision has occurred
M ./rts/sm/Scav.c -19 +15
|
This replaces some complicated locking schemes with message-passing
in the implementation of throwTo. The benefits are
- previously it was impossible to guarantee that a throwTo from
a thread running on one CPU to a thread running on another CPU
would be noticed, and we had to rely on the GC to pick up these
forgotten exceptions. This no longer happens.
- the locking regime is simpler (though the code is about the same
size)
- threads can be unblocked from a blocked_exceptions queue without
having to traverse the whole queue now. It's a rare case, but
replaces an O(n) operation with an O(1) one.
- generally we move in the direction of sharing less between
Capabilities (aka HECs), which will become important with other
changes we have planned.
Also in this patch I replaced several STM-specific closure types with
a generic MUT_PRIM closure type, which allowed a lot of code in the GC
and other places to go away, hence the line-count reduction. The
message-passing changes resulted in about a net zero line-count
difference.
|
The card table is an array of bytes, placed directly following the
actual array data. This means that array reading is unaffected, but
array writing needs to read the array size from the header in order to
find the card table.
We use a bytemap rather than a bitmap, because updating the card table
must be multi-thread safe. Each byte refers to 128 entries of the
array, but this is tunable by changing the constant
MUT_ARR_PTRS_CARD_BITS in includes/Constants.h.
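A sketch of the write path (MUT_ARR_PTRS_CARD_BITS is the real
constant from includes/Constants.h; the helper itself is illustrative):
    #define MUT_ARR_PTRS_CARD_BITS 7    /* 128 elements per card */

    /* After arr->payload[i] = x, mark the card covering index i.
     * The byte map sits directly after the ptrs elements, and a plain
     * byte store is safe in the presence of concurrent writers. */
    static void mark_card (StgMutArrPtrs *arr, StgWord i)
    {
        StgWord8 *cards = (StgWord8 *)&arr->payload[arr->ptrs];
        cards[i >> MUT_ARR_PTRS_CARD_BITS] = 1;
    }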
|
The GC had a two-level structure, G generations each of T steps.
Steps are for aging within a generation, mostly to avoid premature
promotion.
Measurements show that more than 2 steps is almost never worthwhile,
and 1 step is usually worse than 2. In theory fractional steps are
possible, so the ideal number of steps is somewhere between 1 and 3.
GHC's default has always been 2.
We can implement 2 steps quite straightforwardly by having each block
point to the generation to which objects in that block should be
promoted, so blocks in the nursery point to generation 0, and blocks
in gen 0 point to gen 1, and so on.
This commit removes the explicit step structures, merging generations
with steps, thus simplifying a lot of code. Performance is
unaffected. The tunable number of steps is now gone, although it may
be replaced in the future by a way to tune the aging in generation 0.
|
This improves the performance of the mark/compact and mark/region
collectors, and paves the way for doing mark/region with smaller
region sizes, in the style of Immix.
|
There were two bugs, and had it not been for the first one we would
not have noticed the second one, so this is quite fortunate.
The first bug was in stg_unblockAsyncExceptionszh_ret: when we found
a pending exception to raise but didn't end up raising it, there was a
missing adjustment to the stack pointer.
The second bug was that this case was actually happening at all: it
ought to be incredibly rare, because the pending exception thread
would have to be killed between us finding it and attempting to raise
the exception. This made me suspicious. It turned out that there was
a race condition on the tso->flags field; multiple threads were
updating this bitmask field non-atomically (one of the bits is the
dirty-bit for the generational GC). The fix is to move the dirty bit
into its own field of the TSO, making the TSO one word larger (sadly).
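The race, schematically (flag names here are illustrative):
    /* Two capabilities doing a non-atomic read-modify-write on the
     * same bitmask can lose each other's updates: */
    tso->flags |= TSO_DIRTY;       /* capability A: mark for the GC  */
    tso->flags &= ~TSO_BLOCKEX;    /* capability B: may undo A's bit */

    /* The fix: the dirty mark gets its own word in the TSO, so it no
     * longer shares a read-modify-write with other flag updates. */
    tso->dirty = 1;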
|
The first phase of this tidyup is focussed on the header files, and in
particular making sure we are exposing publicly exactly what we need
to, and no more.
- Rts.h now includes everything that the RTS exposes publicly,
rather than a random subset of it.
- Most of the public header files have moved into subdirectories, and
many of them have been renamed. But clients should not need to
include any of the other headers directly, just #include the main
public headers: Rts.h, HsFFI.h, RtsAPI.h.
- All the headers needed for via-C compilation have moved into the
stg subdirectory, which is self-contained. Most of the headers for
the rest of the RTS APIs have moved into the rts subdirectory.
- I left MachDeps.h where it is, because it is so widely used in
Haskell code.
- I left a deprecated stub for RtsFlags.h in place. The flag
structures are now exposed by Rts.h.
- Various internal APIs are no longer exposed by public header files.
- Various bits of dead code and declarations have been removed
- More gcc warnings are turned on, and the RTS code is more
warning-clean.
- More source files #include "PosixSource.h", and hence only use
standard POSIX (1003.1c-1995) interfaces.
There is a lot more tidying up still to do, this is just the first
pass. I also intend to standardise the names for external RTS APIs
(e.g. use the rts_ prefix consistently), and declare the internal APIs
as hidden for shared libraries.
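For client code the rule of thumb is now simply:
    /* Everything the RTS exposes publicly comes in through these. */
    #include "Rts.h"      /* includes the flag structures, etc. */
    #include "HsFFI.h"
    #include "RtsAPI.h"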
|
New flag: "+RTS -qb" disables load-balancing in the parallel GC
(though this is subject to change, I think we will probably want to do
something more automatic before releasing this).
To get the "PARGC3" configuration described in the "Runtime support
for Multicore Haskell" paper, use "+RTS -qg0 -qb -RTS".
The main advantage of this is that it allows us to easily disable
load-balancing altogether, which turns out to be important in parallel
programs. Maintaining locality is sometimes more important than
spreading the work out in parallel GC. There is a side benefit in
that the parallel GC should have improved locality even when
load-balancing, because each processor prefers to take work from its
own queue before stealing from others.
|
After much experimentation, I've found a formulation for HEAP_ALLOCED
that (a) improves performance, and (b) doesn't have any race
conditions when used concurrently. GC performance on x86_64 should be
improved slightly. See extensive comments in MBlock.h for the
details.
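Very roughly, the flavour of the formulation (the real definition in
MBlock.h is considerably more careful, especially about 64-bit address
spaces and concurrent access):
    /* One byte per megablock: HEAP_ALLOCED becomes a single load. */
    #define MBLOCK_SHIFT 20              /* 1 MB megablocks */

    extern StgWord8 mblock_map[];        /* set when an mblock is allocated */

    #define HEAP_ALLOCED(p) \
        (mblock_map[(StgWord)(p) >> MBLOCK_SHIFT])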