This got fixed sometime recently; it's not worth trying to
figure out which commit.
Co-authored-by: Sven Tennie <sven.tennie@gmail.com>
Co-authored-by: Matthew Pickering <matthewtpickering@gmail.com>
Co-authored-by: Ben Gamari <bgamari.foss@gmail.com>
Previously the overflow check for the IMAGE_REL_AMD64_ADDR32NB
relocation failed to account for the signed nature of the value.
Specifically, the overflow check was:

    uint64_t v;
    v = S + A;
    if (v >> 32) { ... }

However, `v` ultimately needs to fit into 32 bits as a signed value.
Consequently, values `v > 2^31` do in fact overflow, yet this is not
caught by the existing overflow check.

Here we rewrite the overflow check to instead ensure that
`INT32_MIN <= v <= INT32_MAX`. There is now quite a bit of repetition
between the `IMAGE_REL_AMD64_REL32` and `IMAGE_REL_AMD64_ADDR32` cases,
but I am leaving fixing this for future work.

This bug was first noticed by @awson.

Fixes #15808.
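
As a sketch, the fixed check might look like the following in C, with
`S` (symbol address) and `A` (addend) as stand-ins for the values the
linker actually computes:

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch of the corrected overflow check; S and A stand in for the
     * symbol address and addend computed by the linker. */
    static bool fits_in_signed_32(uint64_t S, uint64_t A)
    {
        /* Treat the relocated value as signed, since the 32-bit
         * relocation field is interpreted as a signed value. */
        int64_t v = (int64_t)(S + A);
        /* The old test, (S + A) >> 32 != 0, misses values in
         * (2^31, 2^32), which overflow a signed 32-bit field. */
        return v >= INT32_MIN && v <= INT32_MAX;
    }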
The previous merge mistakenly reverted it.
Since the latter wants to call getRTSStats.
While at face value this seems a bit heavy, I think it's far better than
enforcing ordering on every access.
We can generally be pretty relaxed with the barriers here, since the
timer thread runs in a loop and will pick up any update on a later
iteration.
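
For illustration only — the names below are hypothetical, not the RTS's
actual ticker code — a stop flag polled by a looping thread can safely
use relaxed ordering, since the loop re-reads the flag on each
iteration and no other data is published through it:

    #include <stdatomic.h>
    #include <stdbool.h>

    static _Atomic bool ticker_running = true;

    /* Called from another thread; relaxed suffices because no memory
     * is published through this flag. */
    void stopTickerFlag(void)
    {
        atomic_store_explicit(&ticker_running, false,
                              memory_order_relaxed);
    }

    void tickerLoop(void)
    {
        while (atomic_load_explicit(&ticker_running,
                                    memory_order_relaxed)) {
            /* sleep until the next tick, then run the timer handler */
        }
    }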
Previously `initScheduler` would attempt to pause the ticker and in so
doing acquire the ticker mutex. However, initTicker, which is
responsible for initializing said mutex, hadn't been called
yet.
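
A simplified sketch of the ordering bug and its fix, using pthread
stand-ins rather than the RTS's actual ticker state:

    #include <pthread.h>

    static pthread_mutex_t ticker_mutex;

    void initTicker(void)
    {
        pthread_mutex_init(&ticker_mutex, NULL);
    }

    void pauseTicker(void)
    {
        /* Undefined behavior if initTicker() has not run yet. */
        pthread_mutex_lock(&ticker_mutex);
        /* ... stop the timer ... */
        pthread_mutex_unlock(&ticker_mutex);
    }

    void initScheduler(void)
    {
        initTicker();   /* initialize the mutex first */
        pauseTicker();  /* previously this ran against an
                           uninitialized mutex */
    }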
This avoids #17289.
This suppresses the other side of a race during shutdown.
Previously the `current_value`, `first_watch_queue_entry`, and
`num_updates` fields of `StgTVar` were marked as `volatile` in an
attempt to provide strong ordering. Of course, this isn't sufficient.
We now use proper atomic operations. In most of these cases I strengthen
the ordering all the way to SEQ_CST, although it's possible that some
could be weakened with some thought.
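
In spirit, the change looks like the following sketch, where
`TVarSketch` is a simplified stand-in for `StgTVar`, not the real
definition:

    #include <stdatomic.h>

    /* Simplified stand-in for StgTVar; the real struct has more
     * fields. */
    typedef struct {
        _Atomic(void *) current_value;
    } TVarSketch;

    /* `volatile` gave no inter-thread ordering guarantees; explicit
     * SEQ_CST atomics do. */
    void *tvarRead(TVarSketch *tv)
    {
        return atomic_load_explicit(&tv->current_value,
                                    memory_order_seq_cst);
    }

    void tvarWrite(TVarSketch *tv, void *v)
    {
        atomic_store_explicit(&tv->current_value, v,
                              memory_order_seq_cst);
    }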
This fixes a potentially harmful race where we failed to synchronize
before looking at a TVar's current_value.
Also did a bit of refactoring to abstract over the management of
max_commits.
Here we are doing lazy initialization; it's okay if we do the check more
than once, hence a relaxed operation is fine.
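
As an illustrative sketch (hypothetical names, not the RTS's code):
when the computed value is a single word and the computation is
idempotent, racing initializers are benign, so relaxed operations
suffice:

    #include <stdatomic.h>

    extern unsigned long computeValue(void); /* assumed idempotent,
                                                never returns 0 */

    static _Atomic unsigned long cached = 0; /* 0 = not yet computed */

    unsigned long getCached(void)
    {
        unsigned long v = atomic_load_explicit(&cached,
                                               memory_order_relaxed);
        if (v == 0) {
            /* Two threads may both compute; they store the same
             * value, so the race is benign and needs no stronger
             * ordering. */
            v = computeValue();
            atomic_store_explicit(&cached, v, memory_order_relaxed);
        }
        return v;
    }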
Fixes #17275.
After a few attempts at shoring up the previous implementation, I ended
up turning to the literature and now use the proven implementation,

> N.M. Lê, A. Pop, A. Cohen, and F.Z. Nardelli. "Correct and Efficient
> Work-Stealing for Weak Memory Models". PPoPP'13, February 2013,
> ACM 978-1-4503-1922/13/02.

Not only is this approach formally proven correct under C11 semantics,
but it is also proven to be a bit faster in practice.
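
For reference, a condensed sketch of the paper's `steal` operation,
with a fixed-size buffer and simplified names (the real deque also
provides `push` and `take` and a growable array):

    #include <stdatomic.h>
    #include <stddef.h>

    #define EMPTY (-1)
    #define ABORT (-2)

    typedef struct {
        _Atomic size_t top, bottom;
        _Atomic int *buffer;   /* array of atomic elements */
        size_t capacity;
    } Deque;

    int steal(Deque *q)
    {
        size_t t = atomic_load_explicit(&q->top, memory_order_acquire);
        /* Fence orders the read of top before the read of bottom. */
        atomic_thread_fence(memory_order_seq_cst);
        size_t b = atomic_load_explicit(&q->bottom,
                                        memory_order_acquire);
        if (t < b) {
            int x = atomic_load_explicit(&q->buffer[t % q->capacity],
                                         memory_order_relaxed);
            /* Race to claim the element against the owner and other
             * thieves; losing means someone else took it. */
            if (!atomic_compare_exchange_strong_explicit(
                    &q->top, &t, t + 1,
                    memory_order_seq_cst, memory_order_relaxed))
                return ABORT;
            return x;
        }
        return EMPTY;
    }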
Not only is this in general a good idea, but it turns out that GCC
unrolls the retry loop, resulting in massive code bloat in critical
parts of the RTS (e.g. `evacuate`).
Ensure that the GC leader synchronizes with workers before calling
stat_endGC.
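
A hedged sketch of this kind of synchronization, with hypothetical
names rather than the RTS's actual GC code: workers publish their
results with a release increment, and the leader waits with acquire
loads before reading them:

    #include <stdatomic.h>

    static _Atomic unsigned int n_finished = 0;

    /* Each GC worker calls this when done; RELEASE publishes
     * everything it wrote (e.g. its per-thread stats). */
    void workerDone(void)
    {
        atomic_fetch_add_explicit(&n_finished, 1, memory_order_release);
    }

    /* The leader waits until all workers are done; ACQUIRE ensures
     * their writes are visible before e.g. stat_endGC reads them. */
    void leaderWait(unsigned int n_workers)
    {
        while (atomic_load_explicit(&n_finished, memory_order_acquire)
                   < n_workers) {
            /* spin; a real implementation would yield or use a
             * condition variable */
        }
    }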
Previously we would take all capabilities but fail to join on the thread
itself, potentially resulting in a leaked thread.
By taking all_tasks_mutex in stat_exit. Also better document the fact
that the task statistics are protected by all_tasks_mutex.
This fixes two potentially problematic data races in the StablePtr
implementation:
* We would fail to RELEASE the stable pointer table when enlarging it,
causing other cores to potentially see uninitialized memory.
* We would fail to ACQUIRE when dereferencing a stable pointer.
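
A simplified sketch of the release/acquire pairing described above; the
table struct and function names are illustrative, not the RTS's actual
stable pointer table:

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        void **entries;
        size_t size;
    } Table;

    static _Atomic(Table *) spt;  /* stand-in for the table pointer */

    void enlargeTable(size_t new_size)
    {
        Table *old = atomic_load_explicit(&spt, memory_order_relaxed);
        Table *t = malloc(sizeof *t);
        t->entries = calloc(new_size, sizeof(void *));
        if (old)
            memcpy(t->entries, old->entries,
                   old->size * sizeof(void *));
        t->size = new_size;
        /* RELEASE: the copied entries must become visible before the
         * new table pointer does; otherwise other cores may read
         * uninitialized memory. (The old table is leaked here for
         * simplicity.) */
        atomic_store_explicit(&spt, t, memory_order_release);
    }

    void *derefStablePtrSketch(size_t i)
    {
        /* ACQUIRE: pairs with the release store above. */
        Table *t = atomic_load_explicit(&spt, memory_order_acquire);
        return t->entries[i];
    }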
Due to #18883.