author | Yann Ylavic <ylavic@apache.org> | 2021-12-16 15:56:27 +0000
---|---|---
committer | Yann Ylavic <ylavic@apache.org> | 2021-12-16 15:56:27 +0000
commit | df1a5b2d7f36107f33af91db36debaf686110063 (patch) |
tree | c6f780e7c5cb13c377d30e76661688d66775fa62 /libapr.rc |
parent | 3de25a6b4dd3d78b0227a436f9e41e3063429927 (diff) |
Merge r1894621, r1894719, r1894622 from trunk:
apr_atomic: Use __atomic builtins when available.
Unlike Intel's atomic builtins (__sync_*), the more recent __atomic builtins
provide atomic load and store for weakly ordered architectures like ARM32 or
powerpc[64], so use them when available (gcc 4.6.3+).
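A minimal sketch of the kind of `__atomic`-based load/store the message describes, on GCC 4.6.3+ (the function names here are illustrative, not APR's actual API):

```c
#include <stdint.h>

/* Hypothetical wrappers showing the __atomic builtins in use.
 * Unlike the older __sync family, these provide true atomic load
 * and store, emitting the barriers that weakly ordered CPUs
 * (ARM32, powerpc[64]) require. */
static uint32_t my_atomic_read32(volatile uint32_t *mem)
{
    /* Sequentially consistent load: no reordering past this point. */
    return __atomic_load_n(mem, __ATOMIC_SEQ_CST);
}

static void my_atomic_set32(volatile uint32_t *mem, uint32_t val)
{
    /* Sequentially consistent store. */
    __atomic_store_n(mem, val, __ATOMIC_SEQ_CST);
}
```

Availability can be probed at configure time (e.g. a compile test), which is how "use them when available" is typically implemented.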
Follow up to r1894621: restore apr_atomic_init::apr__atomic_generic64_init().
Even if apr__atomic_generic64_init() is currently a noop when !APR_HAS_THREADS,
it may change later without apr_atomic_init() noticing (thanks Rüdiger).
apr_atomic: Fix load/store for weak memory ordering architectures.
Volatile access prevents compiler reordering of load/store but it's not enough
for weakly ordered archs like ARM32 and PowerPC[64].
While the __atomic builtins provide load and store, the __sync builtins don't,
so let's use an atomic add of zero for the load and an atomic exchange for the store.
The assembly code for PowerPC was not correct either, fix apr_atomic_read32()
and apr_atomic_set32() and add the necessary memory barriers for the others.
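The `__sync` fallback above can be sketched as follows (illustrative names, not APR's; APR's real implementation may differ in detail):

```c
#include <stdint.h>

/* Emulating atomic load/store with only the __sync builtins:
 * an atomic add of zero serves as the load, and a CAS loop
 * (an atomic exchange) serves as the store. Both act as full
 * barriers, which plain volatile access does not guarantee on
 * weakly ordered architectures. */
static uint32_t sync_read32(volatile uint32_t *mem)
{
    /* Adding 0 leaves the value unchanged but performs an
     * atomic read-modify-write, returning the prior value. */
    return __sync_fetch_and_add(mem, 0);
}

static void sync_set32(volatile uint32_t *mem, uint32_t val)
{
    /* Atomic exchange via compare-and-swap: retry until the
     * swap from the observed old value succeeds. */
    uint32_t old;
    do {
        old = *mem;
    } while (!__sync_bool_compare_and_swap(mem, old, val));
}
```

The add-of-zero trick trades a cheap plain load for a full read-modify-write, but it is the only way to get a barrier-correct load out of the `__sync` family alone.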
PR 50586.
Submitted by: ylavic
git-svn-id: https://svn.apache.org/repos/asf/apr/apr/branches/1.7.x@1896067 13f79535-47bb-0310-9956-ffa450edef68
Diffstat (limited to 'libapr.rc')
0 files changed, 0 insertions, 0 deletions