author    | Davi Arnaut <davi.arnaut@oracle.com> | 2010-11-30 21:19:49 -0200
committer | Davi Arnaut <davi.arnaut@oracle.com> | 2010-11-30 21:19:49 -0200
commit    | cfe8acb19870cb82b4508d208e5df8525536a46d (patch)
tree      | 00fa1777fc730d888f0e9333e26e60583f3aadea
parent    | e1e81ceb83dfc344cb20675604d46e8cb0e7403d (diff)
download  | mariadb-git-cfe8acb19870cb82b4508d208e5df8525536a46d.tar.gz
Bug#56760: my_atomics failures on osx10.5-x86-64bit
The problem was due to a misuse of GCC asm constraints in the
implementation of an atomic load. On x86_64, the load was implemented
as a cmpxchg, which implicitly uses the eax register as both a source
and a destination operand, yet the dummy value used for the comparison
was never actually loaded into eax (among other problems).
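To make the constraint misuse concrete, here is the rejected load body
from the diff further below, with editorial annotations added as
comments (the annotations are not part of the original source):

    ret= 0;
    asm volatile (LOCK_prefix "; cmpxchg %2, %0"
                  : "=m" (*a), "=a" (ret)  /* "=a" declares eax as
                                              write-only, so GCC never
                                              preloads ret into it... */
                  : "r" (ret), "m" (*a)    /* ...while "r" passes the
                                              comparison value in some
                                              arbitrary register, even
                                              though cmpxchg always
                                              compares against eax. */
                  : "memory");

Had the cmpxchg approach been kept, one correct variant would have
been a "+a" (ret) read-write constraint, so that eax is initialized
from ret before the instruction and written back afterwards.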
The core problem is that cmpxchg is unnecessary for a load on x86_64,
as there are simpler instructions such as xadd. Even then, such
instructions would serve only as a memory barrier, since aligned loads
and stores are atomic by definition on this architecture. Hence, the
solution is to explicitly issue the required CPU and compiler barriers.
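As an illustration of the fixed approach, here is a minimal standalone
sketch (the function name is illustrative; the actual patch expresses
the same sequence inside the make_atomic_load_body macro, as the diff
below shows):

    #include <stdint.h>

    /* Sketch: CPU barrier, then a plain load (already atomic when
       aligned on x86_64), then a compiler-only barrier. */
    static inline int32_t atomic_load32_sketch(volatile int32_t *a)
    {
      int32_t ret;
      /* mfence serializes all prior loads and stores. */
      asm volatile ("mfence" ::: "memory");
      ret= *a;
      /* An empty asm with a "memory" clobber emits no instruction but
         prevents the compiler from reordering across the load. */
      asm volatile ("" ::: "memory");
      return ret;
    }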
include/atomic/x86-gcc.h:
Issue a synchronizing instruction before loading the value.
Afterwards, issue a compiler barrier to prevent reordering.
-rw-r--r-- | include/atomic/x86-gcc.h | 14
1 file changed, 7 insertions, 7 deletions
diff --git a/include/atomic/x86-gcc.h b/include/atomic/x86-gcc.h
index 90602ef900c..ea3202aa9c9 100644
--- a/include/atomic/x86-gcc.h
+++ b/include/atomic/x86-gcc.h
@@ -78,15 +78,15 @@
                 : "memory")
 
 /*
-  Actually 32-bit reads/writes are always atomic on x86
-  But we add LOCK_prefix here anyway to force memory barriers
+  Actually 32/64-bit reads/writes are always atomic on x86_64,
+  nonetheless issue memory barriers as appropriate.
 */
 #define make_atomic_load_body(S)                       \
-  ret=0;                                               \
-  asm volatile (LOCK_prefix "; cmpxchg %2, %0"         \
-                : "=m" (*a), "=a" (ret)                \
-                : "r" (ret), "m" (*a)                  \
-                : "memory")
+  /* Serialize prior load and store operations. */     \
+  asm volatile ("mfence" ::: "memory");                \
+  ret= *a;                                             \
+  /* Prevent compiler from reordering instructions. */ \
+  asm volatile ("" ::: "memory")
 #define make_atomic_store_body(S)                      \
   asm volatile ("; xchg %0, %1;"                       \
                 : "=m" (*a), "+r" (v)                  \
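As a side note on the unchanged store path visible in the diff context:
xchg with a memory operand is implicitly locked on x86, so the store
body is both atomic and a full barrier without an explicit fence. A
standalone sketch of the same idea (the function name is illustrative):

    #include <stdint.h>

    /* Sketch: xchg with a memory operand carries an implicit LOCK
       prefix on x86, giving the store full-barrier semantics. */
    static inline void atomic_store32_sketch(volatile int32_t *a, int32_t v)
    {
      asm volatile ("xchg %0, %1"
                    : "=m" (*a), "+r" (v)
                    :
                    : "memory");
    }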