author | Davi Arnaut <davi.arnaut@oracle.com> | 2010-07-23 09:37:10 -0300
---|---|---
committer | Davi Arnaut <davi.arnaut@oracle.com> | 2010-07-23 09:37:10 -0300
commit | 93e38e8a3ea9fdbd88a34da487cb2ccbfdaf8c11 (patch) |
tree | 49a56f15656733bfb1bab6fc971e59860daa8979 /include/atomic/gcc_builtins.h |
parent | 9deafdd2db229886f37bdfae719936757caa3e1d (diff) |
download | mariadb-git-93e38e8a3ea9fdbd88a34da487cb2ccbfdaf8c11.tar.gz |
Bug#22320: my_atomic-t unit test fails
Bug#52261: 64 bit atomic operations do not work on Solaris i386 gcc in debug compilation
One of the various problems was that the source operand to
CMPXCHG8B was marked as an input/output operand, which allowed
GCC to pick the EBX register as the destination register for
the CMPXCHG8B instruction. This could lead to crashes, as EBX
is also implicitly used by the instruction: the value could be
corrupted and trigger a protection fault once it was used to
access a position in memory.
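For illustration only (this is not the x86-gcc.h code; the function
name and exact constraints are assumptions), a 64-bit CAS built on
CMPXCHG8B with safe operand placement looks roughly like the sketch
below: the memory location is the only input/output operand, the new
value is pinned to ECX:EBX, and the compare value to the EDX:EAX
pair, so GCC never reuses EBX as a destination.

```c
#include <stdint.h>

/* Illustrative sketch for 32-bit x86, non-PIC builds (with -fPIC,
   EBX is reserved and needs extra handling that is omitted here).
   CMPXCHG8B implicitly uses EDX:EAX as the compare value and
   ECX:EBX as the new value; only the memory operand is in/out. */
static inline int
cas64_sketch(volatile int64_t *ptr, int64_t oldval, int64_t newval)
{
  int64_t seen;
  __asm__ __volatile__("lock; cmpxchg8b %1"
                       : "=A" (seen), "+m" (*ptr)        /* EDX:EAX out, memory in/out */
                       : "b" ((uint32_t) newval),         /* new value, low half       */
                         "c" ((uint32_t) (newval >> 32)), /* new value, high half      */
                         "0" (oldval)                     /* compare value in EDX:EAX  */
                       : "memory", "cc");
  return seen == oldval;
}
```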
Another problem was the lack of proper clobbers for the atomic
operations, as well as a discrepancy between the implementations
of the compare-and-swap operation. The specific problems are
described and fixed by Kristian Nielsen's patches:
Patch 1:
Fix bugs in my_atomic_cas*(val,cmp,new) where *cmp is accessed
after the CAS succeeds.
In the gcc builtin implementation, the problem was that *cmp
was read again after the atomic CAS to check whether old
*val == *cmp; this fails if the CAS is successful and another
thread modifies *cmp in between.
In the x86-gcc implementation, the problem was that *cmp was
written even in the case of a successful CAS; this leaves a
window in which it can clobber a value written by another
thread after the successful CAS.
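A rough C equivalent of the fixed gcc-builtins variant
(illustrative names; the actual change is the gcc_builtins.h hunk
at the bottom of this page): read *cmp once into a local, compare
against that local, and write *cmp back only when the CAS failed.

```c
#include <stdint.h>

/* Sketch of the corrected pattern: *cmp is read exactly once, so a
   concurrent update of *cmp after a successful CAS can neither make
   the success check fail nor be overwritten by us. */
static int my_cas32_sketch(volatile int32_t *a, int32_t *cmp, int32_t set)
{
  int32_t cmp_val= *cmp;                                  /* single read of *cmp */
  int32_t sav= __sync_val_compare_and_swap(a, cmp_val, set);
  int ret= (sav == cmp_val);
  if (!ret)
    *cmp= sav;                /* on failure, report the value actually found */
  return ret;
}
```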
Patch 2:
Add a GCC asm "memory" clobber to primitives that imply a
memory barrier.
This signifies to GCC that any potentially aliased memory must
be flushed before the operation and re-read after it, so that
reads or modifications of such memory values in other threads
work as intended.
In effect, it makes these primitives work as memory barriers
for the compiler as well as the CPU. This is better and more
correct than adding "volatile" to variables.
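As a small illustration (not code from this patch; the LOCK XADD
based add is an assumption), the relevant difference is just the
"memory" entry in the clobber list, which makes the asm act as a
compiler barrier in addition to the CPU-level atomicity.

```c
#include <stdint.h>

/* Illustrative atomic fetch-and-add for 32-bit x86.  The "memory"
   clobber forces GCC to flush cached values before the asm and
   reload them afterwards, so the primitive also orders surrounding
   accesses for the compiler, instead of relying on "volatile". */
static inline int32_t fetch_add32_sketch(volatile int32_t *a, int32_t v)
{
  __asm__ __volatile__("lock; xaddl %0, %1"
                       : "+r" (v), "+m" (*a)
                       :
                       : "memory", "cc");
  return v;                   /* v now holds the value before the add */
}
```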
include/atomic/gcc_builtins.h:
Do not read from *cmp after the operation, as the expected value
may already have been overwritten if the operation was successful.
include/atomic/nolock.h:
Prefer system provided atomics over the broken x86 asm.
include/atomic/x86-gcc.h:
Do not mark source operands as input/output operands.
Add proper memory clobbers.
include/my_atomic.h:
Add notes about my_atomic_add and my_atomic_cas behaviors.
unittest/mysys/my_atomic-t.c:
Remove the workaround; if the test fails, there is either a
problem with the atomic operations code or the specific compiler
version should be blacklisted.
Diffstat (limited to 'include/atomic/gcc_builtins.h')
-rw-r--r-- | include/atomic/gcc_builtins.h | 5 |
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/atomic/gcc_builtins.h b/include/atomic/gcc_builtins.h
index 100ff80cacd..d03d28f572e 100644
--- a/include/atomic/gcc_builtins.h
+++ b/include/atomic/gcc_builtins.h
@@ -22,8 +22,9 @@
   v= __sync_lock_test_and_set(a, v);
 #define make_atomic_cas_body(S)                     \
   int ## S sav;                                     \
-  sav= __sync_val_compare_and_swap(a, *cmp, set);   \
-  if (!(ret= (sav == *cmp))) *cmp= sav;
+  int ## S cmp_val= *cmp;                           \
+  sav= __sync_val_compare_and_swap(a, cmp_val, set);\
+  if (!(ret= (sav == cmp_val))) *cmp= sav
 
 #ifdef MY_ATOMIC_MODE_DUMMY
 #define make_atomic_load_body(S)   ret= *a