| | | |
|---|---|---|
| author | Daniel Black <daniel@mariadb.org> | 2021-02-15 12:31:31 +1100 |
| committer | Daniel Black <daniel@mariadb.org> | 2021-02-25 10:06:15 +1100 |
| commit | e0ba68ba34f5623cfa3c61e2e1966971703297f5 (patch) | |
| tree | 8d73e4a2156bc1601d006c272c2b06756475c753 /include | |
| parent | cea03285ecf79312ed70cbd00fbe4233c2ea040f (diff) | |
| download | mariadb-git-e0ba68ba34f5623cfa3c61e2e1966971703297f5.tar.gz | |
MDEV-23510: arm64 lf_hash alignment of pointers
Like the 10.2 version (commit 1635686b509111c10cdb0842a0dabc0ef07bdf56),
except using C++ on the internal functions for my_assume_aligned.
volatile != atomic.
volatile has no memory-barrier semantics; it is for memory-mapped I/O.
So let's allow some optimizer gains and stop pretending it helps
with memory atomicity.
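To make the distinction concrete (a minimal sketch, not code from this patch): volatile only forces the compiler to actually perform each access, which is what memory-mapped device registers need, while std::atomic additionally guarantees indivisible whole-value reads and writes and lets the caller choose the barriers.

```cpp
#include <atomic>
#include <cstdint>

// volatile: every access really happens, but there is no atomicity and no
// ordering with respect to other threads -- fine for mmapped I/O registers,
// wrong as an inter-thread synchronisation primitive.
volatile intptr_t link_v = 0;

// std::atomic: indivisible loads/stores plus explicit memory ordering.
std::atomic<intptr_t> link_a{0};

void publish(intptr_t next)
{
  // release store: writes before this line become visible to any thread
  // that acquire-loads the same atomic and observes the new value
  link_a.store(next, std::memory_order_release);
}

intptr_t consume()
{
  // acquire load: pairs with the release store above
  return link_a.load(std::memory_order_acquire);
}
```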
The MDEV lists a SEGV; the assumption is that an address was
partially read. C packs struct members strictly in order, and on arm64
atomic access operates on 128-bit lines. A pointer (link, 64 bits)
followed by a hashnr (uint32, 32 bits) leaves the following key
(uchar *, 64 bits) not naturally aligned for a pointer and, worse,
split across such a line, which is the processor's view of an atomic
reservation of memory.
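The boundary arithmetic can be checked at compile time. A sketch under the layout the message describes (link at offset 0, hashnr at 8, key at 12, no padding); splits_granule is a helper invented here for illustration, not a MariaDB function:

```cpp
#include <cstddef>

// True when a field of `size` bytes at `offset` crosses a `granule`-byte
// boundary; 16 bytes = the 128-bit line discussed above.
constexpr bool splits_granule(std::size_t offset, std::size_t size,
                              std::size_t granule = 16)
{
  return offset / granule != (offset + size - 1) / granule;
}

// Old order: an 8-byte key pointer at offset 12 spans bytes 12..19 and
// crosses the boundary at byte 16.
static_assert(splits_granule(12, 8), "key straddles a 128-bit line");

// New order: key at offset 8, directly after link, stays within one line.
static_assert(!splits_granule(8, 8), "key fits in one 128-bit line");
```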
lf_dynarray_lvalue is assumed to return a 64-bit-aligned address.
As a solution, move the 32-bit hashnr to the end so that the key
pointer is not split across two of these lines.
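A sketch of the reordering; the field names follow LF_SLIST in mysys/lf_hash.cc, but this is an illustration of the idea, not the verbatim MariaDB definition:

```cpp
#include <cstdint>
#include <cstddef>

// Before: the 32-bit hashnr sits between pointer-sized members. The commit
// message reasons that, with members packed strictly in order, this leaves
// the key pointer at offset 12, unaligned and straddling a 128-bit line.
struct slist_before
{
  intptr_t link;             // next-element pointer plus deletion flag
  uint32_t hashnr;           // reversed hash number, used for sort order
  const unsigned char *key;
  std::size_t keylen;
};

// After: pointer-sized members come first, so each stays naturally aligned
// relative to the 64-bit-aligned address lf_dynarray_lvalue hands out; the
// lone 32-bit member moves to the end, where it displaces nothing.
struct slist_after
{
  intptr_t link;
  const unsigned char *key;
  std::size_t keylen;
  uint32_t hashnr;
};
```

The alignment assumption itself is what my_assume_aligned communicates to the compiler; on GCC/Clang the underlying primitive is __builtin_assume_aligned(ptr, align), which the wrapper presumably uses.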
Tested by: Krunal Bauskar
Reviewer: Marko Mäkelä
Diffstat (limited to 'include')
-rw-r--r-- | include/lf.h | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/include/lf.h b/include/lf.h
index 88ac644c349..267a66aeeaf 100644
--- a/include/lf.h
+++ b/include/lf.h
@@ -125,7 +125,7 @@ void *lf_alloc_new(LF_PINS *pins);
 C_MODE_END
 
 /*
-  extendible hash, lf_hash.c
+  extendible hash, lf_hash.cc
 */
 #include <hash.h>
 
```