path: root/sql/wsrep_notify.cc
author     Daniel Black <daniel@mariadb.org>  2021-02-15 12:31:31 +1100
committer  Daniel Black <daniel@mariadb.org>  2021-02-25 10:06:15 +1100
commit     e0ba68ba34f5623cfa3c61e2e1966971703297f5 (patch)
tree       8d73e4a2156bc1601d006c272c2b06756475c753 /sql/wsrep_notify.cc
parent     cea03285ecf79312ed70cbd00fbe4233c2ea040f (diff)
download   mariadb-git-e0ba68ba34f5623cfa3c61e2e1966971703297f5.tar.gz
MDEV-23510: arm64 lf_hash alignment of pointers
Like the 10.2 version 1635686b509111c10cdb0842a0dabc0ef07bdf56, except
using C++ on internal functions for my_assume_aligned.

volatile != atomic.

volatile has no memory-barrier semantics; it is for memory-mapped I/O.
So let's allow some optimizer gains and stop pretending it helps with
memory atomicity.

The MDEV lists a SEGV, so the assumption is made that an address was
partially read. C packs struct members strictly in order, and on arm64
the cache line size is 128 bits. A pointer (link, 64 bits) followed by
a hashnr (uint32, 32 bits) leaves the following key (uchar *, 64 bits)
not naturally aligned for a pointer and, worse, split across a cache
line, which is the processor's view of an atomic reservation of
memory. lf_dynarray_lvalue is assumed to return a 64-bit-aligned
address.

As a solution, move the 32-bit hashnr to the end of the struct so that
the *key pointer is not split across two cache lines.

Tested by: Krunal Bauskar
Reviewer: Marko Mäkelä
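
A minimal sketch of the layout change described above, with
hypothetical struct names (the real LF_SLIST lives in mysys/lf_hash.c
and is not reproduced here). Moving the 32-bit hashnr behind the
pointer members keeps *key at a 64-bit offset from the 64-bit-aligned
base that lf_dynarray_lvalue is assumed to return:

    #include <cstddef>
    #include <cstdint>

    // Hypothetical reconstruction of the layouts the commit message
    // describes; names and types follow the text, not the actual tree.
    struct slist_before              // problematic ordering
    {
      intptr_t link;                 // offset 0, 64 bits
      uint32_t hashnr;               // offset 8, 32 bits
      const unsigned char *key;      // the commit's concern: without ABI
      std::size_t keylen;            // padding this sits at offset 12,
    };                               // straddling a 128-bit line

    struct slist_after               // ordering after the fix
    {
      intptr_t link;                 // offset 0
      const unsigned char *key;      // offset 8, naturally aligned
      std::size_t keylen;            // offset 16
      uint32_t hashnr;               // moved to the end, offset 24
    };

    // With the fixed ordering every pointer member is naturally
    // aligned, so no 64-bit load of *key can be torn across a
    // cache-line boundary.
    static_assert(offsetof(slist_after, key) % alignof(void *) == 0,
                  "key must be naturally aligned");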
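
The "volatile != atomic" point generalizes beyond this patch. A small
hypothetical C++ sketch (not MariaDB code) of why a lock-free
structure needs atomics rather than volatile:

    #include <atomic>

    int payload;                     // data published by the writer
    std::atomic<bool> ready{false};  // atomic: carries ordering

    void writer()
    {
      payload= 42;
      // release: writes above become visible to an acquire reader
      ready.store(true, std::memory_order_release);
    }

    int reader()
    {
      // acquire: pairs with the release store in writer()
      while (!ready.load(std::memory_order_acquire))
        ;
      return payload;                // guaranteed to observe 42
    }

    // A `volatile bool ready` would keep the compiler from caching the
    // flag, but imposes no ordering on `payload`: the reader could see
    // a stale value. volatile is for memory-mapped I/O, not threads.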
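
The my_assume_aligned mentioned in the first line is an alignment hint
to the compiler. A sketch of how such a C++ wrapper is typically
written, assuming GCC/clang's __builtin_assume_aligned (the helper
name below is illustrative, not the tree's definition):

    #include <cstddef>

    // Illustrative assume-aligned helper; assumes a GCC/clang builtin.
    template <std::size_t alignment, typename T>
    inline T *assume_aligned(T *ptr)
    {
      static_assert((alignment & (alignment - 1)) == 0,
                    "alignment must be a power of two");
      return static_cast<T *>(__builtin_assume_aligned(ptr, alignment));
    }

A caller of lf_dynarray_lvalue could then wrap the returned pointer,
e.g. assume_aligned<8>(ptr), to encode the 64-bit-alignment assumption
the commit message states.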
Diffstat (limited to 'sql/wsrep_notify.cc')
0 files changed, 0 insertions, 0 deletions