MDEV-23510: arm64 lf_hash alignment of pointers
Like the 10.2 version 1635686b, except C++ on the internal functions for my_assume_aligned.

volatile != atomic.

volatile has no memory barrier semantics; it is for memory-mapped I/O. So let's allow some optimizer gains and stop pretending it helps with memory atomicity.

The MDEV lists a SEGV; the assumption made is that an address was partially read. C packs structs strictly in order, and on arm64 the cache line size is 128 bits. A pointer (link, 64 bits) followed by a hashnr (uint32, 32 bits) leaves the following key (uchar *, 64 bits) not naturally aligned for a pointer and, worse, split across a cache line, which is the processor's view of an atomic reservation of memory. lf_dynarray_lvalue is assumed to return a 64-bit aligned address.

As a solution, move the 32-bit hashnr to the end so the *key pointer is not split across two cache lines.

Tested by: Krunal Bauskar
Reviewer: Marko Mäkelä
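For illustration, a minimal standalone sketch of the reordering. The struct names are hypothetical; the member names follow LF_SLIST in mysys/lf_hash.c, but this is not the exact upstream definition.

  #include <stdint.h>
  #include <stddef.h>
  #include <stdio.h>

  /* Old ordering: the 32-bit hashnr sits between 64-bit members, so where
     `key` lands depends on how the ABI pads after hashnr. */
  typedef struct {
    intptr_t link;            /* next pointer + "deleted" flag bit */
    uint32_t hashnr;          /* reversed hash number, used for ordering */
    const unsigned char *key;
    size_t keylen;
  } slist_old;

  /* New ordering: hashnr moved to the end, so the pointer-sized members are
     contiguous from offset 0, each naturally 64-bit aligned, and none can be
     split across a 128-bit granule as long as the element itself starts on a
     64-bit boundary (what lf_dynarray_lvalue is assumed to return). */
  typedef struct {
    intptr_t link;
    const unsigned char *key;
    size_t keylen;
    uint32_t hashnr;
  } slist_new;

  int main(void)
  {
    printf("old: key at offset %zu, sizeof %zu\n",
           offsetof(slist_old, key), sizeof(slist_old));
    printf("new: key at offset %zu, sizeof %zu\n",
           offsetof(slist_new, key), sizeof(slist_new));
    return 0;
  }

On a typical LP64 build both layouts remain 32 bytes; the reorder simply removes any reliance on padding after hashnr to keep the key pointer on a 64-bit boundary.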