Commit b9768752 authored by Alexander Lobakin, committed by Jens Axboe

asm-generic: fix __get_unaligned_be48() on 32 bit platforms

While testing the new macros for working with 48 bit containers,
I faced a weird problem:

32 + 16: 0x2ef6e8da 0x79e60000
48: 0xffffe8da + 0x79e60000

All the bits starting from the 32nd were getting set to 1 in 9/10 cases.
The debug showed:

p[0]: 0x00002e0000000000
p[1]: 0x00002ef600000000
p[2]: 0xffffffffe8000000
p[3]: 0xffffffffe8da0000
p[4]: 0xffffffffe8da7900
p[5]: 0xffffffffe8da79e6

that the value becomes garbage after the third OR, i.e. on
`p[2] << 24`.
When the 31st bit is 1 and there's no explicit cast to an unsigned
type, the shifted value is treated as a signed int and gets
sign-extended when it is widened to 64 bits for the OR, so `e8000000`
becomes `ffffffffe8000000` and corrupts the result.
Cast @p[2] to u64 as well to avoid this. Now:

32 + 16: 0x7ef6a490 0xddc10000
48: 0x7ef6a490 + 0xddc10000

p[0]: 0x00007e0000000000
p[1]: 0x00007ef600000000
p[2]: 0x00007ef6a4000000
p[3]: 0x00007ef6a4900000
p[4]: 0x00007ef6a490dd00
p[5]: 0x00007ef6a490ddc1
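
For illustration only (not part of the patch): a minimal standalone C
sketch of the sign-extension issue described above, using uint64_t and
uint8_t in place of the kernel's u64/u8 and the byte values from the
failing case. It assumes the usual compiler behaviour where a signed-int
left shift into bit 31 produces a negative value.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Same bytes as in the failing case above */
	const uint8_t p[6] = { 0x2e, 0xf6, 0xe8, 0xda, 0x79, 0xe6 };

	/* Buggy form: p[2] << 24 is computed as a signed int; with
	 * bit 31 set it sign-extends to 0xffffffffe8000000 when it
	 * is widened to 64 bits for the OR.
	 */
	uint64_t bad = (uint64_t)p[0] << 40 | (uint64_t)p[1] << 32 |
		       p[2] << 24 | p[3] << 16 | p[4] << 8 | p[5];

	/* Fixed form: the explicit cast keeps the shifted byte unsigned */
	uint64_t good = (uint64_t)p[0] << 40 | (uint64_t)p[1] << 32 |
			(uint64_t)p[2] << 24 | p[3] << 16 | p[4] << 8 | p[5];

	printf("bad:  0x%016llx\n", (unsigned long long)bad);	/* 0xffffffffe8da79e6 */
	printf("good: 0x%016llx\n", (unsigned long long)good);	/* 0x00002ef6e8da79e6 */
	return 0;
}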

Fixes: c2ea5fcf ("asm-generic: introduce be48 unaligned accessors")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Link: https://lore.kernel.org/r/20220412215220.75677-1-alobakin@pm.me
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 868e6139
@@ -143,7 +143,7 @@ static inline void put_unaligned_be48(const u64 val, void *p)
 static inline u64 __get_unaligned_be48(const u8 *p)
 {
-	return (u64)p[0] << 40 | (u64)p[1] << 32 | p[2] << 24 |
+	return (u64)p[0] << 40 | (u64)p[1] << 32 | (u64)p[2] << 24 |
 		p[3] << 16 | p[4] << 8 | p[5];
 }