Commit 8997c934 authored by Mark Rutland, committed by Catalin Marinas

arm64: atomic_lse: match asm register sizes

The LSE atomic code uses asm register variables to ensure that
parameters are allocated in specific registers. In the majority of cases
we specifically ask for an x register when using 64-bit values, but in a
couple of cases we use a w register for a 64-bit value.

For asm register variables, the compiler only cares about the register
index, with wN and xN having the same meaning. The compiler determines
the register size to use based on the type of the variable. Thus, this
inconsistency is merely confusing, and not harmful to code generation.

For consistency, this patch updates those cases to use the x register
alias. There should be no functional change as a result of this patch.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent 55de49f9
@@ -322,7 +322,7 @@ static inline void atomic64_and(long i, atomic64_t *v)
 #define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
 static inline long atomic64_fetch_and##name(long i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("w0") = i;				\
+	register long x0 asm ("x0") = i;				\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -394,7 +394,7 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory")
 #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...)				\
 static inline long atomic64_fetch_sub##name(long i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("w0") = i;				\
+	register long x0 asm ("x0") = i;				\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\