Commit b1040148 authored by Vineet Gupta

ARC: atomic: !LLSC: remove hack in atomic_set() for UP

!LLSC atomics use a spinlock (SMP) or irq-disabling (UP) to implement
critical regions. UP atomic_set() however was "cheating" by doing
neither, while still being functional.

Remove this anomaly (primarily as cleanup for future code improvements),
given that this config is not worth the hassle of special-case code.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
parent b0f839b4
@@ -3,12 +3,10 @@
 #ifndef _ASM_ARC_ATOMIC_SPLOCK_H
 #define _ASM_ARC_ATOMIC_SPLOCK_H
 
-#ifndef CONFIG_SMP
-
-/* violating atomic_xxx API locking protocol in UP for optimization sake */
-#define arch_atomic_set(v, i)	WRITE_ONCE(((v)->counter), (i))
-
-#else
+/*
+ * Non hardware assisted Atomic-R-M-W
+ * Locking would change to irq-disabling only (UP) and spinlocks (SMP)
+ */
 
 static inline void arch_atomic_set(atomic_t *v, int i)
 {
@@ -30,13 +28,6 @@ static inline void arch_atomic_set(atomic_t *v, int i)
 
 #define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
-#endif
-
-/*
- * Non hardware assisted Atomic-R-M-W
- * Locking would change to irq-disabling only (UP) and spinlocks (SMP)
- */
-
 #define ATOMIC_OP(op, c_op, asm_op)					\
static inline void arch_atomic_##op(int i, atomic_t *v)		\
{									\