    ARC: switch to generic bitops · cea43147
    Vineet Gupta authored
     - !LLSC now only needs a single spinlock for atomics and bitops
    
     - Some codegen changes (slight bloat) with generic bitops
    
       1. code increase due to LD-check-atomic paradigm vs. unconditional
          atomic (which dirties the cache line even if the bit is already set).
          So despite the increase, generic is the right thing to do.
    
       2. code decrease (but use of costlier instructions such as DIV vs.
          shift-based math) due to signed arithmetic.
          This needs to be revisited separately.
    
         arc:
         static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
                                    ^^^^^^^^^^^^
         generic:
         static inline int test_bit(int nr, const volatile unsigned long *addr)
                                    ^^^
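
     A minimal sketch of the LD-check-atomic paradigm from point 1, in plain C.
     The GCC `__atomic_fetch_or` builtin below is a stand-in for the arch
     atomic primitive, and the function name is hypothetical, not the kernel's
     actual generic implementation:

     ```c
     #include <limits.h>

     #define BITS_PER_LONG (CHAR_BIT * (int)sizeof(long))

     /* Load the word and test the bit first; only fall through to the
      * atomic RMW when the bit is not already set.  The early return is
      * what avoids dirtying the cache line, at the cost of extra code. */
     static int test_and_set_bit_sketch(unsigned int nr,
                                        volatile unsigned long *addr)
     {
         unsigned long mask = 1UL << (nr % BITS_PER_LONG);
         volatile unsigned long *p = addr + nr / BITS_PER_LONG;

         if (*p & mask)          /* LD + check: bit already set, no store */
             return 1;

         /* atomic set, returning the old state of the bit */
         return !!(__atomic_fetch_or(p, mask, __ATOMIC_ACQ_REL) & mask);
     }
     ```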
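
     A sketch of why the signature difference in point 2 matters for codegen,
     under the assumption that the index math is the usual word/bit split.
     With an unsigned `nr`, `nr / BITS_PER_LONG` and `nr % BITS_PER_LONG`
     lower to a shift and a mask; with a signed `nr` the compiler must honor
     C's truncate-toward-zero division, so it emits a divide or extra fix-up
     instructions instead:

     ```c
     #include <limits.h>

     #define BITS_PER_LONG (CHAR_BIT * (int)sizeof(long))

     /* Signed index: nr / BITS_PER_LONG cannot be a plain right shift,
      * since signed division truncates toward zero while an arithmetic
      * shift rounds toward negative infinity. */
     static int test_bit_signed(int nr, const volatile unsigned long *addr)
     {
         return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
     }

     /* Unsigned index: the identical expressions lower to shift + AND. */
     static int test_bit_unsigned(unsigned int nr,
                                  const volatile unsigned long *addr)
     {
         return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
     }
     ```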
    
    Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    [vgupta: wrote patch based on Will's poc, analysed codegen diffs]
    Signed-off-by: Vineet Gupta <vgupta@kernel.org>