Commit f6b4ecee authored by Peter Zijlstra, committed by Ingo Molnar

locking,x86: Kill atomic_or_long()

There are no users, kill it.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20140508135851.768177189@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 98a96f20
@@ -219,21 +219,6 @@ static inline short int atomic_inc_short(short int *v)
 	return *v;
 }
 
-#ifdef CONFIG_X86_64
-/**
- * atomic_or_long - OR of two long integers
- * @v1: pointer to type unsigned long
- * @v2: pointer to type unsigned long
- *
- * Atomically ORs @v1 and @v2
- * Returns the result of the OR
- */
-static inline void atomic_or_long(unsigned long *v1, unsigned long v2)
-{
-	asm(LOCK_PREFIX "orq %1, %0" : "+m" (*v1) : "r" (v2));
-}
-#endif
-
 /* These are x86-specific, used by some header files */
 #define atomic_clear_mask(mask, addr)				\
 	asm volatile(LOCK_PREFIX "andl %0,%1"			\
...