Commit 24a376d6 authored by Peter Zijlstra

locking/qspinlock,x86: Clarify virt_spin_lock_key

Add a few comments to clarify how this is supposed to work.
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
parent fce45cd4
@@ -63,10 +63,25 @@ static inline bool vcpu_is_preempted(long cpu)
 #endif
 #ifdef CONFIG_PARAVIRT
/*
* virt_spin_lock_key - enables (by default) the virt_spin_lock() hijack.
*
* Native (and PV wanting native due to vCPU pinning) should disable this key.
* It is done in this backwards fashion to only have a single direction change,
 * which removes ordering between native_pv_lock_init() and HV setup.
*/
DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
void native_pv_lock_init(void) __init;
/*
* Shortcut for the queued_spin_lock_slowpath() function that allows
* virt to hijack it.
*
* Returns:
* true - lock has been negotiated, all done;
* false - queued_spin_lock_slowpath() will do its thing.
*/
#define virt_spin_lock virt_spin_lock
static inline bool virt_spin_lock(struct qspinlock *lock)
{
...