Commit 7575637a authored by Oleg Nesterov, committed by Thomas Gleixner

x86, fpu: Fix math_state_restore() race with kernel_fpu_begin()

math_state_restore() can race with kernel_fpu_begin(): if an irq comes
right after __thread_fpu_begin(), __save_init_fpu() will overwrite the
fpu->state we are about to restore.
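
As an interleaving sketch (illustrative only, not code from the tree):

	math_state_restore()               irq -> kernel_fpu_begin()
	--------------------               -------------------------
	__thread_fpu_begin(tsk);
	    /* tsk now owns the FPU */
	                                   __save_init_fpu(tsk);
	                                       /* saves the live (stale)
	                                          registers over fpu->state,
	                                          the state we restore next */
	restore_fpu_checking(tsk);
	    /* restores the clobbered state */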

Add two simple helpers, kernel_fpu_disable() and kernel_fpu_enable(),
which set/clear in_kernel_fpu, and change math_state_restore() to
exclude kernel_fpu_begin() in between.
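
For context, the per-cpu flag already gates interrupting users: the
parent commit made interrupted_kernel_fpu_idle() check in_kernel_fpu,
so once math_state_restore() sets it, irq_fpu_usable() returns false
and kernel_fpu_begin() callers in irq context must back off. Roughly
(simplified from arch/x86/kernel/i387.c as of the parent commit;
details may differ):

	static inline bool interrupted_kernel_fpu_idle(void)
	{
		if (this_cpu_read(in_kernel_fpu))
			return false;	/* restore or kernel FPU use in flight */

		if (use_eager_fpu())
			return __thread_has_fpu(current);

		return !__thread_has_fpu(current) &&
			(read_cr0() & X86_CR0_TS);
	}

	bool irq_fpu_usable(void)
	{
		return !in_interrupt() ||
			interrupted_kernel_fpu_idle() ||
			interrupted_user_mode();
	}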

Alternatively we could use local_irq_save/restore, but these new
helpers will probably find more users.
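
That rejected variant would have bracketed the restore with interrupts
off, something like this (hypothetical sketch, not part of this commit):

	unsigned long flags;

	local_irq_save(flags);	/* no irq can run kernel_fpu_begin() here */
	__thread_fpu_begin(tsk);
	/* ... paranoid restore as in the hunk below ... */
	local_irq_restore(flags);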

Perhaps they should disable/enable preemption themselves; in that case
we could remove the preempt_disable() in __restore_xstate_sig().
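
Such a variant might look like this (hypothetical; the helpers below
deliberately leave preemption to the caller):

	void kernel_fpu_disable(void)
	{
		preempt_disable();
		WARN_ON(this_cpu_read(in_kernel_fpu));
		this_cpu_write(in_kernel_fpu, true);
	}

	void kernel_fpu_enable(void)
	{
		this_cpu_write(in_kernel_fpu, false);
		preempt_enable();
	}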
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: matt.fleming@intel.com
Cc: bp@suse.de
Cc: pbonzini@redhat.com
Cc: luto@amacapital.net
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Suresh Siddha <sbsiddha@gmail.com>
Link: http://lkml.kernel.org/r/20150115192028.GD27332@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent 33a3ebdc
@@ -51,6 +51,10 @@ static inline void kernel_fpu_end(void)
 	preempt_enable();
 }
 
+/* Must be called with preempt disabled */
+extern void kernel_fpu_disable(void);
+extern void kernel_fpu_enable(void);
+
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
  * DNA fault but don't modify SSE registers. And these instructions
...
@@ -21,6 +21,17 @@
 
 static DEFINE_PER_CPU(bool, in_kernel_fpu);
 
+void kernel_fpu_disable(void)
+{
+	WARN_ON(this_cpu_read(in_kernel_fpu));
+	this_cpu_write(in_kernel_fpu, true);
+}
+
+void kernel_fpu_enable(void)
+{
+	this_cpu_write(in_kernel_fpu, false);
+}
+
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
...
@@ -788,18 +788,16 @@ void math_state_restore(void)
 		local_irq_disable();
 	}
 
+	/* Avoid __kernel_fpu_begin() right after __thread_fpu_begin() */
+	kernel_fpu_disable();
 	__thread_fpu_begin(tsk);
 
-	/*
-	 * Paranoid restore. send a SIGSEGV if we fail to restore the state.
-	 */
 	if (unlikely(restore_fpu_checking(tsk))) {
 		drop_init_fpu(tsk);
 		force_sig_info(SIGSEGV, SEND_SIG_PRIV, tsk);
-		return;
+	} else {
+		tsk->thread.fpu_counter++;
 	}
-
-	tsk->thread.fpu_counter++;
+	kernel_fpu_enable();
 }
 EXPORT_SYMBOL_GPL(math_state_restore);
...