Commit fcaa3865 authored by Ingo Molnar, committed by Linus Torvalds

[PATCH] sched: fix scheduling latencies in mttr.c

Fix scheduling latencies in the MTRR-setting codepath.  Also, fix a bad bug:
MTRRs _must_ be set with interrupts disabled!

From: Bernard Blackham <bernard@blackham.com.au>

The patch sched-fix-scheduling-latencies-in-mttr in recent -mm kernels has
the bad side effect of re-enabling interrupts even if they were disabled.
This caused bugs in Software Suspend 2, which re-enabled MTRRs while
interrupts were already disabled.

Attached is a replacement patch which uses spin_lock_irqsave instead of
spin_lock_irq.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent d12ca525
@@ -233,6 +233,13 @@ static unsigned long cr4 = 0;
static u32 deftype_lo, deftype_hi;
static spinlock_t set_atomicity_lock = SPIN_LOCK_UNLOCKED;

/*
 * Since we are disabling the cache don't allow any interrupts - they
 * would run extremely slow and would only increase the pain. The caller must
 * ensure that local interrupts are disabled and are reenabled after post_set()
 * has been called.
 */
static void prepare_set(void)
{
	unsigned long cr0;
@@ -240,11 +247,11 @@ static void prepare_set(void)
	/* Note that this is not ideal, since the cache is only flushed/disabled
	   for this CPU while the MTRRs are changed, but changing this requires
	   more invasive changes to the way the kernel boots */
	spin_lock(&set_atomicity_lock);

	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
	cr0 = read_cr0() | 0x40000000;	/* set CD flag */
	wbinvd();
	write_cr0(cr0);
	wbinvd();
@@ -266,8 +273,7 @@ static void prepare_set(void)
static void post_set(void)
{
	/* Flush caches and TLBs */
	wbinvd();

	/* Flush TLBs (no need to flush caches - they are disabled) */
	__flush_tlb();

	/* Intel (P6) standard MTRRs */
@@ -285,13 +291,16 @@ static void post_set(void)
static void generic_set_all(void)
{
	unsigned long mask, count;
	unsigned long flags;

	local_irq_save(flags);
	prepare_set();

	/* Actually set the state */
	mask = set_mtrr_state(deftype_lo,deftype_hi);

	post_set();
	local_irq_restore(flags);

	/* Use the atomic bitops to update the global mask */
	for (count = 0; count < sizeof mask * 8; ++count) {
@@ -314,6 +323,9 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
    [RETURNS] Nothing.
*/
{
	unsigned long flags;

	local_irq_save(flags);
	prepare_set();

	if (size == 0) {
@@ -328,6 +340,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
	}

	post_set();
	local_irq_restore(flags);
}
int generic_validate_add_page(unsigned long base, unsigned long size, unsigned int type)