Commit ce1143aa authored by Andy Lutomirski, committed by Ingo Molnar

x86/dmi: Switch dmi_remap() from ioremap() [uncached] to ioremap_cache()

DMI cacheability is very confused on x86.

dmi_early_remap() uses early_ioremap(), which uses FIXMAP_PAGE_IO,
which is __PAGE_KERNEL_IO, which is __PAGE_KERNEL, which is cached.

Don't ask me why this makes any sense.
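For reference, the chain described above boils down to roughly this
(paraphrased from the x86 fixmap/pgtable headers, not a verbatim
quote):

	dmi_early_remap(phys, len)  ->  early_ioremap(phys, len)
	early_ioremap()             uses FIXMAP_PAGE_IO
	FIXMAP_PAGE_IO              ==  __PAGE_KERNEL_IO
	__PAGE_KERNEL_IO            ==  __PAGE_KERNEL   /* cacheable (WB) */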

dmi_remap() uses ioremap(), which requests an uncached mapping.

However, on non-EFI systems, the DMI data generally lives between
0xf0000 and 0x100000, which is in the legacy ISA range, which
triggers a special case in the PAT code that overrides the cache
mode requested by ioremap() and forces a WB mapping.
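That special case looks roughly like the sketch below; the helper and
constant names (is_ISA_range(), _PAGE_CACHE_MODE_WB) are quoted from
memory and may not match the tree verbatim:

	/*
	 * In the PAT memtype reservation path: the legacy ISA hole below
	 * 1MB is always mapped write-back, no matter what cache mode the
	 * caller (e.g. ioremap()) asked for.
	 */
	if (is_ISA_range(start, end - 1)) {
		if (new_type)
			*new_type = _PAGE_CACHE_MODE_WB;
		return 0;
	}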

On a UEFI boot, however, the DMI table can live at any physical
address.  On my laptop, it's around 0x77dd0000.  That's nowhere near
the legacy ISA range, so the ioremap() implicit uncached type is
honored and we end up with a UC- mapping.

UC- is a very, very slow way to read from main memory, so dmi_walk()
is likely to take much longer than necessary.
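For context, dmi_walk() is how drivers scan the table: each call goes
through dmi_remap()/dmi_unmap() and reads the whole table via that
mapping, so the cache mode directly determines how slow it is.  A
minimal, made-up example (needs <linux/dmi.h>; the function names are
illustrative only):

	/* Count SMBIOS type 17 ("Memory Device") records. */
	static void count_dimms(const struct dmi_header *dh, void *priv)
	{
		if (dh->type == 17)
			(*(int *)priv)++;
	}

	static int __init dimm_count_init(void)
	{
		int dimms = 0;

		dmi_walk(count_dimms, &dimms);
		pr_info("DMI reports %d memory device records\n", dimms);
		return 0;
	}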

Given that we already do cached DMI reads via the early mapping, even
on UEFI, it seems safe to just ask for cached access here as well.
Switch to ioremap_cache().

I haven't tried to benchmark this, but I'd guess it saves several
milliseconds of boot time.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jean Delvare <jdelvare@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Link: http://lkml.kernel.org/r/3147c38e51f439f3c8911db34c7d4ab22d854915.1453791969.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent d8bced79
arch/x86/include/asm/dmi.h
@@ -15,7 +15,7 @@ static __always_inline __init void *dmi_alloc(unsigned len)
 /* Use early IO mappings for DMI because it's initialized early */
 #define dmi_early_remap		early_ioremap
 #define dmi_early_unmap		early_iounmap
-#define dmi_remap		ioremap
+#define dmi_remap		ioremap_cache
 #define dmi_unmap		iounmap
 
 #endif /* _ASM_X86_DMI_H */