Commit dd71a17b authored by Bryan O'Donoghue, committed by Ingo Molnar

x86/platform/intel/quark: Change the kernel's IMR lock bit to false

Currently, when setting up an IMR around the kernel's .text section, we lock
that IMR, preventing further modification. While superficially this appears
to be the right thing to do, it does not account for a legitimate change in
the memory map, such as when executing a new kernel via kexec.

In such a scenario a second kernel can have a different size and location
to its predecessor, and can view some of the memory occupied by its
predecessor as legitimately usable DMA RAM. If this RAM were then
subsequently allocated to DMA agents within the system, it could conceivably
trigger an IMR violation.

This patch fixes this potential situation by keeping the kernel's .text
section IMR lock bit false by default.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boon.leong.ong@intel.com
Cc: paul.gortmaker@windriver.com
Link: http://lkml.kernel.org/r/1456190999-12685-2-git-send-email-pure.logic@nexus-software.ie
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 81f70ba2
@@ -592,14 +592,14 @@ static void __init imr_fixup_memmap(struct imr_device *idev)
 	end = (unsigned long)__end_rodata - 1;
 	/*
-	 * Setup a locked IMR around the physical extent of the kernel
+	 * Setup an unlocked IMR around the physical extent of the kernel
 	 * from the beginning of the .text secton to the end of the
 	 * .rodata section as one physically contiguous block.
 	 *
 	 * We don't round up @size since it is already PAGE_SIZE aligned.
 	 * See vmlinux.lds.S for details.
 	 */
-	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, true);
+	ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
 	if (ret < 0) {
 		pr_err("unable to setup IMR for kernel: %zu KiB (%lx - %lx)\n",
 			size / 1024, start, end);