Commit 6af66ec5 authored by jerry.hoemann@hp.com, committed by Ben Hutchings

x86/mm: account for PGDIR_SIZE alignment

Patch for 3.0-stable.  The function find_early_table_space() was removed upstream.

Fixes a panic in alloc_low_page() due to pgt_buf overflow during
init_memory_mapping().

find_early_table_space() sizes pgt_buf based on the size of the
memory being mapped, but it does not take the alignment of that memory
into account.  When the region being mapped crosses a 512GB
(PGDIR_SIZE) alignment boundary, a panic from alloc_low_page() occurs.

kernel_physical_mapping_init() does take PGDIR_SIZE alignment into
account, so it makes an extra call to alloc_low_page().  That extra
call is not accounted for by find_early_table_space() and causes a
kernel panic.

The change is to account for PGDIR_SIZE alignment in
find_early_table_space().
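
Illustration (not part of the original patch): a minimal, self-contained
sketch of the boundary-crossing check, assuming the usual x86_64 value
PGDIR_SHIFT = 39 (so one PGD entry covers 512GB).  The helper name
count_pgd_extra() and the sample addresses are hypothetical.

/* sketch.c -- illustration only, mirrors the pgd_extra accounting idea. */
#include <stdio.h>

#define PGDIR_SHIFT 39                      /* x86_64: 512GB per PGD entry */
#define PGDIR_SIZE  (1UL << PGDIR_SHIFT)

/*
 * Count ranges whose start and end fall under different PGD entries.
 * Such a range is split across two top-level slots, so it needs one
 * more page of PUD entries than a purely size-based estimate predicts.
 */
static unsigned long count_pgd_extra(const unsigned long *start,
                                     const unsigned long *end, int nr)
{
        unsigned long extra = 0;
        int i;

        for (i = 0; i < nr; i++)
                if ((end[i] >> PGDIR_SHIFT) - (start[i] >> PGDIR_SHIFT))
                        extra++;
        return extra;
}

int main(void)
{
        /* Hypothetical 2GB region straddling the 512GB boundary. */
        unsigned long start[] = { PGDIR_SIZE - (1UL << 30) };   /* 511GB */
        unsigned long end[]   = { PGDIR_SIZE + (1UL << 30) };   /* 513GB */

        printf("extra PGD-level crossings: %lu\n",
               count_pgd_extra(start, end, 1));                 /* prints 1 */
        return 0;
}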
Signed-off-by: Jerry Hoemann <jerry.hoemann@hp.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
parent 88933df6
@@ -44,11 +44,15 @@ static void __init find_early_table_space(struct map_range *mr, int nr_range)
 	int i;
 	unsigned long puds = 0, pmds = 0, ptes = 0, tables;
 	unsigned long start = 0, good_end;
+	unsigned long pgd_extra = 0;
 	phys_addr_t base;
 
 	for (i = 0; i < nr_range; i++) {
 		unsigned long range, extra;
 
+		if ((mr[i].end >> PGDIR_SHIFT) - (mr[i].start >> PGDIR_SHIFT))
+			pgd_extra++;
+
 		range = mr[i].end - mr[i].start;
 		puds += (range + PUD_SIZE - 1) >> PUD_SHIFT;
 
@@ -73,6 +77,7 @@ static void __init find_early_table_space(struct map_range *mr, int nr_range)
 	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
 	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
 	tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
+	tables += (pgd_extra * PAGE_SIZE);
 
 #ifdef CONFIG_X86_32
 	/* for fixmap */
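
Worked example (illustrative numbers, not from the patch): a 2GB range
running from 511GB to 513GB needs only two PUD entries, so the
size-based estimate reserves a single page of PUD entries; but because
the range is split across two PGD slots, kernel_physical_mapping_init()
allocates a separate PUD page for each slot.  The second page is not
covered by the estimate, pgt_buf can overflow, and alloc_low_page()
panics; the new pgd_extra * PAGE_SIZE term reserves that extra page.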