Commit e86a8769 authored by Toshi Kani, committed by Greg Kroah-Hartman

mm/memory_hotplug.c: check start_pfn in test_pages_in_a_zone()

commit deb88a2a upstream.

Patch series "fix a kernel oops when reading sysfs valid_zones", v2.

A sysfs memory file is created for each 2GiB memory block on x86-64 when
the system has 64GiB or more memory.  [1] When the start address of a
memory block is not backed by struct page, i.e.  the memory range is not
aligned to 2GiB, reading its 'valid_zones' attribute file leads to a
kernel oops.  This issue was observed on multiple x86-64 systems with
more than 64GiB of memory.  This patch-set fixes this issue.

Patch 1 first fixes an issue in test_pages_in_a_zone(), which does not
test the start section.

Patch 2 then fixes the kernel oops by extending test_pages_in_a_zone()
to return valid [start, end).

Note for stable kernels: The memory block size change was made by commit
bdee237c ("x86: mm: Use 2GB memory block size on large-memory x86-64
systems"), which was accepted to 3.9.  However, this patch-set depends
on (and fixes) the change to test_pages_in_a_zone() made by commit
5f0f2887 ("mm/memory_hotplug.c: check for missing sections in
test_pages_in_a_zone()"), which was accepted to 4.4.

So, I recommend that we backport it to stable kernels as far back as 4.4.

[1] 'Commit bdee237c ("x86: mm: Use 2GB memory block size on
    large-memory x86-64 systems")'

This patch (of 2):

test_pages_in_a_zone() does not check 'start_pfn' when it is aligned to
a section boundary: 'sec_end_pfn' is then set equal to 'pfn', so the
inner loop over the start section runs zero times.  Since this function
is called to test the range of a sysfs memory file, 'start_pfn' is
always section-aligned, and the start section is therefore never
checked.

Fix it by properly setting 'sec_end_pfn' to the first pfn of the next
section, i.e. SECTION_ALIGN_UP(start_pfn + 1).  The loop advance is
changed from 'pfn = sec_end_pfn + 1' to 'pfn = sec_end_pfn' accordingly,
so that the first pfn of each subsequent section is not skipped either.

Also make sure that this function returns 1 only when the range belongs
to a zone.
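
As an aside for reviewers, the loop-bound arithmetic is easy to check
with a stand-alone user-space sketch.  This is not part of the patch:
the PAGES_PER_SECTION value of 128 is arbitrary (the real value on
x86-64 is larger, and the off-by-one does not depend on it), and
SECTION_ALIGN_UP is re-created here to match the kernel's
round-up-to-section-boundary definition.

#include <stdio.h>

/* Illustrative section size only; the off-by-one shown below does not
 * depend on the value. */
#define PAGES_PER_SECTION	128UL
#define SECTION_ALIGN_UP(pfn) \
	(((pfn) + PAGES_PER_SECTION - 1) & ~(PAGES_PER_SECTION - 1))

int main(void)
{
	/* Section-aligned start, as for a sysfs memory block. */
	unsigned long start_pfn = 2 * PAGES_PER_SECTION;

	/* Old code: equals start_pfn, so the first pass of the inner
	 * loop over [pfn, sec_end_pfn) runs zero times and the start
	 * section is never tested. */
	printf("old sec_end_pfn = %lu\n", SECTION_ALIGN_UP(start_pfn));

	/* Fixed code: first pfn of the next section, so the inner loop
	 * walks the whole start section. */
	printf("new sec_end_pfn = %lu\n", SECTION_ALIGN_UP(start_pfn + 1));

	return 0;
}

With these numbers the old expression prints 256 (equal to start_pfn)
while the fixed one prints 384, the first pfn past the start section.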

Link: http://lkml.kernel.org/r/20170127222149.30893-2-toshi.kani@hpe.com
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Andrew Banman <abanman@sgi.com>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 920bba10
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1371,7 +1371,7 @@ int is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)
 }
 
 /*
- * Confirm all pages in a range [start, end) is belongs to the same zone.
+ * Confirm all pages in a range [start, end) belong to the same zone.
  */
 int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
 {
@@ -1379,9 +1379,9 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
 	struct zone *zone = NULL;
 	struct page *page;
 	int i;
-	for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn);
+	for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn + 1);
 	     pfn < end_pfn;
-	     pfn = sec_end_pfn + 1, sec_end_pfn += PAGES_PER_SECTION) {
+	     pfn = sec_end_pfn, sec_end_pfn += PAGES_PER_SECTION) {
 		/* Make sure the memory section is present first */
 		if (!present_section_nr(pfn_to_section_nr(pfn)))
 			continue;
@@ -1400,7 +1400,11 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
 			zone = page_zone(page);
 		}
 	}
-	return 1;
+
+	if (zone)
+		return 1;
+	else
+		return 0;
 }