Commit 1ae7013d authored by Hui Zhu, committed by Linus Torvalds

CMA: page_isolation: check buddy before accessing it

I had an issue:

    Unable to handle kernel NULL pointer dereference at virtual address 0000082a
    pgd = cc970000
    [0000082a] *pgd=00000000
    Internal error: Oops: 5 [#1] PREEMPT SMP ARM
    PC is at get_pageblock_flags_group+0x5c/0xb0
    LR is at unset_migratetype_isolate+0x148/0x1b0
    pc : [<c00cc9a0>]    lr : [<c0109874>]    psr: 80000093
    sp : c7029d00  ip : 00000105  fp : c7029d1c
    r10: 00000001  r9 : 0000000a  r8 : 00000004
    r7 : 60000013  r6 : 000000a4  r5 : c0a357e4  r4 : 00000000
    r3 : 00000826  r2 : 00000002  r1 : 00000000  r0 : 0000003f
    Flags: Nzcv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment user
    Control: 10c5387d  Table: 2cb7006a  DAC: 00000015
    Backtrace:
        get_pageblock_flags_group+0x0/0xb0
        unset_migratetype_isolate+0x0/0x1b0
        undo_isolate_page_range+0x0/0xdc
        __alloc_contig_range+0x0/0x34c
        alloc_contig_range+0x0/0x18

This issue occurs because, when unset_migratetype_isolate() is called to
unset a part of CMA memory, it tries to access the buddy page to get its
status:

		if (order >= pageblock_order) {
			page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
			buddy_idx = __find_buddy_index(page_idx, order);
			buddy = page + (buddy_idx - page_idx);

			if (!is_migrate_isolate_page(buddy)) {

But the start address of this part of CMA memory is very close to a region
of memory that is reserved at boot time (and thus not in the buddy system),
so the computed buddy page may not be valid.  Add a check before accessing
it.
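
A minimal sketch of the guarded access, using the names from the snippet
above (the actual change is in the hunk below):

		buddy_idx = __find_buddy_index(page_idx, order);
		buddy = page + (buddy_idx - page_idx);

		/*
		 * The computed buddy may fall in boot-time reserved memory
		 * that was never handed to the buddy allocator, so make sure
		 * its pfn is valid before touching the struct page.
		 */
		if (pfn_valid_within(page_to_pfn(buddy)) &&
		    !is_migrate_isolate_page(buddy)) {
			...
		}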

[akpm@linux-foundation.org: use conventional code layout]
Signed-off-by: Hui Zhu <zhuhui@xiaomi.com>
Suggested-by: Laura Abbott <labbott@redhat.com>
Suggested-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 929aa5b2
@@ -101,7 +101,8 @@ void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 			buddy_idx = __find_buddy_index(page_idx, order);
 			buddy = page + (buddy_idx - page_idx);
 
-			if (!is_migrate_isolate_page(buddy)) {
+			if (pfn_valid_within(page_to_pfn(buddy)) &&
+			    !is_migrate_isolate_page(buddy)) {
 				__isolate_free_page(page, order);
 				kernel_map_pages(page, (1 << order), 1);
 				set_page_refcounted(page);
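
For reference, pfn_valid_within() is defined in include/linux/mmzone.h
roughly as follows (a sketch of the definition around the time of this
commit); it only performs a real pfn_valid() check on configurations that
allow holes inside a zone:

	#ifdef CONFIG_HOLES_IN_ZONE
	/* Zones may contain holes: the pfn must be checked before use. */
	#define pfn_valid_within(pfn) pfn_valid(pfn)
	#else
	/* No holes inside zones: any pfn in the zone's range is valid. */
	#define pfn_valid_within(pfn) (1)
	#endif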