Commit e68d343d authored by Waiman Long, committed by Andrew Morton

mm/kmemleak: move up cond_resched() call in page scanning loop

Commit bde5f6bc ("kmemleak: add scheduling point to kmemleak_scan()")
added a cond_resched() call to the struct page scanning loop to prevent
soft lockups from happening.  However, a soft lockup can still happen in
that loop in some corner cases, when the pages that satisfy the
"!(pfn & 63)" check are all skipped for some reason (e.g. by one of the
earlier "continue" statements) before the check is reached.

Fix this corner case by moving up the cond_resched() check so that it will
be called every 64 pages unconditionally.

Link: https://lkml.kernel.org/r/20230825164947.1317981-1-longman@redhat.com
Fixes: bde5f6bc ("kmemleak: add scheduling point to kmemleak_scan()")
Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent f945116e
@@ -1584,6 +1584,9 @@ static void kmemleak_scan(void)
 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 			struct page *page = pfn_to_online_page(pfn);
 
+			if (!(pfn & 63))
+				cond_resched();
+
 			if (!page)
 				continue;
@@ -1594,8 +1597,6 @@ static void kmemleak_scan(void)
 			if (page_count(page) == 0)
 				continue;
 			scan_block(page, page + 1, NULL);
-			if (!(pfn & 63))
-				cond_resched();
 		}
 	}
 	put_online_mems();