Commit b62b51d2 authored by Kefeng Wang, committed by Andrew Morton

mm: memory_hotplug: remove head variable in do_migrate_range()

Patch series "mm: memory_hotplug: improve do_migrate_range()", v3.

Unify hwpoisoned page handling and the isolation of HugeTLB/LRU/non-LRU
movable pages, and convert do_migrate_range() to use folios.


This patch (of 5):

Directly use a folio for HugeTLB and THP when calculating the next pfn, then
remove the now-unused head variable.

Link: https://lkml.kernel.org/r/20240827114728.3212578-1-wangkefeng.wang@huawei.com
Link: https://lkml.kernel.org/r/20240827114728.3212578-2-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Dan Carpenter <dan.carpenter@linaro.org>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent f66ac836
@@ -1773,7 +1773,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
-	struct page *page, *head;
+	struct page *page;
 	LIST_HEAD(source);
 	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
@@ -1786,14 +1786,20 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			continue;
 		page = pfn_to_page(pfn);
 		folio = page_folio(page);
-		head = &folio->page;
 
-		if (PageHuge(page)) {
-			pfn = page_to_pfn(head) + compound_nr(head) - 1;
-			isolate_hugetlb(folio, &source);
-			continue;
-		} else if (PageTransHuge(page))
-			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
+		/*
+		 * No reference or lock is held on the folio, so it might
+		 * be modified concurrently (e.g. split). As such,
+		 * folio_nr_pages() may read garbage. This is fine as the outer
+		 * loop will revisit the split folio later.
+		 */
+		if (folio_test_large(folio)) {
+			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
+			if (folio_test_hugetlb(folio)) {
+				isolate_hugetlb(folio, &source);
+				continue;
+			}
+		}
 
 		/*
 		 * HWPoison pages have elevated reference counts so the migration would
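For readers less familiar with the folio API, the sketch below models the pfn-skipping idiom the patch switches to, as a small self-contained user-space C program. The struct and helper names (fake_folio, pfn_to_fake_folio) are hypothetical stand-ins, not kernel interfaces; the point is only that setting pfn to the last page of a multi-page folio lets the loop's pfn++ resume just past it, mirroring folio_pfn(folio) + folio_nr_pages(folio) - 1 in the diff above.

/*
 * Minimal user-space model of the pfn-skipping idiom (hypothetical
 * stand-in types; not the kernel implementation). When the current
 * pfn belongs to a multi-page "folio", advance pfn to the folio's
 * last page so the loop's pfn++ resumes right after it instead of
 * rescanning every tail page.
 */
#include <stdio.h>

struct fake_folio {
	unsigned long start_pfn;	/* first pfn covered by this folio */
	unsigned long nr_pages;		/* 1 for a small page, >1 for a large folio */
};

/* Hypothetical lookup: which folio does this pfn fall into? */
static struct fake_folio *pfn_to_fake_folio(unsigned long pfn,
					    struct fake_folio *folios,
					    int nr_folios)
{
	for (int i = 0; i < nr_folios; i++) {
		if (pfn >= folios[i].start_pfn &&
		    pfn < folios[i].start_pfn + folios[i].nr_pages)
			return &folios[i];
	}
	return NULL;
}

int main(void)
{
	/* pfns 0-1: small pages, pfns 2-9: one large folio, pfn 10: small page */
	struct fake_folio folios[] = {
		{ 0, 1 }, { 1, 1 }, { 2, 8 }, { 10, 1 },
	};

	for (unsigned long pfn = 0; pfn < 11; pfn++) {
		struct fake_folio *folio = pfn_to_fake_folio(pfn, folios, 4);

		if (!folio)
			continue;
		printf("visiting pfn %lu (folio of %lu pages)\n",
		       pfn, folio->nr_pages);
		if (folio->nr_pages > 1)
			/* same idea as folio_pfn(folio) + folio_nr_pages(folio) - 1 */
			pfn = folio->start_pfn + folio->nr_pages - 1;
	}
	return 0;
}

The design point of the patch is the same: folio_test_large() covers both HugeTLB and THP, so a single branch computes the skip, and only HugeTLB folios take the isolate_hugetlb() path at this point in the function; large non-HugeTLB folios fall through to the normal isolation code further down.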