Commit f4c18e6f authored by Naoya Horiguchi, committed by Linus Torvalds

mm: check __PG_HWPOISON separately from PAGE_FLAGS_CHECK_AT_*

The race condition addressed in commit add05cec ("mm: soft-offline:
don't free target page in successful page migration") was not closed
completely, because the race can happen not only for soft-offline but
also for hard-offline.  Consider a slab page that is about to be freed
into the buddy pool: if an uncorrected memory error hits the page just
after it enters __free_one_page(), VM_BUG_ON_PAGE(page->flags &
PAGE_FLAGS_CHECK_AT_PREP) is triggered, even though the BUG is
unnecessary because the data on the affected page is never consumed.

To solve this, drop __PG_HWPOISON from the page flag checks at
allocation/free time.  This is justified because __PG_HWPOISON is
defined to prevent the page from being reused, so setting it outside
the page's alloc-free cycle is designed behavior, not a bug.

For recent months, I have been annoyed by a BUG_ON triggered when a
soft-offlined page remains on the lru cache list for a while; this is
avoided by calling put_page() instead of putback_lru_page() in page
migration's success path.  Note that this reverts a major change of
commit add05cec, the new refcounting rule for soft-offlined pages, so
the "reuse window" is reopened.  It will be closed again by a
subsequent patch.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Dean Nelson <dnelson@redhat.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 98ed2b00
@@ -631,15 +631,19 @@ static inline void ClearPageSlabPfmemalloc(struct page *page)
 	 1 << PG_private | 1 << PG_private_2 | \
 	 1 << PG_writeback | 1 << PG_reserved | \
 	 1 << PG_slab | 1 << PG_swapcache | 1 << PG_active | \
-	 1 << PG_unevictable | __PG_MLOCKED | __PG_HWPOISON | \
+	 1 << PG_unevictable | __PG_MLOCKED | \
 	 __PG_COMPOUND_LOCK)
 
 /*
  * Flags checked when a page is prepped for return by the page allocator.
- * Pages being prepped should not have any flags set.  It they are set,
+ * Pages being prepped should not have these flags set.  It they are set,
  * there has been a kernel bug or struct page corruption.
+ *
+ * __PG_HWPOISON is exceptional because it needs to be kept beyond page's
+ * alloc-free cycle to prevent from reusing the page.
  */
-#define PAGE_FLAGS_CHECK_AT_PREP	((1 << NR_PAGEFLAGS) - 1)
+#define PAGE_FLAGS_CHECK_AT_PREP \
+	(((1 << NR_PAGEFLAGS) - 1) & ~__PG_HWPOISON)
 
 #define PAGE_FLAGS_PRIVATE \
 	(1 << PG_private | 1 << PG_private_2)
......
@@ -1676,12 +1676,7 @@ static void __split_huge_page_refcount(struct page *page,
 		/* after clearing PageTail the gup refcount can be released */
 		smp_mb__after_atomic();
 
-		/*
-		 * retain hwpoison flag of the poisoned tail page:
-		 *   fix for the unsuitable process killed on Guest Machine(KVM)
-		 *   by the memory-failure.
-		 */
-		page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP | __PG_HWPOISON;
+		page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		page_tail->flags |= (page->flags &
 				     ((1L << PG_referenced) |
 				      (1L << PG_swapbacked) |
......
@@ -950,7 +950,10 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		if (reason != MR_MEMORY_FAILURE)
+		/* Soft-offlined page shouldn't go through lru cache list */
+		if (reason == MR_MEMORY_FAILURE)
+			put_page(page);
+		else
 			putback_lru_page(page);
 	}
......
@@ -1296,6 +1296,10 @@ static inline int check_new_page(struct page *page)
 		bad_reason = "non-NULL mapping";
 	if (unlikely(atomic_read(&page->_count) != 0))
 		bad_reason = "nonzero _count";
+	if (unlikely(page->flags & __PG_HWPOISON)) {
+		bad_reason = "HWPoisoned (hardware-corrupted)";
+		bad_flags = __PG_HWPOISON;
+	}
 	if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) {
 		bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set";
 		bad_flags = PAGE_FLAGS_CHECK_AT_PREP;
......