Commit 10d83d77 authored by Peter Xu, committed by Andrew Morton

mm/pagewalk: check pfnmap for folio_walk_start()

Teach folio_walk_start() to recognize special pmd/pud mappings and fail
them properly, since such mappings have no folio backing them.

[peterx@redhat.com: remove some stale comments, per David]
  Link: https://lkml.kernel.org/r/20240829202237.2640288-1-peterx@redhat.com
Link: https://lkml.kernel.org/r/20240826204353.2228736-7-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Gavin Shan <gshan@redhat.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Niklas Schnelle <schnelle@linux.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent ae3c99e6
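
For illustration, a hypothetical caller (not part of this patch; it assumes only the folio_walk_start()/folio_walk_end() API declared in <linux/pagewalk.h>): with this change the walker simply reports "no folio" for a special huge mapping instead of deriving a bogus folio from a raw pfn.

#include <linux/mm.h>
#include <linux/pagewalk.h>

/* Hypothetical helper: does @addr in @vma map a real folio? Caller holds the mmap lock. */
static bool addr_has_folio(struct vm_area_struct *vma, unsigned long addr)
{
	struct folio_walk fw;
	struct folio *folio;

	mmap_assert_locked(vma->vm_mm);

	folio = folio_walk_start(&fw, vma, addr, 0);
	if (!folio)
		return false;	/* nothing mapped, or a special pmd/pud pfnmap */

	/* Success: the page table lock is held and fw.level records the mapping level. */
	folio_walk_end(&fw, vma);
	return true;
}
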
mm/memory.c
@@ -672,11 +672,10 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 {
 	unsigned long pfn = pmd_pfn(pmd);
 
-	/*
-	 * There is no pmd_special() but there may be special pmds, e.g.
-	 * in a direct-access (dax) mapping, so let's just replicate the
-	 * !CONFIG_ARCH_HAS_PTE_SPECIAL case from vm_normal_page() here.
-	 */
+	/* Currently it's only used for huge pfnmaps */
+	if (unlikely(pmd_special(pmd)))
+		return NULL;
+
 	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
 		if (vma->vm_flags & VM_MIXEDMAP) {
 			if (!pfn_valid(pfn))
...
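
The pmd_special() check added above relies on helpers introduced earlier in this series; the snippet below is a paraphrased sketch of the assumed fallback (the CONFIG_ARCH_SUPPORTS_PMD_PFNMAP guard and the body are illustrative, not quoted from include/linux/mm.h): on architectures without huge pfnmap support a pmd is never special, so the new check costs nothing.

/* Illustrative fallback only, paraphrased from the series. */
#ifndef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
static inline bool pmd_special(pmd_t pmd)
{
	/* No arch support for huge pfnmaps: no pmd can carry the special bit. */
	return false;
}
#endif
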
mm/pagewalk.c
@@ -753,7 +753,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
 		fw->pudp = pudp;
 		fw->pud = pud;
 
-		if (!pud_present(pud) || pud_devmap(pud)) {
+		if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) {
 			spin_unlock(ptl);
 			goto not_found;
 		} else if (!pud_leaf(pud)) {
...
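
For context on where such special entries come from, here is a hypothetical VM_PFNMAP driver hook (not part of this patch; only the ->huge_fault() callback and vmf_insert_pfn_pmd() are assumed from the kernel of this series, and the pfn and fallback policy are made up): with the series applied, inserting a non-devmap pfn at pmd level marks the entry special, which is the kind of entry the checks in this patch now reject.

#include <linux/huge_mm.h>
#include <linux/mm.h>
#include <linux/pfn_t.h>

/* Hypothetical driver hook: map a PMD-sized chunk of MMIO as a huge pfnmap. */
static vm_fault_t demo_huge_fault(struct vm_fault *vmf, unsigned int order)
{
	unsigned long mmio_pfn = 0x100000;	/* made-up, PMD-aligned MMIO pfn */

	if (order != PMD_SHIFT - PAGE_SHIFT)
		return VM_FAULT_FALLBACK;	/* let the core fall back to smaller mappings */

	/* Installs a leaf pmd; with this series a non-devmap pfn gets the special bit. */
	return vmf_insert_pfn_pmd(vmf, pfn_to_pfn_t(mmio_pfn),
				  vmf->flags & FAULT_FLAG_WRITE);
}

static const struct vm_operations_struct demo_vm_ops = {
	.huge_fault	= demo_huge_fault,
};
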