    mm/gup: handle hugepd for follow_page() · a12083d7
    Peter Xu authored
    Hugepd is so far only used on PowerPC, on 4K page size kernels where the
    hash MMU is used.  follow_page_mask() used to go through the hugetlb APIs
    to access hugepd entries.  Teach follow_page_mask() to handle hugepd
    entries itself.
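
    As a rough sketch of the idea (the walker below is simplified and
    follow_hugepd() is a placeholder name, not necessarily the exact code in
    the patch), the slow-gup walker recognizes a hugepd entry at a directory
    level and hands it off directly instead of routing through hugetlb:

        static struct page *follow_pud_level(struct vm_area_struct *vma,
                                             pud_t *pudp, unsigned long addr,
                                             unsigned int flags,
                                             struct follow_page_context *ctx)
        {
                pud_t pud = READ_ONCE(*pudp);

                /* Hand hugepd entries to the hugepd code right away. */
                if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
                        return follow_hugepd(vma, __hugepd(pud_val(pud)),
                                             addr, PUD_SHIFT, flags, ctx);

                /* ... otherwise the usual pud -> pmd -> pte walk continues. */
                return NULL;
        }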
    
    With the previous refactoring of the fast-gup helper gup_huge_pd(), most
    of that code can be reused.  Some of it is unnecessary for follow_page():
    for example, gup_hugepte() tries to detect a page table entry change,
    which can never happen with slow gup (the page table lock is held), but
    doing the check anyway is not a problem.
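
    For reference, the shape of that check in a gup_hugepte()-style helper
    looks roughly like the sketch below (simplified, with an illustrative
    name and reference grabbing left out):

        static int gup_one_hugepte(pte_t *ptep, unsigned long sz,
                                   unsigned long addr, unsigned long end,
                                   struct page **pages, int *nr)
        {
                pte_t pte = ptep_get(ptep);
                struct page *page;
                int refs;

                /* Start from the subpage that 'addr' falls on. */
                page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
                refs = record_subpages(page, addr, end, pages + *nr);

                /*
                 * Fast gup is lockless, so it must re-read the entry and bail
                 * out if it changed underneath.  Slow gup holds the page table
                 * lock, so this re-check can never fail there; it is merely
                 * redundant.
                 */
                if (unlikely(!pte_same(pte, ptep_get(ptep))))
                        return 0;

                *nr += refs;
                return 1;
        }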
    
    Since follow_page() only ever fetches one page, setting the end to
    "address + PAGE_SIZE" suffices.  We will still walk the page table only
    once per hugetlb page, by setting ctx->page_mask properly.
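
    A sketch of that arithmetic (variable names such as 'sz' and 'head' are
    illustrative): the walk itself covers a single base page, while
    ctx->page_mask tells the caller how many base pages the entry maps, so
    the remaining addresses within the same huge page are skipped without
    another walk:

        /* Caller side: follow exactly one base page at 'address'. */
        unsigned long end = address + PAGE_SIZE;

        /* Hugepd side: 'sz' is the size mapped by this huge entry. */
        ctx->page_mask = (sz >> PAGE_SHIFT) - 1;

        /* Head page plus in-entry offset selects the exact subpage. */
        page = nth_page(head, (address & (sz - 1)) >> PAGE_SHIFT);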
    
    One thing worth mentioning is that on Power8 hash MMUs the _bad() helper
    of some page table level reports is_hugepd() entries as bad.  It applies
    at least to the PUD level on Power8 with 4K page size: feeding a hugepd
    entry to pud_bad() reports a false positive.  Leave that alone for now,
    because it is arch-specific code I am reluctant to touch.  For this patch
    it is not a problem, as long as hugepd entries are detected before any
    bad-entry checks.
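
    Concretely, it only matters that the two tests at such a level keep this
    ordering (sketch, reusing the placeholder names above):

        pud_t pud = READ_ONCE(*pudp);

        /*
         * Test hugepd first: on a Power8 hash MMU with 4K pages, feeding
         * a hugepd entry to pud_bad() can return a false positive.
         */
        if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
                return follow_hugepd(vma, __hugepd(pud_val(pud)),
                                     addr, PUD_SHIFT, flags, ctx);

        if (unlikely(pud_bad(pud)))
                return NULL;            /* genuinely bad entry */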
    
    To allow slow gup such as follow_*_page() to access the hugepd helpers,
    the hugepd code is moved to the top of the file.  Besides that, the
    helper record_subpages() is now used by either hugepd or fast-gup,
    depending on the config, so unfortunately it needs an "#ifdef" to avoid
    "unused function" warnings.
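
    The guard looks roughly like this (the exact config condition is
    illustrative; the function body is the pre-existing one, unchanged):

        #if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_FAST_GUP)
        static int record_subpages(struct page *page, unsigned long addr,
                                   unsigned long end, struct page **pages)
        {
                int nr;

                /* Fill pages[] with every base page covering [addr, end). */
                for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
                        pages[nr] = nth_page(page, nr);

                return nr;
        }
        #endif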
    
    Link: https://lkml.kernel.org/r/20240327152332.950956-13-peterx@redhat.com
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Tested-by: Ryan Roberts <ryan.roberts@arm.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Andrew Jones <andrew.jones@linux.dev>
    Cc: Aneesh Kumar K.V (IBM) <aneesh.kumar@kernel.org>
    Cc: Axel Rasmussen <axelrasmussen@google.com>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Christoph Hellwig <hch@infradead.org>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: James Houghton <jthoughton@google.com>
    Cc: Jason Gunthorpe <jgg@nvidia.com>
    Cc: John Hubbard <jhubbard@nvidia.com>
    Cc: Kirill A. Shutemov <kirill@shutemov.name>
    Cc: Lorenzo Stoakes <lstoakes@gmail.com>
    Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: "Mike Rapoport (IBM)" <rppt@kernel.org>
    Cc: Muchun Song <muchun.song@linux.dev>
    Cc: Rik van Riel <riel@surriel.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Yang Shi <shy828301@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>