    hugetlb: set hugetlb page flag before optimizing vmemmap · d8f5f7e4
    Mike Kravetz authored
    Currently, vmemmap optimization of hugetlb pages is performed before the
    hugetlb flag (previously the hugetlb destructor) is set, which identifies
    the folio as hugetlb.  This means there is a window of time where an ordinary
    folio does not have all associated vmemmap present.  The core mm only
    expects vmemmap to be potentially optimized for hugetlb and device dax. 
    This can cause problems in code such as memory error handling that may
    want to write to tail struct pages.
    
    There is only one call to perform hugetlb vmemmap optimization today.  To
    fix this issue, simply set the hugetlb flag before that call.
    
    There was a similar issue in the free hugetlb path that was previously
    addressed.  The two routines that optimize or restore hugetlb vmemmap
    should only be passed hugetlb folios/pages.  To catch any callers not
    following this rule, add VM_WARN_ON calls to the routines.  In the hugetlb
    free code paths, some calls could be made to restore vmemmap after
    clearing the hugetlb flag.  This was 'safe' because in those cases the
    vmemmap was already present and the call was a no-op.  However, for
    consistency these calls were eliminated so that the VM_WARN_ON checks can
    be added.
    
    Link: https://lkml.kernel.org/r/20230829213734.69673-1-mike.kravetz@oracle.com
    Fixes: f41f2ed4 ("mm: hugetlb: free the vmemmap pages associated with each HugeTLB page")
    Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
    Reviewed-by: Muchun Song <songmuchun@bytedance.com>
    Cc: James Houghton <jthoughton@google.com>
    Cc: Miaohe Lin <linmiaohe@huawei.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
    Cc: Usama Arif <usama.arif@bytedance.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>