    arm64/mm: Enable memory hot remove · bbd6ec60
    Anshuman Khandual authored
    The arch code for hot-remove must tear down portions of the linear map and
    vmemmap corresponding to memory being removed. In both cases the page
    tables mapping these regions must be freed, and when sparse vmemmap is in
    use the memory backing the vmemmap must also be freed.
    
    This patch adds unmap_hotplug_range() and free_empty_tables() helpers that
    can tear down either region, and calls them from vmemmap_free() and
    __remove_pgd_mapping(). The free_mapped argument determines whether the
    backing memory will be freed.
    
    The tear-down makes two distinct passes over the kernel page table. In the
    first pass, unmap_hotplug_range() unmaps each mapped leaf entry,
    invalidates the relevant TLB entries and, where required (vmemmap), frees
    the backing memory. In the second pass, free_empty_tables() looks for
    empty page table sections whose page table pages can be unmapped,
    TLB-invalidated and freed.
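    The two passes can be illustrated with a self-contained toy model (a
    two-level table standing in for the kernel page tables; the names
    toy_unmap_range() and toy_free_empty_tables() are hypothetical stand-ins,
    not the kernel helpers, and no TLB handling is modelled):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Toy model: one top-level table whose slots point to leaf tables,
 * each leaf table holding ENTRIES "pte" slots. Non-zero == mapped. */
#define ENTRIES 4

typedef struct {
    unsigned long *leaf[ENTRIES];   /* leaf table pages (or NULL) */
} toy_pgd_t;

/* Pass 1: clear every mapped leaf entry in [start, end); the real code
 * also invalidates the TLB and, for vmemmap, frees the backing page. */
static void toy_unmap_range(toy_pgd_t *pgd, int start, int end,
                            bool free_mapped)
{
    for (int i = start; i < end; i++) {
        int t = i / ENTRIES, e = i % ENTRIES;
        if (pgd->leaf[t] && pgd->leaf[t][e]) {
            pgd->leaf[t][e] = 0;    /* unmap the leaf entry */
            (void)free_mapped;      /* real code frees backing memory here */
        }
    }
}

/* Pass 2: free any leaf table page whose entries are now all empty;
 * a page with any live entry is left alone. */
static void toy_free_empty_tables(toy_pgd_t *pgd)
{
    for (int t = 0; t < ENTRIES; t++) {
        bool empty = true;

        if (!pgd->leaf[t])
            continue;
        for (int e = 0; e < ENTRIES; e++)
            if (pgd->leaf[t][e])
                empty = false;      /* still valid: keep this table page */
        if (empty) {
            free(pgd->leaf[t]);
            pgd->leaf[t] = NULL;
        }
    }
}
```

    The split matters: pass 1 only touches leaf entries, so pass 2 can decide
    about whole table pages with a simple emptiness check.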
    
    While freeing an intermediate-level page table page, bail out if any of
    its entries are still valid. This can happen with a partially filled
    kernel page table, either left over from a previously failed memory hot
    add attempt or when removing an address range that does not span the
    entire range covered by that page table page.
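    The bail-out rule reduces to a single emptiness check; a minimal sketch
    (table_page_is_empty() is a hypothetical name, with 0 standing in for an
    invalid entry):

```c
#include <stdbool.h>
#include <stddef.h>

/* An intermediate table page may be freed only when every entry in it
 * is invalid (0 here). A single live entry -- e.g. left by a failed hot
 * add, or mapping an address outside the range being removed -- means
 * the caller must bail out and keep the page. */
static bool table_page_is_empty(const unsigned long *entries, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (entries[i])
            return false;   /* live entry: do not free this table page */
    return true;
}
```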
    
    The vmemmap region may share levels of table with the vmalloc region.
    Hot remove freeing page table pages can therefore conflict with a
    concurrent vmalloc() walking the kernel page table. This conflict cannot
    be solved simply by taking the init_mm ptl, because of the existing
    locking scheme in vmalloc(). So free_empty_tables() implements a floor
    and ceiling method, borrowed from the user page table tear-down in
    free_pgd_range(), which skips freeing a page table page if the
    intermediate address range is not suitably aligned or the floor/ceiling
    bounds might not own the entire page table page.
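    The floor/ceiling test can be sketched in isolation, modelled on the
    checks in free_pgd_range() (SPAN and the function name are made-up toy
    values, not the arm64 table geometry; ceiling == 0 means "unbounded", as
    in the kernel):

```c
#include <stdbool.h>

/* Address range covered by one table page in this toy model. */
#define SPAN 0x1000UL

/* A table page spanning SPAN bytes may be freed only if [floor, ceiling)
 * proves no neighbouring mapping can still be using part of it. */
static bool can_free_table_page(unsigned long addr, unsigned long end,
                                unsigned long floor, unsigned long ceiling)
{
    addr &= ~(SPAN - 1);            /* round down to the page's span */
    if (addr < floor)
        return false;               /* floor owns part of this span */
    if (ceiling) {
        ceiling &= ~(SPAN - 1);
        if (!ceiling)
            return false;
    }
    if (end - 1 > ceiling - 1)      /* span extends past the ceiling */
        return false;
    return true;
}
```

    With ceiling == 0, ceiling - 1 wraps to ULONG_MAX, so an unbounded
    ceiling never blocks the free; an unaligned floor or ceiling does.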
    
    Boot memory on arm64 cannot be removed. Hence this registers a new memory
    hotplug notifier which prevents boot memory offlining and its removal.
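    The notifier's decision boils down to an overlap test against the boot
    memory range; a hedged sketch (reject_offline() and the boundary
    constants are hypothetical toy values, where the kernel would derive the
    real boundaries from memblock and return NOTIFY_BAD):

```c
#include <stdbool.h>

/* Hypothetical boot memory boundaries for illustration only. */
#define BOOT_MEM_START 0x40000000UL
#define BOOT_MEM_END   0x80000000UL

/* Return true if [start, start + size) overlaps boot memory, in which
 * case the offline/remove request must be refused. */
static bool reject_offline(unsigned long start, unsigned long size)
{
    return start < BOOT_MEM_END && start + size > BOOT_MEM_START;
}
```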
    
    While here, update arch_add_memory() to handle __add_pages() failures by
    unmapping the just-added kernel linear mapping. Finally, enable memory
    hot remove on arm64 platforms by default with
    ARCH_ENABLE_MEMORY_HOTREMOVE.
    
    This implementation is overall inspired by the kernel page table
    tear-down procedure on the x86 architecture and by the user page table
    tear-down method.
    
    [Mike and Catalin added P4D page table level support]
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>