    mm/ZONE_DEVICE: new type of ZONE_DEVICE for unaddressable memory · 5042db43
    Jérôme Glisse authored
    HMM (heterogeneous memory management) needs struct page to support
    migration from system main memory to device memory.  The reasons for HMM
    and for migration to device memory are explained in the HMM core patch.
    
    This patch deals with device memory that is un-addressable (i.e. the CPU
    cannot access it).  Hence we do not want those struct pages to be managed
    like regular memory.  That is why we extend ZONE_DEVICE to support
    different types of memory.
    
    A persistent memory type is defined for the existing users of ZONE_DEVICE
    and a new device un-addressable type is added for the un-addressable
    memory.  There is a clear separation between what is expected from each
    memory type; existing users of ZONE_DEVICE are unaffected by the new
    requirements and the new use of the un-addressable type.  All type-specific
    code paths are protected by tests against the memory type.
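
    As an illustration only, the type discrimination could look roughly like
    the sketch below; the enumerator and field names are illustrative and not
    necessarily the exact identifiers introduced by this patch:

        /* sketch: tag each ZONE_DEVICE mapping with its memory type */
        enum memory_type {
                MEMORY_DEVICE_HOST = 0,  /* CPU-addressable, e.g. persistent memory */
                MEMORY_DEVICE_PRIVATE,   /* un-addressable device memory */
        };

        struct dev_pagemap {
                /* ... existing fields ... */
                enum memory_type type;   /* tested on every type-specific code path */
        };

    Type-specific code paths then test this type (for instance through a
    helper such as is_device_private_page()) before taking any special-case
    branch.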
    
    Because the memory is un-addressable, we use a new special swap type for
    when a page is migrated to device memory (this reduces the maximum number
    of swap files).
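
    The idea is that a page migrated to device memory is represented in the
    page table by a special swap entry that encodes the device page, using a
    swap type carved out of the swap-type space (hence fewer types remain for
    real swap files).  A rough sketch of such an encoding, where the
    SWP_DEVICE_READ/SWP_DEVICE_WRITE names stand for the reserved device swap
    types and are illustrative:

        #include <linux/swap.h>
        #include <linux/swapops.h>

        /* sketch: encode a device page as a special swap entry */
        static inline swp_entry_t device_swap_entry(struct page *page, bool write)
        {
                /* one reserved swap type for read access, one for write access */
                return swp_entry(write ? SWP_DEVICE_WRITE : SWP_DEVICE_READ,
                                 page_to_pfn(page));
        }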
    
    The two main additions to ZONE_DEVICE, besides the memory type, are two
    callbacks.  The first one, page_free(), is called whenever the page
    refcount reaches 1 (which means the page is free, as a ZONE_DEVICE page
    never reaches a refcount of 0).  This allows the device driver to manage
    its memory and the associated struct pages.
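
    A driver-side hook for this could look roughly as follows; the callback
    signature, the struct mydev driver state and the mydev_free_block()
    helper are hypothetical and only meant to illustrate the flow:

        /* sketch: return the backing device memory to the driver's allocator */
        static void mydev_page_free(struct page *page, void *data)
        {
                struct mydev *mydev = data;   /* hypothetical driver private data */

                /* hand the device memory block behind this page back to the driver */
                mydev_free_block(mydev, page_to_pfn(page));   /* hypothetical helper */
        }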
    
    The second callback, page_fault(), is invoked when there is a CPU access
    to an address that is backed by a device page (which is un-addressable by
    the CPU).  This callback is responsible for migrating the page back to
    system main memory.  The device driver cannot block this migration back to
    system memory; HMM makes sure that such pages cannot be pinned into device
    memory.
    
    If the device is in some error condition and cannot migrate memory back,
    then a CPU page fault to device memory should end with SIGBUS.
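
    In other words, the fault path is expected to either migrate the page
    back or report a hard failure.  A hypothetical driver-side fault handler
    could look roughly like this; the exact callback signature and the
    mydev_migrate_to_ram() helper are assumptions made for illustration:

        /* sketch: CPU faulted on an un-addressable device page */
        static int mydev_page_fault(struct vm_area_struct *vma,
                                    unsigned long addr,
                                    const struct page *page,
                                    unsigned int flags,
                                    pmd_t *pmdp)
        {
                /* try to migrate the device page back to system main memory */
                if (mydev_migrate_to_ram(vma, addr, page))   /* hypothetical helper */
                        return VM_FAULT_SIGBUS;   /* migration failed: kill the access */

                return 0;   /* page is now backed by system memory, access can retry */
        }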
    
    [arnd@arndb.de: fix warning]
      Link: http://lkml.kernel.org/r/20170823133213.712917-1-arnd@arndb.de
    Link: http://lkml.kernel.org/r/20170817000548.32038-8-jglisse@redhat.com
    Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Dan Williams <dan.j.williams@intel.com>
    Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
    Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
    Cc: Balbir Singh <bsingharora@gmail.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: David Nellans <dnellans@nvidia.com>
    Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: John Hubbard <jhubbard@nvidia.com>
    Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Mark Hairgrove <mhairgrove@nvidia.com>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Sherry Cheung <SCheung@nvidia.com>
    Cc: Subhash Gutti <sgutti@nvidia.com>
    Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
    Cc: Bob Liu <liubo95@huawei.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>