1. 11 Dec, 2023 14 commits
    • mm/memory_hotplug: replace an open-coded kmemdup() in add_memory_resource() · 82b8a3b4
      Vishal Verma authored
      Patch series "mm: use memmap_on_memory semantics for dax/kmem", v10.
      
      The dax/kmem driver can potentially hot-add large amounts of memory
      originating from CXL memory expanders, NVDIMMs, or other 'device
      memories'.  There may not be enough regular system memory available
      to hold the memmap for this new memory.  It is therefore desirable,
      when all other conditions are met, for kmem-managed memory to place
      its memmap on the newly added memory itself.
      
      The main hurdle for accomplishing this for kmem is that memmap_on_memory
      can only be done if the memory being added is equal to the size of one
      memblock.  To overcome this, allow the hotplug code to split an
      add_memory() request into memblock-sized chunks, and try_remove_memory()
      to also expect and handle such a scenario.
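
      As a rough illustration only (not the actual hunk), the splitting
      boils down to a loop like the one below; memory_block_size_bytes()
      is a real helper, while add_single_memory_block() is a hypothetical
      stand-in for the per-chunk work:

          u64 block = memory_block_size_bytes();  /* one memblock */
          u64 cur;
          int ret;

          /* Walk the hot-add request in memblock-sized steps. */
          for (cur = start; cur < start + size; cur += block) {
                  ret = add_single_memory_block(cur, block); /* hypothetical */
                  if (ret)
                          return ret;
          }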
      
      Patch 1 replaces an open-coded kmemdup()
      
      Patch 2 teaches the memory_hotplug code to allow for splitting
      add_memory() and remove_memory() requests over memblock sized chunks.
      
      Patch 3 allows the dax region drivers to request memmap_on_memory
      semantics.  CXL dax regions default this to 'on'; all others default
      to 'off' to keep existing behavior unchanged.
      
      
      This patch (of 3):
      
      A review of the memmap_on_memory modifications to add_memory_resource()
      revealed an instance of an open-coded kmemdup().  Replace it with
      kmemdup().
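
      For reference, the shape of the cleanup (a sketch of the pattern,
      not the exact hunk) is:

          /* Open-coded duplication: allocate, check, copy by hand. */
          new = kmalloc(sizeof(*src), GFP_KERNEL);
          if (!new)
                  return -ENOMEM;
          memcpy(new, src, sizeof(*src));

          /* Same result with the existing helper. */
          new = kmemdup(src, sizeof(*src), GFP_KERNEL);
          if (!new)
                  return -ENOMEM;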
      
      Link: https://lkml.kernel.org/r/20231107-vv-kmem_memmap-v10-0-1253ec050ed0@intel.com
      Link: https://lkml.kernel.org/r/20231107-vv-kmem_memmap-v10-1-1253ec050ed0@intel.com
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Fan Ni <fan.ni@samsung.com>
      Reported-by: Dan Williams <dan.j.williams@intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: "Huang, Ying" <ying.huang@intel.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • NUMA: optimize detection of memory with no node id assigned by firmware · ff6c3d81
      Liam Ni authored
      The sanity check that makes sure the nodes cover all memory loops
      over numa_meminfo to count the pages that have a node id assigned by
      the firmware, then loops again over memblock.memory to find the
      total amount of memory, and finally checks that the difference
      between the total memory and the memory covered by nodes is less
      than some threshold.  Worse, the loop over numa_meminfo calls
      __absent_pages_in_range(), which also partially traverses
      memblock.memory.
      
      It's much simpler and more efficient to have a single traversal of
      memblock.memory that verifies that the amount of memory not covered
      by nodes is less than a threshold.
      
      Introduce memblock_validate_numa_coverage() that does exactly that and use
      it instead of numa_meminfo_cover_memory().
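
      A sketch of the single-traversal check (names follow the description
      above; the exact function body may differ):

          bool __init memblock_validate_numa_coverage(unsigned long threshold_bytes)
          {
                  unsigned long nr_pages = 0;
                  unsigned long start_pfn, end_pfn;
                  int nid, i;

                  /* One pass over memblock.memory: count pages with no node id. */
                  for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
                          if (nid == NUMA_NO_NODE)
                                  nr_pages += end_pfn - start_pfn;

                  return (nr_pages << PAGE_SHIFT) < threshold_bytes;
          }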
      
      Link: https://lkml.kernel.org/r/20231026020329.327329-1-zhiguangni01@gmail.com
      Signed-off-by: Liam Ni <zhiguangni01@gmail.com>
      Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Bibo Mao <maobibo@loongson.cn>
      Cc: Binbin Zhou <zhoubinbin@loongson.cn>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Feiyang Chen <chenfeiyang@loongson.cn>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Huacai Chen <chenhuacai@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: WANG Xuerui <kernel@xen0n.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: huge_memory: batch tlb flush when splitting a pte-mapped THP · 3027c6f8
      Baolin Wang authored
      I can observe an obvious tlb flush hotspot when splitting a pte-mapped THP
      on my ARM64 server, and the distribution of this hotspot is as follows:
      
         - 16.85% split_huge_page_to_list
            + 7.80% down_write
            - 7.49% try_to_migrate
               - 7.48% rmap_walk_anon
                    7.23% ptep_clear_flush
            + 1.52% __split_huge_page
      
      The reason is that split_huge_page_to_list() builds migration
      entries for each subpage of a pte-mapped anonymous THP via
      try_to_migrate() (or unmaps it, for a file THP), clearing and
      TLB-flushing each subpage's pte individually.  Moreover,
      split_huge_page_to_list() sets the TTU_SPLIT_HUGE_PMD flag to ensure
      the THP is already pte-mapped before splitting it into normal pages.
      
      There is actually no need to flush the TLB for each subpage
      immediately; instead, we can batch the TLB flush for the whole
      pte-mapped THP to improve performance.
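
      A hedged sketch of the resulting unmap step (modeled on unmap_folio()
      in mm/huge_memory.c): TTU_BATCH_FLUSH defers the per-pte flushes and
      try_to_unmap_flush() issues a single flush at the end:

          static void unmap_folio(struct folio *folio)
          {
                  enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
                                             TTU_SYNC | TTU_BATCH_FLUSH;

                  if (folio_test_anon(folio))
                          try_to_migrate(folio, ttu_flags);
                  else
                          try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);

                  /* One TLB flush for the whole THP, not one per subpage pte. */
                  try_to_unmap_flush();
          }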
      
      After this patch, the batched TLB flush shows a clear latency
      improvement when running thpscale.
      
                                   k6.5-base                   patched
      Amean     fault-both-1      1071.17 (   0.00%)      901.83 *  15.81%*
      Amean     fault-both-3      2386.08 (   0.00%)     1865.32 *  21.82%*
      Amean     fault-both-5      2851.10 (   0.00%)     2273.84 *  20.25%*
      Amean     fault-both-7      3679.91 (   0.00%)     2881.66 *  21.69%*
      Amean     fault-both-12     5916.66 (   0.00%)     4369.55 *  26.15%*
      Amean     fault-both-18     7981.36 (   0.00%)     6303.57 *  21.02%*
      Amean     fault-both-24    10950.79 (   0.00%)     8752.56 *  20.07%*
      Amean     fault-both-30    14077.35 (   0.00%)    10170.01 *  27.76%*
      Amean     fault-both-32    13061.57 (   0.00%)    11630.08 *  10.96%*
      
      Link: https://lkml.kernel.org/r/431d9fb6823036369dcb1d3b2f63732f01df21a7.1698488264.git.baolin.wang@linux.alibaba.com
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Reviewed-by: Alistair Popple <apopple@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • fork: use __mt_dup() to duplicate maple tree in dup_mmap() · d2406291
      Peng Zhang authored
      In dup_mmap(), using __mt_dup() to duplicate the old maple tree and then
      directly replacing the entries of VMAs in the new maple tree can result in
      better performance.  __mt_dup() uses DFS pre-order to duplicate the maple
      tree, so it is efficient.
      
      The average time complexity of __mt_dup() is O(n), where n is the number
      of VMAs.  The proof of the time complexity is provided in the commit log
      that introduces __mt_dup().  After duplicating the maple tree, each
      element is traversed and replaced (ignoring the cases of deletion, which
      are rare).  Since it is only a replacement operation for each element,
      this process is also O(n).
      
      Analyzing the exact time complexity of the previous algorithm is
      challenging because each insertion can involve appending to a node,
      pushing data to adjacent nodes, or even splitting nodes.  The frequency of
      each action is difficult to calculate.  The worst-case scenario for a
      single insertion is when the tree undergoes splitting at every level.  If
      we consider each insertion as the worst-case scenario, we can determine
      that the upper bound of the time complexity is O(n*log(n)), although this
      is a loose upper bound.  However, based on the test data, it appears that
      the actual time complexity is likely to be O(n).
      
      As the entire maple tree is duplicated using __mt_dup(), if
      dup_mmap() fails partway, a portion of the new tree's entries will
      still refer to VMAs that have not been duplicated.  To handle this,
      mark the failure point with XA_ZERO_ENTRY; when exit_mmap()
      encounters this marker, it stops releasing VMAs from that point on,
      as sketched below.
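
      In outline (a sketch of the flow, not the verbatim diff; dup_vma()
      is a hypothetical stand-in for the per-VMA copy), dup_mmap() becomes:

          ret = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
          if (ret)
                  goto out;

          for_each_vma(vmi, mpnt) {
                  struct vm_area_struct *tmp = dup_vma(mpnt);

                  if (!tmp) {
                          /* Mark the failure point for exit_mmap(). */
                          mas_store(&vmi.mas, XA_ZERO_ENTRY);
                          goto out;
                  }
                  /* Replace the duplicated entry with the new VMA. */
                  mas_store(&vmi.mas, tmp);
          }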
      
      There is a "spawn" in byte-unixbench[1], which can be used to test the
      performance of fork().  I modified it slightly to make it work with
      different number of VMAs.
      
      Below are the test results.  The first row shows the number of VMAs.
      The second and third rows show the number of fork() calls per ten
      seconds, corresponding to next-20231006 and this patchset,
      respectively; the fourth row shows the relative improvement.  The
      test results were obtained with CPU binding to avoid scheduler load
      balancing that could cause unstable results.  There are still some
      fluctuations in the test results, but at least they are better than
      the original performance.
      
      VMAs:           21     121   221    421    821    1621   3221   6421   12821  25621  51221
      next-20231006:  112100 76261 54227  34035  20195  11112  6017   3161   1606   802    393
      patched:        114558 83067 65008  45824  28751  16072  8922   4747   2436   1233   599
      improvement:    2.19%  8.92% 19.88% 34.64% 42.37% 44.64% 48.28% 50.17% 51.68% 53.74% 52.42%
      
      [1] https://github.com/kdlucas/byte-unixbench/tree/master
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-11-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Suggested-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: preserve the tree attributes when destroying maple tree · 8e50d32c
      Peng Zhang authored
      When destroying a maple tree, preserve its attributes and then turn it into
      an empty tree.  This allows it to be reused without needing to be
      reinitialized.
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-10-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: update check_forking() and bench_forking() · 446e1867
      Peng Zhang authored
      Update check_forking() and bench_forking() to use __mt_dup() to
      duplicate the maple tree.
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-9-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: skip other tests when BENCH is enabled · f670fa1c
      Peng Zhang authored
      Skip other tests when BENCH is enabled so that performance can be measured
      in user space.
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-8-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: update the documentation of maple tree · 9bc1d3cd
      Peng Zhang authored
      Introduce the new interface mtree_dup() in the documentation.
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-7-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: add test for mtree_dup() · a2587a7e
      Peng Zhang authored
      Add test for mtree_dup().
      
      Test by duplicating different maple trees and then comparing the two
      trees.  Includes tests for duplicating full trees and memory allocation
      failures on different nodes.
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-6-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • radix tree test suite: align kmem_cache_alloc_bulk() with kernel behavior. · 46c99e26
      Peng Zhang authored
      When kmem_cache_alloc_bulk() fails to allocate, leave the freed pointers
      in the array.  This enables a more accurate simulation of the kernel's
      behavior and allows for testing potential double-free scenarios.
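
      In effect (a sketch of the semantics a test-suite caller now sees):

          void *objs[4] = { NULL, };

          /* Returns 0 on failure.  Slots filled before the failure may
           * still hold pointers that were subsequently freed, as in the
           * kernel, so reusing them is a detectable double free. */
          if (kmem_cache_alloc_bulk(cachep, GFP_KERNEL, 4, objs) == 0)
                  return;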
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-5-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: introduce interfaces __mt_dup() and mtree_dup() · fd32e4e9
      Peng Zhang authored
      Introduce the interfaces __mt_dup() and mtree_dup(), which duplicate
      a maple tree in Depth-First Search (DFS) pre-order traversal.  They
      use memcpy() to copy nodes from the source tree and allocate new
      child nodes for non-leaf nodes.  A new node is exactly the same as
      its source node except for all the addresses stored in it.  This is
      faster than traversing all elements in the source tree and inserting
      them one by one into the new tree.  The time complexity of these two
      functions is O(n).
      
      The difference between __mt_dup() and mtree_dup() is that mtree_dup()
      handles locks internally.
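
      A minimal usage sketch, assuming the signatures implied above (an
      integer return, 0 on success):

          struct maple_tree dst = MTREE_INIT(dst, 0);
          int ret;

          /* mtree_dup() takes both tree locks itself. */
          ret = mtree_dup(&src, &dst, GFP_KERNEL);
          if (ret)
                  return ret;     /* e.g. -ENOMEM */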
      
      Analysis of the average time complexity of this algorithm:
      
      For simplicity, let's assume that the maximum branching factor of all
      non-leaf nodes is 16 (in allocation mode, it is 10), and the tree is a
      full tree.
      
      Under the given conditions, if there is a maple tree with n elements, the
      number of its leaves is n/16.  From bottom to top, the number of nodes in
      each level is 1/16 of the number of nodes in the level below.  So the
      total number of nodes in the entire tree is given by the sum of n/16 +
      n/16^2 + n/16^3 + ...  + 1.  This is a geometric series, and it has log(n)
      terms with base 16.  According to the formula for the sum of a geometric
      series, the sum of this series can be calculated as (n-1)/15.  Each node
      has only one parent node pointer, which can be considered as an edge.  In
      total, there are (n-1)/15-1 edges.
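
      Spelling out the geometric-series step (assuming n is an exact power
      of 16, so the sum is exact):

          \sum_{k=1}^{\log_{16} n} \frac{n}{16^k}
              = \frac{n}{16} \cdot \frac{1 - 16^{-\log_{16} n}}{1 - 1/16}
              = \frac{n - 1}{15}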
      
      This algorithm consists of two operations:
      
      1. Traversing all nodes in DFS order.
      2. For each node, making a copy and performing necessary modifications
         to create a new node.
      
      For the first part, DFS traversal visits each edge twice.  Let
      T(descend) represent the cost of taking one step downwards and
      T(ascend) the cost of taking one step upwards; both are constants
      (mas_ascend() contains a loop and so may not strictly be constant,
      but we ignore that here and treat it as one).  So the time spent on
      the first part can be represented as ((n-1)/15-1) * (T(ascend) +
      T(descend)).
      
      For the second part, each node will be copied, and the cost of copying a
      node is denoted as T(copy_node).  For each non-leaf node, it is necessary
      to reallocate all child nodes, and the cost of this operation is denoted
      as T(dup_alloc).  The behavior behind memory allocation is complex and not
      specific to the maple tree operation.  Here, we assume that the time
      required for a single allocation is constant.  Since the size of a node is
      fixed, both of these symbols are also constants.  We can calculate that
      the time spent on the second part is ((n-1)/15) * T(copy_node) + ((n-1)/15
      - n/16) * T(dup_alloc).
      
      Adding both parts together, the total time spent by the algorithm can be
      represented as:
      
      ((n-1)/15) * (T(ascend) + T(descend) + T(copy_node) + T(dup_alloc)) -
      n/16 * T(dup_alloc) - (T(ascend) + T(descend))
      
      Let C1 = T(ascend) + T(descend) + T(copy_node) + T(dup_alloc)
      Let C2 = T(dup_alloc)
      Let C3 = T(ascend) + T(descend)
      
      Finally, the expression can be simplified as:
      ((16 * C1 - 15 * C2) / (15 * 16)) * n - (C1 / 15 + C3).
      
      This is a linear function, so the average time complexity is O(n).
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-4-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Suggested-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: introduce {mtree,mas}_lock_nested() · b2472efe
      Peng Zhang authored
      In some cases nested locks may be needed, so {mtree,mas}_lock_nested()
      are introduced.  For example, when duplicating a maple tree, we need
      to hold the locks of two trees, which requires nested locks.
      
      At the same time, add a definition of spin_lock_nested() in tools/
      for testing.
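
      A hedged sketch of the intended locking pattern when two trees must
      be held at once (SINGLE_DEPTH_NESTING is the usual lockdep subclass
      for such pairs):

          mtree_lock(&src);
          mtree_lock_nested(&dst, SINGLE_DEPTH_NESTING);

          /* ... operate on both trees, e.g. duplicate src into dst ... */

          mtree_unlock(&dst);
          mtree_unlock(&src);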
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-3-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • maple_tree: add mt_free_one() and mt_attr() helpers · 4f2267b5
      Peng Zhang authored
      Patch series "Introduce __mt_dup() to improve the performance of fork()", v7.
      
      This series introduces __mt_dup() to improve the performance of
      fork().  Previously, during the duplication of an mmap, all VMAs
      were traversed and inserted one by one into the new maple tree,
      causing the maple tree to be rebalanced multiple times.  Balancing
      the maple tree is a costly operation.  To duplicate VMAs more
      efficiently, mtree_dup() and __mt_dup() are introduced for the maple
      tree.
      
      Here are some algorithmic details about {mtree,__mt}_dup().  We perform a
      DFS pre-order traversal of all nodes in the source maple tree.  During
      this process, we fully copy the nodes from the source tree to the new
      tree.  This involves memory allocation, and when encountering a new node,
      if it is a non-leaf node, all its child nodes are allocated at once.
      
      This idea was originally from Liam R.  Howlett's Maple Tree Work email,
      and I added some of my own ideas to implement it.  Some previous
      discussions can be found in [1].  For a more detailed analysis of the
      algorithm, please refer to the logs for patch [3/10] and patch [10/10].
      
      There is a "spawn" in byte-unixbench[2], which can be used to test the
      performance of fork().  I modified it slightly to make it work with
      different number of VMAs.
      
      Below are the test results.  The first row shows the number of VMAs.
      The second and third rows show the number of fork() calls per ten
      seconds, corresponding to next-20231006 and this patchset,
      respectively; the fourth row shows the relative improvement.  The
      test results were obtained with CPU binding to avoid scheduler load
      balancing that could cause unstable results.  There are still some
      fluctuations in the test results, but at least they are better than
      the original performance.
      
      VMAs:           21     121   221    421    821    1621   3221   6421   12821  25621  51221
      next-20231006:  112100 76261 54227  34035  20195  11112  6017   3161   1606   802    393
      patched:        114558 83067 65008  45824  28751  16072  8922   4747   2436   1233   599
      improvement:    2.19%  8.92% 19.88% 34.64% 42.37% 44.64% 48.28% 50.17% 51.68% 53.74% 52.42%
      
      Thanks to Liam and Matthew for the review.
      
      
      This patch (of 10):
      
      Add two helpers (sketched below):
      1. mt_free_one(), used to free a single maple node.
      2. mt_attr(), used to obtain the attributes of a maple tree.
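
      Both are tiny; roughly (a sketch consistent with the description,
      not necessarily the exact bodies):

          /* Free a single maple node back to its cache. */
          static void mt_free_one(struct maple_node *node)
          {
                  kmem_cache_free(maple_node_cache, node);
          }

          /* User-set tree attributes, with internal height bits masked off. */
          static inline unsigned int mt_attr(struct maple_tree *mt)
          {
                  return mt->ma_flags & ~MT_FLAGS_HEIGHT_MASK;
          }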
      
      Link: https://lkml.kernel.org/r/20231027033845.90608-1-zhangpeng.00@bytedance.com
      Link: https://lkml.kernel.org/r/20231027033845.90608-2-zhangpeng.00@bytedance.com
      Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mateusz Guzik <mjguzik@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Mike Christie <michael.christie@oracle.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/vmstat: move pgdemote_* to per-node stats · 23e9f013
      Li Zhijian authored
      Demotion migrates pages across nodes.  Previously, only global
      demotion statistics were accounted for.  Change them to per-node
      statistics, making it easier to observe where demotion occurs on
      each node.
      
      This will help to identify which nodes are under pressure.
      
      This patch also puts pgdemote_* behind CONFIG_NUMA_BALANCING, since
      demotion is not available without CONFIG_NUMA_BALANCING.
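
      The mechanical shape of the change is roughly (a sketch; the
      counters move from global vm_event_item counters to per-node
      node_stat_item counters):

          /* Before: one global event counter. */
          __count_vm_events(PGDEMOTE_KSWAPD, nr_demoted);

          /* After: accounted against the demotion source node. */
          mod_node_page_state(pgdat, PGDEMOTE_KSWAPD, nr_demoted);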
      
      With this patch, here is a sample where node0 and node1 are DRAM and
      node3 is PMEM:
      Global stats:
      $ grep demote /proc/vmstat
      pgdemote_kswapd 254288
      pgdemote_direct 113497
      pgdemote_khugepaged 0
      
      Per-node stats:
      $ grep demote /sys/devices/system/node/node0/vmstat # demotion source
      pgdemote_kswapd 68454
      pgdemote_direct 83431
      pgdemote_khugepaged 0
      $ grep demote /sys/devices/system/node/node1/vmstat # demotion source
      pgdemote_kswapd 185834
      pgdemote_direct 30066
      pgdemote_khugepaged 0
      $ grep demote /sys/devices/system/node/node3/vmstat # demotion target
      pgdemote_kswapd 0
      pgdemote_direct 0
      pgdemote_khugepaged 0
      
      Link: https://lkml.kernel.org/r/20231103031450.1456523-1-lizhijian@fujitsu.com
      Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
      Acked-by: "Huang, Ying" <ying.huang@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  2. 07 Dec, 2023 26 commits