- 22 Feb, 2024 40 commits
-
Mark Brown authored
Patch series "selftests/mm: Output cleanups for the compaction test". A couple of small updates for the check_compaction selftest which make it play more nicely with test automation systems. This patch (of 2): When the compaction test is run it checks to make sure that prerequistives the test requires are available and skips the tests if not. When this happens we log the test as a pass rather than a skip, log as a skip so that the distinction is clear and automation can see unexpected skips. Link: https://lkml.kernel.org/r/20240209-kselftest-mm-cleanup-v1-0-a3c0386496b5@kernel.org Link: https://lkml.kernel.org/r/20240209-kselftest-mm-cleanup-v1-1-a3c0386496b5@kernel.orgSigned-off-by: Mark Brown <broonie@kernel.org> Cc: Muhammad Usama Anjum <usama.anjum@collabora.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kefeng Wang authored
Refactor compact_node() to handle both proactive and synchronous memory compaction, which cleans up the code a bit. Link: https://lkml.kernel.org/r/20240208013607.1731817-1-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Anshuman Khandual authored
This adds the following new sysfs file, which tracks the number of successfully released pages from a given CMA heap area:

  /sys/kernel/mm/cma/<cma-heap-area>/release_pages_success

The file will be available via CONFIG_CMA_SYSFS and helps in determining the active CMA pages available on the CMA heap area. This adds a new 'nr_pages_released' counter (CONFIG_CMA_SYSFS) into 'struct cma' which gets updated during cma_release(). After this change, a user will be able to find the active CMA pages available in a given CMA heap area via the following method:

  Active pages = alloc_pages_success - release_pages_success

That's valuable information for both software designers and system admins as it allows them to tune the number of CMA pages available in the system. This increases user visibility into the allocated CMA area and its utilization. Link: https://lkml.kernel.org/r/20240206045731.472759-1-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
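A userspace sketch of the bookkeeping described above (the heap-area name is a placeholder to fill in; alloc_pages_success is the pre-existing allocation-side counter referenced in the formula):

    #include <stdio.h>

    static long read_counter(const char *path)
    {
            FILE *f = fopen(path, "r");
            long val = -1;

            if (!f)
                    return -1;
            if (fscanf(f, "%ld", &val) != 1)
                    val = -1;
            fclose(f);
            return val;
    }

    int main(void)
    {
            /* replace <cma-heap-area> with a real CMA area name */
            const char *base = "/sys/kernel/mm/cma/<cma-heap-area>";
            char path[256];
            long alloced, released;

            snprintf(path, sizeof(path), "%s/alloc_pages_success", base);
            alloced = read_counter(path);
            snprintf(path, sizeof(path), "%s/release_pages_success", base);
            released = read_counter(path);

            if (alloced >= 0 && released >= 0)
                    printf("active CMA pages: %ld\n", alloced - released);
            return 0;
    }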
-
SeongJae Park authored
The DAMON debugfs selftests dependency checker assumes debugfs would be mounted at /sys/kernel/debug. That would be OK for many cases, but some systems might mount the file system somewhere else. Parse the real mount point using the /proc/mounts file. Link: https://lkml.kernel.org/r/20240207203134.69976-9-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
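The dependency checker itself is a shell script; purely as an illustration of the same /proc/mounts lookup, here is a small C sketch using the standard getmntent() interface:

    #include <mntent.h>
    #include <stdio.h>
    #include <string.h>

    /* Print the real debugfs mount point found in /proc/mounts, if any. */
    int main(void)
    {
            FILE *mounts = setmntent("/proc/mounts", "r");
            struct mntent *ent;

            if (!mounts)
                    return 1;
            while ((ent = getmntent(mounts)) != NULL) {
                    if (!strcmp(ent->mnt_type, "debugfs")) {
                            printf("%s\n", ent->mnt_dir);
                            break;
                    }
            }
            endmntent(mounts);
            return 0;
    }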
-
SeongJae Park authored
Commit ebb3f994 ("mm/damon/dbgfs: fix 'struct pid' leaks in 'dbgfs_target_ids_write()'") fixes a pid leak bug in DAMON debugfs interface, namely dbgfs_target_ids_write() function. Add a selftest for the issue to prevent the problem from mistakenly recurring. Link: https://lkml.kernel.org/r/20240207203134.69976-8-sj@kernel.orgSigned-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
commit 34796417 ("mm/damon/dbgfs: protect targets destructions with kdamond_lock") fixed a race of DAMON debugfs interface. Specifically, the race was happening between target_ids_read() and dbgfs_before_terminate(). Add a test for the issue to prevent the problem from accidentally recurring. Link: https://lkml.kernel.org/r/20240207203134.69976-7-sj@kernel.orgSigned-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Add a selftest for DAMOS apply intervals. It runs two schemes with different apply intervals against an artificial memory access workload, and checks whether the scheme with the smaller apply interval was applied more frequently. Link: https://lkml.kernel.org/r/20240207203134.69976-6-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Add a selftest for verifying the DAMOS quota feature. The test is very similar to sysfs_update_schemes_tried_regions_wss_estimation.py. It starts an artificial workload with a 20 MiB working set and runs DAMON to find the working set size, but with a 1 MiB/100 ms size quota. Then it collects the DAMON-found working set size every 100 ms and checks whether the quota was always applied as expected. For confirmation, the test uses the stat-applied region size and the qt_exceeds stat. Link: https://lkml.kernel.org/r/20240207203134.69976-5-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update the test-purpose DAMON sysfs control Python module to support DAMOS apply interval. Link: https://lkml.kernel.org/r/20240207203134.69976-4-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update the test-purpose DAMON sysfs control Python module to support DAMOS stats. Link: https://lkml.kernel.org/r/20240207203134.69976-3-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Patch series "selftests/damon: add more tests for core functionalities and corner cases". Continue DAMON selftests' test coverage improvement works with a trivial improvement of the test code itself. The sequence of the patches in patchset is as follows. The first five patches add two DAMON core functionalities tests. Those begins with three patches (patches 1-3) that update the test-purpose DAMON sysfs interface wrapper to support DAMOS quota, stats, and apply interval features, respectively. The fourth patch implements and adds a selftest for DAMOS quota feature, using the DAMON sysfs interface wrapper's newly added support of the quota and the stats feature. The fifth patch further implements and adds a selftest for DAMOS apply interval using the DAMON sysfs interface wrapper's newly added support of the apply interval and the stats feature. Two patches (patches 6 and 7) for implementing and adding two corner cases handling selftests follow. Those try to avoid two previously fixed bugs from recurring. Finally, a patch for making DAMON debugfs selftests dependency checker to use /proc/mounts instead of the hard-coded mount point assumption follows. This patch (of 8): Update the test-purpose DAMON sysfs control Python module to support DAMOS quota. Link: https://lkml.kernel.org/r/20240207203134.69976-1-sj@kernel.org Link: https://lkml.kernel.org/r/20240207203134.69976-2-sj@kernel.orgSigned-off-by: SeongJae Park <sj@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
John Groves authored
It tried to send me off to memory_hotplug.h for an enum that is a few lines above... Link: https://lkml.kernel.org/r/dba0f5f01162d6fa16e4da2a9fede7f97080e92d.1707179960.git.john@groves.net Signed-off-by: John Groves <john@groves.net> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Mark-PK Tsai authored
Some architectures, such as arm, have implemented optimized copy_page for full page copying. Replace the full page memcpy with copy_page to take advantage of the optimization. Link: https://lkml.kernel.org/r/20231007070554.8657-1-mark-pk.tsai@mediatek.com Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Matthias Brugger <matthias.bgg@gmail.com> Cc: Minchan Kim <minchan@kernel.org> Cc: YJ Chiang <yj.chiang@mediatek.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
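The change boils down to the following substitution (a sketch; src and dst stand for the mapped source and destination pages in zram):

    /* before: generic byte copy of one full page */
    memcpy(dst, src, PAGE_SIZE);

    /*
     * after: copy_page() lets architectures such as arm use their
     * hand-optimized full-page copy routine
     */
    copy_page(dst, src);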
-
Li Zhijian authored
Currently, when a demotion occurs, it will prioritize selecting a node from the preferred targets as the destination node for the demotion. If the preferred node does not meet the requirements, it will try all the lower memory tier nodes until it finds a suitable demotion destination node or ultimately fails. However, the demotion target information isn't exposed to the users, especially the preferred target information, which relies on more factors. This makes it hard for users to understand the exact demotion behavior. Rather than adding a new sysfs interface to expose this information, print it directly to the kernel log, just like the current page allocation fallback order does. A dmesg example with this patch is as follows:

  [    0.704860] Demotion targets for Node 0: null
  [    0.705456] Demotion targets for Node 1: null
  // node 2 is onlined
  [   32.259775] Demotion targets for Node 0: preferred: 2, fallback: 2
  [   32.261290] Demotion targets for Node 1: preferred: 2, fallback: 2
  [   32.262726] Demotion targets for Node 2: null
  // node 3 is onlined
  [   42.448809] Demotion targets for Node 0: preferred: 2, fallback: 2-3
  [   42.450704] Demotion targets for Node 1: preferred: 2, fallback: 2-3
  [   42.452556] Demotion targets for Node 2: preferred: 3, fallback: 3
  [   42.454136] Demotion targets for Node 3: null
  // node 4 is onlined
  [   52.676833] Demotion targets for Node 0: preferred: 2, fallback: 2-4
  [   52.678735] Demotion targets for Node 1: preferred: 2, fallback: 2-4
  [   52.680493] Demotion targets for Node 2: preferred: 4, fallback: 3-4
  [   52.682154] Demotion targets for Node 3: null
  [   52.683405] Demotion targets for Node 4: null
  // node 5 is onlined
  [   62.931902] Demotion targets for Node 0: preferred: 2, fallback: 2-5
  [   62.938266] Demotion targets for Node 1: preferred: 5, fallback: 2-5
  [   62.943515] Demotion targets for Node 2: preferred: 4, fallback: 3-4
  [   62.947471] Demotion targets for Node 3: null
  [   62.949908] Demotion targets for Node 4: null
  [   62.952137] Demotion targets for Node 5: preferred: 3, fallback: 3-4

Regarding this requirement, we have previously had a discussion [1]. The initial proposal involved introducing a new sysfs interface. However, due to concerns about potential changes and compatibility issues with the interface in the future, a consensus was not reached with the community. Therefore, this time, we directly print out the information. [1] https://lore.kernel.org/all/d1d5add8-8f4a-4578-8bf0-2cbe79b09989@fujitsu.com/ Link: https://lkml.kernel.org/r/20240206020151.605516-1-lizhijian@fujitsu.com Signed-off-by: Li Zhijian <lizhijian@fujitsu.com> Reviewed-by: "Huang, Ying" <ying.huang@intel.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
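A rough kernel-style sketch of how one such line can be produced with the %*pbl nodemask format specifier (the function and variable names here are illustrative, not taken from the patch):

    /* sketch: emit one line of the output shown above for node @nid */
    static void print_demotion_targets(int nid, nodemask_t *preferred,
                                       nodemask_t *fallback)
    {
            if (nodes_empty(*fallback))
                    pr_info("Demotion targets for Node %d: null\n", nid);
            else
                    pr_info("Demotion targets for Node %d: preferred: %*pbl, fallback: %*pbl\n",
                            nid, nodemask_pr_args(preferred),
                            nodemask_pr_args(fallback));
    }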
-
SeongJae Park authored
The DAMON sysfs interface needs to access kdamond-touching data for some of the kdamond user commands. It uses the ->after_aggregation() kdamond callback to safely access the data in that case. It had to use the aggregation interval callback because that was the only callback from which users could access complete monitoring results. Since the patch series "mm/damon: provide pseudo-moving sum based access rate", which starts from commit 78fbfb15 ("mm/damon/core: define and use a dedicated function for region access rate update"), DAMON provides good-to-use quality monitoring results for every sampling interval. It aims to help users who need to quickly retrieve the monitoring results. Cases where the aggregation interval is set so long that waiting for it degrades the user experience, or where the access pattern is expected to change significantly[1], are such examples. However, because the DAMON sysfs interface is still handling the commands per aggregation interval, the end user cannot get the benefit. Update the DAMON sysfs interface to handle kdamond commands for every sampling interval if applicable. Specifically, all kdamond data accessing commands except the 'commit' command are applicable. [1] https://lore.kernel.org/r/20240129121316.GA9706@cuiyangpei Link: https://lkml.kernel.org/r/20240206025158.203097-1-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: xiongping1 <xiongping1@xiaomi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Baolin Wang authored
alloc_and_dissolve_hugetlb_folio() preallocates a new hugetlb page before it takes hugetlb_lock. In 3 out of 4 cases the page is not really used and therefore the newly allocated page is just freed right away. This is wasteful and it might cause premature failures in those cases. Address that by moving the allocation down to the only case that needs it (the hugetlb page is really in the free pages pool). We need to drop hugetlb_lock to do so and therefore need to recheck the page state after regaining it. The patch is more of a cleanup than an actual fix to an existing problem. There are no known reports about premature failures. Link: https://lkml.kernel.org/r/62890fd60b1ecd5bf1cdc476c973f60fe37aa0cb.1707181934.git.baolin.wang@linux.alibaba.com Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> Acked-by: Michal Hocko <mhocko@suse.com> Reviewed-by: Muchun Song <muchun.song@linux.dev> Cc: David Hildenbrand <david@redhat.com> Cc: Oscar Salvador <osalvador@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Paul Gofman authored
pte_mkdirty() sets both the _PAGE_DIRTY and _PAGE_SOFT_DIRTY bits. The _PAGE_SOFT_DIRTY bit can therefore get set even if it wasn't set on the original page before migration. This makes non-soft-dirty pages soft-dirty just because of migration/compaction. Clear the _PAGE_SOFT_DIRTY flag if it wasn't set on the original page. By definition of the soft-dirty feature, there can be spurious soft-dirty pages because of the kernel's internal activity such as VMA merging or migration/compaction. This patch eliminates the spurious soft-dirty pages caused by migration/compaction. Link: https://lkml.kernel.org/r/20240206084838.34560-1-usama.anjum@collabora.com Signed-off-by: Paul Gofman <pgofman@codeweavers.com> Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Acked-by: Andrei Vagin <avagin@gmail.com> Cc: Michał Mirosław <emmir@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
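The shape of the fix on the migration-entry restore path is roughly the following sketch (not the literal diff; pte_swp_soft_dirty()/pte_mksoft_dirty()/pte_clear_soft_dirty() are the existing soft-dirty helpers):

    pte = pte_mkdirty(pte);         /* also sets _PAGE_SOFT_DIRTY on x86 */

    if (pte_swp_soft_dirty(old_pte))
            pte = pte_mksoft_dirty(pte);
    else
            /* the original page wasn't soft-dirty, don't make it so now */
            pte = pte_clear_soft_dirty(pte);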
-
Chengming Zhou authored
Since we don't need to leave the zswap entry on the zswap tree anymore, we should remove it from the tree once we find it there. Then, after using it, we can directly free it; no concurrent path can find it from the tree. Only the shrinker can see it from the lru list, and it also double checks under the tree lock, so there is no race problem. So we don't need the refcount in the zswap entry anymore and don't need to take the spinlock a second time to invalidate it. The side effect is that zswap_entry_free() may not happen under the tree spinlock, but that's OK since nothing needs to be protected by the lock. Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-6-99d4084260a0@bytedance.com Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Nhat Pham <nphamcs@gmail.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Chengming Zhou authored
The !zswap_exclusive_loads_enabled mode will leave a compressed copy in the zswap tree and lru list after the folio swapin. There are some disadvantages in this mode:
  1. It's a waste of memory since there are two copies of the data: one is the folio, the other is the compressed data in zswap. And it's unlikely the compressed data is useful in the near future.
  2. If that folio is dirtied, the compressed data must not be useful, but we don't know that and don't invalidate the trashy memory in zswap.
  3. It's not reclaimable from the zswap shrinker since zswap_writeback_entry() will always return -EEXIST and terminate the shrinking process.
On the other hand, the only downside of zswap_exclusive_loads_enabled is a little more cpu usage/latency for compression, and the same applies if the folio is removed from the swapcache or dirtied. More explanation by Johannes on why we should consider exclusive load as the default for zswap: Caching "swapout work" is helpful when the system is thrashing. Then recently swapped in pages might get swapped out again very soon. It certainly makes sense with conventional swap, because keeping a clean copy on the disk saves IO work and doesn't cost any additional memory. But with zswap, it's different. It saves some compression work on a thrashing page. But the act of keeping compressed memory contributes to a higher rate of thrashing. And that can cause IO in other places like zswap writeback and file memory. And the A/B test results of the kernel build in tmpfs with limited memory can support this theory:

                              !exclusive      exclusive
  real                            63.80          63.01
  user                          1063.83        1061.32
  sys                            290.31         266.15

  workingset_refault_anon    2383084.40     1976397.40
  workingset_refault_file      44134.00       45689.40
  workingset_activate_anon    837878.00      728441.20
  workingset_activate_file      4710.00        4085.20
  workingset_restore_anon     732622.60      639428.40
  workingset_restore_file       1007.00         926.80
  workingset_nodereclaim           0.00           0.00
  pgscan                    14343003.40    12409570.20
  pgscan_kswapd                    0.00           0.00
  pgscan_direct             14343003.40    12409570.20
  pgscan_khugepaged                0.00           0.00

Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-5-99d4084260a0@bytedance.com Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Acked-by: Yosry Ahmed <yosryahmed@google.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Nhat Pham <nphamcs@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Chengming Zhou authored
  cat /sys/kernel/debug/zswap/duplicate_entry
  2086447

When testing, the duplicate_entry value is very high, but there is no warning message in the kernel log. From the comment on duplicate_entry, "Duplicate store was encountered (rare)", it seems something went wrong. Actually it's incremented at the beginning of zswap_store(), which finds that its zswap entry is already on the tree. And this is a normal case, since the folio could leave its zswap entry on the tree after swapin; when it is later dirtied and swapped out/zswap_store()d again, it finds its original zswap entry. So duplicate_entry should only be incremented in the real bug case, which already has a "WARN_ON(1)"; it looks redundant to count the bug case, so this patch just removes it. Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-4-99d4084260a0@bytedance.com Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Nhat Pham <nphamcs@gmail.com> Acked-by: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Chengming Zhou authored
When the shrinker encounters an existing folio in the swap cache, it means we are shrinking into the warmer region. We should terminate shrinking if we're in the dynamic shrinker context. This patch adds LRU_STOP to support this, to avoid overshrinking. Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-3-99d4084260a0@bytedance.com Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Nhat Pham <nphamcs@gmail.com> Reviewed-by: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
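Conceptually, the new value extends the list_lru walk-callback return codes, and the zswap shrinker's callback returns it when it hits a folio that is already in the swap cache. A sketch, assuming the existing enum lru_status values and a hypothetical encountered_page_in_swapcache flag for the dynamic-shrinker case:

    enum lru_status {
            LRU_REMOVED,            /* item removed from list */
            LRU_REMOVED_RETRY,      /* item removed, but lock has been dropped */
            LRU_ROTATE,             /* item referenced, give another pass */
            LRU_SKIP,               /* item cannot be locked, skip */
            LRU_RETRY,              /* item not freeable, may retry */
            LRU_STOP,               /* stop walking the lru list altogether */
    };

    /* in the zswap lru-walk callback (sketch): */
    if (writeback_ret == -EEXIST && encountered_page_in_swapcache)
            return LRU_STOP;        /* we've reached the warmer region */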
-
Chengming Zhou authored
During testing I found that zswap_writeback_entry() sometimes returns -ENOMEM, which is not what we expected:

  bpftrace -e 'kr:zswap_writeback_entry {@[(int32)retval]=count()}'
  @[-12]: 1563
  @[0]: 277221

The reason is that __read_swap_cache_async() returns NULL because swapcache_prepare() failed, which in turn is because we don't invalidate the zswap entry when the swap entry is freed to the per-cpu pool; these zswap entries are still on the zswap tree and lru list. This patch moves the invalidation ahead, to when the swap entry is freed to the per-cpu pool, since there is no benefit in leaving trashy zswap entries on the tree and lru list. With this patch:

  bpftrace -e 'kr:zswap_writeback_entry {@[(int32)retval]=count()}'
  @[0]: 259744

Note: a large folio can't have a zswap entry for now, so don't bother to add zswap entry invalidation in the large folio swap free path. Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-2-99d4084260a0@bytedance.com Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Reviewed-by: Nhat Pham <nphamcs@gmail.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Chengming Zhou authored
Patch series "mm/zswap: optimize zswap lru list", v2. This series is motivated when observe the zswap lru list shrinking, noted there are some unexpected cases in zswap_writeback_entry(). bpftrace -e 'kr:zswap_writeback_entry {@[(int32)retval]=count()}' There are some -ENOMEM because when the swap entry is freed to per-cpu swap pool, it doesn't invalidate/drop zswap entry. Then the shrinker encounter these trashy zswap entries, it can't be reclaimed and return -ENOMEM. So move the invalidation ahead to when swap entry freed to the per-cpu swap pool, since there is no any benefit to leave trashy zswap entries on the zswap tree and lru list. Another case is -EEXIST, which is seen more in the case of !zswap_exclusive_loads_enabled, in which case the swapin folio will leave compressed copy on the tree and lru list. And it can't be reclaimed until the folio is removed from swapcache. Changing to zswap_exclusive_loads_enabled mode will invalidate when folio swapin, which has its own drawback if that folio is still clean in swapcache and swapout again, we need to compress it again. Please see the commit for details on why we choose exclusive load as the default for zswap. Another optimization for -EEXIST is that we add LRU_STOP to support terminating the shrinking process to avoid evicting warmer region. Testing using kernel build in tmpfs, one 50GB swapfile and zswap shrinker_enabled, with memory.max set to 2GB. mm-unstable zswap-optimize real 63.90s 63.25s user 1064.05s 1063.40s sys 292.32s 270.94s The main optimization is in sys cpu, about 7% improvement. This patch (of 6): Add more comments in shrink_memcg_cb() to describe the deref dance which is implemented to fix race problem between lru writeback and swapoff, and the reason why we rotate the entry at the beginning. Also fix the stale comments in zswap_writeback_entry(), and add more comments to state that we only deref the tree after we get the swapcache reference. Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-0-99d4084260a0@bytedance.com Link: https://lkml.kernel.org/r/20240201-b4-zswap-invalidate-entry-v2-1-99d4084260a0@bytedance.comSigned-off-by: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Suggested-by: Yosry Ahmed <yosryahmed@google.com> Suggested-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Yosry Ahmed <yosryahmed@google.com> Reviewed-by: Nhat Pham <nphamcs@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Ricardo B. Marliere authored
Now that the driver core can properly handle constant struct bus_type, move the memory_tier_subsys variable to be a constant structure as well, placing it into read-only memory which can not be modified at runtime. Link: https://lkml.kernel.org/r/20240204-bus_cleanup-mm-v1-1-00f49286f164@marliere.net Signed-off-by: Ricardo B. Marliere <ricardo@marliere.net> Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Hao Ge authored
too_many_isolated() should return bool, as does the similar too_many_isolated() in mm/compaction.c. Link: https://lkml.kernel.org/r/20240205042618.108140-1-gehao@kylinos.cn Signed-off-by: Hao Ge <gehao@kylinos.cn> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Anshuman Khandual authored
There is no real difference between the global area and the other CMA areas additionally configured via CONFIG_CMA_AREAS, which always has a default without user input. This makes MAX_CMA_AREAS the same as CONFIG_CMA_AREAS, and also increments its default values, thus maintaining the current default for MAX_CMA_AREAS on both UMA and NUMA systems. Link: https://lkml.kernel.org/r/20240205051929.298559-1-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Anshuman Khandual authored
All pr_debug() prints in mm/cma.c can be enabled via the standard Makefile-based method. Besides, cma_debug_show_areas() should always be called in the cma_alloc() failure path. This seemingly redundant config, CONFIG_CMA_DEBUG, can be dropped without any problem. [lukas.bulwahn@gmail.com: remove debug code to removed CONFIG_CMA_DEBUG] Link: https://lkml.kernel.org/r/20240207143825.986-1-lukas.bulwahn@gmail.com Link: https://lkml.kernel.org/r/20240205031647.283510-1-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Tiezhu Yang authored
After commit f7e01ab8 ("kasan: move tests to mm/kasan/"), the test module file is renamed from lib/test_kasan_module.c to mm/kasan/kasan_test_module.c, in order to keep consistent, rename test_kasan_module_init to kasan_test_module_init. Link: https://lkml.kernel.org/r/20240205060925.15594-3-yangtiezhu@loongson.cnSigned-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Acked-by: Marco Elver <elver@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Tiezhu Yang authored
After commit f7e01ab8 ("kasan: move tests to mm/kasan/"), the test file is renamed to mm/kasan/kasan_test.c and the test module is renamed to kasan_test.ko, so update the descriptions in the document. While at it, update the line number and testcase number when the tests kmalloc_large_oob_right and kmalloc_double_kzfree failed to sync with the current code in mm/kasan/kasan_test.c. Link: https://lkml.kernel.org/r/20240205060925.15594-2-yangtiezhu@loongson.cnSigned-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Acked-by: Marco Elver <elver@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Breno Leitao authored
hugetlb_madv_vs_map selftest was not part of the mm test-suite since we didn't have a fix for the problem it found. Now that the problem is already fixed (see previous commit), let's enable this selftest in the default test-suite. Link: https://lkml.kernel.org/r/20240205191843.4009640-3-leitao@debian.org Signed-off-by: Breno Leitao <leitao@debian.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Lorenzo Stoakes <lstoakes@gmail.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Rik van Riel <riel@surriel.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Breno Leitao authored
Patch series "mm/hugetlb: Restore the reservation", v2. This is a fix for a case where a backing huge page could stolen after madvise(MADV_DONTNEED). A full reproducer is in selftest. See https://lore.kernel.org/all/20240105155419.1939484-1-leitao@debian.org/ In order to test this patch, I instrumented the kernel with LOCKDEP and KASAN, and run the following tests, without any regression: * The self test that reproduces the problem * All mm hugetlb selftests SUMMARY: PASS=9 SKIP=0 FAIL=0 * All libhugetlbfs tests PASS: 0 86 FAIL: 0 0 This patch (of 2): Currently there is a bug that a huge page could be stolen, and when the original owner tries to fault in it, it causes a page fault. You can achieve that by: 1) Creating a single page echo 1 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 2) mmap() the page above with MAP_HUGETLB into (void *ptr1). * This will mark the page as reserved 3) touch the page, which causes a page fault and allocates the page * This will move the page out of the free list. * It will also unreserved the page, since there is no more free page 4) madvise(MADV_DONTNEED) the page * This will free the page, but not mark it as reserved. 5) Allocate a secondary page with mmap(MAP_HUGETLB) into (void *ptr2). * it should fail, but, since there is no more available page. * But, since the page above is not reserved, this mmap() succeed. 6) Faulting at ptr1 will cause a SIGBUS * it will try to allocate a huge page, but there is none available A full reproducer is in selftest. See https://lore.kernel.org/all/20240105155419.1939484-1-leitao@debian.org/ Fix this by restoring the reserved page if necessary. These are the condition for the page restore: * The system is not using surplus pages. The goal is to reduce the surplus usage for this case. * If the VMA has the HPAGE_RESV_OWNER flag set, and is PRIVATE. This is safely checked using __vma_private_lock() * The page is anonymous Once this is scenario is found, set the `hugetlb_restore_reserve` bit in the folio. Then check if the resv reservations need to be adjusted later, done later, after the spinlock, since the vma_xxxx_reservation() might touch the file system lock. Link: https://lkml.kernel.org/r/20240205191843.4009640-1-leitao@debian.org Link: https://lkml.kernel.org/r/20240205191843.4009640-2-leitao@debian.orgSigned-off-by: Breno Leitao <leitao@debian.org> Suggested-by: Rik van Riel <riel@surriel.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Lorenzo Stoakes <lstoakes@gmail.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Paul Heidekrüger authored
Test that KASan can detect some unsafe atomic accesses. As discussed in the linked thread below, these tests attempt to cover the most common uses of atomics and, therefore, aren't exhaustive. Link: https://lkml.kernel.org/r/20240202113259.3045705-1-paul.heidekrueger@tum.de Link: https://lore.kernel.org/all/20240131210041.686657-1-paul.heidekrueger@tum.de/T/#u Signed-off-by: Paul Heidekrüger <paul.heidekrueger@tum.de> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055 Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Marco Elver <elver@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Christophe JAILLET authored
"page_counter.h" does not need <linux/kernel.h>. <linux/limits.h> is enough to get LONG_MAX. Files that include page_counter.h are limited. They have been compile tested or checked. $ git grep page_counter\.h include/linux/hugetlb_cgroup.h: struct page_counter hugepage[HUGE_MAX_HSTATE]; --> all files that include it have been compile tested include/linux/memcontrol.h:#include <linux/page_counter.h> --> <linux/kernel.h> has been added, to be safe include/net/sock.h:#include <linux/page_counter.h> --> already include <linux/kernel.h> mm/hugetlb_cgroup.c:#include <linux/page_counter.h> mm/memcontrol.c:#include <linux/page_counter.h> mm/page_counter.c:#include <linux/page_counter.h> --> compile tested Link: https://lkml.kernel.org/r/adfdbe21c4d06400d7bd802868762deb85cae8b6.1706908921.git.christophe.jaillet@wanadoo.frSigned-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
T.J. Mercier authored
Before 388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim") we passed the number of pages for the reclaim request directly to try_to_free_mem_cgroup_pages, which could lead to significant overreclaim. After 0388536a the number of pages was limited to a maximum 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim. However such a small batch size caused a regression in reclaim performance due to many more reclaim start/stop cycles inside memory_reclaim. The restart cost is amortized over more pages with larger batch sizes, and becomes a significant component of the runtime if the batch size is too small. Reclaim tries to balance nr_to_reclaim fidelity with fairness across nodes and cgroups over which the pages are spread. As such, the bigger the request, the bigger the absolute overreclaim error. Historic in-kernel users of reclaim have used fixed, small sized requests to approach an appropriate reclaim rate over time. When we reclaim a user request of arbitrary size, use decaying batch sizes to manage error while maintaining reasonable throughput. MGLRU enabled - memcg LRU used root - full reclaim pages/sec time (sec) pre-0388536a : 68047 10.46 post-0388536a : 13742 inf (reclaim-reclaimed)/4 : 67352 10.51 MGLRU enabled - memcg LRU not used /uid_0 - 1G reclaim pages/sec time (sec) overreclaim (MiB) pre-0388536a : 258822 1.12 107.8 post-0388536a : 105174 2.49 3.5 (reclaim-reclaimed)/4 : 233396 1.12 -7.4 MGLRU enabled - memcg LRU not used /uid_0 - full reclaim pages/sec time (sec) pre-0388536a : 72334 7.09 post-0388536a : 38105 14.45 (reclaim-reclaimed)/4 : 72914 6.96 [tjmercier@google.com: v4] Link: https://lkml.kernel.org/r/20240206175251.3364296-1-tjmercier@google.com Link: https://lkml.kernel.org/r/20240202233855.1236422-1-tjmercier@google.com Fixes: 0388536a ("mm:vmscan: fix inaccurate reclaim during proactive reclaim") Signed-off-by: T.J. Mercier <tjmercier@google.com> Reviewed-by: Yosry Ahmed <yosryahmed@google.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Michal Koutny <mkoutny@suse.com> Acked-by: Shakeel Butt <shakeelb@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Shakeel Butt <shakeelb@google.com> Cc: Muchun Song <songmuchun@bytedance.com> Cc: Efly Young <yangyifei03@kuaishou.com> Cc: Yu Zhao <yuzhao@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Yajun Deng authored
These vma_merge() callers pass mm, anon_vma and file, which all come from the same vma. There is no need to pass the three parameters at the same time. Pass the vma instead of mm, anon_vma and file to vma_merge(), so that it saves two parameters. Link: https://lkml.kernel.org/r/20240203014632.2726545-1-yajun.deng@linux.dev Link: https://lore.kernel.org/lkml/20240125034922.1004671-2-yajun.deng@linux.dev/ Signed-off-by: Yajun Deng <yajun.deng@linux.dev> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Yajun Deng <yajun.deng@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Breno Leitao authored
The usage text of run_vmtests.sh does not include hugetlb, which is a valid test category. Add 'hugetlb' to the usage of run_vmtests.sh. Link: https://lkml.kernel.org/r/20240129115246.1234253-1-leitao@debian.org Signed-off-by: Breno Leitao <leitao@debian.org> Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Reviewed-by: Joel Savitz <jsavitz@redhat.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shuah Khan <shuah@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
David Hildenbrand authored
... and conditionally return to the caller if any PTE except the first one is writable. fork() has to make sure to properly write-protect in case any PTE is writable. Other users (e.g., page unmapping) are expected to not care. Link: https://lkml.kernel.org/r/20240129124649.189745-16-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David S. Miller <davem@davemloft.net> Cc: Dinh Nguyen <dinguyen@kernel.org> Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Russell King (Oracle) <linux@armlinux.org.uk> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
David Hildenbrand authored
Let's always ignore the accessed/young bit: we'll always mark the PTE as old in our child process during fork, and upcoming users will similarly not care. Ignore the dirty bit only if we don't want to duplicate the dirty bit into the child process during fork. Maybe, we could just set all PTEs in the child dirty if any PTE is dirty. For now, let's keep the behavior unchanged, this can be optimized later if required. Ignore the soft-dirty bit only if the bit doesn't have any meaning in the src vma, and similarly won't have any in the copied dst vma. For now, we won't bother with the uffd-wp bit. Link: https://lkml.kernel.org/r/20240129124649.189745-15-david@redhat.comSigned-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David S. Miller <davem@davemloft.net> Cc: Dinh Nguyen <dinguyen@kernel.org> Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Russell King (Oracle) <linux@armlinux.org.uk> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
David Hildenbrand authored
Let's implement PTE batching when consecutive (present) PTEs map consecutive pages of the same large folio, and all other PTE bits besides the PFNs are equal. We will optimize folio_pte_batch() separately, to ignore selected PTE bits. This patch is based on work by Ryan Roberts. Use __always_inline for __copy_present_ptes() and keep the handling for single PTEs completely separate from the multi-PTE case: we really want the compiler to optimize for the single-PTE case with small folios, to not degrade performance. Note that PTE batching will never exceed a single page table and will always stay within VMA boundaries. Further, processing PTE-mapped THP that maybe pinned and have PageAnonExclusive set on at least one subpage should work as expected, but there is room for improvement: We will repeatedly (1) detect a PTE batch (2) detect that we have to copy a page (3) fall back and allocate a single page to copy a single page. For now we won't care as pinned pages are a corner case, and we should rather look into maintaining only a single PageAnonExclusive bit for large folios. Link: https://lkml.kernel.org/r/20240129124649.189745-14-david@redhat.comSigned-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David S. Miller <davem@davemloft.net> Cc: Dinh Nguyen <dinguyen@kernel.org> Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Russell King (Oracle) <linux@armlinux.org.uk> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
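A simplified sketch of the batch-detection idea (this is not the kernel's folio_pte_batch(); it ignores the accessed/dirty/soft-dirty handling described above and assumes the pte_next_pfn() helper introduced earlier in this series, which returns the same PTE value with the PFN advanced by one page):

    /*
     * Count how many PTEs starting at @ptep map physically consecutive
     * pages with all other PTE bits identical, up to @max_nr entries.
     */
    static int pte_batch_count(pte_t *ptep, pte_t pte, int max_nr)
    {
            pte_t expected = pte_next_pfn(pte);
            int nr = 1;

            while (nr < max_nr) {
                    if (!pte_same(ptep_get(ptep + nr), expected))
                            break;
                    expected = pte_next_pfn(expected);
                    nr++;
            }
            return nr;
    }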
-
David Hildenbrand authored
We already read it, let's just forward it. This patch is based on work by Ryan Roberts. [david@redhat.com: fix the hmm "exclusive_cow" selftest] Link: https://lkml.kernel.org/r/13f296b8-e882-47fd-b939-c2141dc28717@redhat.com Link: https://lkml.kernel.org/r/20240129124649.189745-13-david@redhat.comSigned-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: David S. Miller <davem@davemloft.net> Cc: Dinh Nguyen <dinguyen@kernel.org> Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Russell King (Oracle) <linux@armlinux.org.uk> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-