- 21 Aug, 2023 40 commits
-
Kemeng Shi authored
Commit cfccd2e6 ("mm, compaction: finish pageblocks on complete migration failure") converted the cc->order aligned check to a pageblock order aligned check. Correct the comment affected by it.

Link: https://lkml.kernel.org/r/20230804110454.2935878-7-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kemeng Shi authored
Commit e380bebe ("mm, compaction: keep migration source private to a single compaction instance") moved the update of the async and sync compact_cached_migrate_pfn from update_pageblock_skip() to update_cached_migrate(), but left the comment behind. Move the comment along with the code to correct this.

Link: https://lkml.kernel.org/r/20230804110454.2935878-6-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kemeng Shi authored
After 90ed667c ("Revert "Revert "mm/compaction: fix set skip in fast_find_migrateblock"""), the skip flag is no longer set in fast_find_migrateblock(). The comment currently claims that fast_find_block is used to avoid the isolation_suitable() check for the pageblock returned from fast_find_migrateblock(), because fast_find_migrateblock() marks the found pageblock as skipped. Correct it: fast_find_block is used to avoid a redundant check on a fast-found pageblock, whose skip flag was already checked inside fast_find_migrateblock().

Link: https://lkml.kernel.org/r/20230804110454.2935878-5-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kemeng Shi authored
Move migrate_pfn to the pageblock end when the block is marked skip, to avoid an unnecessary scan retry of that block from the upper caller. For example, compact_zone() may wrongly rescan a skipped pageblock with finish_pageblock set, as follows (see the sketch below):

1. cc->migrate_pfn points to the start of the pageblock.
2. compact_zone() records last_migrated_pfn from cc->migrate_pfn.
3. compact_zone()->isolate_migratepages()->isolate_migratepages_block() tries to scan the block. low_pfn may be moved forward to the middle of the block because of free pages at the beginning of the block.
4. The first LRU page that could be isolated is found, but the block was exclusively marked skip.
5. isolate_migratepages_block() aborts, leaving cc->migrate_pfn pointing at the found LRU page in the middle of the block.
6. compact_zone() finds cc->migrate_pfn and last_migrated_pfn in the same block and wrongly rescans the block with finish_pageblock set.

Link: https://lkml.kernel.org/r/20230804110454.2935878-4-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
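A minimal C sketch of the fix described above (kernel context; pageblock_end_pfn() is the real helper, while the condition and label names around the isolate_migratepages_block() abort path are assumptions):

    /*
     * On aborting because the pageblock was exclusively marked skip,
     * step low_pfn to the block end so the returned cc->migrate_pfn
     * is not left in the middle of the block and compact_zone() will
     * not rescan it with finish_pageblock set.
     */
    if (aborted_due_to_skip) {              /* hypothetical condition */
            low_pfn = pageblock_end_pfn(low_pfn);
            goto isolate_abort;             /* label name assumed */
    }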
-
Kemeng Shi authored
We record the start pfn of the last isolated pageblock with last_migrated_pfn, and then:

1. We check whether the pageblock was marked skip for exclusive access in isolate_migratepages_block() by testing whether the next migrate pfn is still in the last isolated pageblock. If so, we set finish_pageblock to do the rescan.
2. We check whether a full cc->order block has been scanned by testing whether the last scan range passed the cc->order block boundary. If so, we flush the pages that were freed.

We currently treat cc->migrate_pfn before isolate_migratepages() as the start pfn of the last isolated page range. However, we always align migrate_pfn to a pageblock, or move to another pageblock, in fast_find_migrateblock() or in the linear forward scan in isolate_migratepages(), before doing page isolation in isolate_migratepages_block(). Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1) after the scan to correctly set the start pfn of the last isolated page range (see the sketch below). This avoids:

1. Missing a rescan with finish_pageblock set, because last_migrated_pfn does not point to the right pageblock and the next migrate pfn will not be in the pageblock of last_migrated_pfn as it should be.
2. Wrongly issuing a flush by testing the cc->order block boundary against a wrong last_migrated_pfn.

Link: https://lkml.kernel.org/r/20230804110454.2935878-3-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
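A one-line sketch of the update described above (compact_zone() in mm/compaction.c; pageblock_start_pfn() is the real helper, the surrounding context is assumed):

    /*
     * isolate_migratepages() has advanced cc->migrate_pfn past the
     * scanned range, so the last isolated range starts in the
     * pageblock holding cc->migrate_pfn - 1:
     */
    last_migrated_pfn = pageblock_start_pfn(cc->migrate_pfn - 1);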
-
Liam R. Howlett authored
Reorder the operations for split and spanning stores so that new data is placed in the tree prior to marking the old data as dead. This will limit re-walks on dead data to just once instead of a retry loop.

The order of operations is as follows: create the new data, put the new data in place, and mark the top node of the old data as dead. Then repair parent links in the reused nodes through all levels of the tree, following the new nodes downwards. Finally, walk the top dead node looking for nodes that are no longer used, or subtrees that should be destroyed (marked dead throughout, then freed), following the partially used nodes downwards to discover other dead nodes and subtrees.

Link: https://lkml.kernel.org/r/20230804165951.2661157-7-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Liam R. Howlett authored
All calls to mas_adopt_children() currently pass the parent as the node in the maple state. Allow for the parent pointer that is passed in to be used instead.

Link: https://lkml.kernel.org/r/20230804165951.2661157-6-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Liam R. Howlett authored
Add a definition to shorten long code lines and clarify what the code is doing. Use the new definition to get the maple tree parent pointer from the maple state where possible.

Link: https://lkml.kernel.org/r/20230804165951.2661157-5-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Liam R. Howlett authored
mas_replace() has a single user, and its flag argument is now always true. Replace this function with mas_put_in_tree() to better align with mas_replace_node(). Inline the remaining logic into the only caller, mas_wmb_replace().

Link: https://lkml.kernel.org/r/20230804165951.2661157-4-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Liam R. Howlett authored
Replacing nodes may cause a live lock-up if CPU resources are saturated by write operations on the tree, with readers continuously retrying on dead nodes. To avoid the continuous retry scenario, ensure the new node is inserted into the tree prior to marking the old data as dead. This defines a window in which the old and new data are swapped.

When reusing lower level nodes, ensure the parent pointer is updated after the parent is marked dead. This ensures that the child is still reachable from the top of the tree, but walking up to a dead node will result in a single retry that starts a fresh walk from the top down through the new node (see the sketch below).

Link: https://lkml.kernel.org/r/20230804165951.2661157-3-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
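The ordering can be illustrated with a generic publish-before-retire sketch in C. This is not maple tree code; the structure and names are assumptions, only rcu_assign_pointer()/WRITE_ONCE() are real kernel primitives:

    #include <linux/rcupdate.h>

    struct node {
            struct node __rcu *slot;        /* child pointer */
            bool dead;
    };

    static void replace_child(struct node *parent, struct node *old_child,
                              struct node *new_child)
    {
            /* 1) make the new data reachable first */
            rcu_assign_pointer(parent->slot, new_child);
            /* 2) only then retire the old data */
            WRITE_ONCE(old_child->dead, true);
            /*
             * A reader already inside old_child observes dead == true,
             * restarts from the top and lands in new_child: a single
             * retry instead of a retry loop.
             */
    }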
-
Liam R. Howlett authored
Patch series "maple_tree: Change replacement strategy".

The maple tree marks nodes dead as soon as they are going to be replaced. This could be problematic when used in the RCU context, since the writer may be starved of CPU time by the readers. This patch set addresses the issue by switching the data replacement strategy to one that will only mark data as dead once the new data is available.

This series changes the ordering of the node replacement so that the new data is live before the old data is marked 'dead'. When readers hit 'dead' nodes, they will restart from the top of the tree and end up in the new data.

In more complex scenarios, the replacement strategy means a subtree is built and grafted into the tree, leaving some nodes to point to the old parent. The view of tasks into the old data will either remain with the old data, or see the new data once the old data is marked 'dead'. Iterators will see the 'dead' node and restart on their own, switching to the new data. There is no risk of the reader seeing old data in these cases.

The 'dead' subtree of data is then fully marked dead, but reused nodes will still point to the dead nodes until the parent pointer is updated. Walking up to a 'dead' node will cause a re-walk from the top of the tree, entering the new data area where old data is not reachable.

Once the parent pointers are fully up to date in the active data, the 'dead' subtree is iterated to collect entirely 'dead' subtrees and dead nodes (nodes that partially contained reused data).

This patch (of 6):

When dumping the tree, honour the formatting request to output hex for the maple node type arange64.

Link: https://lkml.kernel.org/r/20230804165951.2661157-1-Liam.Howlett@oracle.com
Link: https://lkml.kernel.org/r/20230804165951.2661157-2-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Greg Kroah-Hartman authored
There are no modules using mm_kobj, so do not export it.

Link: https://lkml.kernel.org/r/2023080436-algebra-cabana-417d@gregkh
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kefeng Wang authored
It is better to use the huge page size instead of PAGE_SIZE for the stride when flushing hugepage TLB ranges, which reduces the loop count in __flush_tlb_range(). Let's support the arch's flush_hugetlb_tlb_range(), which is used in hugetlb_unshare_all_pmds(), move_hugetlb_page_tables() and hugetlb_change_protection() for now (see the sketch below). Note: for hugepages based on the contiguous bit, each page has to be invalidated individually, since the contiguous PTE bit is just a hint and the hardware may or may not take it into account.

Link: https://lkml.kernel.org/r/20230802012731.62512-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
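A sketch of how such an arch hook could look on arm64, assuming __flush_tlb_range()'s stride/last-level/tlb-level parameters; the exact upstream definition may differ:

    /* arch/arm64/include/asm/hugetlb.h (sketch) */
    #define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
    static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
                                               unsigned long start,
                                               unsigned long end)
    {
            unsigned long stride = huge_page_size(hstate_vma(vma));

            if (stride == PMD_SIZE)
                    __flush_tlb_range(vma, start, end, stride, false, 2);
            else if (stride == PUD_SIZE)
                    __flush_tlb_range(vma, start, end, stride, false, 1);
            else    /* contiguous-bit sizes: invalidate page by page */
                    __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
    }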
-
Kefeng Wang authored
Archs may need to do special things when flushing hugepage tlb, so use the more applicable flush_hugetlb_tlb_range() instead of flush_tlb_range().

Link: https://lkml.kernel.org/r/20230801023145.17026-2-wangkefeng.wang@huawei.com
Fixes: 550a7d60 ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Muchun Song <songmuchun@bytedance.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kemeng Shi authored
Removing the "else continue" code at the end of the scan loop causes no behavior change. Just remove it to make the code cleaner.

Link: https://lkml.kernel.org/r/20230803094901.2915942-5-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kemeng Shi <shikemeng@huawei.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kemeng Shi authored
The cursor is currently only used to move the page pointer forward. Move the page pointer forward directly and remove the unnecessary cursor.

Link: https://lkml.kernel.org/r/20230803094901.2915942-4-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kemeng Shi <shikemeng@huawei.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kemeng Shi authored
Merge the end_pfn boundary checks for moving forward by a single pageblock and by multiple pageblocks, to avoid doing the boundary check twice when moving forward by multiple pageblocks.

Link: https://lkml.kernel.org/r/20230803094901.2915942-3-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Kemeng Shi authored
Patch series "Fixes and cleanups to compaction", v2.

This series contains random fixes and cleanups to free page isolation in compaction. It is based on another compact series [1]. More details can be found in the respective patches.

This patch (of 4):

We set the skip flag on the pageblock of block_start_pfn, so it is more reasonable to set compact_cached_free_pfn to the pageblock before block_start_pfn (see the sketch below).

Link: https://lkml.kernel.org/r/20230803094901.2915942-1-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20230803094901.2915942-2-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Kemeng Shi <shikemeng@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
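A one-line sketch of the fix, assuming the update_pageblock_skip() call site in the free page scanner of mm/compaction.c:

    /*
     * The free scanner walks downwards, so cache a pfn that lies in
     * the pageblock below the one just marked skip:
     */
    update_pageblock_skip(cc, page, block_start_pfn - 1);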
-
Miaohe Lin authored
The correct function name is obj_cgroup_may_zswap(). Correct the comment.

Link: https://lkml.kernel.org/r/20230803120021.762279-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Miaohe Lin authored
Since commit 5d0a661d ("mm/page_alloc: use only one PCP list for THP-sized allocations"), the local variable base is just the same as order, so remove it. No functional change intended.

Link: https://lkml.kernel.org/r/20230803114934.693989-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Ruan Jinjie authored
This code is duplicated six times; use the helper function put_z3fold_locked() to release the z3fold page instead of open coding it, to improve code readability a bit. Also add a put_z3fold_locked_list() helper function to be consistent with it (see the sketch below). No functional change involved.

Link: https://lkml.kernel.org/r/20230803113824.886413-1-ruanjinjie@huawei.com
Signed-off-by: Ruan Jinjie <ruanjinjie@huawei.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
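A sketch of the helpers described above, assuming they wrap the existing release callbacks in mm/z3fold.c:

    static inline int put_z3fold_locked(struct z3fold_header *zhdr)
    {
            return kref_put(&zhdr->refcount, release_z3fold_page_locked);
    }

    static inline int put_z3fold_locked_list(struct z3fold_header *zhdr)
    {
            return kref_put(&zhdr->refcount,
                            release_z3fold_page_locked_list);
    }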
-
SeongJae Park authored
Update the DAMON usage document for the newly added DAMON monitoring target type DAMOS filter.

Link: https://lkml.kernel.org/r/20230802214312.110532-14-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update DAMON ABI document for the newly added DAMON monitoring target type DAMOS filter.

Link: https://lkml.kernel.org/r/20230802214312.110532-13-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update DAMON design document for the newly added DAMON monitoring target type DAMOS filter.

Link: https://lkml.kernel.org/r/20230802214312.110532-12-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Test existence of files and validity of input keyword for DAMON monitoring target based DAMOS filter on DAMON sysfs interface.

Link: https://lkml.kernel.org/r/20230802214312.110532-11-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Extend DAMON sysfs interface to support the DAMON monitoring target based DAMOS filter. Users can use it by writing 'target' to the filter's 'type' file and specifying the index of the target from the corresponding DAMON context's monitoring targets list in the 'target_idx' sysfs file.

Link: https://lkml.kernel.org/r/20230802214312.110532-10-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
One DAMON context can have multiple monitoring targets, and DAMOS schemes are applied to all of the targets. In some cases, users need to apply different schemes to different targets. Retrieving monitoring results via the DAMON sysfs interface's 'tried_regions' directory is one good example. Also, there could be cases where the cgroup DAMOS filter is not enough. All such use cases can be worked around by having multiple DAMON contexts with only a single target each, but that is inefficient in terms of resource usage, though the overhead is not estimated to be huge.

Implement a DAMON monitoring target based DAMOS filter for such cases (see the sketch below). Like the address range target DAMOS filter, handle these filters in the DAMON core layer, since it is more efficient than doing so in the operations set layer. This also means that regions filtered out by monitoring target type DAMOS filters are counted as not tried by the scheme. Hence, target granularity monitoring results retrieval via the DAMON sysfs interface becomes available.

Link: https://lkml.kernel.org/r/20230802214312.110532-9-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
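A sketch of the core-layer matching, assuming DAMON's damon_for_each_target() iterator and a target_idx field in struct damos_filter (the helper itself is illustrative):

    static bool damos_filter_match_target(struct damon_ctx *ctx,
                                          struct damon_target *t,
                                          struct damos_filter *filter)
    {
            struct damon_target *ti;
            int idx = 0;

            /* a region matches if its target's index equals target_idx */
            damon_for_each_target(ti, ctx) {
                    if (ti == t)
                            return idx == filter->target_idx;
                    idx++;
            }
            return false;
    }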
-
SeongJae Park authored
Update DAMON usage document for the newly added address range type DAMOS filter.

Link: https://lkml.kernel.org/r/20230802214312.110532-8-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update DAMON ABI document for address ranges type DAMOS filter files.

Link: https://lkml.kernel.org/r/20230802214312.110532-7-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update the DAMON design document's DAMOS filters section for address range DAMOS filters. Because address range filters are handled by the core layer, and this makes a difference in schemes' tried regions and schemes' statistics, describe it clearly.

Link: https://lkml.kernel.org/r/20230802214312.110532-6-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Add a selftest for checking existence of addr_{start,end} files under DAMOS filter directory, and 'addr' damos filter type input of DAMON sysfs interface.

Link: https://lkml.kernel.org/r/20230802214312.110532-5-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Implement a kunit test for the core of the address range DAMOS filter handling, namely __damos_filter_out(). The test especially focuses on regions that overlap a given filter's target address range.

Link: https://lkml.kernel.org/r/20230802214312.110532-4-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Extend DAMON sysfs interface to support address range based DAMOS filters, by adding a special keyword for the filter/<N>/type file, namely 'addr', and two files under filter/<N>/ for specifying the start and the end addresses of the range, namely 'addr_start' and 'addr_end'.

Link: https://lkml.kernel.org/r/20230802214312.110532-3-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Patch series "Extend DAMOS filters for address ranges and DAMON monitoring targets".

There are use cases that need to apply DAMOS schemes to specific address ranges or DAMON monitoring targets. NUMA nodes in the physical address space, special memory objects in the virtual address space, and monitoring-target-specific efficient monitoring results snapshot retrieval could be examples of such use cases. This patchset extends the DAMOS filters feature for such cases, by implementing two more filter types, namely address ranges and DAMON monitoring targets.

Patches sequence
----------------

The first seven patches are for the address ranges based DAMOS filter. The first patch implements the filter feature and exposes it via the DAMON kernel API. The second patch further exposes the feature to users via the DAMON sysfs interface. The third and fourth patches implement unit tests and selftests for the feature. Three patches (fifth to seventh) updating the documents follow.

The following six patches are for the DAMON monitoring target based DAMOS filter. The eighth patch implements the feature in the core layer and exposes it via DAMON's kernel API. The ninth patch further exposes it to users via the DAMON sysfs interface. The tenth patch adds a selftest, and two patches (eleventh and twelfth) update documents.

[1] https://lore.kernel.org/damon/20230728203444.70703-1-sj@kernel.org/

This patch (of 13):

Users can know the special characteristics of specific address ranges. NUMA nodes, or special objects or buffers in the virtual address space, could be such examples. For such cases, DAMOS schemes could be required to be applied only to specific address ranges. Implement yet another type of DAMOS filter for the purpose (see the sketch below).

Note that the existing filter types, namely the anon pages and memcg DAMOS filters, needed page-level type checks. Because such checks can be done efficiently in the operations set layer, those filters are handled there; specifically, only the paddr operations set implementation supports these filters. Also, because statistics counting is done in the DAMON core layer, regions that are filtered out by these filters are counted as tried but failed in the statistics. Unlike those, address range based filters can be efficiently handled in the core layer. Hence, do the handling in that layer, and count the regions filtered out by them as not tried by the scheme. This difference should be clearly documented.

Link: https://lkml.kernel.org/r/20230802214312.110532-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20230802214312.110532-2-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
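A sketch of the core-layer address check, assuming struct damon_region's 'ar' range and an addr_range field in struct damos_filter; a full implementation also has to deal with regions that only partially overlap the range (e.g., by splitting them):

    static bool damos_filter_match_addr(struct damon_region *r,
                                        struct damos_filter *filter)
    {
            /* true iff the region overlaps the filter's address range */
            return r->ar.start < filter->addr_range.end &&
                   filter->addr_range.start < r->ar.end;
    }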
-
SeongJae Park authored
Update the DAMON usage document for newly added schemes/.../tried_regions/total_bytes file and the update_schemes_tried_bytes command.

Link: https://lkml.kernel.org/r/20230802213222.109841-6-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update the DAMON ABI document for newly added schemes/.../tried_regions/total_bytes file and the update_schemes_tried_bytes command.

Link: https://lkml.kernel.org/r/20230802213222.109841-5-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Update sysfs.sh DAMON selftest for checking existence of 'total_bytes' file under the 'tried_regions' directory of DAMON sysfs interface.

Link: https://lkml.kernel.org/r/20230802213222.109841-4-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Using the tried_regions/total_bytes file, users can efficiently retrieve the total size of memory regions having a specific access pattern. However, the DAMON sysfs interface in the kernel still populates all the information in the tried_regions subdirectories, so the kernel-side overhead for the construction of the tried regions directories still exists. To remove the overhead, implement yet another command input for the 'state' DAMON sysfs file. Writing the input to the file makes the DAMON sysfs interface update only the total_bytes file.

Link: https://lkml.kernel.org/r/20230802213222.109841-3-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
SeongJae Park authored
Patch series "mm/damon/sysfs-schemes: implement DAMOS tried total bytes file".

The tried_regions directory of the DAMON sysfs interface is useful for retrieving monitoring results snapshots or for DAMOS debugging. However, for the common use case of monitoring only the total size of the scheme-tried regions (e.g., monitoring working set size), the kernel overhead for directory construction and the user overhead for reading the content could be high if the number of monitoring regions is not small. This patchset implements DAMON sysfs files for efficient support of the use case: the first patch implements the sysfs file to reduce the user space overhead, and the second patch implements a command for reducing the kernel space overhead. The third patch adds a selftest for the new file, and the following two patches update documents.

[1] https://lore.kernel.org/damon/20230728201817.70602-1-sj@kernel.org/

This patch (of 5):

The tried_regions directory can be used for retrieving the monitoring results snapshot for regions of a specific access pattern, by setting the scheme's action to 'stat' and the access pattern as required. While the interface provides every detail of the monitoring results, some use cases, including working set size monitoring, require only the total size of the regions. For such cases, users would have to read all the information and calculate the total size of the regions, which could incur high overhead if the number of regions is high. Add a file for retrieving only that information, namely the 'total_bytes' file (see the sketch below). It allows users to get the total size by reading only that file.

Link: https://lkml.kernel.org/r/20230802213222.109841-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20230802213222.109841-2-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
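A one-line sketch of the kernel-side accumulation, assuming a total_bytes counter on the sysfs object for a scheme's tried regions directory:

    /* while walking the regions the scheme was applied to: */
    sysfs_regions->total_bytes += r->ar.end - r->ar.start;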
-
Kalesh Singh authored
walk->can_swap might be invalid, since it is not guaranteed to be initialized for the particular lruvec. Instead, deduce it from the folio type (anon/file), as in the sketch below.

Link: https://lkml.kernel.org/r/20230802025606.346758-3-kaleshsingh@google.com
Fixes: 018ee47f ("mm: multi-gen LRU: exploit locality in rmap")
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> [mediatek]
Tested-by: Charan Teja Kalla <quic_charante@quicinc.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Brian Geffon <bgeffon@google.com>
Cc: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Oleksandr Natalenko <oleksandr@natalenko.name>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Steven Barrett <steven@liquorix.net>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
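A sketch of the fix (folio_is_file_lru() is the real helper; the surrounding context in mm/vmscan.c is assumed):

    /*
     * Derive swappability from the folio itself rather than from the
     * possibly-uninitialized walk->can_swap:
     */
    bool can_swap = !folio_is_file_lru(folio);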
-