Commit a13d0964 authored by David Hildenbrand, committed by Andrew Morton

mm/rmap: remove page_try_dup_anon_rmap()

All users are gone; remove page_try_dup_anon_rmap() and any remaining
traces.

Link: https://lkml.kernel.org/r/20231220224504.646757-38-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 08e7795e
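
For context: the removed helper was a thin page-level wrapper that picked between the folio-level duplication variants. The sketch below reconstructs that dispatch from the removed lines in the diff; the function name is illustrative, and only folio_try_dup_anon_rmap_pte()/_pmd() are real kernel interfaces here.

/*
 * Illustrative sketch (not a kernel API): what callers of the removed
 * page_try_dup_anon_rmap() now open-code during fork() by calling the
 * folio-level variants directly.
 */
static inline int try_dup_anon_rmap_sketch(struct page *page, bool compound,
		struct vm_area_struct *vma)
{
	struct folio *folio = page_folio(page);

	if (likely(!compound))
		/* Mapped by a PTE: duplicate a single PTE mapping. */
		return folio_try_dup_anon_rmap_pte(folio, page, vma);
	/* Mapped by a PMD (a THP): duplicate the PMD mapping. */
	return folio_try_dup_anon_rmap_pmd(folio, page, vma);
}

Both variants return 0 on success and an error if the folio may be pinned, in which case the fork() path has to copy the page instead of sharing it.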
@@ -253,7 +253,7 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
 void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 
-/* See page_try_dup_anon_rmap() */
+/* See folio_try_dup_anon_rmap_*() */
 static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
 		struct vm_area_struct *vma)
 {
@@ -478,16 +478,6 @@ static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
 #endif
 }
 
-static inline int page_try_dup_anon_rmap(struct page *page, bool compound,
-		struct vm_area_struct *vma)
-{
-	struct folio *folio = page_folio(page);
-
-	if (likely(!compound))
-		return folio_try_dup_anon_rmap_pte(folio, page, vma);
-	return folio_try_dup_anon_rmap_pmd(folio, page, vma);
-}
-
 /**
  * page_try_share_anon_rmap - try marking an exclusive anonymous page possibly
  *			      shared to prepare for KSM or temporary unmapping
@@ -496,8 +486,8 @@ static inline int page_try_dup_anon_rmap(struct page *page, bool compound,
  * The caller needs to hold the PT lock and has to have the page table entry
  * cleared/invalidated.
  *
- * This is similar to page_try_dup_anon_rmap(), however, not used during fork()
- * to duplicate a mapping, but instead to prepare for KSM or temporarily
+ * This is similar to folio_try_dup_anon_rmap_*(), however, not used during
+ * fork() to duplicate a mapping, but instead to prepare for KSM or temporarily
  * unmapping a page (swap, migration) via folio_remove_rmap_*().
  *
  * Marking the page shared can only fail if the page may be pinned; device
...
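
The last hunk touches only the documentation of the companion helper, page_try_share_anon_rmap(). Below is a hedged sketch of the calling pattern that documentation describes, loosely modeled on the migration path; everything except page_try_share_anon_rmap(), PageAnonExclusive(), and the pte helpers is an illustrative name, not taken from this commit.

/*
 * Hedged sketch, not code from this commit: temporarily unmapping an
 * exclusive anonymous page (swap/migration). The caller holds the PT
 * lock and clears the entry before trying to mark the page shared.
 */
static bool sketch_start_unmap(struct vm_area_struct *vma,
		struct mm_struct *mm, unsigned long address, pte_t *ptep,
		struct page *page)
{
	pte_t pteval = ptep_clear_flush(vma, address, ptep);

	if (PageAnonExclusive(page) && page_try_share_anon_rmap(page)) {
		/* The page may be pinned: restore the PTE and give up. */
		set_pte_at(mm, address, ptep, pteval);
		return false;
	}
	/* Safe to install a swap/migration entry now. */
	return true;
}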