Commit 205d76ff authored by Marc Zyngier

KVM: Remove kvm_is_transparent_hugepage() and PageTransCompoundMap()

Now that arm64 has stopped using kvm_is_transparent_hugepage(),
we can remove it, as well as PageTransCompoundMap() which was
only used by the former.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210726153552.1535838-5-maz@kernel.org
parent f2cc3273
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -632,43 +632,6 @@ static inline int PageTransCompound(struct page *page)
 	return PageCompound(page);
 }
 
-/*
- * PageTransCompoundMap is the same as PageTransCompound, but it also
- * guarantees the primary MMU has the entire compound page mapped
- * through pmd_trans_huge, which in turn guarantees the secondary MMUs
- * can also map the entire compound page. This allows the secondary
- * MMUs to call get_user_pages() only once for each compound page and
- * to immediately map the entire compound page with a single secondary
- * MMU fault. If there will be a pmd split later, the secondary MMUs
- * will get an update through the MMU notifier invalidation through
- * split_huge_pmd().
- *
- * Unlike PageTransCompound, this is safe to be called only while
- * split_huge_pmd() cannot run from under us, like if protected by the
- * MMU notifier, otherwise it may result in page->_mapcount check false
- * positives.
- *
- * We have to treat page cache THP differently since every subpage of it
- * would get _mapcount inc'ed once it is PMD mapped. But, it may be PTE
- * mapped in the current process so comparing subpage's _mapcount to
- * compound_mapcount to filter out PTE mapped case.
- */
-static inline int PageTransCompoundMap(struct page *page)
-{
-	struct page *head;
-
-	if (!PageTransCompound(page))
-		return 0;
-
-	if (PageAnon(page))
-		return atomic_read(&page->_mapcount) < 0;
-
-	head = compound_head(page);
-	/* File THP is PMD mapped and not PTE mapped */
-	return atomic_read(&page->_mapcount) ==
-	       atomic_read(compound_mapcount_ptr(head));
-}
-
 /*
  * PageTransTail returns true for both transparent huge pages
  * and hugetlbfs pages, so it should only be called when it's known
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -189,16 +189,6 @@ bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
 	return true;
 }
 
-bool kvm_is_transparent_hugepage(kvm_pfn_t pfn)
-{
-	struct page *page = pfn_to_page(pfn);
-
-	if (!PageTransCompoundMap(page))
-		return false;
-
-	return is_transparent_hugepage(compound_head(page));
-}
-
 /*
  * Switches to specified vcpu, until a matching vcpu_put()
  */
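For context, the commit message notes that arm64 had already stopped using
kvm_is_transparent_hugepage(). Below is a minimal sketch of the pattern the
former arm64 caller followed, reconstructed for illustration only and not
part of this patch: the function name thp_adjust_sketch() is hypothetical,
PMD_MASK, PTRS_PER_PMD and PMD_SIZE are the usual kernel macros, and the
real logic lived in arm64's transparent_hugepage_adjust().

/*
 * Sketch: if the faulting pfn sits in a THP that the primary MMU maps
 * with a huge PMD, align the IPA and pfn down to the 2MiB block so the
 * stage-2 fault can be satisfied with a single block mapping.
 */
static unsigned long thp_adjust_sketch(kvm_pfn_t *pfnp, phys_addr_t *ipap)
{
	if (kvm_is_transparent_hugepage(*pfnp)) {
		*ipap &= PMD_MASK;		/* align the IPA ...     */
		*pfnp &= ~(PTRS_PER_PMD - 1);	/* ... and the host pfn  */
		return PMD_SIZE;		/* map a 2MiB block      */
	}

	return PAGE_SIZE;			/* fall back to a page   */
}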