Commit db543216 authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86/mmu: Walk host page tables to find THP mappings

Explicitly walk the host page tables to identify THP mappings instead
of relying solely on the metadata in struct page.  This sets the stage
for using a common method of identifying huge mappings regardless of the
underlying implementation (HugeTLB vs THP vs DAX), and hopefully avoids
the pitfalls of relying on metadata to identify THP mappings, e.g. see
commit 169226f7 ("mm: thp: handle page cache THP correctly in
PageTransCompoundMap") and the need for KVM to explicitly check for a
THP compound page.  KVM will also naturally work with 1GB THP pages, if
they are ever supported.

Walking the tables for THP mappings is likely marginally slower than
querying metadata, but a future patch will reuse the walk to identify
HugeTLB mappings, at which point eliminating the existing VMA lookup for
HugeTLB will make this a net positive.

Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Barret Rhoden <brho@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 17eff019
@@ -3329,6 +3329,34 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
 	__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
+static int host_pfn_mapping_level(struct kvm_vcpu *vcpu, gfn_t gfn,
+				  kvm_pfn_t pfn)
+{
+	struct kvm_memory_slot *slot;
+	unsigned long hva;
+	pte_t *pte;
+	int level;
+
+	BUILD_BUG_ON(PT_PAGE_TABLE_LEVEL != (int)PG_LEVEL_4K ||
+		     PT_DIRECTORY_LEVEL != (int)PG_LEVEL_2M ||
+		     PT_PDPE_LEVEL != (int)PG_LEVEL_1G);
+
+	if (!PageCompound(pfn_to_page(pfn)))
+		return PT_PAGE_TABLE_LEVEL;
+
+	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, true);
+	if (!slot)
+		return PT_PAGE_TABLE_LEVEL;
+
+	hva = __gfn_to_hva_memslot(slot, gfn);
+
+	pte = lookup_address_in_mm(vcpu->kvm->mm, hva, &level);
+	if (unlikely(!pte))
+		return PT_PAGE_TABLE_LEVEL;
+
+	return level;
+}
+
 static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 					int max_level, kvm_pfn_t *pfnp,
 					int *levelp)
@@ -3344,10 +3372,11 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
 	    kvm_is_zone_device_pfn(pfn))
 		return;
 
-	if (!kvm_is_transparent_hugepage(pfn))
+	level = host_pfn_mapping_level(vcpu, gfn, pfn);
+	if (level == PT_PAGE_TABLE_LEVEL)
 		return;
 
-	level = PT_DIRECTORY_LEVEL;
+	level = min(level, max_level);
 
 	/*
 	 * mmu_notifier_retry() was successful and mmu_lock is held, so
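
Aside on the diff above (not part of the commit): the BUILD_BUG_ON is what allows host_pfn_mapping_level() to hand the host walk's level straight back to KVM, because KVM's PT_*_LEVEL constants and the x86 PG_LEVEL_* walk levels share the same numeric values. The minimal standalone sketch below illustrates that correspondence and how the result is then capped by max_level, mirroring the min() in transparent_hugepage_adjust(); cap_mapping_level() and main() are hypothetical helpers for illustration, and the constant values are reproduced from the kernel headers of this era.

/*
 * Sketch only: shows why the host page-table walk level can be used as a
 * KVM mapping level without translation, and how it is capped.
 */
#include <assert.h>
#include <stdio.h>

/* Host page-table walk levels, as in x86's enum pg_level. */
enum { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

/* KVM paging levels, as defined in arch/x86/kvm/mmu.h at the time. */
#define PT_PAGE_TABLE_LEVEL	1
#define PT_DIRECTORY_LEVEL	2
#define PT_PDPE_LEVEL		3

/* The patch's BUILD_BUG_ON asserts exactly this 1:1 correspondence. */
static_assert(PT_PAGE_TABLE_LEVEL == PG_LEVEL_4K, "4K levels must match");
static_assert(PT_DIRECTORY_LEVEL == PG_LEVEL_2M, "2M levels must match");
static_assert(PT_PDPE_LEVEL == PG_LEVEL_1G, "1G levels must match");

/*
 * Hypothetical helper mirroring how the walk result is consumed: never map
 * larger than either the host mapping or the caller-imposed max_level.
 */
static int cap_mapping_level(int host_level, int max_level)
{
	if (host_level == PT_PAGE_TABLE_LEVEL)
		return PT_PAGE_TABLE_LEVEL;
	return host_level < max_level ? host_level : max_level;
}

int main(void)
{
	/* Host maps 2M but the caller caps at 4K: result stays 4K (1). */
	printf("%d\n", cap_mapping_level(PG_LEVEL_2M, PT_PAGE_TABLE_LEVEL));
	/* Host maps 1G but the caller allows up to 2M: capped at 2M (2). */
	printf("%d\n", cap_mapping_level(PG_LEVEL_1G, PT_DIRECTORY_LEVEL));
	return 0;
}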