Commit 7ae5840e authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86/mmu: Document that zapping invalidated roots doesn't need to flush

Remove the misleading flush "handling" when zapping invalidated TDP MMU
roots, and document that flushing is unnecessary for all flavors of MMUs
when zapping invalid/obsolete roots/pages.  The "handling" in the TDP MMU
is dead code, as zap_gfn_range() is called with shared=true, in which
case it will never return true due to the flushing being handled by
tdp_mmu_zap_spte_atomic().

No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Message-Id: <20220226001546.360188-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent db01416b
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5685,9 +5685,13 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 	}
 
 	/*
-	 * Trigger a remote TLB flush before freeing the page tables to ensure
-	 * KVM is not in the middle of a lockless shadow page table walk, which
-	 * may reference the pages.
+	 * Kick all vCPUs (via remote TLB flush) before freeing the page tables
+	 * to ensure KVM is not in the middle of a lockless shadow page table
+	 * walk, which may reference the pages.  The remote TLB flush itself is
+	 * not required and is simply a convenient way to kick vCPUs as needed.
+	 * KVM performs a local TLB flush when allocating a new root (see
+	 * kvm_mmu_load()), and the reload in the caller ensure no vCPUs are
+	 * running with an obsolete MMU.
 	 */
 	kvm_mmu_commit_zap_page(kvm, &kvm->arch.zapped_obsolete_pages);
 }
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -851,7 +851,6 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
 {
 	struct kvm_mmu_page *next_root;
 	struct kvm_mmu_page *root;
-	bool flush = false;
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
@@ -864,7 +863,16 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
 
 		rcu_read_unlock();
 
-		flush = zap_gfn_range(kvm, root, 0, -1ull, true, flush, true);
+		/*
+		 * A TLB flush is unnecessary, invalidated roots are guaranteed
+		 * to be unreachable by the guest (see kvm_tdp_mmu_put_root()
+		 * for more details), and unlike the legacy MMU, no vCPU kick
+		 * is needed to play nice with lockless shadow walks as the TDP
+		 * MMU protects its paging structures via RCU.  Note, zapping
+		 * will still flush on yield, but that's a minor performance
+		 * blip and not a functional issue.
+		 */
+		(void)zap_gfn_range(kvm, root, 0, -1ull, true, false, true);
 
 		/*
 		 * Put the reference acquired in
@@ -878,9 +886,6 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
 	}
 
 	rcu_read_unlock();
-
-	if (flush)
-		kvm_flush_remote_tlbs(kvm);
 }
 
 /*