Commit d859b161 authored by Sean Christopherson

KVM: x86/mmu: Detect if unprotect will do anything based on invalid_list

Explicitly query the list of to-be-zapped shadow pages when checking to
see if unprotecting a gfn for retry has succeeded, i.e. if KVM should
retry the faulting instruction.

Add a comment to explain why the list needs to be checked before zapping,
which is the primary motivation for this change.

No functional change intended.
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
Link: https://lore.kernel.org/r/20240831001538.336683-22-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
parent 6b3dcabc
@@ -2721,12 +2721,15 @@ bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		goto out;
 	}
 
-	r = false;
 	write_lock(&kvm->mmu_lock);
-	for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa)) {
-		r = true;
+	for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa))
 		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
-	}
+
+	/*
+	 * Snapshot the result before zapping, as zapping will remove all list
+	 * entries, i.e. checking the list later would yield a false negative.
+	 */
+	r = !list_empty(&invalid_list);
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 	write_unlock(&kvm->mmu_lock);