Commit c53e1708 authored by Marc Zyngier, committed by Sasha Levin

arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2

[ Upstream commit 8f36ebaf ]

When we fault in a page, we flush it to the PoC (Point of Coherency)
if the faulting vcpu has its own caches off, so that it can observe
the page we just brought in.

But if the vcpu has its caches on, we skip that step. Bad things
happen when *another* vcpu tries to access that page with its own
caches disabled. At that point, there is no guarantee that the
data has made it to the PoC, and we access stale data.

The obvious fix is to always flush to PoC when a page is faulted
in, no matter what the state of the vcpu is.

Cc: stable@vger.kernel.org
Fixes: 2d58b733 ("arm64: KVM: force cache clean on page fault when caches are off")
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
parent 75f37dab
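
For readers outside the kernel tree, here is a hedged, user-space sketch of the control-flow change the diff below makes. vcpu_has_cache_enabled(), ipa_uncached and kvm_flush_dcache_to_poc() stand in for the kernel helpers named in the patch; the booleans, the printf and the old/new wrappers are illustrative assumptions only, not kernel code:

/*
 * Sketch only: models the flush decision before and after this patch.
 * Real kernel helpers are stubbed out; only the control flow is real.
 */
#include <stdbool.h>
#include <stdio.h>

static bool faulting_vcpu_cache_enabled = true; /* vcpu A runs with caches on */
static bool ipa_uncached = false;

static void kvm_flush_dcache_to_poc(const char *tag)
{
	printf("%s: data cache cleaned to PoC\n", tag);
}

/* Before the patch: flush only when the *faulting* vcpu runs uncached. */
static void coherent_cache_guest_page_old(void)
{
	if (!faulting_vcpu_cache_enabled || ipa_uncached)
		kvm_flush_dcache_to_poc("old");
	/* else: the page may still sit dirty above the PoC, so another
	 * vcpu reading with its caches off can observe stale data. */
}

/* After the patch: flush unconditionally on every stage-2 fault. */
static void coherent_cache_guest_page_new(void)
{
	kvm_flush_dcache_to_poc("new");
}

int main(void)
{
	coherent_cache_guest_page_old(); /* prints nothing: vcpu A is cached */
	coherent_cache_guest_page_new(); /* always cleans to PoC */
	return 0;
}

The point of the unconditional clean is that whether a later reader sees fresh data must not depend on the cache state of whichever vcpu happened to take the fault.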
arch/arm/include/asm/kvm_mmu.h
@@ -204,18 +204,12 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
 	 * and iterate over the range.
 	 */
 
-	bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached;
-
 	VM_BUG_ON(size & ~PAGE_MASK);
 
-	if (!need_flush && !icache_is_pipt())
-		goto vipt_cache;
-
 	while (size) {
 		void *va = kmap_atomic_pfn(pfn);
 
-		if (need_flush)
-			kvm_flush_dcache_to_poc(va, PAGE_SIZE);
+		kvm_flush_dcache_to_poc(va, PAGE_SIZE);
 
 		if (icache_is_pipt())
 			__cpuc_coherent_user_range((unsigned long)va,
@@ -227,7 +221,6 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
 		kunmap_atomic(va);
 	}
 
-vipt_cache:
 	if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
 		/* any kind of VIPT cache */
 		__flush_icache_all();

arch/arm64/include/asm/kvm_mmu.h
@@ -236,8 +236,7 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu, pfn_t pfn,
 {
 	void *va = page_address(pfn_to_page(pfn));
 
-	if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
-		kvm_flush_dcache_to_poc(va, size);
+	kvm_flush_dcache_to_poc(va, size);
 
 	if (!icache_is_aliasing()) {	/* PIPT */
 		flush_icache_range((unsigned long)va,