Commit b4a83900 authored by Paul Mackerras, committed by Alexander Graf

KVM: PPC: Book3S HV: Fix KSM memory corruption

Testing with KSM active in the host showed occasional corruption of
guest memory.  Typically a page that should have contained zeroes
would contain values that look like the contents of a user process
stack (values such as 0x0000_3fff_xxxx_xxxx).

Code inspection in kvmppc_h_protect revealed that there was a race
condition with the possibility of granting write access to a page
which is read-only in the host page tables.  The code attempts to keep
the host mapping read-only if the host userspace PTE is read-only, but
if that PTE had been temporarily made invalid for any reason, the
read-only check would not trigger and the host HPTE could end up
read-write.  Examination of the guest HPT in the failure situation
revealed that there were indeed shared pages which should have been
read-only that were mapped read-write.

To close this race, we don't let a page go from being read-only to
being read-write, as far as the real HPTE mapping the page is
concerned (the guest view can go to read-write, but the actual mapping
stays read-only).  When the guest tries to write to the page, we take
an HDSI and let kvmppc_book3s_hv_page_fault take care of providing a
writable HPTE for the page.
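
The heart of the fix, as it appears in the hunk below, is the new permission
check in kvmppc_h_protect:

    /* never upgrade the real HPTE from read-only to read-write here */
    pte = be64_to_cpu(hpte[1]);
    r = (pte & ~mask) | bits;
    if (hpte_is_writable(r) && kvm->arch.using_mmu_notifiers &&
        !hpte_is_writable(pte))
            r = hpte_make_readonly(r);

A later guest store to the page then takes an HDSI, and
kvmppc_book3s_hv_page_fault provides a writable HPTE for the page.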

This eliminates the occasional corruption of shared pages
that was previously seen with KSM active.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
parent dee6f24c
@@ -667,40 +667,30 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
                 rev->guest_rpte = r;
                 note_hpte_modification(kvm, rev);
         }
-        r = (be64_to_cpu(hpte[1]) & ~mask) | bits;
 
         /* Update HPTE */
         if (v & HPTE_V_VALID) {
-                rb = compute_tlbie_rb(v, r, pte_index);
-                hpte[0] = cpu_to_be64(v & ~HPTE_V_VALID);
-                do_tlbies(kvm, &rb, 1, global_invalidates(kvm, flags), true);
                 /*
-                 * If the host has this page as readonly but the guest
-                 * wants to make it read/write, reduce the permissions.
-                 * Checking the host permissions involves finding the
-                 * memslot and then the Linux PTE for the page.
+                 * If the page is valid, don't let it transition from
+                 * readonly to writable.  If it should be writable, we'll
+                 * take a trap and let the page fault code sort it out.
                  */
-                if (hpte_is_writable(r) && kvm->arch.using_mmu_notifiers) {
-                        unsigned long psize, gfn, hva;
-                        struct kvm_memory_slot *memslot;
-                        pgd_t *pgdir = vcpu->arch.pgdir;
-                        pte_t pte;
-
-                        psize = hpte_page_size(v, r);
-                        gfn = ((r & HPTE_R_RPN) & ~(psize - 1)) >> PAGE_SHIFT;
-                        memslot = __gfn_to_memslot(kvm_memslots_raw(kvm), gfn);
-                        if (memslot) {
-                                hva = __gfn_to_hva_memslot(memslot, gfn);
-                                pte = lookup_linux_pte_and_update(pgdir, hva,
-                                                                  1, &psize);
-                                if (pte_present(pte) && !pte_write(pte))
-                                        r = hpte_make_readonly(r);
-                        }
+                pte = be64_to_cpu(hpte[1]);
+                r = (pte & ~mask) | bits;
+                if (hpte_is_writable(r) && kvm->arch.using_mmu_notifiers &&
+                    !hpte_is_writable(pte))
+                        r = hpte_make_readonly(r);
+                /* If the PTE is changing, invalidate it first */
+                if (r != pte) {
+                        rb = compute_tlbie_rb(v, r, pte_index);
+                        hpte[0] = cpu_to_be64((v & ~HPTE_V_VALID) |
+                                              HPTE_V_ABSENT);
+                        do_tlbies(kvm, &rb, 1, global_invalidates(kvm, flags),
+                                  true);
+                        hpte[1] = cpu_to_be64(r);
                 }
         }
-        hpte[1] = cpu_to_be64(r);
-        eieio();
-        hpte[0] = cpu_to_be64(v & ~HPTE_V_HVLOCK);
+        unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
         asm volatile("ptesync" : : : "memory");
         return H_SUCCESS;
 }