Commit 855149aa authored by Izik Eidus, committed by Avi Kivity

KVM: MMU: fix dirty bit setting when removing write permissions

When mmu_set_spte() decides whether the page a spte refers to should be released
as dirty or clean, it checks whether the shadow pte is writable. However, once
rmap_write_protect() has been called, shadow ptes that were writable may have
become read-only, so mmu_set_spte() would release those pages as clean even
though the guest may have written through them.

This patch fixes the issue by marking the page dirty inside
rmap_write_protect().
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
parent 69a9f69b
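
To make the failure mode concrete, here is a small stand-alone C model of the race. It is a sketch under simplified assumptions: SPTE_WRITABLE, struct model_page, write_protect() and release() are hypothetical stand-ins for the kernel's spte bits, struct page, rmap_write_protect() and the mmu_set_spte() release path, not the actual KVM code. Without recording dirtiness at write-protect time, a page whose spte was writable (and so may have been written by the guest) is later released as clean.

/*
 * Stand-alone model of the race fixed by this patch.
 * All names here are simplified stand-ins, not the kernel's code.
 */
#include <stdbool.h>
#include <stdio.h>

#define SPTE_WRITABLE (1ull << 1)	/* models the spte write-permission bit */

struct model_page {
	bool dirty;			/* models the struct page dirty state */
};

/*
 * Models rmap_write_protect(): drop the write bit; with the fix applied
 * it also records that the page may have been dirtied while writable
 * (SetPageDirty() in the real patch).
 */
static void write_protect(unsigned long long *spte, struct model_page *page,
			  bool apply_fix)
{
	if (*spte & SPTE_WRITABLE) {
		*spte &= ~SPTE_WRITABLE;
		if (apply_fix)
			page->dirty = true;
	}
}

/*
 * Models the release decision in mmu_set_spte(): only a still-writable
 * spte is treated as dirty, which is what loses the information once
 * the write bit has already been cleared.
 */
static void release(unsigned long long spte, struct model_page *page)
{
	if (spte & SPTE_WRITABLE)
		page->dirty = true;
	printf("released %s\n", page->dirty ? "dirty" : "clean");
}

int main(void)
{
	unsigned long long spte = SPTE_WRITABLE;
	struct model_page page = { .dirty = false };

	write_protect(&spte, &page, false);	/* buggy path: prints "clean" */
	release(spte, &page);

	spte = SPTE_WRITABLE;
	page.dirty = false;
	write_protect(&spte, &page, true);	/* fixed path: prints "dirty" */
	release(spte, &page);
	return 0;
}

In the patch itself, looking up one spte from the rmap chain is enough to find the struct page, since every spte for the gfn maps the same page.
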
@@ -626,6 +626,14 @@ static void rmap_write_protect(struct kvm *kvm, u64 gfn)
 		}
 		spte = rmap_next(kvm, rmapp, spte);
 	}
+	if (write_protected) {
+		struct page *page;
+
+		spte = rmap_next(kvm, rmapp, NULL);
+		page = pfn_to_page((*spte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT);
+		SetPageDirty(page);
+	}
+
 	/* check for huge page mappings */
 	rmapp = gfn_to_rmap(kvm, gfn, 1);
 	spte = rmap_next(kvm, rmapp, NULL);