    KVM: x86/mmu: Add explicit access mask for MMIO SPTEs · 4af77151
    Sean Christopherson authored
    When shadow paging is enabled, KVM tracks the allowed access type for
    MMIO SPTEs so that it can do a permission check on an MMIO GVA cache hit
    without having to walk the guest's page tables.  The tracking is done
    by retaining the WRITE and USER bits of the access when inserting the
    MMIO SPTE (read access is implicitly allowed), which allows the MMIO
    page fault handler to retrieve and cache the WRITE/USER bits from the
    SPTE.
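    As a rough illustration of that bookkeeping, the sketch below (a
    standalone toy, not the mmu.c code; MMIO_SPTE_MARKER and the helper
    names are made up for this example) shows how retaining only the
    WRITE/USER bits in the SPTE lets a later fault recover the cached
    access without another walk of the guest's page tables:

        #include <stdint.h>
        #include <stdio.h>

        /* x86 paging permission bits (architectural positions). */
        #define PT_WRITABLE_MASK  (1ull << 1)
        #define PT_USER_MASK      (1ull << 2)

        /* Made-up marker bit identifying an MMIO SPTE in this sketch. */
        #define MMIO_SPTE_MARKER  (1ull << 62)

        /* Build an MMIO SPTE, keeping only the WRITE/USER access bits. */
        static uint64_t make_mmio_spte(uint64_t gfn, uint64_t access)
        {
                uint64_t spte = MMIO_SPTE_MARKER | (gfn << 12);

                /* Read access is implicitly allowed, so only W/U are kept. */
                return spte | (access & (PT_WRITABLE_MASK | PT_USER_MASK));
        }

        /* On an MMIO page fault, recover the cached access from the SPTE. */
        static uint64_t get_mmio_spte_access(uint64_t spte)
        {
                return spte & (PT_WRITABLE_MASK | PT_USER_MASK);
        }

        int main(void)
        {
                uint64_t spte = make_mmio_spte(0x1234,
                                               PT_WRITABLE_MASK | PT_USER_MASK);

                printf("cached access = %#llx\n",
                       (unsigned long long)get_mmio_spte_access(spte));
                return 0;
        }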
    
    Unfortunately for EPT, the mask used to retain the WRITE/USER bits is
    hardcoded using the x86 paging versions of the bits.  This funkiness
    happens to work because KVM uses a completely different mask/value for
    MMIO SPTEs when EPT is enabled, and the EPT mask/value just happens to
    overlap exactly with the x86 WRITE/USER bits[*].
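    
    The overlap in [*] is purely a coincidence of bit positions: x86 paging
    puts WRITE in bit 1 and USER in bit 2, while EPT puts WRITE in bit 1
    and EXECUTE in bit 2.  A tiny self-contained check (illustrative
    constants, not KVM's definitions) makes the accident explicit:

        #include <assert.h>
        #include <stdint.h>

        /* x86 paging bits: R/W is bit 1, U/S is bit 2. */
        #define PT_WRITABLE_MASK     (1ull << 1)
        #define PT_USER_MASK         (1ull << 2)

        /* EPT bits: write is bit 1, execute is bit 2. */
        #define EPT_WRITABLE_MASK    (1ull << 1)
        #define EPT_EXECUTABLE_MASK  (1ull << 2)

        int main(void)
        {
                uint64_t ept_wx = EPT_WRITABLE_MASK | EPT_EXECUTABLE_MASK;

                /* Masking the EPT WX value with the x86 WRITE/USER bits is
                 * a no-op, which is why the hardcoded mask happens to work. */
                assert((ept_wx & (PT_WRITABLE_MASK | PT_USER_MASK)) == ept_wx);
                return 0;
        }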
    
    Explicitly define the access mask for MMIO SPTEs to accurately reflect
    that EPT does not want to incorporate any access bits into the SPTE, and
    so that KVM isn't subtly relying on EPT's WX bits always being set in
    MMIO SPTEs, e.g. attempting to use other bits for experimentation breaks
    horribly.
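    
    The shape of the change is roughly the following sketch (names such as
    mmio_access_mask and set_mmio_spte_access_mask are illustrative, not
    the actual kernel symbols): the retained-bits mask becomes an explicit,
    configurable value that the shadow-paging setup sets to WRITE|USER and
    the EPT setup sets to 0.

        #include <stdint.h>

        #define PT_WRITABLE_MASK  (1ull << 1)
        #define PT_USER_MASK      (1ull << 2)

        /* Explicit mask of access bits that may be folded into MMIO SPTEs. */
        static uint64_t mmio_access_mask;

        static void set_mmio_spte_access_mask(uint64_t access_mask)
        {
                mmio_access_mask = access_mask;
        }

        static uint64_t mmio_spte_access_bits(uint64_t access)
        {
                /* With EPT the mask is 0, so no access bits reach the SPTE. */
                return access & mmio_access_mask;
        }

        int main(void)
        {
                /* Shadow paging retains WRITE/USER; EPT retains nothing. */
                set_mmio_spte_access_mask(PT_WRITABLE_MASK | PT_USER_MASK);
                set_mmio_spte_access_mask(0);
                (void)mmio_spte_access_bits(PT_WRITABLE_MASK | PT_USER_MASK);
                return 0;
        }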
    
    Note, vcpu_match_mmio_gva() explicitly prevents matching GVA==0, and all
    TDP flows explicitly set mmio_gva to 0, i.e. zeroing vcpu->arch.access
    for EPT has no (known) functional impact.
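    
    For reference, the GVA==0 behavior noted above amounts to the cache
    lookup treating a zero mmio_gva as "nothing cached"; a minimal sketch
    of such a check (illustrative, not the real vcpu_match_mmio_gva()) is:

        #include <stdbool.h>
        #include <stdint.h>

        struct mmio_cache {
                uint64_t mmio_gva;      /* 0 means no cached translation */
                uint64_t access;        /* cached WRITE/USER bits */
        };

        /* A zero mmio_gva never matches, so TDP flows that store 0 (and a
         * zeroed access) can never yield a stale permission-check hit. */
        static bool match_mmio_gva(const struct mmio_cache *c, uint64_t gva)
        {
                return c->mmio_gva && (c->mmio_gva >> 12) == (gva >> 12);
        }

        int main(void)
        {
                struct mmio_cache c = { .mmio_gva = 0, .access = 0 };

                return match_mmio_gva(&c, 0) ? 1 : 0;
        }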
    
    [*] Using WX to generate EPT misconfigurations (equivalent to a
        reserved bit page fault) ensures KVM can employ its MMIO page
        fault tricks even on platforms without reserved address bits.
    
    Fixes: ce88decf ("KVM: MMU: mmio page fault support")
    Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>