- 03 Feb, 2017 34 commits
-
-
James Hogan authored
When exiting from the guest, store the values of the CP0_BadInstr and CP0_BadInstrP registers if they exist, which contain the encodings of the instructions which caused the last synchronous exception. When the instruction is needed for emulation, kvm_get_badinstr() and kvm_get_badinstrp() are used instead of calling kvm_get_inst() directly, to decide whether to read the saved CP0_BadInstr/CP0_BadInstrP registers (if they exist), or read the instruction from memory (if not). The use of these registers should be more robust than using kvm_get_inst(), as it actually gives the instruction encoding seen by the hardware rather than relying on user accessors after the fact, which can be fooled by incoherent icache or a racing code modification. It will also work with VZ, where the guest virtual memory isn't directly accessible by the host with user accessors. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
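As a rough illustration of the fallback logic described above (a sketch only; the feature test and field names are assumptions, not the exact kernel code):

```c
/* Sketch: prefer the hardware-captured instruction word, fall back to
 * reading guest memory when the core has no CP0_BadInstr register. */
static int kvm_get_badinstr_sketch(u32 *opc, struct kvm_vcpu *vcpu, u32 *out)
{
	if (cpu_has_badinstr) {
		/* Value saved from CP0_BadInstr on guest exit. */
		*out = vcpu->arch.host_cp0_badinstr;
		return 0;
	}

	/* No CP0_BadInstr: read the instruction from guest memory. */
	return kvm_get_inst(opc, vcpu, out);
}
```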
-
James Hogan authored
Currently kvm_get_inst() returns KVM_INVALID_INST in the event of a fault reading the guest instruction. This has the rather arbitrary magic value 0xdeadbeef. This API isn't very robust, and in fact 0xdeadbeef is a valid MIPS64 instruction encoding, namely "ld t1,-16657(s5)". Therefore change the kvm_get_inst() API to return 0 or -EFAULT, and to return the instruction via a u32 *out argument. We can then drop the KVM_INVALID_INST definition entirely. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
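A hedged before/after sketch from a caller's point of view; emulate() is a stand-in for whatever emulation the caller performs:

```c
/* Before: failure signalled in-band with a magic instruction word. */
static int emulate_old(u32 *opc, struct kvm_vcpu *vcpu)
{
	u32 inst = kvm_get_inst(opc, vcpu);

	if (inst == KVM_INVALID_INST)	/* 0xdeadbeef == "ld t1,-16657(s5)" */
		return -EFAULT;
	return emulate(inst);
}

/* After: the error is out-of-band and the instruction comes back
 * through an out argument, so 0xdeadbeef is just another instruction. */
static int emulate_new(u32 *opc, struct kvm_vcpu *vcpu)
{
	u32 inst;
	int err = kvm_get_inst(opc, vcpu, &inst);

	if (err)
		return err;	/* -EFAULT */
	return emulate(inst);
}
```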
-
James Hogan authored
In order to make use of the CP0_BadInstr & CP0_BadInstrP registers we need to be a bit more careful not to treat code fetch faults as MMIO, lest we hit an UNPREDICTABLE register value when we try to emulate the MMIO load instruction but there was no valid instruction word available to the hardware. Add a kvm_is_ifetch_fault() helper to try to figure out whether a load fault was due to a code fetch, and prevent MMIO instruction emulation in that case. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
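One plausible shape for such a helper, assuming the saved fault state is reachable from the vcpu arch struct (field names here are illustrative):

```c
/* Sketch: a load fault is really a code fetch if the bad virtual
 * address is the address of the faulting instruction itself. */
static inline bool kvm_is_ifetch_fault_sketch(struct kvm_vcpu_arch *arch)
{
	unsigned long badvaddr = arch->host_cp0_badvaddr;
	unsigned long epc = arch->pc;

	/* In a branch delay slot the faulting instruction is at EPC + 4. */
	if (arch->host_cp0_cause & CAUSEF_BD)
		epc += 4;

	return badvaddr == epc;
}
```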
-
James Hogan authored
MIPS KVM uses its own variation of get_new_mmu_context() which takes an extra vcpu pointer (unused) and does exactly the same thing. Switch to just using get_new_mmu_context() directly and drop KVM's version of it as it doesn't really serve any purpose. The nearby declarations of kvm_mips_alloc_new_mmu_context(), kvm_mips_vcpu_load() and kvm_mips_vcpu_put() are also removed from kvm_host.h, as no definitions or users exist. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
When exceptions are injected into the MIPS KVM guest, the whole host TLB is flushed (except any entries in the guest KSeg0 range). This is certainly not mandated by the architecture when exceptions are taken (userland can't directly change TLB mappings anyway), and is a pretty heavyweight operation:
- There may be hundreds of TLB entries, especially when a 512 entry FTLB is present. These are walked, read, and conditionally invalidated, so the TLBINV feature can't be used either.
- It'll indiscriminately wipe out entries belonging to other memory spaces.
A simple ASID regeneration would be much faster to perform, although it'd wipe out the guest KSeg0 mappings too.
My suspicion is that this was simply to plaster over the fact that kvm_mips_host_tlb_inv() incorrectly invalidated TLB entries only in the ASID for guest usermode, and not the ASID for guest kernelmode. Now that the recent commit "KVM: MIPS/TLB: Flush host TLB entry in kernel ASID" fixes kvm_mips_host_tlb_inv() to flush TLB entries in the kernelmode ASID when the guest TLB changes, let's drop these calls and the otherwise unused kvm_mips_flush_host_tlb(). Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Now that KVM no longer uses wired entries, we can safely use local_flush_tlb_all() when we need to flush the entire TLB (at the start of a new ASID cycle). This doesn't flush wired entries, which allows other code to use them without KVM clobbering them all the time. It is also more up to date: it knows about the tlbinv architectural feature, flushes the micro TLB on cores where that is necessary (Loongson, I believe), and knows to stop the HTW while doing so. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Use protected_writeback_dcache_line() instead of flush_dcache_line(), and protected_flush_icache_line() instead of flush_icache_line(), so that CACHEE (the EVA variant) is used on EVA host kernels. Without this, guest floating point branch delay slot emulation via a trampoline on the user stack fails on EVA host kernels due to failure of the icache sync, resulting in the break instruction getting skipped and execution continuing from the stack. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Now that we have GVA page tables, use standard user accesses with page faults disabled to read & modify guest instructions. This should be more robust (than the rather dodgy method of accessing guest mapped segments by just directly addressing them) and will also work with Enhanced Virtual Addressing (EVA) host kernel configurations where dedicated instructions are needed for accessing user mode memory. For simplicity and speed we do this regardless of the guest segment the address resides in, rather than handling guest KSeg0 specially with kmap_atomic() as before. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Now that the commpage doesn't use wired TLB entries, the per-CPU vm_init() callback is the only work done by kvm_mips_init_vm_percpu(). The trap & emulate implementation doesn't actually need to do anything from vm_init(), and the future VZ implementation would be better served by a kvm_arch_hardware_enable callback anyway. Therefore drop the vm_init() callback entirely, allowing the kvm_mips_init_vm_percpu() function to also be dropped, along with the kvm_mips_instance atomic counter. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Now that we have GVA page tables and an optimised TLB refill handler in place, convert the handling of commpage faults from the guest kernel to fill the GVA page table and invalidate the TLB entry, rather than filling the wired TLB entry directly. For simplicity we no longer use a wired entry for the commpage (refill should be much cheaper with the fast-path handler anyway). Since we don't need to manipulate the TLB directly any longer, move the function from tlb.c to mmu.c. This puts it closer to the similar functions handling KSeg0 and TLB mapped page faults from the guest. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Now that we have GVA page tables and an optimised TLB refill handler in place, convert the handling of page faults in TLB mapped segment from the guest to fill a single GVA page table entry and invalidate the TLB entry, rather than filling a TLB entry pair directly. Also remove the now unused kvm_mips_get_{kernel,user}_asid() functions in mmu.c and kvm_mips_host_tlb_write() in tlb.c. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Now that we have GVA page tables and an optimised TLB refill handler in place, convert the handling of KSeg0 page faults from the guest to fill the GVA page tables and invalidate the TLB entry, rather than filling a TLB entry directly. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Implement invalidation of specific pairs of GVA page table entries in one or both of the GVA page tables. This is used when existing mappings are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due to the sharing of page tables in the host kernel range, we should be careful not to allow host pages to be invalidated. Add a helper kvm_mips_walk_pgd() which can be used when walking of either GPA (future patches) or GVA page tables is needed, optionally with allocation of page tables along the way when they don't exist. GPA page table walking will need to be protected by the kvm->mmu_lock, so we also add a small MMU page cache in each KVM VCPU, like that found for other architectures but smaller. This allows enough pages to be pre-allocated to handle a single fault without holding the lock, allowing the helper to run with the lock held without having to handle allocation failures. Using the same mechanism for GVA allows the same code to be used, and allows it to use the same cache of allocated pages if the GPA walk didn't need to allocate any new tables. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
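A simplified sketch of such a walker, assuming a pre-filled memory cache whose allocation helper cannot fail while the lock is held (helper names are illustrative, not the exact code):

```c
/* Walk the page table rooted at pgd and return the PTE for addr,
 * allocating intermediate tables from the cache if one is supplied,
 * otherwise returning NULL when a level is missing. */
static pte_t *walk_pgd_sketch(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
			      unsigned long addr)
{
	pud_t *pud;
	pmd_t *pmd;

	pgd += pgd_index(addr);
	pud = pud_offset(pgd, addr);
	if (pud_none(*pud)) {
		pmd_t *new_pmd;

		if (!cache)
			return NULL;
		new_pmd = mmu_memory_cache_alloc(cache);
		pmd_init((unsigned long)new_pmd,
			 (unsigned long)invalid_pte_table);
		pud_populate(NULL, pud, new_pmd);
	}

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd)) {
		pte_t *new_pte;

		if (!cache)
			return NULL;
		new_pte = mmu_memory_cache_alloc(cache);
		clear_page(new_pte);
		pmd_populate_kernel(NULL, pmd, new_pte);
	}

	return pte_offset_kernel(pmd, addr);
}
```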
-
James Hogan authored
Implement invalidation of large ranges of virtual addresses from GVA page tables in response to a guest ASID change (immediately for the guest kernel page table, lazily for the guest user page table). We iterate through a range of page tables invalidating entries and freeing fully invalidated tables. To minimise overhead the exact ranges invalidated depend on the flags argument to kvm_mips_flush_gva_pt(), which also allows it to be used in future KVM_CAP_SYNC_MMU patches in response to GPA changes, which, unlike guest TLB mapping changes, affect guest KSeg0 mappings. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Refactor kvm_mips_host_tlb_inv() to also be able to invalidate any matching TLB entry in the kernel ASID rather than assuming only the TLB entries in the user ASID can change. Two new bool user/kernel arguments allow the caller to indicate whether the mapping should affect each of the ASIDs for guest user/kernel mode.
- kvm_mips_invalidate_guest_tlb() (used by TLBWI/TLBWR emulation) can now invalidate any corresponding TLB entry in both the kernel ASID (guest kernel may have accessed any guest mapping), and the user ASID if the entry being replaced is in guest USeg (where guest user may also have accessed it).
- The tlbmod fault handler (and the KSeg0 / TLB mapped / commpage fault handlers in later patches) can now invalidate the corresponding TLB entry in whichever ASID is currently active, since only a single page table will have been updated anyway.
Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
kvm_mips_host_tlb_inv() uses the TLBP instruction to probe the host TLB for an entry matching the given guest virtual address, and determines whether a match was found based on whether CP0_Index > 0. This is technically incorrect as an index of 0 (with the high bit clear) is a perfectly valid TLB index. This is harmless at the moment due to the use of at least 1 wired TLB entry for the KVM commpage, however we will soon be ridding ourselves of that particular wired entry so lets fix the condition in case the entry needing invalidation does land at TLB index 0. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
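An illustrative sketch of the probe-and-invalidate path with the corrected condition (locking and EntryHi save/restore omitted):

```c
/* After TLBP, CP0_Index is negative (probe-fail bit set) when nothing
 * matched; any non-negative value, including 0, is a valid match. */
static void host_tlb_inv_sketch(unsigned long entryhi)
{
	int idx;

	write_c0_entryhi(entryhi);
	mtc0_tlbw_hazard();
	tlb_probe();
	tlb_probe_hazard();
	idx = read_c0_index();

	/* The old check was "idx > 0", wrongly skipping a match at 0. */
	if (idx >= 0) {
		write_c0_entryhi(UNIQUE_ENTRYHI(idx));
		write_c0_entrylo0(0);
		write_c0_entrylo1(0);
		mtc0_tlbw_hazard();
		tlb_write_indexed();
		tlbw_use_hazard();
	}
}
```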
-
James Hogan authored
Use functions from the general MIPS TLB exception vector generation code (tlbex.c) to construct a fast path TLB refill handler similar to the general one, but cut down and capable of preserving K0 and K1. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
tlbex.c uses the implementation-dependent $22 CP0 register group on NetLogic cores, with the help of the c0_kscratch() helper. Allow these registers to be allocated by the KVM entry code too instead of assuming KScratch registers are all in CP0 register $31, which will also allow pgd_reg to be handled since it is allocated that way. We also drop the masking of kscratch_mask with 0xfc, as it is redundant for the standard KScratch registers (Config4.KScrExist won't have the low 2 bits set anyway), and apparently not necessary for NetLogic. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
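For illustration, a hedged sketch of KScratch register group selection and allocation from the feature mask (the actual tlbex.c helpers differ in detail):

```c
/* NetLogic cores keep their scratch registers in the
 * implementation-dependent CP0 register group $22; everyone else uses
 * the architectural KScratch registers in $31. */
static int c0_kscratch_sketch(void)
{
	switch (current_cpu_type()) {
	case CPU_XLP:
		return 22;
	default:
		return 31;
	}
}

/* Hand out the lowest free select from the CPU's KScratch mask. */
static unsigned int kscratch_used_mask;

static int allocate_kscratch_sketch(void)
{
	unsigned int avail = cpu_data[0].kscratch_mask & ~kscratch_used_mask;
	int r = ffs(avail);

	if (!r)
		return -1;		/* none available */
	r--;				/* ffs() is 1-based */
	kscratch_used_mask |= BIT(r);
	return r;
}
```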
-
James Hogan authored
Activate the GVA page tables when in guest context. This will allow the normal Linux TLB refill handler to fill from it when guest memory is read, as well as preventing accidental reading from user memory. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Allocate GVA -> HPA page tables for guest kernel and guest user mode on each VCPU, to allow for fast path TLB refill handling to be added later. In the process kvm_arch_vcpu_init() needs updating to pass on any error from the vcpu_init() callback. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Wire up a vcpu uninit implementation callback. This will be used for the clean up of GVA->HPA page tables. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Set init_mm as the active_mm and update mm_cpumask(current->mm) to reflect that it isn't active when in guest context. This prevents cache management code from attempting cache flushes on host virtual addresses while in guest context, for example due to cache management IPIs, or later when writing of dynamically translated code hits copy-on-write. We do this using helpers in static kernel code to avoid having to export init_mm to modules. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
We only need the guest ASID loaded while in guest context, i.e. while running guest code and while handling guest exits. We load the guest ASID when entering the guest, however we restore the host ASID later than necessary, when the VCPU state is saved (i.e. vcpu_put()), or slightly earlier if preempted after returning to the host. This mismatch is unpleasant and causes redundant host ASID restores in kvm_trap_emul_vcpu_put(). Let's explicitly restore the host ASID when returning to the host, and not bother restoring the host ASID on context switch-in unless we're already in guest context. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Add implementation callbacks for entering the guest (vcpu_run()) and reentering the guest (vcpu_reenter()), allowing implementation specific operations to be performed before entering the guest or after returning to the host without cluttering kvm_arch_vcpu_ioctl_run(). This allows the T&E specific lazy user GVA flush to be moved into trap_emul.c, along with disabling of the HTW. We also move kvm_mips_deliver_interrupts() as VZ will need to restore the guest timer state prior to delivering interrupts. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
The kvm_vcpu_arch structure contains both mm_structs for allocating MMU contexts (primarily the ASID), but it also copies the resulting ASIDs into guest_{user,kernel}_asid[] arrays which are referenced from uasm generated code. This duplication doesn't seem to serve any purpose, and it gets in the way of generalising the ASID handling across guest kernel/user modes, so let's just extract the ASID straight out of the mm_struct on demand; in fact there are convenient cpu_context() and cpu_asid() macros for doing so. To reduce the verbosity of this code we also add kern_mm and user_mm local variables where the kernel and user mm_structs are used. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
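Roughly, the macros referred to above reduce to the following (simplified from asm/mmu_context.h), so the ASID really can be read straight out of the mm_struct:

```c
/* Simplified: the per-CPU context word lives in the mm_struct, and the
 * ASID is just its low bits. */
#define cpu_context(cpu, mm)	((mm)->context.asid[cpu])
#define cpu_asid(cpu, mm)	\
	(cpu_context(cpu, mm) & cpu_asid_mask(&cpu_data[cpu]))

/* e.g. loading the guest kernel ASID for the current CPU: */
static void load_guest_kernel_asid(struct kvm_vcpu *vcpu)
{
	struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;

	write_c0_entryhi(cpu_asid(smp_processor_id(), kern_mm));
}
```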
-
James Hogan authored
The MIPS KVM host and guest GVA ASIDs may need regenerating when scheduling a process in guest context, which is done from the kvm_arch_vcpu_load() / kvm_arch_vcpu_put() functions in mmu.c. However this is a fairly implementation specific detail. VZ for example may use GuestIDs instead of normal ASIDs to distinguish mappings belonging to different guests, and even on VZ without GuestID the root TLB will be used differently to trap & emulate. Trap & emulate GVA ASIDs only relate to the user part of the full address space, so can be left active during guest exit handling (guest context) to allow guest instructions to be easily read and translated. VZ root ASIDs however are for GPA mappings so can't be left active during normal kernel code. They also aren't useful for accessing guest virtual memory, and we should have CP0_BadInstr[P] registers available to provide encodings of trapping guest instructions anyway. Therefore move the ASID preemption handling into the implementation callback. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Convert the get_regs() and set_regs() callbacks to vcpu_load() and vcpu_put(), which provide a cpu argument and more closely match the kvm_arch_vcpu_load() / kvm_arch_vcpu_put() that they are called by. This is in preparation for moving ASID management into the implementations. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
KVM T&E uses an ASID for guest kernel mode and an ASID for guest user mode. The current ASID is saved when the guest is scheduled out, and restored when scheduling back in, with checks for whether the ASID needs to be regenerated. This isn't really necessary, as the ASID can be easily determined by the current guest mode, so let's simplify it to just read the required ASID from guest_kernel_asid or guest_user_asid even if the ASID hasn't been regenerated. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
MIPS incompletely implements the KVM_NMI ioctl to supposedly perform a CPU reset, but all it actually does is invalidate the ASIDs. It doesn't expose the KVM_CAP_USER_NMI capability which is supposed to indicate the presence of the KVM_NMI ioctl, and no user software actually uses it on MIPS. Since this is dead code that would technically need updating for GVA page table handling in upcoming patches, remove it now. If we wanted to implement NMI injection later it can always be done properly along with the KVM_CAP_USER_NMI capability, and if we wanted to implement a proper CPU reset it would be better done with a separate ioctl. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Merge in MIPS prerequisites from the GVA page tables and GPA page tables series. The same branch can also merge into the MIPS tree. Signed-off-by: James Hogan <james.hogan@imgtec.com>
-
James Hogan authored
The protected cache ops contain no out-of-line fixup code to return an error code in the event of a fault; the cache op is simply skipped in that case. For KVM, however, we'd like to detect this case: page faulting will be disabled, so it could happen during normal operation if the GVA page tables were flushed, and it needs to be handled by the caller. Add the out-of-line fixup code to load the error value -EFAULT into the return variable, and adapt the protected cache line functions to pass the error back to the caller. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
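A hedged sketch of how a caller can consume the new return value; the wrapper function here is made up for illustration:

```c
/* Write back the data line then invalidate the instruction line for a
 * guest virtual address; -EFAULT means the GVA mapping has gone and
 * the caller must handle the fault itself. */
static int kvm_sync_icache_line_sketch(unsigned long gva)
{
	int err;

	err = protected_writeback_dcache_line(gva);
	if (!err)
		err = protected_flush_icache_line(gva);

	return err;
}
```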
-
James Hogan authored
Export the TLB exception code generating functions so that KVM can construct a fast TLB refill handler for guest context without reinventing the wheel quite so much. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
James Hogan authored
Add include guards in asm/uasm.h to allow it to be safely used by a new header asm/tlbex.h in the next patch to expose TLB exception building functions for KVM to use. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
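The change itself is just the standard include-guard pattern (the guard macro name here is illustrative):

```c
/* asm/uasm.h, sketched: safe to include more than once. */
#ifndef __ASM_UASM_H
#define __ASM_UASM_H

/* ... existing uasm declarations ... */

#endif /* __ASM_UASM_H */
```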
-
James Hogan authored
Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to GPL kernel modules so that MIPS KVM can use the inline page table management functions and switch between page tables:
- pmd_init() will be used directly by KVM to initialise newly allocated pmd tables with invalid lower level table pointers.
- invalid_pmd_table is used by pud_present(), pud_none(), and pud_clear(), which KVM will use to test and clear pud entries.
- tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch to the appropriate GVA page tables.
Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
- 02 Feb, 2017 2 commits
-
-
James Hogan authored
pgd_alloc() references init_mm which is not exported to modules. In order for KVM to be able to use pgd_alloc() to allocate GVA page tables, move pgd_alloc() into a new pgtable.c file and export it to modules. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org
-
Markus Elfring authored
* Return directly after a failed copy_from_user() call in a case block.
* Delete the jump label "out", which this refactoring makes unnecessary.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: James Hogan <james.hogan@imgtec.com>
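An illustrative before/after of the shape of this refactoring; the handler and the do_interrupt() helper are simplified stand-ins, not the actual kvm_arch_vcpu_ioctl() code:

```c
/* Before: errors funnel through a shared "out" label. */
static long vcpu_ioctl_before(unsigned int ioctl, void __user *argp)
{
	struct kvm_mips_interrupt irq;
	long r;

	switch (ioctl) {
	case KVM_INTERRUPT:
		if (copy_from_user(&irq, argp, sizeof(irq))) {
			r = -EFAULT;
			goto out;
		}
		r = do_interrupt(&irq);
		break;
	default:
		r = -ENOIOCTLCMD;
	}
out:
	return r;
}

/* After: return directly after the failed copy_from_user(); the "out"
 * label is no longer needed. */
static long vcpu_ioctl_after(unsigned int ioctl, void __user *argp)
{
	struct kvm_mips_interrupt irq;

	switch (ioctl) {
	case KVM_INTERRUPT:
		if (copy_from_user(&irq, argp, sizeof(irq)))
			return -EFAULT;
		return do_interrupt(&irq);
	default:
		return -ENOIOCTLCMD;
	}
}
```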
-
- 20 Jan, 2017 1 commit
-
-
Jim Mattson authored
This reverts commit bc613494. A CPUID instruction executed in VMX non-root mode always causes a VM-exit, regardless of the leaf being queried. Fixes: bc613494 ("KVM: nested VMX: disable perf cpuid reporting") Signed-off-by: Jim Mattson <jmattson@google.com> [The issue solved by bc613494 has been resolved with ff651cb6 ("KVM: nVMX: Add nested msr load/restore algorithm").] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
- 17 Jan, 2017 2 commits
-
-
Piotr Luc authored
Vector population count instructions for dwords and qwords are to be used in future Intel Xeon & Xeon Phi processors. Bit 14 of CPUID[level:0x07, ECX] indicates that the new instructions are supported by a processor. The spec can be found in the Intel Software Developer Manual (SDM) or in the Instruction Set Extensions Programming Reference (ISE). Signed-off-by: Piotr Luc <piotr.luc@intel.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
-
- 16 Jan, 2017 1 commit
-
-
Piotr Luc authored
Vector population count instructions for dwords and qwords are going to be available in future Intel Xeon & Xeon Phi processors. Bit 14 of CPUID[level:0x07, ECX] indicates that the instructions are supported by a processor. The specification can be found in the Intel Software Developer Manual (SDM) and in the Instruction Set Extensions Programming Reference (ISE). Populate the feature bit and clear it when xsave is disabled. Signed-off-by: Piotr Luc <piotr.luc@intel.com> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: kvm@vger.kernel.org Cc: Radim Krčmář <rkrcmar@redhat.com> Link: http://lkml.kernel.org/r/20170110173403.6010-2-piotr.luc@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
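For reference, a minimal user-space check of the feature bit described above (bit 14 of CPUID.(EAX=07H,ECX=0):ECX); compile with GCC or Clang on x86:

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Make sure leaf 7 exists before querying it. */
	__cpuid(0, eax, ebx, ecx, edx);
	if (eax < 7)
		return 1;

	__cpuid_count(7, 0, eax, ebx, ecx, edx);
	printf("AVX512_VPOPCNTDQ: %s\n", (ecx & (1u << 14)) ? "yes" : "no");
	return 0;
}
```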
-