1. 06 Nov, 2015 2 commits
    • KVM: PPC: Book3S HV: Don't dynamically split core when already split · f74f2e2e
      Paul Mackerras authored
      In static micro-threading modes, the dynamic micro-threading code
      is supposed to be disabled, because subcores can't make independent
      decisions about what micro-threading mode to put the core in - there is
      only one micro-threading mode for the whole core.  The code that
      implements dynamic micro-threading checks for this, except that the
      check was missed in one case.  This means that it is possible for a
      subcore in static 2-way micro-threading mode to try to put the core
      into 4-way micro-threading mode, which usually leads to stuck CPUs,
      spinlock lockups, and other stalls in the host.
      
      The problem was in the can_split_piggybacked_subcores() function, which
      should always return false if the system is in a static micro-threading
      mode.  This fixes the problem by making can_split_piggybacked_subcores()
      use subcore_config_ok() for its checks, as subcore_config_ok() includes
      the necessary check for the static micro-threading modes.
      
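      The shape of the fix, as a minimal sketch (subcore_config_ok() and
      can_split_piggybacked_subcores() are named above; the field name and
      the remaining checks are illustrative):

        static bool can_split_piggybacked_subcores(struct core_info *cip)
        {
                /*
                 * Splitting to 4-way micro-threading is only allowed if the
                 * core as a whole may change mode; subcore_config_ok()
                 * already refuses this in the static micro-threading modes,
                 * so route the decision through it.
                 */
                if (!subcore_config_ok(cip->n_subcores + 1, 2))
                        return false;

                /* ... remaining dynamic-split checks elided ... */
                return true;
        }
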
      Credit to Gautham Shenoy for working out that the reason for the hangs
      and stalls we were seeing was that we were trying to do dynamic 4-way
      micro-threading while we were in static 2-way mode.
      
      Fixes: b4deba5c
      Cc: stable@vger.kernel.org # v4.3
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • KVM: PPC: Book3S HV: Synthesize segment fault if SLB lookup fails · cf29b215
      Paul Mackerras authored
      When handling a hypervisor data or instruction storage interrupt (HDSI
      or HISI), we look up the SLB entry for the address being accessed in
      order to translate the effective address to a virtual address which can
      be looked up in the guest HPT.  This lookup can occasionally fail due
      to the guest replacing an SLB entry without invalidating the evicted
      SLB entry.  In this situation an ERAT (effective to real address
      translation cache) entry can persist and be used by the hardware even
      though there is no longer a corresponding SLB entry.
      
      Previously we would just deliver a data or instruction storage interrupt
      (DSI or ISI) to the guest in this case.  However, this is not correct
      and has been observed to cause guests to crash, typically with a
      data storage protection interrupt on a store to the vmemmap area.
      
      Instead, what we do now is to synthesize a data or instruction segment
      interrupt.  That should cause the guest to reload an appropriate entry
      into the SLB and retry the faulting instruction.  If it still faults,
      we should find an appropriate SLB entry next time and be able to handle
      the fault.
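
      At the C level, the new flow looks roughly like this (the actual change
      is in the HV interrupt handling path; the lookup helper and the is_data
      flag are illustrative, the interrupt vector names are the usual Book3S
      ones):

        slb = kvmppc_find_slbe(vcpu, eaddr);    /* illustrative helper */
        if (!slb) {
                /*
                 * No SLB entry: the translation must have come from a stale
                 * ERAT entry.  Reflect a segment interrupt so the guest
                 * reloads its SLB and retries the faulting instruction.
                 */
                kvmppc_inject_interrupt(vcpu, is_data ?
                                        BOOK3S_INTERRUPT_DATA_SEGMENT :
                                        BOOK3S_INTERRUPT_INST_SEGMENT, 0);
                return RESUME_GUEST;
        }
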
      Tested-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  2. 05 Nov, 2015 1 commit
    • KVM: VMX: Fix commit which broke PML · a3eaa864
      Kai Huang authored
      I found PML has been broken since the commit below:
      
              commit feda805f
              Author: Xiao Guangrong <guangrong.xiao@linux.intel.com>
              Date:   Wed Sep 9 14:05:55 2015 +0800

                  KVM: VMX: unify SECONDARY_VM_EXEC_CONTROL update

                  Unify the update in vmx_cpuid_update()

                  Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
                  [Rewrite to use vmcs_set_secondary_exec_control. - Paolo]
                  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      
      The reason is that, in the above commit, vmx_cpuid_update calls
      vmx_secondary_exec_control, which currently clears the
      SECONDARY_EXEC_ENABLE_PML bit unconditionally (PML is enabled when the
      vcpu is created).  Therefore, if vmx_cpuid_update is called after the
      vcpu is created, PML is disabled unexpectedly while the log-dirty code
      still thinks PML is in use.
      
      Fix this by clearing SECONDARY_EXEC_ENABLE_PML in vmx_secondary_exec_control
      only when PML is not supported or not enabled (!enable_pml).  This is more
      reasonable, as PML is currently either always enabled or always disabled.
      With this change, explicitly updating SECONDARY_EXEC_ENABLE_PML in
      vmx_enable{disable}_pml is no longer needed, so also rename
      vmx_enable{disable}_pml to vmx_create{destroy}_pml_buffer.
      
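      Per the description above, the resulting logic is roughly (a sketch,
      with the other conditional bits elided):

        static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
        {
                u32 exec_control = vmcs_config.cpu_based_2nd_exec_ctrl;

                /* ... other conditional feature bits elided ... */

                /*
                 * Previously this bit was cleared unconditionally, undoing
                 * the PML setup done at vcpu creation.  Now it is cleared
                 * only when PML is not supported or not enabled.
                 */
                if (!enable_pml)
                        exec_control &= ~SECONDARY_EXEC_ENABLE_PML;

                return exec_control;
        }
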
      Fixes: feda805f
      Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
      [While at it, change a wrong ASSERT to an "if".  The condition can happen
       if creating the VCPU fails with ENOMEM. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  3. 04 Nov, 2015 16 commits
  4. 02 Nov, 2015 2 commits
  5. 29 Oct, 2015 3 commits
    • KVM: s390: use simple switch statement as multiplexer · 46b708ea
      Christian Borntraeger authored
      We currently do some magic shifting (by exploiting that exit codes
      are always a multiple of 4) and a table lookup to jump into the
      exit handlers. This causes some calculations and checks, just to
      do a potentially expensive function call.
      
      Changing that to a switch statement gives the compiler the chance
      to inline and dynamically decide between jump tables or inline
      compare and branches. In addition it makes the code more readable.
      
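      The multiplexer then looks roughly like this (handler names are taken
      from the bloat-o-meter output below; the intercept-code constants are
      illustrative):

        int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu)
        {
                switch (vcpu->arch.sie_block->icptcode) {
                case ICPT_INST:                 /* illustrative constants */
                        return handle_instruction(vcpu);
                case ICPT_PROGI:
                        return handle_prog(vcpu);
                case ICPT_EXTINT:
                        return handle_external_interrupt(vcpu);
                case ICPT_VALIDITY:
                        return handle_validity(vcpu);
                case ICPT_STOP:
                        return handle_stop(vcpu);
                case ICPT_PARTEXEC:
                        return handle_partial_execution(vcpu);
                default:
                        return -EOPNOTSUPP;
                }
        }
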
      bloat-o-meter gives me a small reduction in code size:
      
      add/remove: 0/7 grow/shrink: 1/1 up/down: 986/-1334 (-348)
      function                                     old     new   delta
      kvm_handle_sie_intercept                      72    1058    +986
      handle_prog                                  704     696      -8
      handle_noop                                   54       -     -54
      handle_partial_execution                      60       -     -60
      intercept_funcs                              120       -    -120
      handle_instruction                           198       -    -198
      handle_validity                              210       -    -210
      handle_stop                                  316       -    -316
      handle_external_interrupt                    368       -    -368
      
      Right now my gcc does conditional branches instead of jump tables.
      The inlining seems to give us enough cycles as some micro-benchmarking
      shows minimal improvements, though still within the noise.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    • KVM: s390: drop useless newline in debugging data · 58c383c6
      Christian Borntraeger authored
      The s390 debug feature does not need newlines; in fact they result in
      empty lines.  Get rid of 4 leftovers.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    • KVM: s390: SCA must not cross page boundaries · c5c2c393
      David Hildenbrand authored
      It seems we missed a few corner cases in commit f6c137ff
      ("KVM: s390: randomize sca address").
      
      The SCA has a maximum size of 2112 bytes.  With some unlucky values of
      sca_offset, it crosses the page boundary:
      
      0x7c0 (1984) -> Fits exactly
      0x7d0 (2000) -> 16 bytes out
      0x7e0 (2016) -> 32 bytes out
      0x7f0 (2032) -> 48 bytes out
      
      One VCPU entry is 32 bytes long.
      
      For the last two cases, we actually write data to the other page.
      1. The address of the VCPU.
      2. Injection/delivery/clearing of SIGP external calls via SIGP IF.
      
      Especially case 2 happens regularly.  So this could produce two problems:
      1. The guest losing/getting external calls.
      2. Random memory overwrites in the host.
      
      So this problem hits every 127th and 128th created VM with 64 VCPUs.
      
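      The fix this implies in the SCA allocation path is, as a sketch, to
      keep the randomized offset from pushing the SCA across the page:

        /* 0x10-aligned offsets, but never let the 2112-byte SCA cross
         * the page boundary */
        sca_offset += 16;
        if (sca_offset + sizeof(struct sca_block) > PAGE_SIZE)
                sca_offset = 0;
        kvm->arch.sca = (struct sca_block *)
                        ((char *)kvm->arch.sca + sca_offset);
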
      Cc: stable@vger.kernel.org # v3.15+
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  6. 22 Oct, 2015 16 commits
    • KVM: arm: Do not indent the arguments of DECLARE_BITMAP · 5fdf876d
      Michal Marek authored
      Besides being a coding style issue, it confuses make tags:
      
      ctags: Warning: include/kvm/arm_vgic.h:307: null expansion of name pattern "\1"
      ctags: Warning: include/kvm/arm_vgic.h:308: null expansion of name pattern "\1"
      ctags: Warning: include/kvm/arm_vgic.h:309: null expansion of name pattern "\1"
      ctags: Warning: include/kvm/arm_vgic.h:317: null expansion of name pattern "\1"
      
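      For illustration (the field name is made up, not the exact one in
      arm_vgic.h), the problematic pattern and its fix:

        /* Before: the indented argument trips ctags' macro pattern: */
        DECLARE_BITMAP(        irq_pending, VGIC_NR_IRQS);

        /* After: */
        DECLARE_BITMAP(irq_pending, VGIC_NR_IRQS);
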
      Cc: kvmarm@lists.cs.columbia.edu
      Signed-off-by: Michal Marek <mmarek@suse.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm64: kvm: restore EL1N SP for panic · db85c55f
      Mark Rutland authored
      If we panic in hyp mode, we inject a call to panic() into the EL1N host
      kernel. If a guest context is active, we first attempt to restore the
      minimal amount of state necessary to execute the host kernel with
      restore_sysregs.
      
      However, the SP is restored as part of restore_common_regs, and so we
      may return to the host's panic() function with the SP of the guest. Any
      calculations based on the SP will be bogus, and any attempt to access
      the stack will result in recursive data aborts.
      
      When running Linux as a guest, the guest's EL1N SP is likely to be some
      valid kernel address.  In this case, the host kernel may use that region
      as a stack for panic(), corrupting it in the process.
      
      Avoid the problem by restoring the host SP prior to returning to the
      host. To prevent misleading backtraces in the host, the FP is zeroed at
      the same time. We don't need any of the other "common" registers in
      order to panic successfully.
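
      Conceptually (the real change is in the hyp-mode assembly; this C
      rendering and the helper name are hypothetical):

        static void prepare_host_panic(struct kvm_cpu_context *host_ctxt,
                                       unsigned long host_sp)
        {
                /* hypothetical helper mirroring the assembly-level fix */
                host_ctxt->gp_regs.regs.sp = host_sp;   /* restore host SP */
                host_ctxt->gp_regs.regs.regs[29] = 0;   /* zero FP (x29) */
        }
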
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: <kvmarm@lists.cs.columbia.edu>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Add tracepoints for vgic and timer · e21f0910
      Christoffer Dall authored
      The VGIC and timer code for KVM arm/arm64 doesn't have any tracepoints
      or tracepoint infrastructure defined.  Rewriting some of the timer code
      handling showed me how much we need this, so let's add these simple
      tracepoints once and for all; we can easily expand with additional
      tracepoints in these files as we go along.
      
      Cc: Wei Huang <wei@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Improve kvm_exit tracepoint · b5905dc1
      Christoffer Dall authored
      The ARM architecture only saves the exit class to the HSR (ESR_EL2 for
      arm64) on synchronous exceptions, not on asynchronous exceptions like an
      IRQ.  However, we only report the exception class on kvm_exit, which is
      confusing because an IRQ looks like it exited at some PC with the same
      reason as the previous exit.  Add a lookup table for the exception index
      and prepend the kvm_exit tracepoint text with the exception type to
      clarify this situation.
      
      Also resolve the exception class (EC) to a human-friendly text version
      so the trace output becomes immediately usable for debugging this code.
      
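      A sketch of the lookup-table idea (the entries shown are illustrative,
      not the full set, and the tracepoint arguments are abbreviated):

        static const char *const kvm_exception_type[] = {
                [ARM_EXCEPTION_IRQ]     = "IRQ",
                [ARM_EXCEPTION_TRAP]    = "TRAP",
        };

        /* at the exit path: */
        trace_kvm_exit(kvm_exception_type[exception_index],
                       kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
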
      Cc: Wei Huang <wei@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • KVM: arm/arm64: Fix vGIC documentation · 952105ab
      Pavel Fedin authored
      Correct some old mistakes in the API documentation:
      
      1. A VCPU is identified by its index (used with the kvm_get_vcpu()
         function), but "cpu id" can be mistaken for an affinity ID.
      2. Some error codes are wrong.
      
        [ Slightly tweaked some grammar and did some s/CPU index/vcpu_index/
          in the descriptions.  -Christoffer ]
      Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • KVM: arm/arm64: implement kvm_arm_[halt,resume]_guest · 3b92830a
      Eric Auger authored
      We introduce the kvm_arm_halt_guest and kvm_arm_resume_guest functions,
      which will be used for IRQ forward state changes.
      
      Halt is synchronous and prevents the guest from being re-entered.
      We use the same mechanism put in place for the former PSCI pause,
      now renamed power_off.  A new flag, pause, is introduced in the arch
      vcpu state and is only meant to be used by these functions.
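
      A minimal sketch of the pair, given the mechanics described above
      (bodies are illustrative):

        void kvm_arm_halt_guest(struct kvm *kvm)
        {
                int i;
                struct kvm_vcpu *vcpu;

                kvm_for_each_vcpu(i, vcpu, kvm)
                        vcpu->arch.pause = true;
                force_vm_exit(cpu_all_mask);    /* synchronous: kick VCPUs out */
        }

        void kvm_arm_resume_guest(struct kvm *kvm)
        {
                int i;
                struct kvm_vcpu *vcpu;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        vcpu->arch.pause = false;
                        wake_up_interruptible(kvm_arch_vcpu_wq(vcpu));
                }
        }
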
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • KVM: arm/arm64: check power_off in critical section before VCPU run · 101d3da0
      Eric Auger authored
      If a PSCI vcpu-off call happens just after we executed the vcpu_sleep
      check, we can enter the guest although power_off is set.  Let's check
      the power_off state in the critical section, just before entering the
      guest.
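
      In the run loop, the idea is roughly (abbreviated sketch):

        local_irq_disable();
        /*
         * Re-check the abort conditions with interrupts disabled; a PSCI
         * vcpu-off call that raced with the vcpu_sleep check is caught here.
         */
        if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
            vcpu->arch.power_off) {
                local_irq_enable();
                continue;
        }
        /* ... enter the guest ... */
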
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Reported-by: Christoffer Dall <christoffer.dall@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • KVM: arm/arm64: check power_off in kvm_arch_vcpu_runnable · 4f5f1dc0
      Eric Auger authored
      kvm_arch_vcpu_runnable now also checks whether the power_off
      flag is set.
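
      That makes the check look roughly like this sketch:

        int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
        {
                /* runnable only if an interrupt is pending and not powered off */
                return ((!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
                        && !v->arch.power_off);
        }
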
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • KVM: arm/arm64: rename pause into power_off · 3781528e
      Eric Auger authored
      The kvm_vcpu_arch pause field is renamed to power_off to prepare for
      the introduction of a new pause field.  Also, vcpu_pause is renamed to
      vcpu_sleep, since we will sleep until both power_off and pause are
      false.
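
      Once both flags exist, the renamed helper reads roughly as follows
      (a sketch; the pause flag arrives in a follow-up patch):

        static void vcpu_sleep(struct kvm_vcpu *vcpu)
        {
                wait_queue_head_t *wq = kvm_arch_vcpu_wq(vcpu);

                /* sleep until neither power_off nor pause is set */
                wait_event_interruptible(*wq, (!vcpu->arch.power_off) &&
                                              (!vcpu->arch.pause));
        }
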
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Enable vhost device selection under KVM config menu · 75755c6d
      Wei Huang authored
      vhost drivers provide guest VMs with better I/O performance and lower
      CPU utilization. This patch allows users to select vhost devices under
      KVM configuration menu on ARM. This makes vhost support on arm/arm64
      on a par with other architectures (e.g. x86, ppc).
      Signed-off-by: Wei Huang <wei@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Support edge-triggered forwarded interrupts · 8fe2f19e
      Christoffer Dall authored
      We mark edge-triggered interrupts with the HW bit set as queued to
      prevent the VGIC code from injecting LRs with both the Active and
      Pending bits set at the same time while also setting the HW bit,
      because the hardware does not support this.
      
      However, this means that we must also clear the queued flag when we sync
      back an LR where the state on the physical distributor went from active
      to inactive because the guest deactivated the interrupt.  At this point
      we must also check whether the interrupt is pending on the distributor,
      and tell the VGIC to queue it again if it is.
      
      Since these actions on the sync path are extremely close to those for
      level-triggered interrupts, rename process_level_irq to
      process_queued_irq, allowing it to cater for both cases.
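
      A sketch of the extra sync-path handling described above (condensed;
      helper names follow the surrounding vgic code):

        /* on sync: the guest deactivated a forwarded edge-triggered IRQ */
        if (!phys_active) {
                vgic_irq_clear_queued(vcpu, irq);
                /* if it fired again in the meantime, queue it once more */
                if (vgic_dist_irq_is_pending(vcpu, irq))
                        vgic_cpu_irq_set(vcpu, irq);
        }
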
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Rework the arch timer to use level-triggered semantics · 4b4b4512
      Christoffer Dall authored
      The arch timer currently uses edge-triggered semantics in the sense that
      the line is never sampled by the vgic and lowering the line from the
      timer to the vgic doesn't have any effect on the pending state of
      virtual interrupts in the vgic.  This means that we do not support a
      guest with the otherwise valid behavior of (1) disable interrupts (2)
      enable the timer (3) disable the timer (4) enable interrupts.  Such a
      guest would validly not expect to see any interrupts on real hardware,
      but will see interrupts on KVM.
      
      This patch fixes this shortcoming through the following series of
      changes.
      
      First, we change the flow of the timer/vgic sync/flush operations.  Now
      the timer is always flushed/synced before the vgic, because the vgic
      samples the state of the timer output.  This has the implication that we
      move the timer operations into non-preemptible sections, but that is
      fine after the previous commit getting rid of hrtimer schedules on every
      entry/exit.
      
      Second, we change the internal behavior of the timer, letting the timer
      keep track of its previous output state, and only lower/raise the line
      to the vgic when the state changes.  Note that in theory this could have
      been accomplished more simply by signalling the vgic every time the
      state *potentially* changed, but we don't want to be hitting the vgic
      more often than necessary.
      
      Third, we get rid of the use of the map->active field in the vgic and
      instead simply set the interrupt as active on the physical distributor
      whenever the input to the GIC is asserted and conversely clear the
      physical active state when the input to the GIC is deasserted.
      
      Fourth, and finally, we now initialize the timer PPIs (and all the other
      unused PPIs for now), to be level-triggered, and modify the sync code to
      sample the line state on HW sync and re-inject a new interrupt if it is
      still pending at that time.
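
      The second change above boils down to something like this sketch:

        static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
        {
                struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

                /*
                 * Only signal the vgic when the computed output level
                 * actually changes, not on every potential change.
                 */
                if (kvm_timer_should_fire(vcpu) != timer->irq.level)
                        kvm_timer_update_irq(vcpu, !timer->irq.level);
        }
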
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Add forwarded physical interrupts documentation · 4cf1bc4c
      Christoffer Dall authored
      Forwarded physical interrupts on arm/arm64 are a tricky concept, and
      the way we deal with them is not easy to understand just by reading the
      various specs.
      
      Therefore, add a proper documentation file explaining the flow and
      rationale of the behavior of the vgic.
      
      Some of this text was contributed by Marc Zyngier and edited by me.
      Omissions and errors are all mine.
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Use appropriate define in VGIC reset code · 54723bb3
      Christoffer Dall authored
      We currently initialize the SGIs to be enabled in the VGIC code, but we
      use the VGIC_NR_PPIS define for this purpose, instead of the more
      natural VGIC_NR_SGIS.  Change this slightly confusing use of the
      defines.
      
      Note: This should have no functional change, as both names are defined
      to the number 16.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: Implement GICD_ICFGR as RO for PPIs · 8bf9a701
      Christoffer Dall authored
      In the GICD_ICFGR, the bits for the SGIs and PPIs are read-only.  We
      currently simulate this behavior by writing a hardcoded value to the
      register for the SGIs and PPIs on every write of these bits (ignoring
      what the guest actually wrote), and by writing the same value as the
      reset value to the register.
      
      This is a bit counter-intuitive, as the register is RO for these bits,
      and we can just implement it that way, allowing us to control the value
      of the bits purely in the reset code.
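
      In the config-register handler, this amounts to something like the
      following sketch (GICD_ICFGR has 2 bits per interrupt, so the first
      8 bytes cover the 32 private interrupts; the handler shape is
      illustrative):

        /* SGI/PPI configuration bits are RO: discard guest writes and let
         * the reset code establish the value once. */
        if (offset < 8 && mmio->is_write)
                return false;
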
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • arm/arm64: KVM: vgic: Factor out level irq processing on guest exit · 9103617d
      Christoffer Dall authored
      Currently vgic_process_maintenance() handles a completed
      level-triggered interrupt directly, but we are soon going to reuse this
      logic for level-triggered mapped interrupts with the HW bit set, so
      move this logic into a separate static function.
      
      Probably the scariest part of this commit is convincing yourself that
      the new flow is as safe as the old one.  In the following I try to list
      the changes and why they are harmless:
      
        Move vgic_irq_clear_queued after kvm_notify_acked_irq:
          Harmless, because the only potential effect of clearing the queued
          flag w.r.t. kvm_set_irq is that vgic_update_irq_pending does not set
          the pending bit on the emulated CPU interface or in the
          pending_on_cpu bitmask if the function is called with level=1.
          However, the point of kvm_notify_acked_irq is to call kvm_set_irq
          with level=0, and we set the queued flag again in
          __kvm_vgic_sync_hwstate later on if the level is still high.
      
        Move vgic_set_lr before kvm_notify_acked_irq:
          Also harmless, because the LRs are CPU-local operations and
          kvm_notify_acked_irq only affects the distributor.
      
        Move vgic_dist_irq_clear_soft_pend after kvm_notify_acked_irq:
          Also harmless, because now we check the level state in the
          clear_soft_pend function and lower the pending bits if the level is
          low.
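
      Putting the list together, the factored-out helper ends up shaped
      roughly like this (condensed sketch; names per the text above):

        static bool process_level_irq(struct kvm_vcpu *vcpu, int lr,
                                      struct vgic_lr vlr)
        {
                bool level_pending = false;

                vlr.state = 0;
                vgic_set_lr(vcpu, lr, vlr);     /* LRs are CPU-local */

                kvm_notify_acked_irq(vcpu->kvm, 0,
                                     vlr.irq - VGIC_NR_PRIVATE_IRQS);

                vgic_irq_clear_queued(vcpu, vlr.irq);
                if (vgic_dist_irq_get_level(vcpu, vlr.irq)) {
                        /* line still high: leave the interrupt pending */
                        vgic_dist_irq_set_pending(vcpu, vlr.irq);
                        level_pending = true;
                } else {
                        vgic_dist_irq_clear_pending(vcpu, vlr.irq);
                        vgic_cpu_irq_clear(vcpu, vlr.irq);
                }
                return level_pending;
        }
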
      Reviewed-by: Eric Auger <eric.auger@linaro.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>