1. 14 Jan, 2018 4 commits
  2. 13 Jan, 2018 11 commits
    • firmware: arm_sdei: Add support for CPU and system power states · da351827
      James Morse authored
      When a CPU enters an idle lower-power state or is powering off, we
      need to mask SDE events so that no events can be delivered while we
      are messing with the MMU as the registered entry points won't be valid.
      
      If the system reboots, we want to unregister all events and mask the CPUs.
      For kexec this allows us to hand a clean slate to the next kernel
      instead of relying on it to call sdei_{private,system}_data_reset().
      
      For hibernate we unregister all events and re-register them on restore,
      in case we restored with the SDE code loaded at a different address.
      (e.g. KASLR).
      
      Add all the notifiers necessary to do this. We only support shared events
      so all events are left registered and enabled over CPU hotplug.
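
      A rough sketch of the CPU PM hook described above (the CPU_PM_* actions and
      cpu_pm notifier API are standard kernel interfaces; the exact body here is an
      approximation, not the driver's verbatim code):

      #include <linux/arm_sdei.h>
      #include <linux/cpu_pm.h>
      #include <linux/notifier.h>

      static int sdei_pm_notifier(struct notifier_block *nb,
                                  unsigned long action, void *data)
      {
              int rv;

              switch (action) {
              case CPU_PM_ENTER:
                      /* Registered entry points won't be valid while the MMU changes. */
                      rv = sdei_mask_local_cpu();
                      break;
              case CPU_PM_EXIT:
              case CPU_PM_ENTER_FAILED:
                      rv = sdei_unmask_local_cpu();
                      break;
              default:
                      return NOTIFY_DONE;
              }

              return rv ? notifier_from_errno(rv) : NOTIFY_OK;
      }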
      Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      [catalin.marinas@arm.com: added CPU_PM_ENTER_FAILED case]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kernel: Add arch-specific SDEI entry code and CPU masking · f5df2696
      James Morse authored
      The Software Delegated Exception Interface (SDEI) is an ARM standard
      for registering callbacks from the platform firmware into the OS.
      This is typically used to implement RAS notifications.
      
      Such notifications enter the kernel at the registered entry-point
      with the register values of the interrupted CPU context. Because this
      is not a CPU exception, it cannot reuse the existing entry code
      (crucially, we don't implicitly know which exception level we interrupted).
      
      Add the entry point to entry.S to set us up for calling into C code. If
      the event interrupted code that had interrupts masked, we always return
      to that location. Otherwise we pretend this was an IRQ, and use SDEI's
      complete_and_resume call to return to vbar_el1 + offset.
      
      This allows the kernel to deliver signals to user space processes. For
      KVM this triggers the world switch, a quick spin round vcpu_run, then
      back into the guest, unless there are pending signals.
      
      Add sdei_mask_local_cpu() calls to the smp_send_stop() code; this covers
      the panic() code-path, which doesn't invoke CPU hotplug notifiers.
      
      Because we can interrupt entry-from/exit-to another EL, we can't trust the
      value in sp_el0 or x29, even if we interrupted the kernel. In this case the
      code in entry.S will save/restore sp_el0 and use the value in
      __entry_task.
      
      When we have VMAP stacks we can interrupt the stack-overflow test, which
      stirs x0 into sp, meaning we need our own VMAP stacks. For now
      these are allocated when we probe the interface. Future patches will add
      refcounting hooks to allow the arch code to allocate them lazily.
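
      A conceptual C sketch of the return-address decision described above (the
      real logic is split between the C handler and entry.S; the helper name is
      hypothetical, the offsets are the architectural IRQ vector entries, and the
      AArch32 EL0 case is omitted for brevity):

      #include <asm/ptrace.h>
      #include <asm/sysreg.h>

      static unsigned long sdei_exit_pc(struct pt_regs *regs)
      {
              /* Interrupted code had IRQs masked: always return to where we were. */
              if (!interrupts_enabled(regs))
                      return regs->pc;

              /*
               * Otherwise pretend this was an IRQ: complete_and_resume to the
               * matching vbar_el1 vector, so pending signals and the KVM world
               * switch happen before the interrupted context resumes.
               */
              return read_sysreg(vbar_el1) + (user_mode(regs) ? 0x480 : 0x280);
      }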
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: uaccess: Add PAN helper · e1281f56
      James Morse authored
      Add __uaccess_{en,dis}able_hw_pan() helpers to set/clear the PSTATE.PAN
      bit.
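
      The helpers are thin wrappers around the PAN pstate-set instruction; roughly
      (a sketch assuming the existing ALTERNATIVE/SET_PSTATE_PAN machinery and the
      ARM64_HAS_PAN capability):

      #include <asm/alternative.h>
      #include <asm/sysreg.h>

      static inline void __uaccess_disable_hw_pan(void)
      {
              /* Clear PSTATE.PAN so the kernel can access user mappings. */
              asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
                              CONFIG_ARM64_PAN));
      }

      static inline void __uaccess_enable_hw_pan(void)
      {
              /* Set PSTATE.PAN again once the user access is done. */
              asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
                              CONFIG_ARM64_PAN));
      }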
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Add vmap_stack header file · ed8b20d4
      James Morse authored
      Today the arm64 arch code allocates an extra IRQ stack per-cpu. If we
      also have SDEI and VMAP stacks we need two extra per-cpu VMAP stacks.
      
      Move the VMAP stack allocation out to a helper in a new header file.
      This avoids missing THREADINFO_GFP, or getting the all-important alignment
      wrong.
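
      The helper ends up as a thin wrapper around __vmalloc_node_range();
      approximately (a sketch matching the constraints described above):

      #include <linux/thread_info.h>
      #include <linux/vmalloc.h>
      #include <asm/memory.h>

      static inline unsigned long *arch_alloc_vmap_stack(size_t stack_size, int node)
      {
              BUILD_BUG_ON(!IS_ENABLED(CONFIG_VMAP_STACK));

              /* THREADINFO_GFP and THREAD_ALIGN are the easy bits to forget. */
              return __vmalloc_node_range(stack_size, THREAD_ALIGN,
                                          VMALLOC_START, VMALLOC_END,
                                          THREADINFO_GFP, PAGE_KERNEL, 0, node,
                                          __builtin_return_address(0));
      }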
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • firmware: arm_sdei: Add driver for Software Delegated Exceptions · ad6eb31e
      James Morse authored
      The Software Delegated Exception Interface (SDEI) is an ARM standard
      for registering callbacks from the platform firmware into the OS.
      This is typically used to implement firmware notifications (such as
      firmware-first RAS) or to deliver an IRQ that has been promoted to a
      firmware-assisted NMI.
      
      Add the code for detecting the SDEI version and the framework for
      registering and unregistering events. Subsequent patches will add the
      arch-specific backend code and the necessary power management hooks.
      
      Only shared events are supported; power management, private events and
      discovery for ACPI systems will be added by later patches.
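
      Version detection boils down to one firmware call over the chosen conduit;
      roughly (a sketch: invoke_sdei_fn stands for the driver's SMC/HVC wrapper,
      the SDEI_* macros come from the new header, and the function name here is
      illustrative):

      static int sdei_probe_version(void)
      {
              u64 ver = 0;
              int err;

              err = invoke_sdei_fn(SDEI_1_0_FN_SDEI_VERSION, 0, 0, 0, 0, 0, &ver);
              if (err)
                      return err;

              pr_info("SDEIv%d.%d detected\n",
                      (int)SDEI_VERSION_MAJOR(ver), (int)SDEI_VERSION_MINOR(ver));

              /* Only the 1.x interface is understood by this driver. */
              if (SDEI_VERSION_MAJOR(ver) != 1)
                      return -EINVAL;

              return 0;
      }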
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • Docs: dt: add devicetree binding for describing arm64 SDEI firmware · 86f04f64
      James Morse authored
      The Software Delegated Exception Interface (SDEI) is an ARM standard
      for registering callbacks from the platform firmware into the OS.
      This is typically used to implement RAS notifications, or to handle an
      IRQ that has been promoted to a firmware-assisted NMI.
      
      Add a new devicetree binding to describe the SDE firmware interface.
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • KVM: arm64: Stop save/restoring host tpidr_el1 on VHE · 1f742679
      James Morse authored
      Now that a VHE host uses tpidr_el2 for the cpu offset we no longer
      need KVM to save/restore tpidr_el1. Move this from the 'common' code
      into the non-vhe code. While we're at it, on VHE we don't need to
      save the ELR or SPSR as kernel_entry in entry.S will have pushed these
      onto the kernel stack, and will restore them from there. Move these
      to the non-vhe code as we need them to get back to the host.
      
      Finally, remove the always-copy-tpidr we hid in the stage2 setup
      code; cpufeature's enable callback will do this for VHE, so we only
      need KVM to do it for non-vhe. Add the copy into kvm-init instead.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: alternatives: use tpidr_el2 on VHE hosts · 6d99b689
      James Morse authored
      Now that KVM uses tpidr_el2 in the same way as Linux's cpu_offset in
      tpidr_el1, merge the two. This spares KVM from saving/restoring tpidr_el1
      on VHE hosts, and allows future code to blindly access per-cpu variables
      without triggering world-switch.
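
      Concretely, the per-cpu offset accessors patch between the two registers
      with an alternative, along these lines (a sketch of the arch/arm64 percpu
      accessor, assuming the ARM64_HAS_VIRT_HOST_EXTN capability):

      #include <asm/alternative.h>
      #include <asm/cpucaps.h>

      static inline void set_my_cpu_offset(unsigned long off)
      {
              /* VHE hosts keep the per-cpu offset in tpidr_el2 instead. */
              asm volatile(ALTERNATIVE("msr tpidr_el1, %0",
                                       "msr tpidr_el2, %0",
                                       ARM64_HAS_VIRT_HOST_EXTN)
                           :: "r" (off) : "memory");
      }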
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • KVM: arm64: Change hyp_panic()s dependency on tpidr_el2 · c97e166e
      James Morse authored
      Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the
      host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and
      on VHE they can be the same register.
      
      KVM calls hyp_panic() when anything unexpected happens. This may occur
      while a guest owns the EL1 registers. KVM stashes the vcpu pointer in
      tpidr_el2, which it uses to find the host context in order to restore
      the host EL1 registers before parachuting into the host's panic().
      
      The host context is a struct kvm_cpu_context allocated in the per-cpu
      area, and mapped to hyp. Given the per-cpu offset for this CPU, this is
      easy to find. Change hyp_panic() to take a pointer to the
      struct kvm_cpu_context. Wrap these calls with an asm function that
      retrieves the struct kvm_cpu_context from the host's per-cpu area.
      
      Copy the per-cpu offset from the host's tpidr_el1 into tpidr_el2 during
      kvm init. (Later patches will make this unnecessary for VHE hosts.)
      
      We print out the vcpu pointer as part of the panic message. Add a back
      reference to the 'running vcpu' in the host cpu context to preserve this.
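
      In rough C terms the panic path ends up looking something like this (a
      sketch only: the asm wrapper that computes '&kvm_host_cpu_state + tpidr_el2'
      and passes it in x0 is not shown, and the helper and field names are
      approximations of the hyp code):

      void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt)
      {
              struct kvm_vcpu *vcpu = NULL;

              if (read_sysreg(vttbr_el2)) {
                      /*
                       * A guest owned the EL1 registers: use the back-reference
                       * stashed in the host context to find it, then restore the
                       * host EL1 state before calling the host's panic().
                       */
                      vcpu = host_ctxt->__hyp_running_vcpu;
                      __sysreg_restore_host_state(host_ctxt);
              }

              /* ... report vcpu/host state and parachute into panic() ... */
      }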
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • KVM: arm/arm64: Convert kvm_host_cpu_state to a static per-cpu allocation · 36989e7f
      James Morse authored
      kvm_host_cpu_state is a per-cpu allocation made from kvm_arch_init()
      used to store the host EL1 registers when KVM switches to a guest.
      
      Make it easier for ASM to generate pointers into this per-cpu memory
      by making it a static allocation.
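
      The change itself is essentially one declaration, moving from a runtime
      allocation to a static per-cpu variable (sketched here with the existing
      kvm_cpu_context_t typedef):

      /* Before: allocated (and mapped to hyp) from kvm_arch_init():
       *   static kvm_cpu_context_t __percpu *kvm_host_cpu_state;
       *   kvm_host_cpu_state = alloc_percpu(kvm_cpu_context_t);
       */

      /* After: a static per-cpu variable, so asm can form a pointer from the
       * symbol address plus the per-cpu offset without an extra load. */
      static DEFINE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);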
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • KVM: arm64: Store vcpu on the stack during __guest_enter() · 32b03d10
      James Morse authored
      KVM uses tpidr_el2 as its private vcpu register, which makes sense for
      non-vhe world switch as only KVM can access this register. This means
      vhe Linux has to use tpidr_el1, which KVM has to save/restore as part
      of the host context.
      
      If the SDEI handler code runs behind KVM's back, it mustn't access any
      per-cpu variables. To allow this on systems with vhe we need to make
      the host use tpidr_el2, saving KVM from saving/restoring it.
      
      __guest_enter() stores the host_ctxt on the stack, do the same with
      the vcpu.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 12 Jan, 2018 1 commit
  4. 08 Jan, 2018 13 commits
  5. 05 Jan, 2018 2 commits
    • arm64: v8.4: Support for new floating point multiplication instructions · 3b3b6810
      Dongjiu Geng authored
      ARMv8.4 extensions add new Neon instructions that multiply each FP16
      element of one vector by the corresponding FP16 element of a second
      vector, and add or subtract the result, without an intermediate
      rounding, to or from the corresponding FP32 element of a third vector.
      
      This patch detects this feature and lets userspace know about it via a
      HWCAP bit and MRS emulation.
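
      From userspace the new capability shows up in the auxiliary vector; a
      minimal check might look like this (HWCAP_ASIMDFHM is the bit added for
      this feature; the fallback define assumes older libc headers):

      #include <stdio.h>
      #include <sys/auxv.h>

      #ifndef HWCAP_ASIMDFHM
      #define HWCAP_ASIMDFHM  (1 << 23)   /* value from the kernel uapi header */
      #endif

      int main(void)
      {
              unsigned long hwcaps = getauxval(AT_HWCAP);

              printf("asimdfhm (FP16 multiply-accumulate to FP32): %s\n",
                     (hwcaps & HWCAP_ASIMDFHM) ? "supported" : "not supported");
              return 0;
      }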
      
      Cc: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: asid: Do not replace active_asids if already 0 · a8ffaaa0
      Catalin Marinas authored
      Under some uncommon timing conditions, a generation check and
      xchg(active_asids, A1) in check_and_switch_context() on P1 can race with
      an ASID roll-over on P2. If P2 has not seen the update to
      active_asids[P1], it can re-allocate A1 to a new task T2 on P2. P1 ends
      up waiting on the spinlock since the xchg() returned 0 while P2 can go
      through a second ASID roll-over with (T2,A1,G2) active on P2. This
      roll-over copies active_asids[P1] == A1,G1 into reserved_asids[P1] and
      active_asids[P2] == A1,G2 into reserved_asids[P2]. A subsequent
      scheduling of T1 on P1 and T2 on P2 would match reserved_asids and get
      their generation bumped to G3:
      
      P1					P2
      --                                      --
      TTBR0.BADDR = T0
      TTBR0.ASID = A0
      asid_generation = G1
      check_and_switch_context(T1,A1,G1)
        generation match
      					check_and_switch_context(T2,A0,G0)
       				          new_context()
      					    ASID roll-over
      					    asid_generation = G2
      					    flush_context()
      					      active_asids[P1] = 0
      					      asid_map[A1] = 0
      					      reserved_asids[P1] = A0,G0
        xchg(active_asids, A1)
          active_asids[P1] = A1,G1
          xchg returns 0
        spin_lock_irqsave()
      					    allocated ASID (T2,A1,G2)
      					    asid_map[A1] = 1
      					  active_asids[P2] = A1,G2
      					...
      					check_and_switch_context(T3,A0,G0)
      					  new_context()
      					    ASID roll-over
      					    asid_generation = G3
      					    flush_context()
      					      active_asids[P1] = 0
      					      asid_map[A1] = 1
      					      reserved_asids[P1] = A1,G1
      					      reserved_asids[P2] = A1,G2
      					    allocated ASID (T3,A2,G3)
      					    asid_map[A2] = 1
      					  active_asids[P2] = A2,G3
        new_context()
          check_update_reserved_asid(A1,G1)
            matches reserved_asid[P1]
            reserved_asid[P1] = A1,G3
        updated T1 ASID to (T1,A1,G3)
      					check_and_switch_context(T2,A1,G2)
      					  new_context()
      					    check_update_reserved_asid(A1,G2)
      					      matches reserved_asids[P2]
      					      reserved_asids[P2] = A1,G3
      					  updated T2 ASID to (T2,A1,G3)
      
      At this point, we have two tasks, T1 and T2, both using ASID A1 with the
      latest generation G3. Either of them can be scheduled on the
      other CPU, leading to two different tasks with the same ASID on the same
      CPU.
      
      This patch changes the xchg to a cmpxchg so that active_asids is only
      updated if it is non-zero, avoiding the race with an ASID roll-over on a
      different CPU.
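
      The resulting fast path in check_and_switch_context() has roughly the
      following shape (a sketch; variable names follow arch/arm64/mm/context.c):

              /* Only take the fast path if no concurrent roll-over has cleared
               * this CPU's active_asids slot and the generation still matches. */
              old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
              if (old_active_asid &&
                  !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
                  atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
                                           old_active_asid, asid))
                      goto switch_mm_fastpath;

              /* Otherwise fall back to the slow path under cpu_asid_lock,
               * where a new ASID may be allocated. */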
      
      The ASID allocation algorithm has been formally verified using the TLA+
      model checker (see
      https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/tree/asidalloc.tla
      for the spec).
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 02 Jan, 2018 9 commits