1. 17 Jun, 2015 3 commits
    • arm64: compat: print compat_sp instead of sp · 4e2ee96a
      Vladimir Murzin authored
      We check against compat_sp, but print out arm64's sp - fix it.
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4e2ee96a
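      A minimal standalone sketch of the bug class this one-liner fixes (the struct and function below are made up for illustration, not the kernel code): a value is checked against the compat (AArch32) stack pointer, but the AArch64 sp is the one that gets printed.

      /* Illustrative mock only - field names mirror the commit message. */
      #include <stdio.h>
      #include <stdint.h>

      struct fake_regs {
              uint64_t sp;         /* AArch64 stack pointer */
              uint32_t compat_sp;  /* AArch32 (compat) stack pointer */
      };

      static void report_bad_sp(const struct fake_regs *regs, uint32_t limit)
      {
              if (regs->compat_sp < limit)                           /* checked: compat_sp */
                      printf("bad sp = 0x%08x\n", regs->compat_sp);  /* fix: print compat_sp, not sp */
      }

      int main(void)
      {
              struct fake_regs regs = { .sp = 0xffffffc000000000ull, .compat_sp = 0x1000 };
              report_bad_sp(&regs, 0x2000);
              return 0;
      }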
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Dave P Martin authored
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b9bcc919
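      The arithmetic is easier to see in a standalone sketch (the constants and page-frame numbers below are invented for illustration; this is not the kernel's free_unused_memmap()). With SPARSEMEM, the start value may be clamped down to a section boundary so the whole section's memmap survives; adding the block size to that clamped start then undershoots the real end of the block, and a later pass can free memmap that is still in use. Taking the end from the memblock's own base avoids this.

      #include <stdio.h>

      #define ALIGN_UP(x, a)      (((x) + (a) - 1) & ~((unsigned long)(a) - 1))
      #define PAGES_PER_SECTION   0x8000UL  /* example value, illustration only */
      #define MAX_ORDER_NR_PAGES  0x400UL   /* example value, illustration only */

      int main(void)
      {
              unsigned long prev_end = 0x7000UL;  /* rounded-up end of the previous memblock */
              unsigned long base_pfn = 0x9000UL;  /* this memblock is not section-aligned */
              unsigned long size_pfn = 0x4800UL;

              /* SPARSEMEM: keep memmap for the whole section, i.e. clamp start downwards */
              unsigned long start = base_pfn;
              unsigned long clamp = ALIGN_UP(prev_end, PAGES_PER_SECTION);
              if (clamp < start)
                      start = clamp;              /* start is now 0x8000, below base_pfn */

              unsigned long buggy_end = ALIGN_UP(start + size_pfn, MAX_ORDER_NR_PAGES);
              unsigned long fixed_end = ALIGN_UP(base_pfn + size_pfn, MAX_ORDER_NR_PAGES);

              printf("real block end: 0x%lx\n", base_pfn + size_pfn);  /* 0xd800 */
              printf("buggy end:      0x%lx\n", buggy_end);            /* 0xc800: memmap above this may be freed while in use */
              printf("fixed end:      0x%lx\n", fixed_end);            /* 0xd800 */
              return 0;
      }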
    • arm64: entry: fix context tracking for el0_sp_pc · 46b0567c
      Mark Rutland authored
      Commit 6c81fe79 ("arm64: enable context tracking") did not
      update el0_sp_pc to use ct_user_exit, but this appears to have been
      unintentional. In commit 6ab6463a ("arm64: adjust el0_sync so
      that a function can be called") we made x0 available, and in the return
      to userspace we call ct_user_enter in the kernel_exit macro.
      
      Due to this, we currently don't correctly inform RCU of the user->kernel
      transition, and may erroneously account for time spent in the kernel as
      if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING
      is enabled.
      
      As we do record the kernel->user transition, a userspace application
      making accesses from an unaligned stack pointer can demonstrate the
      imbalance, provoking the following warning:
      
      ------------[ cut here ]------------
      WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4()
      Modules linked in:
      CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8
      Hardware name: ARM Juno development board (r0) (DT)
      Call trace:
      [<ffffffc000089914>] dump_backtrace+0x0/0x124
      [<ffffffc000089a48>] show_stack+0x10/0x1c
      [<ffffffc0005b3cbc>] dump_stack+0x84/0xc8
      [<ffffffc0000b3214>] warn_slowpath_common+0x98/0xd0
      [<ffffffc0000b330c>] warn_slowpath_null+0x14/0x20
      [<ffffffc00013ada4>] context_tracking_enter+0xd4/0xe4
      [<ffffffc0005b534c>] preempt_schedule_irq+0xd4/0x114
      [<ffffffc00008561c>] el1_preempt+0x4/0x28
      [<ffffffc0001b8040>] exit_files+0x38/0x4c
      [<ffffffc0000b5b94>] do_exit+0x430/0x978
      [<ffffffc0000b614c>] do_group_exit+0x40/0xd4
      [<ffffffc0000c0208>] get_signal+0x23c/0x4f4
      [<ffffffc0000890b4>] do_signal+0x1ac/0x518
      [<ffffffc000089650>] do_notify_resume+0x5c/0x68
      ---[ end trace 963c192600337066 ]---
      
      This patch adds the missing ct_user_exit to the el0_sp_pc entry path,
      correcting the context tracking for this case.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Fixes: 6c81fe79 ("arm64: enable context tracking")
      Cc: <stable@vger.kernel.org> # v3.17+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      46b0567c
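      Conceptually, the problem is an unbalanced pair of context-tracking transitions: the exception entry from EL0 skipped the user->kernel notification, so the tracked state still says "user" while the CPU is running kernel code, and a later transition trips a sanity check. A standalone toy model of that invariant (the names and the check below are illustrative, not the real context_tracking code):

      #include <stdio.h>

      enum ctx_state { IN_KERNEL, IN_USER };
      static enum ctx_state tracked = IN_USER;

      static void ct_user_exit_sketch(void)   /* user -> kernel, on exception entry */
      {
              tracked = IN_KERNEL;
      }

      static void ct_user_enter_sketch(void)  /* kernel -> user, on return to userspace */
      {
              if (tracked != IN_KERNEL)
                      printf("WARNING: tracked state is stale (missed user exit)\n");
              tracked = IN_USER;
      }

      int main(void)
      {
              /* Buggy path: el0_sp_pc entry without the user-exit notification */
              /* ct_user_exit_sketch();      <-- the call this patch adds */
              ct_user_enter_sketch();        /* kernel_exit: trips the check */

              /* Fixed path: balanced transitions, no warning */
              ct_user_exit_sketch();
              ct_user_enter_sketch();
              return 0;
      }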
  2. 15 Jun, 2015 1 commit
  3. 12 Jun, 2015 4 commits
  4. 11 Jun, 2015 3 commits
  5. 08 Jun, 2015 1 commit
    • arm64: fix missing syscall trace exit · 04d7e098
      Josh Stone authored
      If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on
      the fast path.  It's then possible to have TIF_SYSCALL_TRACE added in
      the middle of the syscall, but ret_fast_syscall doesn't check this flag
      again.  This causes a ptrace syscall-exit-stop to be missed.
      
      For instance, from a PTRACE_EVENT_FORK reported during do_fork, the
      tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE.
      Now the completion of the fork should have a syscall-exit-stop.
      
      Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the
      fast exit path.  Do the same on arm64.
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Josh Stone <jistone@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      04d7e098
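      The control flow can be sketched in plain C (the names below are stand-ins for the entry.S labels and thread flags, not the actual implementation): the fast return path has to re-read the thread flags after the syscall body has run, because a tracer can set the trace flag mid-syscall.

      #include <stdio.h>

      #define _TIF_SYSCALL_WORK 0x1u   /* stand-in for the real flag mask */

      static unsigned int thread_flags;

      static void syscall_trace_exit(void) { printf("syscall-exit-stop reported\n"); }

      static long do_fork_like_syscall(void)
      {
              /* While stopped in PTRACE_EVENT_FORK, the tracer resumes us with
               * PTRACE_SYSCALL, which sets the trace flag in mid-syscall. */
              thread_flags |= _TIF_SYSCALL_WORK;
              return 0;
      }

      static void ret_fast_syscall(void)
      {
              /* The fix: re-check the work flags here instead of returning blindly. */
              if (thread_flags & _TIF_SYSCALL_WORK)
                      syscall_trace_exit();   /* take the slow path */
              /* ...restore registers and return to userspace... */
      }

      int main(void)
      {
              do_fork_like_syscall();   /* entered on the fast path: no trace flag set yet */
              ret_fast_syscall();       /* without the re-check, the exit stop is missed */
              return 0;
      }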
  6. 05 Jun, 2015 5 commits
  7. 03 Jun, 2015 1 commit
    • arm64: insn: Add aarch64_{get,set}_branch_offset · 10b48f7e
      Marc Zyngier authored
      In order to deal with branches located in alternate sequences but
      pointing to the main kernel text, we need to extract the relative
      displacement encoded in the instruction and to update that
      instruction with a new offset once it is known.
      
      For this, we introduce three new helpers:
      - aarch64_insn_is_branch_imm is a predicate indicating if the
        instruction is an immediate branch
      - aarch64_get_branch_offset returns a signed value representing
        the byte offset encoded in a branch instruction
      - aarch64_set_branch_offset takes an instruction and an offset,
        and returns the corresponding updated instruction.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      10b48f7e
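      For the simplest case, the A64 B/BL instructions, the immediate is a signed 26-bit word offset in bits [25:0], i.e. a byte offset in multiples of 4. A standalone sketch of the get/set idea for just that case (the in-tree helpers also handle B.cond, CBZ/CBNZ and TBZ/TBNZ, which use smaller immediates at different bit positions; this is illustrative, not the kernel implementation):

      #include <stdio.h>
      #include <stdint.h>

      static int64_t b_get_branch_offset(uint32_t insn)
      {
              int64_t imm26 = insn & 0x03ffffff;   /* bits [25:0] */
              if (imm26 & 0x02000000)              /* bit 25 is the sign bit */
                      imm26 -= 0x04000000;         /* sign-extend from 26 bits */
              return imm26 * 4;                    /* offset is in bytes */
      }

      static uint32_t b_set_branch_offset(uint32_t insn, int64_t offset)
      {
              uint32_t imm26 = ((uint64_t)offset >> 2) & 0x03ffffff;
              return (insn & ~0x03ffffffu) | imm26;   /* keep the opcode bits */
      }

      int main(void)
      {
              uint32_t b_back_8 = 0x17fffffe;      /* unconditional B with offset -8 */
              printf("offset = %lld\n", (long long)b_get_branch_offset(b_back_8));
              printf("re-encoded = 0x%08x\n", b_set_branch_offset(b_back_8, -8));
              return 0;
      }

      Decoding 0x17fffffe yields -8, and re-encoding -8 into the same instruction reproduces 0x17fffffe, which is the round-trip property the get/set pair is meant to provide.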
  8. 02 Jun, 2015 4 commits
  9. 01 Jun, 2015 1 commit
  10. 27 May, 2015 8 commits
  11. 21 May, 2015 1 commit
  12. 19 May, 2015 8 commits