1. 24 Mar, 2016 1 commit
  2. 21 Mar, 2016 4 commits
  3. 11 Mar, 2016 3 commits
    • arm64: kasan: Fix zero shadow mapping overriding kernel image shadow · 2776e0e8
      Catalin Marinas authored
      With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
      PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
      kimg_shadow_end is not page aligned (_end shifted by
      KASAN_SHADOW_SCALE_SHIFT), the edges of the kernel image shadow
      previously mapped via vmemmap_populate() may be overridden by
      subsequent calls to kasan_populate_zero_shadow(), leading to kernel
      panics like the one below:
      
      ------------------------------------------------------------------------------
      Unable to handle kernel paging request at virtual address fffffc100135068c
      pgd = fffffc8009ac0000
      [fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
      Internal error: Oops: 9600004f [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
      Hardware name: Juno (DT)
      task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
      PC is at __memset+0x4c/0x200
      LR is at kasan_unpoison_shadow+0x34/0x50
      pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
      sp : fffffe0900203db0
      x29: fffffe0900203db0 x28: 0000000000000000
      x27: 0000000000000000 x26: 0000000000000000
      x25: fffffc80099b69d0 x24: 0000000000000001
      x23: 0000000000000000 x22: 0000000000002000
      x21: dffffc8000000000 x20: 1fffff9001350a8c
      x19: 0000000000002000 x18: 0000000000000008
      x17: 0000000000000147 x16: ffffffffffffffff
      x15: 79746972100e041d x14: ffffff0000000000
      x13: ffff000000000000 x12: 0000000000000000
      x11: 0101010101010101 x10: 1fffffc11c000000
      x9 : 0000000000000000 x8 : fffffc100135068c
      x7 : 0000000000000000 x6 : 000000000000003f
      x5 : 0000000000000040 x4 : 0000000000000004
      x3 : fffffc100134f651 x2 : 0000000000000400
      x1 : 0000000000000000 x0 : fffffc100135068c
      
      Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
      Call trace:
      [<fffffc800846f1cc>] __memset+0x4c/0x200
      [<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
      [<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
      [<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
      [<fffffc80089e1948>] kernel_init+0x10/0xf8
      [<fffffc8008093a00>] ret_from_fork+0x10/0x50
      ------------------------------------------------------------------------------
      
      This patch aligns kimg_shadow_start and kimg_shadow_end to
      SWAPPER_BLOCK_SIZE in all configurations.
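
      A minimal sketch of that alignment (illustrative only; kimg_shadow_start,
      kimg_shadow_end and SWAPPER_BLOCK_SIZE are the names used in this
      message, and round_down()/round_up() are the generic kernel helpers):

        /*
         * Illustrative: round the kernel image shadow region out to
         * SWAPPER_BLOCK_SIZE so that later kasan_populate_zero_shadow()
         * calls cannot clobber the edges mapped via vmemmap_populate().
         */
        kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
        kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);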
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    • arm64: kasan: Use actual memory node when populating the kernel image shadow · 2f76969f
      Catalin Marinas authored
      With the 16KB or 64KB page configurations, the generic
      vmemmap_populate() implementation warns on potential offnode
      page_structs via vmemmap_verify() because the arm64 kasan_init() passes
      NUMA_NO_NODE instead of the actual node for the kernel image memory.
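
      A hedged sketch of the kind of change described (illustrative; the node
      lookup via pfn_to_nid(virt_to_pfn(_text)) is an assumption and may not
      be the exact call used by the patch):

        /*
         * Illustrative: pass the node that actually backs the kernel image
         * instead of NUMA_NO_NODE when mapping its KASAN shadow.
         */
        vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
                         pfn_to_nid(virt_to_pfn(_text)));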
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: James Morse <james.morse@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
    • arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission · fdc69e7d
      Catalin Marinas authored
      The set_pte_at() function must update the hardware PTE_RDONLY bit
      depending on the state of the PTE_WRITE and PTE_DIRTY bits of the given
      entry value. However, it currently only performs this for pte_valid()
      entries, ignoring PTE_PROT_NONE. The side-effect is that PROT_NONE
      mappings would not have the PTE_RDONLY bit set. Without
      CONFIG_ARM64_HW_AFDBM, this is not an issue since such PROT_NONE pages
      are not accessible anyway.
      
      With commit 2f4b829c ("arm64: Add support for hardware updates of
      the access and dirty pte bits"), the ptep_set_wrprotect() function was
      re-written to cope with automatic hardware updates of the dirty state.
      As an optimisation, only PTE_RDONLY is checked to assess the "dirty"
      status. Since set_pte_at() does not set this bit for PROT_NONE mappings,
      such pages may be considered "dirty" as a result of
      ptep_set_wrprotect().
      
      This patch updates the pte_valid() check to pte_present() in
      set_pte_at(). It also adds PTE_PROT_NONE to the swap entry bits comment.
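
      A hedged sketch of the resulting check (illustrative; the pte helper
      names approximate the arm64 accessors and may not match the patch
      verbatim):

        /*
         * Illustrative: maintain the hardware PTE_RDONLY bit for all
         * present entries, which now includes PTE_PROT_NONE mappings,
         * not just pte_valid() ones.
         */
        if (pte_present(pte)) {
                if (pte_write(pte) && pte_dirty(pte))
                        pte_val(pte) &= ~PTE_RDONLY;
                else
                        pte_val(pte) |= PTE_RDONLY;
        }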
      
      Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
      Tested-by: Ganapatrao Kulkarni <gkulkarni@cavium.com>
      Cc: <stable@vger.kernel.org>
  4. 04 Mar, 2016 3 commits
  5. 03 Mar, 2016 1 commit
    • arm64: enable CONFIG_DEBUG_RODATA by default · 57efac2f
      Ard Biesheuvel authored
      In spite of its name, CONFIG_DEBUG_RODATA is an important hardening feature
      for production kernels, and distros all enable it by default in their
      kernel configs. However, since enabling it used to result in more
      granular, and thus less efficient, kernel mappings, the Kconfig default
      has remained 'n' for performance reasons.
      
      However, since commit 2f39b5f9 ("arm64: mm: Mark .rodata as RO"), the
      various kernel segments (.text, .rodata, .init and .data) are already
      mapped individually, and the only effect of setting CONFIG_DEBUG_RODATA is
      that the existing .text and .rodata mappings are updated late in the boot
      sequence to have their read-only attributes set, which means that any
      performance concerns related to enabling CONFIG_DEBUG_RODATA are no longer
      valid.
      
      So from now on, make CONFIG_DEBUG_RODATA default to 'y'.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 02 Mar, 2016 2 commits
    • arm64: Rework valid_user_regs · dbd4d7ca
      Mark Rutland authored
      We validate pstate using PSR_MODE32_BIT, which is part of the
      user-provided pstate (and cannot be trusted). Also, we conflate
      validation of AArch32 and AArch64 pstate values, making the code
      difficult to reason about.
      
      Instead, validate the pstate value based on the associated task. The
      task may or may not be current (e.g. when using ptrace), so this must be
      passed explicitly by callers. To avoid circular header dependencies via
      sched.h, is_compat_task is pulled out of asm/ptrace.h.
      
      To make the code possible to reason about, the AArch64 and AArch32
      validation is split into separate functions. Software must respect the
      RES0 policy for SPSR bits, and thus the kernel mirrors the hardware
      policy (RAZ/WI) for bits that are as yet unallocated. When these
      acquire an architected meaning, writes may be permitted (potentially
      with additional validation).
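
      A hedged sketch of the dispatch described above (illustrative;
      valid_compat_regs() and valid_native_regs() stand in for the split
      AArch32/AArch64 validators):

        /*
         * Illustrative: validate against the task's own execution state
         * rather than trusting the user-supplied PSR_MODE32_BIT.
         */
        static int valid_user_regs(struct user_pt_regs *regs,
                                   struct task_struct *task)
        {
                if (is_compat_thread(task_thread_info(task)))
                        return valid_compat_regs(regs);

                return valid_native_regs(regs);
        }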
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly · 6d2aa549
      Ard Biesheuvel authored
      Commit 8439e62a ("arm64: mm: use bit ops rather than arithmetic in
      pa/va translations") changed the boundary check against PAGE_OFFSET from
      an arithmetic comparison to a bit test. This means we now silently assume
      that PAGE_OFFSET is a power of 2 that divides the kernel virtual address
      space into two equal halves. So make that assumption explicit.
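
      A hedged sketch of such a build-time assertion (illustrative; the exact
      expression used by the patch may differ, and UL() is the constant
      helper already used in the arm64 headers):

        /*
         * Illustrative: PAGE_OFFSET must be aligned to half the VA space,
         * i.e. its low (VA_BITS - 1) bits must all be zero.
         */
        BUILD_BUG_ON((PAGE_OFFSET & ((UL(1) << (VA_BITS - 1)) - 1)) != 0);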
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 01 Mar, 2016 1 commit
  8. 29 Feb, 2016 2 commits
  9. 26 Feb, 2016 9 commits
  10. 25 Feb, 2016 9 commits
    • arm64: Fix building error with 16KB pages and 36-bit VA · cac4b8cd
      Catalin Marinas authored
      In such a configuration, Linux uses only two levels of page tables and
      __pud_populate() should not be used. However, the BUILD_BUG() triggers
      since pud_sect() is still defined and the compiler cannot eliminate
      such code, even though it would never be executed at run time. This
      patch extends the #ifdef ARM64_64K_PAGES condition for pud_sect() to
      also cover PGTABLE_LEVELS < 3.
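
      A hedged sketch of the described #ifdef change (illustrative; the
      definitions follow the existing pud_sect()/pud_table() pattern in
      arm64's pgtable.h and may not match the patch verbatim):

        /*
         * Illustrative: with 64KB pages or fewer than three page-table
         * levels there are no section mappings at the PUD level, so make
         * pud_sect() trivially false and let the compiler drop the
         * __pud_populate() path guarded by BUILD_BUG().
         */
        #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
        #define pud_sect(pud)           (0)
        #define pud_table(pud)          (1)
        #else
        #define pud_sect(pud)           ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT)
        #define pud_table(pud)          ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_TABLE)
        #endif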
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Rename cpuid_feature field extract routines · 28c5dcb2
      Suzuki K Poulose authored
      Now that we have a clear understanding of the sign of a feature field,
      rename the routines to reflect the sign, so that they are not misused.
      cpuid_feature_extract_field() now accepts a 'sign' parameter.
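
      A hedged sketch of what sign-aware extraction helpers might look like
      (illustrative only; it assumes 4-bit ID register fields and is not
      necessarily the patch's exact code):

        /* Illustrative: extract a 4-bit ID register field, honouring its sign. */
        static inline s64 cpuid_feature_extract_signed_field(u64 reg, int shift)
        {
                return (s64)(reg << (64 - shift - 4)) >> 60;
        }

        static inline u64 cpuid_feature_extract_unsigned_field(u64 reg, int shift)
        {
                return (reg >> shift) & 0xf;
        }

        static inline s64 cpuid_feature_extract_field(u64 reg, int shift, bool sign)
        {
                return sign ? cpuid_feature_extract_signed_field(reg, shift)
                            : cpuid_feature_extract_unsigned_field(reg, shift);
        }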
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: capabilities: Handle sign of the feature bit · ff96f7bc
      Suzuki K Poulose authored
      Use the appropriate accessor for each feature bit by keeping track of
      the sign of the feature.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: cpufeature: Fix the sign of feature bits · 0710cfdb
      Suzuki K Poulose authored
      There is confusion about whether the values of a feature field are
      signed or not in ARM, and this is not clearly stated in the ARM ARM
      either. We have treated most of the bits as signed so far, and marked
      the rest as unsigned explicitly. This has been fixed in the ARM ARM
      and the clarification will be rolled out soon.
      
      Here are the criteria in a nutshell:
      
      1) The fields, which are either signed or unsigned, use increasing
         numerical values to indicate an increase in functionality. Thus, if a value
         of 0x1 indicates the presence of some instructions, then the 0x2 value will
         indicate the presence of those instructions plus some additional instructions
         or functionality.
      
      2) For ID field values where the value 0x0 defines that a feature is not present,
         the number is an unsigned value.
      
      3) For some features where the feature was made optional or removed after the
         start of the definition of the architecture, the value 0x0 is used to
         indicate the presence of a feature, and 0xF indicates the absence of the
         feature. In these cases, the fields are, in effect, holding signed values.
      
      So with these rules applied, only the following fields are signed and
      the rest are unsigned (a worked example follows the list):
      
       a) ID_AA64PFR0_EL1: {FP, ASIMD}
       b) ID_AA64MMFR0_EL1: {TGran4K, TGran64K}
       c) ID_AA64DFR0_EL1: PMUVer (0xf - PMUv3 not implemented)
       d) ID_DFR0_EL1: PerfMon
       e) ID_MMFR0_EL1: {InnerShr, OuterShr}
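
      A worked example of why the sign matters for case (c), PMUVer
      (illustrative, self-contained C; the field value 0xf is taken from the
      list above):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint64_t pmuver = 0xf;  /* PMUv3 not implemented */

                /* Read as unsigned, 0xf looks like the highest capability. */
                unsigned int as_unsigned = pmuver & 0xf;

                /*
                 * Read as a signed 4-bit field, 0xf becomes -1 and correctly
                 * ranks below an implementation reporting 0x1.
                 */
                long long as_signed = (int64_t)(pmuver << 60) >> 60;

                printf("unsigned=%u signed=%lld\n", as_unsigned, as_signed);
                return 0;
        }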
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: cpufeature: Correct feature register tables · e5343503
      Suzuki K Poulose authored
      Correct the feature bit entries for:
        ID_DFR0
        ID_MMFR0
      
      to fix the default safe value for some of the bits.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Ensure the secondary CPUs have safe ASIDBits size · 13f417f3
      Suzuki K Poulose authored
      Add a hook for checking, during early boot, whether a secondary CPU
      has the features already used by the kernel, based on the boot CPU
      values, and plug in a check for the ASID size.

      The ID_AA64MMFR0_EL1:ASIDBits field determines the size of the mm
      context id and is used early during boot to make decisions. The value
      is picked up from the boot CPU and cannot be delayed until other CPUs
      are up. If a secondary CPU has a smaller ASID size than the boot CPU,
      things will break horribly and the usual SANITY check is not good
      enough to prevent the system from crashing. So, crash the system with
      enough information.
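
      A hedged sketch of the kind of early check described (illustrative;
      asid_bits, get_cpu_asid_bits() and cpu_panic_kernel() are used here as
      stand-ins for the boot CPU's recorded value, the per-CPU helper added
      in the commit below, and the early-boot panic path):

        /*
         * Illustrative: a secondary CPU with fewer ASID bits than the boot
         * CPU cannot be used safely, so report the mismatch and crash early.
         */
        static void verify_cpu_asid_bits(void)
        {
                u32 asid = get_cpu_asid_bits();

                if (asid < asid_bits) {
                        pr_crit("CPU%d: smaller ASID size (%u) than boot CPU (%u)\n",
                                smp_processor_id(), asid, asid_bits);
                        cpu_panic_kernel();
                }
        }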
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Add helper for extracting ASIDBits · 038dc9c6
      Suzuki K Poulose authored
      Add a helper to extract ASIDBits on the current CPU.
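
      A hedged sketch of such a helper (illustrative; it assumes the
      ID_AA64MMFR0_EL1.ASIDBits field sits in bits [7:4], with 0x0 meaning
      8 ASID bits and 0x2 meaning 16):

        /* Illustrative: translate ID_AA64MMFR0_EL1.ASIDBits into a count. */
        static u32 get_cpu_asid_bits(void)
        {
                u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
                u32 fld = (mmfr0 >> 4) & 0xf;

                switch (fld) {
                case 0:
                        return 8;
                case 2:
                        return 16;
                default:
                        pr_warn("CPU%d: unknown ASID size (%u); assuming 8\n",
                                smp_processor_id(), fld);
                        return 8;
                }
        }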
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Enable CPU capability verification unconditionally · fd9c2790
      Suzuki K Poulose authored
      We verify the capabilities of the secondary CPUs only when hotplug is
      enabled. The CPUs activated at boot time skip the verification, by
      checking whether the system-wide capabilities have been initialised
      or not.

      This patch removes the capability check's dependency on
      CONFIG_HOTPLUG_CPU to make sure that all the secondary CPUs go through
      the check. The CPUs activated at boot time will still skip the
      system-wide capability check. The plan is to hook in a check for the
      CPU features used by the kernel during early boot, based on the boot
      CPU values.
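
      A hedged sketch of the resulting flow (illustrative only; the names
      used here, such as sys_caps_initialised, check_early_cpu_features()
      and verify_local_cpu_capabilities(), are approximations rather than
      the patch's actual symbols):

        /*
         * Illustrative: every secondary CPU runs this check regardless of
         * CONFIG_HOTPLUG_CPU; the system-wide comparison is skipped until
         * the system capabilities have been finalised.
         */
        void check_local_cpu_capabilities(void)
        {
                if (!sys_caps_initialised)
                        check_early_cpu_features();
                else
                        verify_local_cpu_capabilities();
        }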
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Handle early CPU boot failures · bb905274
      Suzuki K Poulose authored
      A secondary CPU could fail to come online due to insufficient
      capabilities and could simply die or loop in the kernel. For example,
      a CPU with no support for the selected kernel PAGE_SIZE loops in the
      kernel with the MMU turned off, and a hotplugged CPU which doesn't
      have one of the advertised system capabilities will die during
      activation.
      
      There is no way to synchronise the status of the failing CPU back to
      the master. This patch solves the issue by adding a field to the
      secondary_data which can be updated by the failing CPU. If the
      secondary CPU fails even before turning the MMU on, it updates the
      status in a special variable reserved in the head.text section, to
      make sure that the update can be cache-invalidated safely without
      possibly sharing a cache writeback granule.
      
      Here are the possible states (summarised in the sketch after the list):
      
       -1. CPU_MMU_OFF - Initial value set by the master CPU. If the status
      remains at this value, the CPU could not turn the MMU on and hence the
      status could not be reliably updated in the secondary_data. Instead,
      the CPU has updated the status @ __early_cpu_boot_status.
      
       0. CPU_BOOT_SUCCESS - CPU has booted successfully.
      
       1. CPU_KILL_ME - CPU has invoked cpu_ops->die, asking the master CPU
      to synchronise by issuing cpu_ops->cpu_kill.
      
       2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(); instead it is
      looping in the kernel. This information could be used, for example, by
      kexec to check whether it is really safe to do a kexec reboot.
      
       3. CPU_PANIC_KERNEL - CPU detected some serious issue which requires
      the kernel to crash immediately. The secondary CPU cannot call panic()
      until it has initialised the GIC. This flag can be used to instruct the
      master to do so.
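
      A minimal sketch of these states as C constants (illustrative; the
      values simply follow the numbering above):

        /*
         * Illustrative: early boot status values reported back to the master
         * via secondary_data / __early_cpu_boot_status, as described above.
         */
        #define CPU_MMU_OFF             (-1)
        #define CPU_BOOT_SUCCESS        (0)
        #define CPU_KILL_ME             (1)
        #define CPU_STUCK_IN_KERNEL     (2)
        #define CPU_PANIC_KERNEL        (3)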
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [catalin.marinas@arm.com: conflict resolution]
      [catalin.marinas@arm.com: converted "status" from int to long]
      [catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 24 Feb, 2016 5 commits