- 16 May, 2014 4 commits
-
-
Mark Brown authored
As a legacy of the way 32-bit ARM did things, the topology code uses a null topology map by default and only later overwrites it, mapping cores with no firmware information to a cluster by themselves. To make it simpler to reset things as part of recovering from parse failures in firmware information, set this configuration directly at init time. A core is always its own sibling, so there should be no risk of confusion with firmware-provided information. Signed-off-by: Mark Brown <broonie@linaro.org> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
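A minimal sketch of the idea (the helper name and the exact `cpu_topology` layout are assumptions for illustration, not the literal patch):

```c
/*
 * Hedged sketch: seed each CPU's topology at init so every core is
 * already its own sibling, instead of starting from an empty map and
 * patching it up after firmware parsing.
 */
static void __init init_cpu_topology_defaults(void)
{
	unsigned int cpu;

	for_each_possible_cpu(cpu) {
		struct cpu_topology *topo = &cpu_topology[cpu];

		topo->thread_id  = -1;
		topo->core_id    = 0;
		topo->cluster_id = -1;

		/* a core is always its own sibling */
		cpumask_clear(&topo->core_sibling);
		cpumask_set_cpu(cpu, &topo->core_sibling);
		cpumask_clear(&topo->thread_sibling);
		cpumask_set_cpu(cpu, &topo->thread_sibling);
	}
}
```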
-
Zi Shen Lim authored
Remove unused and deprecated mc_capable() and smt_capable(). Both were added recently by f6e763b9 ("arm64: topology: Implement basic CPU topology support"). Uses of both were removed by 8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs"). Signed-off-by: Zi Shen Lim <zlim@broadcom.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
This reverts commit bc07c2c6. While the aim is increased security for --x memory maps, it does not protect against kernel level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored (merge from git://git.linaro.org/people/ard.biesheuvel/linux-arm)
FPSIMD register bank context switching and crypto algorithm optimisations for arm64 from Ard Biesheuvel.

* tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm:
  arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
  arm64: pull in <asm/simd.h> from asm-generic
  arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
  arm64/crypto: AES using ARMv8 Crypto Extensions
  arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
  arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
  arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
  arm64: add support for kernel mode NEON in interrupt context
  arm64: defer reloading a task's FPSIMD state to userland resume
  arm64: add abstractions for FPSIMD state manipulation
  asm-generic: allow generic unaligned access if the arch supports it

Conflicts:
  arch/arm64/include/asm/thread_info.h
-
- 14 May, 2014 7 commits
-
-
Ard Biesheuvel authored
This adds ARMv8 implementations of AES in ECB, CBC, CTR and XTS modes, both for ARMv8 with Crypto Extensions and for plain ARMv8 NEON. The Crypto Extensions version can only run on ARMv8 implementations that have support for these optional extensions. The plain NEON version is a table-based yet time-invariant implementation. All S-box substitutions are performed in parallel, leveraging the wide range of ARMv8's tbl/tbx instructions and the huge NEON register file, which can comfortably hold the entire S-box and still have room to spare for the actual computations. The key expansion routines were borrowed from aes_generic. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
This patch adds support for the AES-CCM encryption algorithm for CPUs that have support for the AES part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This patch adds support for the AES symmetric encryption algorithm for CPUs that have support for the AES part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This is a port to ARMv8 (Crypto Extensions) of the Intel implementation of the GHASH Secure Hash (used in the Galois/Counter chaining mode). It relies on the optional PMULL/PMULL2 instruction (polynomial multiply long, what Intel call carry-less multiply). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This patch adds support for the SHA-224 and SHA-256 Secure Hash Algorithms for CPUs that have support for the SHA-2 part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This patch adds support for the SHA-1 Secure Hash Algorithm for CPUs that have support for the SHA-1 part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 12 May, 2014 5 commits
-
-
AKASHI Takahiro authored
Some kernel files may include both linux/compat.h and asm/compat.h, directly or indirectly. Since both header files contain is_compat_task() under !CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail. Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mountfs.c (do_mountfs.c may read asm/compat.h via asm/ftrace.h once ftrace is implemented). So this patch proactively 1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h, and 2) replaces asm/compat.h with linux/compat.h in kernel/*.c; asm/compat.h is still necessary in ptrace.c and process.c because they use is_compat_thread(). Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
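The conflict can be pictured roughly like this (a sketch of the shape of the stub, not the exact diff): linux/compat.h already carries a generic !CONFIG_COMPAT fallback, so the arch header must not define a competing copy.

```c
/* linux/compat.h already provides the generic fallback ... */
#ifndef CONFIG_COMPAT
static inline int is_compat_task(void)
{
	return 0;
}
#endif

/*
 * ... so asm/compat.h should not define the same stub again. Generic code
 * includes <linux/compat.h>; only callers of is_compat_thread() (e.g.
 * ptrace.c and process.c) still need <asm/compat.h>.
 */
```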
-
AKASHI Takahiro authored
This macro, regs_return_value, is used mainly by audit to record a system call's result, but may also be used in test_kprobes.c. Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Richard Guy Briggs <rgb@redhat.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
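On arm64 a syscall's return value is delivered in x0, so the helper plausibly reduces to something like this (a sketch, not necessarily the exact definition in the patch):

```c
/* Sketch: the syscall return value sits in x0, i.e. pt_regs->regs[0]. */
#define regs_return_value(regs)	((regs)->regs[0])
```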
-
AKASHI Takahiro authored
As done on arm, this change makes it easy to confirm that we invoke syscall-related hooks (including the syscall tracepoint, audit and seccomp, which will be implemented later) in the correct order: operations are undone on exit in the opposite order to that in which they were performed on entry. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
AKASHI Takahiro authored
Currently syscall_trace() is called only for ptrace. With additional TIF_xx flags defined, it is now called in all the cases of audit, ftrace and seccomp in addition to ptrace. Acked-by: Richard Guy Briggs <rgb@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
Since mdscr_el1 is part of the debug register group, it is highly likely to be trapped by a hypervisor to prevent virtual machines from debugging (buggering?) each other. Unfortunately, this absolutely destroys our performance, since we access the register on many of our low-level fault handling paths to keep track of the various debug state machines.

This patch removes our dependency on mdscr_el1 in the case that debugging is not being used. More specifically we:

  - Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and avoid disabling step in the MDSCR when we don't need to. MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from userspace.

  - Ensure debug exceptions are re-enabled on *all* exception entry paths, even the debug exception handling path (where we re-enable exceptions after invoking the handler). Since we can now rely on MDSCR_EL1.SS being cleared by the entry code, exception handlers can usually enable debug immediately before enabling interrupts.

  - Remove all debug exception unmasking from ret_to_user and el1_preempt, since we will never get here with debug exceptions masked.

This results in a slight change to kernel debug behaviour, where we now step into interrupt handlers and data aborts from EL1 when debugging the kernel, which is actually a useful thing to do. A side-effect of this is that it *does* potentially prevent stepping off {break,watch}points when there is a high-frequency interrupt source (e.g. a timer), so a debugger would need to use either breakpoints or manually disable interrupts to get around this issue.

With this patch applied, guest performance is restored under KVM when debug register accesses are trapped (and we get a measurable performance increase on the host on Cortex-A57 too).

Cc: Ian Campbell <ian.campbell@citrix.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 May, 2014 17 commits
-
-
Will Deacon authored
In order to ensure ordering and completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dmb and dsb instructions respectively. This patch updates our low-level cache and tlb maintenance routines to use the inner-shareable barrier variants where appropriate. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
In order to ensure completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dsb instruction. This patch relaxes our dsb sy instructions to dsb ish where possible. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
set_cpu_boot_mode_flag is used to identify which exception levels are encountered across the system by CPUs trying to enter the kernel. The basic algorithm is: if a CPU is booting at EL2, it will set a flag at an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable. Otherwise, a flag is set at an offset of zero into the same cacheline. This enables us to check that all CPUs booted at the same exception level.

This cacheline is written with the stage-1 MMU off (that is, via a strongly-ordered mapping) and will bypass any clean lines in the cache, leading to potential coherence problems when the variable is later checked via the normal, cacheable mapping of the kernel image.

This patch reworks the broken flushing code so that we:

  (1) Use a DMB to order the strongly-ordered write of the cacheline against the subsequent cache-maintenance operation (by-VA operations only hazard against normal, cacheable accesses).

  (2) Use a single dc ivac instruction to invalidate any clean lines containing a stale copy of the line after it has been updated.

Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
The recently introduced acquire/release accessors refer to smp_mb() in the !CONFIG_SMP case. This is confusing when reading the code, so use barrier() directly when we know we're UP. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
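An illustrative sketch of the UP definitions this describes (the exact macro bodies are assumed, but the point is that only a compiler barrier is needed):

```c
#ifndef CONFIG_SMP
/* UP: a compiler barrier is sufficient; no hardware barrier is required. */
#define smp_store_release(p, v)						\
do {									\
	barrier();							\
	ACCESS_ONCE(*p) = (v);						\
} while (0)

#define smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
	barrier();							\
	___p1;								\
})
#endif
```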
-
Will Deacon authored
Now that all callers of the barrier macros are updated to pass the mandatory options, update the macros so the option is actually used. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
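Once every caller passes an option, the macros can splice it straight into the instruction, roughly as follows (a sketch of the assumed form):

```c
/* Sketch: the option is pasted into the instruction, e.g. dsb(ish),
 * dsb(ishst) or dmb(sy); previously the argument was ignored. */
#define dmb(opt)	asm volatile("dmb " #opt : : : "memory")
#define dsb(opt)	asm volatile("dsb " #opt : : : "memory")
```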
-
Will Deacon authored
When calling our low-level barrier macros directly, we can often make do with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Steve Capper authored
The TLB maintenance functions __cpu_flush_user_tlb_range and __cpu_flush_kern_tlb_range do not take the page granule into consideration when looping through the address range, and repeatedly flush TLB entries for the same page when operating with 64K pages. This patch reworks the logic so that we instead advance the loop by 1 << (PAGE_SHIFT - 12), avoiding the repetition. The routines have also been converted from assembler to static inline functions to aid legibility and potential compiler optimisations. The isb() has been removed from flush_tlb_kernel_range() as it is only needed when changing the execute permission of a mapping; if one needs to set an area of the kernel as execute/non-execute, an isb() must be inserted after the call to flush_tlb_kernel_range(). Cc: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
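A hedged sketch of the reworked kernel-range flush (close in spirit to the description above, not the literal patch): TLBI operands take the VA shifted right by 12 bits, so the loop can advance one page at a time whatever the granule.

```c
static inline void flush_tlb_kernel_range(unsigned long start,
					  unsigned long end)
{
	unsigned long addr;

	/* TLBI operands are VA >> 12, independent of PAGE_SIZE */
	start >>= 12;
	end >>= 12;

	dsb(ishst);
	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
		asm("tlbi vaae1is, %0" : : "r" (addr));
	dsb(ish);
	/* no isb() here: callers changing exec permissions add their own */
}
```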
-
Will Deacon authored
Some users of xchg() don't bother using the return value, which results in a compiler warning like the following (from kgdb):

  In file included from linux/arch/arm64/include/asm/atomic.h:27:0,
                   from include/linux/atomic.h:4,
                   from include/linux/spinlock.h:402,
                   from include/linux/seqlock.h:35,
                   from include/linux/time.h:5,
                   from include/uapi/linux/timex.h:56,
                   from include/linux/timex.h:56,
                   from include/linux/sched.h:19,
                   from include/linux/pid_namespace.h:4,
                   from kernel/debug/debug_core.c:30:
  kernel/debug/debug_core.c: In function ‘kgdb_cpu_enter’:
  linux/arch/arm64/include/asm/cmpxchg.h:75:3: warning: value computed is not used [-Wunused-value]
     ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
     ^
  linux/arch/arm64/include/asm/atomic.h:132:30: note: in expansion of macro ‘xchg’
   #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
  kernel/debug/debug_core.c:504:4: note: in expansion of macro ‘atomic_xchg’
      atomic_xchg(&kgdb_active, cpu);
      ^

This patch makes use of the same trick as we do for cmpxchg, by assigning the return value to a dummy variable in the xchg() macro itself.

Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
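The fix the message describes, assigning the result to a dummy inside a statement expression, would look roughly like this (a sketch based on the cmpxchg trick):

```c
/* Assigning to a local inside a statement expression silences
 * -Wunused-value at call sites that ignore xchg()'s result. */
#define xchg(ptr, x)							\
({									\
	__typeof__(*(ptr)) __ret;					\
	__ret = (__typeof__(*(ptr)))					\
		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr)));	\
	__ret;								\
})
```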
-
Steve Capper authored
We have the capability to map 1GB level-1 blocks when using a 4K granule. This patch adjusts the create_mapping logic so that, when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB aligned. This both reduces the number of lookup levels required to resolve a kernel logical address and reduces TLB pressure on cores that support 1GB TLB entries. Signed-off-by: Steve Capper <steve.capper@linaro.org> Tested-by: Jungseok Lee <jays.lee@samsung.com> [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
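The alignment test this describes can be sketched as follows (the helper name is illustrative, not taken from the patch):

```c
/*
 * Hypothetical helper: a 1GB (PUD-level) block may be used only with a
 * 4K granule and only when the VA, the next boundary and the PA are all
 * 1GB aligned.
 */
static bool use_1G_block(unsigned long addr, unsigned long next,
			 unsigned long phys)
{
	if (PAGE_SHIFT != 12)
		return false;

	return ((addr | next | phys) & ~PUD_MASK) == 0;
}
```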
-
Bjorn Helgaas authored
arm64 sets CONFIG_64BIT=y and hence uses the "long counter" atomic64_t definition from include/linux/types.h. Make atomic64_read() return "long", not "long long". Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
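In other words, the accessor should be typed like this (a sketch of the shape of the change, not necessarily the exact definition):

```c
/* CONFIG_64BIT: atomic64_t.counter is a long, so read it back as one. */
#define atomic64_read(v)	(*(volatile long *)&(v)->counter)
```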
-
Catalin Marinas authored
The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, we are unlikely to see UP ARMv8 hardware and, even if we do, the Shareability bit is no longer assumed to disable cacheable accesses. A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable. Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
-
Catalin Marinas authored
The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. The kernel, however, can still access such a page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must implement/enable features like SECCOMP. This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_valid_ng() macro. VM_READ is also now checked for page faults. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
This information is useful for instruction emulators to detect read/write and access size without having to decode the faulting instruction. The current patch exports it via sigcontext (struct esr_context) and is only valid for SIGSEGV and SIGBUS. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
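The exported record plausibly looks like this (a sketch based on the description above; the exact field layout is an assumption):

```c
/* ESR_EL1 value for the faulting instruction, delivered with
 * SIGSEGV/SIGBUS via the signal frame's extra context records. */
struct esr_context {
	struct _aarch64_ctx head;	/* .magic identifies the record, .size its length */
	__u64 esr;
};
```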
-
Catalin Marinas authored
This patch removes the aux_context structure (and the containing file) to allow the placement of the _aarch64_ctx end magic based on the context stored on the signal stack. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault was caused by a write access and applications like Qemu rely on such information being provided in sigcontext. This patch introduces the ESR_EL1 tracking for the arm64 kernel faults and sets bit 11 accordingly in compat sigcontext. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
The synchronisation with the boot thread already happens in __cpu_up() via wait_for_completion_timeout(). In addition, __cpu_up() calls are protected by the cpu_add_remove_lock mutex and already serialised. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
The hardware provides the maximum cache line size in the system via the CTR_EL0.CWG bits. This patch implements the cache_line_size() function to read such information, together with a sanity check if the statically defined L1_CACHE_BYTES is smaller than the hardware value. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
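CTR_EL0.CWG (bits [27:24]) encodes log2 of the maximum cache write-back granule in words, so the function can be sketched roughly as:

```c
int cache_line_size(void)
{
	/* CWG = log2(granule in words); 0 means "not reported" */
	u32 cwg = (read_cpuid_cachetype() >> 24) & 0xf;

	if (cwg)
		return 4 << cwg;	/* words -> bytes */

	return L1_CACHE_BYTES;		/* compile-time fallback */
}
```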
-
- 08 May, 2014 4 commits
-
-
Ard Biesheuvel authored
This patch modifies kernel_neon_begin() and kernel_neon_end(), so they may be called from any context. To address the case where only a couple of registers are needed, kernel_neon_begin_partial(u32) is introduced which takes as a parameter the number of bottom 'n' NEON q-registers required. To mark the end of such a partial section, the regular kernel_neon_end() should be used. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
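Usage would look something like this (the call site is assumed for illustration):

```c
static void do_neon_work(void)
{
	/* Claim only q0-q3; usable even from interrupt context after this change. */
	kernel_neon_begin_partial(4);

	/* ... NEON-accelerated work touching at most four q-registers ... */

	kernel_neon_end();
}
```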
-
Ard Biesheuvel authored
If a task gets scheduled out and back in again and nothing has touched its FPSIMD state in the mean time, there is really no reason to reload it from memory. Similarly, repeated calls to kernel_neon_begin() and kernel_neon_end() will preserve and restore the FPSIMD state every time.

This patch defers the FPSIMD state restore to the last possible moment, i.e., right before the task returns to userland. If a task does not return to userland at all (for any reason), the existing FPSIMD state is preserved and may be reused by the owning task if it gets scheduled in again on the same CPU.

This patch adds two more functions to abstract away from straight FPSIMD register file saves and restores:
  - fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
  - fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
There are two tacit assumptions in the FPSIMD handling code that will no longer hold after the next patch that optimizes away some FPSIMD state restores:
  . the FPSIMD registers of this CPU contain the userland FPSIMD state of task 'current';
  . when switching to a task, its FPSIMD state will always be restored from memory.

This patch adds the following functions to abstract away from straight FPSIMD register file saves and restores:
  - fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
  - fpsimd_update_current_state -> replace current's FPSIMD state

Where necessary, the signal handling and fork code are updated to use the above wrappers instead of poking into the FPSIMD registers directly.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
Switch the default unaligned access method to 'hardware implemented' if HAVE_EFFICIENT_UNALIGNED_ACCESS is set. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de>
-
- 05 May, 2014 1 commit
-
-
Linus Torvalds authored
-
- 04 May, 2014 2 commits
-
-
Linus Torvalds authored (merge from git://git.samba.org/jlayton/linux)
Pull file locking change from Jeff Layton: "Only an email address change to the MAINTAINERS file"

* tag 'locks-v3.15-3' of git://git.samba.org/jlayton/linux:
  MAINTAINERS: email address change for Jeff Layton
-
Linus Torvalds authored (merge from git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux)
Pull arm64 fixes from Catalin Marinas: "These are mostly arm64 fixes with an additional arm(64) platform fix for the initialisation of vexpress clocks (the latter only affecting arm64; the arch/arm64 code is SoC agnostic and does not rely on early SoC-specific calls)

  - vexpress platform clocks initialisation moved earlier following the arm64 move of of_clk_init() call in a previous commit

  - Default DMA ops changed to non-coherent to preserve compatibility with 32-bit ARM DT files. The "dma-coherent" property can be used to explicitly mark a device coherent. The Applied Micro DT file has been updated to avoid DMA cache maintenance for the X-Gene SATA controller (the only arm64 related driver with such assumption in -rc mainline)

  - Fixmap correction for earlyprintk

  - kern_addr_valid() fix for huge pages"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  vexpress: Initialise the sysregs before setting up the clocks
  arm64: Mark the Applied Micro X-Gene SATA controller as DMA coherent
  arm64: Use bus notifiers to set per-device coherent DMA ops
  arm64: Make default dma_ops to be noncoherent
  arm64: fixmap: fix missing sub-page offset for earlyprintk
  arm64: Fix for the arm64 kern_addr_valid() function
-