- 22 Sep, 2014 2 commits
-
-
Robin Murphy authored
Commit 591c1e ("of: configure the platform device dma parameters) introduced a common mechanism to configure DMA from DT properties. AMBA devices created from DT can take advantage of this, too. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Yi Li authored
SMBIOS is important for server hardware vendors: it implements a spec for providing descriptive information about the platform, such as serial numbers, the physical layout of ports, and build configuration data. This has been tested with the dmidecode and lshw tools. This patch adds the call to dmi_scan_machine() to arm64_enter_virtual_mode(), as that is the point where the EFI Configuration Tables are registered as being available. It needs to be in an early_initcall anyway, as dmi_id_init(), which is an arch_initcall itself, depends on dmi_scan_machine() having been called already. Signed-off-by: Yi Li <yi.li@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
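A hedged sketch of the initcall ordering this relies on; the real function also performs the EFI virtual-mode setup, which is omitted here:

    #include <linux/dmi.h>
    #include <linux/init.h>

    static int __init arm64_enter_virtual_mode(void)
    {
    	/* the EFI Configuration Tables are registered by this point,
    	 * so the SMBIOS entry point can be located and parsed */
    	dmi_scan_machine();
    	return 0;
    }
    early_initcall(arm64_enter_virtual_mode);

Running this as an early_initcall guarantees dmi_scan_machine() happens before dmi_id_init(), which is an arch_initcall.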
-
- 19 Sep, 2014 1 commit
-
-
Catalin Marinas authored
The aarch64_insn_gen_branch_imm() function takes an enum as the last argument rather than a bool. It happens to work because AARCH64_INSN_BRANCH_LINK matches 'true', but it is better to use the actual type. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
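An illustrative sketch of the call-site fix (the surrounding caller is hypothetical):

    /* before: compiled only because AARCH64_INSN_BRANCH_LINK == 1 == true */
    insn = aarch64_insn_gen_branch_imm(pc, addr, true);

    /* after: pass the enum aarch64_insn_branch_type value that the
     * prototype actually declares */
    insn = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);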
-
- 17 Sep, 2014 1 commit
-
-
Ganapatrao Kulkarni authored
Initialize max_mapnr using the set_max_mapnr() helper function instead of referencing the variable directly. Also, do not add PHYS_PFN_OFFSET to max_pfn, since it already contains it. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
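A hedged sketch of the resulting mem_init() change:

    /* before: poked max_mapnr directly and double-counted the offset */
    max_mapnr = pfn_to_page(max_pfn + PHYS_PFN_OFFSET) - mem_map;

    /* after: max_pfn already includes PHYS_PFN_OFFSET */
    set_max_mapnr(pfn_to_page(max_pfn) - mem_map);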
-
- 15 Sep, 2014 2 commits
-
-
Jon Masters authored
The kernel wants to enable reporting of asynchronous interrupts (i.e. System Errors) as early as possible, but if this happens too early, any System Error pending on initial entry into the kernel may never be reported where a user can see it. This occurs when the kernel is configured with CONFIG_PANIC_ON_OOPS set and enabled (by default or on the command line): the kernel panics as intended, but the log messages describing the failure remain only in the kernel ring buffer and are never flushed out to the (not yet configured) console. Therefore, this patch moves the enabling of asynchronous interrupts during early setup to as early a point as is reasonable, but after parsing any earlycon parameters and setting up earlycon. Signed-off-by: Jon Masters <jcm@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
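A rough sketch of the resulting ordering in setup_arch(); surrounding code is omitted:

    parse_early_param();	/* earlycon= is handled here */

    /* Unmask asynchronous aborts as early as possible, but only once an
     * earlycon exists to surface any pending System Error. */
    local_async_enable();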
-
Mark Charlebois authored
Remove '#' from the immediate parameter in AArch64 inline assembly in mmu.c. This code now works with both gcc and clang. Signed-off-by: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Behan Webster <behanw@converseincode.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 Sep, 2014 7 commits
-
-
Laura Abbott authored
The start address needs to actually be updated after it is detected to be unaligned. Adjust both it and the end address properly. Reported-by: Zi Shen Lim <zlim.lnx@gmail.com> Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
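A hedged sketch of the fix; the variable names are assumed from the related page-attribute work later in this log:

    if (!IS_ALIGNED(addr, PAGE_SIZE)) {
    	/* round start down and recompute end so both stay page-aligned */
    	start &= PAGE_MASK;
    	end = start + size;
    	WARN_ON_ONCE(1);
    }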
-
Daniel Borkmann authored
On ARM64, when the BPF JIT compiler fills the JIT image body with opcodes during translation of eBPF into ARM64 opcodes, we may fail for several reasons during that phase: one being that we jump to the notyet label for not-yet-supported eBPF instructions such as BPF_ST. In that case we only free the offsets, but not the actual allocated target image where the opcodes are stored. Fix it by calling module_free() at dismantle time in case of errors. Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
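A hedged sketch of the error path after the fix (2014-era API; module_free() was later renamed module_memfree()):

    if (build_body(&ctx)) {
    	/* previously only ctx.offset was freed on this path,
    	 * leaking the allocated JIT image */
    	module_free(NULL, ctx.image);
    	goto out;
    }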
-
Catalin Marinas authored
* cpuidle:
  arm64: add PSCI CPU_SUSPEND based cpu_suspend support
  arm64: kernel: introduce cpu_init_idle CPU operation
  arm64: kernel: refactor the CPU suspend API for retention states
  Documentation: arm: define DT idle states bindings
-
Lorenzo Pieralisi authored
This patch implements the cpu_suspend cpu operations method through the PSCI CPU SUSPEND API. The PSCI implementation translates the idle state index passed by the cpu_suspend core call into a valid PSCI state, according to the PSCI states initialized at boot through the cpu_init_idle() CPU operations hook. The PSCI CPU suspend operation hook checks whether the PSCI state is a standby state. If it is, it calls the PSCI suspend implementation straight away, without saving any context. If the state is a power-down state, the kernel calls the __cpu_suspend API (which saves the CPU context) and passes the PSCI suspend finisher as a parameter, so that PSCI can be called by the __cpu_suspend implementation, after saving and flushing the context, as the last function before power down. For power-down states, the entry point is set to the physical address of cpu_resume, which represents the default kernel execution address following a CPU reset. Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
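A condensed sketch of the suspend hook described above; helper and field names are assumptions drawn from the commit text, not verified against the tree:

    static int cpu_psci_cpu_suspend(unsigned long index)
    {
    	struct psci_power_state *state = __this_cpu_read(psci_power_state);

    	if (!state)
    		return -EOPNOTSUPP;

    	/* standby states retain context: call PSCI directly */
    	if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY)
    		return psci_ops.cpu_suspend(state[index - 1], 0);

    	/* power-down states: save context via __cpu_suspend(), which
    	 * invokes the finisher as the last step before power down; the
    	 * entry point handed to PSCI is cpu_resume's physical address */
    	return __cpu_suspend(index, psci_suspend_finisher);
    }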
-
Lorenzo Pieralisi authored
The CPUidle subsystem on ARM64 machines requires the idle states implementation back-end to initialize idle state parameters at boot. This patch adds a hook to the CPU operations structure that should be initialized by the CPU operations back-end in order to provide a function that initializes CPU idle states. This patch also adds to the arm64 kernel the infrastructure required to export the CPU-operations-based initialization interface, so that drivers (i.e. CPUidle) can use it when they are initialized at probe time. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
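A minimal sketch of the new hook and the exported wrapper, with structure details abridged and names assumed from the commit text:

    /* new member in struct cpu_operations */
    int (*cpu_init_idle)(struct device_node *, unsigned int cpu);

    /* arch/arm64/kernel/cpuidle.c -- sketch of the exported interface */
    int cpu_init_idle(unsigned int cpu)
    {
    	int ret = -EOPNOTSUPP;
    	struct device_node *cpu_node = of_cpu_device_node_get(cpu);

    	if (!cpu_node)
    		return -ENODEV;

    	if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_init_idle)
    		ret = cpu_ops[cpu]->cpu_init_idle(cpu_node, cpu);

    	of_node_put(cpu_node);
    	return ret;
    }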
-
Lorenzo Pieralisi authored
CPU suspend is the standard kernel interface to be used to enter low-power states on ARM64 systems. The current cpu_suspend implementation by default assumes that all low-power states lose the CPU context, so the CPU registers must be saved and cleaned to DRAM upon state entry. Furthermore, the current cpu_suspend() implementation assumes that if the CPU suspend back-end method returns when called, this has to be considered an error regardless of the return code (which can be successful), since the CPU was not expected to return from a code path that is different from the cpu_resume code path - e.g. returning from the reset vector.

All in all, this means that the current API does not cope well with low-power states that preserve the CPU context when entered (i.e. retention states): the context is saved for nothing on state entry for those states, and a successful state entry can return as a normal function return, which is considered an error by the current CPU suspend implementation.

This patch refactors the cpu_suspend() API so that it can be split into two separate functionalities. The arm64 cpu_suspend API just provides a wrapper around the CPU operations suspend hook. A new function is introduced (for architecture code use only) for states that require context saving upon entry:

__cpu_suspend(unsigned long arg, int (*fn)(unsigned long))

__cpu_suspend() saves the context on function entry and calls the so-called suspend finisher (i.e. fn) to complete the suspend operation. The finisher is not expected to return, unless it fails, in which case the error is propagated back to the __cpu_suspend caller.

The API refactoring results in the following pseudo-code call sequence for a suspending CPU, when triggered from a kernel subsystem:

/*
 * int cpu_suspend(unsigned long idx)
 * @idx: idle state index
 */
{
-> cpu_suspend(idx)
	|---> CPU operations suspend hook called, if present
		|--> if (retention_state)
			|--> direct suspend back-end call (e.g. PSCI suspend)
		     else
			|--> __cpu_suspend(idx, &back_end_finisher);
}

By refactoring the cpu_suspend API this way, the CPU operations back-end has a chance to detect whether idle states require state saving or not, and can call the required suspend operations accordingly, either through a simple function call or indirectly through __cpu_suspend(), which carries out state saving and suspend-finisher dispatching to complete idle state entry. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Lorenzo Pieralisi authored
ARM-based platforms implement a variety of power management schemes that allow processors to enter idle states at run-time. The parameters defining these idle states vary on a per-platform basis, forcing the OS to hardcode the state parameters in platform-specific static tables, whose size grows as the number of platforms supported in the kernel increases and which hamper device driver standardization. Therefore, this patch aims at standardizing idle state device tree bindings for ARM platforms. The bindings define idle state parameters, inclusive of entry methods and state latencies, to allow operating systems to retrieve the configuration entries from the device tree and initialize the related power management drivers, paving the way for common code in the kernel to deal with idle states and removing the need for static data in current and previous kernel versions.

ARM64 platforms require the DT to define an entry-method property for idle states. On systems implementing PSCI as an enable-method for entering low-power states, the PSCI CPU suspend method requires a power_state parameter to be passed to the PSCI CPU suspend function. This parameter is power-state- and platform-specific, and therefore must be provided by firmware to the OS in order to enable a proper call sequence. Thus, this patch also adds a property to the PSCI bindings that describes how the PSCI CPU suspend power_state parameter should be defined in DT in all device nodes that rely on the PSCI CPU suspend method. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Rob Herring <robh@kernel.org> Reviewed-by: Sebastian Capella <sebcape@gmail.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 Sep, 2014 27 commits
-
-
Robert Richter authored
Raise the current maximum limit to 64. This is needed for Cavium's Thunder systems, which will have at least 48 cores per die. The change keeps the current memory footprint of the cpumask structures and does not break existing code. Setting the maximum to 64 CPUs still boots systems with fewer CPUs; Mark's Juno happily booted with an NR_CPUS=64 kernel. Tested on our Thunder system with 48 cores: we could see interrupts on all cores. Cc: Radha Mohan Chintakuntla <rchintakuntla@cavium.com> Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
The JIT compiler emits A64 instructions. It supports eBPF only; legacy BPF is supported thanks to conversion by the BPF core. JIT is enabled in the same way as for other architectures: echo 1 > /proc/sys/net/core/bpf_jit_enable Or for additional compiler output: echo 2 > /proc/sys/net/core/bpf_jit_enable See Documentation/networking/filter.txt for more information. The implementation passes all 57 tests in lib/test_bpf.c on the ARMv8 Foundation Model :) Also tested by Will on the Juno platform. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate logical (shifted register) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
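The generators in this series all share one enum-driven pattern; a hedged usage sketch for this one, encoding "and x0, x1, x2":

    u32 insn = aarch64_insn_gen_logical_shifted_reg(AARCH64_INSN_REG_0,  /* Rd */
    						    AARCH64_INSN_REG_1,  /* Rn */
    						    AARCH64_INSN_REG_2,  /* Rm */
    						    0,                   /* shift */
    						    AARCH64_INSN_VARIANT_64BIT,
    						    AARCH64_INSN_LOGIC_AND);

The remaining aarch64_insn_gen_*() helpers introduced below follow the same shape, differing only in their operand enums, so a single example is shown here.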
-
Zi Shen Lim authored
Introduce function to generate data-processing (3 source) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate data-processing (2 source) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate data-processing (1 source) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate add/subtract (shifted register) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate move wide (immediate) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate bitfield instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate add/subtract (immediate) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate load/store pair instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate load/store (register offset) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate conditional branch (immediate) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate unconditional branch (register) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Zi Shen Lim authored
Introduce function to generate compare & branch (immediate) instructions. Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Behan Webster authored
The global register current_stack_pointer holds the current stack pointer. This change supports being able to compile the kernel with both gcc and clang. Author: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
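A hedged sketch of the kind of use this enables (the mask follows arm64's THREAD_SIZE-aligned stacks):

    /* derive the current thread_info from the stack pointer without a
     * function-local register variable */
    static inline struct thread_info *current_thread_info(void)
    {
    	return (struct thread_info *)
    		(current_stack_pointer & ~(THREAD_SIZE - 1));
    }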
-
Mark Charlebois authored
To support both clang and gcc, use the global stack register variable instead of a local register variable. Author: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Behan Webster authored
Use the global current_stack_pointer to get the value of the stack pointer. This change supports being able to compile the kernel with both gcc and clang. Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Behan Webster authored
Use the global current_stack_pointer to get the value of the stack pointer. This change supports being able to compile the kernel with both gcc and clang. Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Behan Webster authored
Use the global current_stack_pointer to get the value of the stack pointer. This change supports being able to compile the kernel with both gcc and clang. Signed-off-by: Behan Webster <behanw@converseincode.com> Signed-off-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Behan Webster authored
Define a global named register for current_stack_pointer. The use of this new variable guarantees that both gcc and clang can access this register in C code. Signed-off-by: Behan Webster <behanw@converseincode.com> Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de> Reviewed-by: Mark Charlebois <charlebm@gmail.com> Reviewed-by: Olof Johansson <olof@lixom.net> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
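The definition itself is small; roughly:

    /* arch/arm64/include/asm/thread_info.h -- sketch */
    register unsigned long current_stack_pointer asm ("sp");

Because the variable is declared at file scope, both gcc and clang emit a direct read of sp wherever it is referenced from C code, which local register variables do not guarantee under clang.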
-
Laura Abbott authored
In a similar fashion to other architectures, add the infrastructure and Kconfig option to enable DEBUG_SET_MODULE_RONX support. When enabled, module ranges will be marked read-only/no-execute as appropriate. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [will: fixed off-by-one in module end check] Signed-off-by: Will Deacon <will.deacon@arm.com>
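A hedged sketch of one of the helpers this infrastructure provides; change_memory_common() is the assumed internal worker that walks the range and flips PTE bits:

    int set_memory_ro(unsigned long addr, int numpages)
    {
    	return change_memory_common(addr, numpages,
    				    __pgprot(PTE_RDONLY),	/* set */
    				    __pgprot(PTE_WRITE));	/* clear */
    }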
-
Laura Abbott authored
It is useful at times to be able to change individual bits in ptes. Introduce functions for this and update the existing pte_mk* functions to use these primitives. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [will: added missing inline keyword for new header functions] Signed-off-by: Will Deacon <will.deacon@arm.com>
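A sketch of the primitives and one rewritten helper, matching the commit's description (exact bit names assumed):

    static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
    {
    	pte_val(pte) |= pgprot_val(prot);
    	return pte;
    }

    static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
    {
    	pte_val(pte) &= ~pgprot_val(prot);
    	return pte;
    }

    /* the pte_mk* helpers then compose these, e.g.: */
    static inline pte_t pte_mkwrite(pte_t pte)
    {
    	return set_pte_bit(pte, __pgprot(PTE_WRITE));
    }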
-
Arun Chandran authored
The current soft_restart() and setup_restart implementations incorrectly assume that the compiler will not spill/fill values to/from the stack. However, this assumption seems to be wrong, as revealed by the disassembly of the currently existing code (v3.16) built with Linaro GCC 4.9-2014.05:

ffffffc000085224 <soft_restart>:
ffffffc000085224:	a9be7bfd	stp	x29, x30, [sp,#-32]!
ffffffc000085228:	910003fd	mov	x29, sp
ffffffc00008522c:	f9000fa0	str	x0, [x29,#24]
ffffffc000085230:	94003d21	bl	ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234:	94003b33	bl	ffffffc000093f00 <flush_cache_all>
ffffffc000085238:	94003dfa	bl	ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c:	94003b31	bl	ffffffc000093f00 <flush_cache_all>
ffffffc000085240:	b0003321	adrp	x1, ffffffc0006ea000 <reset_devices>
ffffffc000085244:	f9400fa0	ldr	x0, [x29,#24]	----> spilled addr
ffffffc000085248:	f942fc22	ldr	x2, [x1,#1528]	----> global memstart_addr
ffffffc00008524c:	f0000061	adrp	x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250:	91290021	add	x1, x1, #0xa40
ffffffc000085254:	8b010041	add	x1, x2, x1
ffffffc000085258:	d2c00802	mov	x2, #0x4000000000	// #274877906944
ffffffc00008525c:	8b020021	add	x1, x1, x2
ffffffc000085260:	d63f0020	blr	x1
...

Here the compiler generates memory accesses after the cache is disabled, loading stale values for the spilled value and the global variable. As we cannot control when the compiler will access memory, we must rewrite the functions in assembly to stash the values we need in registers prior to disabling the cache, avoiding the use of memory. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Arun Chandran <achandran@mvista.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
If we cannot relocate the kernel Image to its preferred offset (base of DRAM plus TEXT_OFFSET), relocate it instead to the lowest available 2 MB boundary plus TEXT_OFFSET. We may lose a bit of memory at the low end, but we can still proceed normally otherwise. Acked-by: Mark Salter <msalter@redhat.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Leif Lindholm <leif.lindholm@linaro.org> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
The static memory footprint of a kernel Image at boot is larger than the Image file itself. Things like .bss data and initial page tables are allocated statically but populated dynamically so their content is not contained in the Image file. However, if EFI (or GRUB) has loaded the Image at precisely the desired offset of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have to make sure that the allocation done by the PE/COFF loader is large enough. Fix this by growing the PE/COFF .text section to cover the entire static memory footprint. The part of the section that is not covered by the payload will be zero initialised by the PE/COFF loader. Acked-by: Mark Salter <msalter@redhat.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Leif Lindholm <leif.lindholm@linaro.org> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
In certain cases the cpu-release-addr of a CPU may not fall in the linear mapping (e.g. when the kernel is loaded above this address due to the presence of other images in memory). This is problematic for the spin-table code as it assumes that it can trivially convert a cpu-release-addr to a valid VA in the linear map. This patch modifies the spin-table code to use a temporary cached mapping to write to a given cpu-release-addr, enabling us to support addresses regardless of whether they are covered by the linear mapping. Acked-by: Leif Lindholm <leif.lindholm@linaro.org> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Tested-by: Mark Salter <msalter@redhat.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> [ardb: added (__force void *) cast] Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
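A condensed sketch of the prepare path after the change, with error handling trimmed and names assumed from the spin-table code:

    static int smp_spin_table_cpu_prepare(unsigned int cpu)
    {
    	__le64 __iomem *release_addr;

    	if (!cpu_release_addr[cpu])
    		return -ENODEV;

    	/* temporary cached mapping: valid whether or not the address
    	 * falls inside the linear map */
    	release_addr = ioremap_cache(cpu_release_addr[cpu],
    				     sizeof(*release_addr));
    	if (!release_addr)
    		return -ENOMEM;

    	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
    	__flush_dcache_area((__force void *)release_addr,
    			    sizeof(*release_addr));
    	sev();	/* wake CPUs spinning in wfe on the pen */

    	iounmap(release_addr);
    	return 0;
    }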
-