- 29 Jul, 2016 1 commit
-
-
Ard Biesheuvel authored
Due to the untyped KIMAGE_VADDR constant, the linker may not notice that the __rela_offset and __dynsym_offset expressions are absolute values (i.e., are not subject to relocation). This does not matter for KASLR, but it does confuse kallsyms in relative mode, since it uses the lowest non-absolute symbol address as the anchor point, and expects all other symbol addresses to be within 4 GB of it. Fix this by qualifying these expressions as ABSOLUTE() explicitly. Fixes: 0cd3defe ("arm64: kernel: perform relocation processing from ID map") Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
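For illustration, the fix boils down to making the two offset definitions in arch/arm64/kernel/vmlinux.lds.S explicitly absolute. The lines below are a sketch reconstructed from the description above, not quoted from the patch:

```ld
/* vmlinux.lds.S (sketch): force the offsets to be link-time constants,
 * so kallsyms' relative mode never treats them as relocatable symbols */
__rela_offset   = ABSOLUTE(ADDR(.rela) - KIMAGE_VADDR);
__dynsym_offset = ABSOLUTE(ADDR(.dynsym) - KIMAGE_VADDR);
```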
-
- 27 Jul, 2016 1 commit
-
-
Catalin Marinas authored
Commit 0a8ea52c ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature") inadvertently removed the arch/arm prototype instead of the arm64 one introduced by the original patch. There should not be any bisection issues since this function is not called from anywhere else (it could as well be removed from arch/arm at some point). Fixes: 0a8ea52c ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 Jul, 2016 1 commit
-
-
Catalin Marinas authored
Selecting CONFIG_RANDOMIZE_BASE=y and CONFIG_MODULES=n fails to build the module PLTs support:

  CC      arch/arm64/kernel/module-plts.o
  /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c: In function ‘module_emit_plt_entry’:
  /work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c:32:49: error: dereferencing pointer to incomplete type ‘struct module’

This patch selects ARM64_MODULE_PLTS conditionally only if MODULES is enabled. Fixes: f80fb3a3 ("arm64: add support for kernel ASLR") Cc: <stable@vger.kernel.org> # 4.6+ Reported-by: Jeff Vander Stoep <jeffv@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
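The change amounts to making the select conditional in the arm64 Kconfig; a minimal sketch (surrounding options elided, exact option list assumed):

```kconfig
config RANDOMIZE_BASE
	bool "Randomize the address of the kernel image"
	select ARM64_MODULE_PLTS if MODULES
	select RELOCATABLE
```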
-
- 25 Jul, 2016 2 commits
-
-
Ard Biesheuvel authored
The kernel page table creation routines are accessible to other subsystems (e.g., EFI) via the create_pgd_mapping() entry point, which allows mappings to be created that are not covered by init_mm. Since generic code such as apply_to_page_range() may expect translation table pages that are not associated with init_mm to be covered by fully constructed struct pages, add a call to pgtable_page_ctor() in the alloc function used by create_pgd_mapping. Since it is no longer used by create_mapping_late(), also update the name of this function to better reflect its purpose. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Laura Abbott <labbott@redhat.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
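A minimal sketch of what such an allocation hook can look like; the function name and GFP flags here are assumptions rather than a verbatim copy of the patch:

```c
static phys_addr_t pgd_pgtable_alloc(void)
{
	/* allocate a zeroed page and give it a fully constructed struct page,
	 * so generic code such as apply_to_page_range() can rely on it */
	void *ptr = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);

	if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
		BUG();

	/* ensure the zeroed page is visible to the page table walker */
	dsb(ishst);
	return __pa(ptr);
}
```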
-
Ard Biesheuvel authored
The only purpose served by create_mapping_late() is to remap the already mapped .text and .rodata kernel segments with read-only permissions. Since we no longer allow block mappings to be split or merged, create_mapping_late() should not pass an allocation function pointer into __create_pgd_mapping(). So pass NULL instead. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Laura Abbott <labbott@redhat.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 Jul, 2016 5 commits
-
-
Catalin Marinas authored
* kprobes:
  arm64: kprobes: Add KASAN instrumentation around stack accesses
  arm64: kprobes: Cleanup jprobe_return
  arm64: kprobes: Fix overflow when saving stack
  arm64: kprobes: WARN if attempting to step with PSTATE.D=1
  kprobes: Add arm64 case in kprobe example module
  arm64: Add kernel return probes support (kretprobes)
  arm64: Add trampoline code for kretprobes
  arm64: kprobes instruction simulation support
  arm64: Treat all entry code as non-kprobe-able
  arm64: Blacklist non-kprobe-able symbol
  arm64: Kprobes with single stepping support
  arm64: add conditional instruction simulation support
  arm64: Add more test functions to insn.c
  arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature
-
Suzuki K Poulose authored
Passing "nosmp" should boot the kernel with a single processor, without provision to enable secondary CPUs even if they are present. "nosmp" is implemented by setting maxcpus=0. At the moment we still mark the secondary CPUs present even with nosmp, which allows the userspace to bring them up. This patch corrects the smp_prepare_cpus() to honor the maxcpus == 0. Commit 44dbcc93 ("arm64: Fix behavior of maxcpus=N") fixed the behavior for maxcpus >= 1, but broke maxcpus = 0. Fixes: 44dbcc93 ("arm64: Fix behavior of maxcpus=N") Cc: <stable@vger.kernel.org> # 4.7+ Cc: Will Deacon <will.deacon@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> [catalin.marinas@arm.com: updated code comment] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
In smp_prepare_boot_cpu(), we invoke cpuinfo_store_boot_cpu to store the cpuinfo in a per-cpu ptr, before initialising the per-cpu offset for the boot CPU. This patch reorders the sequence to make sure we initialise the per-cpu offset before accessing the per-cpu area. Commit 4b998ff1 ("arm64: Delay cpuinfo_store_boot_cpu") fixed the issue where we modified the per-cpu area even before the kernel initialises the per-cpu areas, but failed to wait until the boot cpu updated its offset. Fixes: 4b998ff1 ("arm64: Delay cpuinfo_store_boot_cpu") Cc: <stable@vger.kernel.org> # 4.4+ Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
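The reordering is essentially the following (a sketch of smp_prepare_boot_cpu(), assuming the helper names used elsewhere in the arm64 SMP code):

```c
void __init smp_prepare_boot_cpu(void)
{
	/* set up the boot CPU's per-cpu offset first ... */
	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
	/* ... and only then touch the per-cpu cpuinfo area */
	cpuinfo_store_boot_cpu();
}
```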
-
Catalin Marinas authored
This patch disables KASAN around the memcpy from/to the kernel or IRQ stacks to avoid warnings like below:

  BUG: KASAN: stack-out-of-bounds in setjmp_pre_handler+0xe4/0x170 at addr ffff800935cbbbc0
  Read of size 128 by task swapper/0/1
  page:ffff7e0024d72ec0 count:0 mapcount:0 mapping:          (null) index:0x0
  flags: 0x1000000000000000()
  page dumped because: kasan: bad access detected
  CPU: 4 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4+ #1
  Hardware name: ARM Juno development board (r0) (DT)
  Call trace:
  [<ffff20000808ad88>] dump_backtrace+0x0/0x280
  [<ffff20000808b01c>] show_stack+0x14/0x20
  [<ffff200008563a64>] dump_stack+0xa4/0xc8
  [<ffff20000824a1fc>] kasan_report_error+0x4fc/0x528
  [<ffff20000824a5e8>] kasan_report+0x40/0x48
  [<ffff20000824948c>] check_memory_region+0x144/0x1a0
  [<ffff200008249814>] memcpy+0x34/0x68
  [<ffff200008c3ee2c>] setjmp_pre_handler+0xe4/0x170
  [<ffff200008c3ec5c>] kprobe_breakpoint_handler+0xec/0x1d8
  [<ffff2000080853a4>] brk_handler+0x5c/0xa0
  [<ffff2000080813f0>] do_debug_exception+0xa0/0x138

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
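The workaround amounts to bracketing the raw stack copy with the KASAN disable/enable helpers, roughly as in this sketch (the surrounding bookkeeping and variable names are simplified):

```c
	/* sketch: raw copy of the active (task or IRQ) stack into the
	 * jprobe scratch area; KASAN cannot validate this access, so
	 * temporarily disable reporting around it */
	kasan_disable_current();
	memcpy(kcb->jprobes_stack, (void *)stack_ptr, min_stack_size);
	kasan_enable_current();
```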
-
Marc Zyngier authored
jprobe_return seems to have aged badly: comments referring to non-existent behaviours, and a dangerous habit of messing with registers without telling the compiler. This patch applies the following remedies:

- Fix the comments to describe the actual behaviour
- Tidy up the asm sequence to directly assign the stack pointer without clobbering extra registers
- Mark the rest of the function as unreachable() so that the compiler knows that there is no need for an epilogue
- Stop making jprobe_return_break a global function (you really don't want to call that guy, and it isn't even a function).

Tested with tcp_probe. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Jul, 2016 1 commit
-
-
Marc Zyngier authored
The MIN_STACK_SIZE macro tries to evaluate how much stack space needs to be saved in the jprobes_stack array, sized at 128 bytes. When using the IRQ stack, said macro can happily return up to IRQ_STACK_SIZE, which is 16kB. Mayhem follows. This patch fixes things by getting rid of the crazy macro and limiting the copy to be at most the size of the jprobes_stack array, no matter which stack we're on. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
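In effect the fix bounds the copy by the destination array, along the lines of this sketch (the field and variable names are illustrative, not the exact kernel code):

```c
	/* never save more than the jprobes_stack array can hold,
	 * regardless of whether we are on the task or the IRQ stack */
	kcb->jprobes_stack_sz = min_t(u64, sizeof(kcb->jprobes_stack),
				      stack_end - stack_ptr);
	memcpy(kcb->jprobes_stack, (void *)stack_ptr, kcb->jprobes_stack_sz);
```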
-
- 19 Jul, 2016 16 commits
-
-
Will Deacon authored
Stepping with PSTATE.D=1 is bad news. The step won't generate a debug exception and we'll likely walk off into random data structures. This should never happen, but when it does, it's a PITA to debug. Add a WARN_ON to shout if we realise this is about to take place. Signed-off-by: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
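The check itself is a one-liner; a sketch of where it might sit, assuming the single-step enable path in debug-monitors.c (helper names and ordering are assumptions):

```c
void kernel_enable_single_step(struct pt_regs *regs)
{
	/* a step taken with PSTATE.D set can never deliver the debug
	 * exception, so complain loudly instead of wandering off */
	WARN_ON(regs->pstate & PSR_D_BIT);

	set_regs_spsr_ss(regs);
	mdscr_write(mdscr_read() | DBG_MDSCR_SS);
	enable_debug_monitors(DBG_ACTIVE_EL1);
}
```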
-
Will Deacon authored
The debug enable/disable macros are not used anywhere in the kernel, so remove them from irqflags.h Reported-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
There is no need to explicitly clear the SS bit immediately before setting it unconditionally. Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
Clearing PSTATE.D is one of the requirements for generating a debug exception. The arm64 booting protocol requires that PSTATE.D is set, since many of the debug registers (for example, the hw_breakpoint registers) are UNKNOWN out of reset and could potentially generate spurious, fatal debug exceptions in early boot code if PSTATE.D was clear. Once the debug registers have been safely initialised, PSTATE.D is cleared, however this is currently broken for two reasons:

(1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall runs after SMP (and the scheduler) have been initialised, there is no guarantee that it is actually running on the boot CPU. In this case, the boot CPU is left with PSTATE.D set and is not capable of generating debug exceptions.

(2) In a preemptible kernel, we may explicitly schedule on the IRQ return path to EL1. If an IRQ occurs with PSTATE.D set in the idle thread, then we may schedule the kthread_init thread, run the postcore_initcall to clear PSTATE.D and then context switch back to the idle thread before returning from the IRQ. The exception return path will then restore PSTATE.D from the stack, and set it again.

This patch fixes the problem by moving the clearing of PSTATE.D earlier to proc.S. This has the desirable effect of clearing it in one place for all CPUs, long before we have to worry about the scheduler or any exception handling. We ensure that the previous reset of MDSCR_EL1 has completed before unmasking the exception, so that any spurious exceptions resulting from UNKNOWN debug registers are not generated. Without this patch applied, the kprobes selftests have been seen to fail under KVM, where we end up attempting to step the OOL instruction buffer with PSTATE.D set and therefore fail to complete the step. Cc: <stable@vger.kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
We currently define OBJCOPYFLAGS in the top-level arm64 Makefile, and thus these flags will be passed to all uses of objcopy, kernel-wide, for which they are not explicitly overridden. The flags we set are intended for converting vmlinux (and ELF) into Image (a raw binary), and thus the flags chosen are problematic for some other uses which do not expect a raw binary result, e.g. the upcoming lkdtm rodata test: http://www.openwall.com/lists/kernel-hardening/2016/06/08/2 This patch localises the objcopy flags such that they are only used for the vmlinux -> Image conversion. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Kees Cook <keescook@chromium.org> Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Vladimir Murzin authored
...and do not confuse source navigation tools ;) Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Sandeepa Prabhu authored
Add info prints in the sample kprobe handlers for ARM64. Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
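The addition is of this general shape in samples/kprobes/kprobe_example.c; the exact format string is illustrative:

```c
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
#ifdef CONFIG_ARM64
	/* dump the probed address, PC and PSTATE when the probe fires */
	pr_info("<%s> pre_handler: p->addr = 0x%p, pc = 0x%lx, pstate = 0x%lx\n",
		p->symbol_name, p->addr, (long)regs->pc, (long)regs->pstate);
#endif
	/* other architectures' prints elided */
	return 0;
}
```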
-
Sandeepa Prabhu authored
The pre-handler of this special 'trampoline' kprobe executes the return probe handler functions and restores the original return address in ELR_EL1. This way the saved pt_regs still hold the original register context to be carried back to the probed kernel function. Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
William Cohen authored
The trampoline code is used by kretprobes to capture a return from a probed function. This is done by saving the registers, calling the handler, and restoring the registers. The code then returns to the original saved caller return address. It is necessary to do this directly instead of using a software breakpoint because the code used in processing that breakpoint could itself be kprobe'd and cause a problematic reentry into the debug exception handler. Signed-off-by: William Cohen <wcohen@redhat.com> Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> [catalin.marinas@arm.com: removed unnecessary masking of the PSTATE bits] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Sandeepa Prabhu authored
Kprobes needs simulation of instructions that cannot be stepped from a different memory location, e.g. those instructions that use PC-relative addressing. In simulation, the behaviour of the instruction is implemented using a copy of pt_regs. The following instruction categories are simulated:

- All branching instructions (conditional, register, and immediate)
- Literal access instructions (load-literal, adr/adrp)

Conditional execution is limited to branching instructions in ARM v8. If the conditions in PSTATE do not match the condition fields of the opcode, the instruction is effectively a NOP. Thanks to Will Cohen for assorted suggested changes. Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Signed-off-by: William Cohen <wcohen@redhat.com> Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> [catalin.marinas@arm.com: removed linux/module.h include] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
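As an illustration of the approach, a conditional branch can be simulated entirely on the saved pt_regs, roughly as below; cond_check() and bcond_displacement() are hypothetical helper names standing in for the real decode routines:

```c
static void __kprobes simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs)
{
	int disp = 4;	/* condition false: the branch behaves as a NOP */

	/* hypothetical helpers: extract the cond field and test it against
	 * the flags in the saved PSTATE */
	if (cond_check(opcode & 0xf, regs->pstate & 0xffffffff))
		disp = bcond_displacement(opcode);	/* sign-extended imm19 << 2 */

	/* advance the saved PC instead of executing the instruction */
	instruction_pointer_set(regs, addr + disp);
}
```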
-
Pratyush Anand authored
Entry symbols are not kprobe safe. So blacklist them for kprobing. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> [catalin.marinas@arm.com: Do not include syscall wrappers in .entry.text] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Pratyush Anand authored
Add all function symbols which are called from do_debug_exception under NOKPROBE_SYMBOL, as they cannot be kprobed. Signed-off-by: Pratyush Anand <panand@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
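Marking a symbol works as in this sketch; brk_handler is just one example of a function reachable from do_debug_exception, with its body elided:

```c
static int brk_handler(unsigned long addr, unsigned int esr,
		       struct pt_regs *regs)
{
	/* ... breakpoint dispatch elided ... */
	return 0;
}
NOKPROBE_SYMBOL(brk_handler);	/* never place a kprobe inside this function */
```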
-
Sandeepa Prabhu authored
Add support for basic kernel probes (kprobes) and jump probes (jprobes) for ARM64. Kprobes uses the software breakpoint and single-step debug exceptions supported on ARM v8. A software breakpoint is placed at the probe address to trap kernel execution into the kprobe handler. ARM v8 supports enabling single stepping before the break exception return (ERET), with the next PC in the exception return address (ELR_EL1). The kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed, and enables single stepping. The PC is set to the out-of-line slot address before the ERET. With this scheme, the instruction is executed with the exact same register context except for the PC (and DAIF) registers.

The debug mask (PSTATE.D) is enabled only when single stepping a recursive kprobe, e.g. during kprobes re-entry, so that the probed instruction can be single stepped within the kprobe handler exception context. The recursion depth of kprobes is always 2, i.e. upon probe re-entry, any further re-entry is prevented by not calling handlers, and the case is counted as a missed kprobe.

Single stepping from the out-of-line slot has a drawback for PC-relative accesses like branching and symbolic literal access, as the offset from the new PC (the slot address) is not guaranteed to fit in the immediate value of the opcode. Such instructions need simulation, so probing them is rejected. Instructions generating exceptions or CPU mode changes are rejected for probing, as are exclusive load/store instructions. Additionally, the code is checked to see if it is inside an exclusive load/store sequence (code from Pratyush). System instructions are mostly enabled for stepping, except MSR/MRS accesses to the "DAIF" flags in PSTATE, which are not safe for probing.

This also changes arch/arm64/include/asm/ptrace.h to use include/asm-generic/ptrace.h. Thanks to Steve Capper and Pratyush Anand for several suggested changes. Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Pratyush Anand <panand@redhat.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
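A condensed sketch of the out-of-line single-step set-up described above; the field and helper names are simplified, not the exact kernel code:

```c
static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
				       struct kprobe_ctlblk *kcb)
{
	/* the slot holds a copy of the probed instruction */
	unsigned long slot = (unsigned long)p->ainsn.insn;

	/* remember where the step is expected to stop (simplified) */
	kcb->ss_ctx.ss_pending = true;
	kcb->ss_ctx.match_addr = slot + sizeof(kprobe_opcode_t);

	/* arm hardware single-step and divert the exception return (ERET)
	 * to the out-of-line slot instead of the original probe address */
	kernel_enable_single_step(regs);
	instruction_pointer_set(regs, slot);
}
```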
-
David A. Long authored
Cease using the arm32 arm_check_condition() function and replace it with a local version for use in deprecated instruction support on arm64. Also make the function table it uses available for future use by kprobes and/or uprobes. This function is derived from code written by Sandeepa Prabhu. Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
David A. Long authored
Certain instructions are hard to execute correctly out-of-line (as in kprobes). Test functions are added to insn.[hc] to identify these. The instructions include any that use PC-relative addressing, change the PC, or change interrupt masking. For efficiency and simplicity test functions are also added for small collections of related instructions. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
David A. Long authored
Add HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including supporting functions and defines. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> [catalin.marinas@arm.com: Remove unused functions] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
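This is the generic regs/stack access API also used by kprobes' argument fetching; a consumer can use it roughly as follows (the register offset and stack index here are purely illustrative):

```c
#include <linux/ptrace.h>

static void inspect_regs(struct pt_regs *regs)
{
	/* read a register by its byte offset into struct pt_regs */
	unsigned long x0 = regs_get_register(regs, offsetof(struct pt_regs, regs[0]));
	/* read the 2nd word on the kernel stack at the probe point */
	unsigned long val = regs_get_kernel_stack_nth(regs, 2);

	pr_info("x0=%lx stack[2]=%lx sp=%lx\n", x0, val, kernel_stack_pointer(regs));
}
```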
-
- 12 Jul, 2016 3 commits
-
-
Steve Capper authored
It can be useful for JIT software to be aware of MIDR_EL1 and REVIDR_EL1 to ascertain the presence of any core errata that could affect code generation. This patch exposes these registers through sysfs:

  /sys/devices/system/cpu/cpu$ID/regs/identification/midr_el1
  /sys/devices/system/cpu/cpu$ID/regs/identification/revidr_el1

where $ID is the cpu number. For big.LITTLE systems, one can have a mixture of cores (e.g. Cortex A53 and Cortex A57), thus all CPUs need to be enumerated. If the kernel does not have valid information to populate these entries with, an empty string is returned to userspace. Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Steve Capper <steve.capper@linaro.org> [suzuki.poulose@arm.com: ABI documentation updates, hotplug notifiers, kobject changes] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Kevin Brodsky authored
So far the arm64 clock_gettime() vDSO implementation only supported the following clocks, falling back to the syscall for the others:

- CLOCK_REALTIME{,_COARSE}
- CLOCK_MONOTONIC{,_COARSE}

This patch adds support for the CLOCK_MONOTONIC_RAW clock, taking advantage of the recent refactoring of the vDSO time functions. Like the non-_COARSE clocks, this only works when the "arch_sys_counter" clocksource is in use (allowing us to read the current time from the virtual counter register), otherwise we also have to fall back to the syscall. Most of the data is shared with CLOCK_MONOTONIC, and the algorithm is similar. The reference implementation in kernel/time/timekeeping.c shows that:

- CLOCK_MONOTONIC = tk->wall_to_monotonic + tk->xtime_sec + timekeeping_get_ns(&tk->tkr_mono)
- CLOCK_MONOTONIC_RAW = tk->raw_time + timekeeping_get_ns(&tk->tkr_raw)
- tkr_mono and tkr_raw are identical (in particular, same clocksource), except these members:
  * mult (only mono's multiplier is NTP-adjusted)
  * xtime_nsec (always 0 for raw)

Therefore, tk->raw_time and tkr_raw->mult are now also stored in the vDSO data page. Cc: Ali Saidi <ali.saidi@arm.com> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
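From userspace nothing changes except that the call below no longer needs to enter the kernel when the architected counter clocksource is active; this is just a plain libc usage example:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;

	/* serviced by the vDSO when the "arch_sys_counter" clocksource is in use */
	if (clock_gettime(CLOCK_MONOTONIC_RAW, &ts) == 0)
		printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
	return 0;
}
```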
-
Kevin Brodsky authored
Time functions are directly implemented in assembly in arm64, and it is desirable to keep it this way for performance reasons (everything fits in registers, so that the stack is not used at all). However, the current implementation is quite difficult to read and understand (even considering it's assembly). Additionally, due to the structure of __kernel_clock_gettime, which heavily uses conditional branches to share code between the different clocks, it is difficult to support a new clock without making the branches even harder to follow. This commit completely refactors the structure of clock_gettime (and gettimeofday along the way) while keeping exactly the same algorithms. We no longer try to share code; instead, macros provide common operations. This new approach comes with a number of advantages:

- In clock_gettime, clock implementations are no longer interspersed, making them much more readable. Additionally, macros only use registers passed as arguments or reserved with .req, this way it is easy to make sure that registers are properly allocated. To avoid a large number of branches in a given execution path, a jump table is used; a normal execution uses 3 unconditional branches.
- __do_get_tspec has been replaced with 2 macros (get_ts_clock_mono, get_clock_shifted_nsec) and explicit loading of data from the vDSO page. Consequently, clock_gettime and gettimeofday are now leaf functions, and saving x30 (lr) is no longer necessary.
- Variables protected by tb_seq_count are now loaded all at once, allowing to merge the seqcnt_read macro into seqcnt_check.
- For CLOCK_REALTIME_COARSE, removed an unused load of the wall to monotonic timespec.
- For CLOCK_MONOTONIC_COARSE, removed a few shift instructions.

Obviously, the downside of sharing less code is an increase in code size. However since the vDSO has its own code page, this does not really matter, as long as the size of the DSO remains below 4 kB. For now this should be all right:

                     Before  After
  vdso.so size (B)   2776    3000

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 Jul, 2016 2 commits
-
-
Kevin Brodsky authored
arm64/kernel/{vdso,signal}.c include vdso-offsets.h, as well as any file that includes asm/vdso.h. Therefore, vdso-offsets.h must be generated before these files are compiled. The current rules in arm64/kernel/Makefile do not actually enforce this, because even though $(obj)/vdso is listed as a prerequisite for vdso-offsets.h, this does not result in the intended effect of building the vdso subdirectory (before all the other objects). As a consequence, depending on the order in which the rules are followed, vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o are built. The current rules also impose an unnecessary dependency on vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary rebuilds. This is made obvious when using make -j:

  touch arch/arm64/kernel/vdso/gettimeofday.S && make -j$NCPUS arch/arm64/kernel

will sometimes result in none of arm64/kernel/*.o being rebuilt, sometimes all of them, or even just some of them.

It is quite difficult to ensure that a header is generated before it is used with recursive Makefiles by using normal rules. Instead, arch-specific generated headers are normally built in the archprepare recipe in the arch Makefile (see for instance arch/ia64/Makefile). Unfortunately, asm-offsets.h is included in gettimeofday.S, and must therefore be generated before vdso-offsets.h, which is not the case if archprepare is used. For this reason, a rule run after archprepare has to be used.

This commit adds rules in arm64/Makefile to build vdso-offsets.h during the prepare step, ensuring that vdso-offsets.h is generated before building anything. It also removes the now-unnecessary dependencies on vdso-offsets.h in arm64/kernel/Makefile. Finally, it removes the duplication of asm-offsets.h between arm64/kernel/vdso/ and include/generated/ and makes include/generated/vdso-offsets.h a target in arm64/kernel/vdso/Makefile. Cc: Will Deacon <will.deacon@arm.com> Cc: Michal Marek <mmarek@suse.com> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
This reverts commit 90f777be. While this commit was aimed at fixing the dependencies, with a large make -j the vdso-offsets.h file is not generated, leading to build failures. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 Jul, 2016 3 commits
-
-
Lorenzo Pieralisi authored
The current bus notifier in ARM64 (__iommu_attach_notifier) attempts to attach dma_ops to a device on the BUS_NOTIFY_ADD_DEVICE action notification. This will cause issues on ACPI based systems, where PCI devices can be added before the IOMMUs the devices are attached to have had a chance to be probed, causing failures on attempts to attach dma_ops in that the domain for the respective IOMMU may not be set up yet by the time the bus notifier is run.

Devices' dma_ops do not need to be set up until the matching device drivers are probed. This means that instead of running the notifier attaching dma_ops to devices (__iommu_attach_notifier) on BUS_NOTIFY_ADD_DEVICE, it can be run just before the device driver is bound to the device in question (on action BUS_NOTIFY_BIND_DRIVER), so that it is certain that its IOMMU group and domain are set up accordingly at the time the notifier is triggered.

This patch changes the notifier action upon which dma_ops are attached to devices and defers it to driver binding time, so that IOMMU devices have a chance to be probed and to register their bus notifiers before the dma_ops attach sequence for a device is actually carried out. As a result we also no longer need to worry about racing with iommu_bus_notifier(), or about retrying the queue in case devices were added too early on DT-based systems, so clean up the notifier itself plus the additional workaround from 722ec35f ("arm64: dma-mapping: fix handling of devices registered before arch_initcall"). Acked-by: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> [rm: get rid of other now-redundant bits] Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
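The core of the change is the action the notifier reacts to; a sketch of the callback, with the dma_ops plumbing elided:

```c
static int __iommu_attach_notifier(struct notifier_block *nb,
				   unsigned long action, void *data)
{
	struct device *dev = data;

	/* defer dma_ops set-up from device addition to driver bind time,
	 * when the device's IOMMU group/domain are guaranteed to exist */
	if (action != BUS_NOTIFY_BIND_DRIVER)
		return 0;

	/* ... attach the per-device dma_ops here ... */
	return 0;
}
```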
-
Marc Zyngier authored
On a big.LITTLE system, PMUs can be wired to CPUs using per-CPU interrupts (PPI). In this case, it is important to make sure that the enable/disable operations happen on the right set of CPUs. So instead of relying on the interrupt-affinity property, we can use the actual percpu affinity that DT exposes as part of the interrupt specifier. The DT binding is also updated to reflect the fact that the interrupt-affinity property shouldn't be used in that case. Acked-by: Rob Herring <robh@kernel.org> Tested-by: Caesar Wang <wxt@rock-chips.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
arch/arm64/kernel/{vdso,signal}.c include generated/vdso-offsets.h, and therefore the symbol offsets must be generated before these files are compiled. The current rules in arm64/kernel/Makefile do not actually enforce this, because even though $(obj)/vdso is listed as a prerequisite for vdso-offsets.h, this does not result in the intended effect of building the vdso subdirectory (before all the other objects). As a consequence, depending on the order in which the rules are followed, vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o are built. The current rules also impose an unnecessary dependency on vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary rebuilds.

This patch removes the arch/arm64/kernel/vdso/vdso-offsets.h file generation, leaving only the include/generated/vdso-offsets.h one. It adds a forced dependency check of the vdso-offsets.h file in arch/arm64/kernel/Makefile which, if not up to date according to the arch/arm64/kernel/vdso/Makefile rules (depending on vdso.so.dbg), will trigger the vdso/ subdirectory build and vdso-offsets.h re-generation. Automatic kbuild dependency rules between kernel/{vdso,signal}.c rules and vdso-offsets.h will guarantee that the vDSO object is built first, followed by the generated symbol offsets header file.

Reported-by: Kevin Brodsky <kevin.brodsky@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 Jul, 2016 5 commits
-
-
Ard Biesheuvel authored
The routine __create_pgd_mapping() does nothing except calling init_pgd(), which has no other callers. So fold the latter into the former. Also, drop a comment that has gone stale. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
Since efi_create_mapping() no longer generates block mappings and was the last user of the split_p*d code, remove these functions and the corresponding TLBI. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> [ardb: replace 'overlapping regions' with 'block mappings' in commit log] Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
When running the OS with a page size > 4 KB, we need to round up mappings for regions that are not aligned to the OS's page size. We already avoid block mappings for EfiRuntimeServicesCode/Data regions for other reasons, but in the unlikely event that other unaligned regions exist that have the EFI_MEMORY_RUNTIME attribute set, ensure that unaligned regions are always mapped down to pages. This way, the overlapping page is guaranteed not to be covered by a block mapping that needs to be split. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
To avoid triggering diagnostics in the MMU code that are finicky about splitting block mappings into more granular mappings, ensure that regions that are likely to appear in the Memory Attributes table as well as the UEFI memory map are always mapped down to pages. This way, we can use apply_to_page_range() instead of create_pgd_mapping() for the second pass, which cannot split or merge block entries, and operates strictly on PTEs. Note that this aligns the arm64 Memory Attributes table handling code with the ARM code, which already uses apply_to_page_range() to set the strict permissions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add a bool parameter 'allow_block_mappings' to create_pgd_mapping() and the various helper functions that it descends into, to give the caller control over whether block entries may be used to create the mapping. The UEFI runtime mapping routines will use this to avoid creating block entries that would need to split up into page entries when applying the permissions listed in the Memory Attributes firmware table. This also replaces the block_mappings_allowed() helper function that was added for DEBUG_PAGEALLOC functionality, but the resulting code is functionally equivalent (given that debug_page_alloc does not operate on EFI page table entries anyway). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
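The new parameter is simply threaded down from the public entry point, along these lines (a sketch, not the full call chain):

```c
void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
			       unsigned long virt, phys_addr_t size,
			       pgprot_t prot, bool allow_block_mappings)
{
	/* the caller decides whether block (section) entries are acceptable */
	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
			     late_pgtable_alloc, allow_block_mappings);
}
```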
-