- 16 Apr, 2019 3 commits
-
-
Andrew Murray authored
The ARMv8.5 DC CVADP instruction may be trapped to EL1 via SCTLR_EL1.UCI, so let's provide a handler for it. Just like the CVAP instruction, we use a 'sys' instruction instead of the 'dc' alias to avoid build issues with older toolchains.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Andrew Murray authored
The introduction of AT_HWCAP2 introduced accessors which ensure that hwcap features are set and tested appropriately. Let's now mandate access to elf_hwcap via these accessors by making elf_hwcap static within cpufeature.c.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Andrew Murray authored
As we will exhaust the first 32 bits of AT_HWCAP, let's start exposing AT_HWCAP2 to userspace to give us up to 64 caps. Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we prefer to expand into AT_HWCAP2 in order to provide a consistent view to userspace between ILP32 and LP64. However, internal to the kernel we prefer to continue to use the full space of elf_hwcap.

To reduce complexity and allow for future expansion, we now represent hwcaps in the kernel as ordinals and use a KERNEL_HWCAP_ prefix. This allows us to support automatic feature based module loading for all our hwcaps.

We introduce cpu_set_feature to set hwcaps, which complements the existing cpu_have_feature helper. These helpers allow us to clean up existing direct uses of elf_hwcap and reduce any future effort required to move beyond 64 caps.

For convenience we also introduce cpu_{have,set}_named_feature, which makes use of the cpu_feature macro to allow providing a hwcap name without a {KERNEL_}HWCAP_ prefix.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
[will: use const_ilog2() and tweak documentation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
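For context, userspace consumes the new capability word through getauxval(3). A minimal sketch follows; the specific HWCAP2_DCPODP bit is used purely as an illustration and requires a uapi asm/hwcap.h recent enough to define it:

    /*
     * Userspace sketch: query the second hwcap word exposed via AT_HWCAP2.
     * HWCAP2_DCPODP is an example bit only; substitute whichever HWCAP2_*
     * capability you care about.
     */
    #include <stdio.h>
    #include <sys/auxv.h>       /* getauxval(), AT_HWCAP2 */
    #include <asm/hwcap.h>      /* HWCAP2_* bit definitions */

    int main(void)
    {
            unsigned long hwcap2 = getauxval(AT_HWCAP2);

            if (hwcap2 & HWCAP2_DCPODP)
                    printf("DC CVADP is usable from userspace\n");
            else
                    printf("DC CVADP not advertised\n");

            return 0;
    }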
-
- 12 Apr, 2019 1 commit
-
-
Masami Hiramatsu authored
Add regs_get_argument(), which returns the Nth argument of a function call. On arm64, it supports up to the 8th argument. Note that this chooses the most probable assignment; in some cases it can be incorrect (e.g. when passing a data structure or floating point values).

This enables ftrace kprobe events to access kernel function arguments via the $argN syntax.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
[will: tidied up the comment a bit]
Signed-off-by: Will Deacon <will.deacon@arm.com>
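As a rough sketch of the mechanism (not the exact upstream implementation), the helper simply indexes into the saved general-purpose registers for the first eight arguments, per the AAPCS64 calling convention, which is also why structure-by-value or floating point arguments are not covered:

    /*
     * Sketch only: arguments 0-7 live in x0-x7 at function entry on arm64,
     * so they can be read straight out of pt_regs. Stack-passed or FP/SIMD
     * arguments are not handled, hence the caveat in the commit message.
     */
    #include <asm/ptrace.h>

    static unsigned long regs_get_argument_sketch(struct pt_regs *regs,
                                                  unsigned int n)
    {
            if (n < 8)
                    return regs->regs[n];   /* x0..x7 */
            return 0;
    }

With this in place, a kprobe event definition can reference $arg1, $arg2, and so on for the probed function's parameters.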
-
- 11 Apr, 2019 1 commit
-
-
Masahiro Yamada authored
We use $(LD) to link vmlinux, modules, decompressors, etc. The VDSO is the only exceptional case where $(CC) is used as the linker driver, but I do not know why we need to do so. The VDSO uses a special linker script and does not link standard libraries at all.

I changed the Makefile to use $(LD) rather than $(CC). I tested this, and the VDSO worked for me. Users will be able to use their favorite linker (e.g. lld instead of bfd) by passing LD= from the command line.

My plan is to rewrite all VDSO Makefiles to use $(LD), then delete cc-ldoption.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 Apr, 2019 1 commit
-
-
Vincenzo Frascino authored
Currently, compat tasks running on arm64 can allocate memory up to TASK_SIZE_32 (UL(0x100000000)). This means that mmap() allocations, if we treat them as returning an array, are not compliant with section 6.5.8 of the C standard (C99), which states that: "If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P".

Redefine TASK_SIZE_32 to address the issue.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org>
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
[will: fixed typo in comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
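The shape of the fix, sketched here under the assumption that the definition lives in arch/arm64/include/asm/processor.h and that no other conditionals apply, is to pull the compat task size back by one page so that a maximal mmap() still leaves a valid "one past the end" address:

    /*
     * Sketch of the idea: leave the last compat page unmapped so that for
     * any array returned by mmap(), the address one past its final element
     * still compares greater than the array base (C99 6.5.8).
     */
    #ifdef CONFIG_COMPAT
    #define TASK_SIZE_32            (UL(0x100000000) - PAGE_SIZE)
    #endif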
-
- 09 Apr, 2019 18 commits
-
-
Jean-Philippe Brucker authored
When the CPU comes out of suspend, the firmware may have modified the OS Double Lock Register. Save it in an unused slot of cpu_suspend_ctx, and restore it on resume.
Cc: <stable@vger.kernel.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
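The actual save/restore lives in the assembly suspend/resume paths; the idea, expressed as a C-flavoured sketch with the sysreg accessors (the context structure and field name here are illustrative, not the real cpu_suspend_ctx layout), looks like this:

    /*
     * Sketch only: capture OSDLR_EL1 before suspend and write it back on
     * resume, since firmware may have changed it in the meantime.
     */
    #include <asm/barrier.h>
    #include <asm/sysreg.h>

    struct suspend_ctx_sketch {
            unsigned long osdlr_el1;        /* illustrative slot */
    };

    static void save_debug_lock(struct suspend_ctx_sketch *ctx)
    {
            ctx->osdlr_el1 = read_sysreg(osdlr_el1);
    }

    static void restore_debug_lock(struct suspend_ctx_sketch *ctx)
    {
            write_sysreg(ctx->osdlr_el1, osdlr_el1);
            isb();
    }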
-
Jean-Philippe Brucker authored
Some firmware may reboot CPUs with the OS Double Lock set. Make sure that it is unlocked, in order to use debug exceptions.
Cc: <stable@vger.kernel.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The logic for early allocation of page tables is duplicated between pgd_kernel_pgtable_alloc() and pgd_pgtable_alloc(). Drop the duplication by calling one from the other and renaming pgd_kernel_pgtable_alloc() accordingly. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Yu Zhao authored
Switch from a per-mm_struct to a per-pmd page table lock by enabling ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity for large systems.

I'm not sure if there is contention on mm->page_table_lock. Given that the option comes at no cost (apart from initializing more spin locks), why not enable it now?

We only do so when the pmd is not folded, so we don't mistakenly call pgtable_pmd_page_ctor() on a pud or p4d in pgd_pgtable_alloc().

Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
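For reference, once ARCH_ENABLE_SPLIT_PMD_PTLOCK is selected, the generic pmd_lock() helper hands back a per-PMD-page spinlock rather than mm->page_table_lock. A minimal sketch of a caller (function name is illustrative):

    /*
     * Sketch: with split PMD ptlocks, pmd_lock() returns a lock associated
     * with the PMD page itself, so threads touching different PMD pages of
     * the same mm no longer serialise on mm->page_table_lock.
     */
    #include <linux/mm.h>

    static void sketch_update_pmd(struct mm_struct *mm, pmd_t *pmd)
    {
            spinlock_t *ptl = pmd_lock(mm, pmd);

            /* ... modify the PMD entry under the per-page lock ... */

            spin_unlock(ptl);
    }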
-
Anshuman Khandual authored
ARM64 standard pgtable functions are going to use the pgtable_page_[ctor|dtor] or pgtable_pmd_page_[ctor|dtor] constructs. At present, KVM guest stage-2 PUD|PMD|PTE level page table pages are allocated with __get_free_page() via mmu_memory_cache_alloc() but released with the standard pud|pmd_free() or pte_free_kernel(). These will fail once they start calling into pgtable_[pmd]_page_dtor() for pages which never originally went through the respective constructor functions. Hence convert all stage-2 page table page release functions to call the buddy allocator directly while freeing pages.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Yu Zhao <yuzhao@google.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
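The pattern being described, as a hedged sketch (the function name is illustrative, not the exact KVM helper):

    /*
     * Sketch of the conversion: stage-2 table pages come straight from the
     * buddy allocator (no pgtable_*_page_ctor()), so they must go back via
     * free_page() rather than pmd_free()/pte_free_kernel(), which would
     * otherwise call the matching dtor on a page that never saw the ctor.
     */
    #include <linux/gfp.h>
    #include <asm/pgtable.h>

    static void stage2_pmd_free_sketch(pmd_t *pmd)
    {
            free_page((unsigned long)pmd);  /* instead of pmd_free(NULL, pmd) */
    }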
-
Yu Zhao authored
init_mm doesn't require its page table lock to be initialized at any level. Add a separate page table allocator for it, and have the new one skip the page table ctors.

The ctors allocate memory when ALLOC_SPLIT_PTLOCKS is set. Not calling them avoids a memory leak in case we call pte_free_kernel() on init_mm.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Yu Zhao authored
For pte pages, use pgtable_page_ctor(); for pmd pages, use pgtable_pmd_page_ctor(); and for the rest (pud, p4d and pgd), don't use any. For now, we don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK, so pgtable_pmd_page_ctor() is a nop. When we do select it in patch 3, we make sure the pmd is not folded so we won't mistakenly call pgtable_pmd_page_ctor() on a pud or p4d.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
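A condensed sketch of what that ctor selection looks like in the allocator (simplified, not the exact upstream pgd_pgtable_alloc()):

    /*
     * Simplified sketch: pick the ctor based on the level being allocated.
     * PTE pages get pgtable_page_ctor(), PMD pages get
     * pgtable_pmd_page_ctor(), and higher levels (pud/p4d/pgd) get none.
     */
    #include <linux/gfp.h>
    #include <linux/mm.h>

    static phys_addr_t sketch_pgtable_alloc(int shift)
    {
            void *ptr = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);

            BUG_ON(!ptr);

            if (shift == PAGE_SHIFT)
                    BUG_ON(!pgtable_page_ctor(virt_to_page(ptr)));
            else if (shift == PMD_SHIFT)
                    BUG_ON(!pgtable_pmd_page_ctor(virt_to_page(ptr)));

            return __pa(ptr);
    }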
-
Will Deacon authored
brk_handler() now looks pretty strange and can be refactored to drop its funny 'handler_found' local variable altogether.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
kprobes and uprobes reserve some BRK immediates for installing their probes. Define these along with the other reservations in brk-imm.h and rename the ESR definitions to be consistent with the others that we already have.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Now that the debug hook dispatching code takes the triggering exception level into account, there's no need for the hooks themselves to poke around with user_mode(regs).
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Kprobes bypasses our debug hook registration code so that it doesn't get tangled up with recursive debug exceptions from things like lockdep:

  http://lists.infradead.org/pipermail/linux-arm-kernel/2015-February/324385.html

However, since then, (a) the hook list has become RCU protected and (b) the kprobes hooks were found not to filter out exceptions from userspace correctly. On top of that, the step handler is invoked directly from single_step_handler(), which *does* use the debug hook list, so it's clearly not the end of the world.

For now, have kprobes use the debug hook registration API like everybody else. We can revisit this in the future if this is found to limit coverage significantly.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Mixing kernel and user debug hooks together is highly error-prone as it relies on all of the hooks to figure out whether the exception came from kernel or user, and then to act accordingly.

Make our debug hook code a little more robust by maintaining separate hook lists for user and kernel, with separate registration functions to force callers to be explicit about the exception levels that they care about.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
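The resulting registration interface looks roughly like this; the sketch below is based on the description above, so treat asm/debug-monitors.h as the authoritative source for the struct layout and helpers:

    /*
     * Sketch of the split registration API: a break_hook matches a BRK
     * immediate and is placed on either the user or the kernel list, so
     * the dispatcher never has to guess which exception level a hook
     * cares about.
     */
    #include <asm/debug-monitors.h>
    #include <asm/ptrace.h>

    static int my_brk_handler(struct pt_regs *regs, unsigned int esr)
    {
            /* handle the BRK; return DBG_HOOK_HANDLED or DBG_HOOK_ERROR */
            return DBG_HOOK_HANDLED;
    }

    static struct break_hook my_kernel_hook = {
            .fn     = my_brk_handler,
            .imm    = 0x123,        /* illustrative BRK immediate */
    };

    static void __init my_hook_init(void)
    {
            register_kernel_break_hook(&my_kernel_hook);
            /* register_user_break_hook() is the user-side counterpart */
    }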
-
Will Deacon authored
The comment next to the definition of our 'break_hook' list head is at best wrong but mainly just meaningless. Rip it out.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Since the 'addr' parameter contains an UNKNOWN value for non-watchpoint debug exceptions, rename it to 'unused' for those hooks so we don't get tempted to use it in the future.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
do_debug_exception() goes out of its way to return a value that isn't ever used, so just make the thing void.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Torsten Duwe authored
In preparation for arm64 supporting ftrace built on other compiler options, let's have Makefiles remove the $(CC_FLAGS_FTRACE) flags, whatever these may be, rather than assuming '-pg'.

There should be no functional change as a result of this patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Torsten Duwe authored
In preparation for arm64 supporting ftrace built on other compiler options, let's have Makefiles remove the $(CC_FLAGS_FTRACE) flags, whatever these may be, rather than assuming '-pg'. While at it, fix arm32 as well.

There should be no functional change as a result of this patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Torsten Duwe authored
In preparation for arm64 supporting ftrace built on other compiler options, let's have the arm64 Makefiles remove the $(CC_FLAGS_FTRACE) flags, whatever these may be, rather than assuming '-pg'.

There should be no functional change as a result of this patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 05 Apr, 2019 1 commit
-
-
Alexandru Elisei authored
The following assembly code is not trivial; make it slightly easier to read by replacing some of the magic numbers with the defines which are already present in sysreg.h.
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 Apr, 2019 7 commits
-
-
Masahiro Yamada authored
- $(call if_changed,...) must have FORCE as a prerequisite
- vdso.lds is a generated file, so it should be prefixed with $(obj)/ instead of $(src)/
- cmd_vdsosym is a one-liner rule, so the assignment with '=' is simpler

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
If the initrd payload isn't completely accessible via the linear map, then we print a warning during boot and nobble the virtual address of the payload so that we ignore it later on.

Unfortunately, since commit c756c592 ("arm64: Utilize phys_initrd_start/phys_initrd_size"), the virtual address isn't initialised until later anyway, so we need to nobble the size of the payload to ensure that we don't try to use it later on.

Fixes: c756c592 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Reported-by: Pierre Kuo <vichy.kuo@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Wen Yang authored
The call to of_get_next_child() returns a node pointer with its refcount incremented, thus it must be explicitly decremented after the last usage.

Detected by coccinelle with the following warning:

  ./arch/arm64/kernel/cpu_ops.c:102:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 69, but without a corresponding object release within this function.

Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
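The general shape of the fix, as a sketch of the acquire/release pairing required by the OF API (the helper name and property below are illustrative, not the exact cpu_ops.c code):

    /*
     * Sketch: of_get_next_child() takes a reference on the returned node,
     * so every path that is done with the node must drop that reference
     * with of_node_put(), including early error returns.
     */
    #include <linux/of.h>

    static const char *sketch_read_method(struct device_node *dn)
    {
            struct device_node *node;
            const char *method = NULL;

            node = of_get_next_child(dn, NULL);     /* refcount incremented */
            if (!node)
                    return NULL;

            of_property_read_string(node, "enable-method", &method);

            of_node_put(node);                      /* balance the reference */
            return method;
    }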
-
Will Deacon authored
show_pte() doesn't have any external callers, so make it static. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Matteo Croce authored
Since commit ad67b74d ("printk: hash addresses printed with %p"), two obfuscated kernel pointers are printed at every boot:

  vdso: 2 pages (1 code @ (____ptrval____), 1 data @ (____ptrval____))

Remove the print completely, as it's useless without the addresses.

Fixes: ad67b74d ("printk: hash addresses printed with %p")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Miles Chen authored
When debugging with CONFIG_PAGE_OWNER, I noticed that min_low_pfn on arm64 is always zero and the page owner scanning has to start from zero. We have to loop a while before we see the first valid pfn (see read_page_owner()). Set up min_low_pfn to save some loops.

Before setting min_low_pfn:

  [ 21.265602] min_low_pfn=0, *ppos=0
  Page allocated via order 0, mask 0x100cca(GFP_HIGHUSER_MOVABLE)
  PFN 262144 type Movable Block 512 type Movable Flags 0x8001e(referenced|uptodate|dirty|lru|swapbacked)
   prep_new_page+0x13c/0x140
   get_page_from_freelist+0x254/0x1068
   __alloc_pages_nodemask+0xd4/0xcb8

After setting min_low_pfn:

  [ 11.025787] min_low_pfn=262144, *ppos=0
  Page allocated via order 0, mask 0x100cca(GFP_HIGHUSER_MOVABLE)
  PFN 262144 type Movable Block 512 type Movable Flags 0x8001e(referenced|uptodate|dirty|lru|swapbacked)
   prep_new_page+0x13c/0x140
   get_page_from_freelist+0x254/0x1068
   __alloc_pages_nodemask+0xd4/0xcb8
   shmem_alloc_page+0x7c/0xa0
   shmem_alloc_and_acct_page+0x124/0x1e8
   shmem_getpage_gfp.isra.7+0x118/0x878
   shmem_write_begin+0x38/0x68

Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
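The change itself is tiny. A hedged sketch of where it lands in the arm64 bootmem setup follows; the function name and exact placement are assumptions based on the commit description:

    /*
     * Sketch: record the first valid PFN so generic users of min_low_pfn
     * (such as the page owner scanner) don't start scanning at PFN 0.
     */
    #include <linux/init.h>
    #include <linux/memblock.h>     /* memblock_start_of_DRAM(), min_low_pfn */
    #include <linux/pfn.h>          /* PFN_UP() */

    void __init bootmem_init_sketch(void)
    {
            unsigned long min = PFN_UP(memblock_start_of_DRAM());

            /* ... existing zone and memory map initialisation ... */

            min_low_pfn = min;
    }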
-
Qian Cai authored
Building a kernel with W=1 generates several warnings due to abuse of kernel-doc comments:

  | arch/arm64/mm/numa.c:281: warning: Cannot understand  *
  |  on line 281 - I thought it was a doc line

Tidy up the comments to remove the warnings.

Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms.")
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Apr, 2019 1 commit
-
-
Muchun Song authored
Although we don't actually make use of the 'max_mapnr' global variable, we do set it to a junk value for !CONFIG_FLATMEM configurations that leave mem_map uninitialised.

To avoid somebody tripping over this in future, set 'max_mapnr' using max_pfn, which is calculated directly from the memblock information.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Muchun Song <smuchun@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
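A hedged sketch of the described change; the exact expression, including whether PHYS_PFN_OFFSET is subtracted, is an assumption here rather than the verbatim patch:

    #include <linux/init.h>
    #include <linux/memblock.h>     /* max_pfn */
    #include <linux/mm.h>           /* set_max_mapnr() */
    #include <asm/memory.h>         /* PHYS_PFN_OFFSET */

    void __init mem_init_sketch(void)
    {
            /*
             * Assumption: derive max_mapnr from the memblock-derived
             * max_pfn instead of leaving it as junk on !CONFIG_FLATMEM.
             */
            set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
    }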
-
- 31 Mar, 2019 7 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds authored
Pull KVM fixes from Paolo Bonzini:
"A collection of x86 and ARM bugfixes, and some improvements to documentation. On top of this, a cleanup of kvm_para.h headers, which were exported by some architectures even though they do not support KVM at all. This is responsible for all the Kbuild changes in the diffstat"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (28 commits)
  Documentation: kvm: clarify KVM_SET_USER_MEMORY_REGION
  KVM: doc: Document the life cycle of a VM and its resources
  KVM: selftests: complete IO before migrating guest state
  KVM: selftests: disable stack protector for all KVM tests
  KVM: selftests: explicitly disable PIE for tests
  KVM: selftests: assert on exit reason in CR4/cpuid sync test
  KVM: x86: update %rip after emulating IO
  x86/kvm/hyper-v: avoid spurious pending stimer on vCPU init
  kvm/x86: Move MSR_IA32_ARCH_CAPABILITIES to array emulated_msrs
  KVM: x86: Emulate MSR_IA32_ARCH_CAPABILITIES on AMD hosts
  kvm: don't redefine flags as something else
  kvm: mmu: Used range based flushing in slot_handle_level_range
  KVM: export <linux/kvm_para.h> and <asm/kvm_para.h> iif KVM is supported
  KVM: x86: remove check on nr_mmu_pages in kvm_arch_commit_memory_region()
  kvm: nVMX: Add a vmentry check for HOST_SYSENTER_ESP and HOST_SYSENTER_EIP fields
  KVM: SVM: Workaround errata#1096 (insn_len maybe zero on SMAP violation)
  KVM: Reject device ioctls from processes other than the VM's creator
  KVM: doc: Fix incorrect word ordering regarding supported use of APIs
  KVM: x86: fix handling of role.cr4_pae and rename it to 'gpte_size'
  KVM: nVMX: Do not inherit quadrant and invalid for the root shadow EPT
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Thomas Gleixner:
"A pile of x86 updates:

 - Prevent exceeding the valid physical address space in the /dev/mem limit checks.
 - Move all header content inside the header guard to prevent compile failures.
 - Fix the bogus __percpu annotation in this_cpu_has() which makes sparse very noisy.
 - Disable switch jump tables completely when retpolines are enabled.
 - Prevent leaking the trampoline address"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/realmode: Make set_real_mode_mem() static inline
  x86/cpufeature: Fix __percpu annotation in this_cpu_has()
  x86/mm: Don't exceed the valid physical address space
  x86/retpolines: Disable switch jump tables when retpolines are enabled
  x86/realmode: Don't leak the trampoline kernel address
  x86/boot: Fix incorrect ifdeffery scope
  x86/resctrl: Remove unused variable
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf tooling fixes from Thomas Gleixner:
"Core libraries:
 - Fix max perf_event_attr.precise_ip detection.
 - Fix parser error for uncore event alias.
 - Fix up the ordering of kernel maps after obtaining the main kernel map address.

Intel PT:
 - Fix TSC slip, where a TSC packet can slip past MTC packets so that the timestamp appears to go backwards.
 - Fixes for the exported-sql-viewer GUI conversion to python3.

ARM coresight:
 - Fix the build by adding a missing case value for an enumeration value introduced in a newer library, that now is the required one.

tool headers:
 - Synchronize kernel headers with the kernel, getting the new io_uring and pidfd_send_signal syscalls so that 'perf trace' can handle them"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf pmu: Fix parser error for uncore event alias
  perf scripts python: exported-sql-viewer.py: Fix python3 support
  perf scripts python: exported-sql-viewer.py: Fix never-ending loop
  perf machine: Update kernel map address and re-order properly
  tools headers uapi: Sync powerpc's asm/kvm.h copy with the kernel sources
  tools headers: Update x86's syscall_64.tbl and uapi/asm-generic/unistd
  tools headers uapi: Update drm/i915_drm.h
  tools arch x86: Sync asm/cpufeatures.h with the kernel sources
  tools headers uapi: Sync linux/fcntl.h to get the F_SEAL_FUTURE_WRITE addition
  tools headers uapi: Sync asm-generic/mman-common.h and linux/mman.h
  perf evsel: Fix max perf_event_attr.precise_ip detection
  perf intel-pt: Fix TSC slip
  perf cs-etm: Add missing case value
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull CPU hotplug fixes from Thomas Gleixner:
"Two SMT/hotplug related fixes:

 - Prevent a crash when HOTPLUG_CPU is disabled and the CPU bringup aborts. This is triggered with the 'nosmt' command line option, but can happen by any abort condition. As the real unplug code is not compiled in, prevent the failure by keeping the CPU in zombie state.

 - Enforce HOTPLUG_CPU for SMP on x86 to avoid the above situation completely. With 'nosmt' being a popular option, it's required to unplug the half brought up sibling CPUs (due to the MCE wreckage) completely"

* 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/smp: Enforce CONFIG_HOTPLUG_CPU when SMP=y
  cpu/hotplug: Prevent crash when CPU bringup fails on CONFIG_HOTPLUG_CPU=n
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull locking fixlet from Thomas Gleixner:
"Trivial update to the maintainers file"

* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  MAINTAINERS: Remove deleted file from futex file pattern
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull core fixes from Thomas Gleixner:
"A small set of core updates:

 - Make the watchdog respect the selected CPU mask again. That was broken by the rework of the watchdog thread management and caused inconsistent state and the NMI watchdog being unstoppable.

 - Ensure that the objtool build can find the libelf location.

 - Remove dead kcore stub code"

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  watchdog: Respect watchdog cpumask on CPU hotplug
  objtool: Query pkg-config for libelf location
  proc/kcore: Remove unused kclist_add_remap()
-