- 26 Apr, 2016 8 commits
-
Ard Biesheuvel authored
If both ACPI and DT platform descriptions are available, and the kernel was configured at build time to support both flavours, the default policy is to prefer DT over ACPI, and preferring ACPI over DT while still allowing DT as a fallback is not possible. Since some enterprise features (such as RAS) depend on ACPI, it may be desirable for, e.g., distro installers to prefer ACPI boot but fall back to DT rather than failing completely if no ACPI tables are available. So introduce the 'acpi=on' kernel command line parameter for arm64, which signifies that ACPI should be used if available, and DT should only be used as a fallback.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
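Such a tri-state option is typically wired up with an early_param() handler. A minimal sketch, assuming illustrative flag names (param_acpi_off/on/force) rather than necessarily the ones the patch uses:

```c
#include <linux/init.h>
#include <linux/string.h>
#include <linux/errno.h>

static bool param_acpi_off __initdata;
static bool param_acpi_on __initdata;		/* prefer ACPI, fall back to DT */
static bool param_acpi_force __initdata;

static int __init parse_acpi(char *arg)
{
	if (!arg)
		return -EINVAL;

	if (strcmp(arg, "off") == 0)
		param_acpi_off = true;
	else if (strcmp(arg, "on") == 0)
		param_acpi_on = true;
	else if (strcmp(arg, "force") == 0)
		param_acpi_force = true;
	else
		return -EINVAL;

	return 0;
}
early_param("acpi", parse_acpi);
```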
-
Ard Biesheuvel authored
When booting a relocatable kernel image, there is no practical reason to refuse an image whose load address is not exactly TEXT_OFFSET bytes above a 2 MB aligned base address, as long as the physical and virtual misalignment with respect to the swapper block size are equal, and are both aligned to THREAD_SIZE. Since the virtual misalignment is under our control when we first enter the kernel proper, we can simply choose its value to be equal to the physical misalignment. So treat the misalignment of the physical load address as the initial KASLR offset, and fix up the remaining code to deal with that.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
For historical reasons, the kernel Image must be loaded into physical memory at a 512 KB offset above a 2 MB aligned base address. The region between the base address and the start of the kernel Image has no significance to the kernel itself, but it is currently mapped explicitly into the early kernel VMA range for all translation granules. In some cases (i.e., 4 KB granule), this is unavoidable, due to the 2 MB granularity of the early kernel mappings. However, in other cases, e.g., when running with larger page sizes, or in the future, with more granular KASLR, there is no reason to map it explicitly like we do currently. So update the logic so that the region is mapped only if that happens as a side effect of rounding the start address of the kernel to swapper block size, and leave it unmapped otherwise. Since the symbol kernel_img_size now simply resolves to the memory footprint of the kernel Image, we can drop its definition from image.h and open-code its calculation.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
When building a relocatable kernel, we currently rely on the fact that early 64-bit literal loads need to be deferred to after the relocation has been performed only if they involve symbol references, and not if they involve assembly-time constants. While this is not an unreasonable assumption to make, it is better to switch to movk/movz sequences, since these are guaranteed to be resolved at link time, simply because there are no dynamic relocation types to describe them.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
Implement a macro mov_q that can be used to move an immediate constant into a 64-bit register, using between 2 and 4 movz/movk instructions (depending on the operand).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
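To see why a handful of instructions always suffice: a 64-bit immediate splits into four 16-bit fields; one movz seeds the register (zeroing everything else) and one movk patches each remaining non-zero field. A standalone illustration of that decomposition, not the kernel macro itself:

```c
#include <stdint.h>
#include <stdio.h>

/* Print a movz/movk sequence that materialises 'val' in x0. Field
 * selection details differ from the actual mov_q macro, which picks
 * fields via :abs_g: relocation operators at assembly time. */
static void emit_mov_q(uint64_t val)
{
	int seeded = 0;

	for (int shift = 0; shift < 64; shift += 16) {
		uint16_t field = (val >> shift) & 0xffff;

		if (!field)
			continue;	/* the movz below zeroes this field */
		if (!seeded) {
			printf("movz x0, #0x%x, lsl #%d\n", field, shift);
			seeded = 1;
		} else {
			printf("movk x0, #0x%x, lsl #%d\n", field, shift);
		}
	}

	if (!seeded)	/* val == 0: a single movz does the job */
		printf("movz x0, #0x0\n");
}

int main(void)
{
	emit_mov_q(0xffff000008080000ULL);	/* e.g. a kernel VA constant */
	return 0;
}
```

For 0xffff000008080000 only two of the four fields are non-zero, so this prints a movz plus one movk, within the 2-4 instruction bound the commit states.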
-
Ard Biesheuvel authored
Refactor the relocation processing so that the code executes from the ID map while accessing the relocation tables via the virtual mapping. This way, we can use literals containing virtual addresses as before, instead of having to use convoluted absolute expressions. For symmetry with the secondary code path, the relocation code and the subsequent jump to the virtual entry point are implemented in a function called __primary_switch(), and __mmap_switched() is renamed to __primary_switched(). Also, the call sequence in stext() is aligned with the one in secondary_startup(), by replacing the awkward 'adr_l lr' and 'b cpu_setup' sequence with a simple branch and link.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
We can simply use a relocated 64-bit literal to store the address of __secondary_switched(), and the relocation code will ensure that it holds the correct value at secondary entry time, as long as we make sure that the literal is not dereferenced until after we have enabled the MMU. So jump via a small __secondary_switch() function covered by the ID map that performs the literal load and branch-to-register.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
This unexports some symbols from head.S that are only used locally.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 Apr, 2016 9 commits
-
Suzuki K Poulose authored
maxcpus=n sets the number of CPUs activated at boot time to a maximum of n, while still allowing the remaining CPUs to be brought up later if the user decides to do so. However, on arm64, due to various reasons, we disallowed hotplugging CPUs beyond n by marking them not present. Now that we have checks in place to make sure the hotplugged CPUs have features compatible with the system and require no new errata workarounds, relax the restriction.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
CPU errata workarounds are detected and applied to the kernel code at boot time, and the data is then freed up. If a newly hotplugged CPU requires a workaround which was not applied at boot time, there is nothing we can do but simply fail to bring it up.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
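A hedged sketch of the resulting hotplug-time check; the function and table names (verify_local_cpu_errata, arm64_errata) follow this series but may not match the patch verbatim:

```c
/*
 * Sketch only: run on a newly hotplugged CPU. If it matches an erratum
 * capability that was not enabled at boot, the workaround code has
 * already been freed, so the CPU cannot be safely brought online.
 */
static void verify_local_cpu_errata(void)
{
	const struct arm64_cpu_capabilities *caps = arm64_errata;

	for (; caps->matches; caps++) {
		if (!cpus_have_cap(caps->capability) &&
		    caps->matches(caps, SCOPE_LOCAL_CPU)) {
			pr_crit("CPU%d: requires an erratum workaround not enabled at boot\n",
				smp_processor_id());
			cpu_die_early();
		}
	}
}
```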
-
Marc Zyngier authored
When introducing the whole CPU feature detection framework, we lost the capability to detect a mismatched GIC configuration (using the GICv2 MMIO interface, but having the system register interface enabled). In order to solve this, use the new this_cpu_has_cap() helper. Also move the check to the CPU interface path in order to catch systems where the first CPU has been correctly configured, but the secondaries are not.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
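The mismatch check then reduces to something like the following sketch (macro name and message are illustrative of the GIC driver code, not quoted from the patch):

```c
/*
 * Illustrative only: taint and warn once if this CPU exposes the
 * system register CPU interface while the driver drives the GIC
 * through the GICv2 MMIO interface.
 */
#define gic_check_cpu_features()					\
	WARN_TAINT_ONCE(this_cpu_has_cap(ARM64_HAS_SYSREG_GIC_CPUIF),	\
			TAINT_CPU_OUT_OF_SPEC,				\
			"GICv3 system registers enabled, broken firmware!\n")
```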
-
Marc Zyngier authored
Now that the capabilities are only available once all the CPUs have booted, we're unable to check for a particular feature in any subsystem that gets initialized before then. In order to support this, introduce a this_cpu_has_cap() function that tests for the presence of a given capability independently of the whole framework.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[ Added preemptible() check ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: remove duplicate initialisation of caps in this_cpu_has_cap]
Signed-off-by: Will Deacon <will.deacon@arm.com>
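A minimal sketch of such a helper, assuming a single capability table; the real one may also walk the errata table:

```c
/*
 * Sketch: report whether the current CPU has a capability, regardless
 * of whether the system-wide state has been finalised. Must run
 * non-preemptibly so "this CPU" is well defined.
 */
bool this_cpu_has_cap(unsigned int cap)
{
	const struct arm64_cpu_capabilities *caps;

	if (WARN_ON(preemptible()))
		return false;

	for (caps = arm64_features; caps->matches; caps++)
		if (caps->capability == cap &&
		    caps->matches(caps, SCOPE_LOCAL_CPU))
			return true;

	return false;
}
```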
-
Suzuki K Poulose authored
Add a scope parameter to arm64_cpu_capabilities::matches(), so that it can be reused for checking a capability on a given CPU versus system-wide. The system uses the default scope associated with the capability for initialising the CPU_HWCAPs and ELF_HWCAPs.

Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
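Sketched shape of the reworked capability descriptor (field subset illustrative):

```c
enum {
	SCOPE_SYSTEM,		/* test against the sanitised system view */
	SCOPE_LOCAL_CPU,	/* test the CPU executing the call */
};

struct arm64_cpu_capabilities {
	const char *desc;
	u16 capability;
	int def_scope;		/* default scope, used for HWCAP setup */
	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
	/* ... */
};
```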
-
Mark Rutland authored
Improve the readability of dt_scan_depth1_nodes by removing the nested conditionals.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Shannon Zhao authored
When a Xen dom0 boots with ACPI, Xen still supplies a /chosen and a /hypervisor node in the DT, so take these nodes into account when checking whether ACPI should be enabled.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Julien Grall <julien.grall@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
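Combined with the refactored scan above, the depth-1 check ends up roughly like this sketch (close to, but not guaranteed identical to, the final code):

```c
/*
 * Returns 1 if the DT carries a real platform description, 0 if only
 * nodes that are present even on ACPI systems were found.
 */
static int __init dt_scan_depth1_nodes(unsigned long node, const char *uname,
				       int depth, void *data)
{
	/* only consider nodes directly under the root */
	if (depth != 1)
		return 0;

	/* /chosen is supplied even on ACPI-only systems */
	if (strcmp(uname, "chosen") == 0)
		return 0;

	/* a Xen dom0 gets a /hypervisor node regardless of ACPI */
	if (strcmp(uname, "hypervisor") == 0 &&
	    of_flat_dt_is_compatible(node, "xen,xen"))
		return 0;

	/* this node is not ignored, so the DT is populated */
	return 1;
}
```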
-
Ard Biesheuvel authored
Annotate the KASAN shadow region with boundary markers, so that its mappings stand out in the page table dumper output.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
There is no need to initialize the vmemmap region boundaries dynamically, since they are compile time constants. So just add these constants to the global struct initializer, and drop the dynamic assignment and related code.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
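Together with the KASAN markers above, the dumper's marker table ends up looking roughly like this sketch (entry labels and the vmemmap constant names are illustrative):

```c
/* Sketch of arch/arm64/mm/dump.c after both changes */
static struct addr_marker address_markers[] = {
#ifdef CONFIG_KASAN
	{ KASAN_SHADOW_START,		"Kasan shadow start" },
	{ KASAN_SHADOW_END,		"Kasan shadow end" },
#endif
	/* ... modules, vmalloc, fixmap entries elided ... */
	{ VMEMMAP_START,		"vmemmap start" },
	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
	{ -1,				NULL },
};
```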
-
- 22 Apr, 2016 3 commits
-
Ard Biesheuvel authored
The open coded conversion from struct page address to virtual address in lowmem_page_address() involves an intermediate conversion step to pfn number/physical address. Since the placement of the struct page array relative to the linear mapping may be completely independent from the placement of physical RAM (as is the case for arm64 after commit dfd55ad8 'arm64: vmemmap: use virtual projection of linear region'), the conversion to physical address and back again should factor out of the equation, but unfortunately, the shifting and pointer arithmetic involved prevent this from happening, and the resulting calculation essentially subtracts the address of the start of physical memory and adds it back again, in a way that prevents the compiler from optimizing it away. Since the start of physical memory is not a build time constant on arm64, the resulting conversion involves an unnecessary memory access, which we would like to get rid of. So replace the open coded conversion with a call to page_to_virt(), and use the open coded conversion as its default definition, to be overridden by the architecture, if desired. The existing arch specific definitions of page_to_virt are all equivalent to this default definition, so by itself this patch is a no-op.

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
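The generic fallback is essentially the previously open-coded expression, now overridable per architecture (sketch of the linux/mm.h hunk):

```c
/*
 * Round-trips through pfn and physical address; architectures where
 * the struct page array placement is independent of physical RAM
 * (such as arm64) can override this with a direct projection.
 */
#ifndef page_to_virt
#define page_to_virt(x)	__va(PFN_PHYS(page_to_pfn(x)))
#endif
```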
-
Ard Biesheuvel authored
To align with generic code and other architectures that expect the macro page_to_virt to produce an expression whose type is 'void*', drop the arch specific definition, which is never referenced anyway.

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
To align with other architectures, the expression produced by expanding the macro page_to_virt() should be of type void*, since it returns a virtual address. Fix that, and also fix up an instance where page_to_virt was expected to return 'unsigned long', and drop another instance that was entirely unused (page_to_bus).

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 21 Apr, 2016 2 commits
-
Robin Murphy authored
With the IOMMU core now taking care of default domains for groups regardless of bus type, we can gleefully rip out this stop-gap, as slight recompense for having to expand the other one.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Robin Murphy authored
PCI devices now suffer the same hiccup as platform devices, in that they get their DMA ops configured before they have been added to their bus, and thus before we know whether they have successfully registered with an IOMMU or not. Until the necessary driver core changes to reorder calls during device creation have been worked out, extend our delayed notifier trick onto the PCI bus so as to avoid broken DMA ops once IOMMUs get plugged into the PCI code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
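A sketch of what extending the notifier onto the PCI bus looks like, following the register_iommu_dma_ops_notifier() pattern in arm64's dma-mapping code (details may differ from the patch):

```c
/* Registration sketch: the same delayed-notifier trick, now on three buses */
static int __init __iommu_dma_init(void)
{
	int ret = register_iommu_dma_ops_notifier(&platform_bus_type);

	if (!ret)
		ret = register_iommu_dma_ops_notifier(&amba_bustype);
#ifdef CONFIG_PCI
	if (!ret)
		ret = register_iommu_dma_ops_notifier(&pci_bus_type);
#endif
	return ret;
}
arch_initcall(__iommu_dma_init);
```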
-
- 20 Apr, 2016 8 commits
-
Suzuki K Poulose authored
Make sure we have AArch32 state available for running COMPAT binaries and also for switching the personality to PER_LINUX32.

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
[ Added cap bit, checks for HWCAP, personality ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
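The personality side of this plausibly reduces to a check like the following sketch in the arm64 personality syscall wrapper; system_supports_32bit_el0() comes from this series:

```c
/* Sketch: refuse PER_LINUX32 when no CPU implements AArch32 at EL0 */
asmlinkage long sys_arm64_personality(unsigned int personality)
{
	if (personality(personality) == PER_LINUX32 &&
	    !system_supports_32bit_el0())
		return -EINVAL;

	return sys_personality(personality);
}
```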
-
Suzuki K Poulose authored
Add cpu_hwcap bit for keeping track of the support for 32bit EL0.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
On ARMv8, support for AArch32 state is optional. Hence it is not safe to check the AArch32 ID registers for sanity, which could lead to false warnings. This patch makes sure that the AArch32 state is implemented before we keep track of the 32bit ID registers. As per the ARM ARM (D.1.21.2 - Support for Exception Levels and Execution States, DDI0487A.h), checking the support for AArch32 at EL0 is good enough to check the support for AArch32 (i.e., AArch32 at EL1 => AArch32 at EL0, but not vice versa).

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
Add a helper to extract the support for AArch32 at EL0.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
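The helper presumably looks something like this sketch; the extraction helper and constant names are taken from the kernel's cpufeature headers and may differ slightly from the patch:

```c
/* ID_AA64PFR0_EL1.EL0 == 0b0010: EL0 supports both AArch64 and AArch32 */
static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
{
	u32 val = cpuid_feature_extract_unsigned_field(pfr0,
						       ID_AA64PFR0_EL0_SHIFT);

	return val == ID_AA64PFR0_EL0_64BIT_32BIT;
}
```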
-
Suzuki K Poulose authored
In order to handle systems which do not support 32bit at EL0, split the COMPAT HWCAP entries into a separate table which is processed only if the support is available.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
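A sketch of the resulting split; table contents are elided and names follow this series:

```c
/* AArch32 ELF hwcaps live in their own table now */
static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = {
	/* e.g. COMPAT_HWCAP entries for VFP, NEON, CRC32, crypto */
	{},
};

void __init setup_cpu_features(void)
{
	/* ... */
	setup_elf_hwcaps(arm64_elf_hwcaps);
	if (system_supports_32bit_el0())
		setup_elf_hwcaps(compat_elf_hwcaps);
}
```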
-
Suzuki K Poulose authored
We use 'hwcaps' to refer to the ELF HWCAP capability information. However, this can be confused with 'cpu_hwcaps', which stands for the CPU capability bit field. This patch cleans up the names to make them a bit more readable.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
We haven't used the push/pop macros for a while now, as it's typically better to use immediate offsets for batches of accesses to the stack, as we now do in the entry assembly for the kernel and hyp code. Remove the unused macros.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Yang Shi authored
HAVE_ARCH_TRANSPARENT_HUGEPAGE has been defined in arch/Kconfig already; the ARM64 version is identical to it, and the default value is Y. So remove the redundant definition and just select it under CONFIG_ARM64.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
[will: sort into alphabetical order whilst I'm resolving conflicts]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 Apr, 2016 5 commits
-
Kefeng Wang authored
Show the bss segment information alongside text and data in the 'Virtual kernel memory layout' output.

Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Kefeng Wang authored
Print each line in the 'Virtual kernel memory layout' dump with a single pr_cont(); otherwise, the dump of the kernel memory layout in dmesg is not aligned when PRINTK_TIME is enabled, due to the missing time stamps.

Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Arnd Bergmann authored
memblock_remove() takes a phys_addr_t, which may be narrower than 64 bits, causing a harmless warning:

drivers/firmware/efi/arm-init.c: In function 'reserve_regions':
include/linux/kernel.h:29:20: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
 #define ULLONG_MAX (~0ULL)
                    ^
drivers/firmware/efi/arm-init.c:152:21: note: in expansion of macro 'ULLONG_MAX'
   memblock_remove(0, ULLONG_MAX);

This adds an explicit typecast to avoid the warning.

Fixes: 500899c2 ("efi: ARM/arm64: ignore DT memory nodes instead of removing them")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
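The fix itself is a one-line cast, roughly:

```c
/* phys_addr_t may be 32 bits wide; make the truncation explicit */
memblock_remove(0x0, (phys_addr_t)ULLONG_MAX);
```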
-
Jan Glauber authored
When CPUs are stopped during an abnormal operation like panic, a line is printed and the stack trace is dumped for each CPU. This information is only interesting for the aborting CPU, and on systems with many CPUs it only makes it harder to debug if the log is flooded with data about all the other CPUs too. Therefore remove the stack dump and printk of other CPUs, and only print a single line saying that the other CPUs are going to be stopped and, in case any CPUs remain online, list them.

Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
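A hedged sketch of the slimmed-down smp_send_stop(); the message text and timeout are illustrative:

```c
void smp_send_stop(void)
{
	unsigned long timeout;

	if (num_online_cpus() > 1) {
		cpumask_t mask;

		cpumask_copy(&mask, cpu_online_mask);
		cpumask_clear_cpu(smp_processor_id(), &mask);

		/* one line instead of a per-CPU message plus stack dump */
		pr_crit("SMP: stopping secondary CPUs\n");
		smp_cross_call(&mask, IPI_CPU_STOP);
	}

	/* wait up to one second for the other CPUs to stop */
	timeout = USEC_PER_SEC;
	while (num_online_cpus() > 1 && timeout--)
		udelay(1);

	if (num_online_cpus() > 1)
		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
			cpumask_pr_args(cpu_online_mask));
}
```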
-
Huang Shijie authored
We already re-enable interrupts where necessary in the entry code, so there is no need to do it again in do_page_fault(). This patch removes the redundant code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 15 Apr, 2016 5 commits
-
Catalin Marinas authored
When hardware updates of the access and dirty states are enabled, the default ptep_set_access_flags() implementation based on calling set_pte_at() directly is potentially racy. This triggers the "racy dirty state clearing" warning in set_pte_at() because an existing writable PTE is overridden with a clean entry.

There are two main scenarios for this situation:

1. The CPU getting an access fault does not support hardware updates of the access/dirty flags. However, a different agent in the system (e.g. SMMU) can do this, therefore overriding a writable entry with a clean one could potentially lose the automatically updated dirty status.

2. A more complex situation is possible when all CPUs support hardware AF/DBM:

   a) Initial state: shareable + writable vma and pte_none(pte)
   b) Read fault taken by two threads of the same process on different CPUs
   c) CPU0 takes the mmap_sem and proceeds to handling the fault. It eventually reaches do_set_pte() which sets a writable + clean pte. CPU0 releases the mmap_sem
   d) CPU1 acquires the mmap_sem and proceeds to handle_pte_fault(). The pte entry it reads is present, writable and clean and it continues to pte_mkyoung()
   e) CPU1 calls ptep_set_access_flags()

If between (d) and (e) the hardware (another CPU) updates the dirty state (clears PTE_RDONLY), CPU1 will override the PTE_RDONLY bit, marking the entry clean again.

This patch implements an arm64-specific ptep_set_access_flags() function to perform an atomic update of the PTE flags.

Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Tested-by: Julien Grall <julien.grall@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 4.3+
[will: reworded comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
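A hedged C sketch of the atomic update, using a cmpxchg loop where the kernel uses a hand-written ldxr/stxr sequence, and an illustrative flag mask:

```c
/*
 * Sketch only: atomically fold the new access/dirty/write bits into
 * the live PTE so a concurrent hardware DBM update (clearing
 * PTE_RDONLY) is never lost.
 */
int ptep_set_access_flags(struct vm_area_struct *vma,
			  unsigned long address, pte_t *ptep,
			  pte_t entry, int dirty)
{
	pteval_t old, new;
	pteval_t mask = PTE_AF | PTE_WRITE | PTE_DBM;	/* illustrative */

	if (pte_same(*ptep, entry))
		return 0;

	do {
		old = pte_val(READ_ONCE(*ptep));
		/* fold in the new flags, preserving hardware updates */
		new = (old & ~mask) | (pte_val(entry) & mask);
	} while (cmpxchg_relaxed(&pte_val(*ptep), old, new) != old);

	/* invalidate any stale read-only TLB entry */
	flush_tlb_page(vma, address);
	return 1;
}
```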
-
Ganapatrao Kulkarni authored
Enable NUMA balancing for arm64 platforms. Add pte, pmd protnone helpers for use by automatic NUMA balancing.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
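The helpers are small; sketched from the arm64 PTE bit definitions (a PROT_NONE entry carries the software PTE_PROT_NONE bit and is not valid, and the pmd variant reuses the pte helper):

```c
#define pte_protnone(pte) \
	((pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE)
#define pmd_protnone(pmd)	pte_protnone(pmd_pte(pmd))
```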
-
Ganapatrao Kulkarni authored
Attempt to get the memory and CPU NUMA node via of_numa. If that fails, fall back to the dummy NUMA node and map all memory and CPUs to node 0.

Tested-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
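A sketch of the fallback flow; function names follow this series:

```c
void __init arm64_numa_init(void)
{
	if (!numa_off) {
		if (!numa_init(of_numa_init))
			return;		/* DT provided a topology */
	}

	numa_init(dummy_numa_init);	/* everything on node 0 */
}
```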
-
David Daney authored
In order to extract NUMA information from the device tree, we need to have the tree in its unflattened form. Move the call to bootmem_init() from the tail of paging_init() into setup_arch(), and adjust header files so that its declaration is visible. Move the unflatten_device_tree() call between the calls to paging_init() and bootmem_init(). Follow-on patches add NUMA handling to bootmem_init().

Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
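The resulting ordering in setup_arch(), sketched with the surrounding calls elided:

```c
void __init setup_arch(char **cmdline_p)
{
	/* ... */
	paging_init();		/* no longer tail-calls bootmem_init() */

	/* ... ACPI/EFI table discovery ... */
	if (acpi_disabled)
		unflatten_device_tree();	/* NUMA parsing needs the live tree */

	bootmem_init();		/* follow-on patches add NUMA handling here */
	/* ... */
}
```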
-
David Daney authored
Add device tree parsing for NUMA topology, using the "numa-node-id" device property in distance-map and cpu nodes. This is a complete rewrite of a previous patch by Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>.

Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
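Reading the property for a given node plausibly reduces to a helper like this sketch (the helper name is illustrative; the series' actual parser walks cpu, memory and distance-map nodes):

```c
#include <linux/of.h>
#include <linux/numa.h>

static int of_numa_node_id(struct device_node *np)
{
	u32 nid;

	if (of_property_read_u32(np, "numa-node-id", &nid))
		return NUMA_NO_NODE;

	return nid >= MAX_NUMNODES ? NUMA_NO_NODE : (int)nid;
}
```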
-