- 25 Apr, 2016 6 commits
-
Marc Zyngier authored
Now that the capabilities are only available once all the CPUs have booted, we're unable to check for a particular feature in any subsystem that gets initialized before then. In order to support this, introduce a local_cpu_has_cap() function that tests for the presence of a given capability independently of the whole framework. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> [ Added preemptible() check ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> [will: remove duplicate initialisation of caps in this_cpu_has_cap] Signed-off-by: Will Deacon <will.deacon@arm.com>
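A minimal sketch of such a per-CPU check, using the this_cpu_has_cap() name from the final note above (the capability-table iteration and names are assumptions based on the description, not verbatim kernel code):

  /*
   * Check a capability on the calling CPU only, independently of the
   * system-wide cpu_hwcaps state, which is not finalised until all
   * CPUs have booted.
   */
  bool this_cpu_has_cap(unsigned int cap)
  {
      const struct arm64_cpu_capabilities *caps;

      /* The answer is only meaningful if we stay on this CPU. */
      if (WARN_ON(preemptible()))
          return false;

      for (caps = arm64_features; caps->desc; caps++)
          if (caps->capability == cap && caps->matches &&
              caps->matches(caps, SCOPE_LOCAL_CPU))
              return true;

      return false;
  }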
-
Suzuki K Poulose authored
Add a scope parameter to arm64_cpu_capabilities::matches(), so that it can be reused for checking a capability on a given CPU as opposed to system-wide. The system uses the default scope associated with the capability for initialising the CPU_HWCAPs and ELF_HWCAPs. Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Will Deacon <will.deacon@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
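A sketch of the resulting shape (the scope constants and the surrounding fields are assumptions based on the description, not verbatim kernel code):

  /* Scope of a capability check (names assumed). */
  enum {
      SCOPE_SYSTEM,       /* check against the system-wide safe value */
      SCOPE_LOCAL_CPU,    /* check the current CPU only */
  };

  struct arm64_cpu_capabilities {
      const char *desc;
      u16 capability;
      int def_scope;      /* default scope, used for CPU_HWCAP/ELF_HWCAP setup */
      bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
      /* ... per-capability match criteria (sys_reg, field, min value) ... */
  };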
-
Mark Rutland authored
Improve the readability of dt_scan_depth1_nodes by removing the nested conditionals. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Shannon Zhao authored
When Xen boots dom0 with ACPI, it supplies both a /chosen and a /hypervisor node in the DT. Check for this combination when deciding whether ACPI should be enabled. Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Julien Grall <julien.grall@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
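Combined with the dt_scan_depth1_nodes() cleanup in the previous commit, the resulting helper plausibly looks like this (a sketch, not the verbatim kernel code):

  static int __init dt_scan_depth1_nodes(unsigned long node,
                                         const char *uname, int depth,
                                         void *data)
  {
      /* Only consider nodes directly under the root. */
      if (depth != 1)
          return 0;

      /* A /chosen node alone does not imply usable DT content. */
      if (strcmp(uname, "chosen") == 0)
          return 0;

      /* Xen dom0 supplies a /hypervisor node even when booting via ACPI. */
      if (strcmp(uname, "hypervisor") == 0 &&
          of_flat_dt_is_compatible(node, "xen,xen"))
          return 0;

      /* Any other depth-1 node means real DT content is present. */
      return 1;
  }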
-
Ard Biesheuvel authored
Annotate the KASAN shadow region with boundary markers, so that its mappings stand out in the page table dumper output. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
There is no need to initialize the vmemmap region boundaries dynamically, since they are compile time constants. So just add these constants to the global struct initializer, and drop the dynamic assignment and related code. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
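Together with the KASAN markers from the previous commit, the dumper's marker table becomes a plain static initializer; a sketch (entry names and macros assumed):

  static const struct addr_marker address_markers[] = {
  #ifdef CONFIG_KASAN
      { KASAN_SHADOW_START,           "Kasan shadow start" },
      { KASAN_SHADOW_END,             "Kasan shadow end" },
  #endif
      { MODULES_VADDR,                "Modules start" },
      { MODULES_END,                  "Modules end" },
      { VMALLOC_START,                "vmalloc() Area" },
      { VMALLOC_END,                  "vmalloc() End" },
      /* compile-time constants: no runtime assignment needed */
      { VMEMMAP_START,                "vmemmap start" },
      { VMEMMAP_START + VMEMMAP_SIZE, "vmemmap end" },
      { -1,                           NULL },
  };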
-
- 22 Apr, 2016 3 commits
-
Ard Biesheuvel authored
The open-coded conversion from struct page address to virtual address in lowmem_page_address() involves an intermediate conversion step to pfn number/physical address. Since the placement of the struct page array relative to the linear mapping may be completely independent from the placement of physical RAM (as is the case for arm64 after commit dfd55ad8 'arm64: vmemmap: use virtual projection of linear region'), the conversion to physical address and back again should factor out of the equation. Unfortunately, the shifting and pointer arithmetic involved prevent this from happening: the resulting calculation essentially subtracts the address of the start of physical memory and adds it back again, in a way that prevents the compiler from optimizing it away.

Since the start of physical memory is not a build-time constant on arm64, the resulting conversion involves an unnecessary memory access, which we would like to get rid of. So replace the open-coded conversion with a call to page_to_virt(), and use the open-coded conversion as its default definition, to be overridden by the architecture, if desired. The existing arch-specific definitions of page_to_virt are all equivalent to this default definition, so by itself this patch is a no-op.

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
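The generic fallback is essentially the old open-coded conversion, now overridable per architecture; along these lines:

  /* include/linux/mm.h: default definition, overridable by the arch */
  #ifndef page_to_virt
  #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
  #endif

  /* callers then reduce to a single macro expansion: */
  static __always_inline void *lowmem_page_address(const struct page *page)
  {
      return page_to_virt(page);
  }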
-
Ard Biesheuvel authored
To align with generic code and other architectures that expect the macro page_to_virt to produce an expression whose type is 'void*', drop the arch specific definition, which is never referenced anyway. Acked-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
To align with other architectures, the expression produced by expanding the macro page_to_virt() should be of type void*, since it returns a virtual address. Fix that, and also fix up an instance where page_to_virt was expected to return 'unsigned long', and drop another instance that was entirely unused (page_to_bus). Acked-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 21 Apr, 2016 2 commits
-
Robin Murphy authored
With the IOMMU core now taking care of default domains for groups regardless of bus type, we can gleefully rip out this stop-gap, as slight recompense for having to expand the other one. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Robin Murphy authored
PCI devices now suffer the same hiccup as platform devices, in that they get their DMA ops configured before they have been added to their bus, and thus before we know whether they have successfully registered with an IOMMU or not. Until the necessary driver core changes to reorder calls during device creation have been worked out, extend our delayed notifier trick onto the PCI bus so as to avoid broken DMA ops once IOMMUs get plugged into the PCI code. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
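A sketch of the notifier registration being extended to the PCI bus (helper names follow the existing platform-bus code described above; details assumed):

  static int __init register_iommu_dma_ops_notifier(struct bus_type *bus)
  {
      struct notifier_block *nb = kzalloc(sizeof(*nb), GFP_KERNEL);
      int ret;

      if (!nb)
          return -ENOMEM;

      /* Configure DMA ops only once the device is fully added. */
      nb->notifier_call = __iommu_attach_notifier;
      ret = bus_register_notifier(bus, nb);
      if (ret)
          kfree(nb);
      return ret;
  }

  static int __init __iommu_dma_init(void)
  {
      int ret = iommu_dma_init();

      if (!ret)
          ret = register_iommu_dma_ops_notifier(&platform_bus_type);
  #ifdef CONFIG_PCI
      if (!ret)
          ret = register_iommu_dma_ops_notifier(&pci_bus_type);
  #endif
      return ret;
  }
  arch_initcall(__iommu_dma_init);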
-
- 20 Apr, 2016 8 commits
-
Suzuki K Poulose authored
Make sure we have AArch32 state available for running COMPAT binaries and also for switching the personality to PER_LINUX32. Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> [ Added cap bit, checks for HWCAP, personality ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Tested-by: Yury Norov <ynorov@caviumnetworks.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
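The personality check mentioned in the note might look like this (a sketch assuming a system_supports_32bit_el0() capability helper):

  /* True once ARM64_HAS_32BIT_EL0 has been established system-wide. */
  static inline bool system_supports_32bit_el0(void)
  {
      return cpus_have_cap(ARM64_HAS_32BIT_EL0);
  }

  /* Refuse PER_LINUX32 when AArch32 cannot be executed at EL0. */
  asmlinkage long sys_arm64_personality(unsigned int personality)
  {
      if (personality(personality) == PER_LINUX32 &&
          !system_supports_32bit_el0())
          return -EINVAL;

      return sys_personality(personality);
  }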
-
Suzuki K Poulose authored
Add cpu_hwcap bit for keeping track of the support for 32bit EL0. Tested-by: Yury Norov <ynorov@caviumnetworks.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
On ARMv8, support for AArch32 state is optional, hence it is not safe to sanity-check the AArch32 ID registers unconditionally; doing so could lead to false warnings. This patch makes sure that the AArch32 state is implemented before we keep track of the 32-bit ID registers. As per the ARM ARM (D.1.21.2 - Support for Exception Levels and Execution States, DDI0487A.h), checking the support for AArch32 at EL0 is good enough to check for AArch32 support in general (i.e., AArch32 at EL1 => AArch32 at EL0, but not vice versa). Tested-by: Yury Norov <ynorov@caviumnetworks.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
Add a helper to extract the support for AArch32 at EL0. Tested-by: Yury Norov <ynorov@caviumnetworks.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
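A sketch of the helper (the field accessor and constant names are assumptions):

  /* ID_AA64PFR0_EL1.EL0 == 0b0010: EL0 supports both AArch64 and AArch32. */
  static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
  {
      u32 val = cpuid_feature_extract_field(pfr0, ID_AA64PFR0_EL0_SHIFT);

      return val == ID_AA64PFR0_EL0_32BIT_64BIT;
  }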
-
Suzuki K Poulose authored
In order to handle systems which do not support 32-bit at EL0, split the COMPAT HWCAP entries into a separate table which is processed only if the support is available. Tested-by: Yury Norov <ynorov@caviumnetworks.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
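A sketch of the split (table contents elided; the helper shape is an assumption):

  static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
      /* ... native AArch64 HWCAP entries ... */
      {},
  };

  static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = {
  #ifdef CONFIG_COMPAT
      /* ... COMPAT_HWCAP_* entries ... */
  #endif
      {},
  };

  static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
  {
      for (; hwcaps->matches; hwcaps++)
          if (hwcaps->matches(hwcaps, hwcaps->def_scope))
              cap_set_elf_hwcap(hwcaps);
  }

  /* ... during feature setup: */
      setup_elf_hwcaps(arm64_elf_hwcaps);
      if (system_supports_32bit_el0())
          setup_elf_hwcaps(compat_elf_hwcaps);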
-
Suzuki K Poulose authored
We use 'hwcaps' to refer to the ELF hwcaps capability information. However, this can be confused with 'cpu_hwcaps', which stands for the CPU capability bit field. This patch cleans up the names to make them a bit more readable. Tested-by: Yury Norov <ynorov@caviumnetworks.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
We haven't used the push/pop macros for a while now, as it's typically better to use immediate offsets for batches of accesses to the stack, as we now do in the entry assembly for the kernel and hyp code. Remove the unused macros. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Yang Shi authored
HAVE_ARCH_TRANSPARENT_HUGEPAGE has been defined in arch/Kconfig already, the ARM64 version is identical to it and the default value is Y. So remove the redundant definition and just select it under CONFIG_ARM64. Signed-off-by: Yang Shi <yang.shi@linaro.org> [will: sort into alphabetical order whilst I'm resolving conflicts] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 Apr, 2016 5 commits
-
Kefeng Wang authored
Show the bss segment information, as is already done for the text and data segments, in the virtual kernel memory layout. Acked-by: James Morse <james.morse@arm.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Kefeng Wang authored
Print each line of the virtual kernel memory layout with a single pr_cont() call; otherwise, the dump of the kernel memory layout in dmesg is misaligned when PRINTK_TIME is enabled, due to the missing timestamps on continuation lines. Tested-by: James Morse <james.morse@arm.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Arnd Bergmann authored
memblock_remove() takes a phys_addr_t, which may be narrower than 64 bits, causing a harmless warning:

drivers/firmware/efi/arm-init.c: In function 'reserve_regions':
include/linux/kernel.h:29:20: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
 #define ULLONG_MAX (~0ULL)
                    ^
drivers/firmware/efi/arm-init.c:152:21: note: in expansion of macro 'ULLONG_MAX'
   memblock_remove(0, ULLONG_MAX);

This adds an explicit typecast to avoid the warning.

Fixes: 500899c2 ("efi: ARM/arm64: ignore DT memory nodes instead of removing them")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
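The fix boils down to one cast at the warned-about call site:

  /* phys_addr_t may be 32-bit; truncate explicitly to silence -Woverflow */
  memblock_remove(0, (phys_addr_t)ULLONG_MAX);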
-
Jan Glauber authored
When CPUs are stopped during an abnormal operation such as panic, a line is printed and the stack trace is dumped for each CPU. This information is only interesting for the aborting CPU, and on systems with many CPUs it only makes it harder to debug when the log is flooded with data about all the other CPUs too. Therefore remove the stack dump and printk for the other CPUs, and only print a single line saying that the other CPUs are going to be stopped and, in case any CPUs remain online, listing them. Signed-off-by: Jan Glauber <jglauber@cavium.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
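A sketch of the resulting stop path (the IPI plumbing is unchanged; details assumed):

  void smp_send_stop(void)
  {
      unsigned long timeout;

      if (num_online_cpus() > 1) {
          cpumask_t mask;

          cpumask_copy(&mask, cpu_online_mask);
          cpumask_clear_cpu(smp_processor_id(), &mask);

          /* One line for all CPUs instead of a dump per CPU. */
          pr_crit("SMP: stopping secondary CPUs\n");
          smp_cross_call(&mask, IPI_CPU_STOP);
      }

      /* Wait up to one second for the other CPUs to stop. */
      timeout = USEC_PER_SEC;
      while (num_online_cpus() > 1 && timeout--)
          udelay(1);

      /* List any CPUs that failed to stop. */
      if (num_online_cpus() > 1)
          pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
                  cpumask_pr_args(cpu_online_mask));
  }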
-
Huang Shijie authored
We already re-enable interrupts where necessary in the entry code, so there is no need to do it again in do_page_fault(). This patch removes the redundant code. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Huang Shijie <shijie.huang@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 15 Apr, 2016 9 commits
-
Catalin Marinas authored
When hardware updates of the access and dirty states are enabled, the default ptep_set_access_flags() implementation based on calling set_pte_at() directly is potentially racy. This triggers the "racy dirty state clearing" warning in set_pte_at() because an existing writable PTE is overridden with a clean entry.

There are two main scenarios for this situation:

1. The CPU getting an access fault does not support hardware updates of the access/dirty flags. However, a different agent in the system (e.g. SMMU) can do this, therefore overriding a writable entry with a clean one could potentially lose the automatically updated dirty status.

2. A more complex situation is possible when all CPUs support hardware AF/DBM:

   a) Initial state: shareable + writable vma and pte_none(pte)
   b) Read fault taken by two threads of the same process on different CPUs
   c) CPU0 takes the mmap_sem and proceeds to handling the fault. It eventually reaches do_set_pte() which sets a writable + clean pte. CPU0 releases the mmap_sem
   d) CPU1 acquires the mmap_sem and proceeds to handle_pte_fault(). The pte entry it reads is present, writable and clean and it continues to pte_mkyoung()
   e) CPU1 calls ptep_set_access_flags()

If between (d) and (e) the hardware (another CPU) updates the dirty state (clears PTE_RDONLY), CPU1 will override the PTE_RDONLY bit, marking the entry clean again.

This patch implements an arm64-specific ptep_set_access_flags() function to perform an atomic update of the PTE flags.

Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Tested-by: Julien Grall <julien.grall@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 4.3+
[will: reworded comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
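The mainline fix uses an exclusive-load/store loop in inline assembly; a simplified cmpxchg-based sketch of the same idea (not the verbatim kernel code) is:

  int ptep_set_access_flags(struct vm_area_struct *vma,
                            unsigned long address, pte_t *ptep,
                            pte_t entry, int dirty)
  {
      pteval_t old, new;

      if (pte_same(*ptep, entry))
          return 0;

      /* Only the access flags and write permission may be updated. */
      pte_val(entry) &= PTE_AF | PTE_WRITE | PTE_DIRTY;

      /*
       * Merge the new flags into the live PTE atomically, so that a
       * concurrent hardware clearing of PTE_RDONLY (i.e. a dirty
       * update) is never overwritten with a stale, clean copy.
       */
      do {
          old = pte_val(*ptep);
          new = old | pte_val(entry);
      } while (cmpxchg_relaxed(&pte_val(*ptep), old, new) != old);

      flush_tlb_fix_spurious_fault(vma, address);
      return 1;
  }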
-
Ganapatrao Kulkarni authored
Enable NUMA balancing for arm64 platforms. Add pte, pmd protnone helpers for use by automatic NUMA balancing. Reviewed-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
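The protnone helpers follow the same pattern as on other architectures; a sketch:

  #ifdef CONFIG_NUMA_BALANCING
  /*
   * A PROT_NONE entry has the software PTE_PROT_NONE bit set while
   * being invalid; automatic NUMA balancing uses these to detect
   * pages that are candidates for migration.
   */
  static inline int pte_protnone(pte_t pte)
  {
      return (pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE;
  }

  static inline int pmd_protnone(pmd_t pmd)
  {
      return pte_protnone(pmd_pte(pmd));
  }
  #endif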
-
Ganapatrao Kulkarni authored
Attempt to get the memory and CPU NUMA node via of_numa. If that fails, fall back to the dummy NUMA node and map all memory and CPUs to node 0. Tested-by: Shannon Zhao <shannon.zhao@linaro.org> Reviewed-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
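A sketch of the dummy fallback, one fake node covering all memory (assuming a numa_add_memblk(nid, start, end) helper as on other architectures):

  static int __init dummy_numa_init(void)
  {
      struct memblock_region *mblk;
      int ret;

      pr_info("NUMA: Faking a node at [mem %#018Lx-%#018Lx]\n",
              0LLU, PFN_PHYS(max_pfn) - 1);

      /* Map every memblock region to node 0. */
      for_each_memblock(memory, mblk) {
          ret = numa_add_memblk(0, mblk->base, mblk->base + mblk->size);
          if (ret)
              return ret;
      }

      node_set(0, numa_nodes_parsed);
      return 0;
  }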
-
David Daney authored
In order to extract NUMA information from the device tree, we need to have the tree in its unflattened form. Move the call to bootmem_init() from the tail of paging_init() into setup_arch(), and adjust the header files so that its declaration is visible. Move the unflatten_device_tree() call between the calls to paging_init() and bootmem_init(). Follow-on patches add NUMA handling to bootmem_init(). Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
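The resulting boot-time ordering in setup_arch() (surrounding calls elided; a sketch):

  void __init setup_arch(char **cmdline_p)
  {
      /* ... */
      paging_init();      /* no longer calls bootmem_init() itself */

      if (acpi_disabled)
          unflatten_device_tree();    /* NUMA parsing needs the unflattened DT */

      bootmem_init();     /* follow-on patches hook NUMA init in here */
      /* ... */
  }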
-
David Daney authored
Add device tree parsing for NUMA topology, using the "numa-node-id" device property in distance-map and cpu nodes. This is a complete rewrite of a previous patch by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Signed-off-by: David Daney <david.daney@cavium.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ganapatrao Kulkarni authored
Add DT bindings for numa mapping of memory, CPUs and IOs. Reviewed-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Signed-off-by: David Daney <david.daney@cavium.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
There are two problems with the UEFI stub DT memory node removal routine:

- it deletes nodes as it traverses the tree, which happens to work but is not supported, as deletion invalidates the node iterator;
- deleting memory nodes entirely may discard annotations in the form of additional properties on the nodes.

Since the discovery of DT memory nodes occurs strictly before the UEFI init sequence, we can simply clear the memblock memory table before parsing the UEFI memory map. This way, it is no longer necessary to remove the nodes, so we can remove that logic from the stub as well.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
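A sketch of the resulting reserve_regions() logic (iterator and helper names assumed from the 4.6-era EFI code):

  static __init void reserve_regions(void)
  {
      efi_memory_desc_t *md;

      /*
       * Discard any memblocks populated from DT memory nodes: the
       * UEFI memory map is authoritative, so the stub no longer
       * needs to delete the nodes themselves.
       */
      memblock_remove(0, (phys_addr_t)ULLONG_MAX);

      for_each_efi_memory_desc(&memmap, md) {
          u64 paddr = md->phys_addr;
          u64 size = md->num_pages << EFI_PAGE_SHIFT;

          if (is_normal_ram(md))
              early_init_dt_add_memory_arch(paddr, size);
          /* ... reserve the regions the kernel must not touch ... */
      }
  }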
-
Suzuki K Poulose authored
With a VHE-capable CPU, the kernel can run at EL2, and this is decided at early boot. If some of the CPUs didn't start in EL2 or don't have VHE, we could have CPUs running at different exception levels, all in the same kernel! This patch adds an early check for the secondary CPUs to detect such situations. For each non-boot CPU, add a sanity check to make sure we don't have different run levels w.r.t. the boot CPU. We save the information on whether the boot CPU is running in hyp mode or not, and ensure the remaining CPUs match it. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> [will: made boot_cpu_hyp_mode static] Signed-off-by: Will Deacon <will.deacon@arm.com>
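A sketch of the check; it reuses the cpu_panic_kernel() helper introduced in the next commit below (names assumed, not verbatim kernel code):

  static bool boot_cpu_hyp_mode;

  /* Record the boot CPU's run level during early setup. */
  static inline void save_boot_cpu_run_el(void)
  {
      boot_cpu_hyp_mode = is_hyp_mode_available();
  }

  /* On each secondary CPU: refuse to mix EL1 and EL2 kernels. */
  static void verify_cpu_run_el(void)
  {
      bool in_el2 = is_hyp_mode_available();

      if (in_el2 != boot_cpu_hyp_mode) {
          pr_crit("CPU%d: mismatched Exception Level (EL%d) with boot CPU (EL%d)\n",
                  smp_processor_id(), in_el2 ? 2 : 1,
                  boot_cpu_hyp_mode ? 2 : 1);
          cpu_panic_kernel();
      }
  }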
-
Suzuki K Poulose authored
During the activation of a secondary CPU, we could report serious configuration issues and hence request that the kernel crash. We currently do this for the CPU ASID-bit check; we will also need it for handling mismatched exception levels on CPUs with VHE. Hence, add a helper to do the same, for reusability. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 14 Apr, 2016 7 commits
-
James Morse authored
With CONFIG_PROVE_LOCKING, CONFIG_DEBUG_LOCKDEP and CONFIG_TRACE_IRQFLAGS enabled, lockdep will compare current->hardirqs_enabled with the flags from local_irq_save(). When a debug exception occurs, interrupts are disabled in entry.S, but lockdep isn't told, resulting in:

DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
------------[ cut here ]------------
WARNING: at ../kernel/locking/lockdep.c:3523
Modules linked in:
CPU: 3 PID: 1752 Comm: perf Not tainted 4.5.0-rc4+ #2204
Hardware name: ARM Juno development board (r1) (DT)
task: ffffffc974868000 ti: ffffffc975f40000 task.ti: ffffffc975f40000
PC is at check_flags.part.35+0x17c/0x184
LR is at check_flags.part.35+0x17c/0x184
pc : [<ffffff80080fc93c>] lr : [<ffffff80080fc93c>] pstate: 600003c5
[...]
---[ end trace 74631f9305ef5020 ]---
Call trace:
[<ffffff80080fc93c>] check_flags.part.35+0x17c/0x184
[<ffffff80080ffe30>] lock_acquire+0xa8/0xc4
[<ffffff8008093038>] breakpoint_handler+0x118/0x288
[<ffffff8008082434>] do_debug_exception+0x3c/0xa8
[<ffffff80080854b4>] el1_dbg+0x18/0x6c
[<ffffff80081e82f4>] do_filp_open+0x64/0xdc
[<ffffff80081d6e60>] do_sys_open+0x140/0x204
[<ffffff80081d6f58>] SyS_openat+0x10/0x18
[<ffffff8008085d30>] el0_svc_naked+0x24/0x28
possible reason: unannotated irqs-off.
irq event stamp: 65857
hardirqs last enabled at (65857): [<ffffff80081fb1c0>] lookup_mnt+0xf4/0x1b4
hardirqs last disabled at (65856): [<ffffff80081fb188>] lookup_mnt+0xbc/0x1b4
softirqs last enabled at (65790): [<ffffff80080bdca4>] __do_softirq+0x1f8/0x290
softirqs last disabled at (65757): [<ffffff80080be038>] irq_exit+0x9c/0xd0

This patch adds the annotations to do_debug_exception(), while trying not to call trace_hardirqs_off() if el1_dbg() interrupted a task that already had irqs disabled.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
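A sketch of the annotation (fault handling and signal delivery elided):

  asmlinkage int __exception do_debug_exception(unsigned long addr,
                                                unsigned int esr,
                                                struct pt_regs *regs)
  {
      const struct fault_info *inf = debug_fault_info + DBG_ESR_EVT(esr);
      int rv;

      /*
       * Tell lockdep that we disabled irqs in entry.S, but do nothing
       * if they were already off: el1_dbg may have interrupted a task
       * that had irqs disabled, and re-annotating would clobber the
       * last-enabled/disabled addresses.
       */
      if (interrupts_enabled(regs))
          trace_hardirqs_off();

      rv = inf->fn(addr, esr, regs) ? 0 : 1;   /* ... signal delivery elided ... */

      /* Restore the annotation on the way out. */
      if (interrupts_enabled(regs))
          trace_hardirqs_on();

      return rv;
  }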
-
Anna-Maria Gleixner authored
Since commit 1cf4f629 ("cpu/hotplug: Move online calls to hotplugged cpu") it is ensured that callbacks of CPU_ONLINE and CPU_DOWN_PREPARE are processed on the hotplugged CPU. Due to this, SMP function calls are no longer required. Replace smp_call_function_single() with a direct call of hw_breakpoint_reset(). To keep the calling convention, interrupts are explicitly disabled around the call. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
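A sketch of the resulting notifier (the clear_os_lock() variant in the following commit is the same minus the irq fiddling):

  static int hw_breakpoint_reset_notify(struct notifier_block *self,
                                        unsigned long action, void *hcpu)
  {
      if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE) {
          /*
           * The notifier now runs on the hotplugged CPU itself, so a
           * cross-call is unnecessary; keep irqs off to preserve the
           * old calling convention.
           */
          local_irq_disable();
          hw_breakpoint_reset(NULL);
          local_irq_enable();
      }
      return NOTIFY_OK;
  }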
-
Anna-Maria Gleixner authored
Since commit 1cf4f629 ("cpu/hotplug: Move online calls to hotplugged cpu") it is ensured that callbacks of CPU_ONLINE and CPU_DOWN_PREPARE are processed on the hotplugged CPU. Due to this, SMP function calls are no longer required. Replace smp_call_function_single() with a direct call to clear_os_lock(). The function writes the OSLAR register to clear OS locking. This does not require to be called with interrupts disabled, therefore the smp_call_function_single() calling convention is not preserved. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
The mapping of the kernel consists of four segments, each of which is mapped with different permission attributes and/or lifetimes. To optimize the TLB and translation table footprint, we define various opaque constants in the linker script that resolve to different alignment values depending on the page size and whether CONFIG_DEBUG_ALIGN_RODATA is set.

Considering that:

- a 4 KB granule kernel benefits from a 64 KB segment alignment (due to the fact that it allows the use of the contiguous bit),
- the minimum alignment of the .data segment is THREAD_SIZE already, not PAGE_SIZE (i.e., we already have padding between _data and the start of the .data payload in many cases),
- 2 MB is a suitable alignment value on all granule sizes, either for mapping directly (level 2 on 4 KB), or via the contiguous bit (level 3 on 16 KB and 64 KB),
- anything beyond 2 MB exceeds the minimum alignment mandated by the boot protocol, and can only be mapped efficiently if the physical alignment happens to be the same,

we can simplify this by standardizing on 64 KB (or 2 MB) explicitly, i.e., regardless of granule size, all segments are aligned either to 64 KB, or to 2 MB if CONFIG_DEBUG_ALIGN_RODATA=y. This also means we can drop the Kconfig dependency of CONFIG_DEBUG_ALIGN_RODATA on CONFIG_ARM64_4K_PAGES.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
Keeping .head.text out of the .text mapping buys us very little: its actual payload is only 4 KB, most of which is padding, but the page alignment may add up to 2 MB (in case of CONFIG_DEBUG_ALIGN_RODATA=y) of additional padding to the uncompressed kernel Image. Also, on 4 KB granule kernels, the 4 KB misalignment of .text forces us to map the adjacent 56 KB of code without the PTE_CONT attribute, and since this region contains things like the vector table and the GIC interrupt handling entry point, this region is likely to benefit from the reduced TLB pressure that results from PTE_CONT mappings. So remove the alignment between the .head.text and .text sections, and use the [_text, _etext) rather than the [_stext, _etext) interval for mapping the .text segment. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
Apart from the arm64/linux and EFI header data structures, there is nothing in the .head.text section that must reside at the beginning of the Image. So let's move it to the .init section where it belongs. Note that this involves some minor tweaking of the EFI header, primarily because the address of 'stext' no longer coincides with the start of the .text section. It also requires a couple of relocated symbol references to be slightly rewritten or their definition moved to the linker script. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
Replace the poorly defined term chunk with segment, which is a term that is already used by the ELF spec to describe contiguous mappings with the same permission attributes of statically allocated ranges of an executable. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-