- 19 Nov, 2019 2 commits
-
-
Christophe Leroy authored
Since commit f86ef74e ("powerpc/8xx: Fix vaddr for IMMR early remap"), the IMMR area has been mapped at startup with fixmap. Use that fixmap directly instead of calling ioremap(), this avoids calling ioremap() early before the slab is available. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f816ccdbd15b97cf43c5a8c7cc8dfa8db58ff036.1568294935.git.christophe.leroy@c-s.fr
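A minimal sketch of the idea, assuming the FIX_IMMR_BASE fixmap index introduced by f86ef74e; the helper name is illustrative, not the actual kernel function:

    #include <asm/fixmap.h>	/* fix_to_virt(), FIX_IMMR_BASE on the 8xx */
    #include <linux/io.h>

    /* Hypothetical helper: return the IMMR base from the existing fixmap
     * mapping instead of calling ioremap(), which would fail this early
     * because the slab allocator is not available yet. */
    static void __iomem *immr_base(void)
    {
    	return (void __iomem *)fix_to_virt(FIX_IMMR_BASE);
    }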
-
Christophe Leroy authored
Functions cpm1_clk_setup(), cpm1_set_pin(), cpm_pic_init() and mpc8xx_pic_init() are only called from __init functions, so mark them __init as well. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c27168ef054f3a52edcf0ff91652700d53b3e32d.1568294563.git.christophe.leroy@c-s.fr
-
- 18 Nov, 2019 8 commits
-
-
Christophe Leroy authored
SET_MSR_EE() is only used in this file and doesn't provide any added value compared to mtmsr(). Drop it. Add a wrtee() inline function using the wrtee/wrteei instructions. Replace #ifdefs with IS_ENABLED(). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a28a20514d5f6df9629c1a117b667e48c4272736.1567068137.git.christophe.leroy@c-s.fr
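A hedged sketch of what such a wrtee() helper can look like (the exact kernel version may differ; MSR_EE comes from asm/reg.h):

    static inline void wrtee(unsigned long val)
    {
    	if (__builtin_constant_p(val))
    		asm volatile("wrteei %0" : : "i" ((val & MSR_EE) ? 1 : 0) : "memory");
    	else
    		asm volatile("wrtee %0" : : "r" (val) : "memory");
    }

Callers can then use wrtee(MSR_EE) or wrtee(0) and let the compiler pick the immediate form (wrteei) for constant arguments.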
-
Christophe Leroy authored
Most 8xx registers have specific names, so just include reg_8xx.h all the time in reg.h so that they are defined even when CONFIG_PPC_8xx is not selected. This avoids the need for #ifdefs in C code. Guard SPRN_ICTRL with #ifdef CONFIG_PPC_8xx, as this register has the same name but a different meaning and a different SPR number than another register on the mpc7450. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/dd82934ad91aab607d0eb7e626c14e6ac0d654eb.1567068137.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
mftb() includes a feature fixup for CELL ppc. Use the ASM_FTR_IFSET() macro instead of open-coding the setup of the fixup sections. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ac19713826fa55e9e7bfe3100c5a7b1712ab9526.1566999711.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
Commit d2f15e09 ("powerpc/32: always populate page tables for Abatron BDI.") wrongly populates page tables on every PPC32 platform for the sake of the BDI, and doesn't update them after init (removing RX on the init section, setting text and rodata read-only). Only the 8xx requires page tables to be populated for using the BDI. They also need to be populated in order to see the mappings in /sys/kernel/debug/kernel_page_tables. On BOOK3S_32, pages that are not mapped by page tables are mapped by BATs. The BDI knows about BATs, and they can be viewed in /sys/kernel/debug/powerpc/block_address_translation. Only set page tables for RAM and IMMR on the 8xx and properly update them at the end of init. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c8610942203e0d93fcb02ad20c57edd3adb4c9d3.1566554029.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
DSISR (or ESR on some CPUs) has a bit to tell if the fault is due to a read or a write. Display it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Santosh Sivaraj <santosh@fossix.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4f88d7e6fda53b5f80a71040ab400242f6c8cb93.1566400889.git.christophe.leroy@c-s.fr
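A small hedged sketch of what displaying that bit can look like (the helper name is made up; DSISR_ISSTORE is the store bit on classic CPUs, with ESR_DST playing the same role on BookE):

    #include <asm/ptrace.h>
    #include <asm/reg.h>

    static void show_access_kind(struct pt_regs *regs)
    {
    	bool is_write = regs->dsisr & DSISR_ISSTORE;

    	pr_cont("%s access fault\n", is_write ? "write" : "read");
    }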
-
Christophe Leroy authored
powerpc always selects CONFIG_MMU and CONFIG_MMU is not checked anywhere else in powerpc code. Drop the #ifdef and the alternative part of is_ioremap_addr() Fixes: 9bd3bb67 ("mm/nvdimm: add is_ioremap_addr and use that to check ioremap address") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/de395e444fb8dd7a6365c3314d78e15ebb3d7d1b.1566382245.git.christophe.leroy@c-s.fr
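With the #ifdef gone, the remaining shape is roughly the following (the bounds macro names are an assumption here):

    static inline bool is_ioremap_addr(const void *x)
    {
    	unsigned long addr = (unsigned long)x;

    	return addr >= IOREMAP_BASE && addr < IOREMAP_END;
    }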
-
Christophe Leroy authored
BUG(), WARN() and friends use similar inline assembly to implement various traps with various flags. Let's refactor it via a new BUG_ENTRY() macro. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c19a82b37677ace0eebb0dc8c2120373c29c8dd1.1566219503.git.christophe.leroy@c-s.fr
-
Michael Ellerman authored
Merge changes from Scott's tree (https://git.kernel.org/pub/scm/linux/kernel/git/scottwood/linux): includes a couple of device tree fixes, a spelling fix, and leftover code cleanup.
-
- 17 Nov, 2019 4 commits
-
-
Valentin Longchamp authored
This removes the warnings that the 4 PCI bridges (i.e. the 4 PCI hosts) don't have any ranges property. Signed-off-by: Valentin Longchamp <valentin@longchamp.me> Signed-off-by: Scott Wood <oss@buserror.net>
-
Geert Uytterhoeven authored
Caching dates is never a good idea ;-) Fixes: e7affb1d ("powerpc/cache: add cache flush operation for various e500") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Scott Wood <oss@buserror.net>
-
Rasmus Villemoes authored
Since commit 302c059f (QE: use subsys_initcall to init qe), mpc85xx_qe_init() has done nothing apart from possibly emitting a pr_err(). As part of reducing the amount of QE-related code in arch/powerpc/ (and eventually supporting QE on other architectures), remove this low-hanging fruit. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Scott Wood <oss@buserror.net>
-
Valentin Longchamp authored
Change all phy-connection-type properties to phy-mode, which is better supported by the fman driver. Use the more readable fixed-link node for the 2 sgmii links. Change the RGMII link to rgmii-id as the clock delays are added by the phy. Signed-off-by: Valentin Longchamp <valentin@longchamp.me> Acked-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net>
-
- 14 Nov, 2019 2 commits
-
-
Harish authored
On older distributions like SLES12 SP5, gcc does not recognize the -no-pie option, causing the powerpc selftests build to fail with: gcc: error: unrecognized command line option ‘-no-pie’ Signed-off-by: Harish <harish@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191113094219.14946-1-harish@linux.ibm.com
-
Michael Ellerman authored
This is a slight rebase of Scott's next branch, which contained the KASLR support for book3e 32-bit, to squash in a couple of small fixes. See the original pull request: https://lore.kernel.org/r/20191022232155.GA26174@home.buserror.net
-
- 13 Nov, 2019 24 commits
-
-
Jason Yan authored
Add a document explaining how we implement KASLR for fsl_booke32. Signed-off-by: Jason Yan <yanaijie@huawei.com> Signed-off-by: Scott Wood <oss@buserror.net> [mpe: Add it to the index as well] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Jason Yan authored
Like other architectures such as x86 and arm64, include the KASLR offset in the VMCOREINFO ELF note to assist in debugging. After this, we can use the crash --kaslr option to parse a vmcore generated from a KASLR kernel. Note: the crash tool needs to support --kaslr too. Signed-off-by: Jason Yan <yanaijie@huawei.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
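A hedged sketch of the mechanism, modeled on what x86/arm64 do; kaslr_offset() is assumed to be provided elsewhere in the KASLR series:

    #include <linux/crash_core.h>

    void arch_crash_save_vmcoreinfo(void)
    {
    	/* crash --kaslr reads this to relocate kernel symbols */
    	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
    }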
-
Jason Yan authored
When KASLR is enabled, the kernel offset is different for every boot, which makes the kernel harder to debug. Dump the kernel offset on panic so that we can easily debug the kernel. This code is derived from x86/arm64, which have similar functionality. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
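A sketch of the usual panic-notifier pattern borrowed from x86/arm64 (names such as kaslr_offset() are assumptions, not necessarily the exact powerpc code):

    #include <linux/kernel.h>
    #include <linux/notifier.h>
    #include <linux/init.h>
    #include <linux/printk.h>

    static int dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
    {
    	pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n", kaslr_offset(), KERNELBASE);
    	return 0;
    }

    static struct notifier_block kernel_offset_notifier = {
    	.notifier_call = dump_kernel_offset,
    };

    static int __init register_kernel_offset_dumper(void)
    {
    	if (kaslr_offset() > 0)
    		atomic_notifier_chain_register(&panic_notifier_list,
    					       &kernel_offset_notifier);
    	return 0;
    }
    device_initcall(register_kernel_offset_dumper);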
-
Jason Yan authored
One may want to disable KASLR at boot time, so provide a 'nokaslr' command line parameter to support this. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
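A minimal sketch of the check; the real code has to scan the boot arguments very early (before relocation), so a plain strstr() over the command line string is shown only as an illustration:

    #include <linux/string.h>

    static bool kaslr_disabled_cmdline(const char *cmdline)
    {
    	return cmdline && strstr(cmdline, "nokaslr");
    }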
-
Jason Yan authored
The original kernel image still exists in memory after relocation; clear it now. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
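A hedged sketch of what the wipe can look like once the copied kernel is running at the randomized location (the function name and symbol usage are assumptions):

    #include <linux/string.h>
    #include <asm/sections.h>	/* _stext, _end */

    void __init kaslr_late_init(void)
    {
    	if (kaslr_offset() > 0) {
    		unsigned long kernel_sz = (unsigned long)_end - (unsigned long)_stext;

    		/* wipe the stale copy left at the original base address */
    		memzero_explicit((void *)KERNELBASE, kernel_sz);
    	}
    }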
-
Jason Yan authored
Now that we have basic support for relocating the kernel to an appropriate place, we can start to randomize the offset. Entropy is derived from the banner and the timer, which change every build and boot. This is not particularly strong, so additionally the bootloader may pass entropy via the /chosen/kaslr-seed node in the device tree. We use the first 512M of low memory to randomize the kernel image. The memory is split into 64M zones. We use the lower 8 bits of the entropy to choose the index of the 64M zone, then choose a 16K-aligned offset inside that zone to put the kernel at. We also check for overlap with areas like the dtb, initrd and crashkernel areas. If we cannot find a proper area, KASLR is disabled and we boot from the original location. Some pieces of code are derived from arch/x86/boot/compressed/kaslr.c and arch/arm64/kernel/kaslr.c, such as rotate_xor(). Credit goes to Kees and Ard. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
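A sketch of the zone/offset arithmetic described above; the constants and helper name are illustrative, not the exact kernel code:

    #include <linux/sizes.h>

    #define KASLR_REGION	SZ_512M		/* randomize within the low 512M  */
    #define KASLR_ZONE	SZ_64M		/* region is split into 64M zones */
    #define KASLR_ALIGN	SZ_16K		/* kernel stays 16K aligned       */

    static unsigned long pick_kaslr_offset(u64 entropy)
    {
    	unsigned long zone = (entropy & 0xff) % (KASLR_REGION / KASLR_ZONE);
    	unsigned long slot = (entropy >> 8)   % (KASLR_ZONE / KASLR_ALIGN);

    	/* caller still has to reject offsets overlapping dtb/initrd/crashkernel */
    	return zone * KASLR_ZONE + slot * KASLR_ALIGN;
    }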
-
Jason Yan authored
This patch adds support for booting the kernel from places other than KERNELBASE. Since CONFIG_RELOCATABLE is already supported, what we need to do is map or copy the kernel to a proper place and relocate. Freescale Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1 entries are not suitable for mapping the kernel directly in a randomized region, so we choose to copy the kernel to a proper place and restart to relocate. The kernel offset is not randomized yet (a fixed 64M is used); we will randomize it in the next patch. Signed-off-by: Jason Yan <yanaijie@huawei.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net> [mpe: Use PTRRELOC() in early_init()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Jason Yan authored
Add a new helper reloc_kernel_entry() to jump back to the start of the new kernel. After we put the new kernel in a randomized place we can use this new helper to enter the kernel and begin to relocate again. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Jason Yan authored
Add a new helper create_kaslr_tlb_entry() to create a TLB entry from a virtual and physical address. This is a preparation for booting the kernel at a randomized address. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Jason Yan authored
Currently the kernel base is a fixed value, KERNELBASE. To support KASLR, we need a variable to store the kernel base. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
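As an assumption drawn from the rest of the series (the message itself does not name it), the variable looks something like this:

    /* Set early at boot; equals KERNELBASE unless KASLR moved the kernel. */
    unsigned long kernstart_virt_addr __ro_after_init = KERNELBASE;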
-
Jason Yan authored
These two variables are both defined in init_32.c and init_64.c. Move them to init-common.c and make them __ro_after_init. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Jason Yan authored
M_IF_NEEDED is defined in too many places. Move it to a common place and rename it to MAS2_M_IF_NEEDED, which is more readable. Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Tested-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michal Suchanek authored
Currently the kernel sysfs interface does not distinguish the case where fadump is supported by firmware but disabled in the kernel from the case where it is completely unsupported. The user can investigate the devicetree, but it is more reasonable to provide sysfs files in case we get some fadumpv2 in the future. With this patch, sysfs files are available whenever fadump is supported by firmware. There is a duplicate message about the lack of firmware support in fadump_reserve_mem and setup_fadump; remove the duplicate in setup_fadump. Signed-off-by: Michal Suchanek <msuchanek@suse.de> Reviewed-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191107164757.15140-1-msuchanek@suse.de
-
Michal Suchanek authored
Since commit ed1cd6de ("powerpc: Activate CONFIG_THREAD_INFO_IN_TASK") current_is_64bit() is equivalent to !is_32bit_task(). Remove the redundant function. Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190912194633.12045-1-msuchanek@suse.de
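The equivalence the message relies on, written out (purely illustrative, since the point of the patch is to delete the helper and use the test directly at call sites):

    static inline bool current_is_64bit(void)
    {
    	return !is_32bit_task();	/* identical test, hence redundant */
    }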
-
Sam Bobroff authored
Currently when an EEH error is detected, the system log receives the same (or almost the same) message twice: EEH: PHB#0 failure detected, location: N/A EEH: PHB#0 failure detected, location: N/A or EEH: eeh_dev_check_failure: Frozen PHB#0-PE#0 detected EEH: Frozen PHB#0-PE#0 detected This looks like a bug, but in fact the messages are from different functions and mean slightly different things. So keep both but change one of the messages slightly, so that it's clear they are different: EEH: PHB#0 failure detected, location: N/A EEH: Recovering PHB#0, location: N/A or EEH: eeh_dev_check_failure: Frozen PHB#0-PE#0 detected EEH: Recovering PHB#0-PE#0 Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/43817cb6e6631b0828b9a6e266f60d1f8ca8eb22.1571288375.git.sbobroff@linux.ibm.com
-
Leonardo Bras authored
Change the return variable to bool (matching the return type) and avoid the ternary operation before returning. Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190802133914.30413-1-leonardo@linux.ibm.com
-
Christoph Hellwig authored
The powerpc version of dma-mapping.h only contains a version of get_arch_dma_ops that always returns NULL. Replace it with the asm-generic version that does the same. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190807150752.17894-1-hch@lst.de
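The asm-generic fallback being switched to is, roughly, the following (shape of include/asm-generic/dma-mapping.h at the time):

    static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
    {
    	return NULL;	/* no per-bus/arch DMA ops override */
    }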
-
Cédric Le Goater authored
When the machine crash handler is invoked, all interrupts are masked but interrupts which have not been started yet do not have an ESB page mapped in the Linux address space. This crashes the 'crash kexec' sequence on sPAPR guests. To fix, force the mapping of the ESB page when an interrupt is being mapped in the Linux IRQ number space. This is done by setting the initial state of the interrupt to OFF which is not necessarily the case on PowerNV. Fixes: 243e2511 ("powerpc/xive: Native exploitation of the XIVE interrupt controller") Cc: stable@vger.kernel.org # v4.12+ Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191031063100.3864-1-clg@kaod.org
-
Andrew Donnellan authored
It's KUAP, not KAUP. Fix typo in INT_COMMON macro. Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191022060603.24101-1-ajd@linux.ibm.com
-
Thomas Huth authored
The FSF does not reside in "675 Mass Ave, Cambridge" anymore... let's simply use proper SPDX identifiers instead. Signed-off-by: Thomas Huth <thuth@redhat.com> Acked-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190828060737.32531-1-thuth@redhat.com
-
Aneesh Kumar K.V authored
Avoid confusion when printing an Oops message like the one below: Faulting instruction address: 0xc00000000008bdb4 Oops: Kernel access of bad area, sig: 11 [#1] LE PAGE_SIZE=64K MMU=Radix MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV This happened because we never clear the MMU_FTR_HPTE_TABLE feature flag even when we run with radix translation. It was discussed that we should treat this feature flag as an indication of the capability to run hash translation, and we should not clear the flag even when running in radix translation. All code paths check radix_enabled() and, if it is true, consider that we are running with radix translation. Follow the same sequence when choosing the MMU translation string used in the Oops message. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Acked-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190711145814.17970-1-aneesh.kumar@linux.ibm.com
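A hedged sketch of the selection logic described (the helper name is made up):

    static const char *mmu_oops_str(void)
    {
    	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
    		return radix_enabled() ? " MMU=Radix" : " MMU=Hash";
    	return "";
    }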
-
YueHaibing authored
arch/powerpc/platforms/cell/spufs/inode.c:201:22: warning: variable ctx set but not used [-Wunused-but-set-variable] It is not used since commit 67cba9fd ("move spu_forget() into spufs_rmdir()") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191023134423.15052-1-yuehaibing@huawei.com
-
YueHaibing authored
The callback function of call_rcu() just calls a kfree(), so we can use kfree_rcu() instead of call_rcu() + callback function. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190711141818.18044-1-yuehaibing@huawei.com
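A generic sketch of the transformation (the struct and field names are hypothetical):

    struct foo {
    	struct rcu_head rcu;
    	/* ... payload ... */
    };

    static void release_foo(struct foo *f)
    {
    	/* was: call_rcu(&f->rcu, foo_rcu_cb) where foo_rcu_cb() only did kfree() */
    	kfree_rcu(f, rcu);
    }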
-
YueHaibing authored
Fix sparse warnings: arch/powerpc/platforms/powernv/opal-psr.c:20:1: warning: symbol 'psr_mutex' was not declared. Should it be static? arch/powerpc/platforms/powernv/opal-psr.c:27:3: warning: symbol 'psr_attrs' was not declared. Should it be static? arch/powerpc/platforms/powernv/opal-powercap.c:20:1: warning: symbol 'powercap_mutex' was not declared. Should it be static? arch/powerpc/platforms/powernv/opal-sensor-groups.c:20:1: warning: symbol 'sg_mutex' was not declared. Should it be static? Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190702131733.44100-1-yuehaibing@huawei.com
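The fix is simply to give these file-local objects internal linkage, e.g. (sketch, using one of the symbols named in the warnings):

    #include <linux/mutex.h>

    static DEFINE_MUTEX(psr_mutex);	/* was: DEFINE_MUTEX(psr_mutex); */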
-