- 05 Nov, 2019 2 commits
-
-
Nathan Lynch authored
Remove some stray blank lines, convert a printk to pr_warn, and address a line length violation. One functional change: use WARN_ON instead of BUG_ON in case H_PROD of a ceded thread yields an unexpected result from the platform. We can expect this code path to get uninterruptibly stuck in __cpu_die() if this happens, but that's more desirable than crashing. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Fixes: b6db63d1 ("pseries/pseries: Add code to online/offline CPUs of a DLPAR node") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191016183611.10867-2-nathanl@linux.ibm.com
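
A minimal sketch of the functional change described above, with simplified names and an assumed call site (not the actual hotplug-cpu.c code): warn and carry on if H_PROD returns an unexpected result, rather than crashing with BUG_ON.

```c
#include <linux/bug.h>
#include <asm/hvcall.h>
#include <asm/plpar_wrappers.h>

static void prod_ceded_cpu_example(long hwcpu)
{
	long rc = plpar_hcall_norets(H_PROD, hwcpu);

	/* was: BUG_ON(rc != H_SUCCESS); */
	WARN_ON(rc != H_SUCCESS);
	/* worst case we wait in __cpu_die() forever, which beats an oops */
}
```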
-
Michael Ellerman authored
On systems where TM (Transactional Memory) is disabled, the tm-signal-sigreturn-nt test causes a SIGILL:

  test: tm_signal_sigreturn_nt
  tags: git_version:7c202575
  !! child died by signal 4
  failure: tm_signal_sigreturn_nt

We should skip the test if TM is not available. Fixes: 34642d70 ("selftests/powerpc: Add checks for transactional sigreturn") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191104233524.24348-1-mpe@ellerman.id.au
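
A sketch of the skip logic, assuming the powerpc selftest harness's SKIP_IF() and have_htm() helpers (the test body is a placeholder, not the real test):

```c
#include "utils.h"
#include "tm.h"

static int tm_signal_sigreturn_nt_test(void)
{
	/* Bail out cleanly instead of SIGILLing on TM-less systems. */
	SKIP_IF(!have_htm());

	/* ... the actual transactional sigreturn test ... */
	return 0;
}

int main(void)
{
	return test_harness(tm_signal_sigreturn_nt_test, "tm_signal_sigreturn_nt");
}
```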
-
- 04 Nov, 2019 1 commit
-
-
Michael Ellerman authored
Merge our fixes branch, primarily to bring in the powernv CPU hotplug warning fix.
-
- 30 Oct, 2019 4 commits
-
-
Michael Ellerman authored
Some of our scripts are passed $objdump and then call it as "$objdump". This doesn't work if the value contains spaces, for example when we're using ccache; you get errors such as:

  ./arch/powerpc/tools/relocs_check.sh: line 48: ccache ppc64le-objdump: No such file or directory
  ./arch/powerpc/tools/unrel_branch_check.sh: line 26: ccache ppc64le-objdump: No such file or directory

Fix it by not quoting the string when we expand it, allowing the shell to split it into separate words. Fixes: a71aa05e ("powerpc: Convert relocs_check to a shell script using grep") Fixes: 4ea80652 ("powerpc/64s: Tool to flag direct branches from unrelocated interrupt vectors") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191024004730.32135-1-mpe@ellerman.id.au
-
Michael Ellerman authored
As part of the uapi we export a lot of PT_xx defines for each register in struct pt_regs. These are expressed as an index from gpr[0], in units of unsigned long. Currently there's nothing tying the values of those defines to the actual layout of the struct. But we *don't* want to change the uapi defines to derive the PT_xx values based on the layout of the struct, those values are ABI and must never change. Instead we want to do the reverse, make sure that the layout of the struct never changes vs the PT_xx defines. So add build time checks of that. This probably seems paranoid, but at least once in the past someone has sent a patch that would have broken the ABI if it hadn't been spotted. Although it probably would have been detected via testing, it's preferable to just quash any issues at the source. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191030111231.22720-1-mpe@ellerman.id.au
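
An illustrative sketch of the kind of build-time check described; the helper macro and function names here are assumptions, not the kernel's actual code, but the offsetof/BUILD_BUG_ON pattern is the point:

```c
#include <linux/build_bug.h>
#include <linux/ptrace.h>
#include <asm/ptrace.h>

#define CHECK_REG(_pt, _reg) \
	BUILD_BUG_ON(_pt * sizeof(unsigned long) != \
		     offsetof(struct user_pt_regs, _reg))

static void pt_regs_layout_check(void)
{
	CHECK_REG(PT_R0,  gpr[0]);	/* PT_R0 must index gpr[0]  */
	CHECK_REG(PT_NIP, nip);		/* PT_NIP must index nip    */
	CHECK_REG(PT_MSR, msr);		/* PT_MSR must index msr    */
	/* ... one check per exported PT_xx define ... */
}
```

If a struct member moves relative to its PT_xx define, the build fails immediately rather than silently breaking the uapi.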
-
Mathieu Malaterre authored
`pt_regs_check` is a dummy function; its only purpose is to break the build if struct pt_regs and struct user_pt_regs don't match. It has no functional purpose, and will get eliminated at link time or after init, depending on CONFIG_LD_DEAD_CODE_DATA_ELIMINATION. This commit adds a prototype to fix the warning at W=1:

  arch/powerpc/kernel/ptrace.c:3339:13: error: no previous prototype for ‘pt_regs_check’ [-Werror=missing-prototypes]

Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20181208154624.6504-1-malat@debian.org
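
A minimal sketch of the usual -Wmissing-prototypes fix (the BUILD_BUG_ON body is only an example of what such a dummy function contains): declare the function right before its definition, since it is intentionally never called.

```c
#include <linux/build_bug.h>
#include <linux/ptrace.h>
#include <asm/ptrace.h>

/* Prototype added purely to silence -Wmissing-prototypes. */
void pt_regs_check(void);

void pt_regs_check(void)
{
	BUILD_BUG_ON(offsetof(struct pt_regs, gpr) !=
		     offsetof(struct user_pt_regs, gpr));
	/* ... further layout assertions ... */
}
```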
-
Michael Ellerman authored
This test uses the PMU to count branch prediction hits/misses for a known loop, and compare the result to the reported spectre v2 mitigation. This gives us a way of sanity checking that the reported mitigation is actually in effect. Sample output for some cases, eg:

Power9:

  sysfs reports: 'Vulnerable'
  PM_BR_PRED_CCACHE: result 368 running/enabled 5792777124
  PM_BR_MPRED_CCACHE: result 319 running/enabled 5792775546
  PM_BR_PRED_PCACHE: result 2147483281 running/enabled 5792773128
  PM_BR_MPRED_PCACHE: result 213604201 running/enabled 5792771640
  Miss percent 9 %
  OK - Measured branch prediction rates match reported spectre v2 mitigation.

  sysfs reports: 'Mitigation: Indirect branch serialisation (kernel only)'
  PM_BR_PRED_CCACHE: result 895 running/enabled 5780320920
  PM_BR_MPRED_CCACHE: result 822 running/enabled 5780312414
  PM_BR_PRED_PCACHE: result 2147482754 running/enabled 5780308836
  PM_BR_MPRED_PCACHE: result 213639731 running/enabled 5780307912
  Miss percent 9 %
  OK - Measured branch prediction rates match reported spectre v2 mitigation.

  sysfs reports: 'Mitigation: Indirect branch cache disabled'
  PM_BR_PRED_CCACHE: result 2147483649 running/enabled 20540186160
  PM_BR_MPRED_CCACHE: result 2147483649 running/enabled 20540180056
  PM_BR_PRED_PCACHE: result 0 running/enabled 20540176090
  PM_BR_MPRED_PCACHE: result 0 running/enabled 20540174182
  Miss percent 100 %
  OK - Measured branch prediction rates match reported spectre v2 mitigation.

Power8:

  sysfs reports: 'Vulnerable'
  PM_BR_PRED_CCACHE: result 2147483649 running/enabled 3505888142
  PM_BR_MPRED_CCACHE: result 9 running/enabled 3505882788
  Miss percent 0 %
  OK - Measured branch prediction rates match reported spectre v2 mitigation.

  sysfs reports: 'Mitigation: Indirect branch cache disabled'
  PM_BR_PRED_CCACHE: result 2147483649 running/enabled 16931421988
  PM_BR_MPRED_CCACHE: result 2147483649 running/enabled 16931416478
  Miss percent 100 %
  OK - Measured branch prediction rates match reported spectre v2 mitigation.

  success: spectre_v2

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190520105520.22274-1-mpe@ellerman.id.au
-
- 29 Oct, 2019 3 commits
-
-
Nicholas Piggin authored
Commit e78a7614 ("idle: Prevent late-arriving interrupts from disrupting offline") changes arch_cpu_idle_dead to be called with interrupts disabled, which triggers the WARN in pnv_smp_cpu_kill_self. Fix this by fixing up irq_happened after hard disabling, rather than requiring there are no pending interrupts, similarly to what was done until commit 2525db04 ("powerpc/powernv: Simplify lazy IRQ handling in CPU offline"). Fixes: e78a7614 ("idle: Prevent late-arriving interrupts from disrupting offline") Reported-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Add unexpected_mask rather than checking for known bad values, change the WARN_ON() to a WARN_ON_ONCE()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191022115814.22456-1-npiggin@gmail.com
-
Michael Ellerman authored
Some of our TM (Transactional Memory) tests list "r1" (the stack pointer) as a clobbered register. GCC >= 9 doesn't accept this, and the build breaks:

  ptrace-tm-spd-tar.c: In function 'tm_spd_tar':
  ptrace-tm-spd-tar.c:31:2: error: listing the stack pointer register 'r1' in a clobber list is deprecated [-Werror=deprecated]
     31 |  asm __volatile__(
        |  ^~~
  ptrace-tm-spd-tar.c:31:2: note: the value of the stack pointer after an 'asm' statement must be the same as it was before the statement

We do have some fairly large inline asm blocks in these tests, and some of them do change the value of r1. However they should all return to C with the value in r1 restored, so I think it's legitimate to say r1 is not clobbered. As Segher points out, the r1 clobbers may have been added because of the use of `or 1,1,1`, however that doesn't actually clobber r1. Segher also points out that some of these tests do clobber LR, because they call functions, and that is not listed in the clobbers, so add that where appropriate. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191029095324.14669-1-mpe@ellerman.id.au
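
An illustrative sketch (not the actual selftest asm) of the shape of the change: drop "r1" from the clobber list, and list "lr" when the asm block makes a branch-and-link call.

```c
static inline void example_asm_block(void)
{
	asm __volatile__(
		"bl	1f;"	/* a branch-and-link clobbers LR */
		"1:	nop;"
		:
		:
		: "lr"		/* was: "r1" - the stack pointer must not be listed */
	);
}
```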
-
Thiago Jung Bauermann authored
The ultravisor will do an integrity check of the kernel image but we relocated it so the check will fail. Restore the original image by relocating it back to the kernel virtual base address. This works because during build vmlinux is linked with an expected virtual runtime address of KERNELBASE. Fixes: 6a9c930b ("powerpc/prom_init: Add the ESM call to prom_init") Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Tested-by: Michael Anderson <andmike@linux.ibm.com> [mpe: Add IS_ENABLED() to fix the CONFIG_RELOCATABLE=n build] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190911163433.12822-1-bauerman@linux.ibm.com
-
- 28 Oct, 2019 9 commits
-
-
Aneesh Kumar K.V authored
With a bolted hash page table entry, the kernel currently only uses the primary hash group when inserting the hash page table entry. In the rare case where the kernel finds all 8 primary hash slots occupied by bolted entries, this results in a hash page table insert failure for bolted entries. Avoid this by using the secondary hash group. This is different from what the kernel does for non-bolted mappings: with non-bolted entries the kernel will try the secondary group before removing an existing entry from the hash page table group. With bolted entries, prefer the primary hash group, and hence try to insert the page table entry by removing a slot from the primary group before trying the secondary hash group. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191024093542.29777-3-aneesh.kumar@linux.ibm.com
-
Aneesh Kumar K.V authored
If the hypervisor returns H_PTEG_FULL for the H_ENTER hcall, retry the hash page table insert by removing a random entry from the group. After some runtime, it is very well possible to find all 8 hash page table entry slots in the HPTE group used for a mapping. Don't fail a bolted entry insert in that case. With Storage Class Memory a user can hit this error easily, since a namespace enable/disable is equivalent to memory add/remove. This results in failures as reported below:

  $ ndctl create-namespace -r region1 -t pmem -m devdax -a 65536 -s 100M
  libndctl: ndctl_dax_enable: dax1.3: failed to enable
  Error: namespace1.2: failed to enable
  failed to create namespace: No such device or address

In the kernel log we find the details as below:

  Unable to create mapping for hot added memory 0xc000042006000000..0xc00004200d000000: -1
  dax_pmem: probe of dax1.3 failed with error -14

This indicates that we failed to create a bolted hash table entry for the direct-map address backing the namespace. We also observe failures such that not all namespaces will be enabled with the 'ndctl enable-namespace all' command. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191024093542.29777-2-aneesh.kumar@linux.ibm.com
-
Aneesh Kumar K.V authored
No functional change in this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191024093542.29777-1-aneesh.kumar@linux.ibm.com
-
Michael Ellerman authored
accumulate_stolen_time() is called prior to interrupt state being reconciled, which can trip the warning in arch_local_irq_restore():

  WARNING: CPU: 5 PID: 1017 at arch/powerpc/kernel/irq.c:258 .arch_local_irq_restore+0x9c/0x130
  ...
  NIP .arch_local_irq_restore+0x9c/0x130
  LR .rb_start_commit+0x38/0x80
  Call Trace:
    .ring_buffer_lock_reserve+0xe4/0x620
    .trace_function+0x44/0x210
    .function_trace_call+0x148/0x170
    .ftrace_ops_no_ops+0x180/0x1d0
    ftrace_call+0x4/0x8
    .accumulate_stolen_time+0x1c/0xb0
    decrementer_common+0x124/0x160

For now just mark it as notrace. We may change the ordering to call it after interrupt state has been reconciled, but that is a larger change. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191024055932.27940-1-mpe@ellerman.id.au
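
A minimal sketch of the fix (the body shown is a placeholder, not the real implementation): the notrace attribute keeps ftrace from hooking the function, so the trace-buffer code can no longer be entered before interrupt state is reconciled.

```c
notrace void accumulate_stolen_time(void)
{
	/* ... account stolen time from SPURR/PURR deltas ... */
}
```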
-
Michael Ellerman authored
We have several "defconfigs" that are not actually full defconfigs, they are just a base set of options which are then merged with other fragments to produce a working defconfig. The most obvious example is corenet_basic_defconfig, which only contains one symbol, CONFIG_CORENET_GENERIC=y. And in fact if you build it as a "defconfig" that one symbol ends up undefined, because its prerequisites are missing. There is also mpc85xx_base_defconfig which doesn't actually enable CONFIG_PPC_85xx. To avoid confusion, rename these config fragments to "foo_base.config" to make it clearer that they are not full defconfigs and are instead just fragments that are used to generate real defconfigs. Reported-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190528081614.26096-1-mpe@ellerman.id.au
-
Andrew Donnellan authored
Add a debug config fragment that we can use to put useful debug options into. It can be used like:

  # make foo_defconfig
  # make debug.config

Currently the only option included is to enable debugfs SCOM access. Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com> [mpe: Drop the special targets, just use the fragment directly] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190801045855.5822-1-ajd@linux.ibm.com
-
Aneesh Kumar K.V authored
With commit 7cc7867f ("mm/devm_memremap_pages: enable sub-section remap"), pmem namespaces are remapped in 2M chunks. On architectures like ppc64 we can map the memmap area using a 16MB hugepage size, and that can cover a memory range of 16G. While enabling new pmem namespaces, since memory is added in sub-section chunks, before creating a new memmap mapping the kernel should check whether there is an existing memmap mapping covering the new pmem namespace. Currently, this is validated by checking whether the section covering the range is already initialized or not. Considering there can be multiple namespaces in the same section, this can result in incorrect validation. Update this to check for sub-sections in the range. This is done by checking all pfns in the range we are mapping. We could optimize this by checking just one pfn in each sub-section, but since this is not a fast path we keep it simple. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190917123851.22553-1-aneesh.kumar@linux.ibm.com
-
Christopher M. Riedl authored
Xmon should be either fully or partially disabled depending on the kernel lockdown state. Put xmon into read-only mode for lockdown=integrity and prevent user entry into xmon when lockdown=confidentiality. Xmon checks the lockdown state on every attempted entry:

  (1) during early xmon'ing
  (2) when triggered via sysrq
  (3) when toggled via debugfs
  (4) when triggered via a previously enabled breakpoint

The following lockdown state transitions are handled:

  (1) lockdown=none -> lockdown=integrity
      set xmon read-only mode
  (2) lockdown=none -> lockdown=confidentiality
      clear all breakpoints, set xmon read-only mode, prevent user re-entry into xmon
  (3) lockdown=integrity -> lockdown=confidentiality
      clear all breakpoints, set xmon read-only mode, prevent user re-entry into xmon

Suggested-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190907061124.1947-3-cmr@informatik.wtf
-
Christopher M. Riedl authored
Read-only mode should not prevent listing and clearing any active breakpoints. Tested-by: Daniel Axtens <dja@axtens.net> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190907061124.1947-2-cmr@informatik.wtf
-
- 25 Oct, 2019 1 commit
-
-
Frederic Barrat authored
Recent cleanup in the way EEH support is added to a device causes a kernel oops when the cxl driver probes a device and creates virtual devices discovered on the FPGA:

  BUG: Kernel NULL pointer dereference at 0x000000a0
  Faulting instruction address: 0xc000000000048070
  Oops: Kernel access of bad area, sig: 7 [#1]
  ...
  NIP eeh_add_device_late.part.9+0x50/0x1e0
  LR eeh_add_device_late.part.9+0x3c/0x1e0
  Call Trace:
    _dev_info+0x5c/0x6c (unreliable)
    pnv_pcibios_bus_add_device+0x60/0xb0
    pcibios_bus_add_device+0x40/0x60
    pci_bus_add_device+0x30/0x100
    pci_bus_add_devices+0x64/0xd0
    cxl_pci_vphb_add+0xe0/0x130 [cxl]
    cxl_probe+0x504/0x5b0 [cxl]
    local_pci_probe+0x6c/0x110
    work_for_cpu_fn+0x38/0x60

The root cause is that those cxl virtual devices don't have a representation in the device tree and therefore no associated pci_dn structure. In eeh_add_device_late(), pdn is NULL, so edev is NULL and we oops. We never had explicit support for EEH for those virtual devices. Instead, EEH events are reported to the (real) pci device and handled by the cxl driver, which can then forward them to the virtual devices and handle dependencies. The fact that we try adding EEH support for the virtual devices is new, and a side-effect of the recent cleanup. This patch fixes it by skipping adding EEH support on powernv for devices which don't have a pci_dn structure. The cxl driver doesn't create virtual devices on pseries, so this patch intentionally doesn't change anything there. Fixes: b905f8cd ("powerpc/eeh: EEH for pSeries hot plug") Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com> Reviewed-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191016162833.22509-1-fbarrat@linux.ibm.com
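
A minimal sketch of the idea with an assumed hook name (not the actual eeh-powernv code): bail out of EEH setup when the device has no pci_dn, as is the case for cxl virtual devices that have no device-tree node.

```c
#include <linux/pci.h>
#include <asm/pci-bridge.h>	/* struct pci_dn, pci_get_pdn() */

static void pnv_eeh_probe_example(struct pci_dev *pdev)
{
	struct pci_dn *pdn = pci_get_pdn(pdev);

	if (!pdn)
		return;		/* no device-tree backing, skip EEH setup */

	/* ... normal EEH device setup ... */
}
```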
-
- 24 Oct, 2019 1 commit
-
-
Michael Ellerman authored
The default for the sigfuz test is to run 4000 iterations, but that can take quite a while and the test harness may kill the test. Reduce the number of iterations to 600, which gives a runtime of roughly one minute on a Power8 system. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191013234643.3430-1-mpe@ellerman.id.au
-
- 16 Oct, 2019 1 commit
-
-
Christophe Leroy authored
Make sure the starting address is aligned to the segment boundary, so that when we increment to the next segment its starting address is below the end address. Otherwise the last segment might get missed. Fixes: a68c31fc ("powerpc/32s: Implement Kernel Userspace Access Protection") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/067a1b09f15f421d40797c2d04c22d4049a1cee8.1571071875.git.christophe.leroy@c-s.fr
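
An illustrative sketch only, with assumed names (not the actual 32s KUAP code): align the start of the range down to a 256MB segment boundary before walking segments, so the loop cannot step past 'end' and skip the last segment.

```c
#include <linux/kernel.h>	/* ALIGN_DOWN() */

#define SEGMENT_SIZE	0x10000000UL	/* 256MB segment on book3s/32 */

static void kuap_update_segments_example(unsigned long addr, unsigned long end)
{
	addr = ALIGN_DOWN(addr, SEGMENT_SIZE);	/* the fix: align the start */

	for (; addr < end; addr += SEGMENT_SIZE) {
		/* ... update the segment register covering 'addr' ... */
	}
}
```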
-
- 13 Oct, 2019 1 commit
-
-
Michael Ellerman authored
Merge our fixes branch, to bring in the fixes for the KVM PCR bug and the spufs crash.
-
- 11 Oct, 2019 7 commits
-
-
Deb McLemore authored
When issuing a BMC soft poweroff during IPL, the poweroff can be lost, so the machine does not power off. This is because OPAL messages can be received before the opal-power code has registered its notifiers. Fix it by buffering messages: if we receive a message and do not yet have a handler for that type, store the message and replay it when a handler for that type is registered. Signed-off-by: Deb McLemore <debmc@linux.vnet.ibm.com> [mpe: Single unlock path in opal_message_notifier_register(), tweak comments/formatting and change log.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1526868278-4204-1-git-send-email-debmc@linux.vnet.ibm.com
-
Qian Cai authored
pkey_allows_readwrite() was first introduced in the commit 5586cf61 ("powerpc: introduce execute-only pkey"), but the usage was removed entirely in the commit a4fcc877 ("powerpc/pkeys: Preallocate execute-only key"). Found by the "-Wunused-function" compiler warning flag. Fixes: a4fcc877 ("powerpc/pkeys: Preallocate execute-only key") Signed-off-by: Qian Cai <cai@lca.pw> Acked-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1568733750-14580-1-git-send-email-cai@lca.pw
-
Qian Cai authored
At the beginning of setup_64.c, it has,

  #ifdef DEBUG
  #define DBG(fmt...) udbg_printf(fmt)
  #else
  #define DBG(fmt...)
  #endif

where DBG() could be compiled away, and generate warnings:

  arch/powerpc/kernel/setup_64.c: In function 'initialize_cache_info':
  arch/powerpc/kernel/setup_64.c:579:49: warning: suggest braces around empty body in an 'if' statement [-Wempty-body]
    DBG("Argh, can't find dcache properties !\n");
                                                 ^
  arch/powerpc/kernel/setup_64.c:582:49: warning: suggest braces around empty body in an 'if' statement [-Wempty-body]
    DBG("Argh, can't find icache properties !\n");

Fix it by using the suggestions from Michael: "Neither of those sites should use DBG(), that's not really early boot code, they should just use pr_warn(). And the other uses of DBG() in initialize_cache_info() should just be removed. In smp_release_cpus() the entry/exit DBG's should just be removed, and the spinning_secondaries line should just be pr_debug(). That would just leave the two calls in early_setup(). If we taught udbg_printf() to return early when udbg_putc is NULL, then we could just call udbg_printf() unconditionally and get rid of the DBG macro entirely." Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Qian Cai <cai@lca.pw> [mpe: Split udbg change out into previous patch] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1563215552-8166-1-git-send-email-cai@lca.pw
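
A sketch of the change at the two cache-info sites (illustrative wrapper, not the real function): replace the compile-away DBG() macro with pr_warn(), which always expands to a real statement, so -Wempty-body has nothing to complain about.

```c
#include <linux/printk.h>

static void initialize_cache_info_example(bool found_dcache, bool found_icache)
{
	if (!found_dcache)
		pr_warn("Argh, can't find dcache properties !\n");

	if (!found_icache)
		pr_warn("Argh, can't find icache properties !\n");
}
```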
-
Michael Ellerman authored
Make udbg_printf() check if udbg_putc is set, and if not just return. This makes it safe to call udbg_printf() anytime, even when a udbg backend has not been registered, which means we can avoid some ifdefs at call sites. Signed-off-by: Qian Cai <cai@lca.pw> [mpe: Split out of larger patch, write change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
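
A minimal sketch of the early-return guard in udbg_printf(); the buffer handling and size constant are assumptions, only the "do nothing when udbg_putc is NULL" check is the point.

```c
#include <stdarg.h>
#include <linux/kernel.h>
#include <asm/udbg.h>

#define UDBG_BUFSIZE 256	/* illustrative size, not the kernel's value */

void udbg_printf(const char *fmt, ...)
{
	if (likely(udbg_putc)) {	/* no backend registered? silently return */
		char buf[UDBG_BUFSIZE];
		va_list args;

		va_start(args, fmt);
		vsnprintf(buf, UDBG_BUFSIZE, fmt, args);
		udbg_puts(buf);
		va_end(args);
	}
}
```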
-
Hari Bathini authored
arch/powerpc/kernel/fadump.c file needs to be compiled in if 'config FA_DUMP' or 'config PRESERVE_FA_DUMP' is set. The current syntax achieves that but looks a bit odd. Fix it for better readability. Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/157063484064.11906.3586824898111397624.stgit@hbathini.in.ibm.com
-
Hari Bathini authored
FADump is supported on PowerNV platform. To fulfill this support, the petitboot kernel must be FADump aware. Enable config PRESERVE_FA_DUMP to make the petitboot kernel FADump aware. Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/157062986936.23016.10146169203560084401.stgit@hbathini.in.ibm.com
-
Emmanuel Nicolet authored
The spu_fs_context was not set in fc->fs_private, this caused a crash when accessing ctx->mode in spufs_create_root(). Fixes: d2e0981c ("vfs: Convert spufs to use the new mount API") Signed-off-by: Emmanuel Nicolet <emmanuel.nicolet@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20191008141342.GA266797@gmail.com
-
- 10 Oct, 2019 1 commit
-
-
Vaibhav Jain authored
A validation check to prevent out-of-bounds reads/writes inside the functions papr_scm_meta_{get,set}() is off by one, which prevents reads and writes to the last byte of the label area. This bug manifests as a failure to probe a dimm when libnvdimm is unable to read the entire config-area as advertised by ND_CMD_GET_CONFIG_SIZE. This usually happens when there are a large number of namespaces created in the region backed by the dimm and the label-index spans the max possible config-area. An error of the form below is usually reported in the kernel logs:

  [ 255.293912] nvdimm: probe of nmem0 failed with error -22

The patch fixes these validation checks, thereby letting libnvdimm access the entire config-area. Fixes: 53e80bd0 ("powerpc/nvdimm: Add support for multibyte read/write for metadata") Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190927062002.3169-1-vaibhav@linux.ibm.com
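
A sketch of the off-by-one, with assumed variable names rather than the driver's actual code: an access where offset + length equals the metadata size touches exactly the last byte and is valid, so the rejection must use '>' rather than '>='.

```c
#include <linux/types.h>
#include <linux/errno.h>

static int papr_scm_meta_bounds_check(u64 offset, u64 length, u64 metadata_size)
{
	/* was: offset + length >= metadata_size, which lost the last byte */
	if (offset + length > metadata_size)
		return -EINVAL;

	return 0;
}
```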
-
- 09 Oct, 2019 4 commits
-
-
Jordan Niethe authored
kvmhv_switch_to_host() in arch/powerpc/kvm/book3s_hv_rmhandlers.S needs to set kvmppc_vcore->in_guest to 0 to signal secondary CPUs to continue. This happens after resetting the PCR. Before commit 13c7bb3c ("powerpc/64s: Set reserved PCR bits"), r0 would always be 0 before it was stored to kvmppc_vcore->in_guest. However because of this change in the commit:

  	/* Reset PCR */
  	ld	r0, VCORE_PCR(r5)
  -	cmpdi	r0, 0
  +	LOAD_REG_IMMEDIATE(r6, PCR_MASK)
  +	cmpld	r0, r6
  	beq	18f
  -	li	r0, 0
  -	mtspr	SPRN_PCR, r0
  +	mtspr	SPRN_PCR, r6
  18:
  	/* Signal secondary CPUs to continue */
  	stb	r0,VCORE_IN_GUEST(r5)

We are no longer comparing r0 against 0 and loading it with 0 if it contains something else. Hence when we store r0 to kvmppc_vcore->in_guest, it might not be 0. This means that secondary CPUs will not be signalled to continue. Those CPUs get stuck and errors like the following are logged:

  KVM: CPU 1 seems to be stuck
  KVM: CPU 2 seems to be stuck
  KVM: CPU 3 seems to be stuck
  KVM: CPU 4 seems to be stuck
  KVM: CPU 5 seems to be stuck
  KVM: CPU 6 seems to be stuck
  KVM: CPU 7 seems to be stuck

This can be reproduced with:

  $ for i in `seq 1 7` ; do chcpu -d $i ; done ;
  $ taskset -c 0 qemu-system-ppc64 -smp 8,threads=8 \
      -M pseries,accel=kvm,kvm-type=HV -m 1G -nographic -vga none \
      -kernel vmlinux -initrd initrd.cpio.xz

Fix by making sure r0 is 0 before storing it to kvmppc_vcore->in_guest. Fixes: 13c7bb3c ("powerpc/64s: Set reserved PCR bits") Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191004025317.19340-1-jniethe5@gmail.com
-
Desnes A. Nunes do Rosario authored
Newer versions of GCC (>= 9) demand that the size of the string to be copied must be explicitly smaller than the size of the destination. Thus, the NULL char has to be taken into account on strncpy. This will avoid the following compiling error:

  tlbie_test.c: In function 'main':
  tlbie_test.c:639:4: error: 'strncpy' specified bound 100 equals destination size
     strncpy(logdir, optarg, LOGDIR_NAME_SIZE);
     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  cc1: all warnings being treated as errors

Signed-off-by: Desnes A. Nunes do Rosario <desnesn@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191003211010.9711-1-desnesn@linux.ibm.com
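
A sketch of the fix (the helper and the buffer size constant are assumptions based on the error output, not the test's actual code): copy at most LOGDIR_NAME_SIZE - 1 bytes so the bound is smaller than the destination, and terminate explicitly.

```c
#include <string.h>

#define LOGDIR_NAME_SIZE 100	/* matches the "bound 100" in the error above */

static void set_logdir(char logdir[LOGDIR_NAME_SIZE], const char *optarg)
{
	strncpy(logdir, optarg, LOGDIR_NAME_SIZE - 1);
	logdir[LOGDIR_NAME_SIZE - 1] = '\0';	/* guarantee NUL termination */
}
```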
-
Laurent Dufour authored
Since commit 1211ee61 ("powerpc/pseries: Read TLB Block Invalidate Characteristics"), a warning message is displayed when booting a guest on top of KVM:

  lpar: arch/powerpc/platforms/pseries/lpar.c pseries_lpar_read_hblkrm_characteristics Error calling get-system-parameter (0xfffffffd)

This message is displayed because this hypervisor does not support the H_BLOCK_REMOVE hcall and thus does not expose the corresponding feature. Reading the TLB Block Invalidate Characteristics should not be done if the feature is not exposed. Fixes: 1211ee61 ("powerpc/pseries: Read TLB Block Invalidate Characteristics") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191001132928.72555-1-ldufour@linux.ibm.com
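
A sketch of the guard described above; the feature flag name is an assumption (the flag the pseries code uses for H_BLOCK_REMOVE support), and the function body is a placeholder.

```c
#include <asm/firmware.h>

static void read_hblkrm_characteristics_example(void)
{
	/* hypervisor doesn't advertise H_BLOCK_REMOVE: nothing to read */
	if (!firmware_has_feature(FW_FEATURE_BLOCK_REMOVE))
		return;

	/* ... call ibm,get-system-parameter and parse the block sizes ... */
}
```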
-
Stephen Rothwell authored
After merging the powerpc tree, today's linux-next build (powerpc64 allnoconfig) failed like this: arch/powerpc/mm/book3s64/pgtable.c:216:3: error: implicit declaration of function 'radix__flush_all_lpid_guest' radix__flush_all_lpid_guest() is only declared for CONFIG_PPC_RADIX_MMU which is not set for this build. Fix it by adding an empty version for the RADIX_MMU=n case, which should never be called. Fixes: 99161de3 ("powerpc/64s/radix: tidy up TLB flushing code") Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> [mpe: Munge change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190930101342.36c1afa0@canb.auug.org.au
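
A sketch of the pattern described (the real fix may simply use an empty body; the WARN_ON here is illustrative): provide an inline stub when CONFIG_PPC_RADIX_MMU is not set so callers still compile, on the understanding that the radix-only path is unreachable in that configuration.

```c
#ifdef CONFIG_PPC_RADIX_MMU
extern void radix__flush_all_lpid_guest(unsigned int lpid);
#else
static inline void radix__flush_all_lpid_guest(unsigned int lpid)
{
	WARN_ON(1);	/* radix-only path; should never be called with RADIX_MMU=n */
}
#endif
```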
-
- 06 Oct, 2019 4 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
In commit 4ed28639 ("fs, elf: drop MAP_FIXED usage from elf_map") we changed elf to use MAP_FIXED_NOREPLACE instead of MAP_FIXED for the executable mappings. Then, people reported that it broke some binaries that had overlapping segments from the same file, and commit ad55eac7 ("elf: enforce MAP_FIXED on overlaying elf segments") re-instated MAP_FIXED for some overlaying elf segment cases. But only some - despite the summary line of that commit, it only did it when it also does a temporary brk vma for one obvious overlapping case. Now Russell King reports another overlapping case with old 32-bit x86 binaries, which doesn't trigger that limited case. End result: we had better just drop MAP_FIXED_NOREPLACE entirely, and go back to MAP_FIXED. Yes, it's a sign of old binaries generated with old tool-chains, but we do pride ourselves on not breaking existing setups. This still leaves MAP_FIXED_NOREPLACE in place for the load_elf_interp() and the old load_elf_library() use-cases, because nobody has reported breakage for those. Yet. Note that in all the cases seen so far, the overlapping elf sections seem to be just re-mapping of the same executable with different section attributes. We could possibly introduce a new MAP_FIXED_NOFILECHANGE flag or similar, which acts like NOREPLACE, but allows just remapping the same executable file using different protection flags. It's not clear that would make a huge difference to anything, but if people really hate that "elf remaps over previous maps" behavior, maybe at least a more limited form of remapping would alleviate some concerns. Alternatively, we should take a look at our elf_map() logic to see if we end up not mapping things properly the first time. In the meantime, this is the minimal "don't do that then" patch while people hopefully think about it more. Reported-by: Russell King <linux@armlinux.org.uk> Fixes: 4ed28639 ("fs, elf: drop MAP_FIXED usage from elf_map") Fixes: ad55eac7 ("elf: enforce MAP_FIXED on overlaying elf segments") Cc: Michal Hocko <mhocko@suse.com> Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
Merge tag 'dma-mapping-5.4-1' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping regression fix from Christoph Hellwig:
 "Revert an incorrect hunk from a patch that caused problems on various arm boards (Andrey Smirnov)"

* tag 'dma-mapping-5.4-1' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: fix false positive warnings in dma_common_free_remap()
-
Linus Torvalds authored
Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC fixes from Olof Johansson:
 "A few fixes this time around:

  - Fixup of some clock specifications for DRA7 (device-tree fix)

  - Removal of some dead/legacy CPU OPP/PM code for OMAP that throws warnings at boot

  - A few more minor fixups for OMAPs, most around display

  - Enable STM32 QSPI as =y since their rootfs sometimes comes from there

  - Switch CONFIG_REMOTEPROC to =y since it went from tristate to bool

  - Fix of thermal zone definition for ux500 (5.4 regression)"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
  ARM: multi_v7_defconfig: Fix SPI_STM32_QSPI support
  ARM: dts: ux500: Fix up the CPU thermal zone
  arm64/ARM: configs: Change CONFIG_REMOTEPROC from m to y
  ARM: dts: am4372: Set memory bandwidth limit for DISPC
  ARM: OMAP2+: Fix warnings with broken omap2_set_init_voltage()
  ARM: OMAP2+: Add missing LCDC midlemode for am335x
  ARM: OMAP2+: Fix missing reset done flag for am3 and am43
  ARM: dts: Fix gpio0 flags for am335x-icev2
  ARM: omap2plus_defconfig: Enable more droid4 devices as loadable modules
  ARM: omap2plus_defconfig: Enable DRM_TI_TFP410
  DTS: ARM: gta04: introduce legacy spi-cs-high to make display work again
  ARM: dts: Fix wrong clocks for dra7 mcasp
  clk: ti: dra7: Fix mcasp8 clock bits
-
- 05 Oct, 2019 1 commit
-
-
Linus Torvalds authored
Merge tag 'kbuild-fixes-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - remove unneeded ar-option and KBUILD_ARFLAGS

 - remove long-deprecated SUBDIRS

 - fix modpost to suppress false-positive warnings for UML builds

 - fix namespace.pl to handle relative paths to ${objtree}, ${srctree}

 - make setlocalversion work for /bin/sh

 - make header archive reproducible

 - fix some Makefiles and documents

* tag 'kbuild-fixes-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kheaders: make headers archive reproducible
  kbuild: update compile-test header list for v5.4-rc2
  kbuild: two minor updates for Documentation/kbuild/modules.rst
  scripts/setlocalversion: clear local variable to make it work for sh
  namespace: fix namespace.pl script to support relative paths
  video/logo: do not generate unneeded logo C files
  video/logo: remove unneeded *.o pattern from clean-files
  integrity: remove pointless subdir-$(CONFIG_...)
  integrity: remove unneeded, broken attempt to add -fshort-wchar
  modpost: fix static EXPORT_SYMBOL warnings for UML build
  kbuild: correct formatting of header in kbuild module docs
  kbuild: remove SUBDIRS support
  kbuild: remove ar-option and KBUILD_ARFLAGS
-