- 15 Sep, 2020 3 commits
-
-
Ravi Bangoria authored
When the kernel is compiled with CONFIG_HAVE_HW_BREAKPOINT=N, the user can still create a watchpoint using PPC_PTRACE_SETHWDEBUG, with limited functionality. But such watchpoints never fire, because the privilege settings are missing. Fix that. It's safe to set HW_BRK_TYPE_PRIV_ALL because we don't really leak any kernel address in the signal info. Setting HW_BRK_TYPE_PRIV_ALL will also help find scenarios where the kernel accesses user memory. Reported-by: Pedro Miraglia Franco de Carvalho <pedromfc@linux.ibm.com> Suggested-by: Pedro Miraglia Franco de Carvalho <pedromfc@linux.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200902042945.129369-4-ravi.bangoria@linux.ibm.com
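A minimal sketch of the shape of the fix (not the exact hunk; HW_BRK_TYPE_* and PPC_BREAKPOINT_TRIGGER_* are the real constants from asm/hw_breakpoint.h and the ptrace UAPI, the surrounding code is illustrative):

	/* ptrace-created watchpoint: match at all privilege levels so the
	 * DABR/DAWR comparison actually fires; this leaks no kernel address,
	 * it only widens which privilege states are compared. */
	brk.type = HW_BRK_TYPE_TRANSLATE | HW_BRK_TYPE_PRIV_ALL;
	if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ)
		brk.type |= HW_BRK_TYPE_READ;
	if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_WRITE)
		brk.type |= HW_BRK_TYPE_WRITE;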
-
Ravi Bangoria authored
Vector load/store instructions are special because they are always aligned. Thus an unaligned EA needs to be aligned down before comparing it with the watch ranges, otherwise we might consider a valid event invalid. Fixes: 74c68810 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint") Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200902042945.129369-3-ravi.bangoria@linux.ibm.com
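A sketch of the comparison described, assuming a 16-byte (quadword) vector access; the helper name and inclusive range convention are illustrative:

	/* The hardware performs vector accesses aligned, so align the
	 * effective address down to 16 bytes before the overlap check. */
	static bool vec_ea_in_range(unsigned long ea,
				    unsigned long start, unsigned long end)
	{
		ea &= ~0xfUL;			/* align down to quadword */
		return (ea <= end) && (ea + 15 >= start);
	}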
-
Ravi Bangoria authored
On Power10 predecessors, a watchpoint with quadword access is compared at quadword length. If the watch range is a doubleword or less and lies in the first half of a quadword-aligned 16 bytes, and there is an unaligned quadword access which touches only the second half, the handler should consider it extraneous and emulate/single-step it before continuing. Fixes: 74c68810 ("powerpc/watchpoint: Prepare handler to handle more than one watchpoint") Reported-by: Pedro Miraglia Franco de Carvalho <pedromfc@linux.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200902042945.129369-2-ravi.bangoria@linux.ibm.com
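Roughly, the condition being tested, as a simplified sketch (variable names illustrative; start/end are the watched byte range, ea the raw access address):

	/* The hardware compared the whole 16-byte block, but if the watched
	 * bytes sit entirely in the first doubleword and the unaligned
	 * access begins in the second, they never truly overlap. */
	unsigned long qw = ea & ~0xfUL;		/* quadword-aligned block */
	bool extraneous = (end < qw + 8) && (ea >= qw + 8);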
-
- 14 Sep, 2020 3 commits
-
-
Thiago Jung Bauermann authored
POWER secure guests (i.e., guests which use the Protected Execution Facility) need to use SWIOTLB to be able to do I/O with the hypervisor, but they don't need the SWIOTLB memory to be in low addresses since the hypervisor doesn't have any addressing limitation. This solves a SWIOTLB initialization problem we are seeing in secure guests with 128 GB of RAM: they are configured with 4 GB of crashkernel reserved memory, which leaves no space for SWIOTLB in low addresses. To do this, we use mostly the same code as swiotlb_init(), but allocate the buffer using memblock_alloc() instead of memblock_alloc_low(). Fixes: 2efbc58f ("powerpc/pseries/svm: Force SWIOTLB for secure guests") Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200818221126.391073-1-bauerman@linux.ibm.com
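A hedged sketch of the described approach, essentially swiotlb_init() with memblock_alloc() instead of memblock_alloc_low(); the function name follows the commit's description and error handling is abbreviated:

	#include <linux/memblock.h>
	#include <linux/swiotlb.h>

	void __init svm_swiotlb_init(void)
	{
		char *vstart;
		unsigned long bytes, io_tlb_nslabs;

		io_tlb_nslabs = (swiotlb_size_or_default() >> IO_TLB_SHIFT);
		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
		bytes = io_tlb_nslabs << IO_TLB_SHIFT;

		/* The hypervisor has no addressing limitation, so the buffer
		 * does not need to come from low memory. */
		vstart = memblock_alloc(PAGE_ALIGN(bytes), PAGE_SIZE);
		if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, false))
			return;

		panic("SVM: Cannot allocate SWIOTLB buffer");
	}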
-
Michael Ellerman authored
When we added the VDSO32 kconfig symbol, which controls building of the 32-bit VDSO, we made it depend on CPU_BIG_ENDIAN (for 64-bit). That was because back then COMPAT was always enabled for 64-bit, so depending on it would have left the 32-bit VDSO always enabled, which we didn't want. But since then we have made COMPAT selectable, and off by default for ppc64le, so VDSO32 should really depend on that. For most people this makes no difference; none of the defconfigs change. It's only if someone is building ppc64le with COMPAT=y that they will now also get VDSO32. If they've enabled COMPAT in order to run 32-bit binaries they presumably also want the 32-bit VDSO. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Link: https://lore.kernel.org/r/20200908125850.407939-1-mpe@ellerman.id.au
-
Michael Ellerman authored
Bring in our fixes branch for this cycle which avoids some small conflicts with upcoming commits.
-
- 09 Sep, 2020 1 commit
-
-
Vaibhav Jain authored
The newly introduced 'perf_stats' attribute uses the default access mode of 0444, allowing non-root users to access performance stats of an nvdimm and potentially force the kernel into issuing a large number of expensive hypercalls. Since the information exposed by this attribute cannot be cached, it is better to ward off access to it from users who don't need access to these performance statistics. Hence update the access mode of the 'perf_stats' attribute to be readable only by root users. Fixes: 2d02bf83 ("powerpc/papr_scm: Fetch nvdimm performance stats from PHYP") Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200907110540.21349-1-vaibhav@linux.ibm.com
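One way to express the restricted mode, assuming the DEVICE_ATTR_ADMIN_RO() helper (0400) is available in this kernel; an explicit __ATTR(perf_stats, 0400, perf_stats_show, NULL) would be equivalent:

	/* 0400: readable by root only, instead of the default 0444. */
	static DEVICE_ATTR_ADMIN_RO(perf_stats);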
-
- 08 Sep, 2020 29 commits
-
-
Nicholas Piggin authored
In ISA v3.1 the copy-paste facility has new memory move functionality which allows the copy buffer to be pasted to domestic memory (RAM) as opposed to foreign memory (an accelerator). This means the POWER9 trick of avoiding the cp_abort on context switch if the process had not mapped foreign memory does not work on POWER10. Do the cp_abort unconditionally there. KVM must also cp_abort on guest exit to prevent copy buffer state leaking between contexts. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Acked-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200825075535.224536-1-npiggin@gmail.com
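A sketch of the context-switch side (CPU_FTR_ARCH_31 and PPC_CP_ABORT are the real feature bit and opcode macro; the placement is illustrative):

	/* On ISA v3.1 paste can target normal memory, so always clear any
	 * copy buffer state when switching away from a task. */
	if (cpu_has_feature(CPU_FTR_ARCH_31))
		asm volatile(PPC_CP_ABORT);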
-
Joel Stanley authored
It hasn't done anything for a long time. Save the percpu variable, and emit a warning to remind users not to expect it to do anything. This uses pr_warn_once instead of pr_warn_ratelimited, as testing 'ppc64_cpu --smt=off' on a 24 core / 4 SMT system showed the warning to be noisy, as the online/offline loop is slow. Fixes: 3fa8cad8 ("powerpc/pseries/cpuidle: smt-snooze-delay cleanup.") Cc: stable@vger.kernel.org # v3.14 Signed-off-by: Joel Stanley <joel@jms.id.au> Acked-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200902000012.3440389-1-joel@jms.id.au
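Roughly the shape of the sysfs store path after the change (smt_snooze_delay is the real percpu variable; the message text is illustrative):

	/* Keep storing the (now unused) value, but warn once. */
	per_cpu(smt_snooze_delay, cpu->dev.id) = snooze;
	pr_warn_once("smt_snooze_delay sysfs file has no effect\n");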
-
Joel Stanley authored
Often the firmware will guard out cores after a crash. This is often undesirable, and is not immediately noticeable. This adds an informative message when CPU device tree nodes are marked bad in the device tree. Signed-off-by: Joel Stanley <joel@jms.id.au> [mpe: Use an eye-catcher that's less likely to get us in trouble] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190801051630.5804-1-joel@jms.id.au
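A sketch of the idea only, not the patch's exact mechanics (the real check happens during device tree scanning, and the message text was adjusted by the maintainer):

	/* Let the user know a CPU node is present but unusable. */
	if (!of_device_is_available(np))
		pr_warn("CPU node %pOF is unavailable (guarded by firmware?)\n",
			np);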
-
Leonardo Bras authored
LoPAR "DMA Window Manipulation Calls" recommends removing the default DMA window for the device before attempting to configure a DDW, in order to make the maximum resources available for the next DDW to be created. This is a requirement for using DDW on devices for which the hypervisor allows only one DMA window. If setting up a new DDW fails anywhere after the removal of this default DMA window, the default DMA window needs to be restored. For this, an implementation of the ibm,reset-pe-dma-windows RTAS call is needed: platforms supporting the DDW option starting with LoPAR level 2.7 implement ibm,ddw-extensions. The first extension available (index 2) carries the token for the ibm,reset-pe-dma-windows RTAS call, which is used to restore the default DMA window for a device if it has been deleted. It does so by resetting the TCE table allocation for the PE to its boot-time value, available in the "ibm,dma-window" device tree node. Signed-off-by: Leonardo Bras <leobras.c@gmail.com> Tested-by: David Dai <zdai@linux.vnet.ibm.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200805030455.123024-5-leobras.c@gmail.com
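A sketch of the restore path (rtas_call() and the BUID_HI()/BUID_LO() helpers are real; the wrapper name and its exact call site are illustrative):

	/* ibm,reset-pe-dma-windows: 3 inputs (config address, BUID hi/lo),
	 * 1 status output; resets the PE's TCE allocation to its boot-time
	 * "ibm,dma-window" value. */
	static int reset_dma_window(u32 reset_token, u32 cfg_addr, u64 buid)
	{
		return rtas_call(reset_token, 3, 1, NULL,
				 cfg_addr, BUID_HI(buid), BUID_LO(buid));
	}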
-
Leonardo Bras authored
Move the window-removing part of remove_ddw() into a new function (remove_dma_window), so it can be used to remove other DMA windows. It's useful for removing DMA windows that don't create the DIRECT64_PROPNAME property, like the device's default DMA window, which uses "ibm,dma-window". Signed-off-by: Leonardo Bras <leobras.c@gmail.com> Tested-by: David Dai <zdai@linux.vnet.ibm.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200805030455.123024-4-leobras.c@gmail.com
-
Leonardo Bras authored
From LoPAR level 2.8, "ibm,ddw-extensions" index 3 can make the number of outputs from "ibm,query-pe-dma-windows" go from 5 to 6. This change of output size is meant to expand the address size of the largest_available_block PE TCE from 32-bit to 64-bit, which ends up shifting page_size and migration_capable. This ends up requiring the update of ddw_query_response->largest_available_block from u32 to u64, and manually assigning the values from the buffer into this struct, according to output size. Also, a routine was created to help read the DDW extensions, as suggested by LoPAR: first read the size of the extension array from index 0, check whether the property exists, and then return its value. Signed-off-by: Leonardo Bras <leobras.c@gmail.com> Tested-by: David Dai <zdai@linux.vnet.ibm.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200805030455.123024-3-leobras.c@gmail.com
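A sketch of that helper (of_property_read_u32_index() is the real OF API; the function name follows the commit's description, and index 0 holds the extension count):

	static inline int ddw_read_ext(const struct device_node *np,
				       int extnum, u32 *value)
	{
		static const char propname[] = "ibm,ddw-extensions";
		u32 count;
		int ret;

		/* Index 0 is the number of extensions that follow. */
		ret = of_property_read_u32_index(np, propname, 0, &count);
		if (ret)
			return ret;

		if (count < extnum)
			return -EOVERFLOW;

		return of_property_read_u32_index(np, propname, extnum, value);
	}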
-
Leonardo Bras authored
Create defines to help handling ibm,ddw-applicable values, avoiding confusion about the index of given operations. Signed-off-by: Leonardo Bras <leobras.c@gmail.com> Tested-by: David Dai <zdai@linux.vnet.ibm.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200805030455.123024-2-leobras.c@gmail.com
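The kind of defines described — indices into the "ibm,ddw-applicable" property (names as best understood from the patch description; treat them as illustrative):

	#define DDW_QUERY_PE_DMA_WIN	0	/* token for the query call */
	#define DDW_CREATE_PE_DMA_WIN	1	/* token for the create call */
	#define DDW_REMOVE_PE_DMA_WIN	2	/* token for the remove call */
	#define DDW_APPLICABLE_SIZE	3	/* number of entries */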
-
Jordan Niethe authored
Update the CPU to ISA Version Mapping document to include Power10 and ISA v3.1. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Make sure ISA reference is unique] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200827040556.1783-1-jniethe5@gmail.com
-
Russell Currey authored
As of commit bdc48fa1, scripts/checkpatch.pl now has a default line length warning of 100 characters. The powerpc wrapper script was using a length of 90 instead of 80 in order to make checkpatch less restrictive, but now it's making it more restrictive instead. I think it makes sense to just use the default value now. Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200828020542.393022-1-ruscur@russell.cc
-
Jordan Niethe authored
The signal handler in the alignment handler self test has the ability to jump over the instruction that triggered the signal. It does this by incrementing the PT_NIP in the user context by 4. If it was a prefixed instruction this means the suffix is then executed, which is incorrect. Instead check if the major opcode indicates a prefixed instruction (i.e. it is 1) and if so increment PT_NIP by 8. If ISA v3.1 is not available treat it as a word instruction even if the major opcode is 1. Fixes: 620a6473 ("selftests/powerpc: Add prefixed loads/stores to alignment_handler test") Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Fix 32-bit build, rename haveprefixes to prefixes_enabled] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200824131231.14008-1-jniethe5@gmail.com
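A sketch of the stepping logic (64-bit ucontext layout shown; prefixes_enabled stands for a flag the test derives from hwcap2, per the maintainer's rename):

	static void sig_handler(int sig, siginfo_t *info, void *ctx_void)
	{
		ucontext_t *ctx = ctx_void;
		unsigned int insn = *(unsigned int *)ctx->uc_mcontext.gp_regs[PT_NIP];

		/* Prefixed instructions have major opcode 1 and are 8 bytes
		 * (prefix + suffix); only honour that on ISA v3.1 CPUs. */
		if (prefixes_enabled && (insn >> 26) == 1)
			ctx->uc_mcontext.gp_regs[PT_NIP] += 8;
		else
			ctx->uc_mcontext.gp_regs[PT_NIP] += 4;
	}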
-
Jordan Niethe authored
As of commit 147c0516 ("powerpc/boot: Add support for 64bit little endian wrapper") the comment in the Makefile is misleading. The wrapper packaging a 64-bit kernel may be built as a 32-bit or 64-bit ELF. Update the comment to reflect this. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200825035147.3239-1-jniethe5@gmail.com
-
Michael Ellerman authored
The last caller was removed in 2014 in commit fb5a5157 ("powerpc: Remove platforms/wsp and associated pieces"). As Jordan noticed, even though there are no callers, the code above in fsl_secondary_thread_init() falls through into generic_secondary_thread_init(). So we can remove the _GLOBAL but not the body of the function. However because fsl_secondary_thread_init() is inside #ifdef CONFIG_PPC_BOOK3E, we can never reach the body of generic_secondary_thread_init() unless CONFIG_PPC_BOOK3E is enabled, so we can wrap the whole thing in a single #ifdef. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015704.1976364-1-mpe@ellerman.id.au
-
Michael Ellerman authored
On older CPUs the switch_endian() syscall doesn't work. Currently that causes the switch_endian_test to just crash. Instead detect the failure and properly exit with a failure message. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-9-mpe@ellerman.id.au
-
Michael Ellerman authored
If we're running on a CPU without VMX/VSX then don't touch them. This is fragile; the compiler could spill a VMX/VSX register and break the test anyway. But in practice it seems to work, i.e. the test runs to completion on a system without VSX with this change. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-8-mpe@ellerman.id.au
-
Michael Ellerman authored
This is a test of a specific piece of logic in isa207-common.c, which is only used on Power8 or later. So skip it on older CPUs. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-7-mpe@ellerman.id.au
-
Michael Ellerman authored
Both these tests use PMU events that only work on newer CPUs, so skip them on older CPUs. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-6-mpe@ellerman.id.au
-
Michael Ellerman authored
The DSCR tests fail on systems that don't have DSCR, so check for the DSCR in hwcap and skip if it's not present. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-5-mpe@ellerman.id.au
-
Michael Ellerman authored
utils.h provides have_hwcap() and have_hwcap2() which check for a feature bit. Those bits are defined in asm/cputable.h, so include it in utils.h so users of utils.h don't have to do it manually. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-4-mpe@ellerman.id.au
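For instance, a test can now check a feature bit without its own include (SKIP_IF() is the harness macro from the same utils.h; the test function here is hypothetical):

	#include "utils.h"	/* now pulls in asm/cputable.h too */

	static int my_test(void)
	{
		/* Skip unless on a POWER9 or later (ISA v3.0). */
		SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_00));
		return 0;
	}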
-
Michael Ellerman authored
This version of set_dscr() was added for the RFI flush test, and is fairly specific to it. It also clashes with the version of set_dscr() in dscr/dscr.h. So move it into the RFI flush test where it's used. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-3-mpe@ellerman.id.au
-
Michael Ellerman authored
On older systems this test takes longer to run (duh), give it five minutes which is long enough on a G5 970FX @ 1.6GHz. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-2-mpe@ellerman.id.au
-
Michael Ellerman authored
These platforms don't show the MMU in /proc/cpuinfo, but they always use hash, so teach using_hash_mmu() that. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200819015727.1977134-1-mpe@ellerman.id.au
-
Michael Ellerman authored
This test creates some threads, which write to TM SPRs, and then makes sure the registers maintain the correct values across context switches and contention with other threads. But currently the test finishes almost instantaneously, which reduces the chance of it hitting an interesting condition. So increase the number of loops, so it runs a bit longer, though still less than 2s on a Power8. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200813013445.686464-3-mpe@ellerman.id.au
-
Michael Ellerman authored
This test tries to set affinity to CPUs that don't exist, especially if the set of online CPUs doesn't start at 0. But there's no real reason for it to use setaffinity in the first place; it's just trying to create lots of threads to cause contention. So drop the setaffinity entirely. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200813013445.686464-2-mpe@ellerman.id.au
-
Michael Ellerman authored
Several of the TM tests fail spuriously if CPU 0 is offline, because they blindly try to affinitise to CPU 0. Fix them by picking any online CPU and using that instead. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200813013445.686464-1-mpe@ellerman.id.au
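The replacement pattern, sketched (pick_online_cpu() and FAIL_IF() are existing selftest-harness helpers; this fragment sits inside a test function built with _GNU_SOURCE and <sched.h>):

	cpu_set_t mask;
	int cpu = pick_online_cpu();

	FAIL_IF(cpu < 0);
	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	FAIL_IF(sched_setaffinity(0, sizeof(mask), &mask));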
-
Oliver O'Halloran authored
These annoy me every time I see them. Why are they here? They're not even needed for 80cols compliance. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200818044557.135497-1-oohall@gmail.com
-
Christophe Leroy authored
ftrace_graph_ret_addr() is always defined and returns 'ip' when CONFIG_FUNCTION_GRAPH_TRACER is not set. So the #ifdef is not needed, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9d11143d4e27ba8274369a926968756917584868.1597643153.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Enable pre-update addressing mode in __get_user_asm() and __put_user_asm(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/13041c7df39e89ddf574ea0cdc6dedfdd9734140.1597235091.git.christophe.leroy@csgroup.eu
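Illustratively, what the constraint change allows (a standalone sketch, not the kernel macro; "m<>" permits pre/post-modify addressing, and the %U/%X output modifiers emit the 'u'/'x' instruction suffixes when that form is chosen):

	static inline void put_u32(u32 val, u32 *p)
	{
		/* GCC may now pick e.g. stwu when it also updates the pointer. */
		asm volatile("stw%U0%X0 %1,%0" : "=m<>"(*p) : "r"(val));
	}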
-
Gautham R. Shenoy authored
Commit d947fb4c ("cpuidle: pseries: Fixup exit latency for CEDE(0)") sets the exit latency of CEDE(0) based on the latency values of the Extended CEDE states advertised by the platform. The values advertised by the platform are in timebase ticks. However the cpuidle framework requires the latency values in microseconds. If the tb-ticks value advertised by the platform corresponds to a value smaller than 1us, during the conversion from tb-ticks to microseconds, in the current code, the result becomes zero. This is incorrect, as it puts a CEDE state on par with the snooze state. This patch fixes this by rounding up the result obtained while converting the latency value from tb-ticks to microseconds. It also prints a warning in case we discover an extended-cede state with a wakeup latency of 0; in such a case, ensure that CEDE(0) has a non-zero wakeup latency. Fixes: d947fb4c ("cpuidle: pseries: Fixup exit latency for CEDE(0)") Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1599125247-28488-1-git-send-email-ego@linux.vnet.ibm.com
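The corrected conversion, sketched (tb_ticks_per_usec is the real powerpc global; the helper name is illustrative):

	/* Round up so sub-microsecond tb-tick latencies don't collapse to 0
	 * and tie with the snooze state. */
	static inline u64 tb_ticks_to_us_roundup(u64 ticks)
	{
		return DIV_ROUND_UP(ticks, tb_ticks_per_usec);
	}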
-
Alexey Kardashevskiy authored
There are 2 problems with it: 1. "<" vs the expected "<<"; 2. the shift amount is an IOMMU page number mask, not an address mask, as the IOMMU page shift is missing. This did not hit us before f1565c24 ("powerpc: use the generic dma_ops_bypass mode") because we had additional code to handle the bypass mask, so this chunk (almost?) never executed. However there were reports that aacraid does not work with "iommu=nobypass". After f1565c24, aacraid (and probably others which call dma_get_required_mask() before setting the mask) was unable to enable 64-bit DMA and fell back to using the IOMMU, which was known not to work; one of the problems is a double free of an IOMMU page. This fixes DMA for aacraid, both with and without "iommu=nobypass" in the kernel command line. Verified with "stress-ng -d 4". Fixes: 6a5c7be5 ("powerpc: Override dma_get_required_mask by platform hook and ops") Cc: stable@vger.kernel.org # v3.2+ Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200908015106.79661-1-aik@ozlabs.ru
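A sketch of the corrected computation (iommu_table's it_offset, it_size and it_page_shift are the real fields; surrounding context abbreviated):

	/* Use "<<" and include the IOMMU page shift, so the result is an
	 * address mask rather than a page-number mask. */
	mask = 1ULL << (fls_long(tbl->it_offset + tbl->it_size) +
			tbl->it_page_shift - 1);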
-
- 03 Sep, 2020 1 commit
-
-
Michael Ellerman authored
This reverts commit f2af2010. It added a usage of cc-ldoption, but cc-ldoption was removed in commit 055efab3 ("kbuild: drop support for cc-ldoption"). Reported-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 02 Sep, 2020 3 commits
-
-
Aneesh Kumar K.V authored
The test is broken w.r.t page table update rules and results in kernel crash as below. Disable the support until we get the tests updated.

	[   21.083519] kernel BUG at arch/powerpc/mm/pgtable.c:304!
	cpu 0x0: Vector: 700 (Program Check) at [c000000c6d1e76c0]
	    pc: c00000000009a5ec: assert_pte_locked+0x14c/0x380
	    lr: c0000000005eeeec: pte_update+0x11c/0x190
	    sp: c000000c6d1e7950
	   msr: 8000000002029033
	  current = 0xc000000c6d172c80
	  paca    = 0xc000000003ba0000   irqmask: 0x03   irq_happened: 0x01
	    pid   = 1, comm = swapper/0
	kernel BUG at arch/powerpc/mm/pgtable.c:304!
	[link register   ] c0000000005eeeec pte_update+0x11c/0x190
	[c000000c6d1e7950] 0000000000000001 (unreliable)
	[c000000c6d1e79b0] c0000000005eee14 pte_update+0x44/0x190
	[c000000c6d1e7a10] c000000001a2ca9c pte_advanced_tests+0x160/0x3d8
	[c000000c6d1e7ab0] c000000001a2d4fc debug_vm_pgtable+0x7e8/0x1338
	[c000000c6d1e7ba0] c0000000000116ec do_one_initcall+0xac/0x5f0
	[c000000c6d1e7c80] c0000000019e4fac kernel_init_freeable+0x4dc/0x5a4
	[c000000c6d1e7db0] c000000000012474 kernel_init+0x24/0x160
	[c000000c6d1e7e20] c00000000000cbd0 ret_from_kernel_thread+0x5c/0x6c

With DEBUG_VM disabled

	[   20.530152] BUG: Kernel NULL pointer dereference on read at 0x00000000
	[   20.530183] Faulting instruction address: 0xc0000000000df330
	cpu 0x33: Vector: 380 (Data SLB Access) at [c000000c6d19f700]
	    pc: c0000000000df330: memset+0x68/0x104
	    lr: c00000000009f6d8: hash__pmdp_huge_get_and_clear+0xe8/0x1b0
	    sp: c000000c6d19f990
	   msr: 8000000002009033
	   dar: 0
	  current = 0xc000000c6d177480
	  paca    = 0xc00000001ec4f400   irqmask: 0x03   irq_happened: 0x01
	    pid   = 1, comm = swapper/0
	[link register   ] c00000000009f6d8 hash__pmdp_huge_get_and_clear+0xe8/0x1b0
	[c000000c6d19f990] c00000000009f748 hash__pmdp_huge_get_and_clear+0x158/0x1b0 (unreliable)
	[c000000c6d19fa10] c0000000019ebf30 pmd_advanced_tests+0x1f0/0x378
	[c000000c6d19fab0] c0000000019ed088 debug_vm_pgtable+0x79c/0x1244
	[c000000c6d19fba0] c0000000000116ec do_one_initcall+0xac/0x5f0
	[c000000c6d19fc80] c0000000019a4fac kernel_init_freeable+0x4dc/0x5a4
	[c000000c6d19fdb0] c000000000012474 kernel_init+0x24/0x160
	[c000000c6d19fe20] c00000000000cbd0 ret_from_kernel_thread+0x5c/0x6c
	33:mon>

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200902040122.136414-1-aneesh.kumar@linux.ibm.com
-
Christophe Leroy authored
At the time being, __put_user()/__get_user() and friends only use D-form addressing, with 0 offset. Ex:

	lwz	reg1, 0(reg2)

Give the compiler the opportunity to use other addressing modes whenever possible, to get more optimised code. Hereunder is a small example:

	struct test {
		u32 item1;
		u16 item2;
		u8 item3;
		u64 item4;
	};

	int set_test_user(struct test __user *from, struct test __user *to)
	{
		int err;
		u32 item1;
		u16 item2;
		u8 item3;
		u64 item4;

		err = __get_user(item1, &from->item1);
		err |= __get_user(item2, &from->item2);
		err |= __get_user(item3, &from->item3);
		err |= __get_user(item4, &from->item4);
		err |= __put_user(item1, &to->item1);
		err |= __put_user(item2, &to->item2);
		err |= __put_user(item3, &to->item3);
		err |= __put_user(item4, &to->item4);

		return err;
	}

Before the patch:

	00000df0 <set_test_user>:
	 df0:	94 21 ff f0 	stwu    r1,-16(r1)
	 df4:	39 40 00 00 	li      r10,0
	 df8:	93 c1 00 08 	stw     r30,8(r1)
	 dfc:	93 e1 00 0c 	stw     r31,12(r1)
	 e00:	7d 49 53 78 	mr      r9,r10
	 e04:	80 a3 00 00 	lwz     r5,0(r3)
	 e08:	38 e3 00 04 	addi    r7,r3,4
	 e0c:	7d 46 53 78 	mr      r6,r10
	 e10:	a0 e7 00 00 	lhz     r7,0(r7)
	 e14:	7d 29 33 78 	or      r9,r9,r6
	 e18:	39 03 00 06 	addi    r8,r3,6
	 e1c:	7d 46 53 78 	mr      r6,r10
	 e20:	89 08 00 00 	lbz     r8,0(r8)
	 e24:	7d 29 33 78 	or      r9,r9,r6
	 e28:	38 63 00 08 	addi    r3,r3,8
	 e2c:	7d 46 53 78 	mr      r6,r10
	 e30:	83 c3 00 00 	lwz     r30,0(r3)
	 e34:	83 e3 00 04 	lwz     r31,4(r3)
	 e38:	7d 29 33 78 	or      r9,r9,r6
	 e3c:	7d 43 53 78 	mr      r3,r10
	 e40:	90 a4 00 00 	stw     r5,0(r4)
	 e44:	7d 29 1b 78 	or      r9,r9,r3
	 e48:	38 c4 00 04 	addi    r6,r4,4
	 e4c:	7d 43 53 78 	mr      r3,r10
	 e50:	b0 e6 00 00 	sth     r7,0(r6)
	 e54:	7d 29 1b 78 	or      r9,r9,r3
	 e58:	38 e4 00 06 	addi    r7,r4,6
	 e5c:	7d 43 53 78 	mr      r3,r10
	 e60:	99 07 00 00 	stb     r8,0(r7)
	 e64:	7d 23 1b 78 	or      r3,r9,r3
	 e68:	38 84 00 08 	addi    r4,r4,8
	 e6c:	93 c4 00 00 	stw     r30,0(r4)
	 e70:	93 e4 00 04 	stw     r31,4(r4)
	 e74:	7c 63 53 78 	or      r3,r3,r10
	 e78:	83 c1 00 08 	lwz     r30,8(r1)
	 e7c:	83 e1 00 0c 	lwz     r31,12(r1)
	 e80:	38 21 00 10 	addi    r1,r1,16
	 e84:	4e 80 00 20 	blr

After the patch:

	00000dbc <set_test_user>:
	 dbc:	39 40 00 00 	li      r10,0
	 dc0:	7d 49 53 78 	mr      r9,r10
	 dc4:	80 03 00 00 	lwz     r0,0(r3)
	 dc8:	7d 48 53 78 	mr      r8,r10
	 dcc:	a1 63 00 04 	lhz     r11,4(r3)
	 dd0:	7d 29 43 78 	or      r9,r9,r8
	 dd4:	7d 48 53 78 	mr      r8,r10
	 dd8:	88 a3 00 06 	lbz     r5,6(r3)
	 ddc:	7d 29 43 78 	or      r9,r9,r8
	 de0:	7d 48 53 78 	mr      r8,r10
	 de4:	80 c3 00 08 	lwz     r6,8(r3)
	 de8:	80 e3 00 0c 	lwz     r7,12(r3)
	 dec:	7d 29 43 78 	or      r9,r9,r8
	 df0:	7d 43 53 78 	mr      r3,r10
	 df4:	90 04 00 00 	stw     r0,0(r4)
	 df8:	7d 29 1b 78 	or      r9,r9,r3
	 dfc:	7d 43 53 78 	mr      r3,r10
	 e00:	b1 64 00 04 	sth     r11,4(r4)
	 e04:	7d 29 1b 78 	or      r9,r9,r3
	 e08:	7d 43 53 78 	mr      r3,r10
	 e0c:	98 a4 00 06 	stb     r5,6(r4)
	 e10:	7d 23 1b 78 	or      r3,r9,r3
	 e14:	90 c4 00 08 	stw     r6,8(r4)
	 e18:	90 e4 00 0c 	stw     r7,12(r4)
	 e1c:	7c 63 53 78 	or      r3,r3,r10
	 e20:	4e 80 00 20 	blr

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c27bc4e598daf3bbb225de7a1f5c52121cf1e279.1597235091.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
flush_instruction_cache() is never used on 8xx, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/245cabd8f291facac8c8c5fd370e361a69e02860.1597384145.git.christophe.leroy@csgroup.eu
-