- 22 May, 2022 20 commits
-
-
Naveen N. Rao authored
Stop using the ftrace trampoline for the init section once kernel init is complete. Fixes: 67361cf8 ("powerpc/ftrace: Handle large kernel configs") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220516071422.463738-1-naveen.n.rao@linux.vnet.ibm.com
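A minimal sketch of the shape of such a fix (assumptions: ftrace_tramps[], NUM_FTRACE_TRAMPS and ftrace_tramp_init are the existing names in the powerpc ftrace code, and the helper is hooked from free_initmem()):

    /* Sketch: forget the init-section trampoline once init memory is freed,
     * so subsequent patching never branches into reclaimed init text. */
    void ftrace_free_init_tramp(void)
    {
        int i;

        for (i = 0; i < NUM_FTRACE_TRAMPS && ftrace_tramps[i]; i++) {
            if (ftrace_tramps[i] == (unsigned long)ftrace_tramp_init) {
                ftrace_tramps[i] = 0;
                break;
            }
        }
    }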
-
Christophe Leroy authored
All supported versions of GCC & clang support asm goto. Remove the !CONFIG_CC_HAS_ASM_GOTO version of arch_local_irq_restore(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/58df50c9e77e2ed945bacdead30412770578886b.1652715336.git.christophe.leroy@csgroup.eu
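For readers unfamiliar with the construct, a hypothetical 64-bit sketch of the asm goto pattern (illustrative only, not the actual arch_local_irq_restore()):

    /* asm goto branches straight to a C label, so no boolean result has
     * to be materialised in a register and tested afterwards. */
    static __always_inline void demo_irq_restore(unsigned long enable)
    {
        asm_volatile_goto("cmpdi %0,0; beq %l[soft_masked]"
                          : : "r" (enable) : "cr0" : soft_masked);
        /* re-enable path would go here */
        return;
    soft_masked:
        /* interrupts stay soft-masked */
        return;
    }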
-
Russell Currey authored
In commit f3054ffd ("selftests/powerpc: Return skip code for spectre_v2"), the spectre_v2 selftest was updated to be aware of cases where the vulnerability status reported in sysfs is incorrect, skipping the test instead. This happens because qemu can misrepresent the mitigation status of the host to the guest. If the count cache is disabled in the host, and this is correctly reported to the guest, then the guest won't apply mitigations. If the guest is then migrated to a new host where mitigations are necessary, it is now vulnerable because it has not applied mitigations. Update the selftest to report when we see excessive misses, indicative of the count cache being disabled. If software flushing is enabled, also warn that these flushes are just wasting performance. Signed-off-by: Russell Currey <ruscur@russell.cc> [mpe: Rebase and update change log appropriately] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210608064809.199116-1-ruscur@russell.cc
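The reporting presumably looks something like this (a sketch: the 15% threshold and the miss_percent/cache_flush variables are illustrative, not the exact patch):

    /* Excessive branch misses suggest the count cache is really disabled,
     * contradicting what sysfs reported. */
    if (miss_percent > 15) {
        printf("Branch misses > 15%% unexpected in this configuration!\n");
        printf("Possible mismatch between reported & actual mitigation\n");
        if (cache_flush)
            printf("Software count cache flush enabled: wasted overhead\n");
        return 1;   /* treat as failure rather than a silent pass */
    }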
-
Russell Currey authored
The device-tree property no-need-store-drain-on-priv-state-switch is equivalent to H_CPU_BEHAV_NO_STF_BARRIER from the H_GET_CPU_CHARACTERISTICS hcall on pseries. Since commit 84ed26fd ("powerpc/security: Add a security feature for STF barrier") powernv systems with this device-tree property have been enabling the STF barrier when they have no need for it. This patch fixes this by clearing the STF barrier feature on those systems. Fixes: 84ed26fd ("powerpc/security: Add a security feature for STF barrier") Reported-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220404101536.104794-2-ruscur@russell.cc
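The fix presumably amounts to a device-tree check in the powernv security setup, along these lines (a sketch: the fw-features node lookup is an assumption; security_ftr_clear() and SEC_FTR_STF_BARRIER come from asm/security_features.h):

    struct device_node *np = of_find_node_by_name(NULL, "fw-features");

    if (np && of_property_read_bool(np, "no-need-store-drain-on-priv-state-switch"))
        security_ftr_clear(SEC_FTR_STF_BARRIER);   /* firmware says: not needed */
    of_node_put(np);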
-
Russell Currey authored
The device-tree properties no-need-l1d-flush-msr-pr-1-to-0 and no-need-l1d-flush-kernel-on-user-access are the equivalents of H_CPU_BEHAV_NO_L1D_FLUSH_ENTRY and H_CPU_BEHAV_NO_L1D_FLUSH_UACCESS from the H_GET_CPU_CHARACTERISTICS hcall on pseries respectively. In commit d02fa40d ("powerpc/powernv: Remove POWER9 PVR version check for entry and uaccess flushes") the condition for disabling the L1D flush on kernel entry and user access was changed from any non-P9 CPU to only checking P7 and P8. Without the appropriate device-tree checks for newer processors on powernv, these flushes are unnecessarily enabled on those systems. This patch corrects this. Fixes: d02fa40d ("powerpc/powernv: Remove POWER9 PVR version check for entry and uaccess flushes") Reported-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220404101536.104794-1-ruscur@russell.cc
-
Pali Rohár authored
The P2020 also contains a Power Management Controller; its registers are at offset 0xe0070 and are compatible with the mpc8548 PMC. So add a PMC node to the DTS include file fsl/p2020si-post.dtsi. Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220506203621.26314-1-pali@kernel.org
-
Michael Ellerman authored
We added checks to __pa() / __va() to ensure they're only called with appropriate addresses. But using BUG_ON() is too strong: it means virt_addr_valid() will BUG when DEBUG_VIRTUAL is enabled. Instead switch them to warnings; arm64 does the same. Fixes: 4dd7554a ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220406145802.538416-5-mpe@ellerman.id.au
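Concretely, this plausibly takes the form of an arm64-style helper (a sketch; the VIRTUAL_WARN_ON name is an assumption mirroring the existing VIRTUAL_BUG_ON):

    #define VIRTUAL_WARN_ON(x)  WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))

    #define __va(x)                                                      \
    ({                                                                   \
        VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET);              \
        (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);         \
    })

With that, a bad address fed to virt_addr_valid() produces a backtrace but the kernel keeps running.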
-
Michael Ellerman authored
The previous commit added PAGE_SIZE related config symbols to powerpc using the generic names. So there's no need to refer to them in the definition of PAGE_SIZE_LESS_THAN_64KB etc.; the negative dependency on the generic symbol is sufficient (in this case !PAGE_SIZE_64KB). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220505125123.2088143-2-mpe@ellerman.id.au
-
Michael Ellerman authored
Other arches (sh, mips, hexagon) use standard names for PAGE_SIZE related config symbols. Add matching symbols for powerpc, which are enabled by default but depend on our architecture specific PAGE_SIZE symbols. This allows generic/driver code to express dependencies on the PAGE_SIZE without needing to refer to architecture specific config symbols. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220505125123.2088143-1-mpe@ellerman.id.au
-
Haren Myneni authored
The VAS entry is created as a misc device, so the sysfs comments should list the proper entries. Reported-by: Matheus Castanho <mscastanho@ibm.com> Signed-off-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6dee950c7b72a4965c102208041f14a063cf5a8c.camel@linux.ibm.com
-
Haren Myneni authored
In init_winctx_regs(), __pa() is called on winctx->rx_fifo and this function is called to initialize registers for receive and fault windows. But the real address is passed in winctx->rx_fifo for receive windows and the virtual address for fault windows, which causes errors with DEBUG_VIRTUAL enabled. Fix this issue by assigning only the real address to rx_fifo in the vas_rx_win_attr struct for both receive and fault windows. Reported-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/338e958c7ab8f3b266fa794a1f80f99b9671829e.camel@linux.ibm.com
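In other words, the real-address conversion moves to window-setup time (a sketch; the variable names are illustrative):

    /* Caller hands VAS a real address for both receive and fault windows ... */
    attr.rx_fifo = __pa(fifo);

    /* ... so init_winctx_regs() uses it as-is, with no __pa() of its own */
    winctx->rx_fifo = attr.rx_fifo;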
-
Christophe Leroy authored
The following PPC_INST_XXX macros are not used anymore outside ppc-opcode.h: - PPC_INST_LD - PPC_INST_STD - PPC_INST_ADDIS - PPC_INST_ADD - PPC_INST_DIVD Remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8c28636126f69141419953b5638b4a908c184dc1.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Convert the last users of PPC_INST_BL to PPC_RAW_BL(), and remove PPC_INST_BL. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d9eacb758e7ae7cf224211ebe3f6f7d409a333be.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Use PPC_LI_MASK and PPC_LI() instead of open coding them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3d56d7bc3200403773d54e62659d0e01292a055d.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Convert the last users of PPC_INST_BRANCH to PPC_RAW_BRANCH(), and remove PPC_INST_BRANCH. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fa8807108a2ef2287a2c9651d6e1ff7c051923d9.1652074503.git.christophe.leroy@csgroup.eu
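For reference, the macros involved in the three conversions above look roughly like this in ppc-opcode.h (from memory; check the header for the authoritative definitions):

    #define PPC_LI_MASK             0x03fffffc
    #define PPC_LI(v)               ((v) & PPC_LI_MASK)
    #define PPC_RAW_BRANCH(offset)  (0x48000000 | PPC_LI(offset))   /* b  */
    #define PPC_RAW_BL(offset)      (0x48000001 | PPC_LI(offset))   /* bl */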
-
Christophe Leroy authored
module_trampoline_target() is quite a hot path used when activating/deactivating the function tracer. Avoid the heavy copy_from_kernel_nofault() by doing four calls to copy_inst_from_kernel_nofault(). Use __copy_inst_from_kernel_nofault() for the last 3 calls. The first call uses copy_inst_from_kernel_nofault() to check that the address is within kernel space. There is no risk of wrapping past the top of kernel space, because the last page is never mapped: if the address is in the last page, the first copy fails and the other ones are never performed. Also make the function notrace, just like all functions that call it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c55559103e014b7863161559d340e8e9484eaaa6.1652074503.git.christophe.leroy@csgroup.eu
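The resulting read sequence plausibly looks like this (a sketch; jmp[] holds the four trampoline instructions):

    ppc_inst_t jmp[4];

    /* First copy is the checked variant and validates the address;
     * the remaining three can safely skip the check. */
    if (copy_inst_from_kernel_nofault(jmp, (u32 *)trampoline))
        return -EFAULT;
    if (__copy_inst_from_kernel_nofault(jmp + 1, (u32 *)trampoline + 1))
        return -EFAULT;
    if (__copy_inst_from_kernel_nofault(jmp + 2, (u32 *)trampoline + 2))
        return -EFAULT;
    if (__copy_inst_from_kernel_nofault(jmp + 3, (u32 *)trampoline + 3))
        return -EFAULT;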
-
Christophe Leroy authored
On the same model as get_user() versus __get_user(), introduce __copy_inst_from_kernel_nofault(), which doesn't check the address. To be used by callers that have already checked that the address is a kernel address. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1f3702890d6dbd64702b61834753bcc96851c18c.1652074503.git.christophe.leroy@csgroup.eu
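On that model, the checked variant presumably becomes a thin wrapper (sketch):

    static inline int copy_inst_from_kernel_nofault(ppc_inst_t *inst, u32 *src)
    {
        if (unlikely(!is_kernel_addr((unsigned long)src)))
            return -ERANGE;

        /* Address already validated: go straight to the unchecked copy */
        return __copy_inst_from_kernel_nofault(inst, src);
    }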
-
Christophe Leroy authored
A lot of #ifdefs can be replaced by IS_ENABLED(). Do so. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fold in changes suggested by Naveen and Christophe on list] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/18ce6708d6f8c71d87436f9c6019f04df4125128.1652074503.git.christophe.leroy@csgroup.eu
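The transformation is mechanical; an illustrative (not literal) hunk:

    /* before */
    #ifdef CONFIG_MODULES
        ret = __ftrace_make_nop(mod, rec, addr);
    #endif

    /* after: the disabled branch is still parsed and type-checked,
     * then eliminated as dead code */
    if (IS_ENABLED(CONFIG_MODULES))
        ret = __ftrace_make_nop(mod, rec, addr);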
-
Christophe Leroy authored
Avoid ifdefs around expected_nop_sequence(). While at it, make it return bool. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/305d22472f1f92127fba09692df6bb5d079a8cd0.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
0x80000000 is SZ_2G. Use it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fix comparison against unsigned -SZ_2G] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bb6626e884acffe87b58736291df57db3deaa9b9.1652074503.git.christophe.leroy@csgroup.eu
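One subtlety, per mpe's fixup note: SZ_2G (from linux/sizes.h) is an unsigned constant, so its negation must be cast before a signed comparison (illustrative; reladdr is a hypothetical signed offset):

    /* -SZ_2G alone is a large unsigned value; cast to keep the
     * comparison signed */
    if (reladdr > 0x7fffffff || reladdr < -(long)SZ_2G)
        return -ERANGE;     /* out of range for a 32-bit displacement */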
-
- 19 May, 2022 20 commits
-
-
Christophe Leroy authored
The PPC_RAW_xxx() macros are self-explanatory and less error prone than open coding. Use them in ftrace.c. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9292094c9a69cef6d29ee83f435a557b59c45065.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
To make it explicit, use BRANCH_SET_LINK instead of value 1 when calling create_branch(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d57847063ac93660a5af620d4df1847f10edf61a.1652074503.git.christophe.leroy@csgroup.eu
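An illustrative call site (sketch):

    /* emit a 'bl' (branch and link) rather than a plain 'b' */
    err = create_branch(&op, (u32 *)ip, addr, BRANCH_SET_LINK);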
-
Christophe Leroy authored
The ftrace_plt_tramps table is never filled, so it is useless. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/daeeb618a6619e3a7e3f82f1bd83ca7c25af6330.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Since commit 0c0c5230 ("powerpc: Only support DYNAMIC_FTRACE not static"), CONFIG_DYNAMIC_FTRACE is always selected when CONFIG_FUNCTION_TRACER is selected. To avoid leaving the reader wondering what happens when CONFIG_FUNCTION_TRACER is selected but CONFIG_DYNAMIC_FTRACE is not, use CONFIG_FUNCTION_TRACER in ifdefs instead of CONFIG_DYNAMIC_FTRACE. As CONFIG_FUNCTION_GRAPH_TRACER depends on CONFIG_FUNCTION_TRACER, ftrace.o doesn't need to appear for both symbols in the Makefile. Then, as ftrace.o is built only when CONFIG_FUNCTION_TRACER is selected, ifdef CONFIG_FUNCTION_TRACER is not needed in ftrace.c; and since it implies CONFIG_DYNAMIC_FTRACE, CONFIG_DYNAMIC_FTRACE is not needed in ftrace.c either. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/628d357503eb90b4a034f99b7df516caaff4d279.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Since commit 7bea7ac0 ("powerpc/syscalls: Fix syscall tracing") ftrace.o is not needed anymore for CONFIG_FTRACE_SYSCALLS. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/275932a5d61543b825ff9a64f61abed6da5d4a2a.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Since commit c93d4f6e ("powerpc/ftrace: Add module_trampoline_target() for PPC32"), __ftrace_make_nop() for PPC32 is very similar to the one for PPC64. Same for __ftrace_make_call(). Make them common. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/96f53c237316dab4b1b8c682685266faa92da816.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Now that we have CONFIG_PPC64_ELF_ABI_V1 and CONFIG_PPC64_ELF_ABI_V2, get rid of all indirect detection of ABI version. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/709d9d69523c14c8a9fba4486395dca0f2d675b1.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Replace all uses of PPC64_ELF_ABI_v1 and PPC64_ELF_ABI_v2 with CONFIG_PPC64_ELF_ABI_V1 and CONFIG_PPC64_ELF_ABI_V2 respectively. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ba13d59e8c50bc9aa6328f1c7f0c0d0278e0a3a7.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
At present, we use CONFIG_CPU_LITTLE_ENDIAN and CONFIG_CPU_BIG_ENDIAN to pass -mabi=elfv1 or -mabi=elfv2 to the compiler, then define a PPC64_ELF_ABI_v1 or PPC64_ELF_ABI_v2 macro in asm/types.h based on the _CALL_ELF define set by the compiler. Make this more straightforward with a CONFIG option that is directly usable. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1eca1addbc550167da9841c7340a010d0c4b2200.1652074503.git.christophe.leroy@csgroup.eu
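With the three ABI commits above, conditional code can test the CONFIG symbol directly, in preprocessor or C form (illustrative):

    #ifdef CONFIG_PPC64_ELF_ABI_V2
        /* ELFv2: local entry points, no function descriptors */
    #endif

    if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1))
        pr_debug("ELFv1: function descriptors and dot symbols\n");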
-
Christophe Leroy authored
Instead of returning -EPERM when patch_instruction() fails, just return what patch_instruction() returns. That simplifies ftrace_modify_code():

After:

       0:  94 21 ff c0   stwu    r1,-64(r1)
       4:  93 e1 00 3c   stw     r31,60(r1)
       8:  7c 7f 1b 79   mr.     r31,r3
       c:  40 80 00 30   bge     3c <ftrace_modify_code+0x3c>
      10:  93 c1 00 38   stw     r30,56(r1)
      14:  7c 9e 23 78   mr      r30,r4
      18:  7c a4 2b 78   mr      r4,r5
      1c:  80 bf 00 00   lwz     r5,0(r31)
      20:  7c 1e 28 40   cmplw   r30,r5
      24:  40 82 00 34   bne     58 <ftrace_modify_code+0x58>
      28:  83 c1 00 38   lwz     r30,56(r1)
      2c:  7f e3 fb 78   mr      r3,r31
      30:  83 e1 00 3c   lwz     r31,60(r1)
      34:  38 21 00 40   addi    r1,r1,64
      38:  48 00 00 00   b       38 <ftrace_modify_code+0x38>
                         38: R_PPC_REL24  patch_instruction

Before:

       0:  94 21 ff c0   stwu    r1,-64(r1)
       4:  93 e1 00 3c   stw     r31,60(r1)
       8:  7c 7f 1b 79   mr.     r31,r3
       c:  40 80 00 4c   bge     58 <ftrace_modify_code+0x58>
      10:  93 c1 00 38   stw     r30,56(r1)
      14:  7c 9e 23 78   mr      r30,r4
      18:  7c a4 2b 78   mr      r4,r5
      1c:  80 bf 00 00   lwz     r5,0(r31)
      20:  7c 08 02 a6   mflr    r0
      24:  90 01 00 44   stw     r0,68(r1)
      28:  7c 1e 28 40   cmplw   r30,r5
      2c:  40 82 00 48   bne     74 <ftrace_modify_code+0x74>
      30:  7f e3 fb 78   mr      r3,r31
      34:  48 00 00 01   bl      34 <ftrace_modify_code+0x34>
                         34: R_PPC_REL24  patch_instruction
      38:  80 01 00 44   lwz     r0,68(r1)
      3c:  20 63 00 00   subfic  r3,r3,0
      40:  83 c1 00 38   lwz     r30,56(r1)
      44:  7c 63 19 10   subfe   r3,r3,r3
      48:  7c 08 03 a6   mtlr    r0
      4c:  83 e1 00 3c   lwz     r31,60(r1)
      50:  38 21 00 40   addi    r1,r1,64
      54:  4e 80 00 20   blr

It improves ftrace activation/deactivation duration by about 3%. Modify patch_instruction() to return -EPERM on failure in order to match ftrace's expectations. Other users of patch_instruction() do not care about the exact error value returned. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/49a8597230713e2633e7d9d7b56140787c4a7e20.1652074503.git.christophe.leroy@csgroup.eu
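At the source level the change is presumably just this (sketch):

    static int ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_t new)
    {
        ppc_inst_t replaced;

        /* Read the existing instruction and verify it is what we expect */
        if (copy_inst_from_kernel_nofault(&replaced, (u32 *)ip))
            return -EFAULT;
        if (!ppc_inst_equal(replaced, old))
            return -EINVAL;

        /* Propagate patch_instruction()'s own error code (now -EPERM
         * on failure) instead of remapping it. */
        return patch_instruction((u32 *)ip, new);
    }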
-
Christophe Leroy authored
Inlining ftrace_modify_code() increases the size of the ftrace code a bit, but brings a 5% improvement in ftrace activation time. Usually in C files we let gcc decide what to do, but here it really helps to nudge gcc into inlining, though we don't want to force it with __always_inline, which would be too much for CONFIG_CC_OPTIMIZE_FOR_SIZE. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1597a06d57cfc80e6853838c4066e799bf6c7977.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
create_branch() is a good candidate for inlining because: - Flags can be folded in. - Range tests are likely to have been done already. This reduces create_branch() to only a handful of instructions, so inline it. It improves ftrace activation by 10%. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/69851cc9a7bf8f03d025e6d29e165f2d0bd3bb6e.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Use is_offset_in_branch_range() instead of create_branch() to check if a target is within branch range. This patch, together with the previous one, improves ftrace activation time by 7%. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/912ae51782f5a53c44e435497c8c3fb5cc632387.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The tests in is_offset_in_branch_range() and is_offset_in_cond_branch_range() are simple and worth inlining. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a05be0ccb7373e6a9789a1988fcd0c810f5f9269.1652074503.git.christophe.leroy@csgroup.eu
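The tests themselves are one-liners, roughly (from memory of asm/code-patching.h):

    static inline bool is_offset_in_branch_range(long offset)
    {
        /* 26-bit signed, word-aligned displacement of I-form branches */
        return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
    }

    static inline bool is_offset_in_cond_branch_range(long offset)
    {
        /* 16-bit signed, word-aligned displacement of B-form branches */
        return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
    }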
-
Christophe Leroy authored
Since commit d5937db1 ("powerpc/code-patching: Fix patch_branch() return on out-of-range failure") patch_branch() fails with -ERANGE when trying to branch out of range. No need to perform the test twice. Remove redundant create_branch() calls. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/aa45fbad0b4b7493080835d8276c0cb4ce146503.1652074503.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
When we have CONFIG_DYNAMIC_FTRACE_WITH_ARGS, prepare_ftrace_return() is called by ftrace_graph_func(); otherwise prepare_ftrace_return() is called from assembly. Refactor prepare_ftrace_return() into a static __prepare_ftrace_return() that will be called by both prepare_ftrace_return() and ftrace_graph_func(). This allows GCC to fold __prepare_ftrace_return() inside ftrace_graph_func(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0d42deafe353980c66cf19d3132638c05ba9f4a9.1652074503.git.christophe.leroy@csgroup.eu
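Schematically (a sketch; the ftrace_graph_func() prototype is the generic one from linux/ftrace.h):

    /* Common body: hook the return address, keep 'parent' on failure */
    static unsigned long __prepare_ftrace_return(unsigned long parent,
                                                 unsigned long ip,
                                                 unsigned long sp)
    {
        if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp))
            parent = (unsigned long)&return_to_handler;
        return parent;
    }

    #ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
    void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
                           struct ftrace_ops *op, struct ftrace_regs *fregs)
    {
        fregs->regs.link = __prepare_ftrace_return(parent_ip, ip,
                                                   fregs->regs.gpr[1]);
    }
    #endif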
-
Nicholas Piggin authored
rtas_call() must not be called with the MMU disabled, because in case of an RTAS error log_error() is called, which requires the MMU to be enabled. Add a test and warning for this. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-14-npiggin@gmail.com
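A sketch of such a check at the top of rtas_call() (the exact predicate is an assumption):

    /* log_error() needs virtual mode, so catch real-mode callers early */
    if (WARN_ON_ONCE(!(mfmsr() & (MSR_IR | MSR_DR))))
        return -1;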
-
Nicholas Piggin authored
PAPR specifies that RTAS may be called with MSR[RI] enabled if the calling context is recoverable, and RTAS will manage RI as necessary. Call the rtas entry point with RI enabled, and add a check to ensure the caller has RI enabled. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-10-npiggin@gmail.com
-
Nicholas Piggin authored
On 64-bit, the PACA is kept in an SPRG, so it does not need to be saved on the stack. We also don't need to mask off the top bits for real mode addresses because the architecture does this for us. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-8-npiggin@gmail.com
-
Nicholas Piggin authored
Disable MSR[EE] in C code rather than asm. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-5-npiggin@gmail.com
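Presumably along the lines of the existing C-level idiom (illustrative):

    unsigned long flags;

    local_irq_save(flags);
    hard_irq_disable();     /* clears MSR[EE] from C, no hand-written asm */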
-