- 27 Jan, 2020 16 commits
-
-
Christophe Leroy authored
Since kasan_init_region() is not used anymore for modules, KASAN init is done while slab_is_available() is false. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/84b27bf08b41c8343efd88e10f2eccd8e9f85593.1579024426.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
Unloading/Reloading of modules seems to fail without KASAN_VMALLOC but works properly with it. Force selection of KASAN_VMALLOC when MODULES are selected, and drop module_alloc() which was dedicated to KASAN for modules. Reported-by: <erhard_f@mailbox.org> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://bugzilla.kernel.org/show_bug.cgi?id=205283 Link: https://lore.kernel.org/r/f909da11aecb59ab7f32ba01fae6f356eaa4d7bc.1579024426.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
Move CONFIG_PPC32 at the same place as CONFIG_PPC64 for consistency. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6f28085c2a1aa987093d50db17586633bbf8e206.1579024426.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
Add support of KASAN_VMALLOC on PPC32. To allow this, the early shadow covering the VMALLOC space needs to be removed once the high_memory var is set and before freeing memblock. And the VMALLOC area needs to be aligned such that boundaries are covered by a full shadow page. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/031dec5487bde9b2181c8b3c9800e1879cf98c1a.1579024426.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
Running vdsotest repeatedly leaves the following log:
[ 79.629901] vdsotest[396]: User access of kernel address (ffffffff) - exploit attempt? (uid: 0)
A pointer set to (-1) is likely a programming error similar to a NULL pointer and is not worth logging as an exploit attempt. Don't log user accesses to 0xffffffff. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0728849e826ba16f1fbd6fa7f5c6cc87bd64e097.1577087627.git.christophe.leroy@c-s.fr
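A minimal sketch of the idea in plain C, not the actual fault-handling hunk (the helper name is made up): an all-ones address is treated like a NULL-style programming error and is simply not worth the "exploit attempt?" message.
  #include <stdbool.h>

  /* Hypothetical helper illustrating the policy described above: an address
   * of (unsigned long)-1 (0xffffffff on 32-bit) is treated like a NULL
   * dereference and is not reported as an exploit attempt. */
  static bool worth_logging_kernel_access(unsigned long address)
  {
          return address != (unsigned long)-1;
  }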
-
Christophe Leroy authored
A few changes to retrieve DAR and DSISR from struct regs instead of reading the registers directly, as they may have changed due to a TLB miss. hash_page() and friends are also modified to work with virtual data addresses instead of physical ones, and the same is done for load_up_fpu() and load_up_altivec(). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Fix tovirt_vmstack call in head_32.S to fix CHRP build] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2e2509a242fd5f3e23df4a06530c18060c4d321e.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
Trying VMAP_STACK with KVM, vmlinux was not starting. This was due to SRR0 and SRR1 being clobbered by an ISI, because the rfi was in a different page than the mtsrr0/1:
c0003fe0 <mmu_off>:
c0003fe0: 38 83 00 54  addi    r4,r3,84
c0003fe4: 7c 60 00 a6  mfmsr   r3
c0003fe8: 70 60 00 30  andi.   r0,r3,48
c0003fec: 4d 82 00 20  beqlr
c0003ff0: 7c 63 00 78  andc    r3,r3,r0
c0003ff4: 7c 9a 03 a6  mtsrr0  r4
c0003ff8: 7c 7b 03 a6  mtsrr1  r3
c0003ffc: 7c 00 04 ac  hwsync
c0004000: 4c 00 00 64  rfi
Align the 4-instruction block used to deactivate the MMU to order 4, so that the block never crosses a page boundary. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/30d2cda111b7977227fff067fa7e358440e2b3a4.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
The part dedicated to handling hash_page() is entirely unneeded on processors like the 603 which have no real hash pages. Let's enlarge the content of the feature fixup and provide an alternative which jumps directly instead of getting NIPs. Also, in preparation for VMAP stacks, the end of the DSI handler is moved later in the code, as it won't fit anymore once VMAP stacks are there. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c31b22c91af8b011d0a4fd9e52ad6afb4b593f71.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
This patch enables CONFIG_VMAP_STACK. For that, a few changes are done in head_8xx.S. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d7ba1e34e80898310d6a314cbebe48baa32894ef.1576916812.git.christophe.leroy@c-s.fr
-
Michael Ellerman authored
When we enable VMAP_STACK there will not be enough room for the alignment handler at 0x600 in head_8xx.S. For now move the tail of the alignment handler out of line, and branch to it. Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
The breakpoint exception is big. Split it to support future growth of the exception prolog. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1dda3293d86d0f715b13b2633c95d2188a42a02c.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
Move the DataStoreTLBMiss perf handler in order to cope with a future, larger exception prolog. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/75dd28b04efd2cbdbf01153173d99c11cdff2f08.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
head_8xx.S has entries for all exceptions from 0x100 to 0x1f00. Several of them do not exist and, in accordance with the documentation, are never generated by the 8xx. Remove those entry points to make some room for future growth of the exception code. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/66f92866fe9524cf0f056016921c7d53adaef3a0.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
In preparation for handling CONFIG_VMAP_STACK, the DTLB miss handler needs to use different scratch registers than other exception handlers, in order not to jeopardise exception entry on stack DTLB misses. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c5287ea59ae9630f505019b309bf94029241635f.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
In order to also catch overflows on IRQ stacks, use vmapped stacks. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d33ad1b36ddff4dcc19f96c592c12a61cf85d406.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
To avoid recursive faults, stack overflow detection has to be performed before writing to the stack in exception prologs. Do it by checking the alignment. If the stack pointer alignment is wrong, it means it is pointing to the following or preceding page. Without VMAP stack, a stack overflow is catastrophic. With VMAP stack, a stack overflow isn't destructive, so don't panic; kill the task with SIGSEGV instead. A dedicated overflow stack is set up for each CPU.
lkdtm: Performing direct entry EXHAUST_STACK
lkdtm: Calling function with 512 frame size to depth 32 ...
lkdtm: loop 32/32 ...
lkdtm: loop 31/32 ...
lkdtm: loop 30/32 ...
lkdtm: loop 29/32 ...
lkdtm: loop 28/32 ...
lkdtm: loop 27/32 ...
lkdtm: loop 26/32 ...
lkdtm: loop 25/32 ...
lkdtm: loop 24/32 ...
lkdtm: loop 23/32 ...
lkdtm: loop 22/32 ...
lkdtm: loop 21/32 ...
lkdtm: loop 20/32 ...
Kernel stack overflow in process test[359], r1=c900c008
Oops: Kernel stack overflow, sig: 6 [#1]
BE PAGE_SIZE=4K MMU=Hash PowerMac
Modules linked in:
CPU: 0 PID: 359 Comm: test Not tainted 5.3.0-rc7+ #2225
NIP: c0622060 LR: c0626710 CTR: 00000000
REGS: c0895f48 TRAP: 0000 Not tainted (5.3.0-rc7+)
MSR: 00001032 <ME,IR,DR,RI> CR: 28004224 XER: 00000000
GPR00: c0626ca4 c900c008 c783c000 c07335cc c900c010 c07335cc c900c0f0 c07335cc
GPR08: c900c0f0 00000001 00000000 00000000 28008222 00000000 00000000 00000000
GPR16: 00000000 00000000 10010128 10010000 b799c245 10010158 c07335cc 00000025
GPR24: c0690000 c08b91d4 c068f688 00000020 c900c0f0 c068f668 c08b95b4 c08b91d4
NIP [c0622060] format_decode+0x0/0x4d4
LR [c0626710] vsnprintf+0x80/0x5fc
Call Trace:
[c900c068] [c0626ca4] vscnprintf+0x18/0x48
[c900c078] [c007b944] vprintk_store+0x40/0x214
[c900c0b8] [c007bf50] vprintk_emit+0x90/0x1dc
[c900c0e8] [c007c5cc] printk+0x50/0x60
[c900c128] [c03da5b0] recursive_loop+0x44/0x6c
[c900c338] [c03da5c4] recursive_loop+0x58/0x6c
[c900c548] [c03da5c4] recursive_loop+0x58/0x6c
[c900c758] [c03da5c4] recursive_loop+0x58/0x6c
[c900c968] [c03da5c4] recursive_loop+0x58/0x6c
[c900cb78] [c03da5c4] recursive_loop+0x58/0x6c
[c900cd88] [c03da5c4] recursive_loop+0x58/0x6c
[c900cf98] [c03da5c4] recursive_loop+0x58/0x6c
[c900d1a8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d3b8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d5c8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d7d8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d9e8] [c03da5c4] recursive_loop+0x58/0x6c
[c900dbf8] [c03da5c4] recursive_loop+0x58/0x6c
[c900de08] [c03da67c] lkdtm_EXHAUST_STACK+0x30/0x4c
[c900de18] [c03da3e8] direct_entry+0xc8/0x140
[c900de48] [c029fb40] full_proxy_write+0x64/0xcc
[c900de68] [c01500f8] __vfs_write+0x30/0x1d0
[c900dee8] [c0152cb8] vfs_write+0xb8/0x1d4
[c900df08] [c0152f7c] ksys_write+0x58/0xe8
[c900df38] [c0014208] ret_from_syscall+0x0/0x34
--- interrupt: c01 at 0xf806664 LR = 0x1000c868
Instruction dump:
4bffff91 80010014 7c832378 7c0803a6 38210010 4e800020 3d20c08a 3ca0c089
8089a0cc 38a58f0c 38600001 4ba2d494 <9421ffe0> 7c0802a6 bfc10018 7c9f2378
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1b89c121b4070c7ee99e4f22cc178f15a736b07b.1576916812.git.christophe.leroy@c-s.fr
-
- 26 Jan, 2020 7 commits
-
-
Christophe Leroy authored
In order to ease stack overflow detection, align stack to 2 * THREAD_SIZE when using VMAP_STACK. This allows overflow detection using a single bit check. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/60e9ae86b7d2cdcf21468787076d345663648f46.1576916812.git.christophe.leroy@c-s.fr
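A user-space sketch of why the 2 * THREAD_SIZE alignment enables a single-bit check (the THREAD_SHIFT value below is an assumption for illustration, not the kernel's): with the stack base aligned to twice its size, every in-range stack pointer has bit THREAD_SHIFT clear, and any pointer that has run off the bottom of the stack has it set.
  #include <stdbool.h>
  #include <stdio.h>

  #define THREAD_SHIFT 13                     /* assumed: 8 KiB stacks */
  #define THREAD_SIZE  (1UL << THREAD_SHIFT)

  /* Valid stack pointers live in [base, base + THREAD_SIZE) with base
   * aligned to 2 * THREAD_SIZE, so bit THREAD_SHIFT is 0 for them and
   * 1 for anything below the stack. */
  static bool stack_overflowed(unsigned long sp)
  {
          return sp & THREAD_SIZE;
  }

  int main(void)
  {
          unsigned long base = 2 * THREAD_SIZE;   /* toy 2*THREAD_SIZE-aligned base */
          printf("%d %d\n", stack_overflowed(base + 0x100),
                 stack_overflowed(base - 0x10));  /* prints: 0 1 */
          return 0;
  }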
-
Christophe Leroy authored
To support CONFIG_VMAP_STACK, the kernel has to activate Data MMU Translation for accessing the stack. Before doing that it must save SRR0, SRR1 and also DAR and DSISR when relevant, in order not to lose them in case there is a Data TLB Miss once the translation is reactivated. This patch adds fields to the thread struct for saving those registers. It prepares entry_32.S to handle exception entry with Data MMU Translation enabled and alters the EXCEPTION_PROLOG macros to save SRR0, SRR1, DAR and DSISR and then re-enable Data MMU Translation. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a775a1fea60f190e0f63503463fb775310a2009b.1576916812.git.christophe.leroy@c-s.fr
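A hedged sketch of the kind of per-thread save area this describes (field names and layout are guesses, not the real thread_struct): somewhere to stash the volatile SRR0/SRR1/DAR/DSISR before data translation is re-enabled, so a TLB miss on the vmapped stack cannot clobber them.
  #include <stdint.h>

  /* Assumed shape of the save area; illustrative only. */
  struct vmap_stack_save_area {
          uint32_t srr0;
          uint32_t srr1;
          uint32_t dar;
          uint32_t dsisr;
  };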
-
Christophe Leroy authored
Refactor reading and saving of DAR and DSISR in exception vectors. This will ease the implementation of VMAP stack. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1286b3e51b07727c6b4b05f2df9af3f9b1717fb5.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
In order to simplify the VMAP stack implementation, move the MSR_PR test into EXCEPTION_PROLOG_0. This requires not modifying cr0 between EXCEPTION_PROLOG_0 and EXCEPTION_PROLOG_1. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5c8b5bba692b92654dbd363a229a1ba91db725bb.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
handle_page_fault() is the only function that saves DAR/DEAR itself. Save DAR/DEAR before calling handle_page_fault(), to prepare for VMAP stack which will require saving them even earlier. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3a4d58d378091086f00fde42b59610c80289e120.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
This patch creates a macro for the very first part of the exception prolog; this will help when implementing CONFIG_VMAP_STACK. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2249fe62f481121a180e9655ad2b998093f318f3.1576916812.git.christophe.leroy@c-s.fr
-
Christophe Leroy authored
On PPC32, MTMSRD() is simply defined as mtmsr. Replace MTMSRD(reg) with mtmsr reg in files dedicated to PPC32; this makes the code less obscure. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/22469e78230edea3dbd0c79a555d73124f6c6d93.1576916812.git.christophe.leroy@c-s.fr
-
- 25 Jan, 2020 9 commits
-
-
Oliver O'Halloran authored
Some newer cards supported by aacraid can take up to 40s to recover after an EEH event. This causes spurious failures in the basic EEH self-test since the current maximum timeout is only 30s. Fix the immediate issue by bumping the timeout to a default of 60s, and allow the wait time to be specified via an environment variable (EEH_MAX_WAIT). Reported-by: Steve Best <sbest@redhat.com> Suggested-by: Douglas Miller <dougmill@us.ibm.com> Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200122031125.25991-1-oohall@gmail.com
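A small sketch of how a test harness might honour such an override (user-space C; the EEH_MAX_WAIT name comes from the commit, the surrounding code is illustrative):
  #include <stdio.h>
  #include <stdlib.h>

  /* Read the wait budget from EEH_MAX_WAIT if set, else fall back to the
   * new 60 second default mentioned above. */
  static int eeh_max_wait_seconds(void)
  {
          const char *env = getenv("EEH_MAX_WAIT");
          return env ? atoi(env) : 60;
  }

  int main(void)
  {
          printf("waiting up to %d seconds for EEH recovery\n",
                 eeh_max_wait_seconds());
          return 0;
  }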
-
Michael Bringmann authored
Correct an overflow problem in the calculation and display of the Maximum Memory value reported to syscfg. Signed-off-by: Michael Bringmann <mwb@linux.ibm.com> [mpe: Only n_lmbs needs casting to unsigned long] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5577aef8-1d5a-ca95-ff0a-9c7b5977e5bf@linux.ibm.com
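The fix pattern, sketched in user-space C with made-up variable names and values: promote one operand to a wider type before the multiplication so the product cannot wrap in 32 bits.
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint32_t n_lmbs = 12345, lmb_size = 256 * 1024 * 1024; /* illustrative */

          uint32_t wrong = n_lmbs * lmb_size;            /* 32-bit product wraps */
          uint64_t right = (uint64_t)n_lmbs * lmb_size;  /* cast first, then multiply */

          printf("wrapped: %u correct: %llu\n", wrong, (unsigned long long)right);
          return 0;
  }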
-
Jordan Niethe authored
Commit a25bd72b ("powerpc/mm/radix: Workaround prefetch issue with KVM") introduced a number of workarounds, as coming out of a guest with the MMU enabled would make the cpu start running in hypervisor state with the PID value from the guest. The cpu will then start prefetching for the hypervisor with that PID value. In Power9 DD2.2 the cpu behaviour was modified to fix this: when accessing Quadrant 0 in hypervisor mode with LPID != 0, prefetching will not be performed. This means that we can get rid of the workarounds for Power9 DD2.2 and later revisions. Add a new cpu feature CPU_FTR_P9_RADIX_PREFETCH_BUG to indicate whether the workarounds are needed. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Acked-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191206031722.25781-1-jniethe5@gmail.com
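A hedged sketch of how such a feature bit is typically consumed (the helper and its surrounding context are illustrative, not the actual patch hunk; the flag name is the one introduced by the commit):
  #include <asm/cpu_has_feature.h>

  /* Gate the old a25bd72b-style workaround on the new feature bit, so
   * Power9 DD2.2 and later (fixed in hardware) skip it. */
  static inline bool need_radix_prefetch_workaround(void)
  {
          return cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG);
  }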
-
Vaibhav Jain authored
The string 'bus_desc.provider_name' allocated inside papr_scm_nvdimm_init() leaks if the call to nvdimm_bus_register() fails or when papr_scm_remove() is called. This minor patch ensures that 'bus_desc.provider_name' is freed in the error path of nvdimm_bus_register() as well as in papr_scm_remove(). Fixes: b5beae5e ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200122155140.120429-1-vaibhav@linux.ibm.com
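A hedged sketch of the leak-fix pattern (identifiers simplified, not the exact papr_scm hunk): the kstrdup()'d provider_name must be kfree()'d both when nvdimm_bus_register() fails and again on the remove path.
  #include <linux/errno.h>
  #include <linux/libnvdimm.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>

  /* 'example_priv' stands in for the driver's private structure; only the
   * bus_desc/bus members matter for the pattern shown here. */
  struct example_priv {
          struct nvdimm_bus_descriptor bus_desc;
          struct nvdimm_bus *bus;
  };

  static int example_register_bus(struct example_priv *p, struct platform_device *pdev)
  {
          p->bus_desc.provider_name = kstrdup(pdev->name, GFP_KERNEL);
          if (!p->bus_desc.provider_name)
                  return -ENOMEM;

          p->bus = nvdimm_bus_register(&pdev->dev, &p->bus_desc);
          if (!p->bus) {
                  kfree(p->bus_desc.provider_name);   /* previously leaked */
                  return -ENXIO;
          }
          return 0;   /* the remove path must also kfree() provider_name */
  }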
-
Sukadev Bhattiprolu authored
Fix a couple of compile errors I stumbled upon with CONFIG_XMON=y and CONFIG_XMON_DISASSEMBLY=n. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200123010455.GA15080@us.ibm.com
-
Christophe Leroy authored
Instead of opencoding, use probe_user_read() to safely read a user location and probe_user_write() for writing to user memory. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e041f5eedb23f09ab553be8a91c3de2087147320.1579800517.git.christophe.leroy@c-s.fr
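A hedged sketch of the replacement pattern (kernel-style, assuming the probe_user_read() helper of that kernel era, which returns 0 on success; the wrapper itself is made up):
  #include <linux/errno.h>
  #include <linux/uaccess.h>

  /* Read a word from a possibly-inaccessible user address without
   * open-coding a pagefault_disable()/__get_user() sequence. */
  static int read_user_word(unsigned int *val, const void __user *addr)
  {
          if (probe_user_read(val, addr, sizeof(*val)))
                  return -EFAULT;     /* user address not accessible */
          return 0;
  }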
-
Chen Zhou authored
Fixes coccicheck warning:
arch/powerpc/platforms/maple/setup.c:232:15-16: WARNING comparing pointer to 0
Compare pointer-typed values to NULL rather than 0. Signed-off-by: Chen Zhou <chenzhou10@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200121013153.9937-1-chenzhou10@huawei.com
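The style fix in miniature (a generic sketch, not the maple/setup.c hunk itself):
  #include <stddef.h>
  #include <stdio.h>

  int main(void)
  {
          const char *name = NULL;    /* stand-in for any pointer-typed value */

          if (name == NULL)           /* preferred over "name == 0" */
                  puts("no name set");
          return 0;
  }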
-
Tyrel Datwyler authored
Commit e5afdf9d ("powerpc/vfio_spapr_tce: Add reference counting to iommu_table") missed an iommu_table allocation in the pseries vio code. The iommu_table is allocated with kzalloc and as a result the associated kref gets a value of zero. This has the side effect that during a DLPAR remove of the associated virtual IOA the iommu_tce_table_put() triggers a use-after-free underflow warning.
Call Trace:
[c0000002879e39f0] [c00000000071ecb4] refcount_warn_saturate+0x184/0x190 (unreliable)
[c0000002879e3a50] [c0000000000500ac] iommu_tce_table_put+0x9c/0xb0
[c0000002879e3a70] [c0000000000f54e4] vio_dev_release+0x34/0x70
[c0000002879e3aa0] [c00000000087cfa4] device_release+0x54/0xf0
[c0000002879e3b10] [c000000000d64c84] kobject_cleanup+0xa4/0x240
[c0000002879e3b90] [c00000000087d358] put_device+0x28/0x40
[c0000002879e3bb0] [c0000000007a328c] dlpar_remove_slot+0x15c/0x250
[c0000002879e3c50] [c0000000007a348c] remove_slot_store+0xac/0xf0
[c0000002879e3cd0] [c000000000d64220] kobj_attr_store+0x30/0x60
[c0000002879e3cf0] [c0000000004ff13c] sysfs_kf_write+0x6c/0xa0
[c0000002879e3d10] [c0000000004fde4c] kernfs_fop_write+0x18c/0x260
[c0000002879e3d60] [c000000000410f3c] __vfs_write+0x3c/0x70
[c0000002879e3d80] [c000000000415408] vfs_write+0xc8/0x250
[c0000002879e3dd0] [c0000000004157dc] ksys_write+0x7c/0x120
[c0000002879e3e20] [c00000000000b278] system_call+0x5c/0x68
Further, since the refcount was always zero the iommu_tce_table_put() fails to call the iommu_table release function, resulting in a leak. Fix this issue by initializing the iommu_table kref immediately after allocation. Fixes: e5afdf9d ("powerpc/vfio_spapr_tce: Add reference counting to iommu_table") Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1579558202-26052-1-git-send-email-tyreld@linux.ibm.com
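A hedged sketch of the fix pattern (the function name is made up; the it_kref field is taken from the refcounted iommu_table described above): kzalloc() leaves the embedded kref at zero, so it must be initialised before the table can ever be put.
  #include <asm/iommu.h>
  #include <linux/kref.h>
  #include <linux/slab.h>

  static struct iommu_table *example_alloc_iommu_table(void)
  {
          struct iommu_table *tbl = kzalloc(sizeof(*tbl), GFP_KERNEL);

          if (!tbl)
                  return NULL;
          kref_init(&tbl->it_kref);   /* refcount now starts at 1, not 0 */
          return tbl;
  }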
-
Aneesh Kumar K.V authored
Setting the ND_REGION_PAGEMAP flag implies the namespace mode defaults to fsdax mode. This also means the kernel ends up creating struct page backing for these namespace ranges. With large namespaces that is not the right thing to do. We should let the user select the mode they want the namespace to be created with. Hence disable ND_REGION_PAGEMAP for papr_scm regions. We still keep the flag for of_pmem because it supports only small persistent memory regions. This is similar to what is done for x86 with commit 004f1afb ("libnvdimm, pmem: direct map legacy pmem by default") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200108064647.169637-1-aneesh.kumar@linux.ibm.com
-
- 23 Jan, 2020 8 commits
-
-
Greg Kurz authored
As reported by ./scripts/checkpatch.pl --strict:
CHECK: extern prototypes should be avoided in .h files
Signed-off-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/157384145834.181768.944827793193636924.stgit@bahia.lan
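The change in miniature (a generic illustration with a made-up prototype, not the actual header hunk):
  /* before: flagged by checkpatch.pl --strict */
  extern void example_sync_source(unsigned int hw_irq);

  /* after: the extern qualifier is redundant on function declarations */
  void example_sync_source(unsigned int hw_irq);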
-
Greg Kurz authored
Signed-off-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/156219139988.578018.1046848908285019838.stgit@bahia.lan
-
Krzysztof Kozlowski authored
Adjust indentation from spaces to tabs (+ optional two spaces) as per coding style, with a command like:
$ sed -e 's/^ /\t/' -i */Kconfig
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20191120134115.14918-1-krzk@kernel.org
-
Laurentiu Tudor authored
Michael Ellerman made a call for volunteers from NXP to maintain this driver and I offered myself. Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com> Acked-by: Timur Tabi <timur@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200114110012.17351-1-laurentiu.tudor@nxp.com
-
Oliver O'Halloran authored
This is only used in pci-ioda.c so move it there and rename it to match. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200110070207.439-6-oohall@gmail.com
-
Oliver O'Halloran authored
pnv_pci_dma_dev_setup() does nothing but call the phb->dma_dev_setup() callback, if one exists. That callback is only set for normal PCIe PHBs so we can remove the layer of indirection and use the ioda version in the pci_controller_ops. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200110070207.439-5-oohall@gmail.com
-
Oliver O'Halloran authored
An ioda_pe for each VF is allocated in pnv_pci_sriov_enable() before the pci_dev for the VF is created. We need to set the pe->pdev pointer at some point after the pci_dev is created. Currently we do that in:
pcibios_bus_add_device()
  pnv_pci_dma_dev_setup() (via phb->ops.dma_dev_setup)
    /* fixup is done here */
    pnv_pci_ioda_dma_dev_setup() (via pnv_phb->dma_dev_setup)
The fixup needs to be done before setting up DMA for the VF's PE, but there's no real reason to delay it until this point. Move the fixup into pnv_pci_ioda_fixup_iov() so the ordering is:
pcibios_add_device()
  pnv_pci_ioda_fixup_iov() (via ppc_md.pcibios_fixup_sriov)
pcibios_bus_add_device()
  ...
This isn't strictly required, but it's a slightly more logical place to do the fixup and it simplifies pnv_pci_dma_dev_setup(). Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200110070207.439-4-oohall@gmail.com
-
Oliver O'Halloran authored
pnv_pci_dma_dev_setup() only does something when: 1) the PHB contains VFs, or 2) the PHB defines a dma_dev_setup() callback in the pnv_phb structure. Neither is true for NPU PHBs, so there's no reason to set the callback. Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200110070207.439-3-oohall@gmail.com
-