- 20 Apr, 2023 5 commits
-
-
Pali Rohár authored
Move uli_init() function into existing driver fsl_uli1575.c file in order to share its code between more platforms and board files. Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230409000812.18904-5-pali@kernel.org
-
Pali Rohár authored
Function uli_exclude_device() is called only from the mpc86xx_exclude_device() and mpc85xx_exclude_device() functions. Both of those functions are the same, so merge their logic directly into uli_exclude_device(). Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230409000812.18904-4-pali@kernel.org
-
Pali Rohár authored
Function mpc85xx_exclude_device() is installed and used only when pci_with_uli is fsl_pci_primary. So replace the check for pci_with_uli with fsl_pci_primary in mpc85xx_exclude_device() and move the pci_with_uli variable declaration into mpc85xx_ds_uli_init(), where it is used. Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230409000812.18904-3-pali@kernel.org
-
Christophe Leroy authored
Use a single line for uli_exclude_device(). Add uli_exclude_device() prototype in ppc-pci.h and guard it. Remove that prototype from mpc85xx_ds.c and mpc86xx_hpcn.c files. Make uli_pirq_to_irq[] static as it is used only in that file. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230409000812.18904-2-pali@kernel.org
-
Nicholas Piggin authored
-mcpu=power10 will generate prefixed and pcrel code by default, which we do not support. The general kernel disables these with cflags, but those were missed for the boot wrapper. Fixes: 4b2a9315 ("powerpc/64s: POWER10 CPU Kconfig build option") Cc: stable@vger.kernel.org # v6.1+ Reported-by: Danny Tsen <dtsen@linux.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230407040909.230998-1-npiggin@gmail.com
-
- 12 Apr, 2023 1 commit
-
-
Nicholas Piggin authored
Use the preferred form of branch-and-link for finding the current address so objtool doesn't think it is an unannotated intra-function call. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230407040924.231023-1-npiggin@gmail.com
-
- 11 Apr, 2023 10 commits
-
-
Nathan Chancellor authored
When building with W=1 after commit 80b6093b ("kbuild: add -Wundef to KBUILD_CPPFLAGS for W=1 builds"), the following warning occurs:
	In file included from arch/powerpc/kvm/bookehv_interrupts.S:26:
	arch/powerpc/kvm/../kernel/head_booke.h:20:6: warning: "THREAD_SHIFT" is not defined, evaluates to 0 [-Wundef]
	   20 | #if (THREAD_SHIFT < 15)
	      |      ^~~~~~~~~~~~
THREAD_SHIFT is defined in thread_info.h but it is not directly included in head_booke.h, so it is possible for THREAD_SHIFT to be undefined. Add the include to ensure that THREAD_SHIFT is always defined. Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/202304050954.yskLdczH-lkp@intel.com/ Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230406-wundef-thread_shift_booke-v1-1-8deffa4d84f9@kernel.org
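As a side note, here is a tiny standalone reproduction of this class of warning (hypothetical macro and file names, not the kernel's code): with -Wundef, an #if on an identifier that is not defined is reported instead of being silently evaluated as 0.
	/* undef-warn.c: compile with  gcc -Wundef -c undef-warn.c  */
	/* #include "config.h" */	/* forgetting the header that defines CFG_SHIFT ... */
	#if (CFG_SHIFT < 15)		/* ... makes this line warn: "CFG_SHIFT" is not defined */
	int small_shift = 1;
	#else
	int small_shift = 0;
	#endif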
-
Nicholas Piggin authored
syscalls do not set the PPR field in their interrupt frame and return from syscall always sets the default PPR for userspace, so setting the value in the ret_from_fork frame is not necessary and mildly inconsistent. Remove it. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-9-npiggin@gmail.com
-
Nicholas Piggin authored
In the kernel user thread path, don't set _TIF_RESTOREALL because the thread is required to call kernel_execve() before it returns, which will set _TIF_RESTOREALL if necessary via start_thread(). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-8-npiggin@gmail.com
-
Nicholas Piggin authored
Kernel-created user threads start similarly to kernel threads in that they call a kernel function after first returning from _switch, so they share ret_from_kernel_thread for this. Kernel threads never return from that function though, whereas user threads often do (although some don't, e.g., IO threads). Split these startup functions in two, and catch kernel threads that improperly return from their function. This is intended to make the complicated code a little bit easier to understand. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-7-npiggin@gmail.com
-
Nicholas Piggin authored
When copy_thread is given a kernel function to run in arg->fn, this does not necessarily mean it is a kernel thread. User threads can be created this way (e.g., kernel_init; see also x86's copy_thread()). These threads run a kernel function which may call kernel_execve() and return, which returns like a userspace exec(2) syscall. Kernel threads are differentiated by PF_KTHREAD; they always have arg->fn set and should never return from that function, instead calling kthread_exit() to exit. Create separate paths for the kthread and user kernel thread creation logic. The kthread path will never exit and does not require a user interrupt frame, so it gets a minimal stack frame. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-6-npiggin@gmail.com
-
Nicholas Piggin authored
If the system call return path always restores NVGPRs then there is no need for ret_from_fork to do it. The HANDLER_RESTORE_NVGPRS does the right thing for this. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-5-npiggin@gmail.com
-
Nicholas Piggin authored
The kernel thread path in copy_thread creates a user interrupt frame on stack and stores the function and arg parameters there, and ret_from_kernel_thread loads them. This is a slightly confusing way to overload that frame. Non-volatile registers are loaded from the switch frame, so the parameters can be stored there. The user interrupt frame is now only used by user threads when they return to user. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-4-npiggin@gmail.com
-
Nicholas Piggin authored
The ret_from_fork code for 64e and 32-bit sets r3 for syscall_exit_prepare the same way that 64s does, so there should be no need to special-case them in copy_thread. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-3-npiggin@gmail.com
-
Nicholas Piggin authored
The pkey registers (AMR, IAMR) do not get loaded from the switch frame so it is pointless to save anything there. Remove the dead code. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230325122904.2375060-2-npiggin@gmail.com
-
Michael Ellerman authored
The amdgpu driver builds some of its code with hard-float enabled, whereas the rest of the kernel is built with soft-float. When building with 64-bit long double, if soft-float and hard-float objects are linked together, the build fails due to incompatible ABI tags.

In the past there have been build errors in the amdgpu driver caused by this; some of those were due to bad intermingling of soft- and hard-float code, but those issues have now all been fixed since commit 58ddbecb ("drm/amd/display: move remaining FPU code to dml folder"). However it's still possible for soft- and hard-float objects to end up linked together if the amdgpu driver is built-in to the kernel along with the test_emulate_step.c code, which uses soft-float. That happens in an allyesconfig build.

Currently those build errors are avoided because the amdgpu driver is gated on 128-bit long double being enabled. But that's not a detail the amdgpu driver should need to be aware of, and if another driver started using hard-float the same problem would occur.

All versions of the 64-bit ABI specify that long double is 128 bits. However some compilers, notably the kernel.org ones, are built to use 64-bit long double by default. Apart from this issue of soft vs hard-float, the kernel doesn't care what size long double is. In particular, the kernel using 128-bit long double doesn't impact userspace's ability to use 64-bit long double, as musl does.

So always build the 64-bit kernel with 128-bit long double. That should avoid any build errors due to the incompatible ABI tags. Excluding the code that uses soft/hard-float, the vmlinux is identical with/without the flag. It does mean any code which incorrectly intermingles soft- and hard-float code will build without error, so those bugs will need to be caught by testing rather than at build time.

For more background see:
- commit d11219ad ("amdgpu: disable powerpc support for the newer display engine")
- commit c653c591 ("drm/amdgpu: Re-enable DCN for 64-bit powerpc")
- https://lore.kernel.org/r/dab9cbd8-2626-4b99-8098-31fe76397d2d@app.fastmail.com

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org> Link: https://msgid.link/20230404102847.3303623-1-mpe@ellerman.id.au
-
- 04 Apr, 2023 8 commits
-
-
Nysal Jan K.A authored
Remove the arch_atomic_try_cmpxchg_lock() function, as it is no longer used since commit 9f61521c ("powerpc/qspinlock: powerpc qspinlock implementation"). Signed-off-by: Nysal Jan K.A <nysal@linux.ibm.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230224103940.1328725-1-nysal@linux.ibm.com
-
Nicholas Miehlbradt authored
Walk the stack when the copy_{to,from}_user address is on the stack, to ensure that the object being copied lies entirely within a single stack frame and does not contain stack metadata. This is substantially similar to the x86 implementation. The back chain is used to traverse the stack and identify stack frame boundaries. Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230228054355.300628-1-nicholas@linux.ibm.com
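For illustration only, here is a rough sketch of the back-chain walk the text describes; the function name, parameters and frame handling are assumptions, not the kernel's actual arch_within_stack_frames() code. The idea is that each powerpc stack frame begins with a pointer to the caller's frame, so an object is acceptable only if it fits between two consecutive back-chain links.
	#include <stdbool.h>
	#include <stddef.h>

	/* 'frame' points at the current frame; its first word is the back chain
	 * (the address of the caller's frame). 'stack_top' is the end of the stack. */
	static bool obj_within_one_frame(const char *frame, const char *stack_top,
					 const char *obj, size_t len)
	{
		while (frame && frame < stack_top) {
			const char *next = *(const char * const *)frame; /* back chain */

			/* the object must lie entirely inside this frame (a real check
			 * would also exclude the frame's saved metadata) */
			if (obj >= frame && obj + len <= next)
				return true;
			if (next <= frame)	/* corrupt chain or end of stack */
				break;
			frame = next;
		}
		return false;
	}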
-
Rob Herring authored
Replace open coded reading of "reg" or of_get_address()/ of_translate_address() calls with a single call to of_address_to_resource(). Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230329220337.141295-1-robh@kernel.org
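To make the pattern concrete, here is a hedged before/after sketch of the kind of conversion described; np, the index, and the error handling are illustrative assumptions, not the patch itself.
	/* before: read and translate "reg" by hand */
	const __be32 *reg = of_get_address(np, 0, NULL, NULL);
	u64 addr = of_translate_address(np, reg);

	/* after: one call fills a struct resource directly */
	struct resource res;
	if (of_address_to_resource(np, 0, &res))
		return -ENODEV;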
-
Rob Herring authored
Replace of_get_property()+of_translate_address()+ioremap() with a call to of_iomap() which does all those steps. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230327223109.820381-1-robh@kernel.org
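A sketch of the pattern being replaced, with hypothetical names and an assumed mapping size; the real call sites differ.
	void __iomem *base;

	/* before: three separate steps */
	const __be32 *prop = of_get_property(np, "reg", NULL);
	base = ioremap(of_translate_address(np, prop), 0x1000);

	/* after: of_iomap() translates and maps "reg" entry 0 in one call */
	base = of_iomap(np, 0);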
-
Rob Herring authored
Replace of_address_to_resource()+ioremap() with a call to of_iomap() which does both of those steps. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230327223103.820229-1-robh@kernel.org
-
Rob Herring authored
icp_native_init_one_node() only needs the number of entries in "reg". Replace the open coded "reg" parsing with of_address_count() to get the number of "reg" entries. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230327223056.820086-1-robh@kernel.org
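A minimal sketch of the new call, assuming a device_node pointer np and an illustrative variable name:
	/* count the number of "reg" entries instead of parsing them by hand */
	int num_regs = of_address_count(np);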
-
Rob Herring authored
"ranges" is a standard property with common parsing functions. Users shouldn't be implementing their own parsing of it. Reimplement the ISA brige "ranges" parsing using the common ranges iterator functions. The common routines are flexible enough to work on PCI and non-PCI to ISA bridges, so refactor pci_process_ISA_OF_ranges() and isa_bridge_init_non_pci() into a single implementation. Signed-off-by: Rob Herring <robh@kernel.org> [mpe: Unsplit some strings and use pr_xxx()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230327223045.819852-1-robh@kernel.org
-
Michael Ellerman authored
Merge our KVM topic branch to bring some KVM commits into next for wider testing.
-
- 03 Apr, 2023 6 commits
-
-
Paul Mackerras authored
Now that we can read prefixed instructions from a HV KVM guest and emulate prefixed load/store instructions to emulated MMIO locations, we can add HFSCR_PREFIXED into the set of bits that are set in the HFSCR for a HV KVM guest on POWER10, allowing the guest to use prefixed instructions. PR KVM has not yet been extended to handle prefixed instructions in all situations where we might need to emulate them, so prevent the guest from enabling prefixed instructions in the FSCR for now. Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Tested-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Tested-by: Sachin Sant <sachinp@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/ZAgs25dCmLrVkBdU@cleo
-
Paul Mackerras authored
In order to handle emulation of prefixed instructions in the guest, this first makes vcpu->arch.last_inst an unsigned long, i.e. 64 bits on 64-bit platforms. For prefixed instructions, the upper 32 bits are used for the prefix and the lower 32 bits for the suffix, and both halves are byte-swapped if the guest endianness differs from the host.

Next, vcpu->arch.emul_inst is now 64 bits wide, to match the HEIR register on POWER10. Like HEIR, for a prefixed instruction it is defined to have the prefix in the top 32 bits and the suffix in the bottom 32 bits, with both halves in the correct byte order.

kvmppc_get_last_inst is extended on 64-bit machines to put the prefix and suffix in the right places in the ppc_inst_t being returned. kvmppc_load_last_inst now returns the instruction in an unsigned long in the same format as vcpu->arch.last_inst. It decides whether to fetch a suffix based on the SRR1_PREFIXED bit in the MSR image stored in the vcpu struct, which generally comes from SRR1 or HSRR1 on an interrupt. This bit is defined in Power ISA v3.1B to be set if the interrupt occurred due to a prefixed instruction and cleared otherwise for all interrupts except the instruction storage interrupt, which does not come to the hypervisor. It is set to zero for asynchronous interrupts such as external interrupts. In previous ISA versions it was always set to 0 for all interrupts except the instruction storage interrupt.

The code in book3s_hv_rmhandlers.S that loads the faulting instruction on an HDSI is only used on POWER8 and therefore never needs to load a suffix.

[npiggin@gmail.com - check that the is-prefixed bit in SRR1 matches the type of instruction that was fetched.] Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Tested-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/ZAgsq9h1CCzouQuV@cleo
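Purely as an illustration of the layout described above (not the actual KVM code), packing a prefix and suffix into one 64-bit value looks like this; the helper names are hypothetical.
	#include <linux/types.h>

	/* prefix in the top 32 bits, suffix in the bottom 32 bits */
	static inline u64 pack_prefixed_inst(u32 prefix, u32 suffix)
	{
		return ((u64)prefix << 32) | suffix;
	}

	static inline u32 inst_prefix(u64 packed) { return packed >> 32; }
	static inline u32 inst_suffix(u64 packed) { return packed & 0xffffffffu; }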
-
Paul Mackerras authored
This changes kvmppc_get_last_inst() so that the instruction it fetches is returned in a ppc_inst_t variable rather than a u32. This will allow us to return a 64-bit prefixed instruction on those 64-bit machines that implement Power ISA v3.1 or later, such as POWER10. On 32-bit platforms, ppc_inst_t is 32 bits wide, and is turned back into a u32 by ppc_inst_val, which is an identity operation on those platforms. Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Tested-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/ZAgsiPlL9O7KnlZZ@cleo
-
Nicholas Piggin authored
Pass the hypervisor (H)SRR1[PREFIX] indication through to synchronous interrupts injected into the guest. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230330103224.3589928-3-npiggin@gmail.com
-
Nicholas Piggin authored
The prefix architecture in ISA v3.1 introduces a prefixed bit in SRR1 for many types of synchronous interrupts which is set when the interrupt is caused by a prefixed instruction. This requires KVM to be able to set this bit when injecting interrupts into a guest. Plumb through the SRR1 "flags" argument to the core_queue APIs where it's missing for this. For now they are set to 0, which is no change. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Fixup kvmppc_core_queue_alignment() in booke.c] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230330103224.3589928-2-npiggin@gmail.com
-
Michael Ellerman authored
Fix various W=1 warnings in booke.c:
	arch/powerpc/kvm/booke.c:1008:5: error: no previous prototype for ‘kvmppc_handle_exit’ [-Werror=missing-prototypes]
	 1008 | int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigned int exit_nr)
	      |     ^~~~~~~~~~~~~~~~~~
	arch/powerpc/kvm/booke.c:1009: warning: Function parameter or member 'vcpu' not described in 'kvmppc_handle_exit'
	arch/powerpc/kvm/booke.c:1009: warning: Function parameter or member 'exit_nr' not described in 'kvmppc_handle_exit'
Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/r/202304020827.3LEZ86WB-lkp@intel.com/ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230403045314.3095410-1-mpe@ellerman.id.au
-
- 30 Mar, 2023 10 commits
-
-
Rob Herring authored
It is preferred to use typed property access functions (i.e. of_property_read_<type> functions) rather than low-level of_get_property/of_find_property functions for reading properties. As part of this, convert of_get_property/of_find_property calls to the recently added of_property_present() helper when we just want to test for presence of a property and nothing more. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230310144735.1546817-1-robh@kernel.org
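For illustration, the kind of presence-test conversion described, with a hypothetical property and variable name:
	bool has_ic;

	/* before: of_get_property() used only to test for presence */
	has_ic = of_get_property(np, "interrupt-controller", NULL) != NULL;

	/* after: the helper makes the intent explicit */
	has_ic = of_property_present(np, "interrupt-controller");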
-
Rob Herring authored
It is preferred to use typed property access functions (i.e. of_property_read_<type> functions) rather than low-level of_get_property/of_find_property functions for reading properties. Convert reading boolean properties to of_property_read_bool(). Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230310144659.1541127-1-robh@kernel.org
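A similar sketch for the boolean conversion, again with a hypothetical property name:
	bool wakeup;

	/* before */
	wakeup = of_get_property(np, "wakeup-source", NULL) != NULL;

	/* after */
	wakeup = of_property_read_bool(np, "wakeup-source");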
-
Rob Herring authored
It is preferred to use typed property access functions (i.e. of_property_read_<type> functions) rather than low-level of_get_property/of_find_property functions for reading properties. As part of this, convert of_get_property/of_find_property calls to the recently added of_property_present() helper when we just want to test for presence of a property and nothing more. Signed-off-by: Rob Herring <robh@kernel.org> [mpe: Drop change in ppc4xx_probe_pci_bridge(), formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230310144657.1541039-1-robh@kernel.org
-
Nathan Lynch authored
Add lockdep annotations for the following properties that must hold: * Any error log retrieval must be atomically coupled with the prior RTAS call, without a window for another RTAS call to occur before the error log can be retrieved. * All users of the core rtas_args parameter block must hold rtas_lock. Move the definitions of rtas_lock and rtas_args up in the file so that __do_enter_rtas_trace() can refer to them. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230220-rtas-queue-for-6-4-v1-6-010e4416f13f@linux.ibm.com
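A hedged sketch of what such an annotation looks like in general; the function is hypothetical, while rtas_lock and the rtas_args parameter block are the names given in the text.
	static void setup_rtas_args(int token, int nargs, int nret)
	{
		/* any writer of the shared parameter block must hold rtas_lock */
		lockdep_assert_held(&rtas_lock);

		rtas_args.token = cpu_to_be32(token);
		rtas_args.nargs = cpu_to_be32(nargs);
		rtas_args.nret  = cpu_to_be32(nret);
	}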
-
Nathan Lynch authored
The 'filter' member is a pointer, not a bool; fix the wording accordingly. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230220-rtas-queue-for-6-4-v1-4-010e4416f13f@linux.ibm.com
-
Nathan Lynch authored
Add documentation for rtas_call_unlocked(), including details on how it differs from rtas_call(). Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230220-rtas-queue-for-6-4-v1-3-010e4416f13f@linux.ibm.com
-
Nathan Lynch authored
Using memcpy() isn't safe when buf is identical to rtas_err_buf, which can happen during boot before slab is up. Full context which may not be obvious from the diff:
	if (altbuf) {
		buf = altbuf;
	} else {
		buf = rtas_err_buf;
		if (slab_is_available())
			buf = kmalloc(RTAS_ERROR_LOG_MAX, GFP_ATOMIC);
	}
	if (buf)
		memcpy(buf, rtas_err_buf, RTAS_ERROR_LOG_MAX);
This was found by inspection and I'm not aware of it causing problems in practice. It appears to have been introduced by commit 033ef338 ("powerpc: Merge rtas.c into arch/powerpc/kernel"); the old ppc64 version of this code did not have this problem. Use memmove() instead. Fixes: 033ef338 ("powerpc: Merge rtas.c into arch/powerpc/kernel") Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230220-rtas-queue-for-6-4-v1-2-010e4416f13f@linux.ibm.com
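A sketch of the resulting call, following the fix the text describes; only the changed line is shown.
	/* memmove() is safe even when buf and rtas_err_buf alias or overlap */
	if (buf)
		memmove(buf, rtas_err_buf, RTAS_ERROR_LOG_MAX);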
-
Nathan Lynch authored
CHRP and PAPR agree: "In order to make an RTAS call, the operating system must construct an argument call buffer aligned on an eight byte boundary in physically contiguous real memory [...]." (7.2.7 Calling Mechanism and Conventions). struct rtas_args is the type used for this argument call buffer. The unarchitected 'rets' member happens to produce 8-byte alignment for the struct on 64-bit targets in practice. But without an alignment directive the structure will have only 4-byte alignment on 32-bit targets:
	$ nm b/{before,after}/chrp32/vmlinux | grep rtas_args
	c096881c b rtas_args
	c0968820 b rtas_args
Add an alignment directive to the struct rtas_args declaration so all instances have the alignment required by the specs. rtas-types.h no longer refers to any spinlock types, so drop the spinlock_types.h inclusion while we're here. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230220-rtas-queue-for-6-4-v1-1-010e4416f13f@linux.ibm.com
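An illustrative shape of such a fix, assuming the kernel's __aligned() attribute helper; the member list is elided and this is a sketch, not the patch itself.
	struct rtas_args {
		__be32 token;
		/* ... remaining members elided ... */
	} __aligned(8);		/* guarantee 8-byte alignment on 32-bit builds too */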
-
Randy Dunlap authored
LEDS_TRIGGER_DISK depends on ATA, so selecting LEDS_TRIGGER_DISK when ATA is not set/enabled causes a Kconfig warning:
	WARNING: unmet direct dependencies detected for LEDS_TRIGGER_DISK
	  Depends on [n]: NEW_LEDS [=y] && LEDS_TRIGGERS [=y] && ATA [=n]
	  Selected by [y]:
	  - ADB_PMU_LED_DISK [=y] && MACINTOSH_DRIVERS [=y] && ADB_PMU_LED [=y] && LEDS_CLASS [=y]
Fix this by making ADB_PMU_LED_DISK depend on ATA. Seen on both PPC32 and PPC64. Fixes: 0e865a80 ("macintosh: Remove dependency on IDE_GD_ATA if ADB_PMU_LED_DISK is selected") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230223014241.20878-1-rdunlap@infradead.org
-
Randy Dunlap authored
Use "%pa" format specifier for resource_size_t to avoid a compiler printk format warning. arch/powerpc/sysdev/tsi108_pci.c: In function 'tsi108_setup_pci': include/linux/kern_levels.h:5:25: error: format '%x' expects argument of type 'unsigned int', but argument 2 has type 'resource_size_t' Fixes: c4342ff9 ("[POWERPC] Update mpc7448hpc2 board irq support using device tree") Fixes: 2b9d7467 ("[POWERPC] Add tsi108 pci and platform device data register function") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> [mpe: Use pr_info() and unsplit string] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230223070116.660-5-rdunlap@infradead.org
-