Commit eff0cb3d authored by Linus Torvalds

Merge tag 'pci-v5.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Consolidate duplicated 'next function' scanning and extend to allow
     'isolated functions' on s390, similar to existing hypervisors
     (Niklas Schnelle)

  Resource management:

   - Implement pci_iobar_pfn() for sparc, which allows us to remove the
     sparc-specific pci_mmap_page_range() and pci_mmap_resource_range().

     This removes the ability to map the entire PCI I/O space using
     /proc/bus/pci, but we believe that's already been broken since
     v2.6.28 (Arnd Bergmann)

   - Move common PCI definitions to asm-generic/pci.h and rework others
     to be more specific and more encapsulated in arches that need
     them (Stafford Horne)

  Power management:

   - Convert drivers to new *_PM_OPS macros to avoid need for '#ifdef
     CONFIG_PM_SLEEP' or '__maybe_unused' (Bjorn Helgaas)

  Virtualization:

   - Add ACS quirk for Broadcom BCM5750x multifunction NICs that isolate
     the functions but don't advertise an ACS capability (Pavan Chebbi)

  Error handling:

   - Clear PCI Status register during enumeration in case firmware left
     errors logged (Kai-Heng Feng)

   - When we have native control of AER, enable error reporting for all
     devices that support AER. Previously only a few drivers enabled
     this (Stefan Roese)

   - Keep AER error reporting enabled for switches. Previously we
     enabled this during enumeration but immediately disabled it (Stefan
     Roese)

   - Iterate over error counters instead of error strings to avoid
     printing junk in AER sysfs counters (Mohamed Khalfella)

  ASPM:

   - Remove pcie_aspm_pm_state_change() so ASPM config changes, e.g.,
     via sysfs, are not lost across power state changes (Kai-Heng Feng)

  Endpoint framework:

   - Don't stop an EPC when unbinding an EPF from it (Shunsuke Mie)

  Endpoint embedded DMA controller driver:

   - Simplify and clean up support for the DesignWare embedded DMA
     (eDMA) controller (Frank Li, Serge Semin)

  Broadcom STB PCIe controller driver:

   - Avoid config space accesses when link is down because we can't
     recover from the CPU aborts these cause (Jim Quinlan)

   - Look for power regulators described under Root Ports in DT and
     enable them before scanning the secondary bus (Jim Quinlan)

   - Disable/enable regulators in suspend/resume (Jim Quinlan)

  Freescale i.MX6 PCIe controller driver:

   - Simplify and clean up clock and PHY management (Richard Zhu)

   - Disable/enable regulators in suspend/resume (Richard Zhu)

   - Set PCIE_DBI_RO_WR_EN before writing DBI registers (Richard Zhu)

   - Allow speeds faster than Gen2 (Richard Zhu)

   - Make link being down a non-fatal error so controller probe doesn't
     fail if there are no Endpoints connected (Richard Zhu)

  Loongson PCIe controller driver:

   - Add ACPI and MCFG support for Loongson LS7A (Huacai Chen)

   - Avoid config reads to non-existent LS2K/LS7A devices because a
     hardware defect causes machine hangs (Huacai Chen)

   - Work around LS7A integrated devices that report incorrect Interrupt
     Pin values (Jianmin Lv)

  Marvell Aardvark PCIe controller driver:

   - Add support for AER and Slot capability on emulated bridge (Pali
     Rohár)

  MediaTek PCIe controller driver:

   - Add Airoha EN7523 to DT binding (John Crispin)

   - Allow building of driver for ARCH_AIROHA (Felix Fietkau)

  MediaTek PCIe Gen3 controller driver:

   - Print decoded LTSSM state when the link doesn't come up (Jianjun
     Wang)

  NVIDIA Tegra194 PCIe controller driver:

   - Convert DT binding to json-schema (Vidya Sagar)

   - Add DT bindings and driver support for Tegra234 Root Port and
     Endpoint mode (Vidya Sagar)

   - Fix some Root Port interrupt handling issues (Vidya Sagar)

   - Set default Max Payload Size to 256 bytes (Vidya Sagar)

   - Fix Data Link Feature capability programming (Vidya Sagar)

   - Extend Endpoint mode support to devices beyond Controller-5 (Vidya
     Sagar)

  Qualcomm PCIe controller driver:

   - Rework clock, reset, PHY power-on ordering to avoid hangs and
     improve consistency (Robert Marko, Christian Marangi)

   - Move pipe_clk handling to PHY drivers (Dmitry Baryshkov)

   - Add IPQ60xx support (Selvam Sathappan Periakaruppan)

   - Allow ASPM L1 and substates for 2.7.0 (Krishna chaitanya chundru)

   - Add support for more than 32 MSI interrupts (Dmitry Baryshkov)

  Renesas R-Car PCIe controller driver:

   - Convert DT binding to json-schema (Herve Codina)

   - Add Renesas RZ/N1D (R9A06G032) to rcar-gen2 DT binding and driver
     (Herve Codina)

  Samsung Exynos PCIe controller driver:

   - Fix phy-exynos-pcie driver so it follows the 'phy_init() before
     phy_power_on()' PHY programming model (Marek Szyprowski)

  Synopsys DesignWare PCIe controller driver:

   - Simplify and clean up the DWC core extensively (Serge Semin)

   - Fix an issue with programming the ATU for regions that cross a 4GB
     boundary (Serge Semin)

   - Enable the CDM check if 'snps,enable-cdm-check' exists; previously
     we skipped it if 'num-lanes' was absent (Serge Semin)

   - Allocate a 32-bit DMA-able page to be MSI target instead of using a
     driver data structure that may not be addressable with 32-bit
     address (Will McVicker)

   - Add DWC core support for more than 32 MSI interrupts (Dmitry
     Baryshkov)

  Xilinx Versal CPM PCIe controller driver:

   - Add DT binding and driver support for Versal CPM5 Gen5 Root Port
     (Bharat Kumar Gogada)"

* tag 'pci-v5.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (150 commits)
  PCI: imx6: Support more than Gen2 speed link mode
  PCI: imx6: Set PCIE_DBI_RO_WR_EN before writing DBI registers
  PCI: imx6: Reformat suspend callback to keep symmetric with resume
  PCI: imx6: Move the imx6_pcie_ltssm_disable() earlier
  PCI: imx6: Disable clocks in reverse order of enable
  PCI: imx6: Do not hide PHY driver callbacks and refine the error handling
  PCI: imx6: Reduce resume time by only starting link if it was up before suspend
  PCI: imx6: Mark the link down as non-fatal error
  PCI: imx6: Move regulator enable out of imx6_pcie_deassert_core_reset()
  PCI: imx6: Turn off regulator when system is in suspend mode
  PCI: imx6: Call host init function directly in resume
  PCI: imx6: Disable i.MX6QDL clock when disabling ref clocks
  PCI: imx6: Propagate .host_init() errors to caller
  PCI: imx6: Collect clock enables in imx6_pcie_clk_enable()
  PCI: imx6: Factor out ref clock disable to match enable
  PCI: imx6: Move imx6_pcie_clk_disable() earlier
  PCI: imx6: Move imx6_pcie_enable_ref_clk() earlier
  PCI: imx6: Move PHY management functions together
  PCI: imx6: Move imx6_pcie_grp_offset(), imx6_pcie_configure_type() earlier
  PCI: imx6: Convert to NOIRQ_SYSTEM_SLEEP_PM_OPS()
  ...
parents 31be1d0f c4f36c3a
......@@ -125,14 +125,14 @@ Following piece of code illustrates the usage of the SR-IOV API.
...
}
static int dev_suspend(struct pci_dev *dev, pm_message_t state)
static int dev_suspend(struct device *dev)
{
...
return 0;
}
static int dev_resume(struct pci_dev *dev)
static int dev_resume(struct device *dev)
{
...
......@@ -165,8 +165,7 @@ Following piece of code illustrates the usage of the SR-IOV API.
.id_table = dev_id_table,
.probe = dev_probe,
.remove = dev_remove,
.suspend = dev_suspend,
.resume = dev_resume,
.driver.pm = &dev_pm_ops,
.shutdown = dev_shutdown,
.sriov_configure = dev_sriov_configure,
};
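
The documentation hunk above switches the example driver from the legacy .suspend/.resume hooks to a dev_pm_ops table referenced via .driver.pm. A minimal sketch (not part of this diff) of how such a table is typically defined with the new macros; dev_suspend, dev_resume and dev_pm_ops follow the hypothetical names used in the documentation example above:

    /* System sleep callbacks now take a struct device, not a pci_dev. */
    static int dev_suspend(struct device *dev)
    {
            ...
            return 0;
    }

    static int dev_resume(struct device *dev)
    {
            ...
            return 0;
    }

    /*
     * DEFINE_SIMPLE_DEV_PM_OPS() builds the dev_pm_ops table and keeps the
     * callbacks referenced even when CONFIG_PM_SLEEP is disabled, so no
     * '#ifdef CONFIG_PM_SLEEP' or '__maybe_unused' annotations are needed.
     */
    static DEFINE_SIMPLE_DEV_PM_OPS(dev_pm_ops, dev_suspend, dev_resume);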
......@@ -125,7 +125,7 @@ implementation of that functionality. To support the historical interface of
mmap() through files in /proc/bus/pci, platforms may also set HAVE_PCI_MMAP.
Alternatively, platforms which set HAVE_PCI_MMAP may provide their own
implementation of pci_mmap_page_range() instead of defining
implementation of pci_mmap_resource_range() instead of defining
ARCH_GENERIC_PCI_MMAP_RESOURCE.
Platforms which support write-combining maps of PCI resources must define
......
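
As an aside (a sketch, not part of this diff): for an architecture that opts into the generic implementation, this usually amounts to two defines in its asm/pci.h, as the arm header further down in this series shows:

    #define HAVE_PCI_MMAP                   /* allow mmap() through files in /proc/bus/pci */
    #define ARCH_GENERIC_PCI_MMAP_RESOURCE  /* use the generic pci_mmap_resource_range() */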
......@@ -7,6 +7,7 @@ Required properties:
"mediatek,mt7622-pcie"
"mediatek,mt7623-pcie"
"mediatek,mt7629-pcie"
"airoha,en7523-pcie"
- device_type: Must be "pci"
- reg: Base addresses and lengths of the root ports.
- reg-names: Names of the above areas to use during resource lookup.
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/nvidia,tegra194-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NVIDIA Tegra194 (and later) PCIe Endpoint controller (Synopsys DesignWare Core based)
maintainers:
- Thierry Reding <thierry.reding@gmail.com>
- Jon Hunter <jonathanh@nvidia.com>
- Vidya Sagar <vidyas@nvidia.com>
description: |
This PCIe controller is based on the Synopsys DesignWare PCIe IP and thus
inherits all the common properties defined in snps,dw-pcie-ep.yaml. Some
of the controller instances are dual mode; they can work either in Root
Port mode or Endpoint mode but one at a time.
On Tegra194, controllers C0, C4 and C5 support Endpoint mode.
On Tegra234, controllers C5, C6, C7 and C10 support Endpoint mode.
Note: On Tegra194's P2972-0000 platform, only C5 controller can be enabled to
operate in the Endpoint mode because of the way the platform is designed.
properties:
compatible:
enum:
- nvidia,tegra194-pcie-ep
- nvidia,tegra234-pcie-ep
reg:
items:
- description: controller's application logic registers
- description: iATU and DMA registers. This is where the iATU (internal
Address Translation Unit) registers of the PCIe core are made
available for software access.
- description: aperture where the Root Port's own configuration
registers are available.
- description: aperture used to map the remote Root Complex address space
reg-names:
items:
- const: appl
- const: atu_dma
- const: dbi
- const: addr_space
interrupts:
items:
- description: controller interrupt
interrupt-names:
items:
- const: intr
clocks:
items:
- description: module clock
clock-names:
items:
- const: core
resets:
items:
- description: APB bus interface reset
- description: module reset
reset-names:
items:
- const: apb
- const: core
reset-gpios:
description: Must contain a phandle to a GPIO controller followed by GPIO
that is being used as PERST input signal. Please refer to pci.txt.
phys:
minItems: 1
maxItems: 8
phy-names:
minItems: 1
items:
- const: p2u-0
- const: p2u-1
- const: p2u-2
- const: p2u-3
- const: p2u-4
- const: p2u-5
- const: p2u-6
- const: p2u-7
power-domains:
maxItems: 1
description: |
A phandle to the node that controls power to the respective PCIe
controller and a specifier name for the PCIe controller.
Tegra194 specifiers are defined in "include/dt-bindings/power/tegra194-powergate.h"
Tegra234 specifiers are defined in "include/dt-bindings/power/tegra234-powergate.h"
interconnects:
items:
- description: memory read client
- description: memory write client
interconnect-names:
items:
- const: dma-mem # read
- const: write
dma-coherent: true
nvidia,bpmp:
$ref: /schemas/types.yaml#/definitions/phandle-array
description: |
Must contain a pair of phandles to BPMP controller node followed by
controller ID. Following are the controller IDs for each controller:
Tegra194
0: C0
1: C1
2: C2
3: C3
4: C4
5: C5
Tegra234
0 : C0
1 : C1
2 : C2
3 : C3
4 : C4
5 : C5
6 : C6
7 : C7
8 : C8
9 : C9
10: C10
items:
- items:
- description: phandle to BPMP controller node
- description: PCIe controller ID
maximum: 10
nvidia,aspm-cmrt-us:
description: Common Mode Restore Time for proper operation of ASPM to be
specified in microseconds
nvidia,aspm-pwr-on-t-us:
description: Power On time for proper operation of ASPM to be specified in
microseconds
nvidia,aspm-l0s-entrance-latency-us:
description: ASPM L0s entrance latency to be specified in microseconds
vddio-pex-ctl-supply:
description: A phandle to the regulator supply for PCIe side band signals
nvidia,refclk-select-gpios:
maxItems: 1
description: GPIO used to enable REFCLK to controller from the host
nvidia,enable-ext-refclk:
description: |
This boolean property needs to be present if the controller is configured
to receive Reference Clock from the host.
NOTE: This is applicable only for Tegra234.
$ref: /schemas/types.yaml#/definitions/flag
nvidia,enable-srns:
description: |
This boolean property needs to be present if the controller is
configured to operate in SRNS (Separate Reference Clocks with No
Spread-Spectrum Clocking). NOTE: This is applicable only for
Tegra234.
$ref: /schemas/types.yaml#/definitions/flag
allOf:
- $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
unevaluatedProperties: false
required:
- interrupts
- interrupt-names
- clocks
- clock-names
- resets
- reset-names
- power-domains
- reset-gpios
- vddio-pex-ctl-supply
- num-lanes
- phys
- phy-names
- nvidia,bpmp
examples:
- |
#include <dt-bindings/clock/tegra194-clock.h>
#include <dt-bindings/gpio/tegra194-gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/power/tegra194-powergate.h>
#include <dt-bindings/reset/tegra194-reset.h>
bus@0 {
#address-cells = <2>;
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0x8 0x0>;
pcie-ep@141a0000 {
compatible = "nvidia,tegra194-pcie-ep";
reg = <0x00 0x141a0000 0x0 0x00020000>, /* appl registers (128K) */
<0x00 0x3a040000 0x0 0x00040000>, /* iATU_DMA reg space (256K) */
<0x00 0x3a080000 0x0 0x00040000>, /* DBI reg space (256K) */
<0x1c 0x00000000 0x4 0x00000000>; /* Address Space (16G) */
reg-names = "appl", "atu_dma", "dbi", "addr_space";
interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
interrupt-names = "intr";
clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>;
clock-names = "core";
resets = <&bpmp TEGRA194_RESET_PEX1_CORE_5_APB>,
<&bpmp TEGRA194_RESET_PEX1_CORE_5>;
reset-names = "apb", "core";
power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
pinctrl-names = "default";
pinctrl-0 = <&clkreq_c5_bi_dir_state>;
nvidia,bpmp = <&bpmp 5>;
nvidia,aspm-cmrt-us = <60>;
nvidia,aspm-pwr-on-t-us = <20>;
nvidia,aspm-l0s-entrance-latency-us = <3>;
vddio-pex-ctl-supply = <&vdd_1v8ao>;
reset-gpios = <&gpio TEGRA194_MAIN_GPIO(GG, 1) GPIO_ACTIVE_LOW>;
nvidia,refclk-select-gpios = <&gpio_aon TEGRA194_AON_GPIO(AA, 5)
GPIO_ACTIVE_HIGH>;
num-lanes = <8>;
phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
<&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
<&p2u_nvhs_6>, <&p2u_nvhs_7>;
phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
"p2u-5", "p2u-6", "p2u-7";
};
};
- |
#include <dt-bindings/clock/tegra234-clock.h>
#include <dt-bindings/gpio/tegra234-gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/power/tegra234-powergate.h>
#include <dt-bindings/reset/tegra234-reset.h>
bus@0 {
#address-cells = <2>;
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0x8 0x0>;
pcie-ep@141a0000 {
compatible = "nvidia,tegra234-pcie-ep";
power-domains = <&bpmp TEGRA234_POWER_DOMAIN_PCIEX8A>;
reg = <0x00 0x141a0000 0x0 0x00020000>, /* appl registers (128K) */
<0x00 0x3a040000 0x0 0x00040000>, /* iATU_DMA reg space (256K) */
<0x00 0x3a080000 0x0 0x00040000>, /* DBI reg space (256K) */
<0x27 0x40000000 0x4 0x00000000>; /* Address Space (16G) */
reg-names = "appl", "atu_dma", "dbi", "addr_space";
interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
interrupt-names = "intr";
clocks = <&bpmp TEGRA234_CLK_PEX1_C5_CORE>;
clock-names = "core";
resets = <&bpmp TEGRA234_RESET_PEX1_CORE_5_APB>,
<&bpmp TEGRA234_RESET_PEX1_CORE_5>;
reset-names = "apb", "core";
nvidia,bpmp = <&bpmp 5>;
nvidia,enable-ext-refclk;
nvidia,aspm-cmrt-us = <60>;
nvidia,aspm-pwr-on-t-us = <20>;
nvidia,aspm-l0s-entrance-latency-us = <3>;
vddio-pex-ctl-supply = <&p3701_vdd_1v8_ls>;
reset-gpios = <&gpio TEGRA234_MAIN_GPIO(AF, 1) GPIO_ACTIVE_LOW>;
nvidia,refclk-select-gpios = <&gpio_aon
TEGRA234_AON_GPIO(AA, 4)
GPIO_ACTIVE_HIGH>;
num-lanes = <8>;
phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
<&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
<&p2u_nvhs_6>, <&p2u_nvhs_7>;
phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
"p2u-5", "p2u-6", "p2u-7";
};
};
Renesas AHB to PCI bridge
-------------------------
This is the bridge used internally to connect the USB controllers to the
AHB. There is one bridge instance per USB port connected to the internal
OHCI and EHCI controllers.
Required properties:
- compatible: "renesas,pci-r8a7742" for the R8A7742 SoC;
"renesas,pci-r8a7743" for the R8A7743 SoC;
"renesas,pci-r8a7744" for the R8A7744 SoC;
"renesas,pci-r8a7745" for the R8A7745 SoC;
"renesas,pci-r8a7790" for the R8A7790 SoC;
"renesas,pci-r8a7791" for the R8A7791 SoC;
"renesas,pci-r8a7793" for the R8A7793 SoC;
"renesas,pci-r8a7794" for the R8A7794 SoC;
"renesas,pci-rcar-gen2" for a generic R-Car Gen2 or
RZ/G1 compatible device.
When compatible with the generic version, nodes must list the
SoC-specific version corresponding to the platform first
followed by the generic version.
- reg: A list of physical regions to access the device: the first is
the operational registers for the OHCI/EHCI controllers and the
second is for the bridge configuration and control registers.
- interrupts: interrupt for the device.
- clocks: The reference to the device clock.
- bus-range: The PCI bus number range; as this is a single bus, the range
should be specified as the same value twice.
- #address-cells: must be 3.
- #size-cells: must be 2.
- #interrupt-cells: must be 1.
- interrupt-map: standard property used to define the mapping of the PCI
interrupts to the GIC interrupts.
- interrupt-map-mask: standard property that helps to define the interrupt
mapping.
Optional properties:
- dma-ranges: a single range for the inbound memory region. If not supplied,
defaults to 1GiB at 0x40000000. Note there are hardware restrictions on the
allowed combinations of address and size.
Example SoC configuration:
pci0: pci@ee090000 {
compatible = "renesas,pci-r8a7790", "renesas,pci-rcar-gen2";
clocks = <&mstp7_clks R8A7790_CLK_EHCI>;
reg = <0x0 0xee090000 0x0 0xc00>,
<0x0 0xee080000 0x0 0x1100>;
interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
bus-range = <0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>;
interrupt-map-mask = <0xff00 0 0 0x7>;
interrupt-map = <0x0000 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
0x0800 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
0x1000 0 0 2 &gic 0 108 IRQ_TYPE_LEVEL_HIGH>;
usb@1,0 {
reg = <0x800 0 0 0 0>;
phys = <&usb0 0>;
phy-names = "usb";
};
usb@2,0 {
reg = <0x1000 0 0 0 0>;
phys = <&usb0 0>;
phy-names = "usb";
};
};
Example board setup:
&pci0 {
status = "okay";
pinctrl-0 = <&usb0_pins>;
pinctrl-names = "default";
};
......@@ -11,7 +11,7 @@ maintainers:
- Stanimir Varbanov <svarbanov@mm-sol.com>
description: |
Qualcomm PCIe root complex controller is bansed on the Synopsys DesignWare
Qualcomm PCIe root complex controller is based on the Synopsys DesignWare
PCIe IP.
properties:
......@@ -43,11 +43,12 @@ properties:
maxItems: 5
interrupts:
maxItems: 1
minItems: 1
maxItems: 8
interrupt-names:
items:
- const: msi
minItems: 1
maxItems: 8
# Common definitions for clocks, clock-names and reset.
# Platform constraints are described later.
......@@ -614,7 +615,7 @@ allOf:
- if:
not:
properties:
compatibles:
compatible:
contains:
enum:
- qcom,pcie-msm8996
......@@ -623,6 +624,50 @@ allOf:
- resets
- reset-names
# Newer chipsets support either 1 or 8 MSI vectors
# On older chipsets it's always 1 MSI vector
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-msm8996
- qcom,pcie-sc7280
- qcom,pcie-sc8180x
- qcom,pcie-sdm845
- qcom,pcie-sm8150
- qcom,pcie-sm8250
- qcom,pcie-sm8450-pcie0
- qcom,pcie-sm8450-pcie1
then:
oneOf:
- properties:
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
- properties:
interrupts:
minItems: 8
interrupt-names:
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
else:
properties:
interrupts:
maxItems: 1
interrupt-names:
items:
- const: msi
unevaluatedProperties: false
examples:
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/renesas,pci-rcar-gen2.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Renesas AHB to PCI bridge
maintainers:
- Marek Vasut <marek.vasut+renesas@gmail.com>
- Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
description: |
This is the bridge used internally to connect the USB controllers to the
AHB. There is one bridge instance per USB port connected to the internal
OHCI and EHCI controllers.
properties:
compatible:
oneOf:
- items:
- enum:
- renesas,pci-r8a7742 # RZ/G1H
- renesas,pci-r8a7743 # RZ/G1M
- renesas,pci-r8a7744 # RZ/G1N
- renesas,pci-r8a7745 # RZ/G1E
- renesas,pci-r8a7790 # R-Car H2
- renesas,pci-r8a7791 # R-Car M2-W
- renesas,pci-r8a7793 # R-Car M2-N
- renesas,pci-r8a7794 # R-Car E2
- const: renesas,pci-rcar-gen2 # R-Car Gen2 and RZ/G1
- items:
- enum:
- renesas,pci-r9a06g032 # RZ/N1D
- const: renesas,pci-rzn1 # RZ/N1
reg:
items:
- description: Operational registers for the OHCI/EHCI controllers.
- description: Bridge configuration and control registers.
interrupts:
maxItems: 1
clocks: true
clock-names: true
resets:
maxItems: 1
power-domains:
maxItems: 1
bus-range:
description: |
The PCI bus number range; as this is a single bus, the range
should be specified as the same value twice.
dma-ranges:
description: |
A single range for the inbound memory region. If not supplied,
defaults to 1GiB at 0x40000000. Note there are hardware restrictions on
the allowed combinations of address and size.
maxItems: 1
patternProperties:
'usb@[0-1],0':
type: object
description:
This a USB controller PCI device
properties:
reg:
description:
Identify the correct bus, device and function number in the
form <bdf 0 0 0 0>.
items:
minItems: 5
maxItems: 5
phys:
description:
Reference to the USB phy
maxItems: 1
phy-names:
maxItems: 1
required:
- reg
- phys
- phy-names
unevaluatedProperties: false
required:
- compatible
- reg
- interrupts
- interrupt-map
- interrupt-map-mask
- clocks
- power-domains
- bus-range
- "#address-cells"
- "#size-cells"
- "#interrupt-cells"
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
- if:
properties:
compatible:
contains:
enum:
- renesas,pci-rzn1
then:
properties:
clocks:
items:
- description: Internal bus clock (AHB) for HOST
- description: Internal bus clock (AHB) Power Management
- description: PCI clock for USB subsystem
clock-names:
items:
- const: hclkh
- const: hclkpm
- const: pciclk
required:
- clock-names
else:
properties:
clocks:
items:
- description: Device clock
clock-names:
items:
- const: pclk
required:
- resets
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/r8a7790-cpg-mssr.h>
#include <dt-bindings/power/r8a7790-sysc.h>
pci@ee090000 {
compatible = "renesas,pci-r8a7790", "renesas,pci-rcar-gen2";
device_type = "pci";
reg = <0xee090000 0xc00>,
<0xee080000 0x1100>;
clocks = <&cpg CPG_MOD 703>;
power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
resets = <&cpg 703>;
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
bus-range = <0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
ranges = <0x02000000 0 0xee080000 0xee080000 0 0x00010000>;
dma-ranges = <0x42000000 0 0x40000000 0x40000000 0 0x40000000>;
interrupt-map-mask = <0xf800 0 0 0x7>;
interrupt-map = <0x0000 0 0 1 &gic GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
<0x0800 0 0 1 &gic GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
<0x1000 0 0 2 &gic GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
usb@1,0 {
reg = <0x800 0 0 0 0>;
phys = <&usb0 0>;
phy-names = "usb";
};
usb@2,0 {
reg = <0x1000 0 0 0 0>;
phys = <&usb0 0>;
phy-names = "usb";
};
};
......@@ -34,8 +34,8 @@ properties:
minItems: 2
maxItems: 5
items:
enum: [ dbi, dbi2, config, atu, app, elbi, mgmt, ctrl, parf, cfg, link,
ulreg, smu, mpu, apb, phy ]
enum: [ dbi, dbi2, config, atu, atu_dma, app, appl, elbi, mgmt, ctrl,
parf, cfg, link, ulreg, smu, mpu, apb, phy ]
num-lanes:
description: |
......
......@@ -14,17 +14,23 @@ allOf:
properties:
compatible:
const: xlnx,versal-cpm-host-1.00
enum:
- xlnx,versal-cpm-host-1.00
- xlnx,versal-cpm5-host
reg:
items:
- description: CPM system level control and status registers.
- description: Configuration space region and bridge registers.
- description: CPM5 control and status registers.
minItems: 2
reg-names:
items:
- const: cpm_slcr
- const: cfg
- const: cpm_csr
minItems: 2
interrupts:
maxItems: 1
......@@ -95,4 +101,34 @@ examples:
interrupt-controller;
};
};
cpm5_pcie: pcie@fcdd0000 {
compatible = "xlnx,versal-cpm5-host";
device_type = "pci";
#address-cells = <3>;
#interrupt-cells = <1>;
#size-cells = <2>;
interrupts = <0 72 4>;
interrupt-parent = <&gic>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc_1 0>,
<0 0 0 2 &pcie_intc_1 1>,
<0 0 0 3 &pcie_intc_1 2>,
<0 0 0 4 &pcie_intc_1 3>;
bus-range = <0x00 0xff>;
ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>,
<0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>;
msi-map = <0x0 &its_gic 0x0 0x10000>;
reg = <0x00 0xfcdd0000 0x00 0x1000>,
<0x06 0x00000000 0x00 0x1000000>,
<0x00 0xfce20000 0x00 0x1000000>;
reg-names = "cpm_slcr", "cfg", "cpm_csr";
pcie_intc_1: interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
};
};
};
......@@ -15862,6 +15862,14 @@ L: linux-pci@vger.kernel.org
S: Maintained
F: drivers/pci/controller/dwc/*spear*
PCI DRIVER FOR XILINX VERSAL CPM
M: Bharat Kumar Gogada <bharat.kumar.gogada@amd.com>
M: Michal Simek <michal.simek@amd.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml
F: drivers/pci/controller/pcie-xilinx-cpm.c
PCMCIA SUBSYSTEM
M: Dominik Brodowski <linux@dominikbrodowski.net>
S: Odd Fixes
......
......@@ -365,13 +365,4 @@ extern void free_dma(unsigned int dmanr); /* release it again */
#define KERNEL_HAVE_CHECK_DMA
extern int check_dma(unsigned int dmanr);
/* From PCI */
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* _ASM_DMA_H */
......@@ -56,12 +56,6 @@ struct pci_controller {
/* IOMMU controls. */
/* TODO: integrate with include/asm-generic/pci.h ? */
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
return channel ? 15 : 14;
}
#define pci_domain_nr(bus) ((struct pci_controller *)(bus)->sysdata)->index
static inline int pci_proc_domain(struct pci_bus *bus)
......
......@@ -7,10 +7,5 @@
#define ASM_ARC_DMA_H
#define MAX_DMA_ADDRESS 0xC0000000
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy 0
#endif
#endif
......@@ -143,10 +143,4 @@ extern int get_dma_residue(unsigned int chan);
#endif /* CONFIG_ISA_DMA_API */
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* __ASM_ARM_DMA_H */
......@@ -22,11 +22,6 @@ static inline int pci_proc_domain(struct pci_bus *bus)
#define HAVE_PCI_MMAP
#define ARCH_GENERIC_PCI_MMAP_RESOURCE
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
return channel ? 15 : 14;
}
extern void pcibios_report_status(unsigned int status_mask, int warn);
#endif /* __KERNEL__ */
......
......@@ -9,7 +9,6 @@
#include <asm/io.h>
#define PCIBIOS_MIN_IO 0x1000
#define PCIBIOS_MIN_MEM 0
/*
* Set to 1 if the kernel should re-assign all PCI bus numbers
......@@ -18,21 +17,8 @@
(pci_has_flag(PCI_REASSIGN_ALL_BUS))
#define arch_can_pci_mmap_wc() 1
#define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
extern int isa_dma_bridge_buggy;
#ifdef CONFIG_PCI
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
/* no legacy IRQ on arm64 */
return -ENODEV;
}
static inline int pci_proc_domain(struct pci_bus *bus)
{
return 1;
}
#endif /* CONFIG_PCI */
/* Generic PCI */
#include <asm-generic/pci.h>
#endif /* __ASM_PCI_H */
......@@ -9,26 +9,7 @@
#include <asm/io.h>
#define PCIBIOS_MIN_IO 0
#define PCIBIOS_MIN_MEM 0
/* C-SKY shim does not initialize PCI bus */
#define pcibios_assign_all_busses() 1
extern int isa_dma_bridge_buggy;
#ifdef CONFIG_PCI
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
/* no legacy IRQ on csky */
return -ENODEV;
}
static inline int pci_proc_domain(struct pci_bus *bus)
{
/* always show the domain in /proc */
return 1;
}
#endif /* CONFIG_PCI */
/* Generic PCI */
#include <asm-generic/pci.h>
#endif /* __ASM_CSKY_PCI_H */
......@@ -12,8 +12,6 @@
extern unsigned long MAX_DMA_ADDRESS;
extern int isa_dma_bridge_buggy;
#define free_dma(x)
#endif /* _ASM_IA64_DMA_H */
......@@ -63,10 +63,4 @@ static inline int pci_proc_domain(struct pci_bus *bus)
return (pci_domain_nr(bus) != 0);
}
#define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
return channel ? isa_irq_to_vector(15) : isa_irq_to_vector(14);
}
#endif /* _ASM_IA64_PCI_H */
......@@ -6,10 +6,4 @@
bootmem allocator (but this should do it for this) */
#define MAX_DMA_ADDRESS PAGE_OFFSET
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* _M68K_DMA_H */
......@@ -2,8 +2,6 @@
#ifndef _ASM_M68K_PCI_H
#define _ASM_M68K_PCI_H
#include <asm-generic/pci.h>
#define pcibios_assign_all_busses() 1
#define PCIBIOS_MIN_IO 0x00000100
......
......@@ -9,10 +9,4 @@
/* Virtual address corresponding to last available physical memory address. */
#define MAX_DMA_ADDRESS (CONFIG_KERNEL_START + memory_size - 1)
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* _ASM_MICROBLAZE_DMA_H */
......@@ -307,12 +307,4 @@ static __inline__ int get_dma_residue(unsigned int dmanr)
extern int request_dma(unsigned int dmanr, const char * device_id); /* reserve a DMA channel */
extern void free_dma(unsigned int dmanr); /* release it again */
/* From PCI */
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* _ASM_DMA_H */
......@@ -139,10 +139,4 @@ static inline int pci_proc_domain(struct pci_bus *bus)
/* Do platform specific device initialization at pci_enable_device() time */
extern int pcibios_plat_dev_init(struct pci_dev *dev);
/* Chances are this interrupt is wired PC-style ... */
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
return channel ? 15 : 14;
}
#endif /* _ASM_PCI_H */
......@@ -176,10 +176,4 @@ static __inline__ void set_dma_count(unsigned int dmanr, unsigned int count)
#define free_dma(dmanr)
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* _ASM_DMA_H */
......@@ -162,11 +162,6 @@ extern void pcibios_init_bridge(struct pci_dev *);
#define PCIBIOS_MIN_IO 0x10
#define PCIBIOS_MIN_MEM 0x1000 /* NBPG - but pci/setup-res.c dies */
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
return channel ? 15 : 14;
}
#define HAVE_PCI_MMAP
#define ARCH_GENERIC_PCI_MMAP_RESOURCE
......
......@@ -340,11 +340,5 @@ extern int request_dma(unsigned int dmanr, const char *device_id);
/* release it again */
extern void free_dma(unsigned int dmanr);
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_DMA_H */
......@@ -39,7 +39,6 @@
#define pcibios_assign_all_busses() \
(pci_has_flag(PCI_REASSIGN_ALL_BUS))
#define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
if (ppc_md.pci_get_legacy_ide_irq)
......
......@@ -12,31 +12,7 @@
#include <asm/io.h>
#define PCIBIOS_MIN_IO 0
#define PCIBIOS_MIN_MEM 0
/* RISC-V shim does not initialize PCI bus */
#define pcibios_assign_all_busses() 1
#define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
extern int isa_dma_bridge_buggy;
#ifdef CONFIG_PCI
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
/* no legacy IRQ on risc-v */
return -ENODEV;
}
static inline int pci_proc_domain(struct pci_bus *bus)
{
/* always show the domain in /proc */
return 1;
}
#ifdef CONFIG_NUMA
#if defined(CONFIG_PCI) && defined(CONFIG_NUMA)
static inline int pcibus_to_node(struct pci_bus *bus)
{
return dev_to_node(&bus->dev);
......@@ -46,8 +22,9 @@ static inline int pcibus_to_node(struct pci_bus *bus)
cpu_all_mask : \
cpumask_of_node(pcibus_to_node(bus)))
#endif
#endif /* CONFIG_NUMA */
#endif /* defined(CONFIG_PCI) && defined(CONFIG_NUMA) */
#endif /* CONFIG_PCI */
/* Generic PCI */
#include <asm-generic/pci.h>
#endif /* _ASM_RISCV_PCI_H */
......@@ -11,10 +11,4 @@
*/
#define MAX_DMA_ADDRESS 0x80000000
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* _ASM_S390_DMA_H */
......@@ -6,7 +6,6 @@
#include <linux/mutex.h>
#include <linux/iommu.h>
#include <linux/pci_hotplug.h>
#include <asm-generic/pci.h>
#include <asm/pci_clp.h>
#include <asm/pci_debug.h>
#include <asm/pci_insn.h>
......
......@@ -145,9 +145,6 @@ int zpci_bus_scan_bus(struct zpci_bus *zbus)
struct zpci_dev *zdev;
int devfn, rc, ret = 0;
if (!zbus->function[0])
return 0;
for (devfn = 0; devfn < ZPCI_FUNCTIONS_PER_BUS; devfn++) {
zdev = zbus->function[devfn];
if (zdev && zdev->state == ZPCI_FN_STATE_CONFIGURED) {
......@@ -184,26 +181,26 @@ void zpci_bus_scan_busses(void)
/* zpci_bus_create_pci_bus - Create the PCI bus associated with this zbus
* @zbus: the zbus holding the zdevices
* @f0: function 0 of the bus
 * @fr: PCI root function that will determine the bus's domain, and bus speed
* @ops: the pci operations
*
* Function zero is taken as a parameter as this is used to determine the
* domain, multifunction property and maximum bus speed of the entire bus.
* The PCI function @fr determines the domain (its UID), multifunction property
* and maximum bus speed of the entire bus.
*
* Return: 0 on success, an error code otherwise
*/
static int zpci_bus_create_pci_bus(struct zpci_bus *zbus, struct zpci_dev *f0, struct pci_ops *ops)
static int zpci_bus_create_pci_bus(struct zpci_bus *zbus, struct zpci_dev *fr, struct pci_ops *ops)
{
struct pci_bus *bus;
int domain;
domain = zpci_alloc_domain((u16)f0->uid);
domain = zpci_alloc_domain((u16)fr->uid);
if (domain < 0)
return domain;
zbus->domain_nr = domain;
zbus->multifunction = f0->rid_available;
zbus->max_bus_speed = f0->max_bus_speed;
zbus->multifunction = fr->rid_available;
zbus->max_bus_speed = fr->max_bus_speed;
/*
* Note that the zbus->resources are taken over and zbus->resources
......@@ -303,47 +300,6 @@ void pcibios_bus_add_device(struct pci_dev *pdev)
}
}
/* zpci_bus_create_hotplug_slots - Add hotplug slot(s) for device added to bus
* @zdev: the zPCI device that was newly added
*
* Add the hotplug slot(s) for the newly added PCI function. Normally this is
* simply the slot for the function itself. If however we are adding the
* function 0 on a zbus, it might be that we already registered functions on
* that zbus but could not create their hotplug slots yet so add those now too.
*
* Return: 0 on success, an error code otherwise
*/
static int zpci_bus_create_hotplug_slots(struct zpci_dev *zdev)
{
struct zpci_bus *zbus = zdev->zbus;
int devfn, rc = 0;
rc = zpci_init_slot(zdev);
if (rc)
return rc;
zdev->has_hp_slot = 1;
if (zdev->devfn == 0 && zbus->multifunction) {
/* Now that function 0 is there we can finally create the
* hotplug slots for those functions with devfn != 0 that have
* been parked in zbus->function[] waiting for us to be able to
* create the PCI bus.
*/
for (devfn = 1; devfn < ZPCI_FUNCTIONS_PER_BUS; devfn++) {
zdev = zbus->function[devfn];
if (zdev && !zdev->has_hp_slot) {
rc = zpci_init_slot(zdev);
if (rc)
return rc;
zdev->has_hp_slot = 1;
}
}
}
return rc;
}
static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
{
int rc = -EINVAL;
......@@ -352,21 +308,19 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
pr_err("devfn %04x is already assigned\n", zdev->devfn);
return rc;
}
zdev->zbus = zbus;
zbus->function[zdev->devfn] = zdev;
zpci_nb_devices++;
if (zbus->bus) {
if (zbus->multifunction && !zdev->rid_available) {
WARN_ONCE(1, "rid_available not set for multifunction\n");
goto error;
}
zpci_bus_create_hotplug_slots(zdev);
} else {
/* Hotplug slot will be created once function 0 appears */
zbus->multifunction = 1;
}
rc = zpci_init_slot(zdev);
if (rc)
goto error;
zdev->has_hp_slot = 1;
return 0;
......@@ -400,7 +354,11 @@ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
return -ENOMEM;
}
if (zdev->devfn == 0) {
if (!zbus->bus) {
/* The UID of the first PCI function registered with a zpci_bus
* is used as the domain number for that bus. Currently there
* is exactly one zpci_bus per domain.
*/
rc = zpci_bus_create_pci_bus(zbus, zdev, ops);
if (rc)
goto error;
......
......@@ -137,10 +137,4 @@ extern int register_chan_caps(const char *dmac, struct dma_chan_caps *capslist);
extern int dma_create_sysfs_files(struct dma_channel *, struct dma_info *);
extern void dma_remove_sysfs_files(struct dma_channel *, struct dma_info *);
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* __ASM_SH_DMA_H */
......@@ -88,10 +88,4 @@ static inline int pci_proc_domain(struct pci_bus *bus)
return hose->need_domain_info;
}
/* Chances are this interrupt is wired PC-style ... */
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
return channel ? 15 : 14;
}
#endif /* __ASM_SH_PCI_H */
......@@ -82,14 +82,6 @@
#define DMA_BURST64 0x40
#define DMA_BURSTBITS 0x7f
/* From PCI */
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#ifdef CONFIG_SPARC32
struct device;
......
......@@ -37,16 +37,8 @@ static inline int pci_proc_domain(struct pci_bus *bus)
#define HAVE_PCI_MMAP
#define arch_can_pci_mmap_io() 1
#define HAVE_ARCH_PCI_GET_UNMAPPED_AREA
#define ARCH_GENERIC_PCI_MMAP_RESOURCE
#define get_pci_unmapped_area get_fb_unmapped_area
#endif /* CONFIG_SPARC64 */
#if defined(CONFIG_SPARC64) || defined(CONFIG_LEON_PCI)
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
return PCI_IRQ_NONE;
}
#else
#include <asm-generic/pci.h>
#endif
#endif /* ___ASM_SPARC_PCI_H */
......@@ -751,156 +751,15 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
}
/* Platform support for /proc/bus/pci/X/Y mmap()s. */
/* If the user uses a host-bridge as the PCI device, he may use
* this to perform a raw mmap() of the I/O or MEM space behind
* that controller.
*
* This can be useful for execution of x86 PCI bios initialization code
* on a PCI card, like the xfree86 int10 stuff does.
*/
static int __pci_mmap_make_offset_bus(struct pci_dev *pdev, struct vm_area_struct *vma,
enum pci_mmap_state mmap_state)
int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
{
struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
unsigned long space_size, user_offset, user_size;
if (mmap_state == pci_mmap_io) {
space_size = resource_size(&pbm->io_space);
} else {
space_size = resource_size(&pbm->mem_space);
}
/* Make sure the request is in range. */
user_offset = vma->vm_pgoff << PAGE_SHIFT;
user_size = vma->vm_end - vma->vm_start;
if (user_offset >= space_size ||
(user_offset + user_size) > space_size)
return -EINVAL;
if (mmap_state == pci_mmap_io) {
vma->vm_pgoff = (pbm->io_space.start +
user_offset) >> PAGE_SHIFT;
} else {
vma->vm_pgoff = (pbm->mem_space.start +
user_offset) >> PAGE_SHIFT;
}
return 0;
}
/* Adjust vm_pgoff of VMA such that it is the physical page offset
* corresponding to the 32-bit pci bus offset for DEV requested by the user.
*
* Basically, the user finds the base address for his device which he wishes
* to mmap. They read the 32-bit value from the config space base register,
* add whatever PAGE_SIZE multiple offset they wish, and feed this into the
* offset parameter of mmap on /proc/bus/pci/XXX for that device.
*
* Returns negative error code on failure, zero on success.
*/
static int __pci_mmap_make_offset(struct pci_dev *pdev,
struct vm_area_struct *vma,
enum pci_mmap_state mmap_state)
{
unsigned long user_paddr, user_size;
int i, err;
/* First compute the physical address in vma->vm_pgoff,
* making sure the user offset is within range in the
* appropriate PCI space.
*/
err = __pci_mmap_make_offset_bus(pdev, vma, mmap_state);
if (err)
return err;
/* If this is a mapping on a host bridge, any address
* is OK.
*/
if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_HOST)
return err;
/* Otherwise make sure it's in the range for one of the
* device's resources.
*/
user_paddr = vma->vm_pgoff << PAGE_SHIFT;
user_size = vma->vm_end - vma->vm_start;
for (i = 0; i <= PCI_ROM_RESOURCE; i++) {
struct resource *rp = &pdev->resource[i];
resource_size_t aligned_end;
resource_size_t ioaddr = pci_resource_start(pdev, bar);
/* Active? */
if (!rp->flags)
continue;
/* Same type? */
if (i == PCI_ROM_RESOURCE) {
if (mmap_state != pci_mmap_mem)
continue;
} else {
if ((mmap_state == pci_mmap_io &&
(rp->flags & IORESOURCE_IO) == 0) ||
(mmap_state == pci_mmap_mem &&
(rp->flags & IORESOURCE_MEM) == 0))
continue;
}
/* Align the resource end to the next page address.
* PAGE_SIZE intentionally added instead of (PAGE_SIZE - 1),
* because actually we need the address of the next byte
* after rp->end.
*/
aligned_end = (rp->end + PAGE_SIZE) & PAGE_MASK;
if ((rp->start <= user_paddr) &&
(user_paddr + user_size) <= aligned_end)
break;
}
if (i > PCI_ROM_RESOURCE)
if (!pbm)
return -EINVAL;
return 0;
}
/* Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
* device mapping.
*/
static void __pci_mmap_set_pgprot(struct pci_dev *dev, struct vm_area_struct *vma,
enum pci_mmap_state mmap_state)
{
/* Our io_remap_pfn_range takes care of this, do nothing. */
}
/* Perform the actual remap of the pages for a PCI device mapping, as appropriate
* for this architecture. The region in the process to map is described by vm_start
* and vm_end members of VMA, the base physical address is found in vm_pgoff.
* The pci device structure is provided so that architectures may make mapping
* decisions on a per-device or per-bus basis.
*
* Returns a negative error code on failure, zero on success.
*/
int pci_mmap_page_range(struct pci_dev *dev, int bar,
struct vm_area_struct *vma,
enum pci_mmap_state mmap_state, int write_combine)
{
int ret;
ret = __pci_mmap_make_offset(dev, vma, mmap_state);
if (ret < 0)
return ret;
__pci_mmap_set_pgprot(dev, vma, mmap_state);
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
ret = io_remap_pfn_range(vma, vma->vm_start,
vma->vm_pgoff,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
if (ret)
return ret;
vma->vm_pgoff += (ioaddr + pbm->io_space.start) >> PAGE_SHIFT;
return 0;
}
......
......@@ -4,28 +4,8 @@
#include <linux/types.h>
#include <asm/io.h>
#define PCIBIOS_MIN_IO 0
#define PCIBIOS_MIN_MEM 0
#define pcibios_assign_all_busses() 1
extern int isa_dma_bridge_buggy;
#ifdef CONFIG_PCI
static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
{
/* no legacy IRQs */
return -ENODEV;
}
#endif
#ifdef CONFIG_PCI_DOMAINS
static inline int pci_proc_domain(struct pci_bus *bus)
{
/* always show the domain in /proc */
return 1;
}
#endif /* CONFIG_PCI */
/* Generic PCI */
#include <asm-generic/pci.h>
#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
/*
......
......@@ -307,12 +307,4 @@ extern int request_dma(unsigned int dmanr, const char *device_id);
extern void free_dma(unsigned int dmanr);
#endif
/* From PCI */
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif /* _ASM_X86_DMA_H */
......@@ -105,9 +105,6 @@ static inline void early_quirks(void) { }
extern void pci_iommu_alloc(void);
/* generic pci stuff */
#include <asm-generic/pci.h>
#ifdef CONFIG_NUMA
/* Returns the node based on pci bus */
static inline int __pcibus_to_node(const struct pci_bus *bus)
......
// SPDX-License-Identifier: GPL-2.0
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/isa-dma.h>
#include <linux/pci.h>
#include <asm/dma.h>
#include <linux/io.h>
......
......@@ -52,11 +52,4 @@
extern int request_dma(unsigned int dmanr, const char * device_id);
extern void free_dma(unsigned int dmanr);
#ifdef CONFIG_PCI
extern int isa_dma_bridge_buggy;
#else
#define isa_dma_bridge_buggy (0)
#endif
#endif
......@@ -43,7 +43,4 @@
#define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
#define arch_can_pci_mmap_io() 1
/* Generic PCI */
#include <asm-generic/pci.h>
#endif /* _XTENSA_PCI_H */
......@@ -41,6 +41,8 @@ struct mcfg_fixup {
static struct mcfg_fixup mcfg_quirks[] = {
/* { OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */
#ifdef CONFIG_ARM64
#define AL_ECAM(table_id, rev, seg, ops) \
{ "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }
......@@ -169,6 +171,17 @@ static struct mcfg_fixup mcfg_quirks[] = {
ALTRA_ECAM_QUIRK(1, 13),
ALTRA_ECAM_QUIRK(1, 14),
ALTRA_ECAM_QUIRK(1, 15),
#endif /* ARM64 */
#ifdef CONFIG_LOONGARCH
#define LOONGSON_ECAM_MCFG(table_id, seg) \
{ "LOONGS", table_id, 1, seg, MCFG_BUS_ANY, &loongson_pci_ecam_ops }
LOONGSON_ECAM_MCFG("\0", 0),
LOONGSON_ECAM_MCFG("LOONGSON", 0),
LOONGSON_ECAM_MCFG("\0", 1),
LOONGSON_ECAM_MCFG("LOONGSON", 1),
#endif /* LOONGARCH */
};
static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
......
......@@ -8,7 +8,7 @@
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <asm/dma.h>
#include <linux/isa-dma.h>
#include <linux/comedi/comedidev.h>
#include <linux/comedi/comedi_isadma.h>
......
......@@ -64,8 +64,8 @@ static struct dw_edma_burst *dw_edma_alloc_burst(struct dw_edma_chunk *chunk)
static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc)
{
struct dw_edma_chip *chip = desc->chan->dw->chip;
struct dw_edma_chan *chan = desc->chan;
struct dw_edma *dw = chan->chip->dw;
struct dw_edma_chunk *chunk;
chunk = kzalloc(sizeof(*chunk), GFP_NOWAIT);
......@@ -82,11 +82,11 @@ static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc)
*/
chunk->cb = !(desc->chunks_alloc % 2);
if (chan->dir == EDMA_DIR_WRITE) {
chunk->ll_region.paddr = dw->ll_region_wr[chan->id].paddr;
chunk->ll_region.vaddr = dw->ll_region_wr[chan->id].vaddr;
chunk->ll_region.paddr = chip->ll_region_wr[chan->id].paddr;
chunk->ll_region.vaddr = chip->ll_region_wr[chan->id].vaddr;
} else {
chunk->ll_region.paddr = dw->ll_region_rd[chan->id].paddr;
chunk->ll_region.vaddr = dw->ll_region_rd[chan->id].vaddr;
chunk->ll_region.paddr = chip->ll_region_rd[chan->id].paddr;
chunk->ll_region.vaddr = chip->ll_region_rd[chan->id].vaddr;
}
if (desc->chunk) {
......@@ -339,20 +339,39 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
if (!chan->configured)
return NULL;
switch (chan->config.direction) {
case DMA_DEV_TO_MEM: /* local DMA */
if (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_READ)
break;
return NULL;
case DMA_MEM_TO_DEV: /* local DMA */
if (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_WRITE)
break;
/*
* Local Root Port/End-point Remote End-point
* +-----------------------+ PCIe bus +----------------------+
* | | +-+ | |
* | DEV_TO_MEM Rx Ch <----+ +---+ Tx Ch DEV_TO_MEM |
* | | | | | |
* | MEM_TO_DEV Tx Ch +----+ +---> Rx Ch MEM_TO_DEV |
* | | +-+ | |
* +-----------------------+ +----------------------+
*
* 1. Normal logic:
* If eDMA is embedded into the DW PCIe RP/EP and controlled from the
* CPU/Application side, the Rx channel (EDMA_DIR_READ) will be used
* for the device read operations (DEV_TO_MEM) and the Tx channel
* (EDMA_DIR_WRITE) - for the write operations (MEM_TO_DEV).
*
* 2. Inverted logic:
* If eDMA is embedded into a Remote PCIe EP and is controlled by the
* MWr/MRd TLPs sent from the CPU's PCIe host controller, the Tx
* channel (EDMA_DIR_WRITE) will be used for the device read operations
* (DEV_TO_MEM) and the Rx channel (EDMA_DIR_READ) - for the write
* operations (MEM_TO_DEV).
*
* It is the client driver responsibility to choose a proper channel
* for the DMA transfers.
*/
if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) {
if ((chan->dir == EDMA_DIR_READ && dir != DMA_DEV_TO_MEM) ||
(chan->dir == EDMA_DIR_WRITE && dir != DMA_MEM_TO_DEV))
return NULL;
default: /* remote DMA */
if (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_READ)
break;
if (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_WRITE)
break;
} else {
if ((chan->dir == EDMA_DIR_WRITE && dir != DMA_DEV_TO_MEM) ||
(chan->dir == EDMA_DIR_READ && dir != DMA_MEM_TO_DEV))
return NULL;
}
......@@ -423,7 +442,7 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
chunk->ll_region.sz += burst->sz;
desc->alloc_sz += burst->sz;
if (chan->dir == EDMA_DIR_WRITE) {
if (dir == DMA_DEV_TO_MEM) {
burst->sar = src_addr;
if (xfer->type == EDMA_XFER_CYCLIC) {
burst->dar = xfer->xfer.cyclic.paddr;
......@@ -663,7 +682,7 @@ static int dw_edma_alloc_chan_resources(struct dma_chan *dchan)
if (chan->status != EDMA_ST_IDLE)
return -EBUSY;
pm_runtime_get(chan->chip->dev);
pm_runtime_get(chan->dw->chip->dev);
return 0;
}
......@@ -685,15 +704,15 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan)
cpu_relax();
}
pm_runtime_put(chan->chip->dev);
pm_runtime_put(chan->dw->chip->dev);
}
static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write,
static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
u32 wr_alloc, u32 rd_alloc)
{
struct dw_edma_chip *chip = dw->chip;
struct dw_edma_region *dt_region;
struct device *dev = chip->dev;
struct dw_edma *dw = chip->dw;
struct dw_edma_chan *chan;
struct dw_edma_irq *irq;
struct dma_device *dma;
......@@ -726,7 +745,7 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write,
chan->vc.chan.private = dt_region;
chan->chip = chip;
chan->dw = dw;
chan->id = j;
chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ;
chan->configured = false;
......@@ -734,9 +753,9 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write,
chan->status = EDMA_ST_IDLE;
if (write)
chan->ll_max = (dw->ll_region_wr[j].sz / EDMA_LL_SZ);
chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ);
else
chan->ll_max = (dw->ll_region_rd[j].sz / EDMA_LL_SZ);
chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ);
chan->ll_max -= 1;
dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n",
......@@ -766,13 +785,13 @@ static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write,
vchan_init(&chan->vc, dma);
if (write) {
dt_region->paddr = dw->dt_region_wr[j].paddr;
dt_region->vaddr = dw->dt_region_wr[j].vaddr;
dt_region->sz = dw->dt_region_wr[j].sz;
dt_region->paddr = chip->dt_region_wr[j].paddr;
dt_region->vaddr = chip->dt_region_wr[j].vaddr;
dt_region->sz = chip->dt_region_wr[j].sz;
} else {
dt_region->paddr = dw->dt_region_rd[j].paddr;
dt_region->vaddr = dw->dt_region_rd[j].vaddr;
dt_region->sz = dw->dt_region_rd[j].sz;
dt_region->paddr = chip->dt_region_rd[j].paddr;
dt_region->vaddr = chip->dt_region_rd[j].vaddr;
dt_region->sz = chip->dt_region_rd[j].sz;
}
dw_edma_v0_core_device_config(chan);
......@@ -826,11 +845,11 @@ static inline void dw_edma_add_irq_mask(u32 *mask, u32 alloc, u16 cnt)
(*mask)++;
}
static int dw_edma_irq_request(struct dw_edma_chip *chip,
static int dw_edma_irq_request(struct dw_edma *dw,
u32 *wr_alloc, u32 *rd_alloc)
{
struct device *dev = chip->dev;
struct dw_edma *dw = chip->dw;
struct dw_edma_chip *chip = dw->chip;
struct device *dev = dw->chip->dev;
u32 wr_mask = 1;
u32 rd_mask = 1;
int i, err = 0;
......@@ -839,12 +858,16 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip,
ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt;
if (dw->nr_irqs < 1)
if (chip->nr_irqs < 1 || !chip->ops->irq_vector)
return -EINVAL;
if (dw->nr_irqs == 1) {
dw->irq = devm_kcalloc(dev, chip->nr_irqs, sizeof(*dw->irq), GFP_KERNEL);
if (!dw->irq)
return -ENOMEM;
if (chip->nr_irqs == 1) {
/* Common IRQ shared among all channels */
irq = dw->ops->irq_vector(dev, 0);
irq = chip->ops->irq_vector(dev, 0);
err = request_irq(irq, dw_edma_interrupt_common,
IRQF_SHARED, dw->name, &dw->irq[0]);
if (err) {
......@@ -854,9 +877,11 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip,
if (irq_get_msi_desc(irq))
get_cached_msi_msg(irq, &dw->irq[0].msi);
dw->nr_irqs = 1;
} else {
/* Distribute IRQs equally among all channels */
int tmp = dw->nr_irqs;
int tmp = chip->nr_irqs;
while (tmp && (*wr_alloc + *rd_alloc) < ch_cnt) {
dw_edma_dec_irq_alloc(&tmp, wr_alloc, dw->wr_ch_cnt);
......@@ -867,7 +892,7 @@ static int dw_edma_irq_request(struct dw_edma_chip *chip,
dw_edma_add_irq_mask(&rd_mask, *rd_alloc, dw->rd_ch_cnt);
for (i = 0; i < (*wr_alloc + *rd_alloc); i++) {
irq = dw->ops->irq_vector(dev, i);
irq = chip->ops->irq_vector(dev, i);
err = request_irq(irq,
i < *wr_alloc ?
dw_edma_interrupt_write :
......@@ -901,20 +926,22 @@ int dw_edma_probe(struct dw_edma_chip *chip)
return -EINVAL;
dev = chip->dev;
if (!dev)
if (!dev || !chip->ops)
return -EINVAL;
dw = chip->dw;
if (!dw || !dw->irq || !dw->ops || !dw->ops->irq_vector)
return -EINVAL;
dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL);
if (!dw)
return -ENOMEM;
dw->chip = chip;
raw_spin_lock_init(&dw->lock);
dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt,
dw->wr_ch_cnt = min_t(u16, chip->ll_wr_cnt,
dw_edma_v0_core_ch_count(dw, EDMA_DIR_WRITE));
dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt, EDMA_MAX_WR_CH);
dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt,
dw->rd_ch_cnt = min_t(u16, chip->ll_rd_cnt,
dw_edma_v0_core_ch_count(dw, EDMA_DIR_READ));
dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt, EDMA_MAX_RD_CH);
......@@ -936,17 +963,17 @@ int dw_edma_probe(struct dw_edma_chip *chip)
dw_edma_v0_core_off(dw);
/* Request IRQs */
err = dw_edma_irq_request(chip, &wr_alloc, &rd_alloc);
err = dw_edma_irq_request(dw, &wr_alloc, &rd_alloc);
if (err)
return err;
/* Setup write channels */
err = dw_edma_channel_setup(chip, true, wr_alloc, rd_alloc);
err = dw_edma_channel_setup(dw, true, wr_alloc, rd_alloc);
if (err)
goto err_irq_free;
/* Setup read channels */
err = dw_edma_channel_setup(chip, false, wr_alloc, rd_alloc);
err = dw_edma_channel_setup(dw, false, wr_alloc, rd_alloc);
if (err)
goto err_irq_free;
......@@ -954,15 +981,15 @@ int dw_edma_probe(struct dw_edma_chip *chip)
pm_runtime_enable(dev);
/* Turn debugfs on */
dw_edma_v0_core_debugfs_on(chip);
dw_edma_v0_core_debugfs_on(dw);
chip->dw = dw;
return 0;
err_irq_free:
for (i = (dw->nr_irqs - 1); i >= 0; i--)
free_irq(dw->ops->irq_vector(dev, i), &dw->irq[i]);
dw->nr_irqs = 0;
free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]);
return err;
}
......@@ -980,7 +1007,7 @@ int dw_edma_remove(struct dw_edma_chip *chip)
/* Free irqs */
for (i = (dw->nr_irqs - 1); i >= 0; i--)
free_irq(dw->ops->irq_vector(dev, i), &dw->irq[i]);
free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]);
/* Power management */
pm_runtime_disable(dev);
......@@ -1001,7 +1028,7 @@ int dw_edma_remove(struct dw_edma_chip *chip)
}
/* Turn debugfs off */
dw_edma_v0_core_debugfs_off(chip);
dw_edma_v0_core_debugfs_off(dw);
return 0;
}
......
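
Taken together, the rework shown above means a glue driver now describes the controller entirely through struct dw_edma_chip (register base, channel counts, regions, nr_irqs and ops), while struct dw_edma is allocated inside dw_edma_probe(). A rough sketch (not part of this diff) using the hypothetical names my_edma_probe and my_edma_ops:

    static int my_edma_probe(struct device *dev, void __iomem *regs, int nr_irqs)
    {
            struct dw_edma_chip *chip;

            chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
            if (!chip)
                    return -ENOMEM;

            chip->dev = dev;
            chip->ops = &my_edma_ops;       /* must provide .irq_vector() */
            chip->nr_irqs = nr_irqs;
            chip->mf = EDMA_MF_EDMA_LEGACY;
            chip->reg_base = regs;
            chip->ll_wr_cnt = 1;
            chip->ll_rd_cnt = 1;
            /* chip->ll_region_wr[0], ll_region_rd[0], dt_region_wr[0] and
             * dt_region_rd[0] must also be filled in before probing.
             */

            return dw_edma_probe(chip);     /* struct dw_edma is allocated inside */
    }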
......@@ -15,20 +15,12 @@
#include "../virt-dma.h"
#define EDMA_LL_SZ 24
#define EDMA_MAX_WR_CH 8
#define EDMA_MAX_RD_CH 8
enum dw_edma_dir {
EDMA_DIR_WRITE = 0,
EDMA_DIR_READ
};
enum dw_edma_map_format {
EDMA_MF_EDMA_LEGACY = 0x0,
EDMA_MF_EDMA_UNROLL = 0x1,
EDMA_MF_HDMA_COMPAT = 0x5
};
enum dw_edma_request {
EDMA_REQ_NONE = 0,
EDMA_REQ_STOP,
......@@ -57,12 +49,6 @@ struct dw_edma_burst {
u32 sz;
};
struct dw_edma_region {
phys_addr_t paddr;
void __iomem *vaddr;
size_t sz;
};
struct dw_edma_chunk {
struct list_head list;
struct dw_edma_chan *chan;
......@@ -87,7 +73,7 @@ struct dw_edma_desc {
struct dw_edma_chan {
struct virt_dma_chan vc;
struct dw_edma_chip *chip;
struct dw_edma *dw;
int id;
enum dw_edma_dir dir;
......@@ -109,10 +95,6 @@ struct dw_edma_irq {
struct dw_edma *dw;
};
struct dw_edma_core_ops {
int (*irq_vector)(struct device *dev, unsigned int nr);
};
struct dw_edma {
char name[20];
......@@ -122,21 +104,14 @@ struct dw_edma {
struct dma_device rd_edma;
u16 rd_ch_cnt;
struct dw_edma_region rg_region; /* Registers */
struct dw_edma_region ll_region_wr[EDMA_MAX_WR_CH];
struct dw_edma_region ll_region_rd[EDMA_MAX_RD_CH];
struct dw_edma_region dt_region_wr[EDMA_MAX_WR_CH];
struct dw_edma_region dt_region_rd[EDMA_MAX_RD_CH];
struct dw_edma_irq *irq;
int nr_irqs;
enum dw_edma_map_format mf;
struct dw_edma_chan *chan;
const struct dw_edma_core_ops *ops;
raw_spinlock_t lock; /* Only for legacy */
struct dw_edma_chip *chip;
#ifdef CONFIG_DEBUG_FS
struct dentry *debugfs;
#endif /* CONFIG_DEBUG_FS */
......
......@@ -148,7 +148,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
struct dw_edma_pcie_data vsec_data;
struct device *dev = &pdev->dev;
struct dw_edma_chip *chip;
struct dw_edma *dw;
int err, nr_irqs;
int i, mask;
......@@ -197,10 +196,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
if (!chip)
return -ENOMEM;
dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL);
if (!dw)
return -ENOMEM;
/* IRQs allocation */
nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data.irqs,
PCI_IRQ_MSI | PCI_IRQ_MSIX);
......@@ -211,29 +206,23 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
}
/* Data structure initialization */
chip->dw = dw;
chip->dev = dev;
chip->id = pdev->devfn;
chip->irq = pdev->irq;
dw->mf = vsec_data.mf;
dw->nr_irqs = nr_irqs;
dw->ops = &dw_edma_pcie_core_ops;
dw->wr_ch_cnt = vsec_data.wr_ch_cnt;
dw->rd_ch_cnt = vsec_data.rd_ch_cnt;
chip->mf = vsec_data.mf;
chip->nr_irqs = nr_irqs;
chip->ops = &dw_edma_pcie_core_ops;
dw->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg.bar];
if (!dw->rg_region.vaddr)
return -ENOMEM;
chip->ll_wr_cnt = vsec_data.wr_ch_cnt;
chip->ll_rd_cnt = vsec_data.rd_ch_cnt;
dw->rg_region.vaddr += vsec_data.rg.off;
dw->rg_region.paddr = pdev->resource[vsec_data.rg.bar].start;
dw->rg_region.paddr += vsec_data.rg.off;
dw->rg_region.sz = vsec_data.rg.sz;
chip->reg_base = pcim_iomap_table(pdev)[vsec_data.rg.bar];
if (!chip->reg_base)
return -ENOMEM;
for (i = 0; i < dw->wr_ch_cnt; i++) {
struct dw_edma_region *ll_region = &dw->ll_region_wr[i];
struct dw_edma_region *dt_region = &dw->dt_region_wr[i];
for (i = 0; i < chip->ll_wr_cnt; i++) {
struct dw_edma_region *ll_region = &chip->ll_region_wr[i];
struct dw_edma_region *dt_region = &chip->dt_region_wr[i];
struct dw_edma_block *ll_block = &vsec_data.ll_wr[i];
struct dw_edma_block *dt_block = &vsec_data.dt_wr[i];
......@@ -256,9 +245,9 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
dt_region->sz = dt_block->sz;
}
for (i = 0; i < dw->rd_ch_cnt; i++) {
struct dw_edma_region *ll_region = &dw->ll_region_rd[i];
struct dw_edma_region *dt_region = &dw->dt_region_rd[i];
for (i = 0; i < chip->ll_rd_cnt; i++) {
struct dw_edma_region *ll_region = &chip->ll_region_rd[i];
struct dw_edma_region *dt_region = &chip->dt_region_rd[i];
struct dw_edma_block *ll_block = &vsec_data.ll_rd[i];
struct dw_edma_block *dt_block = &vsec_data.dt_rd[i];
......@@ -282,45 +271,45 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
}
/* Debug info */
if (dw->mf == EDMA_MF_EDMA_LEGACY)
pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", dw->mf);
else if (dw->mf == EDMA_MF_EDMA_UNROLL)
pci_dbg(pdev, "Version:\teDMA Unroll (0x%x)\n", dw->mf);
else if (dw->mf == EDMA_MF_HDMA_COMPAT)
pci_dbg(pdev, "Version:\tHDMA Compatible (0x%x)\n", dw->mf);
if (chip->mf == EDMA_MF_EDMA_LEGACY)
pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", chip->mf);
else if (chip->mf == EDMA_MF_EDMA_UNROLL)
pci_dbg(pdev, "Version:\teDMA Unroll (0x%x)\n", chip->mf);
else if (chip->mf == EDMA_MF_HDMA_COMPAT)
pci_dbg(pdev, "Version:\tHDMA Compatible (0x%x)\n", chip->mf);
else
pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", dw->mf);
pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", chip->mf);
pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p)\n",
vsec_data.rg.bar, vsec_data.rg.off, vsec_data.rg.sz,
dw->rg_region.vaddr, &dw->rg_region.paddr);
chip->reg_base);
for (i = 0; i < dw->wr_ch_cnt; i++) {
for (i = 0; i < chip->ll_wr_cnt; i++) {
pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.ll_wr[i].bar,
vsec_data.ll_wr[i].off, dw->ll_region_wr[i].sz,
dw->ll_region_wr[i].vaddr, &dw->ll_region_wr[i].paddr);
vsec_data.ll_wr[i].off, chip->ll_region_wr[i].sz,
chip->ll_region_wr[i].vaddr, &chip->ll_region_wr[i].paddr);
pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.dt_wr[i].bar,
vsec_data.dt_wr[i].off, dw->dt_region_wr[i].sz,
dw->dt_region_wr[i].vaddr, &dw->dt_region_wr[i].paddr);
vsec_data.dt_wr[i].off, chip->dt_region_wr[i].sz,
chip->dt_region_wr[i].vaddr, &chip->dt_region_wr[i].paddr);
}
for (i = 0; i < dw->rd_ch_cnt; i++) {
for (i = 0; i < chip->ll_rd_cnt; i++) {
pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.ll_rd[i].bar,
vsec_data.ll_rd[i].off, dw->ll_region_rd[i].sz,
dw->ll_region_rd[i].vaddr, &dw->ll_region_rd[i].paddr);
vsec_data.ll_rd[i].off, chip->ll_region_rd[i].sz,
chip->ll_region_rd[i].vaddr, &chip->ll_region_rd[i].paddr);
pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.dt_rd[i].bar,
vsec_data.dt_rd[i].off, dw->dt_region_rd[i].sz,
dw->dt_region_rd[i].vaddr, &dw->dt_region_rd[i].paddr);
vsec_data.dt_rd[i].off, chip->dt_region_rd[i].sz,
chip->dt_region_rd[i].vaddr, &chip->dt_region_rd[i].paddr);
}
pci_dbg(pdev, "Nr. IRQs:\t%u\n", dw->nr_irqs);
pci_dbg(pdev, "Nr. IRQs:\t%u\n", chip->nr_irqs);
/* Validating if PCI interrupts were enabled */
if (!pci_dev_msi_enabled(pdev)) {
......@@ -328,10 +317,6 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
return -EPERM;
}
dw->irq = devm_kcalloc(dev, nr_irqs, sizeof(*dw->irq), GFP_KERNEL);
if (!dw->irq)
return -ENOMEM;
/* Starting eDMA driver */
err = dw_edma_probe(chip);
if (err) {
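For orientation, here is a minimal sketch of how a glue driver hands its resources to the eDMA core after this refactor. Only dw_edma_probe() and the struct dw_edma_chip members are taken from the hunks above; the helper name, its parameters, and the choice of mapping format are illustrative assumptions.

#include <linux/device.h>
#include <linux/slab.h>
#include <linux/dma/edma.h>

/* Hypothetical glue-driver helper; everything not shown in the diff above
 * (name, parameters, mapping format) is a placeholder. */
static int my_glue_edma_init(struct device *dev, void __iomem *regs,
			     int nr_irqs, u16 wr_cnt, u16 rd_cnt)
{
	struct dw_edma_chip *chip;

	chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
	if (!chip)
		return -ENOMEM;

	chip->dev = dev;
	chip->mf = EDMA_MF_HDMA_COMPAT;	/* or EDMA_MF_EDMA_LEGACY/_UNROLL */
	chip->nr_irqs = nr_irqs;
	chip->reg_base = regs;		/* eDMA register window */
	chip->ll_wr_cnt = wr_cnt;
	chip->ll_rd_cnt = rd_cnt;

	/* chip->ops must provide ->irq_vector(), and chip->ll_region_*[] /
	 * chip->dt_region_*[] must describe the per-channel linked-list and
	 * data buffers, exactly as in the probe loops above. */

	return dw_edma_probe(chip);
}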
......@@ -25,7 +25,7 @@ enum dw_edma_control {
static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw)
{
return dw->rg_region.vaddr;
return dw->chip->reg_base;
}
#define SET_32(dw, name, value) \
......@@ -96,7 +96,7 @@ static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw)
static inline struct dw_edma_v0_ch_regs __iomem *
__dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch)
{
if (dw->mf == EDMA_MF_EDMA_LEGACY)
if (dw->chip->mf == EDMA_MF_EDMA_LEGACY)
return &(__dw_regs(dw)->type.legacy.ch);
if (dir == EDMA_DIR_WRITE)
......@@ -108,7 +108,7 @@ __dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch)
static inline void writel_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
u32 value, void __iomem *addr)
{
if (dw->mf == EDMA_MF_EDMA_LEGACY) {
if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
u32 viewport_sel;
unsigned long flags;
......@@ -133,7 +133,7 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
{
u32 value;
if (dw->mf == EDMA_MF_EDMA_LEGACY) {
if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
u32 viewport_sel;
unsigned long flags;
......@@ -169,7 +169,7 @@ static inline u32 readl_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
u64 value, void __iomem *addr)
{
if (dw->mf == EDMA_MF_EDMA_LEGACY) {
if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
u32 viewport_sel;
unsigned long flags;
......@@ -194,7 +194,7 @@ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
{
u32 value;
if (dw->mf == EDMA_MF_EDMA_LEGACY) {
if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
u32 viewport_sel;
unsigned long flags;
......@@ -256,7 +256,7 @@ u16 dw_edma_v0_core_ch_count(struct dw_edma *dw, enum dw_edma_dir dir)
enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan)
{
struct dw_edma *dw = chan->chip->dw;
struct dw_edma *dw = chan->dw;
u32 tmp;
tmp = FIELD_GET(EDMA_V0_CH_STATUS_MASK,
......@@ -272,7 +272,7 @@ enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan)
void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan)
{
struct dw_edma *dw = chan->chip->dw;
struct dw_edma *dw = chan->dw;
SET_RW_32(dw, chan->dir, int_clear,
FIELD_PREP(EDMA_V0_DONE_INT_MASK, BIT(chan->id)));
......@@ -280,7 +280,7 @@ void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan)
void dw_edma_v0_core_clear_abort_int(struct dw_edma_chan *chan)
{
struct dw_edma *dw = chan->chip->dw;
struct dw_edma *dw = chan->dw;
SET_RW_32(dw, chan->dir, int_clear,
FIELD_PREP(EDMA_V0_ABORT_INT_MASK, BIT(chan->id)));
......@@ -301,6 +301,7 @@ u32 dw_edma_v0_core_status_abort_int(struct dw_edma *dw, enum dw_edma_dir dir)
static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
{
struct dw_edma_burst *child;
struct dw_edma_chan *chan = chunk->chan;
struct dw_edma_v0_lli __iomem *lli;
struct dw_edma_v0_llp __iomem *llp;
u32 control = 0, i = 0;
......@@ -314,9 +315,11 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
j = chunk->bursts_alloc;
list_for_each_entry(child, &chunk->burst->list, list) {
j--;
if (!j)
control |= (DW_EDMA_V0_LIE | DW_EDMA_V0_RIE);
if (!j) {
control |= DW_EDMA_V0_LIE;
if (!(chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
control |= DW_EDMA_V0_RIE;
}
/* Channel control */
SET_LL_32(&lli[i].control, control);
/* Transfer size */
......@@ -357,7 +360,7 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
{
struct dw_edma_chan *chan = chunk->chan;
struct dw_edma *dw = chan->chip->dw;
struct dw_edma *dw = chan->dw;
u32 tmp;
dw_edma_v0_core_write_chunk(chunk);
......@@ -365,7 +368,7 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
if (first) {
/* Enable engine */
SET_RW_32(dw, chan->dir, engine_en, BIT(0));
if (dw->mf == EDMA_MF_HDMA_COMPAT) {
if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
switch (chan->id) {
case 0:
SET_RW_COMPAT(dw, chan->dir, ch0_pwr_en,
......@@ -427,7 +430,7 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
int dw_edma_v0_core_device_config(struct dw_edma_chan *chan)
{
struct dw_edma *dw = chan->chip->dw;
struct dw_edma *dw = chan->dw;
u32 tmp = 0;
/* MSI done addr - low, high */
......@@ -497,12 +500,12 @@ int dw_edma_v0_core_device_config(struct dw_edma_chan *chan)
}
/* eDMA debugfs callbacks */
void dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip)
void dw_edma_v0_core_debugfs_on(struct dw_edma *dw)
{
dw_edma_v0_debugfs_on(chip);
dw_edma_v0_debugfs_on(dw);
}
void dw_edma_v0_core_debugfs_off(struct dw_edma_chip *chip)
void dw_edma_v0_core_debugfs_off(struct dw_edma *dw)
{
dw_edma_v0_debugfs_off(chip);
dw_edma_v0_debugfs_off(dw);
}
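A side note on the LIE/RIE hunk above: the remote-interrupt enable is now only programmed when the eDMA is driven from the remote (host) side. A glue driver that owns the controller locally is presumably expected to mark the chip accordingly, along these lines (the field and flag names come from the hunk; the surrounding usage is illustrative):

	/* Local (e.g. endpoint-side) operation: the core then sets only
	 * DW_EDMA_V0_LIE and leaves DW_EDMA_V0_RIE clear. */
	chip->flags = DW_EDMA_CHIP_LOCAL;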
......@@ -22,7 +22,7 @@ u32 dw_edma_v0_core_status_abort_int(struct dw_edma *chan, enum dw_edma_dir dir)
void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first);
int dw_edma_v0_core_device_config(struct dw_edma_chan *chan);
/* eDMA debug fs callbacks */
void dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip);
void dw_edma_v0_core_debugfs_off(struct dw_edma_chip *chip);
void dw_edma_v0_core_debugfs_on(struct dw_edma *dw);
void dw_edma_v0_core_debugfs_off(struct dw_edma *dw);
#endif /* _DW_EDMA_V0_CORE_H */
......@@ -54,7 +54,7 @@ struct debugfs_entries {
static int dw_edma_debugfs_u32_get(void *data, u64 *val)
{
void __iomem *reg = (void __force __iomem *)data;
if (dw->mf == EDMA_MF_EDMA_LEGACY &&
if (dw->chip->mf == EDMA_MF_EDMA_LEGACY &&
reg >= (void __iomem *)&regs->type.legacy.ch) {
void __iomem *ptr = &regs->type.legacy.ch;
u32 viewport_sel = 0;
......@@ -173,7 +173,7 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir)
nr_entries = ARRAY_SIZE(debugfs_regs);
dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
if (dw->mf == EDMA_MF_HDMA_COMPAT) {
if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
regs_dir);
......@@ -242,7 +242,7 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir)
nr_entries = ARRAY_SIZE(debugfs_regs);
dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
if (dw->mf == EDMA_MF_HDMA_COMPAT) {
if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
regs_dir);
......@@ -282,13 +282,13 @@ static void dw_edma_debugfs_regs(void)
dw_edma_debugfs_regs_rd(regs_dir);
}
void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip)
void dw_edma_v0_debugfs_on(struct dw_edma *_dw)
{
dw = chip->dw;
dw = _dw;
if (!dw)
return;
regs = dw->rg_region.vaddr;
regs = dw->chip->reg_base;
if (!regs)
return;
......@@ -296,16 +296,16 @@ void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip)
if (!dw->debugfs)
return;
debugfs_create_u32("mf", 0444, dw->debugfs, &dw->mf);
debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf);
debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt);
debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt);
dw_edma_debugfs_regs();
}
void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip)
void dw_edma_v0_debugfs_off(struct dw_edma *_dw)
{
dw = chip->dw;
dw = _dw;
if (!dw)
return;
......@@ -12,14 +12,14 @@
#include <linux/dma/edma.h>
#ifdef CONFIG_DEBUG_FS
void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip);
void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip);
void dw_edma_v0_debugfs_on(struct dw_edma *dw);
void dw_edma_v0_debugfs_off(struct dw_edma *dw);
#else
static inline void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip)
static inline void dw_edma_v0_debugfs_on(struct dw_edma *dw)
{
}
static inline void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip)
static inline void dw_edma_v0_debugfs_off(struct dw_edma *dw)
{
}
#endif /* CONFIG_DEBUG_FS */
......@@ -237,7 +237,7 @@ config PCIE_ROCKCHIP_EP
config PCIE_MEDIATEK
tristate "MediaTek PCIe controller"
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on ARCH_AIROHA || ARCH_MEDIATEK || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
help
......@@ -293,7 +293,7 @@ config PCI_HYPERV_INTERFACE
config PCI_LOONGSON
bool "LOONGSON PCI Controller"
depends on MACH_LOONGSON64 || COMPILE_TEST
depends on OF
depends on OF || ACPI
depends on PCI_QUIRKS
default MACH_LOONGSON64
help
......@@ -243,7 +243,6 @@ int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
return ret;
}
#ifdef CONFIG_PM_SLEEP
static int cdns_pcie_suspend_noirq(struct device *dev)
{
struct cdns_pcie *pcie = dev_get_drvdata(dev);
......@@ -266,9 +265,8 @@ static int cdns_pcie_resume_noirq(struct device *dev)
return 0;
}
#endif
const struct dev_pm_ops cdns_pcie_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
cdns_pcie_resume_noirq)
};
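The same conversion recurs in the hunks that follow: the #ifdef CONFIG_PM_SLEEP guards disappear and the SET_-prefixed macros become their unprefixed counterparts. As a generic sketch (driver names here are placeholders), the pattern looks like this; because the callbacks remain syntactically referenced, the compiler can drop them as dead code when sleep support is disabled, with no __maybe_unused annotations needed:

#include <linux/device.h>
#include <linux/pm.h>

static int foo_suspend_noirq(struct device *dev)
{
	/* quiesce the controller */
	return 0;
}

static int foo_resume_noirq(struct device *dev)
{
	/* re-initialize the controller */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	NOIRQ_SYSTEM_SLEEP_PM_OPS(foo_suspend_noirq, foo_resume_noirq)
};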
......@@ -178,7 +178,7 @@ static void dra7xx_pcie_enable_interrupts(struct dra7xx_pcie *dra7xx)
dra7xx_pcie_enable_msi_interrupts(dra7xx);
}
static int dra7xx_pcie_host_init(struct pcie_port *pp)
static int dra7xx_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
......@@ -202,7 +202,7 @@ static const struct irq_domain_ops intx_domain_ops = {
.xlate = pci_irqd_intx_xlate,
};
static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index)
static int dra7xx_pcie_handle_msi(struct dw_pcie_rp *pp, int index)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
unsigned long val;
......@@ -224,7 +224,7 @@ static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index)
return 1;
}
static void dra7xx_pcie_handle_msi_irq(struct pcie_port *pp)
static void dra7xx_pcie_handle_msi_irq(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
int ret, i, count, num_ctrls;
......@@ -255,8 +255,8 @@ static void dra7xx_pcie_msi_irq_handler(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
struct dra7xx_pcie *dra7xx;
struct dw_pcie_rp *pp;
struct dw_pcie *pci;
struct pcie_port *pp;
unsigned long reg;
u32 bit;
......@@ -344,7 +344,7 @@ static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg)
return IRQ_HANDLED;
}
static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
static int dra7xx_pcie_init_irq_domain(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
......@@ -475,7 +475,7 @@ static int dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
{
int ret;
struct dw_pcie *pci = dra7xx->pci;
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = pci->dev;
pp->irq = platform_get_irq(pdev, 1);
......@@ -483,7 +483,7 @@ static int dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
return pp->irq;
/* MSI IRQ is muxed */
pp->msi_irq = -ENODEV;
pp->msi_irq[0] = -ENODEV;
ret = dra7xx_pcie_init_irq_domain(pp);
if (ret < 0)
......@@ -862,7 +862,6 @@ static int dra7xx_pcie_probe(struct platform_device *pdev)
return ret;
}
#ifdef CONFIG_PM_SLEEP
static int dra7xx_pcie_suspend(struct device *dev)
{
struct dra7xx_pcie *dra7xx = dev_get_drvdata(dev);
......@@ -919,7 +918,6 @@ static int dra7xx_pcie_resume_noirq(struct device *dev)
return 0;
}
#endif
static void dra7xx_pcie_shutdown(struct platform_device *pdev)
{
......@@ -940,8 +938,8 @@ static void dra7xx_pcie_shutdown(struct platform_device *pdev)
}
static const struct dev_pm_ops dra7xx_pcie_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend, dra7xx_pcie_resume)
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend_noirq,
SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend, dra7xx_pcie_resume)
NOIRQ_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend_noirq,
dra7xx_pcie_resume_noirq)
};
......@@ -249,7 +249,7 @@ static int exynos_pcie_link_up(struct dw_pcie *pci)
return (val & PCIE_ELBI_XMLH_LINKUP);
}
static int exynos_pcie_host_init(struct pcie_port *pp)
static int exynos_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct exynos_pcie *ep = to_exynos_pcie(pci);
......@@ -258,9 +258,8 @@ static int exynos_pcie_host_init(struct pcie_port *pp)
exynos_pcie_assert_core_reset(ep);
phy_reset(ep->phy);
phy_power_on(ep->phy);
phy_init(ep->phy);
phy_power_on(ep->phy);
exynos_pcie_deassert_core_reset(ep);
exynos_pcie_enable_irq_pulse(ep);
......@@ -276,7 +275,7 @@ static int exynos_add_pcie_port(struct exynos_pcie *ep,
struct platform_device *pdev)
{
struct dw_pcie *pci = &ep->pci;
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
......@@ -292,7 +291,7 @@ static int exynos_add_pcie_port(struct exynos_pcie *ep,
}
pp->ops = &exynos_pcie_host_ops;
pp->msi_irq = -ENODEV;
pp->msi_irq[0] = -ENODEV;
ret = dw_pcie_host_init(pp);
if (ret) {
......@@ -390,7 +389,7 @@ static int __exit exynos_pcie_remove(struct platform_device *pdev)
return 0;
}
static int __maybe_unused exynos_pcie_suspend_noirq(struct device *dev)
static int exynos_pcie_suspend_noirq(struct device *dev)
{
struct exynos_pcie *ep = dev_get_drvdata(dev);
......@@ -402,11 +401,11 @@ static int __maybe_unused exynos_pcie_suspend_noirq(struct device *dev)
return 0;
}
static int __maybe_unused exynos_pcie_resume_noirq(struct device *dev)
static int exynos_pcie_resume_noirq(struct device *dev)
{
struct exynos_pcie *ep = dev_get_drvdata(dev);
struct dw_pcie *pci = &ep->pci;
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
int ret;
ret = regulator_bulk_enable(ARRAY_SIZE(ep->supplies), ep->supplies);
......@@ -421,7 +420,7 @@ static int __maybe_unused exynos_pcie_resume_noirq(struct device *dev)
}
static const struct dev_pm_ops exynos_pcie_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(exynos_pcie_suspend_noirq,
NOIRQ_SYSTEM_SLEEP_PM_OPS(exynos_pcie_suspend_noirq,
exynos_pcie_resume_noirq)
};
......@@ -109,7 +109,7 @@ struct ks_pcie_of_data {
enum dw_pcie_device_mode mode;
const struct dw_pcie_host_ops *host_ops;
const struct dw_pcie_ep_ops *ep_ops;
unsigned int version;
u32 version;
};
struct keystone_pcie {
......@@ -147,7 +147,7 @@ static void ks_pcie_app_writel(struct keystone_pcie *ks_pcie, u32 offset,
static void ks_pcie_msi_irq_ack(struct irq_data *data)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(data);
struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data);
struct keystone_pcie *ks_pcie;
u32 irq = data->hwirq;
struct dw_pcie *pci;
......@@ -167,7 +167,7 @@ static void ks_pcie_msi_irq_ack(struct irq_data *data)
static void ks_pcie_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(data);
struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data);
struct keystone_pcie *ks_pcie;
struct dw_pcie *pci;
u64 msi_target;
......@@ -192,7 +192,7 @@ static int ks_pcie_msi_set_affinity(struct irq_data *irq_data,
static void ks_pcie_msi_mask(struct irq_data *data)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(data);
struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data);
struct keystone_pcie *ks_pcie;
u32 irq = data->hwirq;
struct dw_pcie *pci;
......@@ -216,7 +216,7 @@ static void ks_pcie_msi_mask(struct irq_data *data)
static void ks_pcie_msi_unmask(struct irq_data *data)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(data);
struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data);
struct keystone_pcie *ks_pcie;
u32 irq = data->hwirq;
struct dw_pcie *pci;
......@@ -247,7 +247,7 @@ static struct irq_chip ks_pcie_msi_irq_chip = {
.irq_unmask = ks_pcie_msi_unmask,
};
static int ks_pcie_msi_host_init(struct pcie_port *pp)
static int ks_pcie_msi_host_init(struct dw_pcie_rp *pp)
{
pp->msi_irq_chip = &ks_pcie_msi_irq_chip;
return dw_pcie_allocate_domains(pp);
......@@ -390,7 +390,7 @@ static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
u32 val;
u32 num_viewport = ks_pcie->num_viewport;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
u64 start, end;
struct resource *mem;
int i;
......@@ -428,7 +428,7 @@ static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus,
unsigned int devfn, int where)
{
struct pcie_port *pp = bus->sysdata;
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u32 reg;
......@@ -456,7 +456,7 @@ static struct pci_ops ks_child_pcie_ops = {
*/
static int ks_pcie_v3_65_add_bus(struct pci_bus *bus)
{
struct pcie_port *pp = bus->sysdata;
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
......@@ -574,7 +574,7 @@ static void ks_pcie_msi_irq_handler(struct irq_desc *desc)
struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc);
u32 offset = irq - ks_pcie->msi_host_irq;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = pci->dev;
struct irq_chip *chip = irq_desc_get_chip(desc);
u32 vector, reg, pos;
......@@ -799,7 +799,7 @@ static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie)
return 0;
}
static int __init ks_pcie_host_init(struct pcie_port *pp)
static int __init ks_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
......@@ -1069,19 +1069,19 @@ static int ks_pcie_am654_set_mode(struct device *dev,
static const struct ks_pcie_of_data ks_pcie_rc_of_data = {
.host_ops = &ks_pcie_host_ops,
.version = 0x365A,
.version = DW_PCIE_VER_365A,
};
static const struct ks_pcie_of_data ks_pcie_am654_rc_of_data = {
.host_ops = &ks_pcie_am654_host_ops,
.mode = DW_PCIE_RC_TYPE,
.version = 0x490A,
.version = DW_PCIE_VER_490A,
};
static const struct ks_pcie_of_data ks_pcie_am654_ep_of_data = {
.ep_ops = &ks_pcie_am654_ep_ops,
.mode = DW_PCIE_EP_TYPE,
.version = 0x490A,
.version = DW_PCIE_VER_490A,
};
static const struct of_device_id ks_pcie_of_match[] = {
......@@ -1114,12 +1114,12 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
struct device_link **link;
struct gpio_desc *gpiod;
struct resource *res;
unsigned int version;
void __iomem *base;
u32 num_viewport;
struct phy **phy;
u32 num_lanes;
char name[10];
u32 version;
int ret;
int irq;
int i;
......@@ -1233,7 +1233,7 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
goto err_get_sync;
}
if (pci->version >= 0x480A)
if (dw_pcie_ver_is_ge(pci, 480A))
ret = ks_pcie_am654_set_mode(dev, mode);
else
ret = ks_pcie_set_mode(dev);
......@@ -1324,7 +1324,7 @@ static struct platform_driver ks_pcie_driver __refdata = {
.remove = __exit_p(ks_pcie_remove),
.driver = {
.name = "keystone-pcie",
.of_match_table = of_match_ptr(ks_pcie_of_match),
.of_match_table = ks_pcie_of_match,
},
};
builtin_platform_driver(ks_pcie_driver);
......@@ -32,15 +32,6 @@ struct ls_pcie_ep {
const struct ls_pcie_ep_drvdata *drvdata;
};
static int ls_pcie_establish_link(struct dw_pcie *pci)
{
return 0;
}
static const struct dw_pcie_ops dw_ls_pcie_ep_ops = {
.start_link = ls_pcie_establish_link,
};
static const struct pci_epc_features*
ls_pcie_ep_get_features(struct dw_pcie_ep *ep)
{
......@@ -106,19 +97,16 @@ static const struct dw_pcie_ep_ops ls_pcie_ep_ops = {
static const struct ls_pcie_ep_drvdata ls1_ep_drvdata = {
.ops = &ls_pcie_ep_ops,
.dw_pcie_ops = &dw_ls_pcie_ep_ops,
};
static const struct ls_pcie_ep_drvdata ls2_ep_drvdata = {
.func_offset = 0x20000,
.ops = &ls_pcie_ep_ops,
.dw_pcie_ops = &dw_ls_pcie_ep_ops,
};
static const struct ls_pcie_ep_drvdata lx2_ep_drvdata = {
.func_offset = 0x8000,
.ops = &ls_pcie_ep_ops,
.dw_pcie_ops = &dw_ls_pcie_ep_ops,
};
static const struct of_device_id ls_pcie_ep_of_match[] = {
......@@ -74,7 +74,7 @@ static void ls_pcie_fix_error_response(struct ls_pcie *pcie)
iowrite32(PCIE_ABSERR_SETTING, pci->dbi_base + PCIE_ABSERR);
}
static int ls_pcie_host_init(struct pcie_port *pp)
static int ls_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct ls_pcie *pcie = to_ls_pcie(pci);
......@@ -370,7 +370,7 @@ static int meson_pcie_link_up(struct dw_pcie *pci)
return 0;
}
static int meson_pcie_host_init(struct pcie_port *pp)
static int meson_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct meson_pcie *mp = to_meson_pcie(pci);
......@@ -217,7 +217,7 @@ static inline void al_pcie_target_bus_set(struct al_pcie *pcie,
static void __iomem *al_pcie_conf_addr_map_bus(struct pci_bus *bus,
unsigned int devfn, int where)
{
struct pcie_port *pp = bus->sysdata;
struct dw_pcie_rp *pp = bus->sysdata;
struct al_pcie *pcie = to_al_pcie(to_dw_pcie_from_pp(pp));
unsigned int busnr = bus->number;
struct al_pcie_target_bus_cfg *target_bus_cfg = &pcie->target_bus_cfg;
......@@ -245,7 +245,7 @@ static struct pci_ops al_child_pci_ops = {
static void al_pcie_config_prepare(struct al_pcie *pcie)
{
struct al_pcie_target_bus_cfg *target_bus_cfg;
struct pcie_port *pp = &pcie->pci->pp;
struct dw_pcie_rp *pp = &pcie->pci->pp;
unsigned int ecam_bus_mask;
u32 cfg_control_offset;
u8 subordinate_bus;
......@@ -289,7 +289,7 @@ static void al_pcie_config_prepare(struct al_pcie *pcie)
al_pcie_controller_writel(pcie, cfg_control_offset, reg);
}
static int al_pcie_host_init(struct pcie_port *pp)
static int al_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct al_pcie *pcie = to_al_pcie(pci);
......@@ -166,7 +166,7 @@ static int armada8k_pcie_start_link(struct dw_pcie *pci)
return 0;
}
static int armada8k_pcie_host_init(struct pcie_port *pp)
static int armada8k_pcie_host_init(struct dw_pcie_rp *pp)
{
u32 reg;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
......@@ -233,7 +233,7 @@ static int armada8k_add_pcie_port(struct armada8k_pcie *pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = pcie->pci;
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
......@@ -343,7 +343,7 @@ static struct platform_driver armada8k_pcie_driver = {
.probe = armada8k_pcie_probe,
.driver = {
.name = "armada8k-pcie",
.of_match_table = of_match_ptr(armada8k_pcie_of_match),
.of_match_table = armada8k_pcie_of_match,
.suppress_bind_attrs = true,
},
};
......@@ -97,7 +97,7 @@ static void artpec6_pcie_writel(struct artpec6_pcie *artpec6_pcie, u32 offset, u
static u64 artpec6_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 pci_addr)
{
struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
struct dw_pcie_ep *ep = &pci->ep;
switch (artpec6_pcie->mode) {
......@@ -315,7 +315,7 @@ static void artpec6_pcie_deassert_core_reset(struct artpec6_pcie *artpec6_pcie)
usleep_range(100, 200);
}
static int artpec6_pcie_host_init(struct pcie_port *pp)
static int artpec6_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
......@@ -154,9 +154,8 @@ static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
return 0;
}
static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no,
enum pci_barno bar, dma_addr_t cpu_addr,
enum dw_pcie_as_type as_type)
static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
dma_addr_t cpu_addr, enum pci_barno bar)
{
int ret;
u32 free_win;
......@@ -168,8 +167,8 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no,
return -EINVAL;
}
ret = dw_pcie_prog_inbound_atu(pci, func_no, free_win, bar, cpu_addr,
as_type);
ret = dw_pcie_prog_inbound_atu(pci, func_no, free_win, type,
cpu_addr, bar);
if (ret < 0) {
dev_err(pci->dev, "Failed to program IB window\n");
return ret;
......@@ -185,8 +184,9 @@ static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, u8 func_no,
phys_addr_t phys_addr,
u64 pci_addr, size_t size)
{
u32 free_win;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
u32 free_win;
int ret;
free_win = find_first_zero_bit(ep->ob_window_map, pci->num_ob_windows);
if (free_win >= pci->num_ob_windows) {
......@@ -194,8 +194,10 @@ static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, u8 func_no,
return -EINVAL;
}
dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM,
ret = dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM,
phys_addr, pci_addr, size);
if (ret)
return ret;
set_bit(free_win, ep->ob_window_map);
ep->outbound_addr[free_win] = phys_addr;
......@@ -213,7 +215,7 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
__dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags);
dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND);
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index);
clear_bit(atu_index, ep->ib_window_map);
ep->epf_bar[bar] = NULL;
}
......@@ -221,27 +223,25 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
int ret;
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
size_t size = epf_bar->size;
int flags = epf_bar->flags;
enum dw_pcie_as_type as_type;
u32 reg;
unsigned int func_offset = 0;
int ret, type;
u32 reg;
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = PCI_BASE_ADDRESS_0 + (4 * bar) + func_offset;
if (!(flags & PCI_BASE_ADDRESS_SPACE))
as_type = DW_PCIE_AS_MEM;
type = PCIE_ATU_TYPE_MEM;
else
as_type = DW_PCIE_AS_IO;
type = PCIE_ATU_TYPE_IO;
ret = dw_pcie_ep_inbound_atu(ep, func_no, bar,
epf_bar->phys_addr, as_type);
ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar);
if (ret)
return ret;
......@@ -289,7 +289,7 @@ static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
if (ret < 0)
return;
dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_OUTBOUND);
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_OB, atu_index);
clear_bit(atu_index, ep->ob_window_map);
}
......@@ -435,8 +435,7 @@ static void dw_pcie_ep_stop(struct pci_epc *epc)
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
if (pci->ops && pci->ops->stop_link)
pci->ops->stop_link(pci);
dw_pcie_stop_link(pci);
}
static int dw_pcie_ep_start(struct pci_epc *epc)
......@@ -444,10 +443,7 @@ static int dw_pcie_ep_start(struct pci_epc *epc)
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
if (!pci->ops || !pci->ops->start_link)
return -EINVAL;
return pci->ops->start_link(pci);
return dw_pcie_start_link(pci);
}
static const struct pci_epc_features*
......@@ -699,17 +695,15 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
if (!pci->dbi_base2) {
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2");
if (!res)
if (!res) {
pci->dbi_base2 = pci->dbi_base + SZ_4K;
else {
} else {
pci->dbi_base2 = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pci->dbi_base2))
return PTR_ERR(pci->dbi_base2);
}
}
dw_pcie_iatu_detect(pci);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
if (!res)
return -EINVAL;
......@@ -717,16 +711,16 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
ep->phys_base = res->start;
ep->addr_size = resource_size(res);
ep->ib_window_map = devm_kcalloc(dev,
BITS_TO_LONGS(pci->num_ib_windows),
sizeof(long),
dw_pcie_version_detect(pci);
dw_pcie_iatu_detect(pci);
ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows,
GFP_KERNEL);
if (!ep->ib_window_map)
return -ENOMEM;
ep->ob_window_map = devm_kcalloc(dev,
BITS_TO_LONGS(pci->num_ob_windows),
sizeof(long),
ep->ob_window_map = devm_bitmap_zalloc(dev, pci->num_ob_windows,
GFP_KERNEL);
if (!ep->ob_window_map)
return -ENOMEM;
......@@ -780,8 +774,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys,
epc->mem->window.page_size);
if (!ep->msi_mem) {
ret = -ENOMEM;
dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
return -ENOMEM;
goto err_exit_epc_mem;
}
if (ep->ops->get_features) {
......@@ -790,6 +785,19 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
return 0;
}
return dw_pcie_ep_init_complete(ep);
ret = dw_pcie_ep_init_complete(ep);
if (ret)
goto err_free_epc_mem;
return 0;
err_free_epc_mem:
pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
epc->mem->window.page_size);
err_exit_epc_mem:
pci_epc_mem_exit(epc);
return ret;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
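The dw_pcie_start_link()/dw_pcie_stop_link() wrappers used in the two hunks above are what let drivers drop their dummy start_link stubs elsewhere in this series (e.g. the layerscape endpoint and designware-plat hunks); they are presumably implemented along these lines in pcie-designware.h, treating a missing op as "nothing to do":

static inline int dw_pcie_start_link(struct dw_pcie *pci)
{
	if (pci->ops && pci->ops->start_link)
		return pci->ops->start_link(pci);

	return 0;
}

static inline void dw_pcie_stop_link(struct dw_pcie *pci)
{
	if (pci->ops && pci->ops->stop_link)
		pci->ops->stop_link(pci);
}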
......@@ -17,13 +17,11 @@
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/types.h>
#include <linux/regmap.h>
#include "pcie-designware.h"
struct dw_plat_pcie {
struct dw_pcie *pci;
struct regmap *regmap;
enum dw_pcie_device_mode mode;
};
......@@ -31,20 +29,9 @@ struct dw_plat_pcie_of_data {
enum dw_pcie_device_mode mode;
};
static const struct of_device_id dw_plat_pcie_of_match[];
static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = {
};
static int dw_plat_pcie_establish_link(struct dw_pcie *pci)
{
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.start_link = dw_plat_pcie_establish_link,
};
static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
......@@ -96,7 +83,7 @@ static int dw_plat_add_pcie_port(struct dw_plat_pcie *dw_plat_pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = dw_plat_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct dw_pcie_rp *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
......@@ -140,7 +127,6 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
dw_plat_pcie->pci = pci;
dw_plat_pcie->mode = mode;
......@@ -153,20 +139,21 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
return -ENODEV;
ret = dw_plat_add_pcie_port(dw_plat_pcie, pdev);
if (ret < 0)
return ret;
break;
case DW_PCIE_EP_TYPE:
if (!IS_ENABLED(CONFIG_PCIE_DW_PLAT_EP))
return -ENODEV;
pci->ep.ops = &pcie_ep_ops;
return dw_pcie_ep_init(&pci->ep);
ret = dw_pcie_ep_init(&pci->ep);
break;
default:
dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode);
ret = -EINVAL;
break;
}
return 0;
return ret;
}
static const struct dw_plat_pcie_of_data dw_plat_pcie_rc_of_data = {
......@@ -186,7 +186,7 @@ static int rockchip_pcie_start_link(struct dw_pcie *pci)
return 0;
}
static int rockchip_pcie_host_init(struct pcie_port *pp)
static int rockchip_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
......@@ -288,7 +288,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rockchip_pcie *rockchip;
struct pcie_port *pp;
struct dw_pcie_rp *pp;
int ret;
rockchip = devm_kzalloc(dev, sizeof(*rockchip), GFP_KERNEL);
......@@ -16,11 +16,9 @@
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
#include <linux/resource.h>
#include <linux/types.h>
#include <linux/interrupt.h>
......@@ -236,7 +234,7 @@ static int fu740_pcie_start_link(struct dw_pcie *pci)
return ret;
}
static int fu740_pcie_host_init(struct pcie_port *pp)
static int fu740_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct fu740_pcie *afp = to_fu740_pcie(pci);