Commit 0ab7b12c authored by Linus Torvalds

Merge tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "PCI changes:

   - add support for PCI on ARM64 boxes with ACPI. We already had this
     for theoretical spec-compliant hardware; now we're adding quirks
     for the actual hardware (Cavium, HiSilicon, Qualcomm, X-Gene)

   - add runtime PM support for hotplug ports

   - enable runtime suspend for Intel UHCI that uses platform-specific
     wakeup signaling

   - add yet another host bridge registration interface. We hope this is
     extensible enough to subsume the others

   - expose device revision in sysfs for DRM

   - to avoid device conflicts, make sure any VF BAR updates are done
     before enabling the VF

   - avoid unnecessary link retrains for ASPM

   - allow INTx masking on Mellanox devices that support it

   - allow access to non-standard VPD for Chelsio devices

   - update Broadcom iProc support for PAXB v2, PAXC v2, inbound DMA,
     etc

   - update Rockchip support for max-link-speed

   - add NVIDIA Tegra210 support

   - add Layerscape LS1046a support

   - update R-Car compatibility strings

   - add Qualcomm MSM8996 support

   - remove some uninformative bootup messages"

* tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (115 commits)
  PCI: Enable access to non-standard VPD for Chelsio devices (cxgb3)
  PCI: Expand "VPD access disabled" quirk message
  PCI: pciehp: Remove loading message
  PCI: hotplug: Remove hotplug core message
  PCI: Remove service driver load/unload messages
  PCI/AER: Log AER IRQ when claiming Root Port
  PCI/AER: Log errors with PCI device, not PCIe service device
  PCI/AER: Remove unused version macros
  PCI/PME: Log PME IRQ when claiming Root Port
  PCI/PME: Drop unused support for PMEs from Root Complex Event Collectors
  PCI: Move config space size macros to pci_regs.h
  x86/platform/intel-mid: Constify mid_pci_platform_pm
  PCI/ASPM: Don't retrain link if ASPM not possible
  PCI: iproc: Skip check for legacy IRQ on PAXC buses
  PCI: pciehp: Leave power indicator on when enabling already-enabled slot
  PCI: pciehp: Prioritize data-link event over presence detect
  PCI: rcar: Add gen3 fallback compatibility string for pcie-rcar
  PCI: rcar: Use gen2 fallback compatibility last
  PCI: rcar-gen2: Use gen2 fallback compatibility last
  PCI: rockchip: Move the deassert of pm/aclk/pclk after phy_init()
  ..
parents a9a16a6d b08d2e61
......@@ -294,3 +294,10 @@ Description:
a firmware bug to the system vendor. Writing to this file
taints the kernel with TAINT_FIRMWARE_WORKAROUND, which
reduces the supportability of your system.
What: /sys/bus/pci/devices/.../revision
Date: November 2016
Contact: Emil Velikov <emil.l.velikov@gmail.com>
Description:
This file contains the revision field of the PCI device.
The value comes from device config space. The file is read only.
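As an illustration only (not part of the ABI text), userspace such as a DRM helper can read the new attribute like any other sysfs file. A minimal sketch; the BDF 0000:01:00.0 is a made-up example, and the attribute is assumed to print the revision as a hex string such as "0xa1":

    #include <stdio.h>

    int main(void)
    {
            unsigned int rev;
            /* Hypothetical device path; substitute a device present on the system. */
            FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/revision", "r");

            if (!f)
                    return 1;
            /* %x accepts the leading "0x" the attribute prints. */
            if (fscanf(f, "%x", &rev) == 1)
                    printf("PCI revision: 0x%02x\n", rev);
            fclose(f);
            return 0;
    }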
* Broadcom iProc PCIe controller with the platform bus interface
Required properties:
- compatible: Must be "brcm,iproc-pcie" for PAXB, or "brcm,iproc-pcie-paxc"
for PAXC. PAXB-based root complex is used for external endpoint devices.
PAXC-based root complex is connected to emulated endpoint devices
internal to the ASIC
- compatible:
"brcm,iproc-pcie" for the first generation of PAXB based controller,
used in SoCs including NSP, Cygnus, NS2, and Pegasus
"brcm,iproc-pcie-paxb-v2" for the second generation of PAXB-based
controllers, used in Stingray
"brcm,iproc-pcie-paxc" for the first generation of PAXC based
controller, used in NS2
"brcm,iproc-pcie-paxc-v2" for the second generation of PAXC based
controller, used in Stingray
PAXB-based root complex is used for external endpoint devices. PAXC-based
root complex is connected to emulated endpoint devices internal to the ASIC
- reg: base address and length of the PCIe controller I/O register space
- #interrupt-cells: set to <1>
- interrupt-map-mask and interrupt-map, standard PCI properties to define the
......@@ -19,6 +26,10 @@ Required properties:
Optional properties:
- phys: phandle of the PCIe PHY device
- phy-names: must be "pcie-phy"
- dma-coherent: present if DMA operations are coherent
- dma-ranges: Some PAXB-based root complexes do not have inbound mapping done
by the ASIC after power on reset. In this case, SW is required to configure
the mapping, based on inbound memory regions specified by this property.
- brcm,pcie-ob: Some iProc SoCs do not have the outbound address mapping done
by the ASIC after power on reset. In this case, SW needs to configure it
......@@ -29,11 +40,6 @@ effective:
Required:
- brcm,pcie-ob-axi-offset: The offset from the AXI address to the internal
address used by the iProc PCIe core (not the PCIe address)
- brcm,pcie-ob-window-size: The outbound address mapping window size (in MB)
Optional:
- brcm,pcie-ob-oarr-size: Some iProc SoCs need the OARR size bit to be set to
increase the outbound window size
MSI support (optional):
......@@ -41,10 +47,19 @@ For older platforms without MSI integrated in the GIC, iProc PCIe core provides
an event queue based MSI support. The iProc MSI uses host memories to store
MSI posted writes in the event queues
- msi-parent: Link to the device node of the MSI controller. On newer iProc
platforms, the MSI controller may be gicv2m or gicv3-its. On older iProc
platforms without MSI support in its interrupt controller, one may use the
event queue based MSI support integrated within the iProc PCIe core.
On newer iProc platforms, gicv2m or gicv3-its based MSI support should be used
- msi-map: Maps a Requester ID to an MSI controller and associated MSI
sideband data
- msi-parent: Link to the device node of the MSI controller, used when no MSI
sideband data is passed between the iProc PCIe controller and the MSI
controller
Refer to the following binding documents for more detailed description on
the use of 'msi-map' and 'msi-parent':
Documentation/devicetree/bindings/pci/pci-msi.txt
Documentation/devicetree/bindings/interrupt-controller/msi.txt
When the iProc event queue based MSI is used, one needs to define the
following properties in the MSI device node:
......@@ -80,9 +95,7 @@ Example:
phy-names = "pcie-phy";
brcm,pcie-ob;
brcm,pcie-ob-oarr-size;
brcm,pcie-ob-axi-offset = <0x00000000>;
brcm,pcie-ob-window-size = <256>;
msi-parent = <&msi0>;
......
......@@ -15,6 +15,7 @@ Required properties:
- compatible: should contain the platform identifier such as:
"fsl,ls1021a-pcie", "snps,dw-pcie"
"fsl,ls2080a-pcie", "fsl,ls2085a-pcie", "snps,dw-pcie"
"fsl,ls1046a-pcie"
- reg: base addresses and lengths of the PCIe controller
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
......
......@@ -110,6 +110,20 @@ Power supplies for Tegra124:
- avdd-pll-erefe-supply: Power supply for PLLE (shared with USB3). Must
supply 1.05 V.
Power supplies for Tegra210:
- Required:
- avdd-pll-uerefe-supply: Power supply for PLLE (shared with USB3). Must
supply 1.05 V.
- hvddio-pex-supply: High-voltage supply for PCIe I/O and PCIe output
clocks. Must supply 1.8 V.
- dvddio-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
- dvdd-pex-pll-supply: Power supply for dedicated (internal) PCIe PLL. Must
supply 1.05 V.
- hvdd-pex-pll-e-supply: High-voltage supply for PLLE (shared with USB3).
Must supply 3.3 V.
- vddio-pex-ctl-supply: Power supply for PCIe control I/O partition. Must
supply 1.8 V.
Root ports are defined as subnodes of the PCIe controller node.
Required properties:
......@@ -436,3 +450,99 @@ Board DTS:
status = "okay";
};
};
Tegra210:
---------
SoC DTSI:
pcie-controller@01003000 {
compatible = "nvidia,tegra210-pcie";
device_type = "pci";
reg = <0x0 0x01003000 0x0 0x00000800 /* PADS registers */
0x0 0x01003800 0x0 0x00000800 /* AFI registers */
0x0 0x02000000 0x0 0x10000000>; /* configuration space */
reg-names = "pads", "afi", "cs";
interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
<GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
interrupt-names = "intr", "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &gic GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
bus-range = <0x00 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x82000000 0 0x01000000 0x0 0x01000000 0 0x00001000 /* port 0 configuration space */
0x82000000 0 0x01001000 0x0 0x01001000 0 0x00001000 /* port 1 configuration space */
0x81000000 0 0x0 0x0 0x12000000 0 0x00010000 /* downstream I/O (64 KiB) */
0x82000000 0 0x13000000 0x0 0x13000000 0 0x0d000000 /* non-prefetchable memory (208 MiB) */
0xc2000000 0 0x20000000 0x0 0x20000000 0 0x20000000>; /* prefetchable memory (512 MiB) */
clocks = <&tegra_car TEGRA210_CLK_PCIE>,
<&tegra_car TEGRA210_CLK_AFI>,
<&tegra_car TEGRA210_CLK_PLL_E>,
<&tegra_car TEGRA210_CLK_CML0>;
clock-names = "pex", "afi", "pll_e", "cml";
resets = <&tegra_car 70>,
<&tegra_car 72>,
<&tegra_car 74>;
reset-names = "pex", "afi", "pcie_x";
status = "disabled";
pci@1,0 {
device_type = "pci";
assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
reg = <0x000800 0 0 0 0>;
status = "disabled";
#address-cells = <3>;
#size-cells = <2>;
ranges;
nvidia,num-lanes = <4>;
};
pci@2,0 {
device_type = "pci";
assigned-addresses = <0x82001000 0 0x01001000 0 0x1000>;
reg = <0x001000 0 0 0 0>;
status = "disabled";
#address-cells = <3>;
#size-cells = <2>;
ranges;
nvidia,num-lanes = <1>;
};
};
Board DTS:
pcie-controller@01003000 {
status = "okay";
avdd-pll-uerefe-supply = <&avdd_1v05_pll>;
hvddio-pex-supply = <&vdd_1v8>;
dvddio-pex-supply = <&vdd_pex_1v05>;
dvdd-pex-pll-supply = <&vdd_pex_1v05>;
hvdd-pex-pll-e-supply = <&vdd_1v8>;
vddio-pex-ctl-supply = <&vdd_1v8>;
pci@1,0 {
phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-0}>,
<&{/padctl@7009f000/pads/pcie/lanes/pcie-1}>,
<&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>,
<&{/padctl@7009f000/pads/pcie/lanes/pcie-3}>;
phy-names = "pcie-0", "pcie-1", "pcie-2", "pcie-3";
status = "okay";
};
pci@2,0 {
phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
phy-names = "pcie-0";
status = "okay";
};
};
......@@ -18,3 +18,9 @@ driver implementation may support the following properties:
host bridges in the system, otherwise potentially conflicting domain numbers
may be assigned to root buses behind different host bridges. The domain
number for each host bridge in the system must be unique.
- max-link-speed:
If present, this property specifies the maximum supported PCIe link speed
(generation). Host drivers can use it to avoid needless operations for
unsupported speeds, for instance attempting link training at a speed the
slot cannot reach. Must be '4' for gen4, '3' for gen3, '2' for gen2, and
'1' for gen1. Any other value is invalid.
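For illustration only (not part of the binding), a host driver might consult this property through the of_pci_get_max_link_speed() helper added elsewhere in this series; the example_pcie structure and the gen3 fallback below are hypothetical driver details, a sketch rather than a prescribed implementation:

    #include <linux/of.h>
    #include <linux/of_pci.h>

    /* Hypothetical per-controller state, only for this sketch. */
    struct example_pcie {
            struct device_node *node;
            int link_gen;   /* PCIe generation the driver will program */
    };

    static void example_pcie_limit_link_speed(struct example_pcie *pcie)
    {
            int gen = of_pci_get_max_link_speed(pcie->node);

            /*
             * A negative return means "max-link-speed" is absent or invalid,
             * so fall back to the controller's hardware maximum (gen3 is an
             * arbitrary choice for this example).
             */
            pcie->link_gen = (gen > 0) ? gen : 3;
    }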
......@@ -7,6 +7,7 @@
- "qcom,pcie-ipq8064" for ipq8064
- "qcom,pcie-apq8064" for apq8064
- "qcom,pcie-apq8084" for apq8084
- "qcom,pcie-msm8996" for msm8996 or apq8096
- reg:
Usage: required
......@@ -92,6 +93,17 @@
- "aux" Auxiliary (AUX) clock
- "bus_master" Master AXI clock
- "bus_slave" Slave AXI clock
- clock-names:
Usage: required for msm8996/apq8096
Value type: <stringlist>
Definition: Should contain the following entries
- "pipe" Pipe Clock driving internal logic
- "aux" Auxiliary (AUX) clock
- "cfg" Configuration clock
- "bus_master" Master AXI clock
- "bus_slave" Slave AXI clock
- resets:
Usage: required
Value type: <prop-encoded-array>
......@@ -115,7 +127,7 @@
- "core" Core reset
- power-domains:
Usage: required for apq8084
Usage: required for apq8084 and msm8996/apq8096
Value type: <prop-encoded-array>
Definition: A phandle and power domain specifier pair to the
power domain which is responsible for collapsing
......
......@@ -7,6 +7,7 @@ compatible: "renesas,pcie-r8a7779" for the R8A7779 SoC;
"renesas,pcie-r8a7793" for the R8A7793 SoC;
"renesas,pcie-r8a7795" for the R8A7795 SoC;
"renesas,pcie-rcar-gen2" for a generic R-Car Gen2 compatible device.
"renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.
When compatible with the generic version, nodes must list the
SoC-specific version corresponding to the platform first
......
......@@ -17,6 +17,7 @@ that support it. For example, a given bus might look like this:
| |-- resource0
| |-- resource1
| |-- resource2
| |-- revision
| |-- rom
| |-- subsystem_device
| |-- subsystem_vendor
......@@ -41,6 +42,7 @@ files, each with their own function.
resource PCI resource host addresses (ascii, ro)
resource0..N PCI resource N, if present (binary, mmap, rw[1])
resource0_wc..N_wc PCI WC map resource N, if prefetchable (binary, mmap)
revision PCI revision (ascii, ro)
rom PCI ROM resource, if present (binary, ro)
subsystem_device PCI subsystem device (ascii, ro)
subsystem_vendor PCI subsystem vendor (ascii, ro)
......
......@@ -7,6 +7,32 @@ / {
model = "NVIDIA Jetson TX1 Developer Kit";
compatible = "nvidia,p2371-2180", "nvidia,tegra210";
pcie-controller@01003000 {
status = "okay";
avdd-pll-uerefe-supply = <&avdd_1v05_pll>;
hvddio-pex-supply = <&vdd_1v8>;
dvddio-pex-supply = <&vdd_pex_1v05>;
dvdd-pex-pll-supply = <&vdd_pex_1v05>;
hvdd-pex-pll-e-supply = <&vdd_1v8>;
vddio-pex-ctl-supply = <&vdd_1v8>;
pci@1,0 {
phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-0}>,
<&{/padctl@7009f000/pads/pcie/lanes/pcie-1}>,
<&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>,
<&{/padctl@7009f000/pads/pcie/lanes/pcie-3}>;
phy-names = "pcie-0", "pcie-1", "pcie-2", "pcie-3";
status = "okay";
};
pci@2,0 {
phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
phy-names = "pcie-0";
status = "okay";
};
};
host1x@50000000 {
dsi@54300000 {
status = "okay";
......
......@@ -11,6 +11,69 @@ / {
#address-cells = <2>;
#size-cells = <2>;
pcie-controller@01003000 {
compatible = "nvidia,tegra210-pcie";
device_type = "pci";
reg = <0x0 0x01003000 0x0 0x00000800 /* PADS registers */
0x0 0x01003800 0x0 0x00000800 /* AFI registers */
0x0 0x02000000 0x0 0x10000000>; /* configuration space */
reg-names = "pads", "afi", "cs";
interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
<GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
interrupt-names = "intr", "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &gic GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
bus-range = <0x00 0xff>;
#address-cells = <3>;
#size-cells = <2>;
ranges = <0x82000000 0 0x01000000 0x0 0x01000000 0 0x00001000 /* port 0 configuration space */
0x82000000 0 0x01001000 0x0 0x01001000 0 0x00001000 /* port 1 configuration space */
0x81000000 0 0x0 0x0 0x12000000 0 0x00010000 /* downstream I/O (64 KiB) */
0x82000000 0 0x13000000 0x0 0x13000000 0 0x0d000000 /* non-prefetchable memory (208 MiB) */
0xc2000000 0 0x20000000 0x0 0x20000000 0 0x20000000>; /* prefetchable memory (512 MiB) */
clocks = <&tegra_car TEGRA210_CLK_PCIE>,
<&tegra_car TEGRA210_CLK_AFI>,
<&tegra_car TEGRA210_CLK_PLL_E>,
<&tegra_car TEGRA210_CLK_CML0>;
clock-names = "pex", "afi", "pll_e", "cml";
resets = <&tegra_car 70>,
<&tegra_car 72>,
<&tegra_car 74>;
reset-names = "pex", "afi", "pcie_x";
status = "disabled";
pci@1,0 {
device_type = "pci";
assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
reg = <0x000800 0 0 0 0>;
status = "disabled";
#address-cells = <3>;
#size-cells = <2>;
ranges;
nvidia,num-lanes = <4>;
};
pci@2,0 {
device_type = "pci";
assigned-addresses = <0x82001000 0 0x01001000 0 0x1000>;
reg = <0x001000 0 0 0 0>;
status = "disabled";
#address-cells = <3>;
#size-cells = <2>;
ranges;
nvidia,num-lanes = <1>;
};
};
host1x@50000000 {
compatible = "nvidia,tegra210-host1x", "simple-bus";
reg = <0x0 0x50000000 0x0 0x00034000>;
......
......@@ -114,6 +114,19 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
return 0;
}
static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci)
{
struct resource_entry *entry, *tmp;
int status;
status = acpi_pci_probe_root_resources(ci);
resource_list_for_each_entry_safe(entry, tmp, &ci->resources) {
if (!(entry->res->flags & IORESOURCE_WINDOW))
resource_list_destroy_entry(entry);
}
return status;
}
/*
* Lookup the bus range for the domain in MCFG, and set up config space
* mapping.
......@@ -121,31 +134,33 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
static struct pci_config_window *
pci_acpi_setup_ecam_mapping(struct acpi_pci_root *root)
{
struct device *dev = &root->device->dev;
struct resource *bus_res = &root->secondary;
u16 seg = root->segment;
struct pci_config_window *cfg;
struct pci_ecam_ops *ecam_ops;
struct resource cfgres;
unsigned int bsz;
/* Use address from _CBA if present, otherwise lookup MCFG */
if (!root->mcfg_addr)
root->mcfg_addr = pci_mcfg_lookup(seg, bus_res);
struct acpi_device *adev;
struct pci_config_window *cfg;
int ret;
if (!root->mcfg_addr) {
dev_err(&root->device->dev, "%04x:%pR ECAM region not found\n",
seg, bus_res);
ret = pci_mcfg_lookup(root, &cfgres, &ecam_ops);
if (ret) {
dev_err(dev, "%04x:%pR ECAM region not found\n", seg, bus_res);
return NULL;
}
bsz = 1 << pci_generic_ecam_ops.bus_shift;
cfgres.start = root->mcfg_addr + bus_res->start * bsz;
cfgres.end = cfgres.start + resource_size(bus_res) * bsz - 1;
cfgres.flags = IORESOURCE_MEM;
cfg = pci_ecam_create(&root->device->dev, &cfgres, bus_res,
&pci_generic_ecam_ops);
adev = acpi_resource_consumer(&cfgres);
if (adev)
dev_info(dev, "ECAM area %pR reserved by %s\n", &cfgres,
dev_name(&adev->dev));
else
dev_warn(dev, FW_BUG "ECAM area %pR not reserved in ACPI namespace\n",
&cfgres);
cfg = pci_ecam_create(dev, &cfgres, bus_res, ecam_ops);
if (IS_ERR(cfg)) {
dev_err(&root->device->dev, "%04x:%pR error %ld mapping ECAM\n",
seg, bus_res, PTR_ERR(cfg));
dev_err(dev, "%04x:%pR error %ld mapping ECAM\n", seg, bus_res,
PTR_ERR(cfg));
return NULL;
}
......@@ -159,33 +174,37 @@ static void pci_acpi_generic_release_info(struct acpi_pci_root_info *ci)
ri = container_of(ci, struct acpi_pci_generic_root_info, common);
pci_ecam_free(ri->cfg);
kfree(ci->ops);
kfree(ri);
}
static struct acpi_pci_root_ops acpi_pci_root_ops = {
.release_info = pci_acpi_generic_release_info,
};
/* Interface called from ACPI code to setup PCI host controller */
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
int node = acpi_get_node(root->device->handle);
struct acpi_pci_generic_root_info *ri;
struct pci_bus *bus, *child;
struct acpi_pci_root_ops *root_ops;
ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
if (!ri)
return NULL;
root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node);
if (!root_ops)
return NULL;
ri->cfg = pci_acpi_setup_ecam_mapping(root);
if (!ri->cfg) {
kfree(ri);
kfree(root_ops);
return NULL;
}
acpi_pci_root_ops.pci_ops = &ri->cfg->ops->pci_ops;
bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &ri->common,
ri->cfg);
root_ops->release_info = pci_acpi_generic_release_info;
root_ops->prepare_resources = pci_acpi_root_prepare_resources;
root_ops->pci_ops = &ri->cfg->ops->pci_ops;
bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
if (!bus)
return NULL;
......
......@@ -22,6 +22,7 @@
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
/* Structure to hold entries from the MCFG table */
struct mcfg_entry {
......@@ -32,12 +33,166 @@ struct mcfg_entry {
u8 bus_end;
};
#ifdef CONFIG_PCI_QUIRKS
struct mcfg_fixup {
char oem_id[ACPI_OEM_ID_SIZE + 1];
char oem_table_id[ACPI_OEM_TABLE_ID_SIZE + 1];
u32 oem_revision;
u16 segment;
struct resource bus_range;
struct pci_ecam_ops *ops;
struct resource cfgres;
};
#define MCFG_BUS_RANGE(start, end) DEFINE_RES_NAMED((start), \
((end) - (start) + 1), \
NULL, IORESOURCE_BUS)
#define MCFG_BUS_ANY MCFG_BUS_RANGE(0x0, 0xff)
static struct mcfg_fixup mcfg_quirks[] = {
/* { OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */
#define QCOM_ECAM32(seg) \
{ "QCOM ", "QDF2432 ", 1, seg, MCFG_BUS_ANY, &pci_32b_ops }
QCOM_ECAM32(0),
QCOM_ECAM32(1),
QCOM_ECAM32(2),
QCOM_ECAM32(3),
QCOM_ECAM32(4),
QCOM_ECAM32(5),
QCOM_ECAM32(6),
QCOM_ECAM32(7),
#define HISI_QUAD_DOM(table_id, seg, ops) \
{ "HISI ", table_id, 0, (seg) + 0, MCFG_BUS_ANY, ops }, \
{ "HISI ", table_id, 0, (seg) + 1, MCFG_BUS_ANY, ops }, \
{ "HISI ", table_id, 0, (seg) + 2, MCFG_BUS_ANY, ops }, \
{ "HISI ", table_id, 0, (seg) + 3, MCFG_BUS_ANY, ops }
HISI_QUAD_DOM("HIP05 ", 0, &hisi_pcie_ops),
HISI_QUAD_DOM("HIP06 ", 0, &hisi_pcie_ops),
HISI_QUAD_DOM("HIP07 ", 0, &hisi_pcie_ops),
HISI_QUAD_DOM("HIP07 ", 4, &hisi_pcie_ops),
HISI_QUAD_DOM("HIP07 ", 8, &hisi_pcie_ops),
HISI_QUAD_DOM("HIP07 ", 12, &hisi_pcie_ops),
#define THUNDER_PEM_RES(addr, node) \
DEFINE_RES_MEM((addr) + ((u64) (node) << 44), 0x39 * SZ_16M)
#define THUNDER_PEM_QUIRK(rev, node) \
{ "CAVIUM", "THUNDERX", rev, 4 + (10 * (node)), MCFG_BUS_ANY, \
&thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88001f000000UL, node) }, \
{ "CAVIUM", "THUNDERX", rev, 5 + (10 * (node)), MCFG_BUS_ANY, \
&thunder_pem_ecam_ops, THUNDER_PEM_RES(0x884057000000UL, node) }, \
{ "CAVIUM", "THUNDERX", rev, 6 + (10 * (node)), MCFG_BUS_ANY, \
&thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88808f000000UL, node) }, \
{ "CAVIUM", "THUNDERX", rev, 7 + (10 * (node)), MCFG_BUS_ANY, \
&thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89001f000000UL, node) }, \
{ "CAVIUM", "THUNDERX", rev, 8 + (10 * (node)), MCFG_BUS_ANY, \
&thunder_pem_ecam_ops, THUNDER_PEM_RES(0x894057000000UL, node) }, \
{ "CAVIUM", "THUNDERX", rev, 9 + (10 * (node)), MCFG_BUS_ANY, \
&thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89808f000000UL, node) }
/* SoC pass2.x */
THUNDER_PEM_QUIRK(1, 0),
THUNDER_PEM_QUIRK(1, 1),
#define THUNDER_ECAM_QUIRK(rev, seg) \
{ "CAVIUM", "THUNDERX", rev, seg, MCFG_BUS_ANY, \
&pci_thunder_ecam_ops }
/* SoC pass1.x */
THUNDER_PEM_QUIRK(2, 0), /* off-chip devices */
THUNDER_PEM_QUIRK(2, 1), /* off-chip devices */
THUNDER_ECAM_QUIRK(2, 0),
THUNDER_ECAM_QUIRK(2, 1),
THUNDER_ECAM_QUIRK(2, 2),
THUNDER_ECAM_QUIRK(2, 3),
THUNDER_ECAM_QUIRK(2, 10),
THUNDER_ECAM_QUIRK(2, 11),
THUNDER_ECAM_QUIRK(2, 12),
THUNDER_ECAM_QUIRK(2, 13),
#define XGENE_V1_ECAM_MCFG(rev, seg) \
{"APM ", "XGENE ", rev, seg, MCFG_BUS_ANY, \
&xgene_v1_pcie_ecam_ops }
#define XGENE_V2_ECAM_MCFG(rev, seg) \
{"APM ", "XGENE ", rev, seg, MCFG_BUS_ANY, \
&xgene_v2_pcie_ecam_ops }
/* X-Gene SoC with v1 PCIe controller */
XGENE_V1_ECAM_MCFG(1, 0),
XGENE_V1_ECAM_MCFG(1, 1),
XGENE_V1_ECAM_MCFG(1, 2),
XGENE_V1_ECAM_MCFG(1, 3),
XGENE_V1_ECAM_MCFG(1, 4),
XGENE_V1_ECAM_MCFG(2, 0),
XGENE_V1_ECAM_MCFG(2, 1),
XGENE_V1_ECAM_MCFG(2, 2),
XGENE_V1_ECAM_MCFG(2, 3),
XGENE_V1_ECAM_MCFG(2, 4),
/* X-Gene SoC with v2.1 PCIe controller */
XGENE_V2_ECAM_MCFG(3, 0),
XGENE_V2_ECAM_MCFG(3, 1),
/* X-Gene SoC with v2.2 PCIe controller */
XGENE_V2_ECAM_MCFG(4, 0),
XGENE_V2_ECAM_MCFG(4, 1),
XGENE_V2_ECAM_MCFG(4, 2),
};
static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
static char mcfg_oem_table_id[ACPI_OEM_TABLE_ID_SIZE];
static u32 mcfg_oem_revision;
static int pci_mcfg_quirk_matches(struct mcfg_fixup *f, u16 segment,
struct resource *bus_range)
{
if (!memcmp(f->oem_id, mcfg_oem_id, ACPI_OEM_ID_SIZE) &&
!memcmp(f->oem_table_id, mcfg_oem_table_id,
ACPI_OEM_TABLE_ID_SIZE) &&
f->oem_revision == mcfg_oem_revision &&
f->segment == segment &&
resource_contains(&f->bus_range, bus_range))
return 1;
return 0;
}
#endif
static void pci_mcfg_apply_quirks(struct acpi_pci_root *root,
struct resource *cfgres,
struct pci_ecam_ops **ecam_ops)
{
#ifdef CONFIG_PCI_QUIRKS
u16 segment = root->segment;
struct resource *bus_range = &root->secondary;
struct mcfg_fixup *f;
int i;
for (i = 0, f = mcfg_quirks; i < ARRAY_SIZE(mcfg_quirks); i++, f++) {
if (pci_mcfg_quirk_matches(f, segment, bus_range)) {
if (f->cfgres.start)
*cfgres = f->cfgres;
if (f->ops)
*ecam_ops = f->ops;
dev_info(&root->device->dev, "MCFG quirk: ECAM at %pR for %pR with %ps\n",
cfgres, bus_range, *ecam_ops);
return;
}
}
#endif
}
/* List to save MCFG entries */
static LIST_HEAD(pci_mcfg_list);
phys_addr_t pci_mcfg_lookup(u16 seg, struct resource *bus_res)
int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
struct pci_ecam_ops **ecam_ops)
{
struct pci_ecam_ops *ops = &pci_generic_ecam_ops;
struct resource *bus_res = &root->secondary;
u16 seg = root->segment;
struct mcfg_entry *e;
struct resource res;
/* Use address from _CBA if present, otherwise lookup MCFG */
if (root->mcfg_addr)
goto skip_lookup;
/*
* We expect exact match, unless MCFG entry end bus covers more than
......@@ -45,10 +200,32 @@ phys_addr_t pci_mcfg_lookup(u16 seg, struct resource *bus_res)
*/
list_for_each_entry(e, &pci_mcfg_list, list) {
if (e->segment == seg && e->bus_start == bus_res->start &&
e->bus_end >= bus_res->end)
return e->addr;
e->bus_end >= bus_res->end) {
root->mcfg_addr = e->addr;
}
}
skip_lookup:
memset(&res, 0, sizeof(res));
if (root->mcfg_addr) {
res.start = root->mcfg_addr + (bus_res->start << 20);
res.end = res.start + (resource_size(bus_res) << 20) - 1;
res.flags = IORESOURCE_MEM;
}
/*
* Allow quirks to override default ECAM ops and CFG resource
* range. This may even fabricate a CFG resource range in case
* MCFG does not have it. Invalid CFG start address means MCFG
* firmware bug or we need another quirk in array.
*/
pci_mcfg_apply_quirks(root, &res, &ops);
if (!res.start)
return -ENXIO;
*cfgres = res;
*ecam_ops = ops;
return 0;
}
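For orientation, the shift by 20 reflects the standard ECAM layout of 1 MiB of config space per bus (32 devices x 8 functions x 4 KiB). As a worked example with an illustrative, made-up base address: for a root whose bus range is 0x00-0xff and whose MCFG/_CBA address is 0x40000000,

    res.start = 0x40000000 + (0x00  << 20)     = 0x40000000
    res.end   = 0x40000000 + (0x100 << 20) - 1 = 0x4fffffff   /* 256 MiB window */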
......@@ -79,6 +256,13 @@ static __init int pci_mcfg_parse(struct acpi_table_header *header)
list_add(&e->list, &pci_mcfg_list);
}
#ifdef CONFIG_PCI_QUIRKS
/* Save MCFG IDs and revision for quirks matching */
memcpy(mcfg_oem_id, header->oem_id, ACPI_OEM_ID_SIZE);
memcpy(mcfg_oem_table_id, header->oem_table_id, ACPI_OEM_TABLE_ID_SIZE);
mcfg_oem_revision = header->oem_revision;
#endif
pr_info("MCFG table detected, %d entries\n", n);
return 0;
}
......
......@@ -664,3 +664,60 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
return (type & types) ? 0 : 1;
}
EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
static int acpi_dev_consumes_res(struct acpi_device *adev, struct resource *res)
{
struct list_head resource_list;
struct resource_entry *rentry;
int ret, found = 0;
INIT_LIST_HEAD(&resource_list);
ret = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
if (ret < 0)
return 0;
list_for_each_entry(rentry, &resource_list, node) {
if (resource_contains(rentry->res, res)) {
found = 1;
break;
}
}
acpi_dev_free_resource_list(&resource_list);
return found;
}
static acpi_status acpi_res_consumer_cb(acpi_handle handle, u32 depth,
void *context, void **ret)
{
struct resource *res = context;
struct acpi_device **consumer = (struct acpi_device **) ret;
struct acpi_device *adev;
if (acpi_bus_get_device(handle, &adev))
return AE_OK;
if (acpi_dev_consumes_res(adev, res)) {
*consumer = adev;
return AE_CTRL_TERMINATE;
}
return AE_OK;
}
/**
* acpi_resource_consumer - Find the ACPI device that consumes @res.
* @res: Resource to search for.
*
* Search the current resource settings (_CRS) of every ACPI device node
* for @res. If we find an ACPI device whose _CRS includes @res, return
* it. Otherwise, return NULL.
*/
struct acpi_device *acpi_resource_consumer(struct resource *res)
{
struct acpi_device *consumer = NULL;
acpi_get_devices(NULL, acpi_res_consumer_cb, res, (void **) &consumer);
return consumer;
}
......@@ -4020,49 +4020,51 @@ int mlx4_restart_one(struct pci_dev *pdev)
return err;
}
#define MLX_SP(id) { PCI_VDEVICE(MELLANOX, id), MLX4_PCI_DEV_FORCE_SENSE_PORT }
#define MLX_VF(id) { PCI_VDEVICE(MELLANOX, id), MLX4_PCI_DEV_IS_VF }
#define MLX_GN(id) { PCI_VDEVICE(MELLANOX, id), 0 }
static const struct pci_device_id mlx4_pci_table[] = {
/* MT25408 "Hermon" SDR */
{ PCI_VDEVICE(MELLANOX, 0x6340), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" DDR */
{ PCI_VDEVICE(MELLANOX, 0x634a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" QDR */
{ PCI_VDEVICE(MELLANOX, 0x6354), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" DDR PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x6732), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" QDR PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x673c), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" EN 10GigE */
{ PCI_VDEVICE(MELLANOX, 0x6368), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25408 "Hermon" EN 10GigE PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x6750), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25458 ConnectX EN 10GBASE-T 10GigE */
{ PCI_VDEVICE(MELLANOX, 0x6372), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25458 ConnectX EN 10GBASE-T+Gen2 10GigE */
{ PCI_VDEVICE(MELLANOX, 0x675a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT26468 ConnectX EN 10GigE PCIe gen2*/
{ PCI_VDEVICE(MELLANOX, 0x6764), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
{ PCI_VDEVICE(MELLANOX, 0x6746), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT26478 ConnectX2 40GigE PCIe gen2 */
{ PCI_VDEVICE(MELLANOX, 0x676e), MLX4_PCI_DEV_FORCE_SENSE_PORT },
/* MT25400 Family [ConnectX-2 Virtual Function] */
{ PCI_VDEVICE(MELLANOX, 0x1002), MLX4_PCI_DEV_IS_VF },
/* MT25408 "Hermon" */
MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_SDR), /* SDR */
MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_DDR), /* DDR */
MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_QDR), /* QDR */
MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_DDR_GEN2), /* DDR Gen2 */
MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_QDR_GEN2), /* QDR Gen2 */
MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_EN), /* EN 10GigE */
MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_EN_GEN2), /* EN 10GigE Gen2 */
/* MT25458 ConnectX EN 10GBASE-T */
MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN),
MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_T_GEN2), /* Gen2 */
/* MT26468 ConnectX EN 10GigE PCIe Gen2*/
MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_GEN2),
/* MT26438 ConnectX EN 40GigE PCIe Gen2 5GT/s */
MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_5_GEN2),
/* MT26478 ConnectX2 40GigE PCIe Gen2 */
MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX2),
/* MT25400 Family [ConnectX-2] */
MLX_VF(0x1002), /* Virtual Function */
/* MT27500 Family [ConnectX-3] */
{ PCI_VDEVICE(MELLANOX, 0x1003), 0 },
/* MT27500 Family [ConnectX-3 Virtual Function] */
{ PCI_VDEVICE(MELLANOX, 0x1004), MLX4_PCI_DEV_IS_VF },
{ PCI_VDEVICE(MELLANOX, 0x1005), 0 }, /* MT27510 Family */
{ PCI_VDEVICE(MELLANOX, 0x1006), 0 }, /* MT27511 Family */
{ PCI_VDEVICE(MELLANOX, 0x1007), 0 }, /* MT27520 Family */
{ PCI_VDEVICE(MELLANOX, 0x1008), 0 }, /* MT27521 Family */
{ PCI_VDEVICE(MELLANOX, 0x1009), 0 }, /* MT27530 Family */
{ PCI_VDEVICE(MELLANOX, 0x100a), 0 }, /* MT27531 Family */
{ PCI_VDEVICE(MELLANOX, 0x100b), 0 }, /* MT27540 Family */
{ PCI_VDEVICE(MELLANOX, 0x100c), 0 }, /* MT27541 Family */
{ PCI_VDEVICE(MELLANOX, 0x100d), 0 }, /* MT27550 Family */
{ PCI_VDEVICE(MELLANOX, 0x100e), 0 }, /* MT27551 Family */
{ PCI_VDEVICE(MELLANOX, 0x100f), 0 }, /* MT27560 Family */
{ PCI_VDEVICE(MELLANOX, 0x1010), 0 }, /* MT27561 Family */
MLX_GN(PCI_DEVICE_ID_MELLANOX_CONNECTX3),
MLX_VF(0x1004), /* Virtual Function */
MLX_GN(0x1005), /* MT27510 Family */
MLX_GN(0x1006), /* MT27511 Family */
MLX_GN(PCI_DEVICE_ID_MELLANOX_CONNECTX3_PRO), /* MT27520 Family */
MLX_GN(0x1008), /* MT27521 Family */
MLX_GN(0x1009), /* MT27530 Family */
MLX_GN(0x100a), /* MT27531 Family */
MLX_GN(0x100b), /* MT27540 Family */
MLX_GN(0x100c), /* MT27541 Family */
MLX_GN(0x100d), /* MT27550 Family */
MLX_GN(0x100e), /* MT27551 Family */
MLX_GN(0x100f), /* MT27560 Family */
MLX_GN(0x1010), /* MT27561 Family */
/*
* See the mellanox_check_broken_intx_masking() quirk when
* adding devices
*/
{ 0, }
};
......
......@@ -119,6 +119,27 @@ int of_get_pci_domain_nr(struct device_node *node)
}
EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);
/**
* Find the maximum link speed limit for the given device node by reading
* its "max-link-speed" property.
*
* @node: device tree node with the max link speed information
*
* Returns the associated max link speed from DT, or a negative value if the
* required property is not found or is invalid.
*/
int of_pci_get_max_link_speed(struct device_node *node)
{
u32 max_link_speed;
if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
max_link_speed > 4)
return -EINVAL;
return max_link_speed;
}
EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);
/**
* of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
* is present and valid
......
......@@ -142,10 +142,22 @@ int pci_generic_config_write32(struct pci_bus *bus, unsigned int devfn,
if (size == 4) {
writel(val, addr);
return PCIBIOS_SUCCESSFUL;
} else {
mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
}
/*
* In general, hardware that supports only 32-bit writes on PCI is
* not spec-compliant. For example, software may perform a 16-bit
* write. If the hardware only supports 32-bit accesses, we must
* do a 32-bit read, merge in the 16 bits we intend to write,
* followed by a 32-bit write. If the 16 bits we *don't* intend to
* write happen to have any RW1C (write-one-to-clear) bits set, we
* just inadvertently cleared something we shouldn't have.
*/
dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
size, pci_domain_nr(bus), bus->number,
PCI_SLOT(devfn), PCI_FUNC(devfn), where);
mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
tmp = readl(addr) & mask;
tmp |= val << ((where & 0x3) * 8);
writel(tmp, addr);
......
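To make the merge above concrete, here is a worked example (register offsets follow the standard PCI header; the values are illustrative). A 2-byte write of val to the Command register at offset 0x04 through a 32-bit-only back end maps addr at the dword-aligned offset 0x04, and with size = 2 and where & 0x3 = 0:

    mask = ~(((1 << 16) - 1) << 0) = 0xffff0000
    tmp  = readl(addr) & 0xffff0000;   /* keeps Status (bytes 2-3), drops Command */
    tmp |= val;                        /* new Command value in bytes 0-1 */
    writel(tmp, addr);                 /* rewrites Status as a side effect */

Any Status bits that read back as 1 are RW1C, so writing them back as 1 clears them, which is exactly the corruption the ratelimited warning points out.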
......@@ -320,7 +320,7 @@ void pci_bus_add_device(struct pci_dev *dev)
pci_fixup_device(pci_fixup_final, dev);
pci_create_sysfs_dev_files(dev);
pci_proc_attach_device(dev);
pci_bridge_d3_device_changed(dev);
pci_bridge_d3_update(dev);
dev->match_driver = true;
retval = device_attach(&dev->dev);
......
......@@ -162,3 +162,15 @@ struct pci_ecam_ops pci_generic_ecam_ops = {
.write = pci_generic_config_write,
}
};
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
/* ECAM ops for 32-bit access only (non-compliant) */
struct pci_ecam_ops pci_32b_ops = {
.bus_shift = 20,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
.read = pci_generic_config_read32,
.write = pci_generic_config_write32,
}
};
#endif
......@@ -69,7 +69,7 @@ config PCI_IMX6
config PCI_TEGRA
bool "NVIDIA Tegra PCIe controller"
depends on ARCH_TEGRA && !ARM64
depends on ARCH_TEGRA
help
Say Y here if you want support for the PCIe host controller found
on NVIDIA Tegra SoCs.
......@@ -133,8 +133,8 @@ config PCIE_XILINX
config PCI_XGENE
bool "X-Gene PCIe controller"
depends on ARCH_XGENE
depends on OF
depends on ARM64
depends on OF || (ACPI && PCI_QUIRKS)
select PCIEPORTBUS
help
Say Y here if you want internal PCI support on APM X-Gene SoC.
......@@ -240,14 +240,16 @@ config PCIE_QCOM
config PCI_HOST_THUNDER_PEM
bool "Cavium Thunder PCIe controller to off-chip devices"
depends on OF && ARM64
depends on ARM64
depends on OF || (ACPI && PCI_QUIRKS)
select PCI_HOST_COMMON
help
Say Y here if you want PCIe support for CN88XX Cavium Thunder SoCs.
config PCI_HOST_THUNDER_ECAM
bool "Cavium Thunder ECAM controller to on-chip devices on pass-1.x silicon"
depends on OF && ARM64
depends on ARM64
depends on OF || (ACPI && PCI_QUIRKS)
select PCI_HOST_COMMON
help
Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs.
......@@ -276,7 +278,7 @@ config PCIE_ARTPEC6
config PCIE_ROCKCHIP
bool "Rockchip PCIe controller"
depends on ARCH_ROCKCHIP
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
select MFD_SYSCON
......@@ -286,7 +288,7 @@ config PCIE_ROCKCHIP
4 slots.
config VMD
depends on PCI_MSI && X86_64
depends on PCI_MSI && X86_64 && SRCU
tristate "Intel Volume Management Device Driver"
default N
---help---
......
......@@ -15,7 +15,6 @@ obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o
obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o
......@@ -25,11 +24,23 @@ obj-$(CONFIG_PCIE_IPROC_PLATFORM) += pcie-iproc-platform.o
obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o
obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o
obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o
obj-$(CONFIG_PCI_HISI) += pcie-hisi.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o
obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_VMD) += vmd.o
# The following drivers are for devices that use the generic ACPI
# pci_root.c driver but don't support standard ECAM config access.
# They contain MCFG quirks to replace the generic ECAM accessors with
# device-specific ones that are shared with the DT driver.
# The ACPI driver is generic and should not require driver-specific
# config options to be enabled, so we always build these drivers on
# ARM64 and use internal ifdefs to only build the pieces we need
# depending on whether ACPI, the DT driver, or both are enabled.
obj-$(CONFIG_ARM64) += pcie-hisi.o
obj-$(CONFIG_ARM64) += pci-thunder-ecam.o
obj-$(CONFIG_ARM64) += pci-thunder-pem.o
obj-$(CONFIG_ARM64) += pci-xgene.o
......@@ -378,6 +378,8 @@ struct hv_pcibus_device {
struct msi_domain_info msi_info;
struct msi_controller msi_chip;
struct irq_domain *irq_domain;
struct retarget_msi_interrupt retarget_msi_interrupt_params;
spinlock_t retarget_msi_interrupt_lock;
};
/*
......@@ -755,7 +757,7 @@ static int hv_set_affinity(struct irq_data *data, const struct cpumask *dest,
return parent->chip->irq_set_affinity(parent, dest, force);
}
void hv_irq_mask(struct irq_data *data)
static void hv_irq_mask(struct irq_data *data)
{
pci_msi_mask_irq(data);
}
......@@ -770,38 +772,44 @@ void hv_irq_mask(struct irq_data *data)
* is built out of this PCI bus's instance GUID and the function
* number of the device.
*/
void hv_irq_unmask(struct irq_data *data)
static void hv_irq_unmask(struct irq_data *data)
{
struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
struct irq_cfg *cfg = irqd_cfg(data);
struct retarget_msi_interrupt params;
struct retarget_msi_interrupt *params;
struct hv_pcibus_device *hbus;
struct cpumask *dest;
struct pci_bus *pbus;
struct pci_dev *pdev;
int cpu;
unsigned long flags;
dest = irq_data_get_affinity_mask(data);
pdev = msi_desc_to_pci_dev(msi_desc);
pbus = pdev->bus;
hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
memset(&params, 0, sizeof(params));
params.partition_id = HV_PARTITION_ID_SELF;
params.source = 1; /* MSI(-X) */
params.address = msi_desc->msg.address_lo;
params.data = msi_desc->msg.data;
params.device_id = (hbus->hdev->dev_instance.b[5] << 24) |
spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
params = &hbus->retarget_msi_interrupt_params;
memset(params, 0, sizeof(*params));
params->partition_id = HV_PARTITION_ID_SELF;
params->source = 1; /* MSI(-X) */
params->address = msi_desc->msg.address_lo;
params->data = msi_desc->msg.data;
params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
(hbus->hdev->dev_instance.b[4] << 16) |
(hbus->hdev->dev_instance.b[7] << 8) |
(hbus->hdev->dev_instance.b[6] & 0xf8) |
PCI_FUNC(pdev->devfn);
params.vector = cfg->vector;
params->vector = cfg->vector;
for_each_cpu_and(cpu, dest, cpu_online_mask)
params.vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu));
params->vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu));
hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, &params, NULL);
hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, params, NULL);
spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);
pci_msi_unmask_irq(data);
}
......@@ -1271,9 +1279,9 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
struct hv_pci_dev *hpdev;
struct pci_child_message *res_req;
struct q_res_req_compl comp_pkt;
union {
struct {
struct pci_packet init_packet;
u8 buffer[0x100];
u8 buffer[sizeof(struct pci_child_message)];
} pkt;
unsigned long flags;
int ret;
......@@ -1582,6 +1590,10 @@ static void hv_eject_device_work(struct work_struct *work)
pci_dev_put(pdev);
}
spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
list_del(&hpdev->list_entry);
spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
memset(&ctxt, 0, sizeof(ctxt));
ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE;
......@@ -1590,10 +1602,6 @@ static void hv_eject_device_work(struct work_struct *work)
sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
VM_PKT_DATA_INBAND, 0);
spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
list_del(&hpdev->list_entry);
spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
put_pcichild(hpdev, hv_pcidev_ref_childlist);
put_pcichild(hpdev, hv_pcidev_ref_pnp);
put_hvpcibus(hpdev->hbus);
......@@ -2186,6 +2194,7 @@ static int hv_pci_probe(struct hv_device *hdev,
INIT_LIST_HEAD(&hbus->resources_for_children);
spin_lock_init(&hbus->config_lock);
spin_lock_init(&hbus->device_list_lock);
spin_lock_init(&hbus->retarget_msi_interrupt_lock);
sema_init(&hbus->enum_sem, 1);
init_completion(&hbus->remove_event);
......@@ -2266,24 +2275,32 @@ static int hv_pci_probe(struct hv_device *hdev,
return ret;
}
/**
* hv_pci_remove() - Remove routine for this VMBus channel
* @hdev: VMBus's tracking struct for this root PCI bus
*
* Return: 0 on success, -errno on failure
*/
static int hv_pci_remove(struct hv_device *hdev)
static void hv_pci_bus_exit(struct hv_device *hdev)
{
int ret;
struct hv_pcibus_device *hbus;
union {
struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
struct {
struct pci_packet teardown_packet;
u8 buffer[0x100];
u8 buffer[sizeof(struct pci_message)];
} pkt;
struct pci_bus_relations relations;
struct hv_pci_compl comp_pkt;
int ret;
hbus = hv_get_drvdata(hdev);
/*
* After the host sends the RESCIND_CHANNEL message, it doesn't
* access the per-channel ringbuffer any longer.
*/
if (hdev->channel->rescind)
return;
/* Delete any children which might still exist. */
memset(&relations, 0, sizeof(relations));
hv_pci_devices_present(hbus, &relations);
ret = hv_send_resources_released(hdev);
if (ret)
dev_err(&hdev->device,
"Couldn't send resources released packet(s)\n");
memset(&pkt.teardown_packet, 0, sizeof(pkt.teardown_packet));
init_completion(&comp_pkt.host_event);
......@@ -2298,7 +2315,19 @@ static int hv_pci_remove(struct hv_device *hdev)
VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (!ret)
wait_for_completion_timeout(&comp_pkt.host_event, 10 * HZ);
}
/**
* hv_pci_remove() - Remove routine for this VMBus channel
* @hdev: VMBus's tracking struct for this root PCI bus
*
* Return: 0 on success, -errno on failure
*/
static int hv_pci_remove(struct hv_device *hdev)
{
struct hv_pcibus_device *hbus;
hbus = hv_get_drvdata(hdev);
if (hbus->state == hv_pcibus_installed) {
/* Remove the bus from PCI's point of view. */
pci_lock_rescan_remove();
......@@ -2307,17 +2336,10 @@ static int hv_pci_remove(struct hv_device *hdev)
pci_unlock_rescan_remove();
}
ret = hv_send_resources_released(hdev);
if (ret)
dev_err(&hdev->device,
"Couldn't send resources released packet(s)\n");
hv_pci_bus_exit(hdev);
vmbus_close(hdev->channel);
/* Delete any children which might still exist. */
memset(&relations, 0, sizeof(relations));
hv_pci_devices_present(hbus, &relations);
iounmap(hbus->cfg_addr);
hv_free_config_window(hbus);
pci_free_resource_list(&hbus->resources_for_children);
......
......@@ -35,12 +35,10 @@
#define PCIE_STRFMR1 0x71c /* Symbol Timer & Filter Mask Register1 */
#define PCIE_DBI_RO_WR_EN 0x8bc /* DBI Read-Only Write Enable Register */
/* PEX LUT registers */
#define PCIE_LUT_DBG 0x7FC /* PEX LUT Debug Register */
struct ls_pcie_drvdata {
u32 lut_offset;
u32 ltssm_shift;
u32 lut_dbg;
struct pcie_host_ops *ops;
};
......@@ -134,7 +132,7 @@ static int ls_pcie_link_up(struct pcie_port *pp)
struct ls_pcie *pcie = to_ls_pcie(pp);
u32 state;
state = (ioread32(pcie->lut + PCIE_LUT_DBG) >>
state = (ioread32(pcie->lut + pcie->drvdata->lut_dbg) >>
pcie->drvdata->ltssm_shift) &
LTSSM_STATE_MASK;
......@@ -196,18 +194,28 @@ static struct ls_pcie_drvdata ls1021_drvdata = {
static struct ls_pcie_drvdata ls1043_drvdata = {
.lut_offset = 0x10000,
.ltssm_shift = 24,
.lut_dbg = 0x7fc,
.ops = &ls_pcie_host_ops,
};
static struct ls_pcie_drvdata ls1046_drvdata = {
.lut_offset = 0x80000,
.ltssm_shift = 24,
.lut_dbg = 0x407fc,
.ops = &ls_pcie_host_ops,
};
static struct ls_pcie_drvdata ls2080_drvdata = {
.lut_offset = 0x80000,
.ltssm_shift = 0,
.lut_dbg = 0x7fc,
.ops = &ls_pcie_host_ops,
};
static const struct of_device_id ls_pcie_of_match[] = {
{ .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata },
{ .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata },
{ .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata },
{ .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata },
{ .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata },
{ },
......@@ -252,10 +260,8 @@ static int __init ls_pcie_probe(struct platform_device *pdev)
dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
pcie->pp.dbi_base = devm_ioremap_resource(dev, dbi_base);
if (IS_ERR(pcie->pp.dbi_base)) {
dev_err(dev, "missing *regs* space\n");
if (IS_ERR(pcie->pp.dbi_base))
return PTR_ERR(pcie->pp.dbi_base);
}
pcie->lut = pcie->pp.dbi_base + pcie->drvdata->lut_offset;
......
......@@ -430,10 +430,10 @@ static int rcar_pci_probe(struct platform_device *pdev)
}
static struct of_device_id rcar_pci_of_match[] = {
{ .compatible = "renesas,pci-rcar-gen2", },
{ .compatible = "renesas,pci-r8a7790", },
{ .compatible = "renesas,pci-r8a7791", },
{ .compatible = "renesas,pci-r8a7794", },
{ .compatible = "renesas,pci-rcar-gen2", },
{ },
};
......
......@@ -14,6 +14,8 @@
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#if defined(CONFIG_PCI_HOST_THUNDER_ECAM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
static void set_val(u32 v, int where, int size, u32 *val)
{
int shift = (where & 3) * 8;
......@@ -346,7 +348,7 @@ static int thunder_ecam_config_write(struct pci_bus *bus, unsigned int devfn,
return pci_generic_config_write(bus, devfn, where, size, val);
}
static struct pci_ecam_ops pci_thunder_ecam_ops = {
struct pci_ecam_ops pci_thunder_ecam_ops = {
.bus_shift = 20,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
......@@ -355,6 +357,8 @@ static struct pci_ecam_ops pci_thunder_ecam_ops = {
}
};
#ifdef CONFIG_PCI_HOST_THUNDER_ECAM
static const struct of_device_id thunder_ecam_of_match[] = {
{ .compatible = "cavium,pci-host-thunder-ecam" },
{ },
......@@ -373,3 +377,6 @@ static struct platform_driver thunder_ecam_driver = {
.probe = thunder_ecam_probe,
};
builtin_platform_driver(thunder_ecam_driver);
#endif
#endif
......@@ -18,8 +18,12 @@
#include <linux/init.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include "../pci.h"
#if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
#define PEM_CFG_WR 0x28
#define PEM_CFG_RD 0x30
......@@ -284,35 +288,16 @@ static int thunder_pem_config_write(struct pci_bus *bus, unsigned int devfn,
return pci_generic_config_write(bus, devfn, where, size, val);
}
static int thunder_pem_init(struct pci_config_window *cfg)
static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
struct resource *res_pem)
{
struct device *dev = cfg->parent;
resource_size_t bar4_start;
struct resource *res_pem;
struct thunder_pem_pci *pem_pci;
struct platform_device *pdev;
/* Only OF support for now */
if (!dev->of_node)
return -EINVAL;
resource_size_t bar4_start;
pem_pci = devm_kzalloc(dev, sizeof(*pem_pci), GFP_KERNEL);
if (!pem_pci)
return -ENOMEM;
pdev = to_platform_device(dev);
/*
* The second register range is the PEM bridge to the PCIe
* bus. It has a different config access method than those
* devices behind the bridge.
*/
res_pem = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res_pem) {
dev_err(dev, "missing \"reg[1]\"property\n");
return -EINVAL;
}
pem_pci->pem_reg_base = devm_ioremap(dev, res_pem->start, 0x10000);
if (!pem_pci->pem_reg_base)
return -ENOMEM;
......@@ -332,9 +317,69 @@ static int thunder_pem_init(struct pci_config_window *cfg)
return 0;
}
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
static int thunder_pem_acpi_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct acpi_pci_root *root = acpi_driver_data(adev);
struct resource *res_pem;
int ret;
res_pem = devm_kzalloc(&adev->dev, sizeof(*res_pem), GFP_KERNEL);
if (!res_pem)
return -ENOMEM;
ret = acpi_get_rc_resources(dev, "THRX0002", root->segment, res_pem);
if (ret) {
dev_err(dev, "can't get rc base address\n");
return ret;
}
return thunder_pem_init(dev, cfg, res_pem);
}
struct pci_ecam_ops thunder_pem_ecam_ops = {
.bus_shift = 24,
.init = thunder_pem_acpi_init,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
.read = thunder_pem_config_read,
.write = thunder_pem_config_write,
}
};
#endif
#ifdef CONFIG_PCI_HOST_THUNDER_PEM
static int thunder_pem_platform_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res_pem;
if (!dev->of_node)
return -EINVAL;
/*
* The second register range is the PEM bridge to the PCIe
* bus. It has a different config access method than those
* devices behind the bridge.
*/
res_pem = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res_pem) {
dev_err(dev, "missing \"reg[1]\"property\n");
return -EINVAL;
}
return thunder_pem_init(dev, cfg, res_pem);
}
static struct pci_ecam_ops pci_thunder_pem_ops = {
.bus_shift = 24,
.init = thunder_pem_init,
.init = thunder_pem_platform_init,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
.read = thunder_pem_config_read,
......@@ -360,3 +405,6 @@ static struct platform_driver thunder_pem_driver = {
.probe = thunder_pem_probe,
};
builtin_platform_driver(thunder_pem_driver);
#endif
#endif
......@@ -27,6 +27,8 @@
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
......@@ -64,7 +66,9 @@
/* PCIe IP version */
#define XGENE_PCIE_IP_VER_UNKN 0
#define XGENE_PCIE_IP_VER_1 1
#define XGENE_PCIE_IP_VER_2 2
#if defined(CONFIG_PCI_XGENE) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
struct xgene_pcie_port {
struct device_node *node;
struct device *dev;
......@@ -91,13 +95,24 @@ static inline u32 pcie_bar_low_val(u32 addr, u32 flags)
return (addr & PCI_BASE_ADDRESS_MEM_MASK) | flags;
}
static inline struct xgene_pcie_port *pcie_bus_to_port(struct pci_bus *bus)
{
struct pci_config_window *cfg;
if (acpi_disabled)
return (struct xgene_pcie_port *)(bus->sysdata);
cfg = bus->sysdata;
return (struct xgene_pcie_port *)(cfg->priv);
}
/*
* When the address bit [17:16] is 2'b01, the Configuration access will be
* treated as Type 1 and it will be forwarded to external PCIe device.
*/
static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus)
{
struct xgene_pcie_port *port = bus->sysdata;
struct xgene_pcie_port *port = pcie_bus_to_port(bus);
if (bus->number >= (bus->primary + 1))
return port->cfg_base + AXI_EP_CFG_ACCESS;
......@@ -111,7 +126,7 @@ static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus)
*/
static void xgene_pcie_set_rtdid_reg(struct pci_bus *bus, uint devfn)
{
struct xgene_pcie_port *port = bus->sysdata;
struct xgene_pcie_port *port = pcie_bus_to_port(bus);
unsigned int b, d, f;
u32 rtdid_val = 0;
......@@ -158,7 +173,7 @@ static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *val)
{
struct xgene_pcie_port *port = bus->sysdata;
struct xgene_pcie_port *port = pcie_bus_to_port(bus);
if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) !=
PCIBIOS_SUCCESSFUL)
......@@ -182,13 +197,103 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
return PCIBIOS_SUCCESSFUL;
}
#endif
static struct pci_ops xgene_pcie_ops = {
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
static int xgene_get_csr_resource(struct acpi_device *adev,
struct resource *res)
{
struct device *dev = &adev->dev;
struct resource_entry *entry;
struct list_head list;
unsigned long flags;
int ret;
INIT_LIST_HEAD(&list);
flags = IORESOURCE_MEM;
ret = acpi_dev_get_resources(adev, &list,
acpi_dev_filter_resource_type_cb,
(void *) flags);
if (ret < 0) {
dev_err(dev, "failed to parse _CRS method, error code %d\n",
ret);
return ret;
}
if (ret == 0) {
dev_err(dev, "no IO and memory resources present in _CRS\n");
return -EINVAL;
}
entry = list_first_entry(&list, struct resource_entry, node);
*res = *entry->res;
acpi_dev_free_resource_list(&list);
return 0;
}
static int xgene_pcie_ecam_init(struct pci_config_window *cfg, u32 ipversion)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct xgene_pcie_port *port;
struct resource csr;
int ret;
port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
if (!port)
return -ENOMEM;
ret = xgene_get_csr_resource(adev, &csr);
if (ret) {
dev_err(dev, "can't get CSR resource\n");
kfree(port);
return ret;
}
port->csr_base = devm_ioremap_resource(dev, &csr);
if (IS_ERR(port->csr_base)) {
kfree(port);
return -ENOMEM;
}
port->cfg_base = cfg->win;
port->version = ipversion;
cfg->priv = port;
return 0;
}
static int xgene_v1_pcie_ecam_init(struct pci_config_window *cfg)
{
return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_1);
}
struct pci_ecam_ops xgene_v1_pcie_ecam_ops = {
.bus_shift = 16,
.init = xgene_v1_pcie_ecam_init,
.pci_ops = {
.map_bus = xgene_pcie_map_bus,
.read = xgene_pcie_config_read32,
.write = pci_generic_config_write32,
.write = pci_generic_config_write,
}
};
static int xgene_v2_pcie_ecam_init(struct pci_config_window *cfg)
{
return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_2);
}
struct pci_ecam_ops xgene_v2_pcie_ecam_ops = {
.bus_shift = 16,
.init = xgene_v2_pcie_ecam_init,
.pci_ops = {
.map_bus = xgene_pcie_map_bus,
.read = xgene_pcie_config_read32,
.write = pci_generic_config_write,
}
};
#endif
#if defined(CONFIG_PCI_XGENE)
static u64 xgene_pcie_set_ib_mask(struct xgene_pcie_port *port, u32 addr,
u32 flags, u64 size)
{
......@@ -521,6 +626,12 @@ static int xgene_pcie_setup(struct xgene_pcie_port *port,
return 0;
}
static struct pci_ops xgene_pcie_ops = {
.map_bus = xgene_pcie_map_bus,
.read = xgene_pcie_config_read32,
.write = pci_generic_config_write32,
};
static int xgene_pcie_probe_bridge(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
......@@ -591,3 +702,4 @@ static struct platform_driver xgene_pcie_driver = {
.probe = xgene_pcie_probe_bridge,
};
builtin_platform_driver(xgene_pcie_driver);
#endif
......@@ -550,10 +550,8 @@ static int altera_pcie_parse_dt(struct altera_pcie *pcie)
cra = platform_get_resource_byname(pdev, IORESOURCE_MEM, "Cra");
pcie->cra_base = devm_ioremap_resource(dev, cra);
if (IS_ERR(pcie->cra_base)) {
dev_err(dev, "failed to map cra memory\n");
if (IS_ERR(pcie->cra_base))
return PTR_ERR(pcie->cra_base);
}
/* setup IRQ */
pcie->irq = platform_get_irq(pdev, 0);
......@@ -641,8 +639,4 @@ static struct platform_driver altera_pcie_driver = {
},
};
static int altera_pcie_init(void)
{
return platform_driver_register(&altera_pcie_driver);
}
device_initcall(altera_pcie_init);
builtin_platform_driver(altera_pcie_driver);
......@@ -18,7 +18,106 @@
#include <linux/of_pci.h>
#include <linux/platform_device.h>
#include <linux/of_device.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/regmap.h>
#include "../pci.h"
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
static int hisi_pcie_acpi_rd_conf(struct pci_bus *bus, u32 devfn, int where,
int size, u32 *val)
{
struct pci_config_window *cfg = bus->sysdata;
int dev = PCI_SLOT(devfn);
if (bus->number == cfg->busr.start) {
/* access only one slot on each root port */
if (dev > 0)
return PCIBIOS_DEVICE_NOT_FOUND;
else
return pci_generic_config_read32(bus, devfn, where,
size, val);
}
return pci_generic_config_read(bus, devfn, where, size, val);
}
static int hisi_pcie_acpi_wr_conf(struct pci_bus *bus, u32 devfn,
int where, int size, u32 val)
{
struct pci_config_window *cfg = bus->sysdata;
int dev = PCI_SLOT(devfn);
if (bus->number == cfg->busr.start) {
/* access only one slot on each root port */
if (dev > 0)
return PCIBIOS_DEVICE_NOT_FOUND;
else
return pci_generic_config_write32(bus, devfn, where,
size, val);
}
return pci_generic_config_write(bus, devfn, where, size, val);
}
static void __iomem *hisi_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct pci_config_window *cfg = bus->sysdata;
void __iomem *reg_base = cfg->priv;
if (bus->number == cfg->busr.start)
return reg_base + where;
else
return pci_ecam_map_bus(bus, devfn, where);
}
static int hisi_pcie_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct acpi_pci_root *root = acpi_driver_data(adev);
struct resource *res;
void __iomem *reg_base;
int ret;
/*
* Retrieve RC base and size from a HISI0081 device with _UID
* matching our segment.
*/
res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
if (!res)
return -ENOMEM;
ret = acpi_get_rc_resources(dev, "HISI0081", root->segment, res);
if (ret) {
dev_err(dev, "can't get rc base address\n");
return -ENOMEM;
}
reg_base = devm_ioremap(dev, res->start, resource_size(res));
if (!reg_base)
return -ENOMEM;
cfg->priv = reg_base;
return 0;
}
struct pci_ecam_ops hisi_pcie_ops = {
.bus_shift = 20,
.init = hisi_pcie_init,
.pci_ops = {
.map_bus = hisi_pcie_map_bus,
.read = hisi_pcie_acpi_rd_conf,
.write = hisi_pcie_acpi_wr_conf,
}
};
#endif
#ifdef CONFIG_PCI_HISI
#include "pcie-designware.h"
......@@ -185,17 +284,13 @@ static int hisi_pcie_probe(struct platform_device *pdev)
reg = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbi");
pp->dbi_base = devm_ioremap_resource(dev, reg);
if (IS_ERR(pp->dbi_base))
return PTR_ERR(pp->dbi_base);
ret = hisi_add_pcie_port(hisi_pcie, pdev);
if (ret)
return ret;
dev_warn(dev, "only 32-bit config accesses supported; smaller writes may corrupt adjacent RW1C fields\n");
return 0;
}
......@@ -227,3 +322,5 @@ static struct platform_driver hisi_pcie_driver = {
},
};
builtin_platform_driver(hisi_pcie_driver);
#endif
......@@ -54,6 +54,7 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
pcie->dev = dev;
pcie->type = IPROC_PCIE_PAXB_BCMA;
pcie->base = bdev->io_addr;
if (!pcie->base) {
dev_err(dev, "no controller registers\n");
......
......@@ -563,6 +563,7 @@ int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node)
}
switch (pcie->type) {
case IPROC_PCIE_PAXB_BCMA:
case IPROC_PCIE_PAXB:
msi->reg_offsets = iproc_msi_reg_paxb;
msi->nr_eq_region = 1;
......
......@@ -30,9 +30,15 @@ static const struct of_device_id iproc_pcie_of_match_table[] = {
{
.compatible = "brcm,iproc-pcie",
.data = (int *)IPROC_PCIE_PAXB,
}, {
.compatible = "brcm,iproc-pcie-paxb-v2",
.data = (int *)IPROC_PCIE_PAXB_V2,
}, {
.compatible = "brcm,iproc-pcie-paxc",
.data = (int *)IPROC_PCIE_PAXC,
}, {
.compatible = "brcm,iproc-pcie-paxc-v2",
.data = (int *)IPROC_PCIE_PAXC_V2,
},
{ /* sentinel */ }
};
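The (int *) casts above store the enum value directly in the .data pointer. A minimal sketch of how a probe routine can recover it, hedged and not necessarily the literal code in this driver:
const struct of_device_id *of_id =
	of_match_device(iproc_pcie_of_match_table, &pdev->dev);
enum iproc_pcie_type type;
if (!of_id)
	return -EINVAL;
type = (enum iproc_pcie_type)(unsigned long)of_id->data;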
......@@ -84,19 +90,6 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
return ret;
}
pcie->ob.axi_offset = val;
ret = of_property_read_u32(np, "brcm,pcie-ob-window-size",
&val);
if (ret) {
dev_err(dev,
"missing brcm,pcie-ob-window-size property\n");
return ret;
}
pcie->ob.window_size = (resource_size_t)val * SZ_1M;
if (of_property_read_bool(np, "brcm,pcie-ob-oarr-size"))
pcie->ob.set_oarr_size = true;
pcie->need_ob_cfg = true;
}
......@@ -115,7 +108,14 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
return ret;
}
/* PAXC doesn't support legacy IRQs, skip mapping */
switch (pcie->type) {
case IPROC_PCIE_PAXC:
case IPROC_PCIE_PAXC_V2:
break;
default:
pcie->map_irq = of_irq_parse_and_map_pci;
}
ret = iproc_pcie_setup(pcie, &res);
if (ret)
......
This diff is collapsed.
......@@ -24,23 +24,34 @@
* endpoint devices.
*/
enum iproc_pcie_type {
IPROC_PCIE_PAXB_BCMA = 0,
IPROC_PCIE_PAXB,
IPROC_PCIE_PAXB_V2,
IPROC_PCIE_PAXC,
IPROC_PCIE_PAXC_V2,
};
/**
* iProc PCIe outbound mapping
* @set_oarr_size: indicates the OARR size bit needs to be set
* @axi_offset: offset from the AXI address to the internal address used by
* the iProc PCIe core
* @window_size: outbound window size
* @nr_windows: total number of supported outbound mapping windows
*/
struct iproc_pcie_ob {
bool set_oarr_size;
resource_size_t axi_offset;
resource_size_t window_size;
unsigned int nr_windows;
};
/**
* iProc PCIe inbound mapping
* @nr_regions: total number of supported inbound mapping regions
*/
struct iproc_pcie_ib {
unsigned int nr_regions;
};
struct iproc_pcie_ob_map;
struct iproc_pcie_ib_map;
struct iproc_msi;
/**
......@@ -55,14 +66,25 @@ struct iproc_msi;
* @root_bus: pointer to root bus
* @phy: optional PHY device that controls the Serdes
* @map_irq: function callback to map interrupts
* @ep_is_internal: indicates an internal emulated endpoint device is connected
* @has_apb_err_disable: indicates the controller can be configured to prevent
* unsupported request from being forwarded as an APB bus error
*
* @need_ob_cfg: indicates SW needs to configure the outbound mapping window
* @ob: outbound mapping parameters
* @ob: outbound mapping related parameters
* @ob_map: outbound mapping related parameters specific to the controller
*
* @ib: inbound mapping related parameters
* @ib_map: inbound mapping region related parameters
*
* @need_msi_steer: indicates additional configuration of the iProc PCIe
* controller is required to steer MSI writes to external interrupt controller
* @msi: MSI data
*/
struct iproc_pcie {
struct device *dev;
enum iproc_pcie_type type;
u16 *reg_offsets;
void __iomem *base;
phys_addr_t base_addr;
#ifdef CONFIG_ARM
......@@ -71,8 +93,17 @@ struct iproc_pcie {
struct pci_bus *root_bus;
struct phy *phy;
int (*map_irq)(const struct pci_dev *, u8, u8);
bool ep_is_internal;
bool has_apb_err_disable;
bool need_ob_cfg;
struct iproc_pcie_ob ob;
const struct iproc_pcie_ob_map *ob_map;
struct iproc_pcie_ib ib;
const struct iproc_pcie_ib_map *ib_map;
bool need_msi_steer;
struct iproc_msi *msi;
};
......
......@@ -36,11 +36,17 @@
#include "pcie-designware.h"
#define PCIE20_PARF_SYS_CTRL 0x00
#define PCIE20_PARF_PHY_CTRL 0x40
#define PCIE20_PARF_PHY_REFCLK 0x4C
#define PCIE20_PARF_DBI_BASE_ADDR 0x168
#define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16C
#define PCIE20_PARF_MHI_CLOCK_RESET_CTRL 0x174
#define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT 0x178
#define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1A8
#define PCIE20_PARF_LTSSM 0x1B0
#define PCIE20_PARF_SID_OFFSET 0x234
#define PCIE20_PARF_BDF_TRANSLATE_CFG 0x24C
#define PCIE20_ELBI_SYS_CTRL 0x04
#define PCIE20_ELBI_SYS_CTRL_LT_ENABLE BIT(0)
......@@ -72,9 +78,18 @@ struct qcom_pcie_resources_v1 {
struct regulator *vdda;
};
struct qcom_pcie_resources_v2 {
struct clk *aux_clk;
struct clk *master_clk;
struct clk *slave_clk;
struct clk *cfg_clk;
struct clk *pipe_clk;
};
union qcom_pcie_resources {
struct qcom_pcie_resources_v0 v0;
struct qcom_pcie_resources_v1 v1;
struct qcom_pcie_resources_v2 v2;
};
struct qcom_pcie;
......@@ -82,7 +97,9 @@ struct qcom_pcie;
struct qcom_pcie_ops {
int (*get_resources)(struct qcom_pcie *pcie);
int (*init)(struct qcom_pcie *pcie);
int (*post_init)(struct qcom_pcie *pcie);
void (*deinit)(struct qcom_pcie *pcie);
void (*ltssm_enable)(struct qcom_pcie *pcie);
};
struct qcom_pcie {
......@@ -116,17 +133,35 @@ static irqreturn_t qcom_pcie_msi_irq_handler(int irq, void *arg)
return dw_handle_msi_irq(pp);
}
static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie)
{
u32 val;
/* enable link training */
val = readl(pcie->elbi + PCIE20_ELBI_SYS_CTRL);
val |= PCIE20_ELBI_SYS_CTRL_LT_ENABLE;
writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL);
}
static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie)
{
u32 val;
/* enable link training */
val = readl(pcie->parf + PCIE20_PARF_LTSSM);
val |= BIT(8);
writel(val, pcie->parf + PCIE20_PARF_LTSSM);
}
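/*
 * The ltssm_enable callback only abstracts where the "start link
 * training" control lives: ELBI SYS_CTRL bit 0 on v0/v1 hardware,
 * PARF_LTSSM bit 8 on v2. qcom_pcie_establish_link() below stays
 * variant-agnostic.
 */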
static int qcom_pcie_establish_link(struct qcom_pcie *pcie)
{
if (dw_pcie_link_up(&pcie->pp))
return 0;
/* Enable Link Training state machine */
if (pcie->ops->ltssm_enable)
pcie->ops->ltssm_enable(pcie);
return dw_pcie_wait_for_link(&pcie->pp);
}
......@@ -421,6 +456,113 @@ static int qcom_pcie_init_v1(struct qcom_pcie *pcie)
return ret;
}
static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
struct device *dev = pcie->pp.dev;
res->aux_clk = devm_clk_get(dev, "aux");
if (IS_ERR(res->aux_clk))
return PTR_ERR(res->aux_clk);
res->cfg_clk = devm_clk_get(dev, "cfg");
if (IS_ERR(res->cfg_clk))
return PTR_ERR(res->cfg_clk);
res->master_clk = devm_clk_get(dev, "bus_master");
if (IS_ERR(res->master_clk))
return PTR_ERR(res->master_clk);
res->slave_clk = devm_clk_get(dev, "bus_slave");
if (IS_ERR(res->slave_clk))
return PTR_ERR(res->slave_clk);
res->pipe_clk = devm_clk_get(dev, "pipe");
if (IS_ERR(res->pipe_clk))
return PTR_ERR(res->pipe_clk);
return 0;
}
static int qcom_pcie_init_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
struct device *dev = pcie->pp.dev;
u32 val;
int ret;
ret = clk_prepare_enable(res->aux_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable aux clock\n");
return ret;
}
ret = clk_prepare_enable(res->cfg_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable cfg clock\n");
goto err_cfg_clk;
}
ret = clk_prepare_enable(res->master_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable master clock\n");
goto err_master_clk;
}
ret = clk_prepare_enable(res->slave_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable slave clock\n");
goto err_slave_clk;
}
/* enable PCIe clocks and resets */
val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
val &= ~BIT(0);
writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
/* change DBI base address */
writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR);
/* MAC PHY_POWERDOWN MUX DISABLE */
val = readl(pcie->parf + PCIE20_PARF_SYS_CTRL);
val &= ~BIT(29);
writel(val, pcie->parf + PCIE20_PARF_SYS_CTRL);
val = readl(pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);
val |= BIT(4);
writel(val, pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);
val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
val |= BIT(31);
writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
return 0;
err_slave_clk:
clk_disable_unprepare(res->master_clk);
err_master_clk:
clk_disable_unprepare(res->cfg_clk);
err_cfg_clk:
clk_disable_unprepare(res->aux_clk);
return ret;
}
static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
struct device *dev = pcie->pp.dev;
int ret;
ret = clk_prepare_enable(res->pipe_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable pipe clock\n");
return ret;
}
return 0;
}
static int qcom_pcie_link_up(struct pcie_port *pp)
{
struct qcom_pcie *pcie = to_qcom_pcie(pp);
......@@ -429,6 +571,17 @@ static int qcom_pcie_link_up(struct pcie_port *pp)
return !!(val & PCI_EXP_LNKSTA_DLLLA);
}
static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
clk_disable_unprepare(res->pipe_clk);
clk_disable_unprepare(res->slave_clk);
clk_disable_unprepare(res->master_clk);
clk_disable_unprepare(res->cfg_clk);
clk_disable_unprepare(res->aux_clk);
}
static void qcom_pcie_host_init(struct pcie_port *pp)
{
struct qcom_pcie *pcie = to_qcom_pcie(pp);
......@@ -444,6 +597,9 @@ static void qcom_pcie_host_init(struct pcie_port *pp)
if (ret)
goto err_deinit;
if (pcie->ops->post_init)
pcie->ops->post_init(pcie);
dw_pcie_setup_rc(pp);
if (IS_ENABLED(CONFIG_PCI_MSI))
......@@ -487,12 +643,22 @@ static const struct qcom_pcie_ops ops_v0 = {
.get_resources = qcom_pcie_get_resources_v0,
.init = qcom_pcie_init_v0,
.deinit = qcom_pcie_deinit_v0,
.ltssm_enable = qcom_pcie_v0_v1_ltssm_enable,
};
static const struct qcom_pcie_ops ops_v1 = {
.get_resources = qcom_pcie_get_resources_v1,
.init = qcom_pcie_init_v1,
.deinit = qcom_pcie_deinit_v1,
.ltssm_enable = qcom_pcie_v0_v1_ltssm_enable,
};
static const struct qcom_pcie_ops ops_v2 = {
.get_resources = qcom_pcie_get_resources_v2,
.init = qcom_pcie_init_v2,
.post_init = qcom_pcie_post_init_v2,
.deinit = qcom_pcie_deinit_v2,
.ltssm_enable = qcom_pcie_v2_ltssm_enable,
};
static int qcom_pcie_probe(struct platform_device *pdev)
......@@ -572,6 +738,7 @@ static const struct of_device_id qcom_pcie_match[] = {
{ .compatible = "qcom,pcie-ipq8064", .data = &ops_v0 },
{ .compatible = "qcom,pcie-apq8064", .data = &ops_v0 },
{ .compatible = "qcom,pcie-apq8084", .data = &ops_v1 },
{ .compatible = "qcom,pcie-msm8996", .data = &ops_v2 },
{ }
};
......
......@@ -1071,13 +1071,14 @@ static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie,
static const struct of_device_id rcar_pcie_of_match[] = {
{ .compatible = "renesas,pcie-r8a7779", .data = rcar_pcie_hw_init_h1 },
{ .compatible = "renesas,pcie-r8a7790",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-r8a7791",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-rcar-gen2",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init },
{ .compatible = "renesas,pcie-rcar-gen3", .data = rcar_pcie_hw_init },
{},
};
......
This diff is collapsed.
......@@ -296,8 +296,4 @@ static struct platform_driver spear13xx_pcie_driver = {
},
};
static int __init spear13xx_pcie_init(void)
{
return platform_driver_register(&spear13xx_pcie_driver);
}
device_initcall(spear13xx_pcie_init);
builtin_platform_driver(spear13xx_pcie_driver);
......@@ -19,6 +19,7 @@
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/pci.h>
#include <linux/srcu.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
......@@ -39,7 +40,6 @@ static DEFINE_RAW_SPINLOCK(list_lock);
/**
* struct vmd_irq - private data to map driver IRQ to the VMD shared vector
* @node: list item for parent traversal.
* @rcu: RCU callback item for freeing.
* @irq: back pointer to parent.
* @enabled: true if driver enabled IRQ
* @virq: the virtual IRQ value provided to the requesting driver.
......@@ -49,7 +49,6 @@ static DEFINE_RAW_SPINLOCK(list_lock);
*/
struct vmd_irq {
struct list_head node;
struct rcu_head rcu;
struct vmd_irq_list *irq;
bool enabled;
unsigned int virq;
......@@ -58,11 +57,13 @@ struct vmd_irq {
/**
* struct vmd_irq_list - list of driver requested IRQs mapping to a VMD vector
* @irq_list: the list of irq's the VMD one demuxes to.
* @srcu: SRCU struct for local synchronization.
* @count: number of child IRQs assigned to this vector; used to track
* sharing.
*/
struct vmd_irq_list {
struct list_head irq_list;
struct srcu_struct srcu;
unsigned int count;
};
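/*
 * Each vector gets its own SRCU domain, presumably so that freeing a
 * child IRQ in vmd_msi_free() only waits for readers of that vector's
 * own list (srcu_read_lock() in vmd_irq()) instead of a global RCU
 * grace period.
 */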
......@@ -224,14 +225,14 @@ static void vmd_msi_free(struct irq_domain *domain,
struct vmd_irq *vmdirq = irq_get_chip_data(virq);
unsigned long flags;
synchronize_srcu(&vmdirq->irq->srcu);
/* XXX: Potential optimization to rebalance */
raw_spin_lock_irqsave(&list_lock, flags);
vmdirq->irq->count--;
raw_spin_unlock_irqrestore(&list_lock, flags);
kfree(vmdirq);
}
static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
......@@ -646,11 +647,12 @@ static irqreturn_t vmd_irq(int irq, void *data)
{
struct vmd_irq_list *irqs = data;
struct vmd_irq *vmdirq;
int idx;
idx = srcu_read_lock(&irqs->srcu);
list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node)
generic_handle_irq(vmdirq->virq);
srcu_read_unlock(&irqs->srcu, idx);
return IRQ_HANDLED;
}
......@@ -696,6 +698,10 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
return -ENOMEM;
for (i = 0; i < vmd->msix_count; i++) {
err = init_srcu_struct(&vmd->irqs[i].srcu);
if (err)
return err;
INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
vmd_irq, 0, "vmd", &vmd->irqs[i]);
......@@ -714,12 +720,20 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
return 0;
}
static void vmd_cleanup_srcu(struct vmd_dev *vmd)
{
int i;
for (i = 0; i < vmd->msix_count; i++)
cleanup_srcu_struct(&vmd->irqs[i].srcu);
}
static void vmd_remove(struct pci_dev *dev)
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
vmd_detach_resources(vmd);
pci_set_drvdata(dev, NULL);
vmd_cleanup_srcu(vmd);
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_stop_root_bus(vmd->bus);
pci_remove_root_bus(vmd->bus);
......@@ -727,7 +741,7 @@ static void vmd_remove(struct pci_dev *dev)
irq_domain_remove(vmd->irq_domain);
}
#ifdef CONFIG_PM_SLEEP
static int vmd_suspend(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
......
......@@ -222,35 +222,6 @@ static void acpiphp_post_dock_fixup(struct acpi_device *adev)
acpiphp_let_context_go(context);
}
/* Check whether the PCI device is managed by native PCIe hotplug driver */
static bool device_is_managed_by_native_pciehp(struct pci_dev *pdev)
{
u32 reg32;
acpi_handle tmp;
struct acpi_pci_root *root;
/* Check whether the PCIe port supports native PCIe hotplug */
if (pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &reg32))
return false;
if (!(reg32 & PCI_EXP_SLTCAP_HPC))
return false;
/*
* Check whether native PCIe hotplug has been enabled for
* this PCIe hierarchy.
*/
tmp = acpi_find_root_bridge_handle(pdev);
if (!tmp)
return false;
root = acpi_pci_find_root(tmp);
if (!root)
return false;
if (!(root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL))
return false;
return true;
}
/**
* acpiphp_add_context - Add ACPIPHP context to an ACPI device object.
* @handle: ACPI handle of the object to add a context to.
......@@ -334,7 +305,7 @@ static acpi_status acpiphp_add_context(acpi_handle handle, u32 lvl, void *data,
* expose slots to user space in those cases.
*/
if ((acpi_pci_check_ejectable(pbus, handle) || is_dock_device(adev))
&& !(pdev && pdev->is_hotplug_bridge && pciehp_is_native(pdev))) {
unsigned long long sun;
int retval;
......
......@@ -867,7 +867,8 @@ static int cpqhpc_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
*/
if ((pdev->revision <= 2) && (vendor_id != PCI_VENDOR_ID_INTEL)) {
err(msg_HPC_not_supported);
rc = -ENODEV;
goto err_disable_device;
}
/* TODO: This code can be made to support non-Compaq or Intel
......
......@@ -23,6 +23,9 @@
*
* Send feedback to <kristen.c.accardi@intel.com>
*
* Authors:
* Greg Kroah-Hartman <greg@kroah.com>
* Scott Murray <scottm@somanetworks.com>
*/
#include <linux/module.h> /* try_module_get & module_put */
......@@ -50,15 +53,9 @@
#define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg)
#define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg)
/* local variables */
static bool debug;
#define DRIVER_VERSION "0.5"
#define DRIVER_AUTHOR "Greg Kroah-Hartman <greg@kroah.com>, Scott Murray <scottm@somanetworks.com>"
#define DRIVER_DESC "PCI Hot Plug PCI Core"
static LIST_HEAD(pci_hotplug_slot_list);
static DEFINE_MUTEX(pci_hp_mutex);
......@@ -534,7 +531,6 @@ static int __init pci_hotplug_init(void)
return result;
}
info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
return result;
}
device_initcall(pci_hotplug_init);
......
......@@ -25,6 +25,10 @@
*
* Send feedback to <greg@kroah.com>, <kristen.c.accardi@intel.com>
*
* Authors:
* Dan Zink <dan.zink@compaq.com>
* Greg Kroah-Hartman <greg@kroah.com>
* Dely Sy <dely.l.sy@intel.com>"
*/
#include <linux/moduleparam.h>
......@@ -42,10 +46,6 @@ bool pciehp_poll_mode;
int pciehp_poll_time;
static bool pciehp_force;
#define DRIVER_VERSION "0.4"
#define DRIVER_AUTHOR "Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>"
#define DRIVER_DESC "PCI Express Hot Plug Controller Driver"
/*
* not really modular, but the easiest way to keep compat with existing
* bootargs behaviour is to continue using module_param here.
......@@ -333,7 +333,6 @@ static int __init pcied_init(void)
retval = pcie_port_service_register(&hpdriver_portdrv);
dbg("pcie_port_service_register = %d\n", retval);
info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
if (retval)
dbg("Failure to register service\n");
......
......@@ -31,6 +31,7 @@
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include <linux/pci.h>
#include "../pci.h"
#include "pciehp.h"
......@@ -98,6 +99,7 @@ static int board_added(struct slot *p_slot)
pciehp_green_led_blink(p_slot);
/* Check link training status */
pm_runtime_get_sync(&ctrl->pcie->port->dev);
retval = pciehp_check_link_status(ctrl);
if (retval) {
ctrl_err(ctrl, "Failed to check link status\n");
......@@ -118,12 +120,14 @@ static int board_added(struct slot *p_slot)
if (retval != -EEXIST)
goto err_exit;
}
pm_runtime_put(&ctrl->pcie->port->dev);
pciehp_green_led_on(p_slot);
pciehp_set_attention_status(p_slot, 0);
return 0;
err_exit:
pm_runtime_put(&ctrl->pcie->port->dev);
set_slot_off(ctrl, p_slot);
return retval;
}
......@@ -137,7 +141,9 @@ static int remove_board(struct slot *p_slot)
int retval;
struct controller *ctrl = p_slot->ctrl;
pm_runtime_get_sync(&ctrl->pcie->port->dev);
retval = pciehp_unconfigure_device(p_slot);
pm_runtime_put(&ctrl->pcie->port->dev);
if (retval)
return retval;
......@@ -410,7 +416,7 @@ int pciehp_enable_slot(struct slot *p_slot)
if (getstatus) {
ctrl_info(ctrl, "Slot(%s): Already enabled\n",
slot_name(p_slot));
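/*
 * Treat an already-enabled slot as success rather than an error so
 * the caller does not take its failure path and switch the power
 * indicator off for a slot that is, in fact, up.
 */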
return 0;
}
}
......
......@@ -620,8 +620,18 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
pciehp_queue_interrupt_event(slot, INT_BUTTON_PRESS);
}
/*
* Check Link Status Changed at higher precedence than Presence
* Detect Changed. The PDS value may be set to "card present" from
* out-of-band detection, which may be in conflict with a Link Down
* and cause the wrong event to queue.
*/
if (events & PCI_EXP_SLTSTA_DLLSC) {
ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
link ? "Up" : "Down");
pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
INT_LINK_DOWN);
} else if (events & PCI_EXP_SLTSTA_PDC) {
present = !!(status & PCI_EXP_SLTSTA_PDS);
ctrl_info(ctrl, "Slot(%s): Card %spresent\n", slot_name(slot),
present ? "" : "not ");
......@@ -636,13 +646,6 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
pciehp_queue_interrupt_event(slot, INT_POWER_FAULT);
}
if (events & PCI_EXP_SLTSTA_DLLSC) {
ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
link ? "Up" : "Down");
pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
INT_LINK_DOWN);
}
return IRQ_HANDLED;
}
......
......@@ -306,13 +306,6 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
return rc;
}
pci_iov_set_numvfs(dev, nr_virtfn);
iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
pci_cfg_access_lock(dev);
pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
msleep(100);
pci_cfg_access_unlock(dev);
iov->initial_VFs = initial;
if (nr_virtfn < initial)
initial = nr_virtfn;
......@@ -323,6 +316,13 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
goto err_pcibios;
}
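/*
 * VF Enable and VF Memory Space Enable are set only at this point,
 * after any VF BAR updates have been done: pci_iov_update_resource()
 * (below) refuses to rewrite a VF BAR once these bits are set, see
 * the dev_WARN() it emits.
 */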
pci_iov_set_numvfs(dev, nr_virtfn);
iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
pci_cfg_access_lock(dev);
pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
msleep(100);
pci_cfg_access_unlock(dev);
for (i = 0; i < initial; i++) {
rc = pci_iov_add_virtfn(dev, i, 0);
if (rc)
......@@ -554,21 +554,61 @@ void pci_iov_release(struct pci_dev *dev)
}
/**
* pci_iov_update_resource - update a VF BAR
* @dev: the PCI device
* @resno: the resource number
*
* Update a VF BAR in the SR-IOV capability of a PF.
*/
void pci_iov_update_resource(struct pci_dev *dev, int resno)
{
struct pci_sriov *iov = dev->is_physfn ? dev->sriov : NULL;
struct resource *res = dev->resource + resno;
int vf_bar = resno - PCI_IOV_RESOURCES;
struct pci_bus_region region;
u16 cmd;
u32 new;
int reg;
/*
* The generic pci_restore_bars() path calls this for all devices,
* including VFs and non-SR-IOV devices. If this is not a PF, we
* have nothing to do.
*/
if (!iov)
return;
pci_read_config_word(dev, iov->pos + PCI_SRIOV_CTRL, &cmd);
if ((cmd & PCI_SRIOV_CTRL_VFE) && (cmd & PCI_SRIOV_CTRL_MSE)) {
dev_WARN(&dev->dev, "can't update enabled VF BAR%d %pR\n",
vf_bar, res);
return;
}
/*
* Ignore unimplemented BARs, unused resource slots for 64-bit
* BARs, and non-movable resources, e.g., those described via
* Enhanced Allocation.
*/
if (!res->flags)
return;
if (res->flags & IORESOURCE_UNSET)
return;
if (res->flags & IORESOURCE_PCI_FIXED)
return;
pcibios_resource_to_bus(dev->bus, &region, res);
new = region.start;
new |= res->flags & ~PCI_BASE_ADDRESS_MEM_MASK;
reg = iov->pos + PCI_SRIOV_BAR + 4 * vf_bar;
pci_write_config_dword(dev, reg, new);
if (res->flags & IORESOURCE_MEM_64) {
new = region.start >> 16 >> 16;
pci_write_config_dword(dev, reg + 4, new);
}
}
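A quick illustration of the 64-bit write-back above, using a made-up bus address; the double 16-bit shift mirrors the function, which cannot simply shift by 32 because region.start may be a 32-bit type on some configurations:
/* hypothetical 64-bit VF BAR placed at bus address 0x2_8000_0000 */
u64 start = 0x280000000ULL;
u32 lo = (u32)start;		/* 0x80000000, goes to PCI_SRIOV_BAR + 4 * vf_bar */
u32 hi = start >> 16 >> 16;	/* 0x00000002, goes to the following dword */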
resource_size_t __weak pcibios_iov_resource_alignment(struct pci_dev *dev,
......
......@@ -1302,7 +1302,8 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
} else if (dev->msi_enabled) {
struct msi_desc *entry = first_pci_msi_entry(dev);
if (WARN_ON_ONCE(!entry || !entry->affinity ||
nr >= entry->nvec_used))
return NULL;
return &entry->affinity[nr];
......
......@@ -29,6 +29,82 @@ const u8 pci_acpi_dsm_uuid[] = {
0x91, 0x17, 0xea, 0x4d, 0x19, 0xc3, 0x43, 0x4d
};
#if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
static int acpi_get_rc_addr(struct acpi_device *adev, struct resource *res)
{
struct device *dev = &adev->dev;
struct resource_entry *entry;
struct list_head list;
unsigned long flags;
int ret;
INIT_LIST_HEAD(&list);
flags = IORESOURCE_MEM;
ret = acpi_dev_get_resources(adev, &list,
acpi_dev_filter_resource_type_cb,
(void *) flags);
if (ret < 0) {
dev_err(dev, "failed to parse _CRS method, error code %d\n",
ret);
return ret;
}
if (ret == 0) {
dev_err(dev, "no IO and memory resources present in _CRS\n");
return -EINVAL;
}
entry = list_first_entry(&list, struct resource_entry, node);
*res = *entry->res;
acpi_dev_free_resource_list(&list);
return 0;
}
static acpi_status acpi_match_rc(acpi_handle handle, u32 lvl, void *context,
void **retval)
{
u16 *segment = context;
unsigned long long uid;
acpi_status status;
status = acpi_evaluate_integer(handle, "_UID", NULL, &uid);
if (ACPI_FAILURE(status) || uid != *segment)
return AE_CTRL_DEPTH;
*(acpi_handle *)retval = handle;
return AE_CTRL_TERMINATE;
}
int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
struct resource *res)
{
struct acpi_device *adev;
acpi_status status;
acpi_handle handle;
int ret;
status = acpi_get_devices(hid, acpi_match_rc, &segment, &handle);
if (ACPI_FAILURE(status)) {
dev_err(dev, "can't find _HID %s device to locate resources\n",
hid);
return -ENODEV;
}
ret = acpi_bus_get_device(handle, &adev);
if (ret)
return ret;
ret = acpi_get_rc_addr(adev, res);
if (ret) {
dev_err(dev, "can't get resource from %s\n",
dev_name(&adev->dev));
return ret;
}
return 0;
}
#endif
phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle)
{
acpi_status status = AE_NOT_EXIST;
......@@ -293,6 +369,30 @@ int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
}
EXPORT_SYMBOL_GPL(pci_get_hp_params);
/**
* pciehp_is_native - Check whether a hotplug port is handled by the OS
* @pdev: Hotplug port to check
*
* Walk up from @pdev to the host bridge, obtain its cached _OSC Control Field
* and return the value of the "PCI Express Native Hot Plug control" bit.
* On failure to obtain the _OSC Control Field return %false.
*/
bool pciehp_is_native(struct pci_dev *pdev)
{
struct acpi_pci_root *root;
acpi_handle handle;
handle = acpi_find_root_bridge_handle(pdev);
if (!handle)
return false;
root = acpi_pci_find_root(handle);
if (!root)
return false;
return root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
}
/**
* pci_acpi_wake_bus - Root bus wakeup notification fork function.
* @work: Work item to handle.
......
......@@ -54,7 +54,7 @@ static bool mid_pci_need_resume(struct pci_dev *dev)
return false;
}
static const struct pci_platform_pm_ops mid_pci_platform_pm = {
.is_manageable = mid_pci_power_manageable,
.set_state = mid_pci_set_power_state,
.get_state = mid_pci_get_power_state,
......
......@@ -50,6 +50,7 @@ pci_config_attr(vendor, "0x%04x\n");
pci_config_attr(device, "0x%04x\n");
pci_config_attr(subsystem_vendor, "0x%04x\n");
pci_config_attr(subsystem_device, "0x%04x\n");
pci_config_attr(revision, "0x%02x\n");
pci_config_attr(class, "0x%06x\n");
pci_config_attr(irq, "%u\n");
......@@ -568,6 +569,7 @@ static struct attribute *pci_dev_attrs[] = {
&dev_attr_device.attr,
&dev_attr_subsystem_vendor.attr,
&dev_attr_subsystem_device.attr,
&dev_attr_revision.attr,
&dev_attr_class.attr,
&dev_attr_irq.attr,
&dev_attr_local_cpus.attr,
......
......@@ -564,10 +564,6 @@ static void pci_restore_bars(struct pci_dev *dev)
{
int i;
/* Per SR-IOV spec 3.4.1.11, VF BARs are RO zero */
if (dev->is_virtfn)
return;
for (i = 0; i < PCI_BRIDGE_RESOURCES; i++)
pci_update_resource(dev, i);
}
......@@ -2106,6 +2102,10 @@ bool pci_dev_run_wake(struct pci_dev *dev)
if (!dev->pme_support)
return false;
/* PME-capable in principle, but not from the intended sleep state */
if (!pci_pme_capable(dev, pci_target_state(dev)))
return false;
while (bus->parent) {
struct pci_dev *bridge = bus->self;
......@@ -2226,7 +2226,7 @@ void pci_config_pm_runtime_put(struct pci_dev *pdev)
* This function checks if it is possible to move the bridge to D3.
* Currently we only allow D3 for recent enough PCIe ports.
*/
bool pci_bridge_d3_possible(struct pci_dev *bridge)
{
unsigned int year;
......@@ -2239,6 +2239,14 @@ static bool pci_bridge_d3_possible(struct pci_dev *bridge)
case PCI_EXP_TYPE_DOWNSTREAM:
if (pci_bridge_d3_disable)
return false;
/*
* Hotplug ports handled by firmware in System Management Mode
* may not be put into D3 by the OS (Thunderbolt on non-Macs).
*/
if (bridge->is_hotplug_bridge && !pciehp_is_native(bridge))
return false;
if (pci_bridge_d3_force)
return true;
......@@ -2259,32 +2267,36 @@ static bool pci_bridge_d3_possible(struct pci_dev *bridge)
static int pci_dev_check_d3cold(struct pci_dev *dev, void *data)
{
bool *d3cold_ok = data;
if (/* The device needs to be allowed to go D3cold ... */
dev->no_d3cold || !dev->d3cold_allowed ||
/* ... and if it is wakeup capable to do so from D3cold. */
(device_may_wakeup(&dev->dev) &&
!pci_pme_capable(dev, PCI_D3cold)) ||
/* If it is a bridge it must be allowed to go to D3. */
!pci_power_manageable(dev) ||
/* Hotplug interrupts cannot be delivered if the link is down. */
dev->is_hotplug_bridge)
*d3cold_ok = false;
return !*d3cold_ok;
}
/*
* pci_bridge_d3_update - Update bridge D3 capabilities
* @dev: PCI device which is changed
*
* Update upstream bridge PM capabilities accordingly depending on if the
* device PM configuration was changed or the device is being removed. The
* change is also propagated upstream.
*/
void pci_bridge_d3_update(struct pci_dev *dev)
{
bool remove = !device_is_registered(&dev->dev);
struct pci_dev *bridge;
bool d3cold_ok = true;
......@@ -2292,55 +2304,39 @@ static void pci_bridge_d3_update(struct pci_dev *dev, bool remove)
if (!bridge || !pci_bridge_d3_possible(bridge))
return;
/*
* If D3 is currently allowed for the bridge, removing one of its
* children won't change that.
*/
if (remove && bridge->bridge_d3)
return;
/*
* If D3 is currently allowed for the bridge and a child is added or
* changed, disallowance of D3 can only be caused by that child, so
* we only need to check that single device, not any of its siblings.
*
* If D3 is currently not allowed for the bridge, checking the device
* first may allow us to skip checking its siblings.
*/
if (!remove)
pci_dev_check_d3cold(dev, &d3cold_ok);
/*
* If D3 is currently not allowed for the bridge, this may be caused
* either by the device being changed/removed or any of its siblings,
* so we need to go through all children to find out if one of them
* continues to block D3.
*/
if (d3cold_ok && !bridge->bridge_d3)
pci_walk_bus(bridge->subordinate, pci_dev_check_d3cold,
&d3cold_ok);
if (bridge->bridge_d3 != d3cold_ok) {
bridge->bridge_d3 = d3cold_ok;
/* Propagate change to upstream bridges */
pci_bridge_d3_update(bridge);
}
}
/**
* pci_bridge_d3_device_changed - Update bridge D3 capabilities on change
* @dev: PCI device that was changed
*
* If a device is added or its PM configuration, such as is it allowed to
* enter D3cold, is changed this function updates upstream bridge PM
* capabilities accordingly.
*/
void pci_bridge_d3_device_changed(struct pci_dev *dev)
{
pci_bridge_d3_update(dev, false);
}
/**
* pci_bridge_d3_device_removed - Update bridge D3 capabilities on remove
* @dev: PCI device being removed
*
* Function updates upstream bridge PM capabilities based on other devices
* still left on the bus.
*/
void pci_bridge_d3_device_removed(struct pci_dev *dev)
{
pci_bridge_d3_update(dev, true);
}
/**
......@@ -2355,7 +2351,7 @@ void pci_d3cold_enable(struct pci_dev *dev)
{
if (dev->no_d3cold) {
dev->no_d3cold = false;
pci_bridge_d3_update(dev);
}
}
EXPORT_SYMBOL_GPL(pci_d3cold_enable);
......@@ -2372,7 +2368,7 @@ void pci_d3cold_disable(struct pci_dev *dev)
{
if (!dev->no_d3cold) {
dev->no_d3cold = true;
pci_bridge_d3_update(dev);
}
}
EXPORT_SYMBOL_GPL(pci_d3cold_disable);
......@@ -4831,36 +4827,6 @@ int pci_select_bars(struct pci_dev *dev, unsigned long flags)
}
EXPORT_SYMBOL(pci_select_bars);
/**
* pci_resource_bar - get position of the BAR associated with a resource
* @dev: the PCI device
* @resno: the resource number
* @type: the BAR type to be filled in
*
* Returns BAR position in config space, or 0 if the BAR is invalid.
*/
int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type)
{
int reg;
if (resno < PCI_ROM_RESOURCE) {
*type = pci_bar_unknown;
return PCI_BASE_ADDRESS_0 + 4 * resno;
} else if (resno == PCI_ROM_RESOURCE) {
*type = pci_bar_mem32;
return dev->rom_base_reg;
} else if (resno < PCI_BRIDGE_RESOURCES) {
/* device specific resource */
*type = pci_bar_unknown;
reg = pci_iov_resource_bar(dev, resno);
if (reg)
return reg;
}
dev_err(&dev->dev, "BAR %d: invalid resource\n", resno);
return 0;
}
/* Some architectures require additional programming to enable VGA */
static arch_set_vga_state_t arch_set_vga_state;
......
#ifndef DRIVERS_PCI_H
#define DRIVERS_PCI_H
#define PCI_CFG_SPACE_SIZE 256
#define PCI_CFG_SPACE_EXP_SIZE 4096
#define PCI_FIND_CAP_TTL 48
extern const unsigned char pcie_link_speed[];
......@@ -85,8 +82,8 @@ void pci_pm_init(struct pci_dev *dev);
void pci_ea_init(struct pci_dev *dev);
void pci_allocate_cap_save_buffers(struct pci_dev *dev);
void pci_free_cap_save_buffers(struct pci_dev *dev);
bool pci_bridge_d3_possible(struct pci_dev *dev);
void pci_bridge_d3_update(struct pci_dev *dev);
static inline void pci_wakeup_event(struct pci_dev *dev)
{
......@@ -245,7 +242,6 @@ bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl,
int pci_setup_device(struct pci_dev *dev);
int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
struct resource *res, unsigned int reg);
int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type);
void pci_configure_ari(struct pci_dev *dev);
void __pci_bus_size_bridges(struct pci_bus *bus,
struct list_head *realloc_head);
......@@ -289,7 +285,7 @@ static inline void pci_restore_ats_state(struct pci_dev *dev)
#ifdef CONFIG_PCI_IOV
int pci_iov_init(struct pci_dev *dev);
void pci_iov_release(struct pci_dev *dev);
void pci_iov_update_resource(struct pci_dev *dev, int resno);
resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno);
void pci_restore_iov_state(struct pci_dev *dev);
int pci_iov_bus_range(struct pci_bus *bus);
......@@ -303,10 +299,6 @@ static inline void pci_iov_release(struct pci_dev *dev)
{
}
static inline int pci_iov_resource_bar(struct pci_dev *dev, int resno)
{
return 0;
}
static inline void pci_restore_iov_state(struct pci_dev *dev)
{
}
......@@ -356,4 +348,9 @@ static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe)
}
#endif
#if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
struct resource *res);
#endif
#endif /* DRIVERS_PCI_H */
......@@ -30,13 +30,6 @@
#include "aerdrv.h"
#include "../../pci.h"
/*
* Version Information
*/
#define DRIVER_VERSION "v1.0"
#define DRIVER_AUTHOR "tom.l.nguyen@intel.com"
#define DRIVER_DESC "Root Port Advanced Error Reporting Driver"
static int aer_probe(struct pcie_device *dev);
static void aer_remove(struct pcie_device *dev);
static pci_ers_result_t aer_error_detected(struct pci_dev *dev,
......@@ -297,12 +290,12 @@ static int aer_probe(struct pcie_device *dev)
{
int status;
struct aer_rpc *rpc;
struct device *device = &dev->device;
struct device *device = &dev->port->dev;
/* Alloc rpc data structure */
rpc = aer_alloc_rpc(dev);
if (!rpc) {
dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n");
aer_remove(dev);
return -ENOMEM;
}
......@@ -310,7 +303,8 @@ static int aer_probe(struct pcie_device *dev)
/* Request IRQ ISR */
status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv", dev);
if (status) {
dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n",
dev->irq);
aer_remove(dev);
return status;
}
......@@ -318,8 +312,8 @@ static int aer_probe(struct pcie_device *dev)
rpc->isr = 1;
aer_enable_rootport(rpc);
dev_info(device, "AER enabled with IRQ %d\n", dev->irq);
return 0;
}
/**
......
......@@ -351,12 +351,26 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
return;
}
/* Get upstream/downstream components' register state */
pcie_get_aspm_reg(parent, &upreg);
child = list_entry(linkbus->devices.next, struct pci_dev, bus_list);
pcie_get_aspm_reg(child, &dwreg);
/*
* If ASPM not supported, don't mess with the clocks and link,
* bail out now.
*/
if (!(upreg.support & dwreg.support))
return;
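/*
 * Example: if the upstream port advertises L0s and L1 (support mask
 * 0x3) but the downstream device advertises neither (0x0), the AND
 * above is 0 and we bail out before touching the common clock setup.
 */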
/* Configure common clock before checking latencies */
pcie_aspm_configure_common_clock(link);
/*
* Re-read upstream/downstream components' register state
* after clock configuration
*/
pcie_get_aspm_reg(parent, &upreg);
child = list_entry(linkbus->devices.next, struct pci_dev, bus_list);
pcie_get_aspm_reg(child, &dwreg);
/*
......@@ -886,8 +900,8 @@ static ssize_t clk_ctl_store(struct device *dev,
return n;
}
static DEVICE_ATTR_RW(link_state);
static DEVICE_ATTR_RW(clk_ctl);
static char power_group[] = "power";
void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev)
......
......@@ -300,8 +300,6 @@ static irqreturn_t pcie_pme_irq(int irq, void *context)
*/
static int pcie_pme_set_native(struct pci_dev *dev, void *ign)
{
dev_info(&dev->dev, "Signaling PME through PCIe PME interrupt\n");
device_set_run_wake(&dev->dev, true);
dev->pme_interrupt = true;
return 0;
......@@ -319,23 +317,8 @@ static int pcie_pme_set_native(struct pci_dev *dev, void *ign)
static void pcie_pme_mark_devices(struct pci_dev *port)
{
pcie_pme_set_native(port, NULL);
if (port->subordinate)
pci_walk_bus(port->subordinate, pcie_pme_set_native, NULL);
}
/**
......@@ -364,12 +347,14 @@ static int pcie_pme_probe(struct pcie_device *srv)
ret = request_irq(srv->irq, pcie_pme_irq, IRQF_SHARED, "PCIe PME", srv);
if (ret) {
kfree(data);
return ret;
}
dev_info(&port->dev, "Signaling PME with IRQ %d\n", srv->irq);
pcie_pme_mark_devices(port);
pcie_pme_interrupt_enable(port, true);
return 0;
}
static bool pcie_pme_check_wakeup(struct pci_bus *bus)
......
......@@ -499,7 +499,6 @@ static int pcie_port_probe_service(struct device *dev)
if (status)
return status;
dev_printk(KERN_DEBUG, dev, "service driver %s loaded\n", driver->name);
get_device(dev);
return 0;
}
......@@ -524,8 +523,6 @@ static int pcie_port_remove_service(struct device *dev)
pciedev = to_pcie_device(dev);
driver = to_service_driver(dev->driver);
if (driver && driver->remove) {
dev_printk(KERN_DEBUG, dev, "unloading service driver %s\n",
driver->name);
driver->remove(pciedev);
put_device(dev);
}
......
......@@ -19,6 +19,7 @@
#include <linux/dmi.h>
#include <linux/pci-aspm.h>
#include "../pci.h"
#include "portdrv.h"
#include "aer/aerdrv.h"
......@@ -149,15 +150,7 @@ static int pcie_portdrv_probe(struct pci_dev *dev,
pci_save_state(dev);
if (pci_bridge_d3_possible(dev)) {
/*
* Keep the port resumed 100ms to make sure things like
* config space accesses from userspace (lspci) will not
......@@ -175,7 +168,7 @@ static int pcie_portdrv_probe(struct pci_dev *dev,
static void pcie_portdrv_remove(struct pci_dev *dev)
{
if (pci_bridge_d3_possible(dev)) {
pm_runtime_forbid(&dev->dev);
pm_runtime_get_noresume(&dev->dev);
pm_runtime_dont_use_autosuspend(&dev->dev);
......
This diff is collapsed.
This diff is collapsed.
......@@ -40,7 +40,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
list_del(&dev->bus_list);
up_write(&pci_bus_sem);
pci_bridge_d3_update(dev);
pci_free_resources(dev);
put_device(&dev->dev);
}
......
......@@ -35,6 +35,11 @@ int pci_enable_rom(struct pci_dev *pdev)
if (res->flags & IORESOURCE_ROM_SHADOW)
return 0;
/*
* Ideally pci_update_resource() would update the ROM BAR address,
* and we would only set the enable bit here. But apparently some
* devices have buggy ROM BARs that read as zero when disabled.
*/
pcibios_resource_to_bus(pdev->bus, &region, res);
pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
rom_addr &= ~PCI_ROM_ADDRESS_MASK;
......
This diff is collapsed.
......@@ -129,6 +129,10 @@ static int uhci_pci_init(struct usb_hcd *hcd)
if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_HP)
uhci->wait_for_hp = 1;
/* Intel controllers use non-PME wakeup signalling */
if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_INTEL)
device_set_run_wake(uhci_dev(uhci), 1);
/* Set up pointers to PCI-specific functions */
uhci->reset_hc = uhci_pci_reset_hc;
uhci->check_and_reset_hc = uhci_pci_check_and_reset_hc;
......
......@@ -31,8 +31,6 @@
#include "vfio_pci_private.h"
#define PCI_CFG_SPACE_SIZE 256
/* Fake capability ID for standard config space */
#define PCI_CAP_ID_BASIC 0
......
......@@ -437,6 +437,8 @@ static inline int acpi_dev_filter_resource_type_cb(struct acpi_resource *ares,
return acpi_dev_filter_resource_type(ares, (unsigned long)arg);
}
struct acpi_device *acpi_resource_consumer(struct resource *res);
int acpi_check_resource_conflict(const struct resource *res);
int acpi_check_region(resource_size_t start, resource_size_t n,
......@@ -787,6 +789,11 @@ static inline int acpi_reconfig_notifier_unregister(struct notifier_block *nb)
return -EINVAL;
}
static inline struct acpi_device *acpi_resource_consumer(struct resource *res)
{
return NULL;
}
#endif /* !CONFIG_ACPI */
#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
......
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.