Commit b9132c32 authored by Linus Torvalds

Merge tag 'cxl-for-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull CXL (Compute Express Link) updates from Dan Williams:
 "This development cycle extends the subsystem to discover CXL resources
  throughout a CXL/PCIe switch topology and respond to hot add/remove
  events anywhere in that topology.

  This is more foundational infrastructure in preparation for dynamic
  memory region provisioning support. Recall that CXL memory regions, as
  the new "Theory of Operation" section of
  Documentation/driver-api/cxl/memory-devices.rst describes, bring
  storage volume striping semantics to memory.

  The hot add/remove behavior is validated with extensions to the
  cxl_test unit test environment and this test in the cxl-cli test
  suite:

      https://github.com/pmem/ndctl/blob/djbw/for-74/cxl/test/cxl-topology.sh

  Summary:

   - Add a driver for 'struct cxl_memdev' objects responsible for
     CXL.mem operation as distinct from 'cxl_pci' mailbox operations.

     Its primary responsibility is enumerating an endpoint 'struct
     cxl_port' and all the 'struct cxl_port' instances between an
     endpoint and the CXL platform root.

   - Add a driver for 'struct cxl_port' objects responsible for
     enumerating and operating all Host-managed Device Memory (HDM)
     decoder resources between the platform-level CXL memory
     description, all intervening host bridges / switches, and the HDM
     resources in endpoints.

   - Update the cxl_pci driver to validate CXL.mem operation precursors
     to HDM decoder operation like ready-polling, and legacy CXL 1.1
     DVSEC based CXL.mem configuration.

   - Add basic lockdep coverage for usage of device_lock() on CXL
     subsystem objects similar to what exists for LIBNVDIMM. Include a
     compile-time switch for which subsystem to validate at run-time.

   - Update cxl_test to emulate a one level switch topology.

   - Document a "Theory of Operation" for the subsystem.

   - Add 'numa_node' and 'serial' attributes to cxl_memdev sysfs

   - Include miscellaneous fixes for spec / QEMU CXL emulation
     compatibility and static analysis reports"

* tag 'cxl-for-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (48 commits)
  cxl/core/port: Fix NULL but dereferenced coccicheck error
  cxl/port: Hold port reference until decoder release
  cxl/port: Fix endpoint refcount leak
  cxl/core: Fix cxl_device_lock() class detection
  cxl/core/port: Fix unregister_port() lock assertion
  cxl/regs: Fix size of CXL Capability Header Register
  cxl/core/port: Handle invalid decoders
  cxl/core/port: Fix / relax decoder target enumeration
  tools/testing/cxl: Add a physical_node link
  tools/testing/cxl: Enumerate mock decoders
  tools/testing/cxl: Mock one level of switches
  tools/testing/cxl: Fix root port to host bridge assignment
  tools/testing/cxl: Mock dvsec_ranges()
  cxl/core/port: Add endpoint decoders
  cxl/core: Move target_list out of base decoder attributes
  cxl/mem: Add the cxl_mem driver
  cxl/core/port: Add switch port enumeration
  cxl/memdev: Add numa_node attribute
  cxl/pci: Emit device serial number
  cxl/pci: Implement wait for media active
  ...
parents b14ffae3 05e81553
What: /sys/bus/cxl/flush
Date: January, 2022
KernelVersion: v5.18
Contact: linux-cxl@vger.kernel.org
Description:
(WO) If userspace manually unbinds a port the kernel schedules
all descendant memdevs for unbind. Writing '1' to this attribute
flushes that work.
What: /sys/bus/cxl/devices/memX/firmware_version
Date: December, 2020
KernelVersion: v5.12
...@@ -25,6 +34,24 @@ Description:
identically named field in the Identify Memory Device Output
Payload in the CXL-2.0 specification.
What: /sys/bus/cxl/devices/memX/serial
Date: January, 2022
KernelVersion: v5.18
Contact: linux-cxl@vger.kernel.org
Description:
(RO) 64-bit serial number per the PCIe Device Serial Number
capability. Mandatory for CXL devices, see CXL 2.0 8.1.12.2
Memory Device PCIe Capabilities and Extended Capabilities.
What: /sys/bus/cxl/devices/memX/numa_node
Date: January, 2022
KernelVersion: v5.18
Contact: linux-cxl@vger.kernel.org
Description:
(RO) If NUMA is enabled and the platform has affinitized the
host PCI device for this memory device, emit the CPU node
affinity for this device.
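For illustration only (not part of the ABI file): a minimal user-space sketch that reads the new serial and numa_node attributes. The mem0 device name is a hypothetical example; real tooling would enumerate /sys/bus/cxl/devices instead of hard-coding it.

#include <stdio.h>

static void print_attr(const char *path)
{
	char buf[64];
	FILE *f = fopen(path, "r");

	if (!f)
		return;
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	fclose(f);
}

int main(void)
{
	/* "mem0" is a hypothetical memdev name */
	print_attr("/sys/bus/cxl/devices/mem0/serial");
	print_attr("/sys/bus/cxl/devices/mem0/numa_node");
	return 0;
}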
What: /sys/bus/cxl/devices/*/devtype
Date: June, 2021
KernelVersion: v5.14
...@@ -34,6 +61,15 @@ Description:
the same value communicated in the DEVTYPE environment variable
for uevents for devices on the "cxl" bus.
What: /sys/bus/cxl/devices/*/modalias
Date: December, 2021
KernelVersion: v5.18
Contact: linux-cxl@vger.kernel.org
Description:
CXL device objects export the modalias attribute which mirrors
the same value communicated in the MODALIAS environment variable
for uevents for devices on the "cxl" bus.
What: /sys/bus/cxl/devices/portX/uport
Date: June, 2021
KernelVersion: v5.14
...
...@@ -13,25 +13,26 @@ menuconfig CXL_BUS
if CXL_BUS
-config CXL_MEM
config CXL_PCI
-tristate "CXL.mem: Memory Devices"
tristate "PCI manageability"
default CXL_BUS
help
-The CXL.mem protocol allows a device to act as a provider of
-"System RAM" and/or "Persistent Memory" that is fully coherent
-as if the memory was attached to the typical CPU memory
-controller.
The CXL specification defines a "CXL memory device" sub-class in the
PCI "memory controller" base class of devices. Device's identified by
this class code provide support for volatile and / or persistent
memory to be mapped into the system address map (Host-managed Device
Memory (HDM)).
-Say 'y/m' to enable a driver that will attach to CXL.mem devices for
-configuration and management primarily via the mailbox interface. See
-Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification for more
-details.
Say 'y/m' to enable a driver that will attach to CXL memory expander
devices enumerated by the memory device class code for configuration
and management primarily via the mailbox interface. See Chapter 2.3
Type 3 CXL Device in the CXL 2.0 specification for more details.
If unsure say 'm'.
config CXL_MEM_RAW_COMMANDS
bool "RAW Command Interface for Memory Devices"
-depends on CXL_MEM
depends on CXL_PCI
help
Enable CXL RAW command interface.
...@@ -76,4 +77,25 @@ config CXL_PMEM
provisioning the persistent memory capacity of CXL memory expanders.
If unsure say 'm'.
config CXL_MEM
tristate "CXL: Memory Expansion"
depends on CXL_PCI
default CXL_BUS
help
The CXL.mem protocol allows a device to act as a provider of "System
RAM" and/or "Persistent Memory" that is fully coherent as if the
memory were attached to the typical CPU memory controller. This is
known as HDM "Host-managed Device Memory".
Say 'y/m' to enable a driver that will attach to CXL.mem devices for
memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
specification for a detailed description of HDM.
If unsure say 'm'.
config CXL_PORT
default CXL_BUS
tristate
endif
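As a hedged illustration of the new option split (not part of the diff): a .config fragment that builds the subsystem as modules. Per the Kconfig above, CXL_MEM and CXL_PORT default to the value of CXL_BUS once CXL_PCI is enabled, so listing them explicitly is only for clarity.

CONFIG_CXL_BUS=m
CONFIG_CXL_PCI=m
CONFIG_CXL_MEM=m
CONFIG_CXL_PORT=m
CONFIG_CXL_ACPI=m
CONFIG_CXL_PMEM=m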
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_CXL_BUS) += core/
-obj-$(CONFIG_CXL_MEM) += cxl_pci.o
obj-$(CONFIG_CXL_PCI) += cxl_pci.o
obj-$(CONFIG_CXL_MEM) += cxl_mem.o
obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o
obj-$(CONFIG_CXL_PORT) += cxl_port.o
cxl_mem-y := mem.o
cxl_pci-y := pci.o
cxl_acpi-y := acpi.o
cxl_pmem-y := pmem.o
cxl_port-y := port.o
...@@ -6,6 +6,7 @@
#include <linux/kernel.h>
#include <linux/acpi.h>
#include <linux/pci.h>
#include "cxlpci.h"
#include "cxl.h"
/* Encode defined in CXL 2.0 8.2.5.12.7 HDM Decoder Control Register */
...@@ -14,7 +15,7 @@
static unsigned long cfmws_to_decoder_flags(int restrictions)
{
-unsigned long flags = 0;
unsigned long flags = CXL_DECODER_F_ENABLE;
if (restrictions & ACPI_CEDT_CFMWS_RESTRICT_TYPE2)
flags |= CXL_DECODER_F_TYPE2;
...@@ -101,16 +102,14 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
for (i = 0; i < CFMWS_INTERLEAVE_WAYS(cfmws); i++)
target_map[i] = cfmws->interleave_targets[i];
-cxld = cxl_decoder_alloc(root_port, CFMWS_INTERLEAVE_WAYS(cfmws));
cxld = cxl_root_decoder_alloc(root_port, CFMWS_INTERLEAVE_WAYS(cfmws));
if (IS_ERR(cxld))
return 0;
cxld->flags = cfmws_to_decoder_flags(cfmws->restrictions);
cxld->target_type = CXL_DECODER_EXPANDER;
-cxld->range = (struct range){
-.start = cfmws->base_hpa,
-.end = cfmws->base_hpa + cfmws->window_size - 1,
-};
cxld->platform_res = (struct resource)DEFINE_RES_MEM(cfmws->base_hpa,
cfmws->window_size);
cxld->interleave_ways = CFMWS_INTERLEAVE_WAYS(cfmws);
cxld->interleave_granularity = CFMWS_INTERLEAVE_GRANULARITY(cfmws);
...@@ -120,67 +119,17 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
else
rc = cxl_decoder_autoremove(dev, cxld);
if (rc) {
-dev_err(dev, "Failed to add decoder for %#llx-%#llx\n",
-cfmws->base_hpa,
-cfmws->base_hpa + cfmws->window_size - 1);
dev_err(dev, "Failed to add decoder for %pr\n",
&cxld->platform_res);
return 0;
}
-dev_dbg(dev, "add: %s node: %d range %#llx-%#llx\n",
-dev_name(&cxld->dev), phys_to_target_node(cxld->range.start),
-cfmws->base_hpa, cfmws->base_hpa + cfmws->window_size - 1);
dev_dbg(dev, "add: %s node: %d range %pr\n", dev_name(&cxld->dev),
phys_to_target_node(cxld->platform_res.start),
&cxld->platform_res);
return 0;
}
__mock int match_add_root_ports(struct pci_dev *pdev, void *data)
{
struct cxl_walk_context *ctx = data;
struct pci_bus *root_bus = ctx->root;
struct cxl_port *port = ctx->port;
int type = pci_pcie_type(pdev);
struct device *dev = ctx->dev;
u32 lnkcap, port_num;
int rc;
if (pdev->bus != root_bus)
return 0;
if (!pci_is_pcie(pdev))
return 0;
if (type != PCI_EXP_TYPE_ROOT_PORT)
return 0;
if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
&lnkcap) != PCIBIOS_SUCCESSFUL)
return 0;
/* TODO walk DVSEC to find component register base */
port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
if (rc) {
ctx->error = rc;
return rc;
}
ctx->count++;
dev_dbg(dev, "add dport%d: %s\n", port_num, dev_name(&pdev->dev));
return 0;
}
static struct cxl_dport *find_dport_by_dev(struct cxl_port *port, struct device *dev)
{
struct cxl_dport *dport;
device_lock(&port->dev);
list_for_each_entry(dport, &port->dports, list)
if (dport->dport == dev) {
device_unlock(&port->dev);
return dport;
}
device_unlock(&port->dev);
return NULL;
}
__mock struct acpi_device *to_cxl_host_bridge(struct device *host,
struct device *dev)
{
...@@ -204,83 +153,35 @@ static int add_host_bridge_uport(struct device *match, void *arg)
struct device *host = root_port->dev.parent;
struct acpi_device *bridge = to_cxl_host_bridge(host, match);
struct acpi_pci_root *pci_root;
-struct cxl_walk_context ctx;
-int single_port_map[1], rc;
-struct cxl_decoder *cxld;
struct cxl_dport *dport;
struct cxl_port *port;
int rc;
if (!bridge)
return 0;
-dport = find_dport_by_dev(root_port, match);
dport = cxl_find_dport_by_dev(root_port, match);
if (!dport) {
dev_dbg(host, "host bridge expected and not found\n");
return 0;
}
port = devm_cxl_add_port(host, match, dport->component_reg_phys,
root_port);
if (IS_ERR(port))
return PTR_ERR(port);
dev_dbg(host, "%s: add: %s\n", dev_name(match), dev_name(&port->dev));
/*
* Note that this lookup already succeeded in
* to_cxl_host_bridge(), so no need to check for failure here
*/
pci_root = acpi_pci_find_root(bridge->handle);
-ctx = (struct cxl_walk_context){
rc = devm_cxl_register_pci_bus(host, match, pci_root->bus);
.dev = host,
.root = pci_root->bus,
.port = port,
};
pci_walk_bus(pci_root->bus, match_add_root_ports, &ctx);
if (ctx.count == 0)
return -ENODEV;
if (ctx.error)
return ctx.error;
if (ctx.count > 1)
return 0;
/* TODO: Scan CHBCR for HDM Decoder resources */
/*
* Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability
* Structure) single ported host-bridges need not publish a decoder
* capability when a passthrough decode can be assumed, i.e. all
* transactions that the uport sees are claimed and passed to the single
* dport. Disable the range until the first CXL region is enumerated /
* activated.
*/
cxld = cxl_decoder_alloc(port, 1);
if (IS_ERR(cxld))
return PTR_ERR(cxld);
cxld->interleave_ways = 1;
cxld->interleave_granularity = PAGE_SIZE;
cxld->target_type = CXL_DECODER_EXPANDER;
cxld->range = (struct range) {
.start = 0,
.end = -1,
};
device_lock(&port->dev);
dport = list_first_entry(&port->dports, typeof(*dport), list);
device_unlock(&port->dev);
single_port_map[0] = dport->port_id;
rc = cxl_decoder_add(cxld, single_port_map);
if (rc)
-put_device(&cxld->dev);
return rc;
-else
-rc = cxl_decoder_autoremove(host, cxld);
port = devm_cxl_add_port(host, match, dport->component_reg_phys,
root_port);
if (IS_ERR(port))
return PTR_ERR(port);
dev_dbg(host, "%s: add: %s\n", dev_name(match), dev_name(&port->dev));
-if (rc == 0)
return 0;
-dev_dbg(host, "add: %s\n", dev_name(&cxld->dev));
-return rc;
}
struct cxl_chbs_context {
...@@ -309,9 +210,9 @@ static int cxl_get_chbcr(union acpi_subtable_headers *header, void *arg,
static int add_host_bridge_dport(struct device *match, void *arg)
{
-int rc;
acpi_status status;
unsigned long long uid;
struct cxl_dport *dport;
struct cxl_chbs_context ctx;
struct cxl_port *root_port = arg;
struct device *host = root_port->dev.parent;
...@@ -340,11 +241,11 @@ static int add_host_bridge_dport(struct device *match, void *arg)
return 0;
}
-rc = cxl_add_dport(root_port, match, uid, ctx.chbcr);
dport = devm_cxl_add_dport(root_port, match, uid, ctx.chbcr);
-if (rc) {
if (IS_ERR(dport)) {
dev_err(host, "failed to add downstream port: %s\n",
dev_name(match));
-return rc;
return PTR_ERR(dport);
}
dev_dbg(host, "add dport%llu: %s\n", uid, dev_name(match));
return 0;
...@@ -413,7 +314,8 @@ static int cxl_acpi_probe(struct platform_device *pdev)
if (rc < 0)
return rc;
-return 0;
/* In case PCI is scanned before ACPI re-trigger memdev attach */
return cxl_bus_rescan();
}
static const struct acpi_device_id cxl_acpi_ids[] = {
...
...@@ -2,8 +2,10 @@
obj-$(CONFIG_CXL_BUS) += cxl_core.o
ccflags-y += -I$(srctree)/drivers/cxl
-cxl_core-y := bus.o
cxl_core-y := port.o
cxl_core-y += pmem.o
cxl_core-y += regs.o
cxl_core-y += memdev.o
cxl_core-y += mbox.o
cxl_core-y += pci.o
cxl_core-y += hdm.o
...@@ -14,6 +14,8 @@ struct cxl_mem_query_commands;
int cxl_query_cmd(struct cxl_memdev *cxlmd,
struct cxl_mem_query_commands __user *q);
int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s);
void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr,
resource_size_t length);
int cxl_memdev_init(void);
void cxl_memdev_exit(void);
...
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
#include <linux/io-64-nonatomic-hi-lo.h>
#include <linux/device.h>
#include <linux/delay.h>
#include "cxlmem.h"
#include "core.h"
/**
* DOC: cxl core hdm
*
* Compute Express Link Host Managed Device Memory, starting with the
* CXL 2.0 specification, is managed by an array of HDM Decoder register
* instances per CXL port and per CXL endpoint. Define common helpers
* for enumerating these registers and capabilities.
*/
static int add_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
int *target_map)
{
int rc;
rc = cxl_decoder_add_locked(cxld, target_map);
if (rc) {
put_device(&cxld->dev);
dev_err(&port->dev, "Failed to add decoder\n");
return rc;
}
rc = cxl_decoder_autoremove(&port->dev, cxld);
if (rc)
return rc;
dev_dbg(&cxld->dev, "Added to port %s\n", dev_name(&port->dev));
return 0;
}
/*
* Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability Structure)
* single ported host-bridges need not publish a decoder capability when a
* passthrough decode can be assumed, i.e. all transactions that the uport sees
* are claimed and passed to the single dport. Disable the range until the first
* CXL region is enumerated / activated.
*/
int devm_cxl_add_passthrough_decoder(struct cxl_port *port)
{
struct cxl_decoder *cxld;
struct cxl_dport *dport;
int single_port_map[1];
cxld = cxl_switch_decoder_alloc(port, 1);
if (IS_ERR(cxld))
return PTR_ERR(cxld);
device_lock_assert(&port->dev);
dport = list_first_entry(&port->dports, typeof(*dport), list);
single_port_map[0] = dport->port_id;
return add_hdm_decoder(port, cxld, single_port_map);
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_add_passthrough_decoder, CXL);
static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
{
u32 hdm_cap;
hdm_cap = readl(cxlhdm->regs.hdm_decoder + CXL_HDM_DECODER_CAP_OFFSET);
cxlhdm->decoder_count = cxl_hdm_decoder_count(hdm_cap);
cxlhdm->target_count =
FIELD_GET(CXL_HDM_DECODER_TARGET_COUNT_MASK, hdm_cap);
if (FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_11_8, hdm_cap))
cxlhdm->interleave_mask |= GENMASK(11, 8);
if (FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_14_12, hdm_cap))
cxlhdm->interleave_mask |= GENMASK(14, 12);
}
static void __iomem *map_hdm_decoder_regs(struct cxl_port *port,
void __iomem *crb)
{
struct cxl_component_reg_map map;
cxl_probe_component_regs(&port->dev, crb, &map);
if (!map.hdm_decoder.valid) {
dev_err(&port->dev, "HDM decoder registers invalid\n");
return IOMEM_ERR_PTR(-ENXIO);
}
return crb + map.hdm_decoder.offset;
}
/**
* devm_cxl_setup_hdm - map HDM decoder component registers
* @port: cxl_port to map
*/
struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port)
{
struct device *dev = &port->dev;
void __iomem *crb, *hdm;
struct cxl_hdm *cxlhdm;
cxlhdm = devm_kzalloc(dev, sizeof(*cxlhdm), GFP_KERNEL);
if (!cxlhdm)
return ERR_PTR(-ENOMEM);
cxlhdm->port = port;
crb = devm_cxl_iomap_block(dev, port->component_reg_phys,
CXL_COMPONENT_REG_BLOCK_SIZE);
if (!crb) {
dev_err(dev, "No component registers mapped\n");
return ERR_PTR(-ENXIO);
}
hdm = map_hdm_decoder_regs(port, crb);
if (IS_ERR(hdm))
return ERR_CAST(hdm);
cxlhdm->regs.hdm_decoder = hdm;
parse_hdm_decoder_caps(cxlhdm);
if (cxlhdm->decoder_count == 0) {
dev_err(dev, "Spec violation. Caps invalid\n");
return ERR_PTR(-ENXIO);
}
return cxlhdm;
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_setup_hdm, CXL);
static int to_interleave_granularity(u32 ctrl)
{
int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IG_MASK, ctrl);
return 256 << val;
}
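/*
* CXL 2.0 8.2.5.12.7: interleave ways encodings 0..4 are the power-of-2
* configurations (1, 2, 4, 8 and 16 ways) and encodings 8..10 are the 3, 6
* and 12 way configurations; anything else is treated as invalid below.
*/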
static int to_interleave_ways(u32 ctrl)
{
int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IW_MASK, ctrl);
switch (val) {
case 0 ... 4:
return 1 << val;
case 8 ... 10:
return 3 << (val - 8);
default:
return 0;
}
}
static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
int *target_map, void __iomem *hdm, int which)
{
u64 size, base;
u32 ctrl;
int i;
union {
u64 value;
unsigned char target_id[8];
} target_list;
ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(which));
base = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(which));
size = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(which));
if (!(ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED))
size = 0;
if (base == U64_MAX || size == U64_MAX) {
dev_warn(&port->dev, "decoder%d.%d: Invalid resource range\n",
port->id, cxld->id);
return -ENXIO;
}
cxld->decoder_range = (struct range) {
.start = base,
.end = base + size - 1,
};
/* switch decoders are always enabled if committed */
if (ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED) {
cxld->flags |= CXL_DECODER_F_ENABLE;
if (ctrl & CXL_HDM_DECODER0_CTRL_LOCK)
cxld->flags |= CXL_DECODER_F_LOCK;
}
cxld->interleave_ways = to_interleave_ways(ctrl);
if (!cxld->interleave_ways) {
dev_warn(&port->dev,
"decoder%d.%d: Invalid interleave ways (ctrl: %#x)\n",
port->id, cxld->id, ctrl);
return -ENXIO;
}
cxld->interleave_granularity = to_interleave_granularity(ctrl);
if (FIELD_GET(CXL_HDM_DECODER0_CTRL_TYPE, ctrl))
cxld->target_type = CXL_DECODER_EXPANDER;
else
cxld->target_type = CXL_DECODER_ACCELERATOR;
if (is_cxl_endpoint(to_cxl_port(cxld->dev.parent)))
return 0;
target_list.value =
ioread64_hi_lo(hdm + CXL_HDM_DECODER0_TL_LOW(which));
for (i = 0; i < cxld->interleave_ways; i++)
target_map[i] = target_list.target_id[i];
return 0;
}
/**
* devm_cxl_enumerate_decoders - add decoder objects per HDM register set
* @cxlhdm: Structure to populate with HDM capabilities
*/
int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm)
{
void __iomem *hdm = cxlhdm->regs.hdm_decoder;
struct cxl_port *port = cxlhdm->port;
int i, committed, failed;
u32 ctrl;
/*
* Since the register resource was recently claimed via request_region()
* be careful about trusting the "not-committed" status until the commit
* timeout has elapsed. The commit timeout is 10ms (CXL 2.0
* 8.2.5.12.20), but double it to be tolerant of any clock skew between
* host and target.
*/
for (i = 0, committed = 0; i < cxlhdm->decoder_count; i++) {
ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i));
if (ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED)
committed++;
}
/* ensure that future checks of committed can be trusted */
if (committed != cxlhdm->decoder_count)
msleep(20);
for (i = 0, failed = 0; i < cxlhdm->decoder_count; i++) {
int target_map[CXL_DECODER_MAX_INTERLEAVE] = { 0 };
int rc, target_count = cxlhdm->target_count;
struct cxl_decoder *cxld;
if (is_cxl_endpoint(port))
cxld = cxl_endpoint_decoder_alloc(port);
else
cxld = cxl_switch_decoder_alloc(port, target_count);
if (IS_ERR(cxld)) {
dev_warn(&port->dev,
"Failed to allocate the decoder\n");
return PTR_ERR(cxld);
}
rc = init_hdm_decoder(port, cxld, target_map,
cxlhdm->regs.hdm_decoder, i);
if (rc) {
put_device(&cxld->dev);
failed++;
continue;
}
rc = add_hdm_decoder(port, cxld, target_map);
if (rc) {
dev_warn(&port->dev,
"Failed to add decoder to port\n");
return rc;
}
}
if (failed == cxlhdm->decoder_count) {
dev_err(&port->dev, "No valid decoders found\n");
return -ENXIO;
}
return 0;
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_enumerate_decoders, CXL);
...@@ -89,10 +89,29 @@ static ssize_t pmem_size_show(struct device *dev, struct device_attribute *attr,
static struct device_attribute dev_attr_pmem_size =
__ATTR(size, 0444, pmem_size_show, NULL);
static ssize_t serial_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
return sysfs_emit(buf, "%#llx\n", cxlds->serial);
}
static DEVICE_ATTR_RO(serial);
static ssize_t numa_node_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", dev_to_node(dev));
}
static DEVICE_ATTR_RO(numa_node);
static struct attribute *cxl_memdev_attributes[] = {
&dev_attr_serial.attr,
&dev_attr_firmware_version.attr,
&dev_attr_payload_max.attr,
&dev_attr_label_storage_size.attr,
&dev_attr_numa_node.attr,
NULL,
};
...@@ -106,8 +125,17 @@ static struct attribute *cxl_memdev_ram_attributes[] = {
NULL,
};
static umode_t cxl_memdev_visible(struct kobject *kobj, struct attribute *a,
int n)
{
if (!IS_ENABLED(CONFIG_NUMA) && a == &dev_attr_numa_node.attr)
return 0;
return a->mode;
}
static struct attribute_group cxl_memdev_attribute_group = {
.attrs = cxl_memdev_attributes,
.is_visible = cxl_memdev_visible,
};
static struct attribute_group cxl_memdev_ram_attribute_group = {
...@@ -134,6 +162,12 @@ static const struct device_type cxl_memdev_type = {
.groups = cxl_memdev_attribute_groups,
};
bool is_cxl_memdev(struct device *dev)
{
return dev->type == &cxl_memdev_type;
}
EXPORT_SYMBOL_NS_GPL(is_cxl_memdev, CXL);
/**
* set_exclusive_cxl_commands() - atomically disable user cxl commands
* @cxlds: The device state to operate on
...@@ -185,6 +219,15 @@ static void cxl_memdev_unregister(void *_cxlmd)
put_device(dev);
}
static void detach_memdev(struct work_struct *work)
{
struct cxl_memdev *cxlmd;
cxlmd = container_of(work, typeof(*cxlmd), detach_work);
device_release_driver(&cxlmd->dev);
put_device(&cxlmd->dev);
}
static struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds,
const struct file_operations *fops)
{
...@@ -209,6 +252,7 @@ static struct cxl_memdev *cxl_memdev_alloc(struct cxl_dev_state *cxlds,
dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
dev->type = &cxl_memdev_type;
device_set_pm_not_required(dev);
INIT_WORK(&cxlmd->detach_work, detach_memdev);
cdev = &cxlmd->cdev;
cdev_init(cdev, fops);
...
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
#include <linux/device.h>
#include <linux/pci.h>
#include <cxlpci.h>
#include <cxl.h>
#include "core.h"
/**
* DOC: cxl core pci
*
* Compute Express Link protocols are layered on top of PCIe. CXL core provides
* a set of helpers for CXL interactions which occur via PCIe.
*/
struct cxl_walk_context {
struct pci_bus *bus;
struct cxl_port *port;
int type;
int error;
int count;
};
static int match_add_dports(struct pci_dev *pdev, void *data)
{
struct cxl_walk_context *ctx = data;
struct cxl_port *port = ctx->port;
int type = pci_pcie_type(pdev);
struct cxl_register_map map;
struct cxl_dport *dport;
u32 lnkcap, port_num;
int rc;
if (pdev->bus != ctx->bus)
return 0;
if (!pci_is_pcie(pdev))
return 0;
if (type != ctx->type)
return 0;
if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
&lnkcap))
return 0;
rc = cxl_find_regblock(pdev, CXL_REGLOC_RBI_COMPONENT, &map);
if (rc)
dev_dbg(&port->dev, "failed to find component registers\n");
port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
dport = devm_cxl_add_dport(port, &pdev->dev, port_num,
cxl_regmap_to_base(pdev, &map));
if (IS_ERR(dport)) {
ctx->error = PTR_ERR(dport);
return PTR_ERR(dport);
}
ctx->count++;
dev_dbg(&port->dev, "add dport%d: %s\n", port_num, dev_name(&pdev->dev));
return 0;
}
/**
* devm_cxl_port_enumerate_dports - enumerate downstream ports of the upstream port
* @port: cxl_port whose ->uport is the upstream of dports to be enumerated
*
* Returns a positive number of dports enumerated or a negative error
* code.
*/
int devm_cxl_port_enumerate_dports(struct cxl_port *port)
{
struct pci_bus *bus = cxl_port_to_pci_bus(port);
struct cxl_walk_context ctx;
int type;
if (!bus)
return -ENXIO;
if (pci_is_root_bus(bus))
type = PCI_EXP_TYPE_ROOT_PORT;
else
type = PCI_EXP_TYPE_DOWNSTREAM;
ctx = (struct cxl_walk_context) {
.port = port,
.bus = bus,
.type = type,
};
pci_walk_bus(bus, match_add_dports, &ctx);
if (ctx.count == 0)
return -ENODEV;
if (ctx.error)
return ctx.error;
return ctx.count;
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, CXL);
...@@ -57,24 +57,30 @@ bool is_cxl_nvdimm_bridge(struct device *dev)
}
EXPORT_SYMBOL_NS_GPL(is_cxl_nvdimm_bridge, CXL);
-__mock int match_nvdimm_bridge(struct device *dev, const void *data)
static int match_nvdimm_bridge(struct device *dev, void *data)
{
return is_cxl_nvdimm_bridge(dev);
}
struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_nvdimm *cxl_nvd)
{
struct cxl_port *port = find_cxl_root(&cxl_nvd->dev);
struct device *dev;
-dev = bus_find_device(&cxl_bus_type, NULL, cxl_nvd, match_nvdimm_bridge);
if (!port)
return NULL;
dev = device_find_child(&port->dev, NULL, match_nvdimm_bridge);
put_device(&port->dev);
if (!dev)
return NULL;
return to_cxl_nvdimm_bridge(dev);
}
EXPORT_SYMBOL_NS_GPL(cxl_find_nvdimm_bridge, CXL);
-static struct cxl_nvdimm_bridge *
-cxl_nvdimm_bridge_alloc(struct cxl_port *port)
static struct cxl_nvdimm_bridge *cxl_nvdimm_bridge_alloc(struct cxl_port *port)
{
struct cxl_nvdimm_bridge *cxl_nvb;
struct device *dev;
...@@ -115,10 +121,10 @@ static void unregister_nvb(void *_cxl_nvb)
* work to flush. Once the state has been changed to 'dead' then no new
* work can be queued by user-triggered bind.
*/
-device_lock(&cxl_nvb->dev);
cxl_device_lock(&cxl_nvb->dev);
flush = cxl_nvb->state != CXL_NVB_NEW;
cxl_nvb->state = CXL_NVB_DEAD;
-device_unlock(&cxl_nvb->dev);
cxl_device_unlock(&cxl_nvb->dev);
/*
* Even though the device core will trigger device_release_driver()
...
...@@ -5,6 +5,7 @@
#include <linux/slab.h>
#include <linux/pci.h>
#include <cxlmem.h>
#include <cxlpci.h>
/**
* DOC: cxl registers
...@@ -35,7 +36,7 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
struct cxl_component_reg_map *map)
{
int cap, cap_count;
-u64 cap_array;
u32 cap_array;
*map = (struct cxl_component_reg_map) { 0 };
...@@ -45,11 +46,11 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
*/
base += CXL_CM_OFFSET;
-cap_array = readq(base + CXL_CM_CAP_HDR_OFFSET);
cap_array = readl(base + CXL_CM_CAP_HDR_OFFSET);
if (FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, cap_array) != CM_CAP_HDR_CAP_ID) {
dev_err(dev,
-"Couldn't locate the CXL.cache and CXL.mem capability array header./n");
"Couldn't locate the CXL.cache and CXL.mem capability array header.\n");
return;
}
...@@ -158,9 +159,8 @@ void cxl_probe_device_regs(struct device *dev, void __iomem *base,
}
EXPORT_SYMBOL_NS_GPL(cxl_probe_device_regs, CXL);
-static void __iomem *devm_cxl_iomap_block(struct device *dev,
-resource_size_t addr,
-resource_size_t length)
void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr,
resource_size_t length)
{
void __iomem *ret_val;
struct resource *res;
...@@ -247,3 +247,58 @@ int cxl_map_device_regs(struct pci_dev *pdev,
return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_map_device_regs, CXL);
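/*
* Register Locator DVSEC entries are a pair of dwords (CXL 2.0 8.1.9): the
* low dword carries the BAR indicator (bits 2:0), the Register Block
* Identifier (bits 15:8) and offset bits 31:16, while the high dword holds
* the upper 32 bits of the block offset.
*/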
static void cxl_decode_regblock(u32 reg_lo, u32 reg_hi,
struct cxl_register_map *map)
{
map->block_offset = ((u64)reg_hi << 32) |
(reg_lo & CXL_DVSEC_REG_LOCATOR_BLOCK_OFF_LOW_MASK);
map->barno = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BIR_MASK, reg_lo);
map->reg_type = FIELD_GET(CXL_DVSEC_REG_LOCATOR_BLOCK_ID_MASK, reg_lo);
}
/**
* cxl_find_regblock() - Locate register blocks by type
* @pdev: The CXL PCI device to enumerate.
* @type: Register Block Indicator id
* @map: Enumeration output, clobbered on error
*
* Return: 0 if register block enumerated, negative error code otherwise
*
* A CXL DVSEC may point to one or more register blocks, search for them
* by @type.
*/
int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type,
struct cxl_register_map *map)
{
u32 regloc_size, regblocks;
int regloc, i;
map->block_offset = U64_MAX;
regloc = pci_find_dvsec_capability(pdev, PCI_DVSEC_VENDOR_ID_CXL,
CXL_DVSEC_REG_LOCATOR);
if (!regloc)
return -ENXIO;
pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
regloc += CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET;
regblocks = (regloc_size - CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET) / 8;
for (i = 0; i < regblocks; i++, regloc += 8) {
u32 reg_lo, reg_hi;
pci_read_config_dword(pdev, regloc, &reg_lo);
pci_read_config_dword(pdev, regloc + 4, &reg_hi);
cxl_decode_regblock(reg_lo, reg_hi, map);
if (map->reg_type == type)
return 0;
}
map->block_offset = U64_MAX;
return -ENODEV;
}
EXPORT_SYMBOL_NS_GPL(cxl_find_regblock, CXL);
...@@ -34,12 +34,14 @@
* @dev: driver core device object
* @cdev: char dev core object for ioctl operations
* @cxlds: The device state backing this device
* @detach_work: active memdev lost a port in its ancestry
* @id: id number of this memdev instance.
*/
struct cxl_memdev {
struct device dev;
struct cdev cdev;
struct cxl_dev_state *cxlds;
struct work_struct detach_work;
int id;
};
...@@ -48,6 +50,12 @@ static inline struct cxl_memdev *to_cxl_memdev(struct device *dev)
return container_of(dev, struct cxl_memdev, dev);
}
bool is_cxl_memdev(struct device *dev);
static inline bool is_cxl_endpoint(struct cxl_port *port)
{
return is_cxl_memdev(port->uport);
}
struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds);
/**
...@@ -89,6 +97,18 @@ struct cxl_mbox_cmd {
*/
#define CXL_CAPACITY_MULTIPLIER SZ_256M
/**
* struct cxl_endpoint_dvsec_info - Cached DVSEC info
* @mem_enabled: cached value of mem_enabled in the DVSEC, PCIE_DEVICE
* @ranges: Number of active HDM ranges this device uses.
* @dvsec_range: cached attributes of the ranges in the DVSEC, PCIE_DEVICE
*/
struct cxl_endpoint_dvsec_info {
bool mem_enabled;
int ranges;
struct range dvsec_range[2];
};
/**
* struct cxl_dev_state - The driver device state
*
...@@ -98,6 +118,7 @@ struct cxl_mbox_cmd {
*
* @dev: The device associated with this CXL state
* @regs: Parsed register blocks
* @cxl_dvsec: Offset to the PCIe device DVSEC
* @payload_size: Size of space for payload
* (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
* @lsa_size: Size of Label Storage Area
...@@ -116,7 +137,11 @@ struct cxl_mbox_cmd {
* @active_persistent_bytes: sum of hard + soft persistent
* @next_volatile_bytes: volatile capacity change pending device reset
* @next_persistent_bytes: persistent capacity change pending device reset
* @component_reg_phys: register base of component registers
* @info: Cached DVSEC information about the device.
* @serial: PCIe Device Serial Number
* @mbox_send: @dev specific transport for transmitting mailbox commands
* @wait_media_ready: @dev specific method to await media ready
*
* See section 8.2.9.5.2 Capacity Configuration and Label Storage for
* details on capacity parameters.
...@@ -125,6 +150,7 @@ struct cxl_dev_state {
struct device *dev;
struct cxl_regs regs;
int cxl_dvsec;
size_t payload_size;
size_t lsa_size;
...@@ -145,7 +171,12 @@ struct cxl_dev_state {
u64 next_volatile_bytes;
u64 next_persistent_bytes;
resource_size_t component_reg_phys;
struct cxl_endpoint_dvsec_info info;
u64 serial;
int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd);
int (*wait_media_ready)(struct cxl_dev_state *cxlds);
};
enum cxl_opcode {
...@@ -264,4 +295,12 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds);
struct cxl_dev_state *cxl_dev_state_create(struct device *dev);
void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds);
void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds);
struct cxl_hdm {
struct cxl_component_regs regs;
unsigned int decoder_count;
unsigned int target_count;
unsigned int interleave_mask;
struct cxl_port *port;
};
#endif /* __CXL_MEM_H__ */
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
#ifndef __CXL_PCI_H__
#define __CXL_PCI_H__
#include <linux/pci.h>
#include "cxl.h"
#define CXL_MEMORY_PROGIF 0x10
/*
* See section 8.1 Configuration Space Registers in the CXL 2.0
* Specification. Names are taken straight from the specification with "CXL" and
* "DVSEC" redundancies removed. When obvious, abbreviations may be used.
*/
#define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20)
#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98
/* CXL 2.0 8.1.3: PCIe DVSEC for CXL Device */
#define CXL_DVSEC_PCIE_DEVICE 0
#define CXL_DVSEC_CAP_OFFSET 0xA
#define CXL_DVSEC_MEM_CAPABLE BIT(2)
#define CXL_DVSEC_HDM_COUNT_MASK GENMASK(5, 4)
#define CXL_DVSEC_CTRL_OFFSET 0xC
#define CXL_DVSEC_MEM_ENABLE BIT(2)
#define CXL_DVSEC_RANGE_SIZE_HIGH(i) (0x18 + (i * 0x10))
#define CXL_DVSEC_RANGE_SIZE_LOW(i) (0x1C + (i * 0x10))
#define CXL_DVSEC_MEM_INFO_VALID BIT(0)
#define CXL_DVSEC_MEM_ACTIVE BIT(1)
#define CXL_DVSEC_MEM_SIZE_LOW_MASK GENMASK(31, 28)
#define CXL_DVSEC_RANGE_BASE_HIGH(i) (0x20 + (i * 0x10))
#define CXL_DVSEC_RANGE_BASE_LOW(i) (0x24 + (i * 0x10))
#define CXL_DVSEC_MEM_BASE_LOW_MASK GENMASK(31, 28)
/* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */
#define CXL_DVSEC_FUNCTION_MAP 2
/* CXL 2.0 8.1.5: CXL 2.0 Extensions DVSEC for Ports */
#define CXL_DVSEC_PORT_EXTENSIONS 3
/* CXL 2.0 8.1.6: GPF DVSEC for CXL Port */
#define CXL_DVSEC_PORT_GPF 4
/* CXL 2.0 8.1.7: GPF DVSEC for CXL Device */
#define CXL_DVSEC_DEVICE_GPF 5
/* CXL 2.0 8.1.8: PCIe DVSEC for Flex Bus Port */
#define CXL_DVSEC_PCIE_FLEXBUS_PORT 7
/* CXL 2.0 8.1.9: Register Locator DVSEC */
#define CXL_DVSEC_REG_LOCATOR 8
#define CXL_DVSEC_REG_LOCATOR_BLOCK1_OFFSET 0xC
#define CXL_DVSEC_REG_LOCATOR_BIR_MASK GENMASK(2, 0)
#define CXL_DVSEC_REG_LOCATOR_BLOCK_ID_MASK GENMASK(15, 8)
#define CXL_DVSEC_REG_LOCATOR_BLOCK_OFF_LOW_MASK GENMASK(31, 16)
/* Register Block Identifier (RBI) */
enum cxl_regloc_type {
CXL_REGLOC_RBI_EMPTY = 0,
CXL_REGLOC_RBI_COMPONENT,
CXL_REGLOC_RBI_VIRT,
CXL_REGLOC_RBI_MEMDEV,
CXL_REGLOC_RBI_TYPES
};
static inline resource_size_t cxl_regmap_to_base(struct pci_dev *pdev,
struct cxl_register_map *map)
{
if (map->block_offset == U64_MAX)
return CXL_RESOURCE_NONE;
return pci_resource_start(pdev, map->barno) + map->block_offset;
}
int devm_cxl_port_enumerate_dports(struct cxl_port *port);
#endif /* __CXL_PCI_H__ */
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
#include <linux/device.h>
#include <linux/module.h>
#include <linux/pci.h>
#include "cxlmem.h"
#include "cxlpci.h"
/**
* DOC: cxl mem
*
* CXL memory endpoint devices and switches are CXL capable devices that are
* participating in CXL.mem protocol. Their functionality builds on top of the
* CXL.io protocol that allows enumerating and configuring components via
* standard PCI mechanisms.
*
* The cxl_mem driver owns kicking off the enumeration of this CXL.mem
* capability. With the detection of a CXL capable endpoint, the driver will
* walk up to find the platform specific port it is connected to, and determine
* if there are intervening switches in the path. If there are switches, a
* secondary action is to enumerate those (implemented in cxl_core). Finally the
* cxl_mem driver adds the device it is bound to as a CXL endpoint-port for use
* in higher level operations.
*/
static int wait_for_media(struct cxl_memdev *cxlmd)
{
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_endpoint_dvsec_info *info = &cxlds->info;
int rc;
if (!info->mem_enabled)
return -EBUSY;
rc = cxlds->wait_media_ready(cxlds);
if (rc)
return rc;
/*
* We know the device is active, and enabled, if any ranges are non-zero
* we'll need to check later before adding the port since that owns the
* HDM decoder registers.
*/
return 0;
}
static int create_endpoint(struct cxl_memdev *cxlmd,
struct cxl_port *parent_port)
{
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_port *endpoint;
endpoint = devm_cxl_add_port(&parent_port->dev, &cxlmd->dev,
cxlds->component_reg_phys, parent_port);
if (IS_ERR(endpoint))
return PTR_ERR(endpoint);
dev_dbg(&cxlmd->dev, "add: %s\n", dev_name(&endpoint->dev));
if (!endpoint->dev.driver) {
dev_err(&cxlmd->dev, "%s failed probe\n",
dev_name(&endpoint->dev));
return -ENXIO;
}
return cxl_endpoint_autoremove(cxlmd, endpoint);
}
/**
* cxl_dvsec_decode_init() - Setup HDM decoding for the endpoint
* @cxlds: Device state
*
* Additionally, enables global HDM decoding. Warning: don't call this outside
* of probe. Once probe is complete, the port driver owns all access to the HDM
* decoder registers.
*
* Returns: false if DVSEC Ranges are being used instead of HDM
* decoders, or if it can not be determined if DVSEC Ranges are in use.
* Otherwise, returns true.
*/
__mock bool cxl_dvsec_decode_init(struct cxl_dev_state *cxlds)
{
struct cxl_endpoint_dvsec_info *info = &cxlds->info;
struct cxl_register_map map;
struct cxl_component_reg_map *cmap = &map.component_map;
bool global_enable, do_hdm_init = false;
void __iomem *crb;
u32 global_ctrl;
/* map hdm decoder */
crb = ioremap(cxlds->component_reg_phys, CXL_COMPONENT_REG_BLOCK_SIZE);
if (!crb) {
dev_dbg(cxlds->dev, "Failed to map component registers\n");
return false;
}
cxl_probe_component_regs(cxlds->dev, crb, cmap);
if (!cmap->hdm_decoder.valid) {
dev_dbg(cxlds->dev, "Invalid HDM decoder registers\n");
goto out;
}
global_ctrl = readl(crb + cmap->hdm_decoder.offset +
CXL_HDM_DECODER_CTRL_OFFSET);
global_enable = global_ctrl & CXL_HDM_DECODER_ENABLE;
if (!global_enable && info->ranges) {
dev_dbg(cxlds->dev,
"DVSEC ranges already programmed and HDM decoders not enabled.\n");
goto out;
}
do_hdm_init = true;
/*
* Permanently (for this boot at least) opt the device into HDM
* operation. Individual HDM decoders still need to be enabled after
* this point.
*/
if (!global_enable) {
dev_dbg(cxlds->dev, "Enabling HDM decode\n");
writel(global_ctrl | CXL_HDM_DECODER_ENABLE,
crb + cmap->hdm_decoder.offset +
CXL_HDM_DECODER_CTRL_OFFSET);
}
out:
iounmap(crb);
return do_hdm_init;
}
static int cxl_mem_probe(struct device *dev)
{
struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_port *parent_port;
int rc;
/*
* Someone is trying to reattach this device after it lost its port
* connection (an endpoint port previously registered by this memdev was
* disabled). This racy check is ok because if the port is still gone,
* no harm done, and if the port hierarchy comes back it will re-trigger
* this probe. Port rescan and memdev detach work share the same
* single-threaded workqueue.
*/
if (work_pending(&cxlmd->detach_work))
return -EBUSY;
rc = wait_for_media(cxlmd);
if (rc) {
dev_err(dev, "Media not active (%d)\n", rc);
return rc;
}
/*
* If DVSEC ranges are being used instead of HDM decoder registers there
* is no use in trying to manage those.
*/
if (!cxl_dvsec_decode_init(cxlds)) {
struct cxl_endpoint_dvsec_info *info = &cxlds->info;
int i;
/* */
for (i = 0; i < 2; i++) {
u64 base, size;
/*
* Give a nice warning to the user that BIOS has really
* botched things for them if it didn't place DVSEC
* ranges in the memory map.
*/
base = info->dvsec_range[i].start;
size = range_len(&info->dvsec_range[i]);
if (size && !region_intersects(base, size,
IORESOURCE_SYSTEM_RAM,
IORES_DESC_NONE)) {
dev_err(dev,
"DVSEC range %#llx-%#llx must be reserved by BIOS, but isn't\n",
base, base + size - 1);
}
}
dev_err(dev,
"Active DVSEC range registers in use. Will not bind.\n");
return -EBUSY;
}
rc = devm_cxl_enumerate_ports(cxlmd);
if (rc)
return rc;
parent_port = cxl_mem_find_port(cxlmd);
if (!parent_port) {
dev_err(dev, "CXL port topology not found\n");
return -ENXIO;
}
cxl_device_lock(&parent_port->dev);
if (!parent_port->dev.driver) {
dev_err(dev, "CXL port topology %s not enabled\n",
dev_name(&parent_port->dev));
rc = -ENXIO;
goto out;
}
rc = create_endpoint(cxlmd, parent_port);
out:
cxl_device_unlock(&parent_port->dev);
put_device(&parent_port->dev);
return rc;
}
static struct cxl_driver cxl_mem_driver = {
.name = "cxl_mem",
.probe = cxl_mem_probe,
.id = CXL_DEVICE_MEMORY_EXPANDER,
};
module_cxl_driver(cxl_mem_driver);
MODULE_LICENSE("GPL v2");
MODULE_IMPORT_NS(CXL);
MODULE_ALIAS_CXL(CXL_DEVICE_MEMORY_EXPANDER);
/*
* create_endpoint() wants to validate port driver attach immediately after
* endpoint registration.
*/
MODULE_SOFTDEP("pre: cxl_port");
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
#ifndef __CXL_PCI_H__
#define __CXL_PCI_H__
#define CXL_MEMORY_PROGIF 0x10
/*
* See section 8.1 Configuration Space Registers in the CXL 2.0
* Specification
*/
#define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20)
#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98
#define PCI_DVSEC_ID_CXL 0x0
#define PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID 0x8
#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET 0xC
/* BAR Indicator Register (BIR) */
#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
/* Register Block Identifier (RBI) */
enum cxl_regloc_type {
CXL_REGLOC_RBI_EMPTY = 0,
CXL_REGLOC_RBI_COMPONENT,
CXL_REGLOC_RBI_VIRT,
CXL_REGLOC_RBI_MEMDEV,
CXL_REGLOC_RBI_TYPES
};
#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
#endif /* __CXL_PCI_H__ */
...@@ -43,7 +43,7 @@ static int cxl_nvdimm_probe(struct device *dev)
if (!cxl_nvb)
return -ENXIO;
-device_lock(&cxl_nvb->dev);
cxl_device_lock(&cxl_nvb->dev);
if (!cxl_nvb->nvdimm_bus) {
rc = -ENXIO;
goto out;
...@@ -68,7 +68,7 @@ static int cxl_nvdimm_probe(struct device *dev)
dev_set_drvdata(dev, nvdimm);
rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
out:
-device_unlock(&cxl_nvb->dev);
cxl_device_unlock(&cxl_nvb->dev);
put_device(&cxl_nvb->dev);
return rc;
...@@ -233,7 +233,7 @@ static void cxl_nvb_update_state(struct work_struct *work)
struct nvdimm_bus *victim_bus = NULL;
bool release = false, rescan = false;
-device_lock(&cxl_nvb->dev);
cxl_device_lock(&cxl_nvb->dev);
switch (cxl_nvb->state) {
case CXL_NVB_ONLINE:
if (!online_nvdimm_bus(cxl_nvb)) {
...@@ -251,7 +251,7 @@ static void cxl_nvb_update_state(struct work_struct *work)
default:
break;
}
-device_unlock(&cxl_nvb->dev);
cxl_device_unlock(&cxl_nvb->dev);
if (release)
device_release_driver(&cxl_nvb->dev);
...@@ -327,9 +327,9 @@ static int cxl_nvdimm_bridge_reset(struct device *dev, void *data)
return 0;
cxl_nvb = to_cxl_nvdimm_bridge(dev);
-device_lock(dev);
cxl_device_lock(dev);
cxl_nvb->state = CXL_NVB_NEW;
-device_unlock(dev);
cxl_device_unlock(dev);
return 0;
}
...
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
#include <linux/device.h>
#include <linux/module.h>
#include <linux/slab.h>
#include "cxlmem.h"
#include "cxlpci.h"
/**
* DOC: cxl port
*
* The port driver enumerates dport via PCI and scans for HDM
* (Host-managed-Device-Memory) decoder resources via the
* @component_reg_phys value passed in by the agent that registered the
* port. All descendant ports of a CXL root port (described by platform
* firmware) are managed in this drivers context. Each driver instance
* is responsible for tearing down the driver context of immediate
* descendant ports. The locking for this is validated by
* CONFIG_PROVE_CXL_LOCKING.
*
* The primary service this driver provides is presenting APIs to other
* drivers to utilize the decoders, and indicating to userspace (via bind
* status) the connectivity of the CXL.mem protocol throughout the
* PCIe topology.
*/
static void schedule_detach(void *cxlmd)
{
schedule_cxl_memdev_detach(cxlmd);
}
static int cxl_port_probe(struct device *dev)
{
struct cxl_port *port = to_cxl_port(dev);
struct cxl_hdm *cxlhdm;
int rc;
if (is_cxl_endpoint(port)) {
struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport);
get_device(&cxlmd->dev);
rc = devm_add_action_or_reset(dev, schedule_detach, cxlmd);
if (rc)
return rc;
} else {
rc = devm_cxl_port_enumerate_dports(port);
if (rc < 0)
return rc;
if (rc == 1)
return devm_cxl_add_passthrough_decoder(port);
}
cxlhdm = devm_cxl_setup_hdm(port);
if (IS_ERR(cxlhdm))
return PTR_ERR(cxlhdm);
rc = devm_cxl_enumerate_decoders(cxlhdm);
if (rc) {
dev_err(dev, "Couldn't enumerate decoders (%d)\n", rc);
return rc;
}
return 0;
}
static struct cxl_driver cxl_port_driver = {
.name = "cxl_port",
.probe = cxl_port_probe,
.id = CXL_DEVICE_PORT,
};
module_cxl_driver(cxl_port_driver);
MODULE_LICENSE("GPL v2");
MODULE_IMPORT_NS(CXL);
MODULE_ALIAS_CXL(CXL_DEVICE_PORT);
...@@ -185,7 +185,7 @@ static inline void devm_nsio_disable(struct device *dev,
}
#endif
-#ifdef CONFIG_PROVE_LOCKING
#ifdef CONFIG_PROVE_NVDIMM_LOCKING
extern struct class *nd_class;
enum {
...
...@@ -1544,6 +1544,29 @@ config CSD_LOCK_WAIT_DEBUG
include the IPI handler function currently executing (if any)
and relevant stack traces.
choice
prompt "Lock debugging: prove subsystem device_lock() correctness"
depends on PROVE_LOCKING
help
For subsystems that have instrumented their usage of the device_lock()
with nested annotations, enable lock dependency checking. The locking
hierarchy 'subclass' identifiers are not compatible across
sub-systems, so only one can be enabled at a time.
config PROVE_NVDIMM_LOCKING
bool "NVDIMM"
depends on LIBNVDIMM
help
Enable lockdep to validate nd_device_lock() usage.
config PROVE_CXL_LOCKING
bool "CXL"
depends on CXL_BUS
help
Enable lockdep to validate cxl_device_lock() usage.
endchoice
endmenu # lock debugging
config TRACE_IRQFLAGS
...
@@ -3,8 +3,11 @@ ldflags-y += --wrap=acpi_table_parse_cedt
ldflags-y += --wrap=is_acpi_device_node
ldflags-y += --wrap=acpi_evaluate_integer
ldflags-y += --wrap=acpi_pci_find_root
ldflags-y += --wrap=pci_walk_bus
ldflags-y += --wrap=nvdimm_bus_register
ldflags-y += --wrap=devm_cxl_port_enumerate_dports
ldflags-y += --wrap=devm_cxl_setup_hdm
ldflags-y += --wrap=devm_cxl_add_passthrough_decoder
ldflags-y += --wrap=devm_cxl_enumerate_decoders
DRIVERS := ../../../drivers
CXL_SRC := $(DRIVERS)/cxl
@@ -23,15 +26,26 @@ obj-m += cxl_pmem.o
cxl_pmem-y := $(CXL_SRC)/pmem.o
cxl_pmem-y += config_check.o
obj-m += cxl_port.o
cxl_port-y := $(CXL_SRC)/port.o
cxl_port-y += config_check.o
obj-m += cxl_mem.o
cxl_mem-y := $(CXL_SRC)/mem.o
cxl_mem-y += mock_mem.o
cxl_mem-y += config_check.o
obj-m += cxl_core.o
-cxl_core-y := $(CXL_CORE_SRC)/bus.o
+cxl_core-y := $(CXL_CORE_SRC)/port.o
cxl_core-y += $(CXL_CORE_SRC)/pmem.o
cxl_core-y += $(CXL_CORE_SRC)/regs.o
cxl_core-y += $(CXL_CORE_SRC)/memdev.o
cxl_core-y += $(CXL_CORE_SRC)/mbox.o
cxl_core-y += $(CXL_CORE_SRC)/pci.o
cxl_core-y += $(CXL_CORE_SRC)/hdm.o
cxl_core-y += config_check.o
cxl_core-y += mock_pmem.o
obj-m += test/
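The ldflags-y += --wrap=<symbol> entries above rely on the linker's symbol-wrapping feature: once a symbol is wrapped, undefined references to it resolve to __wrap_<symbol>, while __real_<symbol> resolves back to the original definition. A minimal sketch of the idea outside of kbuild, using hypothetical symbol names that are not part of the kernel sources:

/*
 * real.c -- the production implementation, built unmodified.
 */
int read_reg(int reg)
{
	return reg * 2;		/* stands in for a real hardware access */
}

/*
 * mock.c -- linked together with real.c and its callers using
 * "-Wl,--wrap=read_reg".  Undefined references to read_reg() in the
 * other objects now land in __wrap_read_reg(), and __real_read_reg()
 * reaches the original definition above.
 */
int __real_read_reg(int reg);

int __wrap_read_reg(int reg)
{
	if (reg < 0)			/* pretend negative registers are mocked */
		return 0;
	return __real_read_reg(reg);	/* defer to the real implementation */
}

cxl_test applies the same trick at module link time: the __wrap_devm_cxl_* functions later in this series consult the registered mock ops and only fall back to the real drivers/cxl implementation for ports that are not part of the mock topology.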
@@ -4,7 +4,6 @@
#include <linux/platform_device.h>
#include <linux/device.h>
#include <linux/acpi.h>
#include <linux/pci.h>
#include <cxl.h>
#include "test/mock.h"
@@ -34,76 +33,3 @@ struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev)
put_cxl_mock_ops(index);
return found;
}
static int match_add_root_port(struct pci_dev *pdev, void *data)
{
struct cxl_walk_context *ctx = data;
struct pci_bus *root_bus = ctx->root;
struct cxl_port *port = ctx->port;
int type = pci_pcie_type(pdev);
struct device *dev = ctx->dev;
u32 lnkcap, port_num;
int rc;
if (pdev->bus != root_bus)
return 0;
if (!pci_is_pcie(pdev))
return 0;
if (type != PCI_EXP_TYPE_ROOT_PORT)
return 0;
if (pci_read_config_dword(pdev, pci_pcie_cap(pdev) + PCI_EXP_LNKCAP,
&lnkcap) != PCIBIOS_SUCCESSFUL)
return 0;
/* TODO walk DVSEC to find component register base */
port_num = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);
rc = cxl_add_dport(port, &pdev->dev, port_num, CXL_RESOURCE_NONE);
if (rc) {
dev_err(dev, "failed to add dport: %s (%d)\n",
dev_name(&pdev->dev), rc);
ctx->error = rc;
return rc;
}
ctx->count++;
dev_dbg(dev, "add dport%d: %s\n", port_num, dev_name(&pdev->dev));
return 0;
}
static int mock_add_root_port(struct platform_device *pdev, void *data)
{
struct cxl_walk_context *ctx = data;
struct cxl_port *port = ctx->port;
struct device *dev = ctx->dev;
int rc;
rc = cxl_add_dport(port, &pdev->dev, pdev->id, CXL_RESOURCE_NONE);
if (rc) {
dev_err(dev, "failed to add dport: %s (%d)\n",
dev_name(&pdev->dev), rc);
ctx->error = rc;
return rc;
}
ctx->count++;
dev_dbg(dev, "add dport%d: %s\n", pdev->id, dev_name(&pdev->dev));
return 0;
}
int match_add_root_ports(struct pci_dev *dev, void *data)
{
int index, rc;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
struct platform_device *pdev = (struct platform_device *) dev;
if (ops && ops->is_mock_port(pdev))
rc = mock_add_root_port(pdev, data);
else
rc = match_add_root_port(dev, data);
put_cxl_mock_ops(index);
return rc;
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
#include <linux/types.h>
struct cxl_dev_state;
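/*
 * Stub out the hardware DVSEC decode initialization for cxl_test; mock
 * memdevs simply report success.
 */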
bool cxl_dvsec_decode_init(struct cxl_dev_state *cxlds)
{
return true;
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
#include <cxl.h>
#include "test/mock.h"
#include <core/core.h>
int match_nvdimm_bridge(struct device *dev, const void *data)
{
int index, rc = 0;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
const struct cxl_nvdimm *cxl_nvd = data;
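/*
 * Only match a bridge that belongs to the same topology (mock vs. real)
 * as the cxl_nvdimm looking for it.
 */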
if (ops) {
if (dev->type == &cxl_nvdimm_bridge_type &&
(ops->is_mock_dev(dev->parent->parent) ==
ops->is_mock_dev(cxl_nvd->dev.parent->parent)))
rc = 1;
} else
rc = dev->type == &cxl_nvdimm_bridge_type;
put_cxl_mock_ops(index);
return rc;
}
@@ -4,6 +4,7 @@
#include <linux/platform_device.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/sizes.h>
#include <linux/bits.h>
#include <cxlmem.h>
@@ -236,11 +237,25 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *
return rc;
}
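/* Pretend the mock device's media becomes active after a short delay. */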
static int cxl_mock_wait_media_ready(struct cxl_dev_state *cxlds)
{
msleep(100);
return 0;
}
static void label_area_release(void *lsa)
{
vfree(lsa);
}
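/* Report that the mock device's CXL DVSEC already has memory enabled. */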
static void mock_validate_dvsec_ranges(struct cxl_dev_state *cxlds)
{
struct cxl_endpoint_dvsec_info *info;
info = &cxlds->info;
info->mem_enabled = true;
}
static int cxl_mock_mem_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -261,7 +276,9 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
if (IS_ERR(cxlds))
return PTR_ERR(cxlds);
cxlds->serial = pdev->id;
cxlds->mbox_send = cxl_mock_mbox_send;
cxlds->wait_media_ready = cxl_mock_wait_media_ready;
cxlds->payload_size = SZ_4K;
rc = cxl_enumerate_cmds(cxlds);
@@ -276,6 +293,8 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
if (rc)
return rc;
mock_validate_dvsec_ranges(cxlds);
cxlmd = devm_cxl_add_memdev(cxlds);
if (IS_ERR(cxlmd))
return PTR_ERR(cxlmd);
...
@@ -7,6 +7,8 @@
#include <linux/export.h>
#include <linux/acpi.h>
#include <linux/pci.h>
#include <cxlmem.h>
#include <cxlpci.h>
#include "mock.h" #include "mock.h"
static LIST_HEAD(mock); static LIST_HEAD(mock);
...@@ -114,32 +116,6 @@ struct acpi_pci_root *__wrap_acpi_pci_find_root(acpi_handle handle) ...@@ -114,32 +116,6 @@ struct acpi_pci_root *__wrap_acpi_pci_find_root(acpi_handle handle)
} }
EXPORT_SYMBOL_GPL(__wrap_acpi_pci_find_root); EXPORT_SYMBOL_GPL(__wrap_acpi_pci_find_root);
void __wrap_pci_walk_bus(struct pci_bus *bus,
int (*cb)(struct pci_dev *, void *), void *userdata)
{
int index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_bus(bus)) {
int rc, i;
/*
* Simulate 2 root ports per host-bridge and no
* depth recursion.
*/
for (i = 0; i < 2; i++) {
rc = cb((struct pci_dev *) ops->mock_port(bus, i),
userdata);
if (rc)
break;
}
} else
pci_walk_bus(bus, cb, userdata);
put_cxl_mock_ops(index);
}
EXPORT_SYMBOL_GPL(__wrap_pci_walk_bus);
struct nvdimm_bus *
__wrap_nvdimm_bus_register(struct device *dev,
struct nvdimm_bus_descriptor *nd_desc)
@@ -155,5 +131,68 @@ __wrap_nvdimm_bus_register(struct device *dev,
}
EXPORT_SYMBOL_GPL(__wrap_nvdimm_bus_register);
struct cxl_hdm *__wrap_devm_cxl_setup_hdm(struct cxl_port *port)
{
int index;
struct cxl_hdm *cxlhdm;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
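/*
 * Mock ops are only registered while a cxl_test topology is loaded;
 * 'index' is handed back to put_cxl_mock_ops() to release the
 * read-side hold taken here.
 */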
if (ops && ops->is_mock_port(port->uport))
cxlhdm = ops->devm_cxl_setup_hdm(port);
else
cxlhdm = devm_cxl_setup_hdm(port);
put_cxl_mock_ops(index);
return cxlhdm;
}
EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_setup_hdm, CXL);
int __wrap_devm_cxl_add_passthrough_decoder(struct cxl_port *port)
{
int rc, index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_port(port->uport))
rc = ops->devm_cxl_add_passthrough_decoder(port);
else
rc = devm_cxl_add_passthrough_decoder(port);
put_cxl_mock_ops(index);
return rc;
}
EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_add_passthrough_decoder, CXL);
int __wrap_devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm)
{
int rc, index;
struct cxl_port *port = cxlhdm->port;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_port(port->uport))
rc = ops->devm_cxl_enumerate_decoders(cxlhdm);
else
rc = devm_cxl_enumerate_decoders(cxlhdm);
put_cxl_mock_ops(index);
return rc;
}
EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_enumerate_decoders, CXL);
int __wrap_devm_cxl_port_enumerate_dports(struct cxl_port *port)
{
int rc, index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (ops && ops->is_mock_port(port->uport))
rc = ops->devm_cxl_port_enumerate_dports(port);
else
rc = devm_cxl_port_enumerate_dports(port);
put_cxl_mock_ops(index);
return rc;
}
EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_port_enumerate_dports, CXL);
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
MODULE_IMPORT_NS(ACPI); MODULE_IMPORT_NS(ACPI);
MODULE_IMPORT_NS(CXL);
@@ -2,6 +2,7 @@
#include <linux/list.h>
#include <linux/acpi.h>
#include <cxl.h>
struct cxl_mock_ops {
struct list_head list;
@@ -15,10 +16,13 @@ struct cxl_mock_ops {
struct acpi_object_list *arguments,
unsigned long long *data);
struct acpi_pci_root *(*acpi_pci_find_root)(acpi_handle handle);
struct platform_device *(*mock_port)(struct pci_bus *bus, int index);
bool (*is_mock_bus)(struct pci_bus *bus);
-bool (*is_mock_port)(struct platform_device *pdev);
+bool (*is_mock_port)(struct device *dev);
bool (*is_mock_dev)(struct device *dev);
int (*devm_cxl_port_enumerate_dports)(struct cxl_port *port);
struct cxl_hdm *(*devm_cxl_setup_hdm)(struct cxl_port *port);
int (*devm_cxl_add_passthrough_decoder)(struct cxl_port *port);
int (*devm_cxl_enumerate_decoders)(struct cxl_hdm *hdm);
};
void register_cxl_mock_ops(struct cxl_mock_ops *ops);
...