Commit 02c163e9 authored by Linus Torvalds

Merge tag 'cxl-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull CXL updates from Dan Williams:
 "CXL has mechanisms to enumerate the performance characteristics of
  memory devices. Those mechanisms allow Linux to build the equivalent
  of ACPI SRAT, SLIT, and HMAT tables dynamically at runtime. That
   capability is necessary because static ACPI cannot represent dynamic
  CXL configurations (and reconfigurations).

  So, building on the v6.8 work to add "Quality of Service" enumeration,
  this update plumbs CXL "access coordinates" (read/write access latency
  and bandwidth) in all the same places that ACPI HMAT feeds similar
  data. Follow-on patches from the -mm side can then use that data to
  feed mechanisms like mm/memory-tiers.c. Greg has acked the touch to
  drivers/base/.

  The other feature update this cycle is support for CXL error injection
  via the ACPI EINJ module. That facility enables injection of bus
  protocol errors provided the user knows the magic address values to
  insert in the interface. To hide that magic, and make this easier to
  use, new error injection attributes were added to CXL debugfs. That
   interface injects errors relative to a CXL object rather than
   requiring user tooling to know how to look up and inject RCRB (Root
   Complex Register Block) addresses into the raw EINJ debugfs interface.
  It received some helpful review comments from Tony, but no explicit
   acks from the ACPI side. The primary user-visible change for existing
  EINJ users is that they may find that einj.ko was already loaded by
  cxl_core.ko. Previously, einj.ko was only loaded on demand.

   The usual collection of miscellaneous cleanups is also present this
   cycle.

  Summary:

   - Supplement ACPI HMAT reported memory performance with native CXL
     memory performance enumeration

   - Add support for CXL error injection via the ACPI EINJ mechanism

   - Cleanup CXL DOE and CDAT integration

   - Miscellaneous cleanups and fixes"
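
  For reference, the "access coordinates" being plumbed are four scalars
  per access class; a minimal sketch of the structure as used throughout
  this series (field names taken from the hunks below, exact placement in
  include/linux/node.h assumed)::

	struct access_coordinate {
		unsigned int read_bandwidth;	/* MB/s */
		unsigned int write_bandwidth;	/* MB/s */
		unsigned int read_latency;	/* ns (ps inside the CXL core) */
		unsigned int write_latency;	/* ns (ps inside the CXL core) */
	};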

* tag 'cxl-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (21 commits)
  Documentation/ABI/testing/debugfs-cxl: Fix "Unexpected indentation"
  lib/firmware_table: Provide buffer length argument to cdat_table_parse()
  cxl/pci: Get rid of pointer arithmetic reading CDAT table
  cxl/pci: Rename DOE mailbox handle to doe_mb
  cxl: Fix the incorrect assignment of SSLBIS entry pointer initial location
  cxl/core: Add CXL EINJ debugfs files
  EINJ, Documentation: Update EINJ kernel doc
  EINJ: Add CXL error type support
  EINJ: Migrate to a platform driver
  cxl/region: Deal with numa nodes not enumerated by SRAT
  cxl/region: Add memory hotplug notifier for cxl region
  cxl/region: Add sysfs attribute for locality attributes of CXL regions
  cxl/region: Calculate performance data for a region
  cxl: Set cxlmd->endpoint before adding port device
  cxl: Move QoS class to be calculated from the nearest CPU
  cxl: Split out host bridge access coordinates
  cxl: Split out combine_coordinates() for common shared usage
  ACPI: HMAT / cxl: Add retrieval of generic port coordinates for both access classes
  ACPI: HMAT: Introduce 2 levels of generic port access class
  base/node / ACPI: Enumerate node access class for 'struct access_coordinate'
  ...
parents 5c84b051 ed1ff2fb
@@ -33,3 +33,37 @@ Description:
		device cannot clear poison from the address, -ENXIO is returned.
		The clear_poison attribute is only visible for devices
		supporting the capability.
What:		/sys/kernel/debug/cxl/einj_types
Date:		January, 2024
KernelVersion:	v6.9
Contact:	linux-cxl@vger.kernel.org
Description:
		(RO) Prints the CXL protocol error types made available by
		the platform in the format:

			0x<error number> <error type>

		The possible error types are (as of ACPI v6.5):

			0x1000	CXL.cache Protocol Correctable
			0x2000	CXL.cache Protocol Uncorrectable non-fatal
			0x4000	CXL.cache Protocol Uncorrectable fatal
			0x8000	CXL.mem Protocol Correctable
			0x10000	CXL.mem Protocol Uncorrectable non-fatal
			0x20000	CXL.mem Protocol Uncorrectable fatal

		The <error number> can be written to einj_inject to inject
		<error type> into a chosen dport.

What:		/sys/kernel/debug/cxl/$dport_dev/einj_inject
Date:		January, 2024
KernelVersion:	v6.9
Contact:	linux-cxl@vger.kernel.org
Description:
		(WO) Writing an integer to this file injects the corresponding
		CXL protocol error into $dport_dev ($dport_dev will be a device
		name from /sys/bus/pci/devices). The integer to type mapping for
		injection can be found by reading from einj_types. If the dport
		was enumerated in RCH mode, a CXL 1.1 error is injected, otherwise
		a CXL 2.0 error is injected.
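
A minimal user-space sketch of driving these two files (illustrative only;
the dport name 0000:e0:01.1 and the 0x8000 error type are taken from the
EINJ documentation example further down in this diff)::

	#include <stdio.h>

	int main(void)
	{
		char line[128];
		FILE *f = fopen("/sys/kernel/debug/cxl/einj_types", "r");

		if (!f)
			return 1;
		/* e.g. "0x00008000  CXL.mem Protocol Correctable" */
		while (fgets(line, sizeof(line), f))
			fputs(line, stdout);
		fclose(f);

		/* Inject a CXL.mem Protocol Correctable error */
		f = fopen("/sys/kernel/debug/cxl/0000:e0:01.1/einj_inject", "w");
		if (!f)
			return 1;
		fprintf(f, "0x8000");
		return fclose(f) ? 1 : 0;
	}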
@@ -552,3 +552,37 @@ Description:
		attribute is only visible for devices supporting the
		capability. The retrieved errors are logged as kernel
		events when cxl_poison event tracing is enabled.
What:		/sys/bus/cxl/devices/regionZ/accessY/read_bandwidth
		/sys/bus/cxl/devices/regionZ/accessY/write_bandwidth
Date:		Jan, 2024
KernelVersion:	v6.9
Contact:	linux-cxl@vger.kernel.org
Description:
		(RO) The aggregated read or write bandwidth of the region, in
		MB/s. The number is the accumulated read or write bandwidth of
		all CXL memory devices that contribute to the region. It is
		identical to the data that appears in
		/sys/devices/system/node/nodeX/accessY/initiators/read_bandwidth or
		/sys/devices/system/node/nodeX/accessY/initiators/write_bandwidth.
		See Documentation/ABI/stable/sysfs-devices-node. access0 provides
		the number to the closest initiator and access1 provides the
		number to the closest CPU.
What:		/sys/bus/cxl/devices/regionZ/accessY/read_latency
		/sys/bus/cxl/devices/regionZ/accessY/write_latency
Date:		Jan, 2024
KernelVersion:	v6.9
Contact:	linux-cxl@vger.kernel.org
Description:
		(RO) The read or write latency of the region, in nanoseconds.
		The number is the worst read or write latency of all CXL memory
		devices that contribute to the region. It is identical to the
		data that appears in
		/sys/devices/system/node/nodeX/accessY/initiators/read_latency or
		/sys/devices/system/node/nodeX/accessY/initiators/write_latency.
		See Documentation/ABI/stable/sysfs-devices-node. access0 provides
		the number to the closest initiator and access1 provides the
		number to the closest CPU.
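
The aggregation rule behind these four attributes reduces to "bandwidths
add, the worst latency wins"; a sketch mirroring the
cxl_region_perf_data_calculate() hunk later in this diff (helper name
hypothetical)::

	/* Fold one device's coordinates into the region's totals */
	static void region_accumulate(struct access_coordinate *region,
				      const struct access_coordinate *dev)
	{
		/* A region is only as fast as its slowest member device */
		region->read_latency = max(region->read_latency,
					   dev->read_latency);
		region->write_latency = max(region->write_latency,
					    dev->write_latency);
		/* Member devices are interleaved, so bandwidths sum */
		region->read_bandwidth += dev->read_bandwidth;
		region->write_bandwidth += dev->write_bandwidth;
	}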
@@ -32,6 +32,10 @@ configuration::

	CONFIG_ACPI_APEI
	CONFIG_ACPI_APEI_EINJ

...and to (optionally) enable CXL protocol error injection set::

	CONFIG_ACPI_APEI_EINJ_CXL

The EINJ user interface is in <debugfs mount point>/apei/einj.

The following files belong to it:
@@ -118,6 +122,24 @@ The following files belong to it:
  this actually works depends on what operations the BIOS actually
  includes in the trigger phase.
CXL error types are supported from ACPI 6.5 onwards (given a CXL port
is present). The EINJ user interface for CXL error types is at
<debugfs mount point>/cxl. The following files belong to it:
- einj_types:

  Provides the same functionality as available_error_types above, but
  for CXL error types.

- $dport_dev/einj_inject:

  Injects a CXL error type into the CXL port represented by $dport_dev,
  where $dport_dev is the name of the CXL port (usually a PCIe device name).
  Error injections targeting a CXL 2.0+ port can use the legacy interface
  under <debugfs mount point>/apei/einj, while CXL 1.1/1.0 port injections
  must use this file.
BIOS versions based on the ACPI 4.0 specification have limited options
in controlling where the errors are injected. Your BIOS may support an
extension (enabled with the param_extension=1 module parameter, or boot
@@ -181,6 +203,18 @@ You should see something like this in dmesg::

	[22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
	[22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)
A CXL error injection example with $dport_dev=0000:e0:01.1::

	# cd /sys/kernel/debug/cxl/
	# ls
	0000:e0:01.1 0000:0c:00.0
	# cat einj_types		# See which errors can be injected
	0x00008000	CXL.mem Protocol Correctable
	0x00010000	CXL.mem Protocol Uncorrectable non-fatal
	0x00020000	CXL.mem Protocol Uncorrectable fatal
	# cd 0000:e0:01.1		# Navigate to dport to inject into
	# echo 0x8000 > einj_inject	# Inject error
Special notes for injection into SGX enclaves:

There may be a separate BIOS setup option to enable SGX injection.
......
@@ -5321,6 +5321,7 @@ M:	Dan Williams <dan.j.williams@intel.com>
L:	linux-cxl@vger.kernel.org
S:	Maintained
F:	drivers/cxl/
F:	include/linux/einj-cxl.h
F:	include/linux/cxl-event.h
F:	include/uapi/linux/cxl_mem.h
F:	tools/testing/cxl/
......
@@ -60,6 +60,19 @@ config ACPI_APEI_EINJ
	  mainly used for debugging and testing the other parts of
	  APEI and some other RAS features.
config ACPI_APEI_EINJ_CXL
	bool "CXL Error INJection Support"
	default ACPI_APEI_EINJ
	depends on ACPI_APEI_EINJ
	depends on CXL_BUS && CXL_BUS <= ACPI_APEI_EINJ
	help
	  Support for CXL protocol Error INJection through debugfs/cxl.
	  Availability and which errors are supported depend on the host
	  platform. See ACPI v6.5 section 18.6.4 and the kernel EINJ
	  documentation for more information.

	  If unsure say 'n'.
config ACPI_APEI_ERST_DEBUG
	tristate "APEI Error Record Serialization Table (ERST) Debug Support"
	depends on ACPI_APEI
......
@@ -2,6 +2,8 @@
obj-$(CONFIG_ACPI_APEI) += apei.o
obj-$(CONFIG_ACPI_APEI_GHES) += ghes.o
obj-$(CONFIG_ACPI_APEI_EINJ) += einj.o
einj-y := einj-core.o
einj-$(CONFIG_ACPI_APEI_EINJ_CXL) += einj-cxl.o
obj-$(CONFIG_ACPI_APEI_ERST_DEBUG) += erst-dbg.o
apei-y := apei-base.o hest.o erst.o bert.o
@@ -130,4 +130,22 @@ static inline u32 cper_estatus_len(struct acpi_hest_generic_status *estatus)
}

int apei_osc_setup(void);
int einj_get_available_error_type(u32 *type);
int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2, u64 param3,
		      u64 param4);
int einj_cxl_rch_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
			      u64 param3, u64 param4);
bool einj_is_cxl_error_type(u64 type);
int einj_validate_error_type(u64 type);

#ifndef ACPI_EINJ_CXL_CACHE_CORRECTABLE
#define ACPI_EINJ_CXL_CACHE_CORRECTABLE   BIT(12)
#define ACPI_EINJ_CXL_CACHE_UNCORRECTABLE BIT(13)
#define ACPI_EINJ_CXL_CACHE_FATAL         BIT(14)
#define ACPI_EINJ_CXL_MEM_CORRECTABLE     BIT(15)
#define ACPI_EINJ_CXL_MEM_UNCORRECTABLE   BIT(16)
#define ACPI_EINJ_CXL_MEM_FATAL           BIT(17)
#endif
#endif
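
These fallback masks line up with the numbers that einj_types prints (see
the debugfs ABI entry earlier in this diff); a quick annotation of the bit
positions, not part of the commit::

	/* BIT(n) is 1 << n, so the fallback definitions above decode to: */
	_Static_assert((1u << 12) == 0x1000,  "CXL.cache Protocol Correctable");
	_Static_assert((1u << 15) == 0x8000,  "CXL.mem Protocol Correctable");
	_Static_assert((1u << 17) == 0x20000, "CXL.mem Protocol Uncorrectable fatal");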
@@ -21,6 +21,7 @@
#include <linux/nmi.h>
#include <linux/delay.h>
#include <linux/mm.h>
#include <linux/platform_device.h>
#include <asm/unaligned.h>

#include "apei-internal.h"

@@ -36,6 +37,12 @@
#define MEM_ERROR_MASK (ACPI_EINJ_MEMORY_CORRECTABLE | \
			ACPI_EINJ_MEMORY_UNCORRECTABLE | \
			ACPI_EINJ_MEMORY_FATAL)
#define CXL_ERROR_MASK (ACPI_EINJ_CXL_CACHE_CORRECTABLE | \
			ACPI_EINJ_CXL_CACHE_UNCORRECTABLE | \
			ACPI_EINJ_CXL_CACHE_FATAL | \
			ACPI_EINJ_CXL_MEM_CORRECTABLE | \
			ACPI_EINJ_CXL_MEM_UNCORRECTABLE | \
			ACPI_EINJ_CXL_MEM_FATAL)

/*
 * ACPI version 5 provides a SET_ERROR_TYPE_WITH_ADDRESS action.
@@ -137,6 +144,11 @@ static struct apei_exec_ins_type einj_ins_type[] = {
 */
static DEFINE_MUTEX(einj_mutex);

/*
 * Exported APIs use this flag to exit early if einj_probe() failed.
 */
bool einj_initialized __ro_after_init;

static void *einj_param;

static void einj_exec_ctx_init(struct apei_exec_context *ctx)
@@ -160,7 +172,7 @@ static int __einj_get_available_error_type(u32 *type)
}

/* Get error injection capabilities of the platform */
-static int einj_get_available_error_type(u32 *type)
+int einj_get_available_error_type(u32 *type)
{
	int rc;
@@ -530,8 +542,8 @@ static int __einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
}

/* Inject the specified hardware error */
-static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
-			     u64 param3, u64 param4)
+int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2, u64 param3,
+		      u64 param4)
{
	int rc;
	u64 base_addr, size;
@@ -554,8 +566,17 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
	if (type & ACPI5_VENDOR_BIT) {
		if (vendor_flags != SETWA_FLAGS_MEM)
			goto inject;
-	} else if (!(type & MEM_ERROR_MASK) && !(flags & SETWA_FLAGS_MEM))
+	} else if (!(type & MEM_ERROR_MASK) && !(flags & SETWA_FLAGS_MEM)) {
		goto inject;
+	}

	/*
	 * Injections targeting a CXL 1.0/1.1 port have to be injected
	 * via the einj_cxl_rch_error_inject() path as that does the proper
	 * validation of the given RCRB base (MMIO) address.
	 */
	if (einj_is_cxl_error_type(type) && (flags & SETWA_FLAGS_MEM))
		return -EINVAL;

	/*
	 * Disallow crazy address masks that give BIOS leeway to pick
@@ -587,6 +608,21 @@ static int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
	return rc;
}

int einj_cxl_rch_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
			      u64 param3, u64 param4)
{
	int rc;

	if (!(einj_is_cxl_error_type(type) && (flags & SETWA_FLAGS_MEM)))
		return -EINVAL;

	mutex_lock(&einj_mutex);
	rc = __einj_error_inject(type, flags, param1, param2, param3, param4);
	mutex_unlock(&einj_mutex);

	return rc;
}
static u32 error_type;
static u32 error_flags;
static u64 error_param1;
@@ -607,12 +643,6 @@ static struct { u32 mask; const char *str; } const einj_error_type_string[] = {
	{ BIT(9), "Platform Correctable" },
	{ BIT(10), "Platform Uncorrectable non-fatal" },
	{ BIT(11), "Platform Uncorrectable fatal"},
-	{ BIT(12), "CXL.cache Protocol Correctable" },
-	{ BIT(13), "CXL.cache Protocol Uncorrectable non-fatal" },
-	{ BIT(14), "CXL.cache Protocol Uncorrectable fatal" },
-	{ BIT(15), "CXL.mem Protocol Correctable" },
-	{ BIT(16), "CXL.mem Protocol Uncorrectable non-fatal" },
-	{ BIT(17), "CXL.mem Protocol Uncorrectable fatal" },
	{ BIT(31), "Vendor Defined Error Types" },
};
@@ -641,22 +671,26 @@ static int error_type_get(void *data, u64 *val)
	return 0;
}

-static int error_type_set(void *data, u64 val)
+bool einj_is_cxl_error_type(u64 type)
+{
+	return (type & CXL_ERROR_MASK) && (!(type & ACPI5_VENDOR_BIT));
+}
+
+int einj_validate_error_type(u64 type)
{
+	u32 tval, vendor, available_error_type = 0;
	int rc;
-	u32 available_error_type = 0;
-	u32 tval, vendor;

	/* Only low 32 bits for error type are valid */
-	if (val & GENMASK_ULL(63, 32))
+	if (type & GENMASK_ULL(63, 32))
		return -EINVAL;

	/*
	 * Vendor defined types have 0x80000000 bit set, and
	 * are not enumerated by ACPI_EINJ_GET_ERROR_TYPE
	 */
-	vendor = val & ACPI5_VENDOR_BIT;
-	tval = val & 0x7fffffff;
+	vendor = type & ACPI5_VENDOR_BIT;
+	tval = type & GENMASK(30, 0);

	/* Only one error type can be specified */
	if (tval & (tval - 1))
@@ -665,9 +699,21 @@ static int error_type_set(void *data, u64 val)
		rc = einj_get_available_error_type(&available_error_type);
		if (rc)
			return rc;
-		if (!(val & available_error_type))
+		if (!(type & available_error_type))
			return -EINVAL;
	}

+	return 0;
+}
+
+static int error_type_set(void *data, u64 val)
+{
+	int rc;
+
+	rc = einj_validate_error_type(val);
+	if (rc)
+		return rc;
+
	error_type = val;

	return 0;
@@ -703,21 +749,21 @@ static int einj_check_table(struct acpi_table_einj *einj_tab)
	return 0;
}
-static int __init einj_init(void)
+static int __init einj_probe(struct platform_device *pdev)
{
	int rc;
	acpi_status status;
	struct apei_exec_context ctx;

	if (acpi_disabled) {
-		pr_info("ACPI disabled.\n");
+		pr_debug("ACPI disabled.\n");
		return -ENODEV;
	}

	status = acpi_get_table(ACPI_SIG_EINJ, 0,
				(struct acpi_table_header **)&einj_tab);
	if (status == AE_NOT_FOUND) {
-		pr_warn("EINJ table not found.\n");
+		pr_debug("EINJ table not found.\n");
		return -ENODEV;
	} else if (ACPI_FAILURE(status)) {
		pr_err("Failed to get EINJ table: %s\n",
@@ -805,7 +851,7 @@ static int __init einj_init(void)
	return rc;
}

-static void __exit einj_exit(void)
+static void __exit einj_remove(struct platform_device *pdev)
{
	struct apei_exec_context ctx;

@@ -826,6 +872,40 @@ static void __exit einj_exit(void)
	acpi_put_table((struct acpi_table_header *)einj_tab);
}
static struct platform_device *einj_dev;
static struct platform_driver einj_driver = {
	.remove_new = einj_remove,
	.driver = {
		.name = "acpi-einj",
	},
};

static int __init einj_init(void)
{
	struct platform_device_info einj_dev_info = {
		.name = "acpi-einj",
		.id = -1,
	};
	int rc;

	einj_dev = platform_device_register_full(&einj_dev_info);
	if (IS_ERR(einj_dev))
		return PTR_ERR(einj_dev);

	rc = platform_driver_probe(&einj_driver, einj_probe);
	einj_initialized = rc == 0;

	return 0;
}

static void __exit einj_exit(void)
{
	if (einj_initialized)
		platform_driver_unregister(&einj_driver);

	platform_device_del(einj_dev);
}
module_init(einj_init);
module_exit(einj_exit);
......
// SPDX-License-Identifier: GPL-2.0-only
/*
 * CXL Error INJection support. Used by CXL core to inject
 * protocol errors into CXL ports.
 *
 * Copyright (C) 2023 Advanced Micro Devices, Inc.
 *
 * Author: Ben Cheatham <benjamin.cheatham@amd.com>
 */
#include <linux/einj-cxl.h>
#include <linux/seq_file.h>
#include <linux/pci.h>

#include "apei-internal.h"

/* Defined in einj-core.c */
extern bool einj_initialized;

static struct { u32 mask; const char *str; } const einj_cxl_error_type_string[] = {
	{ ACPI_EINJ_CXL_CACHE_CORRECTABLE,   "CXL.cache Protocol Correctable" },
	{ ACPI_EINJ_CXL_CACHE_UNCORRECTABLE, "CXL.cache Protocol Uncorrectable non-fatal" },
	{ ACPI_EINJ_CXL_CACHE_FATAL,         "CXL.cache Protocol Uncorrectable fatal" },
	{ ACPI_EINJ_CXL_MEM_CORRECTABLE,     "CXL.mem Protocol Correctable" },
	{ ACPI_EINJ_CXL_MEM_UNCORRECTABLE,   "CXL.mem Protocol Uncorrectable non-fatal" },
	{ ACPI_EINJ_CXL_MEM_FATAL,           "CXL.mem Protocol Uncorrectable fatal" },
};

int einj_cxl_available_error_type_show(struct seq_file *m, void *v)
{
	int cxl_err, rc;
	u32 available_error_type = 0;

	rc = einj_get_available_error_type(&available_error_type);
	if (rc)
		return rc;

	for (int pos = 0; pos < ARRAY_SIZE(einj_cxl_error_type_string); pos++) {
		cxl_err = ACPI_EINJ_CXL_CACHE_CORRECTABLE << pos;

		if (available_error_type & cxl_err)
			seq_printf(m, "0x%08x\t%s\n",
				   einj_cxl_error_type_string[pos].mask,
				   einj_cxl_error_type_string[pos].str);
	}

	return 0;
}
EXPORT_SYMBOL_NS_GPL(einj_cxl_available_error_type_show, CXL);

static int cxl_dport_get_sbdf(struct pci_dev *dport_dev, u64 *sbdf)
{
	struct pci_bus *pbus;
	struct pci_host_bridge *bridge;
	u64 seg = 0, bus;

	pbus = dport_dev->bus;
	bridge = pci_find_host_bridge(pbus);
	if (!bridge)
		return -ENODEV;

	if (bridge->domain_nr != PCI_DOMAIN_NR_NOT_SET)
		seg = bridge->domain_nr;

	bus = pbus->number;
	*sbdf = (seg << 24) | (bus << 16) | dport_dev->devfn;

	return 0;
}
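/*
 * Worked example (annotation, not part of the commit): for the dport
 * 0000:e0:01.1 used in the EINJ documentation above, seg = 0, bus = 0xe0
 * and devfn = (0x01 << 3) | 0x1 = 0x09, so the encoding above yields
 * *sbdf = (0 << 24) | (0xe0 << 16) | 0x09 = 0xe00009, the value that
 * einj_cxl_inject_error() below passes to EINJ as param4.
 */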
int einj_cxl_inject_rch_error(u64 rcrb, u64 type)
{
	int rc;

	/* Only CXL error types can be specified */
	if (!einj_is_cxl_error_type(type))
		return -EINVAL;

	rc = einj_validate_error_type(type);
	if (rc)
		return rc;

	return einj_cxl_rch_error_inject(type, 0x2, rcrb, GENMASK_ULL(63, 0),
					 0, 0);
}
EXPORT_SYMBOL_NS_GPL(einj_cxl_inject_rch_error, CXL);

int einj_cxl_inject_error(struct pci_dev *dport, u64 type)
{
	u64 param4 = 0;
	int rc;

	/* Only CXL error types can be specified */
	if (!einj_is_cxl_error_type(type))
		return -EINVAL;

	rc = einj_validate_error_type(type);
	if (rc)
		return rc;

	rc = cxl_dport_get_sbdf(dport, &param4);
	if (rc)
		return rc;

	return einj_error_inject(type, 0x4, 0, 0, 0, param4);
}
EXPORT_SYMBOL_NS_GPL(einj_cxl_inject_error, CXL);

bool einj_cxl_is_initialized(void)
{
	return einj_initialized;
}
EXPORT_SYMBOL_NS_GPL(einj_cxl_is_initialized, CXL);
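
The bare 0x2 and 0x4 flag arguments in the two injection helpers above
correspond to the SET_ERROR_TYPE_WITH_ADDRESS flag bits already defined in
the einj code (values as in einj-core.c; the CXL-usage comments are
annotations for this context)::

	#define SETWA_FLAGS_APICID	0x1
	#define SETWA_FLAGS_MEM		0x2	/* RCRB base for CXL 1.0/1.1 RCH dports */
	#define SETWA_FLAGS_PCIE_SBDF	0x4	/* SBDF for CXL 2.0+ dports */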
@@ -59,9 +59,8 @@ struct target_cache {
};

enum {
-	NODE_ACCESS_CLASS_0 = 0,
-	NODE_ACCESS_CLASS_1,
-	NODE_ACCESS_CLASS_GENPORT_SINK,
+	NODE_ACCESS_CLASS_GENPORT_SINK_LOCAL = ACCESS_COORDINATE_MAX,
+	NODE_ACCESS_CLASS_GENPORT_SINK_CPU,
	NODE_ACCESS_CLASS_MAX,
};
@@ -75,6 +74,7 @@ struct memory_target {
	struct node_cache_attrs cache_attrs;
	u8 gen_port_device_handle[ACPI_SRAT_DEVICE_HANDLE_SIZE];
	bool registered;
	bool ext_updated;	/* externally updated */
};

struct memory_initiator {
@@ -127,7 +127,8 @@ static struct memory_target *acpi_find_genport_target(u32 uid)
/**
 * acpi_get_genport_coordinates - Retrieve the access coordinates for a generic port
 * @uid: ACPI unique id
- * @coord: The access coordinates written back out for the generic port
+ * @coord: The access coordinates written back out for the generic port.
+ *	   Expect 2 levels array.
 *
 * Return: 0 on success. Errno on failure.
 *
@@ -143,7 +144,10 @@ int acpi_get_genport_coordinates(u32 uid,
	if (!target)
		return -ENOENT;

-	*coord = target->coord[NODE_ACCESS_CLASS_GENPORT_SINK];
+	coord[ACCESS_COORDINATE_LOCAL] =
+		target->coord[NODE_ACCESS_CLASS_GENPORT_SINK_LOCAL];
+	coord[ACCESS_COORDINATE_CPU] =
+		target->coord[NODE_ACCESS_CLASS_GENPORT_SINK_CPU];

	return 0;
}
@@ -325,6 +329,35 @@ static void hmat_update_target_access(struct memory_target *target,
	}
}
int hmat_update_target_coordinates(int nid, struct access_coordinate *coord,
				   enum access_coordinate_class access)
{
	struct memory_target *target;
	int pxm;

	if (nid == NUMA_NO_NODE)
		return -EINVAL;

	pxm = node_to_pxm(nid);
	guard(mutex)(&target_lock);
	target = find_mem_target(pxm);
	if (!target)
		return -ENODEV;

	hmat_update_target_access(target, ACPI_HMAT_READ_LATENCY,
				  coord->read_latency, access);
	hmat_update_target_access(target, ACPI_HMAT_WRITE_LATENCY,
				  coord->write_latency, access);
	hmat_update_target_access(target, ACPI_HMAT_READ_BANDWIDTH,
				  coord->read_bandwidth, access);
	hmat_update_target_access(target, ACPI_HMAT_WRITE_BANDWIDTH,
				  coord->write_bandwidth, access);

	target->ext_updated = true;

	return 0;
}
EXPORT_SYMBOL_GPL(hmat_update_target_coordinates);
static __init void hmat_add_locality(struct acpi_hmat_locality *hmat_loc)
{
	struct memory_locality *loc;
@@ -374,11 +407,11 @@ static __init void hmat_update_target(unsigned int tgt_pxm, unsigned int init_px
	if (target && target->processor_pxm == init_pxm) {
		hmat_update_target_access(target, type, value,
-					  NODE_ACCESS_CLASS_0);
+					  ACCESS_COORDINATE_LOCAL);
		/* If the node has a CPU, update access 1 */
		if (node_state(pxm_to_node(init_pxm), N_CPU))
			hmat_update_target_access(target, type, value,
-						  NODE_ACCESS_CLASS_1);
+						  ACCESS_COORDINATE_CPU);
	}
}
@@ -696,8 +729,13 @@ static void hmat_update_target_attrs(struct memory_target *target,
	u32 best = 0;
	int i;

	/* Don't update if an external agent has changed the data. */
	if (target->ext_updated)
		return;
	/* Don't update for generic port if there's no device handle */
-	if (access == NODE_ACCESS_CLASS_GENPORT_SINK &&
+	if ((access == NODE_ACCESS_CLASS_GENPORT_SINK_LOCAL ||
+	     access == NODE_ACCESS_CLASS_GENPORT_SINK_CPU) &&
	    !(*(u16 *)target->gen_port_device_handle))
		return;
@@ -709,7 +747,8 @@ static void hmat_update_target_attrs(struct memory_target *target,
	 */
	if (target->processor_pxm != PXM_INVAL) {
		cpu_nid = pxm_to_node(target->processor_pxm);
-		if (access == 0 || node_state(cpu_nid, N_CPU)) {
+		if (access == ACCESS_COORDINATE_LOCAL ||
+		    node_state(cpu_nid, N_CPU)) {
			set_bit(target->processor_pxm, p_nodes);
			return;
		}
@@ -737,7 +776,9 @@ static void hmat_update_target_attrs(struct memory_target *target,
	list_for_each_entry(initiator, &initiators, node) {
		u32 value;

-		if (access == 1 && !initiator->has_cpu) {
+		if ((access == ACCESS_COORDINATE_CPU ||
+		     access == NODE_ACCESS_CLASS_GENPORT_SINK_CPU) &&
+		    !initiator->has_cpu) {
			clear_bit(initiator->processor_pxm, p_nodes);
			continue;
		}
@@ -770,20 +811,24 @@ static void __hmat_register_target_initiators(struct memory_target *target,
	}
}
-static void hmat_register_generic_target_initiators(struct memory_target *target)
+static void hmat_update_generic_target(struct memory_target *target)
{
	static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);

-	__hmat_register_target_initiators(target, p_nodes,
-					  NODE_ACCESS_CLASS_GENPORT_SINK);
+	hmat_update_target_attrs(target, p_nodes,
+				 NODE_ACCESS_CLASS_GENPORT_SINK_LOCAL);
+	hmat_update_target_attrs(target, p_nodes,
+				 NODE_ACCESS_CLASS_GENPORT_SINK_CPU);
}
static void hmat_register_target_initiators(struct memory_target *target)
{
	static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);

-	__hmat_register_target_initiators(target, p_nodes, 0);
-	__hmat_register_target_initiators(target, p_nodes, 1);
+	__hmat_register_target_initiators(target, p_nodes,
+					  ACCESS_COORDINATE_LOCAL);
+	__hmat_register_target_initiators(target, p_nodes,
+					  ACCESS_COORDINATE_CPU);
}
static void hmat_register_target_cache(struct memory_target *target)
@@ -835,7 +880,7 @@ static void hmat_register_target(struct memory_target *target)
	 */
	mutex_lock(&target_lock);
	if (*(u16 *)target->gen_port_device_handle) {
-		hmat_register_generic_target_initiators(target);
+		hmat_update_generic_target(target);
		target->registered = true;
	}
	mutex_unlock(&target_lock);
@@ -854,8 +899,8 @@ static void hmat_register_target(struct memory_target *target)
	if (!target->registered) {
		hmat_register_target_initiators(target);
		hmat_register_target_cache(target);
-		hmat_register_target_perf(target, NODE_ACCESS_CLASS_0);
-		hmat_register_target_perf(target, NODE_ACCESS_CLASS_1);
+		hmat_register_target_perf(target, ACCESS_COORDINATE_LOCAL);
+		hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
		target->registered = true;
	}
	mutex_unlock(&target_lock);
@@ -927,7 +972,7 @@ static int hmat_calculate_adistance(struct notifier_block *self,
		return NOTIFY_OK;

	mutex_lock(&target_lock);
-	hmat_update_target_attrs(target, p_nodes, 1);
+	hmat_update_target_attrs(target, p_nodes, ACCESS_COORDINATE_CPU);
	mutex_unlock(&target_lock);

	perf = &target->coord[1];
......
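The ACCESS_COORDINATE_* constants that replace the old numeric classes in
the hmat.c hunks above come from the shared enum introduced earlier in this
series; its assumed shape, consistent with
NODE_ACCESS_CLASS_GENPORT_SINK_LOCAL = ACCESS_COORDINATE_MAX above::

	/* include/linux/node.h (sketch) */
	enum access_coordinate_class {
		ACCESS_COORDINATE_LOCAL,	/* access0: nearest initiator */
		ACCESS_COORDINATE_CPU,		/* access1: nearest CPU */
		ACCESS_COORDINATE_MAX,
	};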
@@ -29,6 +29,8 @@ static int node_to_pxm_map[MAX_NUMNODES]
unsigned char acpi_srat_revision __initdata;
static int acpi_numa __initdata;

static int last_real_pxm;

void __init disable_srat(void)
{
	acpi_numa = -1;
@@ -536,6 +538,7 @@ int __init acpi_numa_init(void)
		if (node_to_pxm_map[i] > fake_pxm)
			fake_pxm = node_to_pxm_map[i];
	}
	last_real_pxm = fake_pxm;
	fake_pxm++;
	acpi_table_parse_cedt(ACPI_CEDT_TYPE_CFMWS, acpi_parse_cfmws,
			      &fake_pxm);
@@ -547,6 +550,14 @@ int __init acpi_numa_init(void)
	return 0;
}
bool acpi_node_backed_by_real_pxm(int nid)
{
	int pxm = node_to_pxm(nid);

	return pxm <= last_real_pxm;
}
EXPORT_SYMBOL_GPL(acpi_node_backed_by_real_pxm);
static int acpi_get_pxm(acpi_handle h)
{
	unsigned long long pxm;
......
@@ -253,7 +253,7 @@ int __init_or_acpilib acpi_table_parse_entries_array(
	count = acpi_parse_entries_array(id, table_size,
					 (union fw_table_header *)table_header,
-					 proc, proc_num, max_entries);
+					 0, proc, proc_num, max_entries);
	acpi_put_table(table_header);

	return count;
......
@@ -126,7 +126,7 @@ static void node_access_release(struct device *dev)
}

static struct node_access_nodes *node_init_node_access(struct node *node,
-						       unsigned int access)
+						       enum access_coordinate_class access)
{
	struct node_access_nodes *access_node;
	struct device *dev;
@@ -191,7 +191,7 @@ static struct attribute *access_attrs[] = {
 * @access: The access class for the given attributes
 */
void node_set_perf_attrs(unsigned int nid, struct access_coordinate *coord,
-			 unsigned int access)
+			 enum access_coordinate_class access)
{
	struct node_access_nodes *c;
	struct node *node;
@@ -215,6 +215,7 @@ void node_set_perf_attrs(unsigned int nid, struct access_coordinate *coord,
		}
	}
}
EXPORT_SYMBOL_GPL(node_set_perf_attrs);
/**
 * struct node_cache_info - Internal tracking for memory node caches
@@ -689,7 +690,7 @@ int register_cpu_under_node(unsigned int cpu, unsigned int nid)
 */
int register_memory_node_under_compute_node(unsigned int mem_nid,
					    unsigned int cpu_nid,
-					    unsigned int access)
+					    enum access_coordinate_class access)
{
	struct node *init_node, *targ_node;
	struct node_access_nodes *initiator, *target;
......
@@ -530,13 +530,15 @@ static int get_genport_coordinates(struct device *dev, struct cxl_dport *dport)
	if (kstrtou32(acpi_device_uid(hb), 0, &uid))
		return -EINVAL;

-	rc = acpi_get_genport_coordinates(uid, &dport->hb_coord);
+	rc = acpi_get_genport_coordinates(uid, dport->hb_coord);
	if (rc < 0)
		return rc;

	/* Adjust back to picoseconds from nanoseconds */
-	dport->hb_coord.read_latency *= 1000;
-	dport->hb_coord.write_latency *= 1000;
+	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
+		dport->hb_coord[i].read_latency *= 1000;
+		dport->hb_coord[i].write_latency *= 1000;
+	}

	return 0;
}
......
@@ -9,6 +9,7 @@
#include "cxlmem.h"
#include "core.h"
#include "cxl.h"
#include "core.h"

struct dsmas_entry {
	struct range dpa_range;
@@ -149,28 +150,35 @@ static int cxl_cdat_endpoint_process(struct cxl_port *port,
	int rc;

	rc = cdat_table_parse(ACPI_CDAT_TYPE_DSMAS, cdat_dsmas_handler,
-			      dsmas_xa, port->cdat.table);
+			      dsmas_xa, port->cdat.table, port->cdat.length);
	rc = cdat_table_parse_output(rc);
	if (rc)
		return rc;

	rc = cdat_table_parse(ACPI_CDAT_TYPE_DSLBIS, cdat_dslbis_handler,
-			      dsmas_xa, port->cdat.table);
+			      dsmas_xa, port->cdat.table, port->cdat.length);
	return cdat_table_parse_output(rc);
}
static int cxl_port_perf_data_calculate(struct cxl_port *port,
					struct xarray *dsmas_xa)
{
-	struct access_coordinate c;
+	struct access_coordinate ep_c;
+	struct access_coordinate coord[ACCESS_COORDINATE_MAX];
	struct dsmas_entry *dent;
	int valid_entries = 0;
	unsigned long index;
	int rc;

-	rc = cxl_endpoint_get_perf_coordinates(port, &c);
+	rc = cxl_endpoint_get_perf_coordinates(port, &ep_c);
+	if (rc) {
+		dev_dbg(&port->dev, "Failed to retrieve ep perf coordinates.\n");
+		return rc;
+	}
+
+	rc = cxl_hb_get_perf_coordinates(port, coord);
	if (rc) {
-		dev_dbg(&port->dev, "Failed to retrieve perf coordinates.\n");
+		dev_dbg(&port->dev, "Failed to retrieve hb perf coordinates.\n");
		return rc;
	}
@@ -185,18 +193,19 @@ static int cxl_port_perf_data_calculate(struct cxl_port *port,
	xa_for_each(dsmas_xa, index, dent) {
		int qos_class;

-		dent->coord.read_latency = dent->coord.read_latency +
-					   c.read_latency;
-		dent->coord.write_latency = dent->coord.write_latency +
-					    c.write_latency;
-		dent->coord.read_bandwidth = min_t(int, c.read_bandwidth,
-						   dent->coord.read_bandwidth);
-		dent->coord.write_bandwidth = min_t(int, c.write_bandwidth,
-						    dent->coord.write_bandwidth);
+		cxl_coordinates_combine(&dent->coord, &dent->coord, &ep_c);
+		/*
+		 * Keeping the host bridge coordinates separate from the dsmas
+		 * coordinates in order to allow calculation of access class
+		 * 0 and 1 for region later.
+		 */
+		cxl_coordinates_combine(&coord[ACCESS_COORDINATE_CPU],
+					&coord[ACCESS_COORDINATE_CPU],
+					&dent->coord);
		dent->entries = 1;
-		rc = cxl_root->ops->qos_class(cxl_root, &dent->coord, 1,
-					      &qos_class);
+		rc = cxl_root->ops->qos_class(cxl_root,
+					      &coord[ACCESS_COORDINATE_CPU],
+					      1, &qos_class);
		if (rc != 1)
			continue;
@@ -389,36 +398,38 @@ EXPORT_SYMBOL_NS_GPL(cxl_endpoint_parse_cdat, CXL);
static int cdat_sslbis_handler(union acpi_subtable_headers *header, void *arg,
			       const unsigned long end)
{
+	struct acpi_cdat_sslbis_table {
+		struct acpi_cdat_header header;
+		struct acpi_cdat_sslbis sslbis_header;
+		struct acpi_cdat_sslbe entries[];
+	} *tbl = (struct acpi_cdat_sslbis_table *)header;
+	int size = sizeof(header->cdat) + sizeof(tbl->sslbis_header);
	struct acpi_cdat_sslbis *sslbis;
-	int size = sizeof(header->cdat) + sizeof(*sslbis);
	struct cxl_port *port = arg;
	struct device *dev = &port->dev;
-	struct acpi_cdat_sslbe *entry;
	int remain, entries, i;
	u16 len;

	len = le16_to_cpu((__force __le16)header->cdat.length);
	remain = len - size;
-	if (!remain || remain % sizeof(*entry) ||
+	if (!remain || remain % sizeof(tbl->entries[0]) ||
	    (unsigned long)header + len > end) {
		dev_warn(dev, "Malformed SSLBIS table length: (%u)\n", len);
		return -EINVAL;
	}
-	/* Skip common header */
-	sslbis = (struct acpi_cdat_sslbis *)((unsigned long)header +
-					     sizeof(header->cdat));
+	sslbis = &tbl->sslbis_header;

	/* Unrecognized data type, we can skip */
	if (sslbis->data_type > ACPI_HMAT_WRITE_BANDWIDTH)
		return 0;
-	entries = remain / sizeof(*entry);
-	entry = (struct acpi_cdat_sslbe *)((unsigned long)header + sizeof(*sslbis));
+	entries = remain / sizeof(tbl->entries[0]);
+	if (struct_size(tbl, entries, entries) != len)
+		return -EINVAL;
	for (i = 0; i < entries; i++) {
-		u16 x = le16_to_cpu((__force __le16)entry->portx_id);
-		u16 y = le16_to_cpu((__force __le16)entry->porty_id);
+		u16 x = le16_to_cpu((__force __le16)tbl->entries[i].portx_id);
+		u16 y = le16_to_cpu((__force __le16)tbl->entries[i].porty_id);
		__le64 le_base;
		__le16 le_val;
		struct cxl_dport *dport;
@@ -448,8 +459,8 @@ static int cdat_sslbis_handler(union acpi_subtable_headers *header, void *arg,
			break;
		}

-		le_base = (__force __le64)sslbis->entry_base_unit;
-		le_val = (__force __le16)entry->latency_or_bandwidth;
+		le_base = (__force __le64)tbl->sslbis_header.entry_base_unit;
+		le_val = (__force __le16)tbl->entries[i].latency_or_bandwidth;

		if (check_mul_overflow(le64_to_cpu(le_base),
				       le16_to_cpu(le_val), &val))
@@ -462,8 +473,6 @@ static int cdat_sslbis_handler(union acpi_subtable_headers *header, void *arg,
					sslbis->data_type,
					val);
		}
-
-		entry++;
	}
	return 0;

@@ -477,11 +486,108 @@ void cxl_switch_parse_cdat(struct cxl_port *port)
		return;

	rc = cdat_table_parse(ACPI_CDAT_TYPE_SSLBIS, cdat_sslbis_handler,
-			      port, port->cdat.table);
+			      port, port->cdat.table, port->cdat.length);
	rc = cdat_table_parse_output(rc);
	if (rc)
		dev_dbg(&port->dev, "Failed to parse SSLBIS: %d\n", rc);
}
EXPORT_SYMBOL_NS_GPL(cxl_switch_parse_cdat, CXL);
/**
 * cxl_coordinates_combine - Combine the two input coordinates
 *
 * @out: Output coordinate of c1 and c2 combined
 * @c1: input coordinates
 * @c2: input coordinates
 */
void cxl_coordinates_combine(struct access_coordinate *out,
			     struct access_coordinate *c1,
			     struct access_coordinate *c2)
{
	if (c1->write_bandwidth && c2->write_bandwidth)
		out->write_bandwidth = min(c1->write_bandwidth,
					   c2->write_bandwidth);
	out->write_latency = c1->write_latency + c2->write_latency;

	if (c1->read_bandwidth && c2->read_bandwidth)
		out->read_bandwidth = min(c1->read_bandwidth,
					  c2->read_bandwidth);
	out->read_latency = c1->read_latency + c2->read_latency;
}
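/*
 * Worked example (annotation, not part of the commit): combining a hop
 * with {read_bandwidth 16000, read_latency 100} and a hop with
 * {read_bandwidth 8000, read_latency 150} yields {8000, 250}: latencies
 * accumulate along the path while bandwidth is capped by the slowest hop.
 */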
MODULE_IMPORT_NS(CXL);
void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
				    struct cxl_endpoint_decoder *cxled)
{
	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
	struct cxl_port *port = cxlmd->endpoint;
	struct cxl_dev_state *cxlds = cxlmd->cxlds;
	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
	struct access_coordinate hb_coord[ACCESS_COORDINATE_MAX];
	struct access_coordinate coord;
	struct range dpa = {
		.start = cxled->dpa_res->start,
		.end = cxled->dpa_res->end,
	};
	struct cxl_dpa_perf *perf;
	int rc;

	switch (cxlr->mode) {
	case CXL_DECODER_RAM:
		perf = &mds->ram_perf;
		break;
	case CXL_DECODER_PMEM:
		perf = &mds->pmem_perf;
		break;
	default:
		return;
	}

	lockdep_assert_held(&cxl_dpa_rwsem);

	if (!range_contains(&perf->dpa_range, &dpa))
		return;

	rc = cxl_hb_get_perf_coordinates(port, hb_coord);
	if (rc) {
		dev_dbg(&port->dev, "Failed to retrieve hb perf coordinates.\n");
		return;
	}

	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
		/* Pickup the host bridge coords */
		cxl_coordinates_combine(&coord, &hb_coord[i], &perf->coord);

		/* Get total bandwidth and the worst latency for the cxl region */
		cxlr->coord[i].read_latency = max_t(unsigned int,
						    cxlr->coord[i].read_latency,
						    coord.read_latency);
		cxlr->coord[i].write_latency = max_t(unsigned int,
						     cxlr->coord[i].write_latency,
						     coord.write_latency);
		cxlr->coord[i].read_bandwidth += coord.read_bandwidth;
		cxlr->coord[i].write_bandwidth += coord.write_bandwidth;

		/*
		 * Convert latency to nanosec from picosec to be consistent
		 * with the resulting latency coordinates computed by the
		 * HMAT_REPORTING code.
		 */
		cxlr->coord[i].read_latency =
			DIV_ROUND_UP(cxlr->coord[i].read_latency, 1000);
		cxlr->coord[i].write_latency =
			DIV_ROUND_UP(cxlr->coord[i].write_latency, 1000);
	}
}

int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
				       enum access_coordinate_class access)
{
	return hmat_update_target_coordinates(nid, &cxlr->coord[access], access);
}

bool cxl_need_node_perf_attrs_update(int nid)
{
	return !acpi_node_backed_by_real_pxm(nid);
}
@@ -90,4 +90,8 @@ enum cxl_poison_trace_type {
long cxl_pci_get_latency(struct pci_dev *pdev);

int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
				       enum access_coordinate_class access);
bool cxl_need_node_perf_attrs_update(int nid);

#endif /* __CXL_CORE_H__ */
@@ -518,14 +518,14 @@ EXPORT_SYMBOL_NS_GPL(cxl_hdm_decode_init, CXL);
	 FIELD_PREP(CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE, (entry_handle)))

static int cxl_cdat_get_length(struct device *dev,
-			       struct pci_doe_mb *cdat_doe,
+			       struct pci_doe_mb *doe_mb,
			       size_t *length)
{
	__le32 request = CDAT_DOE_REQ(0);
	__le32 response[2];
	int rc;

-	rc = pci_doe(cdat_doe, PCI_DVSEC_VENDOR_ID_CXL,
+	rc = pci_doe(doe_mb, PCI_DVSEC_VENDOR_ID_CXL,
		     CXL_DOE_PROTOCOL_TABLE_ACCESS,
		     &request, sizeof(request),
		     &response, sizeof(response));
@@ -543,56 +543,58 @@ static int cxl_cdat_get_length(struct device *dev,
}

static int cxl_cdat_read_table(struct device *dev,
-			       struct pci_doe_mb *cdat_doe,
-			       void *cdat_table, size_t *cdat_length)
+			       struct pci_doe_mb *doe_mb,
+			       struct cdat_doe_rsp *rsp, size_t *length)
{
-	size_t length = *cdat_length + sizeof(__le32);
-	__le32 *data = cdat_table;
-	int entry_handle = 0;
+	size_t received, remaining = *length;
+	unsigned int entry_handle = 0;
+	union cdat_data *data;
	__le32 saved_dw = 0;

	do {
		__le32 request = CDAT_DOE_REQ(entry_handle);
-		struct cdat_entry_header *entry;
-		size_t entry_dw;
		int rc;

-		rc = pci_doe(cdat_doe, PCI_DVSEC_VENDOR_ID_CXL,
+		rc = pci_doe(doe_mb, PCI_DVSEC_VENDOR_ID_CXL,
			     CXL_DOE_PROTOCOL_TABLE_ACCESS,
			     &request, sizeof(request),
-			     data, length);
+			     rsp, sizeof(*rsp) + remaining);
		if (rc < 0) {
			dev_err(dev, "DOE failed: %d", rc);
			return rc;
		}

-		/* 1 DW Table Access Response Header + CDAT entry */
-		entry = (struct cdat_entry_header *)(data + 1);
-		if ((entry_handle == 0 &&
-		     rc != sizeof(__le32) + sizeof(struct cdat_header)) ||
-		    (entry_handle > 0 &&
-		     (rc < sizeof(__le32) + sizeof(*entry) ||
-		      rc != sizeof(__le32) + le16_to_cpu(entry->length))))
+		if (rc < sizeof(*rsp))
			return -EIO;

+		data = (union cdat_data *)rsp->data;
+		received = rc - sizeof(*rsp);
+
+		if (entry_handle == 0) {
+			if (received != sizeof(data->header))
+				return -EIO;
+		} else {
+			if (received < sizeof(data->entry) ||
+			    received != le16_to_cpu(data->entry.length))
+				return -EIO;
+		}
+
		/* Get the CXL table access header entry handle */
		entry_handle = FIELD_GET(CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE,
-					 le32_to_cpu(data[0]));
-		entry_dw = rc / sizeof(__le32);
-		/* Skip Header */
-		entry_dw -= 1;
+					 le32_to_cpu(rsp->doe_header));
+
		/*
		 * Table Access Response Header overwrote the last DW of
		 * previous entry, so restore that DW
		 */
-		*data = saved_dw;
-		length -= entry_dw * sizeof(__le32);
-		data += entry_dw;
-		saved_dw = *data;
+		rsp->doe_header = saved_dw;
+		remaining -= received;
+		rsp = (void *)rsp + received;
+		saved_dw = rsp->doe_header;
	} while (entry_handle != CXL_DOE_TABLE_ACCESS_LAST_ENTRY);

	/* Length in CDAT header may exceed concatenation of CDAT entries */
-	*cdat_length -= length - sizeof(__le32);
+	*length -= remaining;

	return 0;
}
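
The cdat_doe_rsp and cdat_data types used above are added to
drivers/cxl/cxlpci.h by the same patch but are not visible in this excerpt;
their assumed shape, inferred from the accesses in the loop above::

	struct cdat_doe_rsp {
		__le32 doe_header;	/* DOE Table Access Response Header */
		u8 data[];		/* CDAT header or structure entry */
	};

	union cdat_data {
		struct cdat_header header;	/* returned for entry_handle 0 */
		struct cdat_entry_header entry;	/* subsequent entries */
	};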
@@ -617,11 +619,11 @@ void read_cdat_data(struct cxl_port *port)
{
	struct device *uport = port->uport_dev;
	struct device *dev = &port->dev;
-	struct pci_doe_mb *cdat_doe;
+	struct pci_doe_mb *doe_mb;
	struct pci_dev *pdev = NULL;
	struct cxl_memdev *cxlmd;
-	size_t cdat_length;
-	void *cdat_table, *cdat_buf;
+	struct cdat_doe_rsp *buf;
+	size_t table_length, length;
	int rc;

	if (is_cxl_memdev(uport)) {
@@ -638,39 +640,48 @@ void read_cdat_data(struct cxl_port *port)
	if (!pdev)
		return;

-	cdat_doe = pci_find_doe_mailbox(pdev, PCI_DVSEC_VENDOR_ID_CXL,
-					CXL_DOE_PROTOCOL_TABLE_ACCESS);
-	if (!cdat_doe) {
+	doe_mb = pci_find_doe_mailbox(pdev, PCI_DVSEC_VENDOR_ID_CXL,
+				      CXL_DOE_PROTOCOL_TABLE_ACCESS);
+	if (!doe_mb) {
		dev_dbg(dev, "No CDAT mailbox\n");
		return;
	}

	port->cdat_available = true;

-	if (cxl_cdat_get_length(dev, cdat_doe, &cdat_length)) {
+	if (cxl_cdat_get_length(dev, doe_mb, &length)) {
		dev_dbg(dev, "No CDAT length\n");
		return;
	}

-	cdat_buf = devm_kzalloc(dev, cdat_length + sizeof(__le32), GFP_KERNEL);
-	if (!cdat_buf)
-		return;
+	/*
+	 * The begin of the CDAT buffer needs space for additional 4
+	 * bytes for the DOE header. Table data starts afterwards.
+	 */
+	buf = devm_kzalloc(dev, sizeof(*buf) + length, GFP_KERNEL);
+	if (!buf)
+		goto err;
+
+	table_length = length;

-	rc = cxl_cdat_read_table(dev, cdat_doe, cdat_buf, &cdat_length);
+	rc = cxl_cdat_read_table(dev, doe_mb, buf, &length);
	if (rc)
		goto err;

-	cdat_table = cdat_buf + sizeof(__le32);
-	if (cdat_checksum(cdat_table, cdat_length))
+	if (table_length != length)
+		dev_warn(dev, "Malformed CDAT table length (%zu:%zu), discarding trailing data\n",
+			 table_length, length);
+
+	if (cdat_checksum(buf->data, length))
		goto err;

-	port->cdat.table = cdat_table;
-	port->cdat.length = cdat_length;
+	port->cdat.table = buf->data;
+	port->cdat.length = length;

	return;

err:
	/* Don't leave table data allocated on error */
-	devm_kfree(dev, cdat_buf);
+	devm_kfree(dev, buf);
	dev_err(dev, "Failed to read/validate CDAT.\n");
}
EXPORT_SYMBOL_NS_GPL(read_cdat_data, CXL);
......
@@ -3,6 +3,7 @@
#include <linux/platform_device.h>
#include <linux/memregion.h>
#include <linux/workqueue.h>
#include <linux/einj-cxl.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/module.h>
@@ -793,6 +794,40 @@ static int cxl_dport_setup_regs(struct device *host, struct cxl_dport *dport,
	return rc;
}
DEFINE_SHOW_ATTRIBUTE(einj_cxl_available_error_type);

static int cxl_einj_inject(void *data, u64 type)
{
	struct cxl_dport *dport = data;

	if (dport->rch)
		return einj_cxl_inject_rch_error(dport->rcrb.base, type);

	return einj_cxl_inject_error(to_pci_dev(dport->dport_dev), type);
}
DEFINE_DEBUGFS_ATTRIBUTE(cxl_einj_inject_fops, NULL, cxl_einj_inject,
			 "0x%llx\n");

static void cxl_debugfs_create_dport_dir(struct cxl_dport *dport)
{
	struct dentry *dir;

	if (!einj_cxl_is_initialized())
		return;

	/*
	 * dport_dev needs to be a PCIe port for CXL 2.0+ ports because
	 * EINJ expects a dport SBDF to be specified for 2.0 error injection.
	 */
	if (!dport->rch && !dev_is_pci(dport->dport_dev))
		return;

	dir = cxl_debugfs_create_dir(dev_name(dport->dport_dev));

	debugfs_create_file("einj_inject", 0200, dir, dport,
			    &cxl_einj_inject_fops);
}
static struct cxl_port *__devm_cxl_add_port(struct device *host,
					    struct device *uport_dev,
					    resource_size_t component_reg_phys,
@@ -822,6 +857,7 @@ static struct cxl_port *__devm_cxl_add_port(struct device *host,
		 */
		port->reg_map = cxlds->reg_map;
		port->reg_map.host = &port->dev;
		cxlmd->endpoint = port;
	} else if (parent_dport) {
		rc = dev_set_name(dev, "port%d", port->id);
		if (rc)
@@ -1149,6 +1185,8 @@ __devm_cxl_add_dport(struct cxl_port *port, struct device *dport_dev,
	if (dev_is_pci(dport_dev))
		dport->link_latency = cxl_pci_get_latency(to_pci_dev(dport_dev));

	cxl_debugfs_create_dport_dir(dport);

	return dport;
}
@@ -1374,7 +1412,6 @@ int cxl_endpoint_autoremove(struct cxl_memdev *cxlmd, struct cxl_port *endpoint)
	get_device(host);
	get_device(&endpoint->dev);
-	cxlmd->endpoint = endpoint;
	cxlmd->depth = endpoint->depth;

	return devm_add_action_or_reset(dev, delete_endpoint, cxlmd);
}
@@ -2096,18 +2133,36 @@ bool schedule_cxl_memdev_detach(struct cxl_memdev *cxlmd)
 }
 EXPORT_SYMBOL_NS_GPL(schedule_cxl_memdev_detach, CXL);
 
-static void combine_coordinates(struct access_coordinate *c1,
-				struct access_coordinate *c2)
+/**
+ * cxl_hb_get_perf_coordinates - Retrieve performance numbers between initiator
+ *				 and host bridge
+ *
+ * @port: endpoint cxl_port
+ * @coord: output access coordinates
+ *
+ * Return: errno on failure, 0 on success.
+ */
+int cxl_hb_get_perf_coordinates(struct cxl_port *port,
+				struct access_coordinate *coord)
 {
-	if (c2->write_bandwidth)
-		c1->write_bandwidth = min(c1->write_bandwidth,
-					  c2->write_bandwidth);
-	c1->write_latency += c2->write_latency;
+	struct cxl_port *iter = port;
+	struct cxl_dport *dport;
 
-	if (c2->read_bandwidth)
-		c1->read_bandwidth = min(c1->read_bandwidth,
-					  c2->read_bandwidth);
-	c1->read_latency += c2->read_latency;
+	if (!is_cxl_endpoint(port))
+		return -EINVAL;
+
+	dport = iter->parent_dport;
+	while (iter && !is_cxl_root(to_cxl_port(iter->dev.parent))) {
+		iter = to_cxl_port(iter->dev.parent);
+		dport = iter->parent_dport;
+	}
+
+	coord[ACCESS_COORDINATE_LOCAL] =
+		dport->hb_coord[ACCESS_COORDINATE_LOCAL];
+	coord[ACCESS_COORDINATE_CPU] =
+		dport->hb_coord[ACCESS_COORDINATE_CPU];
+
+	return 0;
 }
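For orientation, a hypothetical in-kernel caller fetching both access classes for an endpoint might look like the fragment below (variable names are assumptions; in this series the consumer is the region QoS calculation in core/cdat.c):

	struct access_coordinate hb_coord[ACCESS_COORDINATE_MAX];
	int rc;

	rc = cxl_hb_get_perf_coordinates(port, hb_coord);
	if (rc) {
		dev_dbg(&port->dev, "Failed to retrieve hb perf coordinates.\n");
		return;
	}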
 /**
@@ -2143,7 +2198,7 @@ int cxl_endpoint_get_perf_coordinates(struct cxl_port *port,
 	 * nothing to gather.
 	 */
 	while (iter && !is_cxl_root(to_cxl_port(iter->dev.parent))) {
-		combine_coordinates(&c, &dport->sw_coord);
+		cxl_coordinates_combine(&c, &c, &dport->sw_coord);
 		c.write_latency += dport->link_latency;
 		c.read_latency += dport->link_latency;
@@ -2151,9 +2206,6 @@ int cxl_endpoint_get_perf_coordinates(struct cxl_port *port,
 		dport = iter->parent_dport;
 	}
 
-	/* Augment with the generic port (host bridge) perf data */
-	combine_coordinates(&c, &dport->hb_coord);
-
 	/* Get the calculated PCI paths bandwidth */
 	pdev = to_pci_dev(port->uport_dev->parent);
 	bw = pcie_bandwidth_available(pdev, NULL, NULL, NULL);
@@ -2221,6 +2273,10 @@ static __init int cxl_core_init(void)
 
 	cxl_debugfs = debugfs_create_dir("cxl", NULL);
 
+	if (einj_cxl_is_initialized())
+		debugfs_create_file("einj_types", 0400, cxl_debugfs, NULL,
+				    &einj_cxl_available_error_type_fops);
+
 	cxl_mbox_init();
 
 	rc = cxl_memdev_init();
...
@@ -4,6 +4,7 @@
 #include <linux/genalloc.h>
 #include <linux/device.h>
 #include <linux/module.h>
+#include <linux/memory.h>
 #include <linux/slab.h>
 #include <linux/uuid.h>
 #include <linux/sort.h>
@@ -30,6 +31,108 @@
 static struct cxl_region *to_cxl_region(struct device *dev);
+#define __ACCESS_ATTR_RO(_level, _name) {				\
+	.attr = { .name = __stringify(_name), .mode = 0444 },		\
+	.show = _name##_access##_level##_show,				\
+}
+
+#define ACCESS_DEVICE_ATTR_RO(level, name)	\
+	struct device_attribute dev_attr_access##level##_##name = __ACCESS_ATTR_RO(level, name)
+
+#define ACCESS_ATTR_RO(level, attrib)					\
+static ssize_t attrib##_access##level##_show(struct device *dev,	\
+					     struct device_attribute *attr, \
+					     char *buf)			\
+{									\
+	struct cxl_region *cxlr = to_cxl_region(dev);			\
+									\
+	if (cxlr->coord[level].attrib == 0)				\
+		return -ENOENT;						\
+									\
+	return sysfs_emit(buf, "%u\n", cxlr->coord[level].attrib);	\
+}									\
+static ACCESS_DEVICE_ATTR_RO(level, attrib)
+
+ACCESS_ATTR_RO(0, read_bandwidth);
+ACCESS_ATTR_RO(0, read_latency);
+ACCESS_ATTR_RO(0, write_bandwidth);
+ACCESS_ATTR_RO(0, write_latency);
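To make the token pasting concrete, ACCESS_ATTR_RO(0, read_bandwidth) expands to roughly the following (hand-expanded sketch, not compiler output):

	static ssize_t read_bandwidth_access0_show(struct device *dev,
						   struct device_attribute *attr,
						   char *buf)
	{
		struct cxl_region *cxlr = to_cxl_region(dev);

		if (cxlr->coord[0].read_bandwidth == 0)
			return -ENOENT;

		return sysfs_emit(buf, "%u\n", cxlr->coord[0].read_bandwidth);
	}
	static struct device_attribute dev_attr_access0_read_bandwidth = {
		.attr = { .name = "read_bandwidth", .mode = 0444 },
		.show = read_bandwidth_access0_show,
	};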
+#define ACCESS_ATTR_DECLARE(level, attrib)	\
+	(&dev_attr_access##level##_##attrib.attr)
+
+static struct attribute *access0_coordinate_attrs[] = {
+	ACCESS_ATTR_DECLARE(0, read_bandwidth),
+	ACCESS_ATTR_DECLARE(0, write_bandwidth),
+	ACCESS_ATTR_DECLARE(0, read_latency),
+	ACCESS_ATTR_DECLARE(0, write_latency),
+	NULL
+};
+
+ACCESS_ATTR_RO(1, read_bandwidth);
+ACCESS_ATTR_RO(1, read_latency);
+ACCESS_ATTR_RO(1, write_bandwidth);
+ACCESS_ATTR_RO(1, write_latency);
+
+static struct attribute *access1_coordinate_attrs[] = {
+	ACCESS_ATTR_DECLARE(1, read_bandwidth),
+	ACCESS_ATTR_DECLARE(1, write_bandwidth),
+	ACCESS_ATTR_DECLARE(1, read_latency),
+	ACCESS_ATTR_DECLARE(1, write_latency),
+	NULL
+};
+
+#define ACCESS_VISIBLE(level)						\
+static umode_t cxl_region_access##level##_coordinate_visible(		\
+		struct kobject *kobj, struct attribute *a, int n)	\
+{									\
+	struct device *dev = kobj_to_dev(kobj);				\
+	struct cxl_region *cxlr = to_cxl_region(dev);			\
+									\
+	if (a == &dev_attr_access##level##_read_latency.attr &&		\
+	    cxlr->coord[level].read_latency == 0)			\
+		return 0;						\
+									\
+	if (a == &dev_attr_access##level##_write_latency.attr &&	\
+	    cxlr->coord[level].write_latency == 0)			\
+		return 0;						\
+									\
+	if (a == &dev_attr_access##level##_read_bandwidth.attr &&	\
+	    cxlr->coord[level].read_bandwidth == 0)			\
+		return 0;						\
+									\
+	if (a == &dev_attr_access##level##_write_bandwidth.attr &&	\
+	    cxlr->coord[level].write_bandwidth == 0)			\
+		return 0;						\
+									\
+	return a->mode;							\
+}
+
+ACCESS_VISIBLE(0);
+ACCESS_VISIBLE(1);
+
+static const struct attribute_group cxl_region_access0_coordinate_group = {
+	.name = "access0",
+	.attrs = access0_coordinate_attrs,
+	.is_visible = cxl_region_access0_coordinate_visible,
+};
+
+static const struct attribute_group *get_cxl_region_access0_group(void)
+{
+	return &cxl_region_access0_coordinate_group;
+}
+
+static const struct attribute_group cxl_region_access1_coordinate_group = {
+	.name = "access1",
+	.attrs = access1_coordinate_attrs,
+	.is_visible = cxl_region_access1_coordinate_visible,
+};
+
+static const struct attribute_group *get_cxl_region_access1_group(void)
+{
+	return &cxl_region_access1_coordinate_group;
+}
 static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
 			 char *buf)
 {
@@ -1752,6 +1855,8 @@ static int cxl_region_attach(struct cxl_region *cxlr,
 		return -EINVAL;
 	}
 
+	cxl_region_perf_data_calculate(cxlr, cxled);
+
 	if (test_bit(CXL_REGION_F_AUTO, &cxlr->flags)) {
 		int i;
@@ -2067,6 +2172,8 @@ static const struct attribute_group *region_groups[] = {
 	&cxl_base_attribute_group,
 	&cxl_region_group,
 	&cxl_region_target_group,
+	&cxl_region_access0_coordinate_group,
+	&cxl_region_access1_coordinate_group,
 	NULL,
 };
@@ -2120,6 +2227,7 @@ static void unregister_region(void *_cxlr)
 	struct cxl_region_params *p = &cxlr->params;
 	int i;
 
+	unregister_memory_notifier(&cxlr->memory_notifier);
 	device_del(&cxlr->dev);
 
 	/*
@@ -2164,6 +2272,63 @@ static struct cxl_region *cxl_region_alloc(struct cxl_root_decoder *cxlrd, int i
 	return cxlr;
 }
 
+static bool cxl_region_update_coordinates(struct cxl_region *cxlr, int nid)
+{
+	int cset = 0;
+	int rc;
+
+	for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
+		if (cxlr->coord[i].read_bandwidth) {
+			rc = 0;
+			if (cxl_need_node_perf_attrs_update(nid))
+				node_set_perf_attrs(nid, &cxlr->coord[i], i);
+			else
+				rc = cxl_update_hmat_access_coordinates(nid, cxlr, i);
+
+			if (rc == 0)
+				cset++;
+		}
+	}
+
+	if (!cset)
+		return false;
+
+	rc = sysfs_update_group(&cxlr->dev.kobj, get_cxl_region_access0_group());
+	if (rc)
+		dev_dbg(&cxlr->dev, "Failed to update access0 group\n");
+
+	rc = sysfs_update_group(&cxlr->dev.kobj, get_cxl_region_access1_group());
+	if (rc)
+		dev_dbg(&cxlr->dev, "Failed to update access1 group\n");
+
+	return true;
+}
+
+static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
+					  unsigned long action, void *arg)
+{
+	struct cxl_region *cxlr = container_of(nb, struct cxl_region,
+					       memory_notifier);
+	struct cxl_region_params *p = &cxlr->params;
+	struct cxl_endpoint_decoder *cxled = p->targets[0];
+	struct cxl_decoder *cxld = &cxled->cxld;
+	struct memory_notify *mnb = arg;
+	int nid = mnb->status_change_nid;
+	int region_nid;
+
+	if (nid == NUMA_NO_NODE || action != MEM_ONLINE)
+		return NOTIFY_DONE;
+
+	region_nid = phys_to_target_node(cxld->hpa_range.start);
+	if (nid != region_nid)
+		return NOTIFY_DONE;
+
+	if (!cxl_region_update_coordinates(cxlr, nid))
+		return NOTIFY_DONE;
+
+	return NOTIFY_OK;
+}
 /**
  * devm_cxl_add_region - Adds a region to a decoder
  * @cxlrd: root decoder
@@ -2211,6 +2376,10 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
 	if (rc)
 		goto err;
 
+	cxlr->memory_notifier.notifier_call = cxl_region_perf_attrs_callback;
+	cxlr->memory_notifier.priority = CXL_CALLBACK_PRI;
+	register_memory_notifier(&cxlr->memory_notifier);
+
 	rc = devm_add_action_or_reset(port->uport_dev, unregister_region, cxlr);
 	if (rc)
 		return ERR_PTR(rc);
...
@@ -6,6 +6,7 @@
 #include <linux/libnvdimm.h>
 #include <linux/bitfield.h>
+#include <linux/notifier.h>
 #include <linux/bitops.h>
 #include <linux/log2.h>
 #include <linux/node.h>
@@ -517,6 +518,8 @@ struct cxl_region_params {
  * @cxlr_pmem: (for pmem regions) cached copy of the nvdimm bridge
  * @flags: Region state flags
  * @params: active + config params for the region
+ * @coord: QoS access coordinates for the region
+ * @memory_notifier: notifier for setting the access coordinates to node
  */
 struct cxl_region {
 	struct device dev;
@@ -527,6 +530,8 @@ struct cxl_region {
 	struct cxl_pmem_region *cxlr_pmem;
 	unsigned long flags;
 	struct cxl_region_params params;
+	struct access_coordinate coord[ACCESS_COORDINATE_MAX];
+	struct notifier_block memory_notifier;
 };
 
 struct cxl_nvdimm_bridge {
@@ -671,7 +676,7 @@ struct cxl_dport {
 	struct cxl_port *port;
 	struct cxl_regs regs;
 	struct access_coordinate sw_coord;
-	struct access_coordinate hb_coord;
+	struct access_coordinate hb_coord[ACCESS_COORDINATE_MAX];
 	long link_latency;
 };
@@ -879,9 +884,17 @@ void cxl_switch_parse_cdat(struct cxl_port *port);
 
 int cxl_endpoint_get_perf_coordinates(struct cxl_port *port,
 				      struct access_coordinate *coord);
+int cxl_hb_get_perf_coordinates(struct cxl_port *port,
+				struct access_coordinate *coord);
+void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
+				    struct cxl_endpoint_decoder *cxled);
 
 void cxl_memdev_update_perf(struct cxl_memdev *cxlmd);
 
+void cxl_coordinates_combine(struct access_coordinate *out,
+			     struct access_coordinate *c1,
+			     struct access_coordinate *c2);
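The definition of cxl_coordinates_combine() lives in core/cdat.c, outside this excerpt. A sketch consistent with the removed two-argument combine_coordinates() above (min of bandwidths when both are known, sum of latencies):

	void cxl_coordinates_combine(struct access_coordinate *out,
				     struct access_coordinate *c1,
				     struct access_coordinate *c2)
	{
		if (c1->write_bandwidth && c2->write_bandwidth)
			out->write_bandwidth = min(c1->write_bandwidth,
						   c2->write_bandwidth);
		out->write_latency = c1->write_latency + c2->write_latency;

		if (c1->read_bandwidth && c2->read_bandwidth)
			out->read_bandwidth = min(c1->read_bandwidth,
						  c2->read_bandwidth);
		out->read_latency = c1->read_latency + c2->read_latency;
	}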
 /*
  * Unit test builds overrides this to __weak, find the 'strong' version
  * of these symbols in tools/testing/cxl/.
...
@@ -71,6 +71,15 @@ enum cxl_regloc_type {
 	CXL_REGLOC_RBI_TYPES
 };
 
+/*
+ * Table Access DOE, CDAT Read Entry Response
+ *
+ * Spec refs:
+ *
+ * CXL 3.1 8.1.11, Table 8-14: Read Entry Response
+ * CDAT Specification 1.03: 2 CDAT Data Structures
+ */
+
 struct cdat_header {
 	__le32 length;
 	u8 revision;
@@ -85,6 +94,21 @@ struct cdat_entry_header {
 	__le16 length;
 } __packed;
 
+/*
+ * The DOE CDAT read response contains a CDAT read entry (either the
+ * CDAT header or a structure).
+ */
+union cdat_data {
+	struct cdat_header header;
+	struct cdat_entry_header entry;
+} __packed;
+
+/* There is an additional CDAT response header of 4 bytes. */
+struct cdat_doe_rsp {
+	__le32 doe_header;
+	u8 data[];
+} __packed;
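Roughly how these types compose when walking a DOE response; the variable names and the entry-handle check are assumptions for illustration, the actual loop is in core/pci.c, outside this excerpt:

	/* Skip the 4-byte DOE response header to reach the CDAT payload */
	struct cdat_doe_rsp *rsp = (struct cdat_doe_rsp *)buf;
	union cdat_data *data = (union cdat_data *)rsp->data;
	size_t len;

	if (entry_handle == 0)		/* first entry: the CDAT header */
		len = le32_to_cpu(data->header.length);
	else				/* subsequent entries: one structure */
		len = le16_to_cpu(data->entry.length);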
 /*
  * CXL v3.0 6.2.3 Table 6-4
  * The table indicates that if PCIe Flit Mode is set, then CXL is in 256B flits
...
@@ -1548,4 +1548,25 @@ static inline void acpi_use_parent_companion(struct device *dev)
 	ACPI_COMPANION_SET(dev, ACPI_COMPANION(dev->parent));
 }
 
+#ifdef CONFIG_ACPI_HMAT
+int hmat_update_target_coordinates(int nid, struct access_coordinate *coord,
+				   enum access_coordinate_class access);
+#else
+static inline int hmat_update_target_coordinates(int nid,
+						 struct access_coordinate *coord,
+						 enum access_coordinate_class access)
+{
+	return -EOPNOTSUPP;
+}
+#endif
+
+#ifdef CONFIG_ACPI_NUMA
+bool acpi_node_backed_by_real_pxm(int nid);
+#else
+static inline bool acpi_node_backed_by_real_pxm(int nid)
+{
+	return false;
+}
+#endif
+
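The CXL region code above (cxl_region_update_coordinates()) picks between setting fresh node perf attributes and updating HMAT-sourced ones based on this helper. cxl_need_node_perf_attrs_update() is defined outside this excerpt; in this series it reduces to a negation, roughly:

	bool cxl_need_node_perf_attrs_update(int nid)
	{
		/* nodes with no real proximity domain have no HMAT data to update */
		return !acpi_node_backed_by_real_pxm(nid);
	}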
 #endif /*_LINUX_ACPI_H*/
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * CXL protocol Error INJection support.
+ *
+ * Copyright (c) 2023 Advanced Micro Devices, Inc.
+ * All Rights Reserved.
+ *
+ * Author: Ben Cheatham <benjamin.cheatham@amd.com>
+ */
+#ifndef EINJ_CXL_H
+#define EINJ_CXL_H
+
+#include <linux/errno.h>
+#include <linux/types.h>
+
+struct pci_dev;
+struct seq_file;
+
+#if IS_ENABLED(CONFIG_ACPI_APEI_EINJ_CXL)
+int einj_cxl_available_error_type_show(struct seq_file *m, void *v);
+int einj_cxl_inject_error(struct pci_dev *dport_dev, u64 type);
+int einj_cxl_inject_rch_error(u64 rcrb, u64 type);
+bool einj_cxl_is_initialized(void);
+#else /* !IS_ENABLED(CONFIG_ACPI_APEI_EINJ_CXL) */
+static inline int einj_cxl_available_error_type_show(struct seq_file *m,
+						     void *v)
+{
+	return -ENXIO;
+}
+
+static inline int einj_cxl_inject_error(struct pci_dev *dport_dev, u64 type)
+{
+	return -ENXIO;
+}
+
+static inline int einj_cxl_inject_rch_error(u64 rcrb, u64 type)
+{
+	return -ENXIO;
+}
+
+static inline bool einj_cxl_is_initialized(void) { return false; }
+#endif /* CONFIG_ACPI_APEI_EINJ_CXL */
+
+#endif /* EINJ_CXL_H */
@@ -40,12 +40,14 @@ union acpi_subtable_headers {
 
 int acpi_parse_entries_array(char *id, unsigned long table_size,
 			     union fw_table_header *table_header,
+			     unsigned long max_length,
 			     struct acpi_subtable_proc *proc,
 			     int proc_num, unsigned int max_entries);
 
 int cdat_table_parse(enum acpi_cdat_type type,
 		     acpi_tbl_entry_handler_arg handler_arg, void *arg,
-		     struct acpi_table_cdat *table_header);
+		     struct acpi_table_cdat *table_header,
+		     unsigned long length);
 
 /* CXL is the only non-ACPI consumer of the FIRMWARE_TABLE library */
 #if IS_ENABLED(CONFIG_ACPI) && !IS_ENABLED(CONFIG_CXL_BUS)
...
@@ -123,6 +123,7 @@ struct mem_section;
 #define DEFAULT_CALLBACK_PRI	0
 #define SLAB_CALLBACK_PRI	1
 #define HMAT_CALLBACK_PRI	2
+#define CXL_CALLBACK_PRI	5
 #define MM_COMPUTE_BATCH_PRI	10
 #define CPUSET_CALLBACK_PRI	10
 #define MEMTIER_HOTPLUG_PRI	100
...
@@ -34,6 +34,18 @@ struct access_coordinate {
 	unsigned int write_latency;
 };
 
+/*
+ * ACCESS_COORDINATE_LOCAL correlates to ACCESS CLASS 0
+ *	- access_coordinate between target node and nearest initiator node
+ * ACCESS_COORDINATE_CPU correlates to ACCESS CLASS 1
+ *	- access_coordinate between target node and nearest CPU node
+ */
+enum access_coordinate_class {
+	ACCESS_COORDINATE_LOCAL,
+	ACCESS_COORDINATE_CPU,
+	ACCESS_COORDINATE_MAX
+};
+
 enum cache_indexing {
 	NODE_CACHE_DIRECT_MAP,
 	NODE_CACHE_INDEXED,
@@ -66,7 +78,7 @@ struct node_cache_attrs {
 #ifdef CONFIG_HMEM_REPORTING
 void node_add_cache(unsigned int nid, struct node_cache_attrs *cache_attrs);
 void node_set_perf_attrs(unsigned int nid, struct access_coordinate *coord,
-			 unsigned access);
+			 enum access_coordinate_class access);
 #else
 static inline void node_add_cache(unsigned int nid,
 				  struct node_cache_attrs *cache_attrs)
@@ -75,7 +87,7 @@ static inline void node_add_cache(unsigned int nid,
 static inline void node_set_perf_attrs(unsigned int nid,
 				       struct access_coordinate *coord,
-				       unsigned access)
+				       enum access_coordinate_class access)
 {
 }
 #endif
@@ -137,7 +149,7 @@ extern void unregister_memory_block_under_nodes(struct memory_block *mem_blk);
 extern int register_memory_node_under_compute_node(unsigned int mem_nid,
 						   unsigned int cpu_nid,
-						   unsigned access);
+						   enum access_coordinate_class access);
 #else
 static inline void node_dev_init(void)
 {
...
@@ -127,6 +127,7 @@ static __init_or_fwtbl_lib int call_handler(struct acpi_subtable_proc *proc,
  *
  * @id: table id (for debugging purposes)
  * @table_size: size of the root table
+ * @max_length: maximum size of the table (ignore if 0)
  * @table_header: where does the table start?
  * @proc: array of acpi_subtable_proc struct containing entry id
  *	and associated handler with it
@@ -148,18 +149,21 @@ static __init_or_fwtbl_lib int call_handler(struct acpi_subtable_proc *proc,
 int __init_or_fwtbl_lib
 acpi_parse_entries_array(char *id, unsigned long table_size,
 			 union fw_table_header *table_header,
+			 unsigned long max_length,
 			 struct acpi_subtable_proc *proc,
 			 int proc_num, unsigned int max_entries)
 {
-	unsigned long table_end, subtable_len, entry_len;
+	unsigned long table_len, table_end, subtable_len, entry_len;
 	struct acpi_subtable_entry entry;
 	enum acpi_subtable_type type;
 	int count = 0;
 	int i;
 
 	type = acpi_get_subtable_type(id);
-	table_end = (unsigned long)table_header +
-		    acpi_table_get_length(type, table_header);
+	table_len = acpi_table_get_length(type, table_header);
+	if (max_length && max_length < table_len)
+		table_len = max_length;
+	table_end = (unsigned long)table_header + table_len;
 
 	/* Parse all entries looking for a match. */
@@ -208,7 +212,8 @@ int __init_or_fwtbl_lib
 cdat_table_parse(enum acpi_cdat_type type,
 		 acpi_tbl_entry_handler_arg handler_arg,
 		 void *arg,
-		 struct acpi_table_cdat *table_header)
+		 struct acpi_table_cdat *table_header,
+		 unsigned long length)
 {
 	struct acpi_subtable_proc proc = {
 		.id = type,
@@ -222,6 +227,6 @@ cdat_table_parse(enum acpi_cdat_type type,
 	return acpi_parse_entries_array(ACPI_SIG_CDAT,
 					sizeof(struct acpi_table_cdat),
 					(union fw_table_header *)table_header,
-					&proc, 1, 0);
+					length, &proc, 1, 0);
 }
 EXPORT_SYMBOL_FWTBL_LIB(cdat_table_parse);
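A hypothetical call site with the new signature, passing the length of the buffer actually read over DOE so parsing is clamped to received bytes (handler and fields follow the CXL CDAT code's naming; a sketch, not a quote of the driver):

	rc = cdat_table_parse(ACPI_CDAT_TYPE_SSLBIS, cdat_sslbis_handler,
			      port, port->cdat.table, port->cdat.length);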