Commit 0c66a95c authored by Linus Torvalds

Merge tag 'cxl-for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull CXL (Compute Express Link) updates from Dan Williams:
 "This subsystem is still in the build-out phase as the bulk of the
  update is improvements to enumeration and fleshing out the device
  model. In terms of new features, more mailbox commands have been added
  to the allowed-list in support of persistent memory provisioning,
  targeting v5.15.

  The critical update from an enumeration perspective is support for the
  CXL Fixed Memory Window Structure that indicates to Linux which system
  physical address ranges decode to the CXL Host Bridges in the system.
  This allows the driver to detect which address ranges have been mapped
  by firmware and what address ranges are available for future hotplug.

  So, again, mostly skeleton this round, with more meat targeting v5.15.

  Summary:

   - Add support for the CXL Fixed Memory Window Structure, a recent
     extension of the ACPI CEDT (CXL Early Discovery Table)

   - Add infrastructure for component registers

   - Add HDM (Host-managed device memory) decoder definitions

   - Define a device model for an HDM decoder tree

   - Bridge CXL persistent memory capabilities to an NVDIMM bus /
     device-model

   - Switch to fine grained mapping of CXL MMIO registers to allow
     different drivers / system software to own individual register
     blocks

   - Enable media provisioning commands, and publish the label storage
     area size in sysfs

   - Miscellaneous cleanups and fixes"
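To make the CXL Fixed Memory Window Structure item above concrete, here is a hedged sketch (not part of the merge itself) of how one CEDT.CFMWS entry describes a host physical address window. The struct below is a simplified illustration of the spec-defined fields, not the ACPICA definition the driver actually consumes, and report_window() is a hypothetical helper.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Simplified, illustrative layout of a CXL Fixed Memory Window Structure
 * (CEDT.CFMWS) entry; only the fields discussed above are shown.
 */
struct cfmws_example {
	uint64_t base_hpa;		/* host physical address base of the window */
	uint64_t window_size;		/* bytes decoded by this window */
	uint8_t  interleave_ways;	/* encoded (power-of-two) number of targets */
	uint16_t restrictions;		/* e.g. volatile- vs. persistent-capable */
	uint32_t interleave_targets[];	/* host bridge (CHBS) UIDs */
};

/* Hypothetical helper: report what a firmware-described window covers. */
static void report_window(const struct cfmws_example *w)
{
	printf("CXL window: %#llx-%#llx, %u-way interleave\n",
	       (unsigned long long)w->base_hpa,
	       (unsigned long long)(w->base_hpa + w->window_size - 1),
	       1u << w->interleave_ways);
}
```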

* tag 'cxl-for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (34 commits)
  cxl/pci: Rename CXL REGLOC ID
  cxl/acpi: Use the ACPI CFMWS to create static decoder objects
  cxl/acpi: Add the Host Bridge base address to CXL port objects
  cxl/pmem: Register 'pmem' / cxl_nvdimm devices
  libnvdimm: Drop unused device power management support
  libnvdimm: Export nvdimm shutdown helper, nvdimm_delete()
  cxl/pmem: Add initial infrastructure for pmem support
  cxl/core: Add cxl-bus driver infrastructure
  cxl/pci: Add media provisioning required commands
  cxl/component_regs: Fix offset
  cxl/hdm: Fix decoder count calculation
  cxl/acpi: Introduce cxl_decoder objects
  cxl/acpi: Enumerate host bridge root ports
  cxl/acpi: Add downstream port data to cxl_port instances
  cxl/Kconfig: Default drivers to CONFIG_CXL_BUS
  cxl/acpi: Introduce the root of a cxl_port topology
  cxl/pci: Fixup devm_cxl_iomap_block() to take a 'struct device *'
  cxl/pci: Add HDM decoder capabilities
  cxl/pci: Reserve individual register block regions
  cxl/pci: Map registers based on capabilities
  ...
parents 855ff900 4ad6181e
@@ -24,3 +24,106 @@ Description:
(RO) "Persistent Only Capacity" as bytes. Represents the
identically named field in the Identify Memory Device Output
Payload in the CXL-2.0 specification.
What: /sys/bus/cxl/devices/*/devtype
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
CXL device objects export the devtype attribute which mirrors
the same value communicated in the DEVTYPE environment variable
for uevents for devices on the "cxl" bus.
What: /sys/bus/cxl/devices/portX/uport
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
CXL port objects are enumerated from either a platform firmware
device (ACPI0017 and ACPI0016) or PCIe switch upstream port with
CXL component registers. The 'uport' symlink connects the CXL
portX object to the device that published the CXL port
capability.
What: /sys/bus/cxl/devices/portX/dportY
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
CXL port objects are enumerated from either a platform firmware
device (ACPI0017 and ACPI0016) or PCIe switch upstream port with
CXL component registers. The 'dportY' symlink identifies one or
more downstream ports that the upstream port may target in its
decode of CXL memory resources. The 'Y' integer reflects the
hardware port unique-id used in the hardware decoder target
list.
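For illustration only (not part of this merge), a small userspace sketch that resolves the 'uport' link and enumerates the 'dportY' links of one port object; the path /sys/bus/cxl/devices/port1 is an assumed example name.

```c
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Example only: print the uport and dport symlink targets of one CXL port. */
int main(void)
{
	const char *port = "/sys/bus/cxl/devices/port1"; /* hypothetical port */
	char path[PATH_MAX], target[PATH_MAX];
	struct dirent *de;
	ssize_t n;
	DIR *d;

	snprintf(path, sizeof(path), "%s/uport", port);
	n = readlink(path, target, sizeof(target) - 1);
	if (n > 0) {
		target[n] = '\0';
		printf("uport -> %s\n", target);
	}

	d = opendir(port);
	if (!d)
		return 1;
	while ((de = readdir(d)) != NULL) {
		if (strncmp(de->d_name, "dport", 5) != 0)
			continue;
		snprintf(path, sizeof(path), "%s/%s", port, de->d_name);
		n = readlink(path, target, sizeof(target) - 1);
		if (n > 0) {
			target[n] = '\0';
			printf("%s -> %s\n", de->d_name, target);
		}
	}
	closedir(d);
	return 0;
}
```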
What: /sys/bus/cxl/devices/decoderX.Y
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
CXL decoder objects are enumerated from either a platform
firmware description, or a CXL HDM decoder register set in a
PCIe device (see CXL 2.0 section 8.2.5.12 CXL HDM Decoder
Capability Structure). The 'X' in decoderX.Y represents the
cxl_port container of this decoder, and 'Y' represents the
instance id of a given decoder resource.
What: /sys/bus/cxl/devices/decoderX.Y/{start,size}
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
The 'start' and 'size' attributes together convey the physical
address base and number of bytes mapped in the decoder's decode
window. For decoders of devtype "cxl_decoder_root" the address
range is fixed. For decoders of devtype "cxl_decoder_switch" the
address is bounded by the decode range of the cxl_port ancestor
of the decoder's cxl_port, and dynamically updates based on the
active memory regions in that address space.
What: /sys/bus/cxl/devices/decoderX.Y/locked
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
CXL HDM decoders have the capability to lock the configuration
until the next device reset. For decoders of devtype
"cxl_decoder_root" there is no standard facility to unlock them.
For decoders of devtype "cxl_decoder_switch" a secondary bus
reset, of the PCIe bridge that provides the bus for this
decoder's uport, unlocks / resets the decoder.
What: /sys/bus/cxl/devices/decoderX.Y/target_list
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
Display a comma separated list of the current decoder target
configuration. The list is ordered by the current configured
interleave order of the decoder's dport instances. Each entry in
the list is a dport id.
What: /sys/bus/cxl/devices/decoderX.Y/cap_{pmem,ram,type2,type3}
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
When a CXL decoder is of devtype "cxl_decoder_root", it
represents a fixed memory window identified by platform
firmware. A fixed window may only support a subset of memory
types. The 'cap_*' attributes indicate whether persistent
memory, volatile memory, accelerator memory, and / or expander
memory may be mapped behind this decoder's memory window.
What: /sys/bus/cxl/devices/decoderX.Y/target_type
Date: June, 2021
KernelVersion: v5.14
Contact: linux-cxl@vger.kernel.org
Description:
When a CXL decoder is of devtype "cxl_decoder_switch", it can
optionally decode either accelerator memory (type-2) or expander
memory (type-3). The 'target_type' attribute indicates the
current setting which may dynamically change based on what
memory regions are activated in this decode hierarchy.
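Again purely illustrative: a userspace sketch that dumps the decoder attributes described above for one decoder object. The name "decoder0.0" is an assumption, and attributes such as cap_* only exist on root decoders, so missing files are simply skipped.

```c
#include <stdio.h>

/* Print one sysfs attribute of a CXL decoder, e.g. "start" or "target_list". */
static void print_attr(const char *decoder, const char *attr)
{
	char path[256], buf[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/bus/cxl/devices/%s/%s", decoder, attr);
	f = fopen(path, "r");
	if (!f)
		return; /* attribute may not exist for this decoder type */
	if (fgets(buf, sizeof(buf), f))
		printf("%s/%s: %s", decoder, attr, buf);
	fclose(f);
}

int main(void)
{
	const char *decoder = "decoder0.0"; /* hypothetical root decoder */
	const char *attrs[] = { "start", "size", "locked", "target_list",
				"target_type", "cap_pmem", "cap_ram" };

	for (size_t i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		print_attr(decoder, attrs[i]);
	return 0;
}
```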
@@ -22,16 +22,22 @@ This section covers the driver infrastructure for a CXL memory device.
CXL Memory Device
-----------------
-.. kernel-doc:: drivers/cxl/mem.c
-   :doc: cxl mem
+.. kernel-doc:: drivers/cxl/pci.c
+   :doc: cxl pci
-.. kernel-doc:: drivers/cxl/mem.c
+.. kernel-doc:: drivers/cxl/pci.c
   :internal:
-CXL Bus
--------
+CXL Core
+--------
-.. kernel-doc:: drivers/cxl/bus.c
-   :doc: cxl bus
+.. kernel-doc:: drivers/cxl/cxl.h
+   :doc: cxl objects
+.. kernel-doc:: drivers/cxl/cxl.h
+   :internal:
+.. kernel-doc:: drivers/cxl/core.c
+   :doc: cxl core
External Interfaces
===================
@@ -15,21 +15,17 @@ if CXL_BUS
config CXL_MEM
	tristate "CXL.mem: Memory Devices"
+	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of
	  "System RAM" and/or "Persistent Memory" that is fully coherent
	  as if the memory was attached to the typical CPU memory
	  controller.

-	  Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as
-	  a module) that will attach to CXL.mem devices for
-	  configuration, provisioning, and health monitoring. This
-	  driver is required for dynamic provisioning of CXL.mem
-	  attached memory which is a prerequisite for persistent memory
-	  support. Typically volatile memory is mapped by platform
-	  firmware and included in the platform memory map, but in some
-	  cases the OS is responsible for mapping that memory. See
-	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
+	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
+	  configuration and management primarily via the mailbox interface. See
+	  Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification for more
+	  details.

	  If unsure say 'm'.
@@ -50,4 +46,33 @@ config CXL_MEM_RAW_COMMANDS
	  potential impact to memory currently in use by the kernel.
	  If developing CXL hardware or the driver say Y, otherwise say N.
config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	depends on ACPI
	default CXL_BUS
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description. See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy to map regions that represent System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.

	  If unsure say 'm'.

config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	depends on LIBNVDIMM
	default CXL_BUS
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM system
	  subsystem. Say 'y/m' to enable support for enumerating and
	  provisioning the persistent memory capacity of CXL memory expanders.

	  If unsure say 'm'.

endif
# SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_CXL_BUS) += cxl_bus.o
-obj-$(CONFIG_CXL_MEM) += cxl_mem.o
+obj-$(CONFIG_CXL_BUS) += cxl_core.o
+obj-$(CONFIG_CXL_MEM) += cxl_pci.o
+obj-$(CONFIG_CXL_ACPI) += cxl_acpi.o
+obj-$(CONFIG_CXL_PMEM) += cxl_pmem.o

ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL
-cxl_bus-y := bus.o
-cxl_mem-y := mem.o
+cxl_core-y := core.o
+cxl_pci-y := pci.o
+cxl_acpi-y := acpi.o
+cxl_pmem-y := pmem.o
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
#include <linux/device.h>
#include <linux/module.h>
/**
* DOC: cxl bus
*
* The CXL bus provides namespace for control devices and a rendezvous
* point for cross-device interleave coordination.
*/
struct bus_type cxl_bus_type = {
	.name = "cxl",
};
EXPORT_SYMBOL_GPL(cxl_bus_type);

static __init int cxl_bus_init(void)
{
	return bus_register(&cxl_bus_type);
}

static void cxl_bus_exit(void)
{
	bus_unregister(&cxl_bus_type);
}

module_init(cxl_bus_init);
module_exit(cxl_bus_exit);
MODULE_LICENSE("GPL v2");
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2020-2021 Intel Corporation. */
#ifndef __CXL_MEM_H__
#define __CXL_MEM_H__
#include <linux/cdev.h>
#include "cxl.h"
/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
#define CXLMDEV_STATUS_OFFSET 0x0
#define CXLMDEV_DEV_FATAL BIT(0)
#define CXLMDEV_FW_HALT BIT(1)
#define CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2)
#define CXLMDEV_MS_NOT_READY 0
#define CXLMDEV_MS_READY 1
#define CXLMDEV_MS_ERROR 2
#define CXLMDEV_MS_DISABLED 3
#define CXLMDEV_READY(status)                                    \
	(FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) == \
	 CXLMDEV_MS_READY)
#define CXLMDEV_MBOX_IF_READY BIT(4)
#define CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5)
#define CXLMDEV_RESET_NEEDED_NOT 0
#define CXLMDEV_RESET_NEEDED_COLD 1
#define CXLMDEV_RESET_NEEDED_WARM 2
#define CXLMDEV_RESET_NEEDED_HOT 3
#define CXLMDEV_RESET_NEEDED_CXL 4
#define CXLMDEV_RESET_NEEDED(status)                             \
	(FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) !=        \
	 CXLMDEV_RESET_NEEDED_NOT)
/*
* An entire PCI topology full of devices should be enough for any
* config
*/
#define CXL_MEM_MAX_DEVS 65536
/**
* struct cxl_memdev - CXL bus object representing a Type-3 Memory Device
* @dev: driver core device object
* @cdev: char dev core object for ioctl operations
* @cxlm: pointer to the parent device driver data
* @id: id number of this memdev instance.
*/
struct cxl_memdev {
	struct device dev;
	struct cdev cdev;
	struct cxl_mem *cxlm;
	int id;
};
/**
* struct cxl_mem - A CXL memory device
* @pdev: The PCI device associated with this CXL device.
* @cxlmd: Logical memory device chardev / interface
* @regs: Parsed register blocks
* @payload_size: Size of space for payload
* (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
* @lsa_size: Size of Label Storage Area
* (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
* @mbox_mutex: Mutex to synchronize mailbox access.
* @firmware_version: Firmware version for the memory device.
* @enabled_cmds: Hardware commands found enabled in CEL.
* @pmem_range: Persistent memory capacity information.
* @ram_range: Volatile memory capacity information.
*/
struct cxl_mem {
	struct pci_dev *pdev;
	struct cxl_memdev *cxlmd;
	struct cxl_regs regs;
	size_t payload_size;
	size_t lsa_size;
	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
	char firmware_version[0x10];
	unsigned long *enabled_cmds;
	struct range pmem_range;
	struct range ram_range;
};
#endif /* __CXL_MEM_H__ */
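As a usage note for the Memory Device Status Register macros and struct cxl_mem above, here is a minimal kernel-style sketch of how they might be consumed from a .c file that includes this header. It assumes regs.memdev points at the mapped device register block, as this series sets up elsewhere; the helper name cxl_mem_media_ready() is hypothetical, not part of the merge.

```c
#include <linux/bitfield.h>
#include <linux/io.h>

/*
 * Illustrative only: decode the CXL 2.0 8.2.8.5.1.1 Memory Device Status
 * Register using the macros from mem.h. Returns true when the media is
 * usable and no reset is pending.
 */
static bool cxl_mem_media_ready(struct cxl_mem *cxlm)
{
	u64 md_status = readq(cxlm->regs.memdev + CXLMDEV_STATUS_OFFSET);

	if (md_status & (CXLMDEV_DEV_FATAL | CXLMDEV_FW_HALT))
		return false;
	if (CXLMDEV_RESET_NEEDED(md_status))
		return false;

	return CXLMDEV_READY(md_status);
}
```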
@@ -13,7 +13,7 @@
#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98
#define PCI_DVSEC_ID_CXL 0x0
-#define PCI_DVSEC_ID_CXL_REGLOC_OFFSET 0x8
+#define PCI_DVSEC_ID_CXL_REGLOC_DVSEC_ID 0x8
#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET 0xC
/* BAR Indicator Register (BIR) */
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2021 Intel Corporation. All rights reserved. */
#include <linux/libnvdimm.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/ndctl.h>
#include <linux/async.h>
#include <linux/slab.h>
#include "mem.h"
#include "cxl.h"
/*
* Ordered workqueue for cxl nvdimm device arrival and departure
* to coordinate bus rescans when a bridge arrives and trigger remove
* operations when the bridge is removed.
*/
static struct workqueue_struct *cxl_pmem_wq;
static void unregister_nvdimm(void *nvdimm)
{
	nvdimm_delete(nvdimm);
}

static int match_nvdimm_bridge(struct device *dev, const void *data)
{
	return strcmp(dev_name(dev), "nvdimm-bridge") == 0;
}

static struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(void)
{
	struct device *dev;

	dev = bus_find_device(&cxl_bus_type, NULL, NULL, match_nvdimm_bridge);
	if (!dev)
		return NULL;
	return to_cxl_nvdimm_bridge(dev);
}

static int cxl_nvdimm_probe(struct device *dev)
{
	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
	struct cxl_nvdimm_bridge *cxl_nvb;
	unsigned long flags = 0;
	struct nvdimm *nvdimm;
	int rc = -ENXIO;

	cxl_nvb = cxl_find_nvdimm_bridge();
	if (!cxl_nvb)
		return -ENXIO;

	device_lock(&cxl_nvb->dev);
	if (!cxl_nvb->nvdimm_bus)
		goto out;

	set_bit(NDD_LABELING, &flags);
	nvdimm = nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd, NULL, flags, 0, 0,
			       NULL);
	if (!nvdimm)
		goto out;

	rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
out:
	device_unlock(&cxl_nvb->dev);
	put_device(&cxl_nvb->dev);

	return rc;
}

static struct cxl_driver cxl_nvdimm_driver = {
	.name = "cxl_nvdimm",
	.probe = cxl_nvdimm_probe,
	.id = CXL_DEVICE_NVDIMM,
};

static int cxl_pmem_ctl(struct nvdimm_bus_descriptor *nd_desc,
			struct nvdimm *nvdimm, unsigned int cmd, void *buf,
			unsigned int buf_len, int *cmd_rc)
{
	return -ENOTTY;
}

static bool online_nvdimm_bus(struct cxl_nvdimm_bridge *cxl_nvb)
{
	if (cxl_nvb->nvdimm_bus)
		return true;
	cxl_nvb->nvdimm_bus =
		nvdimm_bus_register(&cxl_nvb->dev, &cxl_nvb->nd_desc);
	return cxl_nvb->nvdimm_bus != NULL;
}

static int cxl_nvdimm_release_driver(struct device *dev, void *data)
{
	if (!is_cxl_nvdimm(dev))
		return 0;
	device_release_driver(dev);
	return 0;
}

static void offline_nvdimm_bus(struct nvdimm_bus *nvdimm_bus)
{
	if (!nvdimm_bus)
		return;

	/*
	 * Set the state of cxl_nvdimm devices to unbound / idle before
	 * nvdimm_bus_unregister() rips the nvdimm objects out from
	 * underneath them.
	 */
	bus_for_each_dev(&cxl_bus_type, NULL, NULL, cxl_nvdimm_release_driver);
	nvdimm_bus_unregister(nvdimm_bus);
}

static void cxl_nvb_update_state(struct work_struct *work)
{
	struct cxl_nvdimm_bridge *cxl_nvb =
		container_of(work, typeof(*cxl_nvb), state_work);
	struct nvdimm_bus *victim_bus = NULL;
	bool release = false, rescan = false;

	device_lock(&cxl_nvb->dev);
	switch (cxl_nvb->state) {
	case CXL_NVB_ONLINE:
		if (!online_nvdimm_bus(cxl_nvb)) {
			dev_err(&cxl_nvb->dev,
				"failed to establish nvdimm bus\n");
			release = true;
		} else
			rescan = true;
		break;
	case CXL_NVB_OFFLINE:
	case CXL_NVB_DEAD:
		victim_bus = cxl_nvb->nvdimm_bus;
		cxl_nvb->nvdimm_bus = NULL;
		break;
	default:
		break;
	}
	device_unlock(&cxl_nvb->dev);

	if (release)
		device_release_driver(&cxl_nvb->dev);
	if (rescan) {
		int rc = bus_rescan_devices(&cxl_bus_type);

		dev_dbg(&cxl_nvb->dev, "rescan: %d\n", rc);
	}
	offline_nvdimm_bus(victim_bus);

	put_device(&cxl_nvb->dev);
}

static void cxl_nvdimm_bridge_remove(struct device *dev)
{
	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);

	if (cxl_nvb->state == CXL_NVB_ONLINE)
		cxl_nvb->state = CXL_NVB_OFFLINE;
	if (queue_work(cxl_pmem_wq, &cxl_nvb->state_work))
		get_device(&cxl_nvb->dev);
}

static int cxl_nvdimm_bridge_probe(struct device *dev)
{
	struct cxl_nvdimm_bridge *cxl_nvb = to_cxl_nvdimm_bridge(dev);

	if (cxl_nvb->state == CXL_NVB_DEAD)
		return -ENXIO;

	if (cxl_nvb->state == CXL_NVB_NEW) {
		cxl_nvb->nd_desc = (struct nvdimm_bus_descriptor) {
			.provider_name = "CXL",
			.module = THIS_MODULE,
			.ndctl = cxl_pmem_ctl,
		};

		INIT_WORK(&cxl_nvb->state_work, cxl_nvb_update_state);
	}

	cxl_nvb->state = CXL_NVB_ONLINE;
	if (queue_work(cxl_pmem_wq, &cxl_nvb->state_work))
		get_device(&cxl_nvb->dev);

	return 0;
}

static struct cxl_driver cxl_nvdimm_bridge_driver = {
	.name = "cxl_nvdimm_bridge",
	.probe = cxl_nvdimm_bridge_probe,
	.remove = cxl_nvdimm_bridge_remove,
	.id = CXL_DEVICE_NVDIMM_BRIDGE,
};

static __init int cxl_pmem_init(void)
{
	int rc;

	cxl_pmem_wq = alloc_ordered_workqueue("cxl_pmem", 0);
	if (!cxl_pmem_wq)
		return -ENXIO;

	rc = cxl_driver_register(&cxl_nvdimm_bridge_driver);
	if (rc)
		goto err_bridge;

	rc = cxl_driver_register(&cxl_nvdimm_driver);
	if (rc)
		goto err_nvdimm;

	return 0;

err_nvdimm:
	cxl_driver_unregister(&cxl_nvdimm_bridge_driver);
err_bridge:
	destroy_workqueue(cxl_pmem_wq);
	return rc;
}

static __exit void cxl_pmem_exit(void)
{
	cxl_driver_unregister(&cxl_nvdimm_driver);
	cxl_driver_unregister(&cxl_nvdimm_bridge_driver);
	destroy_workqueue(cxl_pmem_wq);
}
MODULE_LICENSE("GPL v2");
module_init(cxl_pmem_init);
module_exit(cxl_pmem_exit);
MODULE_IMPORT_NS(CXL);
MODULE_ALIAS_CXL(CXL_DEVICE_NVDIMM_BRIDGE);
MODULE_ALIAS_CXL(CXL_DEVICE_NVDIMM);
@@ -363,8 +363,13 @@ struct nvdimm_bus *nvdimm_bus_register(struct device *parent,
	nvdimm_bus->dev.groups = nd_desc->attr_groups;
	nvdimm_bus->dev.bus = &nvdimm_bus_type;
	nvdimm_bus->dev.of_node = nd_desc->of_node;
-	dev_set_name(&nvdimm_bus->dev, "ndbus%d", nvdimm_bus->id);
-	rc = device_register(&nvdimm_bus->dev);
+	device_initialize(&nvdimm_bus->dev);
+	device_set_pm_not_required(&nvdimm_bus->dev);
+	rc = dev_set_name(&nvdimm_bus->dev, "ndbus%d", nvdimm_bus->id);
+	if (rc)
+		goto err;
+
+	rc = device_add(&nvdimm_bus->dev);
	if (rc) {
		dev_dbg(&nvdimm_bus->dev, "registration failed: %d\n", rc);
		goto err;
@@ -396,21 +401,10 @@ static int child_unregister(struct device *dev, void *data)
	if (dev->class)
		return 0;
-	if (is_nvdimm(dev)) {
-		struct nvdimm *nvdimm = to_nvdimm(dev);
-		bool dev_put = false;
-
-		/* We are shutting down. Make state frozen artificially. */
-		nvdimm_bus_lock(dev);
-		set_bit(NVDIMM_SECURITY_FROZEN, &nvdimm->sec.flags);
-		if (test_and_clear_bit(NDD_WORK_PENDING, &nvdimm->flags))
-			dev_put = true;
-		nvdimm_bus_unlock(dev);
-		cancel_delayed_work_sync(&nvdimm->dwork);
-		if (dev_put)
-			put_device(dev);
-	}
-	nd_device_unregister(dev, ND_SYNC);
+	if (is_nvdimm(dev))
+		nvdimm_delete(to_nvdimm(dev));
+	else
+		nd_device_unregister(dev, ND_SYNC);
	return 0;
}
@@ -536,6 +530,7 @@ void __nd_device_register(struct device *dev)
		set_dev_node(dev, to_nd_region(dev)->numa_node);
	dev->bus = &nvdimm_bus_type;
+	device_set_pm_not_required(dev);
	if (dev->parent) {
		get_device(dev->parent);
		if (dev_to_node(dev) == NUMA_NO_NODE)
@@ -728,18 +723,41 @@ const struct attribute_group nd_numa_attribute_group = {
	.is_visible = nd_numa_attr_visible,
};
+static void ndctl_release(struct device *dev)
+{
+	kfree(dev);
+}
+
int nvdimm_bus_create_ndctl(struct nvdimm_bus *nvdimm_bus)
{
	dev_t devt = MKDEV(nvdimm_bus_major, nvdimm_bus->id);
	struct device *dev;
+	int rc;
-	dev = device_create(nd_class, &nvdimm_bus->dev, devt, nvdimm_bus,
-			"ndctl%d", nvdimm_bus->id);
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
+	device_initialize(dev);
+	device_set_pm_not_required(dev);
+	dev->class = nd_class;
+	dev->parent = &nvdimm_bus->dev;
+	dev->devt = devt;
+	dev->release = ndctl_release;
+	rc = dev_set_name(dev, "ndctl%d", nvdimm_bus->id);
+	if (rc)
+		goto err;
-	if (IS_ERR(dev))
-		dev_dbg(&nvdimm_bus->dev, "failed to register ndctl%d: %ld\n",
-				nvdimm_bus->id, PTR_ERR(dev));
-	return PTR_ERR_OR_ZERO(dev);
+	rc = device_add(dev);
+	if (rc) {
+		dev_dbg(&nvdimm_bus->dev, "failed to register ndctl%d: %d\n",
+			nvdimm_bus->id, rc);
+		goto err;
+	}
+	return 0;
+
+err:
+	put_device(dev);
+	return rc;
}

void nvdimm_bus_destroy_ndctl(struct nvdimm_bus *nvdimm_bus)
@@ -642,6 +642,24 @@ struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus,
}
EXPORT_SYMBOL_GPL(__nvdimm_create);
+void nvdimm_delete(struct nvdimm *nvdimm)
+{
+	struct device *dev = &nvdimm->dev;
+	bool dev_put = false;
+
+	/* We are shutting down. Make state frozen artificially. */
+	nvdimm_bus_lock(dev);
+	set_bit(NVDIMM_SECURITY_FROZEN, &nvdimm->sec.flags);
+	if (test_and_clear_bit(NDD_WORK_PENDING, &nvdimm->flags))
+		dev_put = true;
+	nvdimm_bus_unlock(dev);
+	cancel_delayed_work_sync(&nvdimm->dwork);
+	if (dev_put)
+		put_device(dev);
+	nd_device_unregister(dev, ND_SYNC);
+}
+EXPORT_SYMBOL_GPL(nvdimm_delete);
+
static void shutdown_security_notify(void *data)
{
	struct nvdimm *nvdimm = data;
@@ -278,6 +278,7 @@ static inline struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus,
	return __nvdimm_create(nvdimm_bus, provider_data, groups, flags,
			cmd_mask, num_flush, flush_wpq, NULL, NULL, NULL);
}
+void nvdimm_delete(struct nvdimm *nvdimm);
const struct nd_cmd_desc *nd_cmd_dimm_desc(int cmd);
const struct nd_cmd_desc *nd_cmd_bus_desc(int cmd);
@@ -29,6 +29,18 @@
___C(GET_LSA, "Get Label Storage Area"), \
___C(GET_HEALTH_INFO, "Get Health Info"), \
___C(GET_LOG, "Get Log"), \
___C(SET_PARTITION_INFO, "Set Partition Information"), \
___C(SET_LSA, "Set Label Storage Area"), \
___C(GET_ALERT_CONFIG, "Get Alert Configuration"), \
___C(SET_ALERT_CONFIG, "Set Alert Configuration"), \
___C(GET_SHUTDOWN_STATE, "Get Shutdown State"), \
___C(SET_SHUTDOWN_STATE, "Set Shutdown State"), \
___C(GET_POISON, "Get Poison List"), \
___C(INJECT_POISON, "Inject Poison"), \
___C(CLEAR_POISON, "Clear Poison"), \
___C(GET_SCAN_MEDIA_CAPS, "Get Scan Media Capabilities"), \
___C(SCAN_MEDIA, "Scan Media"), \
___C(GET_SCAN_MEDIA, "Get Scan Media Results"), \
___C(MAX, "invalid / last command")
#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
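To show how the expanded command list is visible to userspace, here is a hedged example of the CXL_MEM_QUERY_COMMANDS ioctl on a memdev character device. The /dev/cxl/mem0 node name is an assumption, and the struct member names follow my reading of include/uapi/linux/cxl_mem.h, so verify them against the installed header before relying on them.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/types.h>
#include <linux/cxl_mem.h>	/* CXL_MEM_QUERY_COMMANDS, struct cxl_mem_query_commands */

int main(void)
{
	struct cxl_mem_query_commands probe = { .n_commands = 0 };
	struct cxl_mem_query_commands *query;
	int fd = open("/dev/cxl/mem0", O_RDONLY); /* assumed device node */

	if (fd < 0)
		return 1;

	/* With n_commands == 0 the kernel reports how many entries exist. */
	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, &probe) < 0)
		return 1;

	query = calloc(1, sizeof(*query) +
			  probe.n_commands * sizeof(query->commands[0]));
	if (!query)
		return 1;
	query->n_commands = probe.n_commands;

	/* Second call fills in one cxl_command_info per allowed command. */
	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, query) == 0) {
		for (__u32 i = 0; i < query->n_commands; i++)
			printf("id %u: size_in %d, size_out %d\n",
			       query->commands[i].id,
			       query->commands[i].size_in,
			       query->commands[i].size_out);
	}

	free(query);
	close(fd);
	return 0;
}
```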