genirq/irqdomain: Provide IRQ_DOMAIN_FLAG_MSI_PARENT
The new PCI/IMS (Interrupt Message Store) functionality allows hardware
vendors to provide implementation specific storage for the MSI messages.
This can be device memory and also host/guest memory, e.g. in queue memory
which is shared with the hardware.

This requires device specific MSI interrupt domains, which cannot be
achieved by expanding the existing PCI/MSI interrupt domain concept, which
is a global interrupt domain shared by all PCI devices on a particular
(IOMMU) segment:

                                        |--- device 1
    [Vector]---[Remapping]---[PCI/MSI]--|...
                                        |--- device N

This works because the PCI/MSI[-X] space is uniform, but it falls apart
with PCI/IMS, which is implementation defined and must be available along
with PCI/MSI[-X] on the same device.

To support PCI/MSI[-X] plus PCI/IMS on the same device, the PCI/MSI
interrupt domain hierarchy concept has to be reworked in the following
way:

                             |--- [PCI/MSI] device 1
    [Vector]---[Remapping]---|...
                             |--- [PCI/MSI] device N

That allows, as a next step, the creation of multiple interrupt domains
per device:

                             |--- [PCI/MSI] device 1
                             |--- [PCI/IMS] device 1
    [Vector]---[Remapping]---|...
                             |--- [PCI/MSI] device N
                             |--- [PCI/IMS] device N

So the domain which previously created the global PCI/MSI domain must now
act as parent domain for the per device domains. The hierarchy depth is
the same as before, but the PCI/MSI domains are then device specific and
no longer global.

Provide IRQ_DOMAIN_FLAG_MSI_PARENT, which allows these parent domains to
be identified, along with helpers to query it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221124230313.690038274@linutronix.de
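Editor's note: the changelog names the flag but leaves the query helpers
unnamed. A minimal sketch of what the addition to include/linux/irqdomain.h
could look like, assuming the helper follows the existing irq_domain_is_*()
naming convention and that bit 8 is the next free flag bit (both are
assumptions, not taken from the changelog):

    enum {
            /* ... existing IRQ_DOMAIN_FLAG_* bits above ... */

            /* Irq domain is an MSI parent domain */
            IRQ_DOMAIN_FLAG_MSI_PARENT      = (1 << 8), /* bit assumed free */
    };

    /* Query whether a domain may act as parent for per device MSI domains */
    static inline bool irq_domain_is_msi_parent(struct irq_domain *domain)
    {
            return domain->flags & IRQ_DOMAIN_FLAG_MSI_PARENT;
    }

Code that creates a per device PCI/MSI or PCI/IMS domain could then check
such a helper on the prospective parent and refuse to stack a child domain
on a domain that does not advertise the flag.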