Commit bc976233 authored by Thomas Gleixner

genirq/msi, x86/vector: Prevent reservation mode for non maskable MSI

The new reservation mode for interrupts assigns a dummy vector when the
interrupt is allocated and assigns a real vector when the interrupt is
requested. The reservation mode prevents vector pressure when devices with
a large number of queues/interrupts are initialized, but only a minimal
subset of those queues/interrupts is actually used.

This mode has an issue with MSI interrupts which cannot be masked. If the
driver is not careful, or the hardware emits an interrupt before the device
irq is requested by the driver, then the interrupt ends up on the dummy
vector as a spurious interrupt, which can cause a malfunction of the device
or, in the worst case, a lockup of the machine.

Change the logic for the reservation mode so that the early activation of
MSI interrupts checks whether:

 - the device is a PCI/MSI device
 - the reservation mode of the underlying irqdomain is activated
 - PCI/MSI masking is globally enabled
 - the PCI/MSI device uses either MSI-X, which supports masking, or
   MSI with the maskbit supported.

If one of those conditions is false, then clear the reservation mode flag
in the irq data of the interrupt and invoke irq_domain_activate_irq() with
the reserve argument cleared. In the x86 vector code, clear the can_reserve
flag in the vector allocation data so a subsequent free_irq() won't create
the same situation again. The interrupt stays assigned to a real vector
until pci_disable_msi() is invoked and all allocations are undone.
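
The checks above reduce to a single predicate. Below is a minimal,
self-contained userspace sketch of that decision logic; the struct and
function names are invented for illustration only, and the authoritative
implementation is msi_check_reservation_mode() in the diff further down.

  #include <stdbool.h>
  #include <stdio.h>

  /* Illustrative model of the relevant device/domain properties. */
  struct msi_device_model {
          bool is_pci_msi;       /* sits on a PCI/MSI irqdomain */
          bool must_reactivate;  /* irqdomain uses reservation mode */
          bool masking_disabled; /* global "ignore mask" override is set */
          bool is_msix;          /* MSI-X entries are always maskable */
          bool has_maskbit;      /* plain MSI with a per-vector mask bit */
  };

  /* Reservation mode is only safe when all of the conditions hold. */
  static bool can_use_reservation_mode(const struct msi_device_model *d)
  {
          if (!d->is_pci_msi)
                  return false;
          if (!d->must_reactivate)
                  return false;
          if (d->masking_disabled)
                  return false;
          return d->is_msix || d->has_maskbit;
  }

  int main(void)
  {
          /* Plain MSI without a mask bit: fall back to a real vector. */
          struct msi_device_model dev = {
                  .is_pci_msi = true, .must_reactivate = true,
          };

          printf("reservation mode: %s\n",
                 can_use_reservation_mode(&dev) ? "allowed" : "disabled");
          return 0;
  }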

Fixes: 4900be83 ("x86/vector/msi: Switch to global reservation mode")
Reported-by: Alexandru Chirvasitu <achirvasub@gmail.com>
Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Alexandru Chirvasitu <achirvasub@gmail.com>
Tested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Mikael Pettersson <mikpelinux@gmail.com>
Cc: Josh Poulson <jopoulso@microsoft.com>
Cc: Mihai Costache <v-micos@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-pci@vger.kernel.org
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Simon Xiao <sixiao@microsoft.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Jork Loeser <Jork.Loeser@microsoft.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: devel@linuxdriverproject.org
Cc: KY Srinivasan <kys@microsoft.com>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Sakari Ailus <sakari.ailus@intel.com>
Cc: linux-media@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1712291406420.1899@nanos
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1712291409460.1899@nanos
parent 702cb0a0
@@ -369,8 +369,18 @@ static int activate_reserved(struct irq_data *irqd)
 	int ret;
 
 	ret = assign_irq_vector_any_locked(irqd);
-	if (!ret)
+	if (!ret) {
 		apicd->has_reserved = false;
+		/*
+		 * Core might have disabled reservation mode after
+		 * allocating the irq descriptor. Ideally this should
+		 * happen before allocation time, but that would require
+		 * completely convoluted ways of transporting that
+		 * information.
+		 */
+		if (!irqd_can_reserve(irqd))
+			apicd->can_reserve = false;
+	}
 	return ret;
 }
@@ -339,11 +339,38 @@ int msi_domain_populate_irqs(struct irq_domain *domain, struct device *dev,
 	return ret;
 }
 
-static bool msi_check_reservation_mode(struct msi_domain_info *info)
+/*
+ * Carefully check whether the device can use reservation mode. If
+ * reservation mode is enabled then the early activation will assign a
+ * dummy vector to the device. If the PCI/MSI device does not support
+ * masking of the entry then this can result in spurious interrupts when
+ * the device driver is not absolutely careful. But even then a malfunction
+ * of the hardware could result in a spurious interrupt on the dummy vector
+ * and render the device unusable. If the entry can be masked then the core
+ * logic will prevent the spurious interrupt and reservation mode can be
+ * used. For now reservation mode is restricted to PCI/MSI.
+ */
+static bool msi_check_reservation_mode(struct irq_domain *domain,
+				       struct msi_domain_info *info,
+				       struct device *dev)
 {
+	struct msi_desc *desc;
+
+	if (domain->bus_token != DOMAIN_BUS_PCI_MSI)
+		return false;
+
 	if (!(info->flags & MSI_FLAG_MUST_REACTIVATE))
 		return false;
-	return true;
+
+	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_ignore_mask)
+		return false;
+
+	/*
+	 * Checking the first MSI descriptor is sufficient. MSIX supports
+	 * masking and MSI does so when the maskbit is set.
+	 */
+	desc = first_msi_entry(dev);
+	return desc->msi_attrib.is_msix || desc->msi_attrib.maskbit;
 }
 
 /**
@@ -394,7 +421,7 @@ int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
 	if (ops->msi_finish)
 		ops->msi_finish(&arg, 0);
 
-	can_reserve = msi_check_reservation_mode(info);
+	can_reserve = msi_check_reservation_mode(domain, info, dev);
 
 	for_each_msi_entry(desc, dev) {
 		virq = desc->irq;
@@ -412,7 +439,9 @@ int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
 			continue;
 
 		irq_data = irq_domain_get_irq_data(domain, desc->irq);
-		ret = irq_domain_activate_irq(irq_data, true);
+		if (!can_reserve)
+			irqd_clr_can_reserve(irq_data);
+		ret = irq_domain_activate_irq(irq_data, can_reserve);
 		if (ret)
 			goto cleanup;
 	}