Commit d652ea30 authored by Linus Torvalds

Merge tag 'iommu-updates-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:

 - ARM SMMU and Mediatek updates from Will Deacon:
     - Support for MT8192 IOMMU from Mediatek
     - Arm v7s io-pgtable extensions for MT8192
     - Removal of TLBI_ON_MAP quirk
     - New Qualcomm compatible strings
     - Allow SVA without hardware broadcast TLB maintenance on SMMUv3
     - Virtualization Host Extension support for SMMUv3 (SVA)
     - Allow SMMUv3 PMU perf driver to be built independently from IOMMU

 - Some tidy-up in IOVA and core code

 - Conversion of the AMD IOMMU code to use the generic IO-page-table
   framework

 - Intel VT-d updates from Lu Baolu:
     - Audit capability consistency among different IOMMUs
     - Add SATC reporting structure support
     - Add iotlb_sync_map callback support

 - SDHI support for Renesas IOMMU driver

 - Misc cleanups and other small improvements

* tag 'iommu-updates-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (94 commits)
  iommu/amd: Fix performance counter initialization
  MAINTAINERS: repair file pattern in MEDIATEK IOMMU DRIVER
  iommu/mediatek: Fix error code in probe()
  iommu/mediatek: Fix unsigned domid comparison with less than zero
  iommu/vt-d: Parse SATC reporting structure
  iommu/vt-d: Add new enum value and structure for SATC
  iommu/vt-d: Add iotlb_sync_map callback
  iommu/vt-d: Move capability check code to cap_audit files
  iommu/vt-d: Audit IOMMU Capabilities and add helper functions
  iommu/vt-d: Fix 'physical' typos
  iommu: Properly pass gfp_t in _iommu_map() to avoid atomic sleeping
  iommu/vt-d: Fix compile error [-Werror=implicit-function-declaration]
  driver/perf: Remove ARM_SMMU_V3_PMU dependency on ARM_SMMU_V3
  MAINTAINERS: Add entry for MediaTek IOMMU
  iommu/mediatek: Add mt8192 support
  iommu/mediatek: Remove unnecessary check in attach_device
  iommu/mediatek: Support master use iova over 32bit
  iommu/mediatek: Add iova reserved function
  iommu/mediatek: Support for multi domains
  iommu/mediatek: Add get_domain_id from dev->dma_range_map
  ...
parents 3672ac8a 45e606f2
@@ -34,9 +34,11 @@ properties:
         items:
           - enum:
               - qcom,sc7180-smmu-500
+              - qcom,sc8180x-smmu-500
               - qcom,sdm845-smmu-500
               - qcom,sm8150-smmu-500
               - qcom,sm8250-smmu-500
+              - qcom,sm8350-smmu-500
           - const: arm,mmu-500
   - description: Qcom Adreno GPUs implementing "arm,smmu-v2"
     items:
......
* Mediatek IOMMU Architecture Implementation

Some MediaTek SOCs contain a Multimedia Memory Management Unit (M4U), and
this M4U has two generations of HW architecture. Generation one uses a flat
pagetable and only supports 4K page mappings. Generation two uses the
ARM Short-Descriptor translation table format for address translation.

The M4U hardware block diagram is shown below:
EMI (External Memory Interface)
|
m4u (Multimedia Memory Management Unit)
|
+--------+
| |
gals0-rx gals1-rx (Global Async Local Sync rx)
| |
| |
gals0-tx gals1-tx (Global Async Local Sync tx)
| | Some SoCs may have GALS.
+--------+
|
SMI Common(Smart Multimedia Interface Common)
|
+----------------+-------
| |
| gals-rx There may be GALS in some larbs.
| |
| |
| gals-tx
| |
SMI larb0 SMI larb1 ... SoCs have several SMI local arbiter(larb).
(display) (vdec)
| |
| |
+-----+-----+ +----+----+
| | | | | |
| | |... | | | ... There are different ports in each larb.
| | | | | |
OVL0 RDMA0 WDMA0 MC PP VLD
As shown above, the multimedia HW goes through SMI and M4U when it
accesses EMI. SMI is a bridge between the M4U and the multimedia HW; it
contains the SMI local arbiters and SMI common. SMI controls whether the
multimedia HW goes through the M4U for translation or bypasses it and talks
directly to EMI, and it also helps control the power domain and clocks for
each local arbiter.

Normally we assign a local arbiter (larb) to each multimedia HW block,
like display, video decode and camera, and each larb has several ports.
For example, the video decode larb has ports such as MC, PP and VLD, each
corresponding to a block of the video HW.

In some SoCs there may be a GALS (Global Async Local Sync) module between
smi-common and the M4U, and additional GALS modules between the smi-larbs
and smi-common. A GALS can be seen as an asynchronous FIFO that helps
synchronize modules running at different clock frequencies.
Required properties:
- compatible : must be one of the following strings:
"mediatek,mt2701-m4u" for mt2701 which uses generation one m4u HW.
"mediatek,mt2712-m4u" for mt2712 which uses generation two m4u HW.
"mediatek,mt6779-m4u" for mt6779 which uses generation two m4u HW.
"mediatek,mt7623-m4u", "mediatek,mt2701-m4u" for mt7623 which uses
generation one m4u HW.
"mediatek,mt8167-m4u" for mt8167 which uses generation two m4u HW.
"mediatek,mt8173-m4u" for mt8173 which uses generation two m4u HW.
"mediatek,mt8183-m4u" for mt8183 which uses generation two m4u HW.
- reg : m4u register base and size.
- interrupts : the interrupt of m4u.
- clocks : must contain one entry for each clock-names.
- clock-names : Only 1 optional clock:
	- "bclk": the block clock of m4u.
	SoCs that require this "bclk":
	- mt2701, mt2712, mt7623 and mt8173.
	If there is no "bclk", the m4u uses the EMI clock, which is always
	enabled before the kernel starts.
- mediatek,larbs : List of phandles to the local arbiters in the current SoC.
	Refer to bindings/memory-controllers/mediatek,smi-larb.txt. They must be
	sorted by local arbiter index, i.e. larb0, larb1, larb2...
- #iommu-cells : must be 1. This is the mtk_m4u_id according to the HW.
	Specifies the mtk_m4u_id as defined in
	dt-binding/memory/mt2701-larb-port.h for mt2701 and mt7623,
	dt-binding/memory/mt2712-larb-port.h for mt2712,
	dt-binding/memory/mt6779-larb-port.h for mt6779,
	dt-binding/memory/mt8167-larb-port.h for mt8167,
	dt-binding/memory/mt8173-larb-port.h for mt8173, and
	dt-binding/memory/mt8183-larb-port.h for mt8183.
Example:
iommu: iommu@10205000 {
compatible = "mediatek,mt8173-m4u";
reg = <0 0x10205000 0 0x1000>;
interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
clocks = <&infracfg CLK_INFRA_M4U>;
clock-names = "bclk";
mediatek,larbs = <&larb0 &larb1 &larb2 &larb3 &larb4 &larb5>;
#iommu-cells = <1>;
};
Example for a client device:
display {
compatible = "mediatek,mt8173-disp";
iommus = <&iommu M4U_PORT_DISP_OVL0>,
<&iommu M4U_PORT_DISP_RDMA0>;
...
};
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/iommu/mediatek,iommu.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MediaTek IOMMU Architecture Implementation
maintainers:
- Yong Wu <yong.wu@mediatek.com>
description: |+
Some MediaTek SOCs contain a Multimedia Memory Management Unit (M4U), and
this M4U has two generations of HW architecture. Generation one uses a flat
pagetable and only supports 4K page mappings. Generation two uses the
ARM Short-Descriptor translation table format for address translation.
The M4U hardware block diagram is shown below:
EMI (External Memory Interface)
|
m4u (Multimedia Memory Management Unit)
|
+--------+
| |
gals0-rx gals1-rx (Global Async Local Sync rx)
| |
| |
gals0-tx gals1-tx (Global Async Local Sync tx)
| | Some SoCs may have GALS.
+--------+
|
SMI Common(Smart Multimedia Interface Common)
|
+----------------+-------
| |
| gals-rx There may be GALS in some larbs.
| |
| |
| gals-tx
| |
SMI larb0 SMI larb1 ... SoCs have several SMI local arbiter(larb).
(display) (vdec)
| |
| |
+-----+-----+ +----+----+
| | | | | |
| | |... | | | ... There are different ports in each larb.
| | | | | |
OVL0 RDMA0 WDMA0 MC PP VLD
As shown above, the multimedia HW goes through SMI and M4U when it
accesses EMI. SMI is a bridge between the M4U and the multimedia HW; it
contains the SMI local arbiters and SMI common. SMI controls whether the
multimedia HW goes through the M4U for translation or bypasses it and talks
directly to EMI, and it also helps control the power domain and clocks for
each local arbiter.
Normally we assign a local arbiter (larb) to each multimedia HW block,
like display, video decode and camera, and each larb has several ports.
For example, the video decode larb has ports such as MC, PP and VLD, each
corresponding to a block of the video HW.
In some SoCs there may be a GALS (Global Async Local Sync) module between
smi-common and the M4U, and additional GALS modules between the smi-larbs
and smi-common. A GALS can be seen as an asynchronous FIFO that helps
synchronize modules running at different clock frequencies.
properties:
compatible:
oneOf:
- enum:
- mediatek,mt2701-m4u # generation one
- mediatek,mt2712-m4u # generation two
- mediatek,mt6779-m4u # generation two
- mediatek,mt8167-m4u # generation two
- mediatek,mt8173-m4u # generation two
- mediatek,mt8183-m4u # generation two
- mediatek,mt8192-m4u # generation two
- description: mt7623 generation one
items:
- const: mediatek,mt7623-m4u
- const: mediatek,mt2701-m4u
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
items:
- description: bclk is the block clock.
clock-names:
items:
- const: bclk
mediatek,larbs:
$ref: /schemas/types.yaml#/definitions/phandle-array
minItems: 1
maxItems: 32
description: |
List of phandles to the local arbiters in the current SoC.
Refer to bindings/memory-controllers/mediatek,smi-larb.yaml. They must be
sorted by local arbiter index, i.e. larb0, larb1, larb2...
'#iommu-cells':
const: 1
description: |
This is the mtk_m4u_id according to the HW. Specifies the mtk_m4u_id as
defined in
dt-binding/memory/mt2701-larb-port.h for mt2701 and mt7623,
dt-binding/memory/mt2712-larb-port.h for mt2712,
dt-binding/memory/mt6779-larb-port.h for mt6779,
dt-binding/memory/mt8167-larb-port.h for mt8167,
dt-binding/memory/mt8173-larb-port.h for mt8173,
dt-binding/memory/mt8183-larb-port.h for mt8183,
dt-binding/memory/mt8192-larb-port.h for mt8192.
power-domains:
maxItems: 1
required:
- compatible
- reg
- interrupts
- mediatek,larbs
- '#iommu-cells'
allOf:
- if:
properties:
compatible:
contains:
enum:
- mediatek,mt2701-m4u
- mediatek,mt2712-m4u
- mediatek,mt8173-m4u
- mediatek,mt8192-m4u
then:
required:
- clocks
- if:
properties:
compatible:
enum:
- mediatek,mt8192-m4u
then:
required:
- power-domains
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/mt8173-clk.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
iommu: iommu@10205000 {
compatible = "mediatek,mt8173-m4u";
reg = <0x10205000 0x1000>;
interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
clocks = <&infracfg CLK_INFRA_M4U>;
clock-names = "bclk";
mediatek,larbs = <&larb0 &larb1 &larb2
&larb3 &larb4 &larb5>;
#iommu-cells = <1>;
};
- |
#include <dt-bindings/memory/mt8173-larb-port.h>
/* Example for a client device */
display {
compatible = "mediatek,mt8173-disp";
iommus = <&iommu M4U_PORT_DISP_OVL0>,
<&iommu M4U_PORT_DISP_RDMA0>;
};
@@ -11175,6 +11175,15 @@ S: Maintained
 F: Documentation/devicetree/bindings/i2c/i2c-mt65xx.txt
 F: drivers/i2c/busses/i2c-mt65xx.c
 
+MEDIATEK IOMMU DRIVER
+M: Yong Wu <yong.wu@mediatek.com>
+L: iommu@lists.linux-foundation.org
+L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
+S: Supported
+F: Documentation/devicetree/bindings/iommu/mediatek*
+F: drivers/iommu/mtk_iommu*
+F: include/dt-bindings/memory/mt*-port.h
+
 MEDIATEK JPEG DRIVER
 M: Rick Chang <rick.chang@mediatek.com>
 M: Bin Liu <bin.liu@mediatek.com>
......
@@ -10,6 +10,7 @@ config AMD_IOMMU
 	select IOMMU_API
 	select IOMMU_IOVA
 	select IOMMU_DMA
+	select IOMMU_IO_PGTABLE
 	depends on X86_64 && PCI && ACPI && HAVE_CMPXCHG_DOUBLE
 	help
 	  With this option you can enable support for AMD IOMMU hardware in
......
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_AMD_IOMMU) += iommu.o init.o quirks.o
+obj-$(CONFIG_AMD_IOMMU) += iommu.o init.o quirks.o io_pgtable.o
 obj-$(CONFIG_AMD_IOMMU_DEBUGFS) += debugfs.o
 obj-$(CONFIG_AMD_IOMMU_V2) += iommu_v2.o
@@ -36,6 +36,7 @@ extern void amd_iommu_disable(void);
 extern int amd_iommu_reenable(int);
 extern int amd_iommu_enable_faulting(void);
 extern int amd_iommu_guest_ir;
+extern enum io_pgtable_fmt amd_iommu_pgtable;
 
 /* IOMMUv2 specific functions */
 struct iommu_domain;
@@ -56,6 +57,10 @@ extern void amd_iommu_domain_direct_map(struct iommu_domain *dom);
 extern int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids);
 extern int amd_iommu_flush_page(struct iommu_domain *dom, u32 pasid,
 				u64 address);
+extern void amd_iommu_update_and_flush_device_table(struct protection_domain *domain);
+extern void amd_iommu_domain_update(struct protection_domain *domain);
+extern void amd_iommu_domain_flush_complete(struct protection_domain *domain);
+extern void amd_iommu_domain_flush_tlb_pde(struct protection_domain *domain);
 extern int amd_iommu_flush_tlb(struct iommu_domain *dom, u32 pasid);
 extern int amd_iommu_domain_set_gcr3(struct iommu_domain *dom, u32 pasid,
 				     unsigned long cr3);
@@ -99,6 +104,21 @@ static inline void *iommu_phys_to_virt(unsigned long paddr)
 	return phys_to_virt(__sme_clr(paddr));
 }
 
+static inline
+void amd_iommu_domain_set_pt_root(struct protection_domain *domain, u64 root)
+{
+	atomic64_set(&domain->iop.pt_root, root);
+	domain->iop.root = (u64 *)(root & PAGE_MASK);
+	domain->iop.mode = root & 7; /* lowest 3 bits encode pgtable mode */
+}
+
+static inline
+void amd_iommu_domain_clr_pt_root(struct protection_domain *domain)
+{
+	amd_iommu_domain_set_pt_root(domain, 0);
+}
+
 extern bool translation_pre_enabled(struct amd_iommu *iommu);
 extern bool amd_iommu_is_attach_deferred(struct iommu_domain *domain,
 					 struct device *dev);
@@ -111,4 +131,6 @@ void amd_iommu_apply_ivrs_quirks(void);
 static inline void amd_iommu_apply_ivrs_quirks(void) { }
 #endif
 
+extern void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
+					 u64 *root, int mode);
 #endif
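The two helpers added above pack the page-table root pointer and the paging mode into a single u64 so readers can snapshot both with one atomic64 read: the root is page aligned, so its low 12 bits are free, and the lowest 3 carry the mode. A minimal decode sketch mirroring that encoding (the helper name here is hypothetical, not part of this series):

static inline void example_read_pt_root(struct protection_domain *domain,
					u64 **root, int *mode)
{
	u64 pt_root = atomic64_read(&domain->iop.pt_root);

	*root = (u64 *)(pt_root & PAGE_MASK);	/* page-aligned table address */
	*mode = pt_root & 7;			/* paging mode from the low 3 bits */
}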
@@ -15,6 +15,7 @@
 #include <linux/spinlock.h>
 #include <linux/pci.h>
 #include <linux/irqreturn.h>
+#include <linux/io-pgtable.h>
 
 /*
  * Maximum number of IOMMUs supported
@@ -252,6 +253,19 @@
 #define GA_GUEST_NR		0x1
 
+#define IOMMU_IN_ADDR_BIT_SIZE  52
+#define IOMMU_OUT_ADDR_BIT_SIZE 52
+
+/*
+ * This bitmap is used to advertise the page sizes our hardware support
+ * to the IOMMU core, which will then use this information to split
+ * physically contiguous memory regions it is mapping into page sizes
+ * that we support.
+ *
+ * 512GB Pages are not supported due to a hardware bug
+ */
+#define AMD_IOMMU_PGSIZES	((~0xFFFUL) & ~(2ULL << 38))
+
 /* Bit value definition for dte irq remapping fields*/
 #define DTE_IRQ_PHYS_ADDR_MASK	(((1ULL << 45)-1) << 6)
 #define DTE_IRQ_REMAP_INTCTL_MASK	(0x3ULL << 60)
@@ -470,6 +484,27 @@ struct amd_irte_ops;
 #define AMD_IOMMU_FLAG_TRANS_PRE_ENABLED	(1 << 0)
 
+#define io_pgtable_to_data(x) \
+	container_of((x), struct amd_io_pgtable, iop)
+
+#define io_pgtable_ops_to_data(x) \
+	io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
+
+#define io_pgtable_ops_to_domain(x) \
+	container_of(io_pgtable_ops_to_data(x), \
+		     struct protection_domain, iop)
+
+#define io_pgtable_cfg_to_data(x) \
+	container_of((x), struct amd_io_pgtable, pgtbl_cfg)
+
+struct amd_io_pgtable {
+	struct io_pgtable_cfg	pgtbl_cfg;
+	struct io_pgtable	iop;
+	int			mode;
+	u64			*root;
+	atomic64_t		pt_root;	/* pgtable root and pgtable mode */
+};
+
 /*
  * This structure contains generic data for  IOMMU protection domains
  * independent of their use.
@@ -478,9 +513,9 @@ struct protection_domain {
 	struct list_head dev_list; /* List of all devices in this domain */
 	struct iommu_domain domain; /* generic domain handle used by
 				       iommu core code */
+	struct amd_io_pgtable iop;
 	spinlock_t lock;	/* mostly used to lock the page table*/
 	u16 id;			/* the domain id written to the device table */
-	atomic64_t pt_root;	/* pgtable root and pgtable mode */
 	int glx;		/* Number of levels for GCR3 table */
 	u64 *gcr3_tbl;		/* Guest CR3 table */
 	unsigned long flags;	/* flags to find out type of domain */
@@ -488,12 +523,6 @@ struct protection_domain {
 	unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
 };
 
-/* For decocded pt_root */
-struct domain_pgtable {
-	int mode;
-	u64 *root;
-};
-
 /*
  * Structure where we save information about one hardware AMD IOMMU in the
  * system.
......
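In the AMD_IOMMU_PGSIZES bitmap added above, bit n set means a page size of 2^n bytes is advertised to the IOMMU core. A standalone userspace sketch (assuming 64-bit longs) that enumerates what (~0xFFFUL) & ~(2ULL << 38) encodes:

#include <stdio.h>

int main(void)
{
	unsigned long long pgsizes = (~0xFFFULL) & ~(2ULL << 38);
	int n;

	/* Prints every power of two from 2^12 (4K) upward except
	 * 2^39 (512GB), matching the comment in the hunk above. */
	for (n = 0; n < 64; n++)
		if (pgsizes & (1ULL << n))
			printf("2^%d bytes\n", n);
	return 0;
}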
@@ -12,6 +12,7 @@
 #include <linux/acpi.h>
 #include <linux/list.h>
 #include <linux/bitmap.h>
+#include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/syscore_ops.h>
 #include <linux/interrupt.h>
@@ -147,6 +148,8 @@ struct ivmd_header {
 bool amd_iommu_dump;
 bool amd_iommu_irq_remap __read_mostly;
 
+enum io_pgtable_fmt amd_iommu_pgtable = AMD_IOMMU_V1;
+
 int amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_VAPIC;
 static int amd_iommu_xt_mode = IRQ_REMAP_XAPIC_MODE;
@@ -254,6 +257,8 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
 static int amd_iommu_enable_interrupts(void);
 static int __init iommu_go_to_state(enum iommu_init_state state);
 static void init_device_table_dma(void);
+static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+				u8 fxn, u64 *value, bool is_write);
 
 static bool amd_iommu_pre_enabled = true;
@@ -1712,13 +1717,11 @@ static int __init init_iommu_all(struct acpi_table_header *table)
 	return 0;
 }
 
-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
-				u8 fxn, u64 *value, bool is_write);
-
-static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
 {
+	int retry;
 	struct pci_dev *pdev = iommu->dev;
-	u64 val = 0xabcd, val2 = 0, save_reg = 0;
+	u64 val = 0xabcd, val2 = 0, save_reg, save_src;
 
 	if (!iommu_feature(iommu, FEATURE_PC))
 		return;
@@ -1726,17 +1729,39 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
 	amd_iommu_pc_present = true;
 
 	/* save the value to restore, if writable */
-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false))
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
+	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
 		goto pc_false;
 
-	/* Check if the performance counters can be written to */
-	if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
-	    (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
-	    (val != val2))
+	/*
+	 * Disable power gating by programing the performance counter
+	 * source to 20 (i.e. counts the reads and writes from/to IOMMU
+	 * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
+	 * which never get incremented during this init phase.
+	 * (Note: The event is also deprecated.)
+	 */
+	val = 20;
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
 		goto pc_false;
 
+	/* Check if the performance counters can be written to */
+	val = 0xabcd;
+	for (retry = 5; retry; retry--) {
+		if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
+		    iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
+		    val2)
+			break;
+
+		/* Wait about 20 msec for power gating to disable and retry. */
+		msleep(20);
+	}
+
 	/* restore */
-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true))
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
+	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
+		goto pc_false;
+
+	if (val != val2)
 		goto pc_false;
 
 	pci_info(pdev, "IOMMU performance counters supported\n");
@@ -1928,7 +1953,7 @@ static void print_iommu_info(void)
 		struct pci_dev *pdev = iommu->dev;
 		int i;
 
-		pci_info(pdev, "Found IOMMU cap 0x%hx\n", iommu->cap_ptr);
+		pci_info(pdev, "Found IOMMU cap 0x%x\n", iommu->cap_ptr);
 
 		if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
 			pci_info(pdev, "Extended features (%#llx):",
@@ -1956,7 +1981,7 @@ static void print_iommu_info(void)
 static int __init amd_iommu_init_pci(void)
 {
 	struct amd_iommu *iommu;
-	int ret = 0;
+	int ret;
 
 	for_each_iommu(iommu) {
 		ret = iommu_init_pci(iommu);
@@ -2687,8 +2712,8 @@ static void __init ivinfo_init(void *ivrs)
 static int __init early_amd_iommu_init(void)
 {
 	struct acpi_table_header *ivrs_base;
+	int i, remap_cache_sz, ret;
 	acpi_status status;
-	int i, remap_cache_sz, ret = 0;
 	u32 pci_id;
 
 	if (!amd_iommu_detected)
@@ -2832,7 +2857,6 @@ static int __init early_amd_iommu_init(void)
 out:
 	/* Don't leak any ACPI memory */
 	acpi_put_table(ivrs_base);
-	ivrs_base = NULL;
 
 	return ret;
 }
......
@@ -77,7 +77,7 @@ struct fault {
 };
 
 static LIST_HEAD(state_list);
-static spinlock_t state_lock;
+static DEFINE_SPINLOCK(state_lock);
 
 static struct workqueue_struct *iommu_wq;
@@ -938,8 +938,6 @@ static int __init amd_iommu_v2_init(void)
 		return 0;
 	}
 
-	spin_lock_init(&state_lock);
-
 	ret = -ENOMEM;
 	iommu_wq = alloc_workqueue("amd_iommu_v2", WQ_MEM_RECLAIM, 0);
 	if (iommu_wq == NULL)
......
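The iommu_v2.c hunk swaps a runtime spin_lock_init() call for a compile-time initializer, removing both a line of init code and the window in which the lock exists uninitialized. The general idiom, as a sketch with illustrative names:

/* Compile-time form: valid from boot, no init call needed. */
static DEFINE_SPINLOCK(example_lock);

/* Runtime form: must not be taken before example_init() runs. */
static spinlock_t example_dynamic_lock;

static int __init example_init(void)
{
	spin_lock_init(&example_dynamic_lock);
	return 0;
}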
@@ -182,9 +182,13 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 					 unsigned long start, unsigned long end)
 {
 	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
+	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	size_t size = end - start + 1;
 
-	arm_smmu_atc_inv_domain(smmu_mn->domain, mm->pasid, start,
-				end - start + 1);
+	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
+		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
+					    PAGE_SIZE, false, smmu_domain);
+	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
 }
 
 static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
@@ -391,7 +395,7 @@ bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 	unsigned long reg, fld;
 	unsigned long oas;
 	unsigned long asid_bits;
-	u32 feat_mask = ARM_SMMU_FEAT_BTM | ARM_SMMU_FEAT_COHERENCY;
+	u32 feat_mask = ARM_SMMU_FEAT_COHERENCY;
 
 	if (vabits_actual == 52)
 		feat_mask |= ARM_SMMU_FEAT_VAX;
......
@@ -139,15 +139,15 @@
 #define ARM_SMMU_CMDQ_CONS		0x9c
 
 #define ARM_SMMU_EVTQ_BASE		0xa0
-#define ARM_SMMU_EVTQ_PROD		0x100a8
-#define ARM_SMMU_EVTQ_CONS		0x100ac
+#define ARM_SMMU_EVTQ_PROD		0xa8
+#define ARM_SMMU_EVTQ_CONS		0xac
 #define ARM_SMMU_EVTQ_IRQ_CFG0		0xb0
 #define ARM_SMMU_EVTQ_IRQ_CFG1		0xb8
 #define ARM_SMMU_EVTQ_IRQ_CFG2		0xbc
 
 #define ARM_SMMU_PRIQ_BASE		0xc0
-#define ARM_SMMU_PRIQ_PROD		0x100c8
-#define ARM_SMMU_PRIQ_CONS		0x100cc
+#define ARM_SMMU_PRIQ_PROD		0xc8
+#define ARM_SMMU_PRIQ_CONS		0xcc
 #define ARM_SMMU_PRIQ_IRQ_CFG0		0xd0
 #define ARM_SMMU_PRIQ_IRQ_CFG1		0xd8
 #define ARM_SMMU_PRIQ_IRQ_CFG2		0xdc
@@ -430,6 +430,8 @@ struct arm_smmu_cmdq_ent {
 		#define CMDQ_OP_TLBI_NH_ASID	0x11
 		#define CMDQ_OP_TLBI_NH_VA	0x12
 		#define CMDQ_OP_TLBI_EL2_ALL	0x20
+		#define CMDQ_OP_TLBI_EL2_ASID	0x21
+		#define CMDQ_OP_TLBI_EL2_VA	0x22
 		#define CMDQ_OP_TLBI_S12_VMALL	0x28
 		#define CMDQ_OP_TLBI_S2_IPA	0x2a
 		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
@@ -604,6 +606,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
 #define ARM_SMMU_FEAT_BTM		(1 << 16)
 #define ARM_SMMU_FEAT_SVA		(1 << 17)
+#define ARM_SMMU_FEAT_E2H		(1 << 18)
 	u32				features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
@@ -694,6 +697,9 @@ extern struct arm_smmu_ctx_desc quiet_cd;
 int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
 			    struct arm_smmu_ctx_desc *cd);
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
+void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
+				 size_t granule, bool leaf,
+				 struct arm_smmu_domain *smmu_domain);
 bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd);
 int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
 			    unsigned long iova, size_t size);
......
@@ -166,6 +166,7 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
 	{ .compatible = "qcom,mdss" },
 	{ .compatible = "qcom,sc7180-mdss" },
 	{ .compatible = "qcom,sc7180-mss-pil" },
+	{ .compatible = "qcom,sc8180x-mdss" },
 	{ .compatible = "qcom,sdm845-mdss" },
 	{ .compatible = "qcom,sdm845-mss-pil" },
 	{ }
@@ -206,6 +207,8 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
 		smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i));
 
 		if (FIELD_GET(ARM_SMMU_SMR_VALID, smr)) {
+			/* Ignore valid bit for SMR mask extraction. */
+			smr &= ~ARM_SMMU_SMR_VALID;
 			smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr);
 			smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr);
 			smmu->smrs[i].valid = true;
@@ -327,10 +330,12 @@ static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu,
 static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
 	{ .compatible = "qcom,msm8998-smmu-v2" },
 	{ .compatible = "qcom,sc7180-smmu-500" },
+	{ .compatible = "qcom,sc8180x-smmu-500" },
 	{ .compatible = "qcom,sdm630-smmu-v2" },
 	{ .compatible = "qcom,sdm845-smmu-500" },
 	{ .compatible = "qcom,sm8150-smmu-500" },
 	{ .compatible = "qcom,sm8250-smmu-500" },
+	{ .compatible = "qcom,sm8350-smmu-500" },
 	{ }
 };
......
@@ -51,6 +51,8 @@ struct iommu_dma_cookie {
 	struct iommu_domain		*fq_domain;
 };
 
+static DEFINE_STATIC_KEY_FALSE(iommu_deferred_attach_enabled);
+
 void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
 		struct iommu_domain *domain)
 {
@@ -378,21 +380,6 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	return iova_reserve_iommu_regions(dev, domain);
 }
 
-static int iommu_dma_deferred_attach(struct device *dev,
-		struct iommu_domain *domain)
-{
-	const struct iommu_ops *ops = domain->ops;
-
-	if (!is_kdump_kernel())
-		return 0;
-
-	if (unlikely(ops->is_attach_deferred &&
-			ops->is_attach_deferred(domain, dev)))
-		return iommu_attach_device(domain, dev);
-
-	return 0;
-}
-
 /**
  * dma_info_to_prot - Translate DMA API directions and attributes to IOMMU API
  *                    page flags.
@@ -535,7 +522,8 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 	size_t iova_off = iova_offset(iovad, phys);
 	dma_addr_t iova;
 
-	if (unlikely(iommu_dma_deferred_attach(dev, domain)))
+	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
+	    iommu_deferred_attach(dev, domain))
 		return DMA_MAPPING_ERROR;
 
 	size = iova_align(iovad, size + iova_off);
@@ -693,7 +681,8 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 
 	*dma_handle = DMA_MAPPING_ERROR;
 
-	if (unlikely(iommu_dma_deferred_attach(dev, domain)))
+	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
+	    iommu_deferred_attach(dev, domain))
 		return NULL;
 
 	min_size = alloc_sizes & -alloc_sizes;
@@ -976,7 +965,8 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	unsigned long mask = dma_get_seg_boundary(dev);
 	int i;
 
-	if (unlikely(iommu_dma_deferred_attach(dev, domain)))
+	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
+	    iommu_deferred_attach(dev, domain))
 		return 0;
 
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
@@ -1424,6 +1414,9 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc,
 
 static int iommu_dma_init(void)
 {
+	if (is_kdump_kernel())
+		static_branch_enable(&iommu_deferred_attach_enabled);
+
 	return iova_cache_get();
 }
 arch_initcall(iommu_dma_init);
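The deferred-attach rework above trades a per-call is_kdump_kernel() check for a static key flipped once at boot, so ordinary kernels pay only a patched-out branch on the hot DMA-mapping paths. The pattern reduced to a sketch (names hypothetical):

static DEFINE_STATIC_KEY_FALSE(example_enabled);

static int example_slow_path(struct device *dev);

static int __init example_init(void)
{
	if (is_kdump_kernel())
		static_branch_enable(&example_enabled);	/* patches all branch sites */
	return 0;
}

static int example_hot_path(struct device *dev)
{
	/* Compiles to a NOP unless the key was enabled at boot. */
	if (static_branch_unlikely(&example_enabled))
		return example_slow_path(dev);
	return 0;
}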
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMAR_TABLE) += dmar.o
 obj-$(CONFIG_INTEL_IOMMU) += iommu.o pasid.o
-obj-$(CONFIG_INTEL_IOMMU) += trace.o
+obj-$(CONFIG_DMAR_TABLE) += trace.o cap_audit.o
 obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) += debugfs.o
 obj-$(CONFIG_INTEL_IOMMU_SVM) += svm.o
 obj-$(CONFIG_IRQ_REMAP) += irq_remapping.o
// SPDX-License-Identifier: GPL-2.0
/*
* cap_audit.c - audit iommu capabilities for boot time and hot plug
*
* Copyright (C) 2021 Intel Corporation
*
* Author: Kyung Min Park <kyung.min.park@intel.com>
* Lu Baolu <baolu.lu@linux.intel.com>
*/
#define pr_fmt(fmt) "DMAR: " fmt
#include <linux/intel-iommu.h>
#include "cap_audit.h"
static u64 intel_iommu_cap_sanity;
static u64 intel_iommu_ecap_sanity;
static inline void check_irq_capabilities(struct intel_iommu *a,
struct intel_iommu *b)
{
CHECK_FEATURE_MISMATCH(a, b, cap, pi_support, CAP_PI_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, eim_support, ECAP_EIM_MASK);
}
static inline void check_dmar_capabilities(struct intel_iommu *a,
struct intel_iommu *b)
{
MINIMAL_FEATURE_IOMMU(b, cap, CAP_MAMV_MASK);
MINIMAL_FEATURE_IOMMU(b, cap, CAP_NFR_MASK);
MINIMAL_FEATURE_IOMMU(b, cap, CAP_SLLPS_MASK);
MINIMAL_FEATURE_IOMMU(b, cap, CAP_FRO_MASK);
MINIMAL_FEATURE_IOMMU(b, cap, CAP_MGAW_MASK);
MINIMAL_FEATURE_IOMMU(b, cap, CAP_SAGAW_MASK);
MINIMAL_FEATURE_IOMMU(b, cap, CAP_NDOMS_MASK);
MINIMAL_FEATURE_IOMMU(b, ecap, ECAP_PSS_MASK);
MINIMAL_FEATURE_IOMMU(b, ecap, ECAP_MHMV_MASK);
MINIMAL_FEATURE_IOMMU(b, ecap, ECAP_IRO_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, 5lp_support, CAP_FL5LP_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, fl1gp_support, CAP_FL1GP_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, read_drain, CAP_RD_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, write_drain, CAP_WD_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, pgsel_inv, CAP_PSI_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, zlr, CAP_ZLR_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, caching_mode, CAP_CM_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, phmr, CAP_PHMR_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, plmr, CAP_PLMR_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, rwbf, CAP_RWBF_MASK);
CHECK_FEATURE_MISMATCH(a, b, cap, afl, CAP_AFL_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, rps, ECAP_RPS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, smpwc, ECAP_SMPWC_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, flts, ECAP_FLTS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, slts, ECAP_SLTS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, nwfs, ECAP_NWFS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, slads, ECAP_SLADS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, vcs, ECAP_VCS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, smts, ECAP_SMTS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, pds, ECAP_PDS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, dit, ECAP_DIT_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, pasid, ECAP_PASID_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, eafs, ECAP_EAFS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, srs, ECAP_SRS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, ers, ECAP_ERS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, prs, ECAP_PRS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, nest, ECAP_NEST_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, mts, ECAP_MTS_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, sc_support, ECAP_SC_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, pass_through, ECAP_PT_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, dev_iotlb_support, ECAP_DT_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, qis, ECAP_QI_MASK);
CHECK_FEATURE_MISMATCH(a, b, ecap, coherent, ECAP_C_MASK);
}
static int cap_audit_hotplug(struct intel_iommu *iommu, enum cap_audit_type type)
{
bool mismatch = false;
u64 old_cap = intel_iommu_cap_sanity;
u64 old_ecap = intel_iommu_ecap_sanity;
if (type == CAP_AUDIT_HOTPLUG_IRQR) {
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, pi_support, CAP_PI_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, eim_support, ECAP_EIM_MASK);
goto out;
}
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, 5lp_support, CAP_FL5LP_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, fl1gp_support, CAP_FL1GP_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, read_drain, CAP_RD_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, write_drain, CAP_WD_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, pgsel_inv, CAP_PSI_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, zlr, CAP_ZLR_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, caching_mode, CAP_CM_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, phmr, CAP_PHMR_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, plmr, CAP_PLMR_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, rwbf, CAP_RWBF_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, cap, afl, CAP_AFL_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, rps, ECAP_RPS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, smpwc, ECAP_SMPWC_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, flts, ECAP_FLTS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, slts, ECAP_SLTS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, nwfs, ECAP_NWFS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, slads, ECAP_SLADS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, vcs, ECAP_VCS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, smts, ECAP_SMTS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, pds, ECAP_PDS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, dit, ECAP_DIT_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, pasid, ECAP_PASID_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, eafs, ECAP_EAFS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, srs, ECAP_SRS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, ers, ECAP_ERS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, prs, ECAP_PRS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, nest, ECAP_NEST_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, mts, ECAP_MTS_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, sc_support, ECAP_SC_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, pass_through, ECAP_PT_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, dev_iotlb_support, ECAP_DT_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, qis, ECAP_QI_MASK);
CHECK_FEATURE_MISMATCH_HOTPLUG(iommu, ecap, coherent, ECAP_C_MASK);
/* Abort hot plug if the hot plug iommu feature is smaller than global */
MINIMAL_FEATURE_HOTPLUG(iommu, cap, max_amask_val, CAP_MAMV_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, cap, num_fault_regs, CAP_NFR_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, cap, super_page_val, CAP_SLLPS_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, cap, fault_reg_offset, CAP_FRO_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, cap, mgaw, CAP_MGAW_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, cap, sagaw, CAP_SAGAW_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, cap, ndoms, CAP_NDOMS_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, ecap, pss, ECAP_PSS_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, ecap, max_handle_mask, ECAP_MHMV_MASK, mismatch);
MINIMAL_FEATURE_HOTPLUG(iommu, ecap, iotlb_offset, ECAP_IRO_MASK, mismatch);
out:
if (mismatch) {
intel_iommu_cap_sanity = old_cap;
intel_iommu_ecap_sanity = old_ecap;
return -EFAULT;
}
return 0;
}
static int cap_audit_static(struct intel_iommu *iommu, enum cap_audit_type type)
{
struct dmar_drhd_unit *d;
struct intel_iommu *i;
rcu_read_lock();
if (list_empty(&dmar_drhd_units))
goto out;
for_each_active_iommu(i, d) {
if (!iommu) {
intel_iommu_ecap_sanity = i->ecap;
intel_iommu_cap_sanity = i->cap;
iommu = i;
continue;
}
if (type == CAP_AUDIT_STATIC_DMAR)
check_dmar_capabilities(iommu, i);
else
check_irq_capabilities(iommu, i);
}
out:
rcu_read_unlock();
return 0;
}
int intel_cap_audit(enum cap_audit_type type, struct intel_iommu *iommu)
{
switch (type) {
case CAP_AUDIT_STATIC_DMAR:
case CAP_AUDIT_STATIC_IRQR:
return cap_audit_static(iommu, type);
case CAP_AUDIT_HOTPLUG_DMAR:
case CAP_AUDIT_HOTPLUG_IRQR:
return cap_audit_hotplug(iommu, type);
default:
break;
}
return -EFAULT;
}
bool intel_cap_smts_sanity(void)
{
return ecap_smts(intel_iommu_ecap_sanity);
}
bool intel_cap_pasid_sanity(void)
{
return ecap_pasid(intel_iommu_ecap_sanity);
}
bool intel_cap_nest_sanity(void)
{
return ecap_nest(intel_iommu_ecap_sanity);
}
bool intel_cap_flts_sanity(void)
{
return ecap_flts(intel_iommu_ecap_sanity);
}
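A sketch of how a boot path is expected to consume this audit, based on the helpers declared in cap_audit.h below and the irq_remapping.c call sites further down (the error handling and message here are illustrative, not from this series):

	if (intel_cap_audit(CAP_AUDIT_STATIC_DMAR, NULL))
		return -ENODEV;

	/* From here on, gate features on the audited consensus, not on
	 * any single IOMMU's raw capability registers. */
	if (!scalable_mode_support())
		pr_info("Scalable mode disabled: not supported by all IOMMUs\n");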
/* SPDX-License-Identifier: GPL-2.0 */
/*
* cap_audit.h - audit iommu capabilities header
*
* Copyright (C) 2021 Intel Corporation
*
* Author: Kyung Min Park <kyung.min.park@intel.com>
*/
/*
* Capability Register Mask
*/
#define CAP_FL5LP_MASK BIT_ULL(60)
#define CAP_PI_MASK BIT_ULL(59)
#define CAP_FL1GP_MASK BIT_ULL(56)
#define CAP_RD_MASK BIT_ULL(55)
#define CAP_WD_MASK BIT_ULL(54)
#define CAP_MAMV_MASK GENMASK_ULL(53, 48)
#define CAP_NFR_MASK GENMASK_ULL(47, 40)
#define CAP_PSI_MASK BIT_ULL(39)
#define CAP_SLLPS_MASK GENMASK_ULL(37, 34)
#define CAP_FRO_MASK GENMASK_ULL(33, 24)
#define CAP_ZLR_MASK BIT_ULL(22)
#define CAP_MGAW_MASK GENMASK_ULL(21, 16)
#define CAP_SAGAW_MASK GENMASK_ULL(12, 8)
#define CAP_CM_MASK BIT_ULL(7)
#define CAP_PHMR_MASK BIT_ULL(6)
#define CAP_PLMR_MASK BIT_ULL(5)
#define CAP_RWBF_MASK BIT_ULL(4)
#define CAP_AFL_MASK BIT_ULL(3)
#define CAP_NDOMS_MASK GENMASK_ULL(2, 0)
/*
* Extended Capability Register Mask
*/
#define ECAP_RPS_MASK BIT_ULL(49)
#define ECAP_SMPWC_MASK BIT_ULL(48)
#define ECAP_FLTS_MASK BIT_ULL(47)
#define ECAP_SLTS_MASK BIT_ULL(46)
#define ECAP_SLADS_MASK BIT_ULL(45)
#define ECAP_VCS_MASK BIT_ULL(44)
#define ECAP_SMTS_MASK BIT_ULL(43)
#define ECAP_PDS_MASK BIT_ULL(42)
#define ECAP_DIT_MASK BIT_ULL(41)
#define ECAP_PASID_MASK BIT_ULL(40)
#define ECAP_PSS_MASK GENMASK_ULL(39, 35)
#define ECAP_EAFS_MASK BIT_ULL(34)
#define ECAP_NWFS_MASK BIT_ULL(33)
#define ECAP_SRS_MASK BIT_ULL(31)
#define ECAP_ERS_MASK BIT_ULL(30)
#define ECAP_PRS_MASK BIT_ULL(29)
#define ECAP_NEST_MASK BIT_ULL(26)
#define ECAP_MTS_MASK BIT_ULL(25)
#define ECAP_MHMV_MASK GENMASK_ULL(23, 20)
#define ECAP_IRO_MASK GENMASK_ULL(17, 8)
#define ECAP_SC_MASK BIT_ULL(7)
#define ECAP_PT_MASK BIT_ULL(6)
#define ECAP_EIM_MASK BIT_ULL(4)
#define ECAP_DT_MASK BIT_ULL(2)
#define ECAP_QI_MASK BIT_ULL(1)
#define ECAP_C_MASK BIT_ULL(0)
/*
* u64 intel_iommu_cap_sanity, intel_iommu_ecap_sanity will be adjusted as each
* IOMMU gets audited.
*/
#define DO_CHECK_FEATURE_MISMATCH(a, b, cap, feature, MASK) \
do { \
if (cap##_##feature(a) != cap##_##feature(b)) { \
intel_iommu_##cap##_sanity &= ~(MASK); \
pr_info("IOMMU feature %s inconsistent", #feature); \
} \
} while (0)
#define CHECK_FEATURE_MISMATCH(a, b, cap, feature, MASK) \
DO_CHECK_FEATURE_MISMATCH((a)->cap, (b)->cap, cap, feature, MASK)
#define CHECK_FEATURE_MISMATCH_HOTPLUG(b, cap, feature, MASK) \
do { \
if (cap##_##feature(intel_iommu_##cap##_sanity)) \
DO_CHECK_FEATURE_MISMATCH(intel_iommu_##cap##_sanity, \
(b)->cap, cap, feature, MASK); \
} while (0)
#define MINIMAL_FEATURE_IOMMU(iommu, cap, MASK) \
do { \
u64 min_feature = intel_iommu_##cap##_sanity & (MASK); \
min_feature = min_t(u64, min_feature, (iommu)->cap & (MASK)); \
intel_iommu_##cap##_sanity = (intel_iommu_##cap##_sanity & ~(MASK)) | \
min_feature; \
} while (0)
#define MINIMAL_FEATURE_HOTPLUG(iommu, cap, feature, MASK, mismatch) \
do { \
if ((intel_iommu_##cap##_sanity & (MASK)) > \
(cap##_##feature((iommu)->cap))) \
mismatch = true; \
else \
(iommu)->cap = ((iommu)->cap & ~(MASK)) | \
(intel_iommu_##cap##_sanity & (MASK)); \
} while (0)
enum cap_audit_type {
CAP_AUDIT_STATIC_DMAR,
CAP_AUDIT_STATIC_IRQR,
CAP_AUDIT_HOTPLUG_DMAR,
CAP_AUDIT_HOTPLUG_IRQR,
};
bool intel_cap_smts_sanity(void);
bool intel_cap_pasid_sanity(void);
bool intel_cap_nest_sanity(void);
bool intel_cap_flts_sanity(void);
static inline bool scalable_mode_support(void)
{
return (intel_iommu_sm && intel_cap_smts_sanity());
}
static inline bool pasid_mode_support(void)
{
return scalable_mode_support() && intel_cap_pasid_sanity();
}
static inline bool nested_mode_support(void)
{
return scalable_mode_support() && intel_cap_nest_sanity();
}
int intel_cap_audit(enum cap_audit_type type, struct intel_iommu *iommu);
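Token pasting does the heavy lifting in these macros: cap##_##feature selects an existing accessor such as ecap_smts(), and intel_iommu_##cap##_sanity selects the matching global. Expanding one invocation from cap_audit.c by hand, CHECK_FEATURE_MISMATCH(a, b, ecap, smts, ECAP_SMTS_MASK) becomes roughly:

do {
	if (ecap_smts((a)->ecap) != ecap_smts((b)->ecap)) {
		intel_iommu_ecap_sanity &= ~(ECAP_SMTS_MASK);
		pr_info("IOMMU feature %s inconsistent", "smts");
	}
} while (0);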
@@ -31,6 +31,7 @@
 #include <linux/limits.h>
 #include <asm/irq_remapping.h>
 #include <asm/iommu_table.h>
+#include <trace/events/intel_iommu.h>
 
 #include "../irq_remapping.h"
 
@@ -525,6 +526,7 @@ dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
 	struct acpi_dmar_reserved_memory *rmrr;
 	struct acpi_dmar_atsr *atsr;
 	struct acpi_dmar_rhsa *rhsa;
+	struct acpi_dmar_satc *satc;
 
 	switch (header->type) {
 	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
@@ -554,6 +556,10 @@ dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
 		/* We don't print this here because we need to sanity-check
 		   it first. So print it in dmar_parse_one_andd() instead. */
 		break;
+	case ACPI_DMAR_TYPE_SATC:
+		satc = container_of(header, struct acpi_dmar_satc, header);
+		pr_info("SATC flags: 0x%x\n", satc->flags);
+		break;
 	}
 }
 
@@ -641,6 +647,7 @@ parse_dmar_table(void)
 		.cb[ACPI_DMAR_TYPE_ROOT_ATS] = &dmar_parse_one_atsr,
 		.cb[ACPI_DMAR_TYPE_HARDWARE_AFFINITY] = &dmar_parse_one_rhsa,
 		.cb[ACPI_DMAR_TYPE_NAMESPACE] = &dmar_parse_one_andd,
+		.cb[ACPI_DMAR_TYPE_SATC] = &dmar_parse_one_satc,
 	};
 
 	/*
@@ -1307,6 +1314,8 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
 		offset = ((index + i) % QI_LENGTH) << shift;
 		memcpy(qi->desc + offset, &desc[i], 1 << shift);
 		qi->desc_status[(index + i) % QI_LENGTH] = QI_IN_USE;
+		trace_qi_submit(iommu, desc[i].qw0, desc[i].qw1,
+				desc[i].qw2, desc[i].qw3);
 	}
 
 	qi->desc_status[wait_index] = QI_IN_USE;
@@ -2074,6 +2083,7 @@ static guid_t dmar_hp_guid =
 #define	DMAR_DSM_FUNC_DRHD		1
 #define	DMAR_DSM_FUNC_ATSR		2
 #define	DMAR_DSM_FUNC_RHSA		3
+#define	DMAR_DSM_FUNC_SATC		4
 
 static inline bool dmar_detect_dsm(acpi_handle handle, int func)
 {
@@ -2091,6 +2101,7 @@ static int dmar_walk_dsm_resource(acpi_handle handle, int func,
 		[DMAR_DSM_FUNC_DRHD] = ACPI_DMAR_TYPE_HARDWARE_UNIT,
 		[DMAR_DSM_FUNC_ATSR] = ACPI_DMAR_TYPE_ROOT_ATS,
 		[DMAR_DSM_FUNC_RHSA] = ACPI_DMAR_TYPE_HARDWARE_AFFINITY,
+		[DMAR_DSM_FUNC_SATC] = ACPI_DMAR_TYPE_SATC,
 	};
 
 	if (!dmar_detect_dsm(handle, func))
......
@@ -22,6 +22,7 @@
 #include <asm/pci-direct.h>
 
 #include "../irq_remapping.h"
+#include "cap_audit.h"
 
 enum irq_mode {
 	IRQ_REMAPPING,
@@ -734,6 +735,9 @@ static int __init intel_prepare_irq_remapping(void)
 	if (dmar_table_init() < 0)
 		return -ENODEV;
 
+	if (intel_cap_audit(CAP_AUDIT_STATIC_IRQR, NULL))
+		goto error;
+
 	if (!dmar_ir_support())
 		return -ENODEV;
 
@@ -1439,6 +1443,10 @@ static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
 	int ret;
 	int eim = x2apic_enabled();
 
+	ret = intel_cap_audit(CAP_AUDIT_HOTPLUG_IRQR, iommu);
+	if (ret)
+		return ret;
+
 	if (eim && !ecap_eim_support(iommu->ecap)) {
 		pr_info("DRHD %Lx: EIM not supported by DRHD, ecap %Lx\n",
 			iommu->reg_phys, iommu->ecap);
......
@@ -456,20 +456,6 @@ pasid_cache_invalidation_with_pasid(struct intel_iommu *iommu,
 	qi_submit_sync(iommu, &desc, 1, 0);
 }
 
-static void
-iotlb_invalidation_with_pasid(struct intel_iommu *iommu, u16 did, u32 pasid)
-{
-	struct qi_desc desc;
-
-	desc.qw0 = QI_EIOTLB_PASID(pasid) | QI_EIOTLB_DID(did) |
-			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) | QI_EIOTLB_TYPE;
-	desc.qw1 = 0;
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-
-	qi_submit_sync(iommu, &desc, 1, 0);
-}
-
 static void
 devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
 			       struct device *dev, u32 pasid)
@@ -514,7 +500,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
 	clflush_cache_range(pte, sizeof(*pte));
 
 	pasid_cache_invalidation_with_pasid(iommu, did, pasid);
-	iotlb_invalidation_with_pasid(iommu, did, pasid);
+	qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
 
 	/* Device IOTLB doesn't need to be flushed in caching mode. */
 	if (!cap_caching_mode(iommu->cap))
@@ -530,7 +516,7 @@ static void pasid_flush_caches(struct intel_iommu *iommu,
 
 	if (cap_caching_mode(iommu->cap)) {
 		pasid_cache_invalidation_with_pasid(iommu, did, pasid);
-		iotlb_invalidation_with_pasid(iommu, did, pasid);
+		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
 	} else {
 		iommu_flush_write_buffer(iommu);
 	}
......
@@ -123,53 +123,16 @@ static void __flush_svm_range_dev(struct intel_svm *svm,
 				  struct intel_svm_dev *sdev,
 				  unsigned long address,
 				  unsigned long pages, int ih)
 {
-	struct qi_desc desc;
-
-	if (pages == -1) {
-		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-			QI_EIOTLB_DID(sdev->did) |
-			QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
-			QI_EIOTLB_TYPE;
-		desc.qw1 = 0;
-	} else {
-		int mask = ilog2(__roundup_pow_of_two(pages));
-
-		desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-			QI_EIOTLB_DID(sdev->did) |
-			QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
-			QI_EIOTLB_TYPE;
-		desc.qw1 = QI_EIOTLB_ADDR(address) |
-			QI_EIOTLB_IH(ih) |
-			QI_EIOTLB_AM(mask);
-	}
-	desc.qw2 = 0;
-	desc.qw3 = 0;
-	qi_submit_sync(sdev->iommu, &desc, 1, 0);
-
-	if (sdev->dev_iotlb) {
-		desc.qw0 = QI_DEV_EIOTLB_PASID(svm->pasid) |
-			QI_DEV_EIOTLB_SID(sdev->sid) |
-			QI_DEV_EIOTLB_QDEP(sdev->qdep) |
-			QI_DEIOTLB_TYPE;
-		if (pages == -1) {
-			desc.qw1 = QI_DEV_EIOTLB_ADDR(-1ULL >> 1) |
-				QI_DEV_EIOTLB_SIZE;
-		} else if (pages > 1) {
-			/* The least significant zero bit indicates the size. So,
-			 * for example, an "address" value of 0x12345f000 will
-			 * flush from 0x123440000 to 0x12347ffff (256KiB). */
-			unsigned long last = address + ((unsigned long)(pages - 1) << VTD_PAGE_SHIFT);
-			unsigned long mask = __rounddown_pow_of_two(address ^ last);
-
-			desc.qw1 = QI_DEV_EIOTLB_ADDR((address & ~mask) |
-					(mask - 1)) | QI_DEV_EIOTLB_SIZE;
-		} else {
-			desc.qw1 = QI_DEV_EIOTLB_ADDR(address);
-		}
-		desc.qw2 = 0;
-		desc.qw3 = 0;
-		qi_submit_sync(sdev->iommu, &desc, 1, 0);
-	}
+	struct device_domain_info *info = get_domain_info(sdev->dev);
+
+	if (WARN_ON(!pages))
+		return;
+
+	qi_flush_piotlb(sdev->iommu, sdev->did, svm->pasid, address, pages, ih);
+	if (info->ats_enabled)
+		qi_flush_dev_iotlb_pasid(sdev->iommu, sdev->sid, info->pfsid,
+					 svm->pasid, sdev->qdep, address,
+					 order_base_2(pages));
 }
 
 static void intel_flush_svm_range_dev(struct intel_svm *svm,
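The qi_flush_piotlb()/qi_flush_dev_iotlb_pasid() helpers take an invalidation order, and order_base_2(pages) reproduces the rounding the removed code did by hand with ilog2(__roundup_pow_of_two(pages)). A standalone sketch of that function's behaviour:

#include <stdio.h>

/* Userspace model of order_base_2(): ceil(log2(n)). For pages = 5 this
 * yields 3, i.e. the flush covers 2^3 = 8 pages from the aligned base,
 * the same rounding the removed QI_EIOTLB_AM computation produced. */
static unsigned int order_base_2_sketch(unsigned long n)
{
	unsigned int order = 0;

	while ((1UL << order) < n)
		order++;
	return order;
}

int main(void)
{
	printf("order_base_2(5) = %u\n", order_base_2_sketch(5));	/* 3 */
	return 0;
}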
@@ -948,10 +911,8 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 		u64 address;
 
 		handled = 1;
 		req = &iommu->prq[head / sizeof(*req)];
-		result = QI_RESP_FAILURE;
+		result = QI_RESP_INVALID;
 		address = (u64)req->addr << VTD_PAGE_SHIFT;
 
 		if (!req->pasid_present) {
 			pr_err("%s: Page request without PASID: %08llx %08llx\n",
@@ -989,7 +950,6 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 			rcu_read_unlock();
 		}
 
-		result = QI_RESP_INVALID;
 		/* Since we're using init_mm.pgd directly, we should never take
 		 * any faults on kernel addresses. */
 		if (!svm->mm)
@@ -1079,8 +1039,17 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 	 * Clear the page request overflow bit and wake up all threads that
 	 * are waiting for the completion of this handling.
 	 */
-	if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO)
-		writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
+	if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
+		pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n",
+				    iommu->name);
+		head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
+		tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
+		if (head == tail) {
+			writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
+			pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared",
+					    iommu->name);
+		}
+	}
 
 	if (!completion_done(&iommu->prq_complete))
 		complete(&iommu->prq_complete);
......
@@ -44,26 +44,25 @@
 /*
  * We have 32 bits total; 12 bits resolved at level 1, 8 bits at level 2,
- * and 12 bits in a page. With some carefully-chosen coefficients we can
- * hide the ugly inconsistencies behind these macros and at least let the
- * rest of the code pretend to be somewhat sane.
+ * and 12 bits in a page.
+ * MediaTek extend 2 bits to reach 34bits, 14 bits at lvl1 and 8 bits at lvl2.
  */
 #define ARM_V7S_ADDR_BITS		32
-#define _ARM_V7S_LVL_BITS(lvl)		(16 - (lvl) * 4)
+#define _ARM_V7S_LVL_BITS(lvl, cfg)	((lvl) == 1 ? ((cfg)->ias - 20) : 8)
-#define ARM_V7S_LVL_SHIFT(lvl)		(ARM_V7S_ADDR_BITS - (4 + 8 * (lvl)))
+#define ARM_V7S_LVL_SHIFT(lvl)		((lvl) == 1 ? 20 : 12)
 #define ARM_V7S_TABLE_SHIFT		10
-#define ARM_V7S_PTES_PER_LVL(lvl)	(1 << _ARM_V7S_LVL_BITS(lvl))
+#define ARM_V7S_PTES_PER_LVL(lvl, cfg)	(1 << _ARM_V7S_LVL_BITS(lvl, cfg))
-#define ARM_V7S_TABLE_SIZE(lvl)						\
-	(ARM_V7S_PTES_PER_LVL(lvl) * sizeof(arm_v7s_iopte))
+#define ARM_V7S_TABLE_SIZE(lvl, cfg)					\
+	(ARM_V7S_PTES_PER_LVL(lvl, cfg) * sizeof(arm_v7s_iopte))
 #define ARM_V7S_BLOCK_SIZE(lvl)		(1UL << ARM_V7S_LVL_SHIFT(lvl))
 #define ARM_V7S_LVL_MASK(lvl)		((u32)(~0U << ARM_V7S_LVL_SHIFT(lvl)))
 #define ARM_V7S_TABLE_MASK		((u32)(~0U << ARM_V7S_TABLE_SHIFT))
-#define _ARM_V7S_IDX_MASK(lvl)		(ARM_V7S_PTES_PER_LVL(lvl) - 1)
+#define _ARM_V7S_IDX_MASK(lvl, cfg)	(ARM_V7S_PTES_PER_LVL(lvl, cfg) - 1)
-#define ARM_V7S_LVL_IDX(addr, lvl)	({				\
+#define ARM_V7S_LVL_IDX(addr, lvl, cfg)	({				\
 	int _l = lvl;							\
-	((u32)(addr) >> ARM_V7S_LVL_SHIFT(_l)) & _ARM_V7S_IDX_MASK(_l); \
+	((addr) >> ARM_V7S_LVL_SHIFT(_l)) & _ARM_V7S_IDX_MASK(_l, cfg); \
 })
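Since the table geometry is now driven by cfg->ias instead of being fixed at 32 bits, the split is easy to sanity-check outside the kernel. A standalone sketch mirroring the macros above (the io_pgtable_cfg here is a bare stand-in with only .ias; not kernel code):

#include <stdio.h>

struct io_pgtable_cfg { int ias; };

#define LVL_BITS(lvl, cfg)	((lvl) == 1 ? ((cfg)->ias - 20) : 8)
#define LVL_SHIFT(lvl)		((lvl) == 1 ? 20 : 12)
#define IDX_MASK(lvl, cfg)	((1UL << LVL_BITS(lvl, cfg)) - 1)
#define LVL_IDX(addr, lvl, cfg) \
	(((addr) >> LVL_SHIFT(lvl)) & IDX_MASK(lvl, cfg))

int main(void)
{
	struct io_pgtable_cfg cfg = { .ias = 34 };	/* the MediaTek case */
	unsigned long long iova = 0x3ffe5f000ULL;	/* a 34-bit IOVA */

	/* lvl1 resolves ias - 20 = 14 bits, lvl2 8 bits, page offset 12 */
	printf("lvl1 idx = %llu, lvl2 idx = %llu\n",
	       LVL_IDX(iova, 1, &cfg), LVL_IDX(iova, 2, &cfg));
	return 0;
}

With ias = 34 the level-1 table resolves 14 bits, i.e. 16384 entries (64KiB of arm_v7s_iopte), while level 2 and the page offset stay at 8 and 12 bits.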
 /*
@@ -112,9 +111,10 @@
 #define ARM_V7S_TEX_MASK		0x7
 #define ARM_V7S_ATTR_TEX(val)		(((val) & ARM_V7S_TEX_MASK) << ARM_V7S_TEX_SHIFT)
-/* MediaTek extend the two bits for PA 32bit/33bit */
+/* MediaTek extend the bits below for PA 32bit/33bit/34bit */
 #define ARM_V7S_ATTR_MTK_PA_BIT32	BIT(9)
 #define ARM_V7S_ATTR_MTK_PA_BIT33	BIT(4)
+#define ARM_V7S_ATTR_MTK_PA_BIT34	BIT(5)
 /* *well, except for TEX on level 2 large pages, of course :( */
 #define ARM_V7S_CONT_PAGE_TEX_SHIFT	6
@@ -194,6 +194,8 @@ static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
 		pte |= ARM_V7S_ATTR_MTK_PA_BIT32;
 	if (paddr & BIT_ULL(33))
 		pte |= ARM_V7S_ATTR_MTK_PA_BIT33;
+	if (paddr & BIT_ULL(34))
+		pte |= ARM_V7S_ATTR_MTK_PA_BIT34;
 	return pte;
 }
@@ -218,6 +220,8 @@ static phys_addr_t iopte_to_paddr(arm_v7s_iopte pte, int lvl,
 		paddr |= BIT_ULL(32);
 	if (pte & ARM_V7S_ATTR_MTK_PA_BIT33)
 		paddr |= BIT_ULL(33);
+	if (pte & ARM_V7S_ATTR_MTK_PA_BIT34)
+		paddr |= BIT_ULL(34);
 	return paddr;
 }
@@ -234,7 +238,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
 	struct device *dev = cfg->iommu_dev;
 	phys_addr_t phys;
 	dma_addr_t dma;
-	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+	size_t size = ARM_V7S_TABLE_SIZE(lvl, cfg);
 	void *table = NULL;
 	if (lvl == 1)
@@ -280,7 +284,7 @@ static void __arm_v7s_free_table(void *table, int lvl,
 {
 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	struct device *dev = cfg->iommu_dev;
-	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+	size_t size = ARM_V7S_TABLE_SIZE(lvl, cfg);
 	if (!cfg->coherent_walk)
 		dma_unmap_single(dev, __arm_v7s_dma_addr(table), size,
@@ -424,7 +428,7 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
 			arm_v7s_iopte *tblp;
 			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
-			tblp = ptep - ARM_V7S_LVL_IDX(iova, lvl);
+			tblp = ptep - ARM_V7S_LVL_IDX(iova, lvl, cfg);
 			if (WARN_ON(__arm_v7s_unmap(data, NULL, iova + i * sz,
 						    sz, lvl, tblp) != sz))
 				return -EINVAL;
@@ -477,7 +481,7 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
 	int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
 	/* Find our entry at the current level */
-	ptep += ARM_V7S_LVL_IDX(iova, lvl);
+	ptep += ARM_V7S_LVL_IDX(iova, lvl, cfg);
 	/* If we can install a leaf entry at this level, then do so */
 	if (num_entries)
@@ -519,7 +523,6 @@ static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova,
 			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
-	struct io_pgtable *iop = &data->iop;
 	int ret;
 	if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias) ||
@@ -535,12 +538,7 @@ static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova,
 	 * Synchronise all PTE updates for the new mapping before there's
 	 * a chance for anything to kick off a table walk for the new iova.
 	 */
-	if (iop->cfg.quirks & IO_PGTABLE_QUIRK_TLBI_ON_MAP) {
-		io_pgtable_tlb_flush_walk(iop, iova, size,
-					  ARM_V7S_BLOCK_SIZE(2));
-	} else {
-		wmb();
-	}
+	wmb();
 	return ret;
 }
@@ -550,7 +548,7 @@ static void arm_v7s_free_pgtable(struct io_pgtable *iop)
 	struct arm_v7s_io_pgtable *data = io_pgtable_to_data(iop);
 	int i;
-	for (i = 0; i < ARM_V7S_PTES_PER_LVL(1); i++) {
+	for (i = 0; i < ARM_V7S_PTES_PER_LVL(1, &data->iop.cfg); i++) {
 		arm_v7s_iopte pte = data->pgd[i];
 		if (ARM_V7S_PTE_IS_TABLE(pte, 1))
@@ -602,9 +600,9 @@ static size_t arm_v7s_split_blk_unmap(struct arm_v7s_io_pgtable *data,
 	if (!tablep)
 		return 0; /* Bytes unmapped */
-	num_ptes = ARM_V7S_PTES_PER_LVL(2);
+	num_ptes = ARM_V7S_PTES_PER_LVL(2, cfg);
 	num_entries = size >> ARM_V7S_LVL_SHIFT(2);
-	unmap_idx = ARM_V7S_LVL_IDX(iova, 2);
+	unmap_idx = ARM_V7S_LVL_IDX(iova, 2, cfg);
 	pte = arm_v7s_prot_to_pte(arm_v7s_pte_to_prot(blk_pte, 1), 2, cfg);
 	if (num_entries > 1)
@@ -646,7 +644,7 @@ static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
 	if (WARN_ON(lvl > 2))
 		return 0;
-	idx = ARM_V7S_LVL_IDX(iova, lvl);
+	idx = ARM_V7S_LVL_IDX(iova, lvl, &iop->cfg);
 	ptep += idx;
 	do {
 		pte[i] = READ_ONCE(ptep[i]);
@@ -717,7 +715,7 @@ static size_t arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 {
 	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
-	if (WARN_ON(upper_32_bits(iova)))
+	if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias)))
 		return 0;
 	return __arm_v7s_unmap(data, gather, iova, size, 1, data->pgd);
@@ -732,7 +730,7 @@ static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
 	u32 mask;
 	do {
-		ptep += ARM_V7S_LVL_IDX(iova, ++lvl);
+		ptep += ARM_V7S_LVL_IDX(iova, ++lvl, &data->iop.cfg);
 		pte = READ_ONCE(*ptep);
 		ptep = iopte_deref(pte, lvl, data);
 	} while (ARM_V7S_PTE_IS_TABLE(pte, lvl));
@@ -751,15 +749,14 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
 {
 	struct arm_v7s_io_pgtable *data;
-	if (cfg->ias > ARM_V7S_ADDR_BITS)
+	if (cfg->ias > (arm_v7s_is_mtk_enabled(cfg) ? 34 : ARM_V7S_ADDR_BITS))
 		return NULL;
-	if (cfg->oas > (arm_v7s_is_mtk_enabled(cfg) ? 34 : ARM_V7S_ADDR_BITS))
+	if (cfg->oas > (arm_v7s_is_mtk_enabled(cfg) ? 35 : ARM_V7S_ADDR_BITS))
 		return NULL;
 	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
 			    IO_PGTABLE_QUIRK_NO_PERMS |
-			    IO_PGTABLE_QUIRK_TLBI_ON_MAP |
 			    IO_PGTABLE_QUIRK_ARM_MTK_EXT |
 			    IO_PGTABLE_QUIRK_NON_STRICT))
 		return NULL;
@@ -775,8 +772,8 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
 	spin_lock_init(&data->split_lock);
 	data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
-					    ARM_V7S_TABLE_SIZE(2),
-					    ARM_V7S_TABLE_SIZE(2),
+					    ARM_V7S_TABLE_SIZE(2, cfg),
+					    ARM_V7S_TABLE_SIZE(2, cfg),
 					    ARM_V7S_TABLE_SLAB_FLAGS, NULL);
 	if (!data->l2_tables)
 		goto out_free_data;
...
@@ -24,6 +24,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] = {
 #ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S
 	[ARM_V7S] = &io_pgtable_arm_v7s_init_fns,
 #endif
+#ifdef CONFIG_AMD_IOMMU
+	[AMD_IOMMU_V1] = &io_pgtable_amd_iommu_v1_init_fns,
+#endif
 };
 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
...
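With the AMD v1 format registered in this table, a caller picks it up through the generic allocator like any other format. A hedged sketch of such a call site (the cfg values, my_flush_ops, domain_cookie and dev are illustrative, not taken from the AMD driver):

	struct io_pgtable_cfg cfg = {
		.pgsize_bitmap	= SZ_4K | SZ_2M | SZ_1G,
		.ias		= 48,
		.oas		= 52,
		.coherent_walk	= true,
		.tlb		= &my_flush_ops,	/* driver-provided flush callbacks */
		.iommu_dev	= dev,
	};
	struct io_pgtable_ops *ops;

	ops = alloc_io_pgtable_ops(AMD_IOMMU_V1, &cfg, domain_cookie);
	if (!ops)
		return -ENOMEM;
	/* ops->map(), ops->unmap() and ops->iova_to_phys() now drive the
	 * page table; free_io_pgtable_ops(ops) tears it down again. */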
@@ -1980,6 +1980,16 @@ int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
 }
 EXPORT_SYMBOL_GPL(iommu_attach_device);
+int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
+{
+	const struct iommu_ops *ops = domain->ops;
+
+	if (ops->is_attach_deferred && ops->is_attach_deferred(domain, dev))
+		return __iommu_attach_device(domain, dev);
+
+	return 0;
+}
 /*
  * Check flags and other user provided data for valid combinations. We also
  * make sure no reserved fields or unused flags are set. This is to ensure
@@ -2426,9 +2436,6 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 		size -= pgsize;
 	}
-	if (ops->iotlb_sync_map)
-		ops->iotlb_sync_map(domain);
 	/* unroll mapping in case something went wrong */
 	if (ret)
 		iommu_unmap(domain, orig_iova, orig_size - size);
@@ -2438,18 +2445,31 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return ret;
 }
+static int _iommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+	const struct iommu_ops *ops = domain->ops;
+	int ret;
+
+	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
+	if (ret == 0 && ops->iotlb_sync_map)
+		ops->iotlb_sync_map(domain, iova, size);
+
+	return ret;
+}
 int iommu_map(struct iommu_domain *domain, unsigned long iova,
 	      phys_addr_t paddr, size_t size, int prot)
 {
 	might_sleep();
-	return __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
+	return _iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
 }
 EXPORT_SYMBOL_GPL(iommu_map);
 int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot)
 {
-	return __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
+	return _iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
 }
 EXPORT_SYMBOL_GPL(iommu_map_atomic);
@@ -2533,6 +2553,7 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 			     struct scatterlist *sg, unsigned int nents, int prot,
 			     gfp_t gfp)
 {
+	const struct iommu_ops *ops = domain->ops;
 	size_t len = 0, mapped = 0;
 	phys_addr_t start;
 	unsigned int i = 0;
@@ -2563,6 +2584,8 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		sg = sg_next(sg);
 	}
+	if (ops->iotlb_sync_map)
+		ops->iotlb_sync_map(domain, iova, mapped);
 	return mapped;
 out_err:
@@ -2586,7 +2609,6 @@ size_t iommu_map_sg_atomic(struct iommu_domain *domain, unsigned long iova,
 {
 	return __iommu_map_sg(domain, iova, sg, nents, prot, GFP_ATOMIC);
 }
-EXPORT_SYMBOL_GPL(iommu_map_sg_atomic);
 int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
 			       phys_addr_t paddr, u64 size, int prot)
@@ -2599,15 +2621,6 @@ int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
 }
 EXPORT_SYMBOL_GPL(iommu_domain_window_enable);
-void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr)
-{
-	if (unlikely(domain->ops->domain_window_disable == NULL))
-		return;
-
-	return domain->ops->domain_window_disable(domain, wnd_nr);
-}
-EXPORT_SYMBOL_GPL(iommu_domain_window_disable);
 /**
  * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
  * @domain: the iommu domain where the fault has happened
@@ -2863,17 +2876,6 @@ EXPORT_SYMBOL_GPL(iommu_fwspec_add_ids);
 /*
  * Per device IOMMU features.
  */
-bool iommu_dev_has_feature(struct device *dev, enum iommu_dev_features feat)
-{
-	const struct iommu_ops *ops = dev->bus->iommu_ops;
-
-	if (ops && ops->dev_has_feat)
-		return ops->dev_has_feat(dev, feat);
-
-	return false;
-}
-EXPORT_SYMBOL_GPL(iommu_dev_has_feature);
 int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat)
 {
 	const struct iommu_ops *ops = dev->bus->iommu_ops;
...
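The driver-visible contract after this refactor: ->iotlb_sync_map fires once per successful top-level map call, after all PTEs are written, and now receives the mapped range instead of having to flush blindly. The msm_iommu hunk further down is the in-tree example; as a generic template (every mydrv_* name here is hypothetical):

	/* Hypothetical driver: flush exactly the range that was just mapped. */
	static void mydrv_iotlb_sync_map(struct iommu_domain *domain,
					 unsigned long iova, size_t size)
	{
		struct mydrv_domain *dom = to_mydrv_domain(domain);	/* assumed helper */

		mydrv_flush_range(dom, iova, size);	/* assumed hardware op */
	}

	static const struct iommu_ops mydrv_ops = {
		/* ... */
		.iotlb_sync_map	= mydrv_iotlb_sync_map,
	};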
@@ -55,7 +55,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
-bool has_iova_flush_queue(struct iova_domain *iovad)
+static bool has_iova_flush_queue(struct iova_domain *iovad)
 {
 	return !!iovad->fq;
 }
@@ -112,7 +112,6 @@ int init_iova_flush_queue(struct iova_domain *iovad,
 	return 0;
 }
-EXPORT_SYMBOL_GPL(init_iova_flush_queue);
 static struct rb_node *
 __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
@@ -451,7 +450,6 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 	return new_iova->pfn_lo;
 }
-EXPORT_SYMBOL_GPL(alloc_iova_fast);
 /**
  * free_iova_fast - free iova pfn range into rcache
@@ -598,7 +596,6 @@ void queue_iova(struct iova_domain *iovad,
 		mod_timer(&iovad->fq_timer,
 			  jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
 }
-EXPORT_SYMBOL_GPL(queue_iova);
 /**
  * put_iova_domain - destroys the iova domain
@@ -710,36 +707,6 @@ reserve_iova(struct iova_domain *iovad,
 }
 EXPORT_SYMBOL_GPL(reserve_iova);
-/**
- * copy_reserved_iova - copies the reserved between domains
- * @from: - source domain from where to copy
- * @to: - destination domin where to copy
- * This function copies reserved iova's from one domain to
- * other.
- */
-void
-copy_reserved_iova(struct iova_domain *from, struct iova_domain *to)
-{
-	unsigned long flags;
-	struct rb_node *node;
-
-	spin_lock_irqsave(&from->iova_rbtree_lock, flags);
-	for (node = rb_first(&from->rbroot); node; node = rb_next(node)) {
-		struct iova *iova = rb_entry(node, struct iova, node);
-		struct iova *new_iova;
-
-		if (iova->pfn_lo == IOVA_ANCHOR)
-			continue;
-
-		new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi);
-		if (!new_iova)
-			pr_err("Reserve iova range %lx@%lx failed\n",
-			       iova->pfn_lo, iova->pfn_lo);
-	}
-	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags);
-}
-EXPORT_SYMBOL_GPL(copy_reserved_iova);
 /*
  * Magazine caches for IOVA ranges. For an introduction to magazines,
  * see the USENIX 2001 paper "Magazines and Vmem: Extending the Slab
...
@@ -734,54 +734,45 @@ static int ipmmu_init_platform_device(struct device *dev,
 	return 0;
 }
-static const struct soc_device_attribute soc_rcar_gen3[] = {
-	{ .soc_id = "r8a774a1", },
-	{ .soc_id = "r8a774b1", },
-	{ .soc_id = "r8a774c0", },
-	{ .soc_id = "r8a774e1", },
-	{ .soc_id = "r8a7795", },
-	{ .soc_id = "r8a77961", },
-	{ .soc_id = "r8a7796", },
-	{ .soc_id = "r8a77965", },
-	{ .soc_id = "r8a77970", },
-	{ .soc_id = "r8a77990", },
-	{ .soc_id = "r8a77995", },
+static const struct soc_device_attribute soc_needs_opt_in[] = {
+	{ .family = "R-Car Gen3", },
+	{ .family = "RZ/G2", },
 	{ /* sentinel */ }
 };
-static const struct soc_device_attribute soc_rcar_gen3_whitelist[] = {
-	{ .soc_id = "r8a774b1", },
-	{ .soc_id = "r8a774c0", },
-	{ .soc_id = "r8a774e1", },
-	{ .soc_id = "r8a7795", .revision = "ES3.*" },
-	{ .soc_id = "r8a77961", },
-	{ .soc_id = "r8a77965", },
-	{ .soc_id = "r8a77990", },
-	{ .soc_id = "r8a77995", },
+static const struct soc_device_attribute soc_denylist[] = {
+	{ .soc_id = "r8a774a1", },
+	{ .soc_id = "r8a7795", .revision = "ES1.*" },
+	{ .soc_id = "r8a7795", .revision = "ES2.*" },
+	{ .soc_id = "r8a7796", },
 	{ /* sentinel */ }
 };
-static const char * const rcar_gen3_slave_whitelist[] = {
+static const char * const devices_allowlist[] = {
+	"ee100000.mmc",
+	"ee120000.mmc",
+	"ee140000.mmc",
+	"ee160000.mmc"
 };
-static bool ipmmu_slave_whitelist(struct device *dev)
+static bool ipmmu_device_is_allowed(struct device *dev)
 {
 	unsigned int i;
 	/*
-	 * For R-Car Gen3 use a white list to opt-in slave devices.
+	 * R-Car Gen3 and RZ/G2 use the allow list to opt-in devices.
 	 * For Other SoCs, this returns true anyway.
 	 */
-	if (!soc_device_match(soc_rcar_gen3))
+	if (!soc_device_match(soc_needs_opt_in))
 		return true;
-	/* Check whether this R-Car Gen3 can use the IPMMU correctly or not */
-	if (!soc_device_match(soc_rcar_gen3_whitelist))
+	/* Check whether this SoC can use the IPMMU correctly or not */
+	if (soc_device_match(soc_denylist))
 		return false;
-	/* Check whether this slave device can work with the IPMMU */
-	for (i = 0; i < ARRAY_SIZE(rcar_gen3_slave_whitelist); i++) {
-		if (!strcmp(dev_name(dev), rcar_gen3_slave_whitelist[i]))
+	/* Check whether this device can work with the IPMMU */
+	for (i = 0; i < ARRAY_SIZE(devices_allowlist); i++) {
+		if (!strcmp(dev_name(dev), devices_allowlist[i]))
 			return true;
 	}
@@ -792,7 +783,7 @@ static bool ipmmu_slave_whitelist(struct device *dev)
 static int ipmmu_of_xlate(struct device *dev,
 			  struct of_phandle_args *spec)
 {
-	if (!ipmmu_slave_whitelist(dev))
+	if (!ipmmu_device_is_allowed(dev))
 		return -ENODEV;
 	iommu_fwspec_add_ids(dev, spec->args, 1);
...
@@ -343,7 +343,6 @@ static int msm_iommu_domain_config(struct msm_priv *priv)
 	spin_lock_init(&priv->pgtlock);
 	priv->cfg = (struct io_pgtable_cfg) {
-		.quirks = IO_PGTABLE_QUIRK_TLBI_ON_MAP,
 		.pgsize_bitmap = msm_iommu_ops.pgsize_bitmap,
 		.ias = 32,
 		.oas = 32,
@@ -490,6 +489,14 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return ret;
 }
+static void msm_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+			       size_t size)
+{
+	struct msm_priv *priv = to_msm_priv(domain);
+
+	__flush_iotlb_range(iova, size, SZ_4K, false, priv);
+}
 static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 			      size_t len, struct iommu_iotlb_gather *gather)
 {
@@ -680,6 +687,7 @@ static struct iommu_ops msm_iommu_ops = {
 	 * kick starting the other master.
 	 */
 	.iotlb_sync = NULL,
+	.iotlb_sync_map = msm_iommu_sync_map,
 	.iova_to_phys = msm_iommu_iova_to_phys,
 	.probe_device = msm_iommu_probe_device,
 	.release_device = msm_iommu_release_device,
...
(diff collapsed)
@@ -17,10 +17,13 @@
 #include <linux/spinlock.h>
 #include <linux/dma-mapping.h>
 #include <soc/mediatek/smi.h>
+#include <dt-bindings/memory/mtk-memory-port.h>
 #define MTK_LARB_COM_MAX	8
 #define MTK_LARB_SUBCOM_MAX	4
+#define MTK_IOMMU_GROUP_MAX	8
 struct mtk_iommu_suspend_reg {
 	union {
 		u32			standard_axi_mode;/* v1 */
@@ -42,12 +45,18 @@ enum mtk_iommu_plat {
 	M4U_MT8167,
 	M4U_MT8173,
 	M4U_MT8183,
+	M4U_MT8192,
 };
+struct mtk_iommu_iova_region;
 struct mtk_iommu_plat_data {
 	enum mtk_iommu_plat	m4u_plat;
 	u32			flags;
 	u32			inv_sel_reg;
+	unsigned int		iova_region_nr;
+	const struct mtk_iommu_iova_region	*iova_region;
 	unsigned char		larbid_remap[MTK_LARB_COM_MAX][MTK_LARB_SUBCOM_MAX];
 };
@@ -61,12 +70,13 @@ struct mtk_iommu_data {
 	phys_addr_t			protect_base; /* protect memory base */
 	struct mtk_iommu_suspend_reg	reg;
 	struct mtk_iommu_domain		*m4u_dom;
-	struct iommu_group		*m4u_group;
+	struct iommu_group		*m4u_group[MTK_IOMMU_GROUP_MAX];
 	bool                            enable_4GB;
 	spinlock_t			tlb_lock; /* lock for tlb range flush */
 	struct iommu_device		iommu;
 	const struct mtk_iommu_plat_data *plat_data;
+	struct device			*smicomm_dev;
 	struct dma_iommu_mapping	*mapping; /* For mtk_iommu_v1.c */
...
@@ -261,7 +261,8 @@ static int gart_iommu_of_xlate(struct device *dev,
 	return 0;
 }
-static void gart_iommu_sync_map(struct iommu_domain *domain)
+static void gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+				size_t size)
 {
 	FLUSH_GART_REGS(gart_handle);
 }
@@ -269,7 +270,9 @@ static void gart_iommu_sync_map(struct iommu_domain *domain)
 static void gart_iommu_sync(struct iommu_domain *domain,
 			    struct iommu_iotlb_gather *gather)
 {
-	gart_iommu_sync_map(domain);
+	size_t length = gather->end - gather->start + 1;
+
+	gart_iommu_sync_map(domain, gather->start, length);
 }
 static const struct iommu_ops gart_iommu_ops = {
...
@@ -15,6 +15,7 @@
 #include <linux/pm_runtime.h>
 #include <soc/mediatek/smi.h>
 #include <dt-bindings/memory/mt2701-larb-port.h>
+#include <dt-bindings/memory/mtk-memory-port.h>
 /* mt8173 */
 #define SMI_LARB_MMU_EN		0xf00
@@ -43,6 +44,10 @@
 /* mt2712 */
 #define SMI_LARB_NONSEC_CON(id)	(0x380 + ((id) * 4))
 #define F_MMU_EN		BIT(0)
+#define BANK_SEL(id)		({		\
+	u32 _id = (id) & 0x3;			\
+	(_id << 8 | _id << 10 | _id << 12 | _id << 14);	\
+})
 /* SMI COMMON */
 #define SMI_BUS_SEL		0x220
@@ -87,6 +92,7 @@ struct mtk_smi_larb { /* larb: local arbiter */
 	const struct mtk_smi_larb_gen	*larb_gen;
 	int				larbid;
 	u32				*mmu;
+	unsigned char			*bank;
 };
 static int mtk_smi_clk_enable(const struct mtk_smi *smi)
@@ -153,6 +159,7 @@ mtk_smi_larb_bind(struct device *dev, struct device *master, void *data)
 		if (dev == larb_mmu[i].dev) {
 			larb->larbid = i;
 			larb->mmu = &larb_mmu[i].mmu;
+			larb->bank = larb_mmu[i].bank;
 			return 0;
 		}
 	}
@@ -171,6 +178,7 @@ static void mtk_smi_larb_config_port_gen2_general(struct device *dev)
 	for_each_set_bit(i, (unsigned long *)larb->mmu, 32) {
 		reg = readl_relaxed(larb->base + SMI_LARB_NONSEC_CON(i));
 		reg |= F_MMU_EN;
+		reg |= BANK_SEL(larb->bank[i]);
 		writel(reg, larb->base + SMI_LARB_NONSEC_CON(i));
 	}
 }
...
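BANK_SEL() replicates the 2-bit bank index into four 2-bit fields (bits 8-9, 10-11, 12-13 and 14-15) of SMI_LARB_NONSEC_CON. A standalone check of the resulting bit pattern (GNU C statement expression, as in the macro above; not kernel code):

#include <stdio.h>

#define BANK_SEL(id) ({					\
	unsigned int _id = (id) & 0x3;			\
	(_id << 8 | _id << 10 | _id << 12 | _id << 14);	\
})

int main(void)
{
	printf("BANK_SEL(1) = %#x\n", BANK_SEL(1));	/* 0x5500 */
	printf("BANK_SEL(2) = %#x\n", BANK_SEL(2));	/* 0xaa00 */
	return 0;
}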
@@ -62,7 +62,7 @@ config ARM_PMU_ACPI
 config ARM_SMMU_V3_PMU
 	tristate "ARM SMMUv3 Performance Monitors Extension"
-	depends on ARM64 && ACPI && ARM_SMMU_V3
+	depends on ARM64 && ACPI
 	help
 	  Provides support for the ARM SMMUv3 Performance Monitor Counter
 	  Groups (PMCG), which provide monitoring of transactions passing
...
@@ -514,7 +514,8 @@ enum acpi_dmar_type {
 	ACPI_DMAR_TYPE_ROOT_ATS = 2,
 	ACPI_DMAR_TYPE_HARDWARE_AFFINITY = 3,
 	ACPI_DMAR_TYPE_NAMESPACE = 4,
-	ACPI_DMAR_TYPE_RESERVED = 5	/* 5 and greater are reserved */
+	ACPI_DMAR_TYPE_SATC = 5,
+	ACPI_DMAR_TYPE_RESERVED = 6	/* 6 and greater are reserved */
 };
 /* DMAR Device Scope structure */
@@ -607,6 +608,14 @@ struct acpi_dmar_andd {
 	char device_name[1];
 };
+/* 5: SOC Integrated Address Translation Cache Reporting Structure */
+struct acpi_dmar_satc {
+	struct acpi_dmar_header header;
+	u8 flags;
+	u8 reserved;
+	u16 segment;
+};
 /*******************************************************************************
  *
  * DRTM - Dynamic Root of Trust for Measurement table
...
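All DMAR subtables share the common acpi_dmar_header, so a walker only needs the new type value to pick SATC entries out of the table. A simplified sketch of that walk (illustrative only; the in-kernel parser is not shown in this view, and walk_dmar_for_satc is a made-up name):

/* acpi/actbl1.h (normally pulled in via linux/acpi.h) provides
 * struct acpi_table_dmar and struct acpi_dmar_header. */
static void walk_dmar_for_satc(struct acpi_table_dmar *dmar, size_t len)
{
	struct acpi_dmar_header *hdr = (void *)(dmar + 1);
	void *end = (void *)dmar + len;

	while ((void *)hdr < end && hdr->length) {
		if (hdr->type == ACPI_DMAR_TYPE_SATC) {
			struct acpi_dmar_satc *satc = (void *)hdr;
			/* satc->segment is the PCI segment; device-scope
			 * entries follow the fixed part of the structure. */
			(void)satc;
		}
		hdr = (void *)hdr + hdr->length;	/* next subtable */
	}
}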
@@ -4,8 +4,8 @@
  * Author: Honghui Zhang <honghui.zhang@mediatek.com>
  */
-#ifndef _MT2701_LARB_PORT_H_
-#define _MT2701_LARB_PORT_H_
+#ifndef _DT_BINDINGS_MEMORY_MT2701_LARB_PORT_H_
+#define _DT_BINDINGS_MEMORY_MT2701_LARB_PORT_H_
 /*
  * Mediatek m4u generation 1 such as mt2701 has flat m4u port numbers,
...
@@ -3,10 +3,10 @@
  * Copyright (c) 2017 MediaTek Inc.
  * Author: Yong Wu <yong.wu@mediatek.com>
  */
-#ifndef __DTS_IOMMU_PORT_MT2712_H
-#define __DTS_IOMMU_PORT_MT2712_H
+#ifndef _DT_BINDINGS_MEMORY_MT2712_LARB_PORT_H_
+#define _DT_BINDINGS_MEMORY_MT2712_LARB_PORT_H_
-#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))
+#include <dt-bindings/memory/mtk-memory-port.h>
 #define M4U_LARB0_ID			0
 #define M4U_LARB1_ID			1
...
@@ -4,10 +4,10 @@
  * Author: Chao Hao <chao.hao@mediatek.com>
  */
-#ifndef _DTS_IOMMU_PORT_MT6779_H_
-#define _DTS_IOMMU_PORT_MT6779_H_
+#ifndef _DT_BINDINGS_MEMORY_MT6779_LARB_PORT_H_
+#define _DT_BINDINGS_MEMORY_MT6779_LARB_PORT_H_
-#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))
+#include <dt-bindings/memory/mtk-memory-port.h>
 #define M4U_LARB0_ID			0
 #define M4U_LARB1_ID			1
...
@@ -5,10 +5,10 @@
  * Author: Honghui Zhang <honghui.zhang@mediatek.com>
  * Author: Fabien Parent <fparent@baylibre.com>
  */
-#ifndef __DTS_IOMMU_PORT_MT8167_H
-#define __DTS_IOMMU_PORT_MT8167_H
+#ifndef _DT_BINDINGS_MEMORY_MT8167_LARB_PORT_H_
+#define _DT_BINDINGS_MEMORY_MT8167_LARB_PORT_H_
-#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))
+#include <dt-bindings/memory/mtk-memory-port.h>
 #define M4U_LARB0_ID			0
 #define M4U_LARB1_ID			1
...
@@ -3,10 +3,10 @@
  * Copyright (c) 2015-2016 MediaTek Inc.
  * Author: Yong Wu <yong.wu@mediatek.com>
  */
-#ifndef __DTS_IOMMU_PORT_MT8173_H
-#define __DTS_IOMMU_PORT_MT8173_H
+#ifndef _DT_BINDINGS_MEMORY_MT8173_LARB_PORT_H_
+#define _DT_BINDINGS_MEMORY_MT8173_LARB_PORT_H_
-#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))
+#include <dt-bindings/memory/mtk-memory-port.h>
 #define M4U_LARB0_ID			0
 #define M4U_LARB1_ID			1
...
@@ -3,10 +3,10 @@
  * Copyright (c) 2018 MediaTek Inc.
  * Author: Yong Wu <yong.wu@mediatek.com>
  */
-#ifndef __DTS_IOMMU_PORT_MT8183_H
-#define __DTS_IOMMU_PORT_MT8183_H
+#ifndef _DT_BINDINGS_MEMORY_MT8183_LARB_PORT_H_
+#define _DT_BINDINGS_MEMORY_MT8183_LARB_PORT_H_
-#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))
+#include <dt-bindings/memory/mtk-memory-port.h>
 #define M4U_LARB0_ID			0
 #define M4U_LARB1_ID			1
...
(diff collapsed)
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2020 MediaTek Inc.
* Author: Yong Wu <yong.wu@mediatek.com>
*/
#ifndef __DT_BINDINGS_MEMORY_MTK_MEMORY_PORT_H_
#define __DT_BINDINGS_MEMORY_MTK_MEMORY_PORT_H_
#define MTK_LARB_NR_MAX 32
#define MTK_M4U_ID(larb, port) (((larb) << 5) | (port))
#define MTK_M4U_TO_LARB(id) (((id) >> 5) & 0x1f)
#define MTK_M4U_TO_PORT(id) ((id) & 0x1f)
#endif
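The shared helpers above encode a master as (larb << 5) | port, five bits each for the port and the larb. A quick self-check of the packing (plain C, not kernel code):

#include <assert.h>

#define MTK_M4U_ID(larb, port)	(((larb) << 5) | (port))
#define MTK_M4U_TO_LARB(id)	(((id) >> 5) & 0x1f)
#define MTK_M4U_TO_PORT(id)	((id) & 0x1f)

int main(void)
{
	int id = MTK_M4U_ID(3, 17);	/* larb 3, port 17 -> 0x71 */

	assert(MTK_M4U_TO_LARB(id) == 3);
	assert(MTK_M4U_TO_PORT(id) == 17);
	return 0;
}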
@@ -138,6 +138,7 @@ extern void intel_iommu_shutdown(void);
 extern int dmar_parse_one_rmrr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg);
+extern int dmar_parse_one_satc(struct acpi_dmar_header *hdr, void *arg);
 extern int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg);
 extern int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
 extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
@@ -149,6 +150,7 @@ static inline void intel_iommu_shutdown(void) { }
 #define	dmar_parse_one_atsr		dmar_res_noop
 #define	dmar_check_one_atsr		dmar_res_noop
 #define	dmar_release_one_atsr		dmar_res_noop
+#define	dmar_parse_one_satc		dmar_res_noop
 static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
...
(3 diffs collapsed)
@@ -150,10 +150,8 @@ unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 			      unsigned long limit_pfn, bool flush_rcache);
 struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 			  unsigned long pfn_hi);
-void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 		      unsigned long start_pfn);
-bool has_iova_flush_queue(struct iova_domain *iovad);
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
@@ -212,22 +210,12 @@ static inline struct iova *reserve_iova(struct iova_domain *iovad,
 	return NULL;
 }
-static inline void copy_reserved_iova(struct iova_domain *from,
-				      struct iova_domain *to)
-{
-}
 static inline void init_iova_domain(struct iova_domain *iovad,
 				    unsigned long granule,
 				    unsigned long start_pfn)
 {
 }
-static inline bool has_iova_flush_queue(struct iova_domain *iovad)
-{
-	return false;
-}
 static inline int init_iova_flush_queue(struct iova_domain *iovad,
 					iova_flush_cb flush_cb,
 					iova_entry_dtor entry_dtor)
...
(2 diffs collapsed)