Commit 70cf769c authored by Joerg Roedel


Merge branches 'arm/rockchip', 'arm/exynos', 'arm/smmu', 'arm/mediatek', 'arm/io-pgtable', 'arm/renesas' and 'core' into next
* Mediatek IOMMU Architecture Implementation
Some Mediatek SoCs contain a Multimedia Memory Management Unit (M4U), which
uses the ARM Short-Descriptor translation table format for address translation.
The M4U hardware block diagram is shown below:
              EMI (External Memory Interface)
               |
              m4u (Multimedia Memory Management Unit)
               |
              SMI Common (Smart Multimedia Interface Common)
               |
      +----------------+-------
      |                |
      |                |
  SMI larb0        SMI larb1   ...  SoCs have several SMI local arbiters (larb).
  (display)         (vdec)
      |                |
      |                |
+-----+-----+     +----+----+
|     |     |     |    |    |
|     |     | ... |    |    | ...  There are different ports in each larb.
|     |     |     |    |    |
OVL0 RDMA0 WDMA0  MC   PP   VLD
As shown above, the multimedia hardware goes through the SMI and the M4U when it
accesses EMI. SMI is a bridge between the M4U and the multimedia hardware; it
consists of the SMI local arbiters and the SMI common block. SMI controls whether
an access from the multimedia hardware goes through the M4U for translation or
bypasses it and talks directly to EMI, and it also controls the power domain and
clocks for each local arbiter.
Normally one local arbiter (larb) is dedicated to each multimedia hardware block,
such as display, video decode or camera, and each larb has several ports. For
example, the video decode larb contains ports such as MC, PP and VLD, each
corresponding to part of the video hardware.
Required properties:
- compatible : must be "mediatek,mt8173-m4u".
- reg : m4u register base and size.
- interrupts : the interrupt of m4u.
- clocks : must contain one entry for each entry in clock-names.
- clock-names : must be "bclk"; it is the block clock of m4u.
- mediatek,larbs : list of phandles to the local arbiters in the current SoC.
  Refer to bindings/memory-controllers/mediatek,smi-larb.txt. The list must be
  sorted by local arbiter index: larb0, larb1, larb2...
- #iommu-cells : must be 1. The cell is the mtk_m4u_id as defined in
  dt-bindings/memory/mt8173-larb-port.h.
Example:
	iommu: iommu@10205000 {
		compatible = "mediatek,mt8173-m4u";
		reg = <0 0x10205000 0 0x1000>;
		interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
		clocks = <&infracfg CLK_INFRA_M4U>;
		clock-names = "bclk";
		mediatek,larbs = <&larb0 &larb1 &larb2 &larb3 &larb4 &larb5>;
		#iommu-cells = <1>;
	};
Example for a client device:
	display {
		compatible = "mediatek,mt8173-disp";
		iommus = <&iommu M4U_PORT_DISP_OVL0>,
			 <&iommu M4U_PORT_DISP_RDMA0>;
		...
	};
...@@ -7,23 +7,34 @@ connected to the IPMMU through a port called micro-TLB.
 Required Properties:
-  - compatible: Must contain SoC-specific and generic entries from below.
+  - compatible: Must contain SoC-specific and generic entry below in case
+    the device is compatible with the R-Car Gen2 VMSA-compatible IPMMU.
     - "renesas,ipmmu-r8a73a4" for the R8A73A4 (R-Mobile APE6) IPMMU.
     - "renesas,ipmmu-r8a7790" for the R8A7790 (R-Car H2) IPMMU.
     - "renesas,ipmmu-r8a7791" for the R8A7791 (R-Car M2-W) IPMMU.
     - "renesas,ipmmu-r8a7793" for the R8A7793 (R-Car M2-N) IPMMU.
     - "renesas,ipmmu-r8a7794" for the R8A7794 (R-Car E2) IPMMU.
+    - "renesas,ipmmu-r8a7795" for the R8A7795 (R-Car H3) IPMMU.
     - "renesas,ipmmu-vmsa" for generic R-Car Gen2 VMSA-compatible IPMMU.
 - reg: Base address and size of the IPMMU registers.
 - interrupts: Specifiers for the MMU fault interrupts. For instances that
   support secure mode two interrupts must be specified, for non-secure and
   secure mode, in that order. For instances that don't support secure mode a
-  single interrupt must be specified.
+  single interrupt must be specified. Not required for cache IPMMUs.
 - #iommu-cells: Must be 1.
+
+Optional properties:
+
+  - renesas,ipmmu-main: reference to the main IPMMU instance in two cells.
+    The first cell is a phandle to the main IPMMU and the second cell is
+    the interrupt bit number associated with the particular cache IPMMU device.
+    The interrupt bit number needs to match the main IPMMU IMSSTR register.
+    Only used by cache IPMMU instances.
 
 Each bus master connected to an IPMMU must reference the IPMMU in its device
 node with the following property:
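A hypothetical fragment illustrating the new renesas,ipmmu-main link between a
main IPMMU and a cache IPMMU (node names, unit addresses, interrupt numbers and
the interrupt bit index here are illustrative, not taken from a real SoC):

```dts
/* Main IPMMU: owns the fault interrupts. */
ipmmu_mm: mmu@e67b0000 {
	compatible = "renesas,ipmmu-r8a7795";
	reg = <0 0xe67b0000 0 0x1000>;
	interrupts = <GIC_SPI 196 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 197 IRQ_TYPE_LEVEL_HIGH>;
	#iommu-cells = <1>;
};

/* Cache IPMMU: no interrupts; reports through bit 0 of the
 * main IPMMU's IMSSTR register. */
ipmmu_vc0: mmu@fe6b0000 {
	compatible = "renesas,ipmmu-r8a7795";
	reg = <0 0xfe6b0000 0 0x1000>;
	renesas,ipmmu-main = <&ipmmu_mm 0>;
	#iommu-cells = <1>;
};
```

The second cell of renesas,ipmmu-main selects which IMSSTR bit the main IPMMU
uses to report faults on behalf of this cache instance.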
......
...@@ -23,28 +23,24 @@ MMUs.
   for window 1, 2 and 3.
 * M2M Scalers and G2D in Exynos5420 has one System MMU on the read channel and
   the other System MMU on the write channel.
-The drivers must consider how to handle those System MMUs. One of the idea is
-to implement child devices or sub-devices which are the client devices of the
-System MMU.
-Note:
-The current DT binding for the Exynos System MMU is incomplete. The following
-properties can be removed or changed, if found incompatible with the "Generic
-IOMMU Binding" support for attaching devices to the IOMMU.
+For information on assigning System MMU controller to its peripheral devices,
+see generic IOMMU bindings.
 Required properties:
 - compatible: Should be "samsung,exynos-sysmmu"
 - reg: A tuple of base address and size of System MMU registers.
+- #iommu-cells: Should be <0>.
 - interrupt-parent: The phandle of the interrupt controller of System MMU
 - interrupts: An interrupt specifier for interrupt signal of System MMU,
   according to the format defined by a particular interrupt
   controller.
-- clock-names: Should be "sysmmu" if the System MMU is needed to gate its clock.
+- clock-names: Should be "sysmmu" or a pair of "aclk" and "pclk" to gate
+  SYSMMU core clocks.
   Optional "master" if the clock to the System MMU is gated by
-  another gate clock other than "sysmmu".
-  Exynos4 SoCs, there needs no "master" clock.
-  Exynos5 SoCs, some System MMUs must have "master" clocks.
-- clocks: Required if the System MMU is needed to gate its clock.
+  another gate clock other core (usually main gate clock
+  of peripheral device this SYSMMU belongs to).
+- clocks: Phandles for respective clocks described by clock-names.
 - power-domains: Required if the System MMU is needed to gate its power.
   Please refer to the following document:
   Documentation/devicetree/bindings/power/pd-samsung.txt
...@@ -57,6 +53,7 @@ Examples:
 	power-domains = <&pd_gsc>;
 	clocks = <&clock CLK_GSCL0>;
 	clock-names = "gscl";
+	iommus = <&sysmmu_gsc0>;
 };
 sysmmu_gsc0: sysmmu@13E80000 {
...@@ -67,4 +64,5 @@ Examples:
 	clock-names = "sysmmu", "master";
 	clocks = <&clock CLK_SMMU_GSCL0>, <&clock CLK_GSCL0>;
 	power-domains = <&pd_gsc>;
+	#iommu-cells = <0>;
 };
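Putting the fragments of the updated Exynos binding together, a consolidated
sketch of a client and its System MMU (the interrupt-parent, interrupt numbers
and the client node details are illustrative, filled in around the diff
fragments above):

```dts
gsc_0: gsc@13e00000 {
	compatible = "samsung,exynos5-gsc";
	reg = <0x13e00000 0x1000>;
	power-domains = <&pd_gsc>;
	clocks = <&clock CLK_GSCL0>;
	clock-names = "gscl";
	/* New generic-binding link to the System MMU below. */
	iommus = <&sysmmu_gsc0>;
};

sysmmu_gsc0: sysmmu@13e80000 {
	compatible = "samsung,exynos-sysmmu";
	reg = <0x13e80000 0x1000>;
	interrupt-parent = <&combiner>;
	interrupts = <2 0>;
	clock-names = "sysmmu", "master";
	clocks = <&clock CLK_SMMU_GSCL0>, <&clock CLK_GSCL0>;
	power-domains = <&pd_gsc>;
	/* No cells needed: one SYSMMU serves exactly one master. */
	#iommu-cells = <0>;
};
```

With #iommu-cells = <0>, the iommus phandle alone identifies the translation
context, which is why the client reference carries no extra specifier cells.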
SMI (Smart Multimedia Interface) Common
For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt.
Required properties:
- compatible : must be "mediatek,mt8173-smi-common"
- reg : the register and size of the SMI block.
- power-domains : a phandle to the power domain of this local arbiter.
- clocks : Must contain an entry for each entry in clock-names.
- clock-names : must contain 2 entries, as follows:
  - "apb" : Advanced Peripheral Bus clock; it is the clock used for setting
    the registers.
  - "smi" : the clock used for transferring data and commands.
  They may be the same if both source clocks are the same.
Example:
	smi_common: smi@14022000 {
		compatible = "mediatek,mt8173-smi-common";
		reg = <0 0x14022000 0 0x1000>;
		power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
		clocks = <&mmsys CLK_MM_SMI_COMMON>,
			 <&mmsys CLK_MM_SMI_COMMON>;
		clock-names = "apb", "smi";
	};
SMI (Smart Multimedia Interface) Local Arbiter
For the hardware block diagram, please check bindings/iommu/mediatek,iommu.txt.
Required properties:
- compatible : must be "mediatek,mt8173-smi-larb"
- reg : the register and size of this local arbiter.
- mediatek,smi : a phandle to the smi_common node.
- power-domains : a phandle to the power domain of this local arbiter.
- clocks : Must contain an entry for each entry in clock-names.
- clock-names: must contain 2 entries, as follows:
  - "apb" : Advanced Peripheral Bus clock; it is the clock used for setting
    the registers.
  - "smi" : the clock used for transferring data and commands.
Example:
	larb1: larb@16010000 {
		compatible = "mediatek,mt8173-smi-larb";
		reg = <0 0x16010000 0 0x1000>;
		mediatek,smi = <&smi_common>;
		power-domains = <&scpsys MT8173_POWER_DOMAIN_VDEC>;
		clocks = <&vdecsys CLK_VDEC_CKEN>,
			 <&vdecsys CLK_VDEC_LARB_CKEN>;
		clock-names = "apb", "smi";
	};
...@@ -1800,11 +1800,13 @@ F: drivers/edac/synopsys_edac.c
 ARM SMMU DRIVERS
 M:	Will Deacon <will.deacon@arm.com>
+R:	Robin Murphy <robin.murphy@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/iommu/arm-smmu.c
 F:	drivers/iommu/arm-smmu-v3.c
 F:	drivers/iommu/io-pgtable-arm.c
+F:	drivers/iommu/io-pgtable-arm-v7s.c
 
 ARM64 PORT (AARCH64 ARCHITECTURE)
 M:	Catalin Marinas <catalin.marinas@arm.com>
...@@ -4311,6 +4313,12 @@ L: dri-devel@lists.freedesktop.org
 S:	Maintained
 F:	drivers/gpu/drm/exynos/exynos_dp*
 
+EXYNOS SYSMMU (IOMMU) driver
+M:	Marek Szyprowski <m.szyprowski@samsung.com>
+L:	iommu@lists.linux-foundation.org
+S:	Maintained
+F:	drivers/iommu/exynos-iommu.c
+
 EXYNOS MIPI DISPLAY DRIVERS
 M:	Inki Dae <inki.dae@samsung.com>
 M:	Donghwa Lee <dh09.lee@samsung.com>
......
...@@ -14,6 +14,7 @@
 #include <dt-bindings/clock/mt8173-clk.h>
 #include <dt-bindings/interrupt-controller/irq.h>
 #include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/memory/mt8173-larb-port.h>
 #include <dt-bindings/phy/phy.h>
 #include <dt-bindings/power/mt8173-power.h>
 #include <dt-bindings/reset/mt8173-resets.h>
...@@ -277,6 +278,17 @@ sysirq: intpol-controller@10200620 {
 		reg = <0 0x10200620 0 0x20>;
 	};
 
+	iommu: iommu@10205000 {
+		compatible = "mediatek,mt8173-m4u";
+		reg = <0 0x10205000 0 0x1000>;
+		interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_LOW>;
+		clocks = <&infracfg CLK_INFRA_M4U>;
+		clock-names = "bclk";
+		mediatek,larbs = <&larb0 &larb1 &larb2
+				  &larb3 &larb4 &larb5>;
+		#iommu-cells = <1>;
+	};
+
 	apmixedsys: clock-controller@10209000 {
 		compatible = "mediatek,mt8173-apmixedsys";
 		reg = <0 0x10209000 0 0x1000>;
...@@ -589,29 +601,98 @@ pwm1: pwm@1401f000 {
 		status = "disabled";
 	};
 
+	larb0: larb@14021000 {
+		compatible = "mediatek,mt8173-smi-larb";
+		reg = <0 0x14021000 0 0x1000>;
+		mediatek,smi = <&smi_common>;
+		power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
+		clocks = <&mmsys CLK_MM_SMI_LARB0>,
+			 <&mmsys CLK_MM_SMI_LARB0>;
+		clock-names = "apb", "smi";
+	};
+
+	smi_common: smi@14022000 {
+		compatible = "mediatek,mt8173-smi-common";
+		reg = <0 0x14022000 0 0x1000>;
+		power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
+		clocks = <&mmsys CLK_MM_SMI_COMMON>,
+			 <&mmsys CLK_MM_SMI_COMMON>;
+		clock-names = "apb", "smi";
+	};
+
+	larb4: larb@14027000 {
+		compatible = "mediatek,mt8173-smi-larb";
+		reg = <0 0x14027000 0 0x1000>;
+		mediatek,smi = <&smi_common>;
+		power-domains = <&scpsys MT8173_POWER_DOMAIN_MM>;
+		clocks = <&mmsys CLK_MM_SMI_LARB4>,
+			 <&mmsys CLK_MM_SMI_LARB4>;
+		clock-names = "apb", "smi";
+	};
+
 	imgsys: clock-controller@15000000 {
 		compatible = "mediatek,mt8173-imgsys", "syscon";
 		reg = <0 0x15000000 0 0x1000>;
 		#clock-cells = <1>;
 	};
 
+	larb2: larb@15001000 {
+		compatible = "mediatek,mt8173-smi-larb";
+		reg = <0 0x15001000 0 0x1000>;
+		mediatek,smi = <&smi_common>;
+		power-domains = <&scpsys MT8173_POWER_DOMAIN_ISP>;
+		clocks = <&imgsys CLK_IMG_LARB2_SMI>,
+			 <&imgsys CLK_IMG_LARB2_SMI>;
+		clock-names = "apb", "smi";
+	};
+
 	vdecsys: clock-controller@16000000 {
 		compatible = "mediatek,mt8173-vdecsys", "syscon";
 		reg = <0 0x16000000 0 0x1000>;
 		#clock-cells = <1>;
 	};
 
+	larb1: larb@16010000 {
+		compatible = "mediatek,mt8173-smi-larb";
+		reg = <0 0x16010000 0 0x1000>;
+		mediatek,smi = <&smi_common>;
+		power-domains = <&scpsys MT8173_POWER_DOMAIN_VDEC>;
+		clocks = <&vdecsys CLK_VDEC_CKEN>,
+			 <&vdecsys CLK_VDEC_LARB_CKEN>;
+		clock-names = "apb", "smi";
+	};
+
 	vencsys: clock-controller@18000000 {
 		compatible = "mediatek,mt8173-vencsys", "syscon";
 		reg = <0 0x18000000 0 0x1000>;
 		#clock-cells = <1>;
 	};
 
+	larb3: larb@18001000 {
+		compatible = "mediatek,mt8173-smi-larb";
+		reg = <0 0x18001000 0 0x1000>;
+		mediatek,smi = <&smi_common>;
+		power-domains = <&scpsys MT8173_POWER_DOMAIN_VENC>;
+		clocks = <&vencsys CLK_VENC_CKE1>,
+			 <&vencsys CLK_VENC_CKE0>;
+		clock-names = "apb", "smi";
+	};
+
 	vencltsys: clock-controller@19000000 {
 		compatible = "mediatek,mt8173-vencltsys", "syscon";
 		reg = <0 0x19000000 0 0x1000>;
 		#clock-cells = <1>;
 	};
 
+	larb5: larb@19001000 {
+		compatible = "mediatek,mt8173-smi-larb";
+		reg = <0 0x19001000 0 0x1000>;
+		mediatek,smi = <&smi_common>;
+		power-domains = <&scpsys MT8173_POWER_DOMAIN_VENC_LT>;
+		clocks = <&vencltsys CLK_VENCLT_CKE1>,
+			 <&vencltsys CLK_VENCLT_CKE0>;
+		clock-names = "apb", "smi";
+	};
 };
};
...@@ -39,6 +39,25 @@ config IOMMU_IO_PGTABLE_LPAE_SELFTEST
 	  If unsure, say N here.
 
+config IOMMU_IO_PGTABLE_ARMV7S
+	bool "ARMv7/v8 Short Descriptor Format"
+	select IOMMU_IO_PGTABLE
+	depends on HAS_DMA && (ARM || ARM64 || COMPILE_TEST)
+	help
+	  Enable support for the ARM Short-descriptor pagetable format.
+	  This supports 32-bit virtual and physical addresses mapped using
+	  2-level tables with 4KB pages/1MB sections, and contiguous entries
+	  for 64KB pages/16MB supersections if indicated by the IOMMU driver.
+
+config IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
+	bool "ARMv7s selftests"
+	depends on IOMMU_IO_PGTABLE_ARMV7S
+	help
+	  Enable self-tests for ARMv7s page table allocator. This performs
+	  a series of page-table consistency checks during boot.
+
+	  If unsure, say N here.
+
 endmenu
 
 config IOMMU_IOVA
...@@ -51,9 +70,9 @@ config OF_IOMMU
 # IOMMU-agnostic DMA-mapping layer
 config IOMMU_DMA
 	bool
-	depends on NEED_SG_DMA_LENGTH
 	select IOMMU_API
 	select IOMMU_IOVA
+	select NEED_SG_DMA_LENGTH
 
 config FSL_PAMU
 	bool "Freescale IOMMU support"
...@@ -243,7 +262,7 @@ config TEGRA_IOMMU_SMMU
 config EXYNOS_IOMMU
 	bool "Exynos IOMMU Support"
-	depends on ARCH_EXYNOS && ARM && MMU
+	depends on ARCH_EXYNOS && MMU
 	select IOMMU_API
 	select ARM_DMA_USE_IOMMU
 	help
...@@ -266,7 +285,7 @@ config EXYNOS_IOMMU_DEBUG
 config IPMMU_VMSA
 	bool "Renesas VMSA-compatible IPMMU"
 	depends on ARM_LPAE
-	depends on ARCH_SHMOBILE || COMPILE_TEST
+	depends on ARCH_RENESAS || COMPILE_TEST
 	select IOMMU_API
 	select IOMMU_IO_PGTABLE_LPAE
 	select ARM_DMA_USE_IOMMU
...@@ -318,4 +337,21 @@ config S390_IOMMU
 	help
 	  Support for the IOMMU API for s390 PCI devices.
 
+config MTK_IOMMU
+	bool "MTK IOMMU Support"
+	depends on ARM || ARM64
+	depends on ARCH_MEDIATEK || COMPILE_TEST
+	select ARM_DMA_USE_IOMMU
+	select IOMMU_API
+	select IOMMU_DMA
+	select IOMMU_IO_PGTABLE_ARMV7S
+	select MEMORY
+	select MTK_SMI
+	help
+	  Support for the M4U on certain Mediatek SOCs. M4U is MultiMedia
+	  Memory Management Unit. This option enables remapping of DMA memory
+	  accesses for the multimedia subsystem.
+
+	  If unsure, say N here.
+
 endif # IOMMU_SUPPORT
...@@ -3,6 +3,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
+obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
 obj-$(CONFIG_IOMMU_IOVA) += iova.o
 obj-$(CONFIG_OF_IOMMU) += of_iommu.o
...@@ -16,6 +17,7 @@ obj-$(CONFIG_INTEL_IOMMU) += intel-iommu.o
 obj-$(CONFIG_INTEL_IOMMU_SVM) += intel-svm.o
 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
 obj-$(CONFIG_IRQ_REMAP) += intel_irq_remapping.o irq_remapping.o
+obj-$(CONFIG_MTK_IOMMU) += mtk_iommu.o
 obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
 obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o
 obj-$(CONFIG_ROCKCHIP_IOMMU) += rockchip-iommu.o
......
...@@ -21,6 +21,7 @@
  */
 
 #include <linux/delay.h>
+#include <linux/dma-iommu.h>
 #include <linux/err.h>
 #include <linux/interrupt.h>
 #include <linux/iommu.h>
...@@ -1396,7 +1397,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 {
 	struct arm_smmu_domain *smmu_domain;
 
-	if (type != IOMMU_DOMAIN_UNMANAGED)
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
 		return NULL;
 
 	/*
...@@ -1408,6 +1409,12 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	if (!smmu_domain)
 		return NULL;
 
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&smmu_domain->domain)) {
+		kfree(smmu_domain);
+		return NULL;
+	}
+
 	mutex_init(&smmu_domain->init_mutex);
 	spin_lock_init(&smmu_domain->pgtbl_lock);
 	return &smmu_domain->domain;
...@@ -1436,6 +1443,7 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 
+	iommu_put_dma_cookie(domain);
 	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
 	/* Free the CD and ASID, if we allocated them */
...@@ -1630,6 +1638,17 @@ static int arm_smmu_install_ste_for_group(struct arm_smmu_group *smmu_group)
 	return 0;
 }
 
+static void arm_smmu_detach_dev(struct device *dev)
+{
+	struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev);
+
+	smmu_group->ste.bypass = true;
+	if (IS_ERR_VALUE(arm_smmu_install_ste_for_group(smmu_group)))
+		dev_warn(dev, "failed to install bypass STE\n");
+
+	smmu_group->domain = NULL;
+}
+
 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
 	int ret = 0;
...@@ -1642,7 +1661,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	/* Already attached to a different domain? */
 	if (smmu_group->domain && smmu_group->domain != smmu_domain)
-		return -EEXIST;
+		arm_smmu_detach_dev(dev);
 
 	smmu = smmu_group->smmu;
 	mutex_lock(&smmu_domain->init_mutex);
...@@ -1668,7 +1687,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		goto out_unlock;
 
 	smmu_group->domain = smmu_domain;
-	smmu_group->ste.bypass = false;
+
+	/*
+	 * FIXME: This should always be "false" once we have IOMMU-backed
+	 * DMA ops for all devices behind the SMMU.
+	 */
+	smmu_group->ste.bypass = domain->type == IOMMU_DOMAIN_DMA;
 
 	ret = arm_smmu_install_ste_for_group(smmu_group);
 	if (IS_ERR_VALUE(ret))
...@@ -1679,25 +1703,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	return ret;
 }
 
-static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
-{
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev);
-
-	BUG_ON(!smmu_domain);
-	BUG_ON(!smmu_group);
-
-	mutex_lock(&smmu_domain->init_mutex);
-	BUG_ON(smmu_group->domain != smmu_domain);
-
-	smmu_group->ste.bypass = true;
-	if (IS_ERR_VALUE(arm_smmu_install_ste_for_group(smmu_group)))
-		dev_warn(dev, "failed to install bypass STE\n");
-
-	smmu_group->domain = NULL;
-	mutex_unlock(&smmu_domain->init_mutex);
-}
-
 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 			phys_addr_t paddr, size_t size, int prot)
 {
...@@ -1935,7 +1940,6 @@ static struct iommu_ops arm_smmu_ops = {
 	.domain_alloc	= arm_smmu_domain_alloc,
 	.domain_free	= arm_smmu_domain_free,
 	.attach_dev	= arm_smmu_attach_dev,
-	.detach_dev	= arm_smmu_detach_dev,
 	.map		= arm_smmu_map,
 	.unmap		= arm_smmu_unmap,
 	.iova_to_phys	= arm_smmu_iova_to_phys,
......
...@@ -29,6 +29,7 @@ ...@@ -29,6 +29,7 @@
#define pr_fmt(fmt) "arm-smmu: " fmt #define pr_fmt(fmt) "arm-smmu: " fmt
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/dma-iommu.h>
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
...@@ -167,6 +168,9 @@ ...@@ -167,6 +168,9 @@
#define S2CR_TYPE_BYPASS (1 << S2CR_TYPE_SHIFT) #define S2CR_TYPE_BYPASS (1 << S2CR_TYPE_SHIFT)
#define S2CR_TYPE_FAULT (2 << S2CR_TYPE_SHIFT) #define S2CR_TYPE_FAULT (2 << S2CR_TYPE_SHIFT)
#define S2CR_PRIVCFG_SHIFT 24
#define S2CR_PRIVCFG_UNPRIV (2 << S2CR_PRIVCFG_SHIFT)
/* Context bank attribute registers */ /* Context bank attribute registers */
#define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2)) #define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2))
#define CBAR_VMID_SHIFT 0 #define CBAR_VMID_SHIFT 0
...@@ -257,9 +261,13 @@ ...@@ -257,9 +261,13 @@
#define FSYNR0_WNR (1 << 4) #define FSYNR0_WNR (1 << 4)
static int force_stage; static int force_stage;
module_param_named(force_stage, force_stage, int, S_IRUGO); module_param(force_stage, int, S_IRUGO);
MODULE_PARM_DESC(force_stage, MODULE_PARM_DESC(force_stage,
"Force SMMU mappings to be installed at a particular stage of translation. A value of '1' or '2' forces the corresponding stage. All other values are ignored (i.e. no stage is forced). Note that selecting a specific stage will disable support for nested translation."); "Force SMMU mappings to be installed at a particular stage of translation. A value of '1' or '2' forces the corresponding stage. All other values are ignored (i.e. no stage is forced). Note that selecting a specific stage will disable support for nested translation.");
static bool disable_bypass;
module_param(disable_bypass, bool, S_IRUGO);
MODULE_PARM_DESC(disable_bypass,
"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
enum arm_smmu_arch_version { enum arm_smmu_arch_version {
ARM_SMMU_V1 = 1, ARM_SMMU_V1 = 1,
...@@ -963,7 +971,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type) ...@@ -963,7 +971,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
{ {
struct arm_smmu_domain *smmu_domain; struct arm_smmu_domain *smmu_domain;
if (type != IOMMU_DOMAIN_UNMANAGED) if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
return NULL; return NULL;
/* /*
* Allocate the domain and initialise some of its data structures. * Allocate the domain and initialise some of its data structures.
...@@ -974,6 +982,12 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type) ...@@ -974,6 +982,12 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
if (!smmu_domain) if (!smmu_domain)
return NULL; return NULL;
if (type == IOMMU_DOMAIN_DMA &&
iommu_get_dma_cookie(&smmu_domain->domain)) {
kfree(smmu_domain);
return NULL;
}
mutex_init(&smmu_domain->init_mutex); mutex_init(&smmu_domain->init_mutex);
spin_lock_init(&smmu_domain->pgtbl_lock); spin_lock_init(&smmu_domain->pgtbl_lock);
...@@ -988,6 +1002,7 @@ static void arm_smmu_domain_free(struct iommu_domain *domain) ...@@ -988,6 +1002,7 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
* Free the domain resources. We assume that all devices have * Free the domain resources. We assume that all devices have
* already been detached. * already been detached.
*/ */
iommu_put_dma_cookie(domain);
arm_smmu_destroy_domain_context(domain); arm_smmu_destroy_domain_context(domain);
kfree(smmu_domain); kfree(smmu_domain);
} }
...@@ -1079,11 +1094,18 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain, ...@@ -1079,11 +1094,18 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
if (ret) if (ret)
return ret == -EEXIST ? 0 : ret; return ret == -EEXIST ? 0 : ret;
/*
* FIXME: This won't be needed once we have IOMMU-backed DMA ops
* for all devices behind the SMMU.
*/
if (smmu_domain->domain.type == IOMMU_DOMAIN_DMA)
return 0;
for (i = 0; i < cfg->num_streamids; ++i) { for (i = 0; i < cfg->num_streamids; ++i) {
u32 idx, s2cr; u32 idx, s2cr;
idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i]; idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
s2cr = S2CR_TYPE_TRANS | s2cr = S2CR_TYPE_TRANS | S2CR_PRIVCFG_UNPRIV |
(smmu_domain->cfg.cbndx << S2CR_CBNDX_SHIFT); (smmu_domain->cfg.cbndx << S2CR_CBNDX_SHIFT);
writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx)); writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx));
} }
...@@ -1108,14 +1130,24 @@ static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain, ...@@ -1108,14 +1130,24 @@ static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
*/ */
for (i = 0; i < cfg->num_streamids; ++i) { for (i = 0; i < cfg->num_streamids; ++i) {
u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i]; u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
u32 reg = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS;
writel_relaxed(S2CR_TYPE_BYPASS, writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_S2CR(idx));
gr0_base + ARM_SMMU_GR0_S2CR(idx));
} }
arm_smmu_master_free_smrs(smmu, cfg); arm_smmu_master_free_smrs(smmu, cfg);
} }
static void arm_smmu_detach_dev(struct device *dev,
struct arm_smmu_master_cfg *cfg)
{
struct iommu_domain *domain = dev->archdata.iommu;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
dev->archdata.iommu = NULL;
arm_smmu_domain_remove_master(smmu_domain, cfg);
}
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
{ {
int ret; int ret;
...@@ -1129,11 +1161,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) ...@@ -1129,11 +1161,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
return -ENXIO; return -ENXIO;
} }
if (dev->archdata.iommu) {
dev_err(dev, "already attached to IOMMU domain\n");
return -EEXIST;
}
 	/* Ensure that the domain is finalised */
 	ret = arm_smmu_init_domain_context(domain, smmu);
 	if (IS_ERR_VALUE(ret))
@@ -1155,25 +1182,16 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	if (!cfg)
 		return -ENODEV;

+	/* Detach the dev from its current domain */
+	if (dev->archdata.iommu)
+		arm_smmu_detach_dev(dev, cfg);
+
 	ret = arm_smmu_domain_add_master(smmu_domain, cfg);
 	if (!ret)
 		dev->archdata.iommu = domain;
 	return ret;
 }

-static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
-{
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct arm_smmu_master_cfg *cfg;
-
-	cfg = find_smmu_master_cfg(dev);
-	if (!cfg)
-		return;
-
-	dev->archdata.iommu = NULL;
-	arm_smmu_domain_remove_master(smmu_domain, cfg);
-}
-
 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 			phys_addr_t paddr, size_t size, int prot)
 {
@@ -1449,7 +1467,6 @@ static struct iommu_ops arm_smmu_ops = {
 	.domain_alloc		= arm_smmu_domain_alloc,
 	.domain_free		= arm_smmu_domain_free,
 	.attach_dev		= arm_smmu_attach_dev,
-	.detach_dev		= arm_smmu_detach_dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
 	.map_sg			= default_iommu_map_sg,
@@ -1473,11 +1490,11 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
 	reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR);
 	writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR);

-	/* Mark all SMRn as invalid and all S2CRn as bypass */
+	/* Mark all SMRn as invalid and all S2CRn as bypass unless overridden */
+	reg = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS;
 	for (i = 0; i < smmu->num_mapping_groups; ++i) {
 		writel_relaxed(0, gr0_base + ARM_SMMU_GR0_SMR(i));
-		writel_relaxed(S2CR_TYPE_BYPASS,
-			gr0_base + ARM_SMMU_GR0_S2CR(i));
+		writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_S2CR(i));
 	}

 	/* Make sure all context banks are disabled and clear CB_FSR */
@@ -1499,8 +1516,12 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
 	/* Disable TLB broadcasting. */
 	reg |= (sCR0_VMIDPNE | sCR0_PTM);

-	/* Enable client access, but bypass when no mapping is found */
-	reg &= ~(sCR0_CLIENTPD | sCR0_USFCFG);
+	/* Enable client access, handling unmatched streams as appropriate */
+	reg &= ~sCR0_CLIENTPD;
+	if (disable_bypass)
+		reg |= sCR0_USFCFG;
+	else
+		reg &= ~sCR0_USFCFG;

 	/* Disable forced broadcasting */
 	reg &= ~sCR0_FB;
...
-/* linux/drivers/iommu/exynos_iommu.c
- *
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+/*
+ * Copyright (c) 2011,2016 Samsung Electronics Co., Ltd.
  *		http://www.samsung.com
  *
  * This program is free software; you can redistribute it and/or modify
@@ -25,10 +24,7 @@
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
+#include <linux/dma-iommu.h>

-#include <asm/cacheflush.h>
-#include <asm/dma-iommu.h>
-#include <asm/pgtable.h>
 typedef u32 sysmmu_iova_t;
 typedef u32 sysmmu_pte_t;
@@ -58,17 +54,25 @@ typedef u32 sysmmu_pte_t;
 #define lv2ent_small(pent) ((*(pent) & 2) == 2)
 #define lv2ent_large(pent) ((*(pent) & 3) == 1)

-static u32 sysmmu_page_offset(sysmmu_iova_t iova, u32 size)
-{
-	return iova & (size - 1);
-}
-
-#define section_phys(sent) (*(sent) & SECT_MASK)
-#define section_offs(iova) sysmmu_page_offset((iova), SECT_SIZE)
-#define lpage_phys(pent) (*(pent) & LPAGE_MASK)
-#define lpage_offs(iova) sysmmu_page_offset((iova), LPAGE_SIZE)
-#define spage_phys(pent) (*(pent) & SPAGE_MASK)
-#define spage_offs(iova) sysmmu_page_offset((iova), SPAGE_SIZE)
+/*
+ * v1.x - v3.x SYSMMU supports 32bit physical and 32bit virtual address spaces
+ * v5.0 introduced support for 36bit physical address space by shifting
+ * all page entry values by 4 bits.
+ * All SYSMMU controllers in the system support the address spaces of the same
+ * size, so PG_ENT_SHIFT can be initialized on first SYSMMU probe to proper
+ * value (0 or 4).
+ */
+static short PG_ENT_SHIFT = -1;
+#define SYSMMU_PG_ENT_SHIFT 0
+#define SYSMMU_V5_PG_ENT_SHIFT 4
+
+#define sect_to_phys(ent) (((phys_addr_t) ent) << PG_ENT_SHIFT)
+#define section_phys(sent) (sect_to_phys(*(sent)) & SECT_MASK)
+#define section_offs(iova) (iova & (SECT_SIZE - 1))
+#define lpage_phys(pent) (sect_to_phys(*(pent)) & LPAGE_MASK)
+#define lpage_offs(iova) (iova & (LPAGE_SIZE - 1))
+#define spage_phys(pent) (sect_to_phys(*(pent)) & SPAGE_MASK)
+#define spage_offs(iova) (iova & (SPAGE_SIZE - 1))
 #define NUM_LV1ENTRIES 4096
 #define NUM_LV2ENTRIES (SECT_SIZE / SPAGE_SIZE)
@@ -83,16 +87,16 @@ static u32 lv2ent_offset(sysmmu_iova_t iova)
 	return (iova >> SPAGE_ORDER) & (NUM_LV2ENTRIES - 1);
 }

+#define LV1TABLE_SIZE (NUM_LV1ENTRIES * sizeof(sysmmu_pte_t))
 #define LV2TABLE_SIZE (NUM_LV2ENTRIES * sizeof(sysmmu_pte_t))
 #define SPAGES_PER_LPAGE (LPAGE_SIZE / SPAGE_SIZE)

-#define lv2table_base(sent) (*(sent) & 0xFFFFFC00)
+#define lv2table_base(sent) (sect_to_phys(*(sent) & 0xFFFFFFC0))

-#define mk_lv1ent_sect(pa) ((pa) | 2)
-#define mk_lv1ent_page(pa) ((pa) | 1)
-#define mk_lv2ent_lpage(pa) ((pa) | 1)
-#define mk_lv2ent_spage(pa) ((pa) | 2)
+#define mk_lv1ent_sect(pa) ((pa >> PG_ENT_SHIFT) | 2)
+#define mk_lv1ent_page(pa) ((pa >> PG_ENT_SHIFT) | 1)
+#define mk_lv2ent_lpage(pa) ((pa >> PG_ENT_SHIFT) | 1)
+#define mk_lv2ent_spage(pa) ((pa >> PG_ENT_SHIFT) | 2)

 #define CTRL_ENABLE	0x5
 #define CTRL_BLOCK	0x7
@@ -100,14 +104,23 @@ static u32 lv2ent_offset(sysmmu_iova_t iova)
 #define CFG_LRU		0x1
 #define CFG_QOS(n)	((n & 0xF) << 7)
+#define CFG_MASK	0x0150FFFF /* Selecting bit 0-15, 20, 22 and 24 */
 #define CFG_ACGEN	(1 << 24) /* System MMU 3.3 only */
 #define CFG_SYSSEL	(1 << 22) /* System MMU 3.2 only */
 #define CFG_FLPDCACHE	(1 << 20) /* System MMU 3.2+ only */

+/* common registers */
 #define REG_MMU_CTRL		0x000
 #define REG_MMU_CFG		0x004
 #define REG_MMU_STATUS		0x008
+#define REG_MMU_VERSION		0x034
+
+#define MMU_MAJ_VER(val)	((val) >> 7)
+#define MMU_MIN_VER(val)	((val) & 0x7F)
+#define MMU_RAW_VER(reg)	(((reg) >> 21) & ((1 << 11) - 1)) /* 11 bits */
+
+#define MAKE_MMU_VER(maj, min)	((((maj) & 0xF) << 7) | ((min) & 0x7F))
+
+/* v1.x - v3.x registers */
 #define REG_MMU_FLUSH		0x00C
 #define REG_MMU_FLUSH_ENTRY	0x010
 #define REG_PT_BASE_ADDR	0x014
@@ -119,21 +132,18 @@ static u32 lv2ent_offset(sysmmu_iova_t iova)
 #define REG_AR_FAULT_ADDR	0x02C
 #define REG_DEFAULT_SLAVE_ADDR	0x030

-#define REG_MMU_VERSION		0x034
-
-#define MMU_MAJ_VER(val)	((val) >> 7)
-#define MMU_MIN_VER(val)	((val) & 0x7F)
-#define MMU_RAW_VER(reg)	(((reg) >> 21) & ((1 << 11) - 1)) /* 11 bits */
-
-#define MAKE_MMU_VER(maj, min)	((((maj) & 0xF) << 7) | ((min) & 0x7F))
-
-#define REG_PB0_SADDR		0x04C
-#define REG_PB0_EADDR		0x050
-#define REG_PB1_SADDR		0x054
-#define REG_PB1_EADDR		0x058
+/* v5.x registers */
+#define REG_V5_PT_BASE_PFN	0x00C
+#define REG_V5_MMU_FLUSH_ALL	0x010
+#define REG_V5_MMU_FLUSH_ENTRY	0x014
+#define REG_V5_INT_STATUS	0x060
+#define REG_V5_INT_CLEAR	0x064
+#define REG_V5_FAULT_AR_VA	0x070
+#define REG_V5_FAULT_AW_VA	0x080
 #define has_sysmmu(dev)		(dev->archdata.iommu != NULL)

+static struct device *dma_dev;
 static struct kmem_cache *lv2table_kmem_cache;
 static sysmmu_pte_t *zero_lv2_table;
 #define ZERO_LV2LINK mk_lv1ent_page(virt_to_phys(zero_lv2_table))
@@ -149,40 +159,38 @@ static sysmmu_pte_t *page_entry(sysmmu_pte_t *sent, sysmmu_iova_t iova)
 		lv2table_base(sent)) + lv2ent_offset(iova);
 }
-enum exynos_sysmmu_inttype {
-	SYSMMU_PAGEFAULT,
-	SYSMMU_AR_MULTIHIT,
-	SYSMMU_AW_MULTIHIT,
-	SYSMMU_BUSERROR,
-	SYSMMU_AR_SECURITY,
-	SYSMMU_AR_ACCESS,
-	SYSMMU_AW_SECURITY,
-	SYSMMU_AW_PROTECTION, /* 7 */
-	SYSMMU_FAULT_UNKNOWN,
-	SYSMMU_FAULTS_NUM
-};
-
-static unsigned short fault_reg_offset[SYSMMU_FAULTS_NUM] = {
-	REG_PAGE_FAULT_ADDR,
-	REG_AR_FAULT_ADDR,
-	REG_AW_FAULT_ADDR,
-	REG_DEFAULT_SLAVE_ADDR,
-	REG_AR_FAULT_ADDR,
-	REG_AR_FAULT_ADDR,
-	REG_AW_FAULT_ADDR,
-	REG_AW_FAULT_ADDR
-};
-
-static char *sysmmu_fault_name[SYSMMU_FAULTS_NUM] = {
-	"PAGE FAULT",
-	"AR MULTI-HIT FAULT",
-	"AW MULTI-HIT FAULT",
-	"BUS ERROR",
-	"AR SECURITY PROTECTION FAULT",
-	"AR ACCESS PROTECTION FAULT",
-	"AW SECURITY PROTECTION FAULT",
-	"AW ACCESS PROTECTION FAULT",
-	"UNKNOWN FAULT"
-};
+/*
+ * IOMMU fault information register
+ */
+struct sysmmu_fault_info {
+	unsigned int bit;	/* bit number in STATUS register */
+	unsigned short addr_reg; /* register to read VA fault address */
+	const char *name;	/* human readable fault name */
+	unsigned int type;	/* fault type for report_iommu_fault */
+};
+
+static const struct sysmmu_fault_info sysmmu_faults[] = {
+	{ 0, REG_PAGE_FAULT_ADDR, "PAGE", IOMMU_FAULT_READ },
+	{ 1, REG_AR_FAULT_ADDR, "AR MULTI-HIT", IOMMU_FAULT_READ },
+	{ 2, REG_AW_FAULT_ADDR, "AW MULTI-HIT", IOMMU_FAULT_WRITE },
+	{ 3, REG_DEFAULT_SLAVE_ADDR, "BUS ERROR", IOMMU_FAULT_READ },
+	{ 4, REG_AR_FAULT_ADDR, "AR SECURITY PROTECTION", IOMMU_FAULT_READ },
+	{ 5, REG_AR_FAULT_ADDR, "AR ACCESS PROTECTION", IOMMU_FAULT_READ },
+	{ 6, REG_AW_FAULT_ADDR, "AW SECURITY PROTECTION", IOMMU_FAULT_WRITE },
+	{ 7, REG_AW_FAULT_ADDR, "AW ACCESS PROTECTION", IOMMU_FAULT_WRITE },
+};
+
+static const struct sysmmu_fault_info sysmmu_v5_faults[] = {
+	{ 0, REG_V5_FAULT_AR_VA, "AR PTW", IOMMU_FAULT_READ },
+	{ 1, REG_V5_FAULT_AR_VA, "AR PAGE", IOMMU_FAULT_READ },
+	{ 2, REG_V5_FAULT_AR_VA, "AR MULTI-HIT", IOMMU_FAULT_READ },
+	{ 3, REG_V5_FAULT_AR_VA, "AR ACCESS PROTECTION", IOMMU_FAULT_READ },
+	{ 4, REG_V5_FAULT_AR_VA, "AR SECURITY PROTECTION", IOMMU_FAULT_READ },
+	{ 16, REG_V5_FAULT_AW_VA, "AW PTW", IOMMU_FAULT_WRITE },
+	{ 17, REG_V5_FAULT_AW_VA, "AW PAGE", IOMMU_FAULT_WRITE },
+	{ 18, REG_V5_FAULT_AW_VA, "AW MULTI-HIT", IOMMU_FAULT_WRITE },
+	{ 19, REG_V5_FAULT_AW_VA, "AW ACCESS PROTECTION", IOMMU_FAULT_WRITE },
+	{ 20, REG_V5_FAULT_AW_VA, "AW SECURITY PROTECTION", IOMMU_FAULT_WRITE },
+};
 /*
@@ -193,6 +201,7 @@ static char *sysmmu_fault_name[SYSMMU_FAULTS_NUM] = {
  */
 struct exynos_iommu_owner {
 	struct list_head controllers;	/* list of sysmmu_drvdata.owner_node */
+	struct iommu_domain *domain;	/* domain this device is attached */
 };

 /*
@@ -221,6 +230,8 @@ struct sysmmu_drvdata {
 	struct device *master;		/* master device (owner) */
 	void __iomem *sfrbase;		/* our registers */
 	struct clk *clk;		/* SYSMMU's clock */
+	struct clk *aclk;		/* SYSMMU's aclk clock */
+	struct clk *pclk;		/* SYSMMU's pclk clock */
 	struct clk *clk_master;		/* master's device clock */
 	int activations;		/* number of calls to sysmmu_enable */
 	spinlock_t lock;		/* lock for modyfying state */
@@ -255,70 +266,101 @@ static bool is_sysmmu_active(struct sysmmu_drvdata *data)
 	return data->activations > 0;
 }
-static void sysmmu_unblock(void __iomem *sfrbase)
+static void sysmmu_unblock(struct sysmmu_drvdata *data)
 {
-	__raw_writel(CTRL_ENABLE, sfrbase + REG_MMU_CTRL);
+	writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL);
 }

-static bool sysmmu_block(void __iomem *sfrbase)
+static bool sysmmu_block(struct sysmmu_drvdata *data)
 {
 	int i = 120;

-	__raw_writel(CTRL_BLOCK, sfrbase + REG_MMU_CTRL);
-	while ((i > 0) && !(__raw_readl(sfrbase + REG_MMU_STATUS) & 1))
+	writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL);
+	while ((i > 0) && !(readl(data->sfrbase + REG_MMU_STATUS) & 1))
 		--i;

-	if (!(__raw_readl(sfrbase + REG_MMU_STATUS) & 1)) {
-		sysmmu_unblock(sfrbase);
+	if (!(readl(data->sfrbase + REG_MMU_STATUS) & 1)) {
+		sysmmu_unblock(data);
 		return false;
 	}

 	return true;
 }

-static void __sysmmu_tlb_invalidate(void __iomem *sfrbase)
+static void __sysmmu_tlb_invalidate(struct sysmmu_drvdata *data)
 {
-	__raw_writel(0x1, sfrbase + REG_MMU_FLUSH);
+	if (MMU_MAJ_VER(data->version) < 5)
+		writel(0x1, data->sfrbase + REG_MMU_FLUSH);
+	else
+		writel(0x1, data->sfrbase + REG_V5_MMU_FLUSH_ALL);
 }

-static void __sysmmu_tlb_invalidate_entry(void __iomem *sfrbase,
-				sysmmu_iova_t iova, unsigned int num_inv)
+static void __sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
+					  sysmmu_iova_t iova, unsigned int num_inv)
 {
 	unsigned int i;

 	for (i = 0; i < num_inv; i++) {
-		__raw_writel((iova & SPAGE_MASK) | 1,
-				sfrbase + REG_MMU_FLUSH_ENTRY);
+		if (MMU_MAJ_VER(data->version) < 5)
+			writel((iova & SPAGE_MASK) | 1,
+			       data->sfrbase + REG_MMU_FLUSH_ENTRY);
+		else
+			writel((iova & SPAGE_MASK) | 1,
+			       data->sfrbase + REG_V5_MMU_FLUSH_ENTRY);
 		iova += SPAGE_SIZE;
 	}
 }

-static void __sysmmu_set_ptbase(void __iomem *sfrbase,
-				       phys_addr_t pgd)
+static void __sysmmu_set_ptbase(struct sysmmu_drvdata *data, phys_addr_t pgd)
 {
-	__raw_writel(pgd, sfrbase + REG_PT_BASE_ADDR);
+	if (MMU_MAJ_VER(data->version) < 5)
+		writel(pgd, data->sfrbase + REG_PT_BASE_ADDR);
+	else
+		writel(pgd >> PAGE_SHIFT,
+		       data->sfrbase + REG_V5_PT_BASE_PFN);

-	__sysmmu_tlb_invalidate(sfrbase);
+	__sysmmu_tlb_invalidate(data);
 }
-static void show_fault_information(const char *name,
-		enum exynos_sysmmu_inttype itype,
-		phys_addr_t pgtable_base, sysmmu_iova_t fault_addr)
-{
-	sysmmu_pte_t *ent;
-
-	if ((itype >= SYSMMU_FAULTS_NUM) || (itype < SYSMMU_PAGEFAULT))
-		itype = SYSMMU_FAULT_UNKNOWN;
-
-	pr_err("%s occurred at %#x by %s(Page table base: %pa)\n",
-		sysmmu_fault_name[itype], fault_addr, name, &pgtable_base);
-
-	ent = section_entry(phys_to_virt(pgtable_base), fault_addr);
-	pr_err("\tLv1 entry: %#x\n", *ent);
-
-	if (lv1ent_page(ent)) {
-		ent = page_entry(ent, fault_addr);
-		pr_err("\t Lv2 entry: %#x\n", *ent);
-	}
-}
+static void __sysmmu_get_version(struct sysmmu_drvdata *data)
+{
+	u32 ver;
+
+	clk_enable(data->clk_master);
+	clk_enable(data->clk);
+	clk_enable(data->pclk);
+	clk_enable(data->aclk);
+
+	ver = readl(data->sfrbase + REG_MMU_VERSION);
+
+	/* controllers on some SoCs don't report proper version */
+	if (ver == 0x80000001u)
+		data->version = MAKE_MMU_VER(1, 0);
+	else
+		data->version = MMU_RAW_VER(ver);
+
+	dev_dbg(data->sysmmu, "hardware version: %d.%d\n",
+		MMU_MAJ_VER(data->version), MMU_MIN_VER(data->version));
+
+	clk_disable(data->aclk);
+	clk_disable(data->pclk);
+	clk_disable(data->clk);
+	clk_disable(data->clk_master);
+}
+
+static void show_fault_information(struct sysmmu_drvdata *data,
+				   const struct sysmmu_fault_info *finfo,
+				   sysmmu_iova_t fault_addr)
+{
+	sysmmu_pte_t *ent;
+
+	dev_err(data->sysmmu, "%s FAULT occurred at %#x (page table base: %pa)\n",
+		finfo->name, fault_addr, &data->pgtable);
+	ent = section_entry(phys_to_virt(data->pgtable), fault_addr);
+	dev_err(data->sysmmu, "\tLv1 entry: %#x\n", *ent);
+	if (lv1ent_page(ent)) {
+		ent = page_entry(ent, fault_addr);
+		dev_err(data->sysmmu, "\t Lv2 entry: %#x\n", *ent);
+	}
+}
@@ -326,49 +368,52 @@ static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id)
 {
 	/* SYSMMU is in blocked state when interrupt occurred. */
 	struct sysmmu_drvdata *data = dev_id;
-	enum exynos_sysmmu_inttype itype;
-	sysmmu_iova_t addr = -1;
+	const struct sysmmu_fault_info *finfo;
+	unsigned int i, n, itype;
+	sysmmu_iova_t fault_addr = -1;
+	unsigned short reg_status, reg_clear;
 	int ret = -ENOSYS;

 	WARN_ON(!is_sysmmu_active(data));

+	if (MMU_MAJ_VER(data->version) < 5) {
+		reg_status = REG_INT_STATUS;
+		reg_clear = REG_INT_CLEAR;
+		finfo = sysmmu_faults;
+		n = ARRAY_SIZE(sysmmu_faults);
+	} else {
+		reg_status = REG_V5_INT_STATUS;
+		reg_clear = REG_V5_INT_CLEAR;
+		finfo = sysmmu_v5_faults;
+		n = ARRAY_SIZE(sysmmu_v5_faults);
+	}
+
 	spin_lock(&data->lock);

-	if (!IS_ERR(data->clk_master))
-		clk_enable(data->clk_master);
+	clk_enable(data->clk_master);

-	itype = (enum exynos_sysmmu_inttype)
-		__ffs(__raw_readl(data->sfrbase + REG_INT_STATUS));
-	if (WARN_ON(!((itype >= 0) && (itype < SYSMMU_FAULT_UNKNOWN))))
-		itype = SYSMMU_FAULT_UNKNOWN;
-	else
-		addr = __raw_readl(data->sfrbase + fault_reg_offset[itype]);
-
-	if (itype == SYSMMU_FAULT_UNKNOWN) {
-		pr_err("%s: Fault is not occurred by System MMU '%s'!\n",
-			__func__, dev_name(data->sysmmu));
-		pr_err("%s: Please check if IRQ is correctly configured.\n",
-			__func__);
-		BUG();
-	} else {
-		unsigned int base =
-				__raw_readl(data->sfrbase + REG_PT_BASE_ADDR);
-		show_fault_information(dev_name(data->sysmmu),
-					itype, base, addr);
-		if (data->domain)
-			ret = report_iommu_fault(&data->domain->domain,
-					data->master, addr, itype);
-	}
+	itype = __ffs(readl(data->sfrbase + reg_status));
+	for (i = 0; i < n; i++, finfo++)
+		if (finfo->bit == itype)
+			break;
+	/* unknown/unsupported fault */
+	BUG_ON(i == n);
+
+	/* print debug message */
+	fault_addr = readl(data->sfrbase + finfo->addr_reg);
+	show_fault_information(data, finfo, fault_addr);
+
+	if (data->domain)
+		ret = report_iommu_fault(&data->domain->domain,
+					 data->master, fault_addr, finfo->type);

 	/* fault is not recovered by fault handler */
 	BUG_ON(ret != 0);

-	__raw_writel(1 << itype, data->sfrbase + REG_INT_CLEAR);
+	writel(1 << itype, data->sfrbase + reg_clear);

-	sysmmu_unblock(data->sfrbase);
+	sysmmu_unblock(data);

-	if (!IS_ERR(data->clk_master))
-		clk_disable(data->clk_master);
+	clk_disable(data->clk_master);

 	spin_unlock(&data->lock);
 static void __sysmmu_disable_nocount(struct sysmmu_drvdata *data)
 {
-	if (!IS_ERR(data->clk_master))
-		clk_enable(data->clk_master);
+	clk_enable(data->clk_master);

-	__raw_writel(CTRL_DISABLE, data->sfrbase + REG_MMU_CTRL);
-	__raw_writel(0, data->sfrbase + REG_MMU_CFG);
+	writel(CTRL_DISABLE, data->sfrbase + REG_MMU_CTRL);
+	writel(0, data->sfrbase + REG_MMU_CFG);

+	clk_disable(data->aclk);
+	clk_disable(data->pclk);
 	clk_disable(data->clk);
-	if (!IS_ERR(data->clk_master))
-		clk_disable(data->clk_master);
+	clk_disable(data->clk_master);
 }

 static bool __sysmmu_disable(struct sysmmu_drvdata *data)
@@ -416,42 +461,34 @@ static bool __sysmmu_disable(struct sysmmu_drvdata *data)
 static void __sysmmu_init_config(struct sysmmu_drvdata *data)
 {
-	unsigned int cfg = CFG_LRU | CFG_QOS(15);
-	unsigned int ver;
-
-	ver = MMU_RAW_VER(__raw_readl(data->sfrbase + REG_MMU_VERSION));
-	if (MMU_MAJ_VER(ver) == 3) {
-		if (MMU_MIN_VER(ver) >= 2) {
-			cfg |= CFG_FLPDCACHE;
-			if (MMU_MIN_VER(ver) == 3) {
-				cfg |= CFG_ACGEN;
-				cfg &= ~CFG_LRU;
-			} else {
-				cfg |= CFG_SYSSEL;
-			}
-		}
-	}
+	unsigned int cfg;

-	__raw_writel(cfg, data->sfrbase + REG_MMU_CFG);
-	data->version = ver;
+	if (data->version <= MAKE_MMU_VER(3, 1))
+		cfg = CFG_LRU | CFG_QOS(15);
+	else if (data->version <= MAKE_MMU_VER(3, 2))
+		cfg = CFG_LRU | CFG_QOS(15) | CFG_FLPDCACHE | CFG_SYSSEL;
+	else
+		cfg = CFG_QOS(15) | CFG_FLPDCACHE | CFG_ACGEN;
+
+	writel(cfg, data->sfrbase + REG_MMU_CFG);
 }

 static void __sysmmu_enable_nocount(struct sysmmu_drvdata *data)
 {
-	if (!IS_ERR(data->clk_master))
-		clk_enable(data->clk_master);
+	clk_enable(data->clk_master);
 	clk_enable(data->clk);
+	clk_enable(data->pclk);
+	clk_enable(data->aclk);

-	__raw_writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL);
+	writel(CTRL_BLOCK, data->sfrbase + REG_MMU_CTRL);

 	__sysmmu_init_config(data);

-	__sysmmu_set_ptbase(data->sfrbase, data->pgtable);
+	__sysmmu_set_ptbase(data, data->pgtable);

-	__raw_writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL);
+	writel(CTRL_ENABLE, data->sfrbase + REG_MMU_CTRL);

-	if (!IS_ERR(data->clk_master))
-		clk_disable(data->clk_master);
+	clk_disable(data->clk_master);
 }
 static int __sysmmu_enable(struct sysmmu_drvdata *data, phys_addr_t pgtable,
@@ -482,28 +519,21 @@ static int __sysmmu_enable(struct sysmmu_drvdata *data, phys_addr_t pgtable,
 	return ret;
 }

-static void __sysmmu_tlb_invalidate_flpdcache(struct sysmmu_drvdata *data,
-					      sysmmu_iova_t iova)
-{
-	if (data->version == MAKE_MMU_VER(3, 3))
-		__raw_writel(iova | 0x1, data->sfrbase + REG_MMU_FLUSH_ENTRY);
-}
-
 static void sysmmu_tlb_invalidate_flpdcache(struct sysmmu_drvdata *data,
 					sysmmu_iova_t iova)
 {
 	unsigned long flags;

-	if (!IS_ERR(data->clk_master))
-		clk_enable(data->clk_master);
+	clk_enable(data->clk_master);

 	spin_lock_irqsave(&data->lock, flags);
-	if (is_sysmmu_active(data))
-		__sysmmu_tlb_invalidate_flpdcache(data, iova);
+	if (is_sysmmu_active(data)) {
+		if (data->version >= MAKE_MMU_VER(3, 3))
+			__sysmmu_tlb_invalidate_entry(data, iova, 1);
+	}
 	spin_unlock_irqrestore(&data->lock, flags);

-	if (!IS_ERR(data->clk_master))
-		clk_disable(data->clk_master);
+	clk_disable(data->clk_master);
 }
 static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
@@ -515,8 +545,7 @@ static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
 	if (is_sysmmu_active(data)) {
 		unsigned int num_inv = 1;

-		if (!IS_ERR(data->clk_master))
-			clk_enable(data->clk_master);
+		clk_enable(data->clk_master);

 		/*
 		 * L2TLB invalidation required
@@ -531,13 +560,11 @@ static void sysmmu_tlb_invalidate_entry(struct sysmmu_drvdata *data,
 		if (MMU_MAJ_VER(data->version) == 2)
 			num_inv = min_t(unsigned int, size / PAGE_SIZE, 64);

-		if (sysmmu_block(data->sfrbase)) {
-			__sysmmu_tlb_invalidate_entry(
-				data->sfrbase, iova, num_inv);
-			sysmmu_unblock(data->sfrbase);
+		if (sysmmu_block(data)) {
+			__sysmmu_tlb_invalidate_entry(data, iova, num_inv);
+			sysmmu_unblock(data);
 		}
-		if (!IS_ERR(data->clk_master))
-			clk_disable(data->clk_master);
+		clk_disable(data->clk_master);
 	} else {
 		dev_dbg(data->master,
 			"disabled. Skipping TLB invalidation @ %#x\n", iova);
@@ -575,25 +602,52 @@ static int __init exynos_sysmmu_probe(struct platform_device *pdev)
 	}

 	data->clk = devm_clk_get(dev, "sysmmu");
-	if (IS_ERR(data->clk)) {
-		dev_err(dev, "Failed to get clock!\n");
-		return PTR_ERR(data->clk);
-	} else {
+	if (!IS_ERR(data->clk)) {
 		ret = clk_prepare(data->clk);
 		if (ret) {
 			dev_err(dev, "Failed to prepare clk\n");
 			return ret;
 		}
+	} else {
+		data->clk = NULL;
+	}
+
+	data->aclk = devm_clk_get(dev, "aclk");
+	if (!IS_ERR(data->aclk)) {
+		ret = clk_prepare(data->aclk);
+		if (ret) {
+			dev_err(dev, "Failed to prepare aclk\n");
+			return ret;
+		}
+	} else {
+		data->aclk = NULL;
+	}
+
+	data->pclk = devm_clk_get(dev, "pclk");
+	if (!IS_ERR(data->pclk)) {
+		ret = clk_prepare(data->pclk);
+		if (ret) {
+			dev_err(dev, "Failed to prepare pclk\n");
+			return ret;
+		}
+	} else {
+		data->pclk = NULL;
+	}
+
+	if (!data->clk && (!data->aclk || !data->pclk)) {
+		dev_err(dev, "Failed to get device clock(s)!\n");
+		return -ENOSYS;
 	}

 	data->clk_master = devm_clk_get(dev, "master");
 	if (!IS_ERR(data->clk_master)) {
 		ret = clk_prepare(data->clk_master);
 		if (ret) {
-			clk_unprepare(data->clk);
 			dev_err(dev, "Failed to prepare master's clk\n");
 			return ret;
 		}
+	} else {
+		data->clk_master = NULL;
 	}
 	data->sysmmu = dev;
@@ -601,6 +655,14 @@ static int __init exynos_sysmmu_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, data);

+	__sysmmu_get_version(data);
+	if (PG_ENT_SHIFT < 0) {
+		if (MMU_MAJ_VER(data->version) < 5)
+			PG_ENT_SHIFT = SYSMMU_PG_ENT_SHIFT;
+		else
+			PG_ENT_SHIFT = SYSMMU_V5_PG_ENT_SHIFT;
+	}
+
 	pm_runtime_enable(dev);

 	return 0;
@@ -650,28 +712,38 @@ static struct platform_driver exynos_sysmmu_driver __refdata = {
 	}
 };
-static inline void pgtable_flush(void *vastart, void *vaend)
+static inline void update_pte(sysmmu_pte_t *ent, sysmmu_pte_t val)
 {
-	dmac_flush_range(vastart, vaend);
-	outer_flush_range(virt_to_phys(vastart),
-				virt_to_phys(vaend));
+	dma_sync_single_for_cpu(dma_dev, virt_to_phys(ent), sizeof(*ent),
+				DMA_TO_DEVICE);
+	*ent = val;
+	dma_sync_single_for_device(dma_dev, virt_to_phys(ent), sizeof(*ent),
+				   DMA_TO_DEVICE);
 }
 static struct iommu_domain *exynos_iommu_domain_alloc(unsigned type)
 {
 	struct exynos_iommu_domain *domain;
+	dma_addr_t handle;
 	int i;

-	if (type != IOMMU_DOMAIN_UNMANAGED)
-		return NULL;
+	/* Check if correct PTE offsets are initialized */
+	BUG_ON(PG_ENT_SHIFT < 0 || !dma_dev);

 	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
 	if (!domain)
 		return NULL;

+	if (type == IOMMU_DOMAIN_DMA) {
+		if (iommu_get_dma_cookie(&domain->domain) != 0)
+			goto err_pgtable;
+	} else if (type != IOMMU_DOMAIN_UNMANAGED) {
+		goto err_pgtable;
+	}
+
 	domain->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2);
 	if (!domain->pgtable)
-		goto err_pgtable;
+		goto err_dma_cookie;

 	domain->lv2entcnt = (short *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
 	if (!domain->lv2entcnt)
@@ -689,7 +761,10 @@ static struct iommu_domain *exynos_iommu_domain_alloc(unsigned type)
 		domain->pgtable[i + 7] = ZERO_LV2LINK;
 	}

-	pgtable_flush(domain->pgtable, domain->pgtable + NUM_LV1ENTRIES);
+	handle = dma_map_single(dma_dev, domain->pgtable, LV1TABLE_SIZE,
+				DMA_TO_DEVICE);
+	/* For mapping page table entries we rely on dma == phys */
+	BUG_ON(handle != virt_to_phys(domain->pgtable));

 	spin_lock_init(&domain->lock);
 	spin_lock_init(&domain->pgtablelock);
@@ -703,6 +778,9 @@ static struct iommu_domain *exynos_iommu_domain_alloc(unsigned type)

 err_counter:
 	free_pages((unsigned long)domain->pgtable, 2);
+err_dma_cookie:
+	if (type == IOMMU_DOMAIN_DMA)
+		iommu_put_dma_cookie(&domain->domain);
 err_pgtable:
 	kfree(domain);
 	return NULL;
@@ -727,16 +805,62 @@ static void exynos_iommu_domain_free(struct iommu_domain *iommu_domain)
 	spin_unlock_irqrestore(&domain->lock, flags);

+	if (iommu_domain->type == IOMMU_DOMAIN_DMA)
+		iommu_put_dma_cookie(iommu_domain);
+
+	dma_unmap_single(dma_dev, virt_to_phys(domain->pgtable), LV1TABLE_SIZE,
+			 DMA_TO_DEVICE);
+
 	for (i = 0; i < NUM_LV1ENTRIES; i++)
-		if (lv1ent_page(domain->pgtable + i))
+		if (lv1ent_page(domain->pgtable + i)) {
+			phys_addr_t base = lv2table_base(domain->pgtable + i);
+
+			dma_unmap_single(dma_dev, base, LV2TABLE_SIZE,
+					 DMA_TO_DEVICE);
 			kmem_cache_free(lv2table_kmem_cache,
-				phys_to_virt(lv2table_base(domain->pgtable + i)));
+					phys_to_virt(base));
+		}

 	free_pages((unsigned long)domain->pgtable, 2);
 	free_pages((unsigned long)domain->lv2entcnt, 1);
 	kfree(domain);
 }
static void exynos_iommu_detach_device(struct iommu_domain *iommu_domain,
struct device *dev)
{
struct exynos_iommu_owner *owner = dev->archdata.iommu;
struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
phys_addr_t pagetable = virt_to_phys(domain->pgtable);
struct sysmmu_drvdata *data, *next;
unsigned long flags;
bool found = false;
if (!has_sysmmu(dev) || owner->domain != iommu_domain)
return;
spin_lock_irqsave(&domain->lock, flags);
list_for_each_entry_safe(data, next, &domain->clients, domain_node) {
if (data->master == dev) {
if (__sysmmu_disable(data)) {
data->master = NULL;
list_del_init(&data->domain_node);
}
pm_runtime_put(data->sysmmu);
found = true;
}
}
spin_unlock_irqrestore(&domain->lock, flags);
owner->domain = NULL;
if (found)
dev_dbg(dev, "%s: Detached IOMMU with pgtable %pa\n",
__func__, &pagetable);
else
dev_err(dev, "%s: No IOMMU is attached\n", __func__);
}
static int exynos_iommu_attach_device(struct iommu_domain *iommu_domain,
struct device *dev)
{
@@ -750,6 +874,9 @@ static int exynos_iommu_attach_device(struct iommu_domain *iommu_domain,
if (!has_sysmmu(dev))
return -ENODEV;
if (owner->domain)
exynos_iommu_detach_device(owner->domain, dev);
list_for_each_entry(data, &owner->controllers, owner_node) {
pm_runtime_get_sync(data->sysmmu);
ret = __sysmmu_enable(data, pagetable, domain);
@@ -768,44 +895,13 @@ static int exynos_iommu_attach_device(struct iommu_domain *iommu_domain,
return ret;
}
owner->domain = iommu_domain;
dev_dbg(dev, "%s: Attached IOMMU with pgtable %pa %s\n",
__func__, &pagetable, (ret == 0) ? "" : ", again");
return ret;
}
static sysmmu_pte_t *alloc_lv2entry(struct exynos_iommu_domain *domain,
sysmmu_pte_t *sent, sysmmu_iova_t iova, short *pgcounter)
{
@@ -819,15 +915,14 @@ static sysmmu_pte_t *alloc_lv2entry(struct exynos_iommu_domain *domain,
bool need_flush_flpd_cache = lv1ent_zero(sent);
pent = kmem_cache_zalloc(lv2table_kmem_cache, GFP_ATOMIC);
BUG_ON((uintptr_t)pent & (LV2TABLE_SIZE - 1));
if (!pent)
return ERR_PTR(-ENOMEM);
update_pte(sent, mk_lv1ent_page(virt_to_phys(pent)));
kmemleak_ignore(pent);
*pgcounter = NUM_LV2ENTRIES;
dma_map_single(dma_dev, pent, LV2TABLE_SIZE, DMA_TO_DEVICE);
/*
* If pre-fetched SLPD is a faulty SLPD in zero_l2_table,
@@ -880,9 +975,7 @@ static int lv1set_section(struct exynos_iommu_domain *domain,
*pgcnt = 0;
}
update_pte(sent, mk_lv1ent_sect(paddr));
spin_lock(&domain->lock);
if (lv1ent_page_zero(sent)) {
@@ -906,12 +999,15 @@ static int lv2set_page(sysmmu_pte_t *pent, phys_addr_t paddr, size_t size,
if (WARN_ON(!lv2ent_fault(pent)))
return -EADDRINUSE;
update_pte(pent, mk_lv2ent_spage(paddr));
*pgcnt -= 1;
} else { /* size == LPAGE_SIZE */
int i;
dma_addr_t pent_base = virt_to_phys(pent);
dma_sync_single_for_cpu(dma_dev, pent_base,
sizeof(*pent) * SPAGES_PER_LPAGE,
DMA_TO_DEVICE);
for (i = 0; i < SPAGES_PER_LPAGE; i++, pent++) {
if (WARN_ON(!lv2ent_fault(pent))) {
if (i > 0)
@@ -921,7 +1017,9 @@ static int lv2set_page(sysmmu_pte_t *pent, phys_addr_t paddr, size_t size,
*pent = mk_lv2ent_lpage(paddr);
}
dma_sync_single_for_device(dma_dev, pent_base,
sizeof(*pent) * SPAGES_PER_LPAGE,
DMA_TO_DEVICE);
*pgcnt -= SPAGES_PER_LPAGE;
}
@@ -1031,8 +1129,7 @@ static size_t exynos_iommu_unmap(struct iommu_domain *iommu_domain,
}
/* workaround for h/w bug in System MMU v3.3 */
update_pte(ent, ZERO_LV2LINK);
size = SECT_SIZE;
goto done;
}
@@ -1053,9 +1150,8 @@ static size_t exynos_iommu_unmap(struct iommu_domain *iommu_domain,
}
if (lv2ent_small(ent)) {
update_pte(ent, 0);
size = SPAGE_SIZE;
domain->lv2entcnt[lv1ent_offset(iova)] += 1;
goto done;
}
@@ -1066,9 +1162,13 @@ static size_t exynos_iommu_unmap(struct iommu_domain *iommu_domain,
goto err;
}
dma_sync_single_for_cpu(dma_dev, virt_to_phys(ent),
sizeof(*ent) * SPAGES_PER_LPAGE,
DMA_TO_DEVICE);
memset(ent, 0, sizeof(*ent) * SPAGES_PER_LPAGE);
dma_sync_single_for_device(dma_dev, virt_to_phys(ent),
sizeof(*ent) * SPAGES_PER_LPAGE,
DMA_TO_DEVICE);
size = LPAGE_SIZE;
domain->lv2entcnt[lv1ent_offset(iova)] += SPAGES_PER_LPAGE;
done:
@@ -1114,28 +1214,32 @@ static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *iommu_domain,
return phys;
}
static struct iommu_group *get_device_iommu_group(struct device *dev)
{
struct iommu_group *group;
group = iommu_group_get(dev);
if (!group)
group = iommu_group_alloc();
return group;
}
static int exynos_iommu_add_device(struct device *dev)
{
struct iommu_group *group;
if (!has_sysmmu(dev))
return -ENODEV;
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group))
return PTR_ERR(group);
iommu_group_put(group);
return 0;
}
static void exynos_iommu_remove_device(struct device *dev)
@@ -1182,6 +1286,7 @@ static struct iommu_ops exynos_iommu_ops = {
.unmap = exynos_iommu_unmap,
.map_sg = default_iommu_map_sg,
.iova_to_phys = exynos_iommu_iova_to_phys,
.device_group = get_device_iommu_group,
.add_device = exynos_iommu_add_device,
.remove_device = exynos_iommu_remove_device,
.pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE,
@@ -1245,6 +1350,13 @@ static int __init exynos_iommu_of_setup(struct device_node *np)
if (IS_ERR(pdev))
return PTR_ERR(pdev);
/*
* use the first registered sysmmu device for performing
* dma mapping operations on iommu page tables (cpu cache flush)
*/
if (!dma_dev)
dma_dev = &pdev->dev;
of_iommu_set_ops(np, &exynos_iommu_ops);
return 0;
}
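The update_pte() helper used throughout the exynos hunks above is introduced by this patch but not shown here; presumably it hands the entry's cacheline back to the CPU, stores the new value, then flushes it for the non-coherent page-table walker via the dma_dev registered above. A hypothetical user-space analogue (stub sync hooks stand in for dma_sync_single_for_cpu()/dma_sync_single_for_device()) sketches that pattern:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t sysmmu_pte_t;

/* Test doubles standing in for dma_sync_single_for_{cpu,device}(). */
static int syncs_for_cpu, syncs_for_device;

static void sync_for_cpu(void *ent, size_t len)
{
	(void)ent; (void)len;
	syncs_for_cpu++;
}

static void sync_for_device(void *ent, size_t len)
{
	(void)ent; (void)len;
	syncs_for_device++;
}

/*
 * Sketch of update_pte(): sync the entry for CPU access, write it,
 * then sync it back for the device so the IOMMU sees the update.
 */
static void update_pte(sysmmu_pte_t *ent, sysmmu_pte_t val)
{
	sync_for_cpu(ent, sizeof(*ent));
	*ent = val;
	sync_for_device(ent, sizeof(*ent));
}
```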
/*
* CPU-agnostic ARM page table allocator.
*
* ARMv7 Short-descriptor format, supporting
* - Basic memory attributes
* - Simplified access permissions (AP[2:1] model)
* - Backwards-compatible TEX remap
* - Large pages/supersections (if indicated by the caller)
*
* Not supporting:
* - Legacy access permissions (AP[2:0] model)
*
* Almost certainly never supporting:
* - PXN
* - Domains
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
* Copyright (C) 2014-2015 ARM Limited
* Copyright (c) 2014-2015 MediaTek Inc.
*/
#define pr_fmt(fmt) "arm-v7s io-pgtable: " fmt
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/iommu.h>
#include <linux/kernel.h>
#include <linux/kmemleak.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <asm/barrier.h>
#include "io-pgtable.h"
/* Struct accessors */
#define io_pgtable_to_data(x) \
container_of((x), struct arm_v7s_io_pgtable, iop)
#define io_pgtable_ops_to_data(x) \
io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
/*
* We have 32 bits total; 12 bits resolved at level 1, 8 bits at level 2,
* and 12 bits in a page. With some carefully-chosen coefficients we can
* hide the ugly inconsistencies behind these macros and at least let the
* rest of the code pretend to be somewhat sane.
*/
#define ARM_V7S_ADDR_BITS 32
#define _ARM_V7S_LVL_BITS(lvl) (16 - (lvl) * 4)
#define ARM_V7S_LVL_SHIFT(lvl) (ARM_V7S_ADDR_BITS - (4 + 8 * (lvl)))
#define ARM_V7S_TABLE_SHIFT 10
#define ARM_V7S_PTES_PER_LVL(lvl) (1 << _ARM_V7S_LVL_BITS(lvl))
#define ARM_V7S_TABLE_SIZE(lvl) \
(ARM_V7S_PTES_PER_LVL(lvl) * sizeof(arm_v7s_iopte))
#define ARM_V7S_BLOCK_SIZE(lvl) (1UL << ARM_V7S_LVL_SHIFT(lvl))
#define ARM_V7S_LVL_MASK(lvl) ((u32)(~0U << ARM_V7S_LVL_SHIFT(lvl)))
#define ARM_V7S_TABLE_MASK ((u32)(~0U << ARM_V7S_TABLE_SHIFT))
#define _ARM_V7S_IDX_MASK(lvl) (ARM_V7S_PTES_PER_LVL(lvl) - 1)
#define ARM_V7S_LVL_IDX(addr, lvl) ({ \
int _l = lvl; \
((u32)(addr) >> ARM_V7S_LVL_SHIFT(_l)) & _ARM_V7S_IDX_MASK(_l); \
})
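The carefully-chosen coefficients above can be sanity-checked in isolation; a minimal user-space sketch (macros copied verbatim, ARM_V7S_LVL_IDX() expanded by hand to avoid the GNU statement-expression) confirms the 12/8/12 split:

```c
#include <assert.h>
#include <stdint.h>

/* Copied from above: the 12/8/12 split of a 32-bit IOVA. */
#define ARM_V7S_ADDR_BITS		32
#define _ARM_V7S_LVL_BITS(lvl)		(16 - (lvl) * 4)
#define ARM_V7S_LVL_SHIFT(lvl)		(ARM_V7S_ADDR_BITS - (4 + 8 * (lvl)))
#define ARM_V7S_PTES_PER_LVL(lvl)	(1 << _ARM_V7S_LVL_BITS(lvl))
#define _ARM_V7S_IDX_MASK(lvl)		(ARM_V7S_PTES_PER_LVL(lvl) - 1)

/* Split an IOVA into level-1 index, level-2 index and page offset. */
static void v7s_split(uint32_t iova, int *l1, int *l2, int *off)
{
	*l1 = (iova >> ARM_V7S_LVL_SHIFT(1)) & _ARM_V7S_IDX_MASK(1);
	*l2 = (iova >> ARM_V7S_LVL_SHIFT(2)) & _ARM_V7S_IDX_MASK(2);
	*off = iova & 0xfff;
}
```

Level 1 resolves 4096 entries (12 bits) and level 2 resolves 256 (8 bits), matching the comment above.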
/*
* Large page/supersection entries are effectively a block of 16 page/section
* entries, along the lines of the LPAE contiguous hint, but all with the
* same output address. For want of a better common name we'll call them
* "contiguous" versions of their respective page/section entries here, but
* noting the distinction (WRT to TLB maintenance) that they represent *one*
* entry repeated 16 times, not 16 separate entries (as in the LPAE case).
*/
#define ARM_V7S_CONT_PAGES 16
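Concretely, the number of identical PTEs a mapping occupies at a given level — as computed later in __arm_v7s_map() — is just the size shifted by the level; the "contiguous" variants land on 16. A small sketch under those definitions:

```c
#include <assert.h>
#include <stddef.h>

#define ARM_V7S_LVL_SHIFT(lvl)	(32 - (4 + 8 * (lvl)))
#define ARM_V7S_CONT_PAGES	16

/*
 * PTEs written per mapping: ordinary sections/pages take one entry,
 * the "contiguous" supersection/large-page variants take
 * ARM_V7S_CONT_PAGES identical entries.
 */
static size_t v7s_num_entries(size_t size, int lvl)
{
	return size >> ARM_V7S_LVL_SHIFT(lvl);
}
```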
/* PTE type bits: these are all mixed up with XN/PXN bits in most cases */
#define ARM_V7S_PTE_TYPE_TABLE 0x1
#define ARM_V7S_PTE_TYPE_PAGE 0x2
#define ARM_V7S_PTE_TYPE_CONT_PAGE 0x1
#define ARM_V7S_PTE_IS_VALID(pte) (((pte) & 0x3) != 0)
#define ARM_V7S_PTE_IS_TABLE(pte, lvl) (lvl == 1 && ((pte) & ARM_V7S_PTE_TYPE_TABLE))
/* Page table bits */
#define ARM_V7S_ATTR_XN(lvl) BIT(4 * (2 - (lvl)))
#define ARM_V7S_ATTR_B BIT(2)
#define ARM_V7S_ATTR_C BIT(3)
#define ARM_V7S_ATTR_NS_TABLE BIT(3)
#define ARM_V7S_ATTR_NS_SECTION BIT(19)
#define ARM_V7S_CONT_SECTION BIT(18)
#define ARM_V7S_CONT_PAGE_XN_SHIFT 15
/*
* The attribute bits are consistently ordered*, but occupy bits [17:10] of
* a level 1 PTE vs. bits [11:4] at level 2. Thus we define the individual
* fields relative to that 8-bit block, plus a total shift relative to the PTE.
*/
#define ARM_V7S_ATTR_SHIFT(lvl) (16 - (lvl) * 6)
#define ARM_V7S_ATTR_MASK 0xff
#define ARM_V7S_ATTR_AP0 BIT(0)
#define ARM_V7S_ATTR_AP1 BIT(1)
#define ARM_V7S_ATTR_AP2 BIT(5)
#define ARM_V7S_ATTR_S BIT(6)
#define ARM_V7S_ATTR_NG BIT(7)
#define ARM_V7S_TEX_SHIFT 2
#define ARM_V7S_TEX_MASK 0x7
#define ARM_V7S_ATTR_TEX(val) (((val) & ARM_V7S_TEX_MASK) << ARM_V7S_TEX_SHIFT)
/* *well, except for TEX on level 2 large pages, of course :( */
#define ARM_V7S_CONT_PAGE_TEX_SHIFT 6
#define ARM_V7S_CONT_PAGE_TEX_MASK (ARM_V7S_TEX_MASK << ARM_V7S_CONT_PAGE_TEX_SHIFT)
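To illustrate the relative encoding: the same 8-bit attribute block lands at bits [17:10] of a level-1 PTE but bits [11:4] of a level-2 PTE, so e.g. AP0 becomes section bit 10 while AP2 becomes small-page bit 9. A sketch with constants copied from above:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)			(1U << (n))
#define ARM_V7S_ATTR_SHIFT(lvl)	(16 - (lvl) * 6)
#define ARM_V7S_ATTR_AP0	BIT(0)
#define ARM_V7S_ATTR_AP2	BIT(5)

/* Shift the relative attribute block into its per-level PTE position. */
static uint32_t v7s_place_attrs(uint32_t attrs, int lvl)
{
	return attrs << ARM_V7S_ATTR_SHIFT(lvl);
}
```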
/* Simplified access permissions */
#define ARM_V7S_PTE_AF ARM_V7S_ATTR_AP0
#define ARM_V7S_PTE_AP_UNPRIV ARM_V7S_ATTR_AP1
#define ARM_V7S_PTE_AP_RDONLY ARM_V7S_ATTR_AP2
/* Register bits */
#define ARM_V7S_RGN_NC 0
#define ARM_V7S_RGN_WBWA 1
#define ARM_V7S_RGN_WT 2
#define ARM_V7S_RGN_WB 3
#define ARM_V7S_PRRR_TYPE_DEVICE 1
#define ARM_V7S_PRRR_TYPE_NORMAL 2
#define ARM_V7S_PRRR_TR(n, type) (((type) & 0x3) << ((n) * 2))
#define ARM_V7S_PRRR_DS0 BIT(16)
#define ARM_V7S_PRRR_DS1 BIT(17)
#define ARM_V7S_PRRR_NS0 BIT(18)
#define ARM_V7S_PRRR_NS1 BIT(19)
#define ARM_V7S_PRRR_NOS(n) BIT((n) + 24)
#define ARM_V7S_NMRR_IR(n, attr) (((attr) & 0x3) << ((n) * 2))
#define ARM_V7S_NMRR_OR(n, attr) (((attr) & 0x3) << ((n) * 2 + 16))
#define ARM_V7S_TTBR_S BIT(1)
#define ARM_V7S_TTBR_NOS BIT(5)
#define ARM_V7S_TTBR_ORGN_ATTR(attr) (((attr) & 0x3) << 3)
#define ARM_V7S_TTBR_IRGN_ATTR(attr) \
((((attr) & 0x1) << 6) | (((attr) & 0x2) >> 1))
#define ARM_V7S_TCR_PD1 BIT(5)
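The IRGN swizzle above looks odd because, with the multiprocessing extensions, the TTBR inner-cacheability field is split: IRGN[0] lives in bit 6 and IRGN[1] in bit 0, while ORGN is a contiguous field at bits [4:3]. A standalone check of the packing (macros copied verbatim):

```c
#include <assert.h>

/* Copied from above: cacheability attributes packed into the TTBR. */
#define ARM_V7S_RGN_WBWA		1
#define ARM_V7S_TTBR_ORGN_ATTR(attr)	(((attr) & 0x3) << 3)
#define ARM_V7S_TTBR_IRGN_ATTR(attr) \
	((((attr) & 0x1) << 6) | (((attr) & 0x2) >> 1))
```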
typedef u32 arm_v7s_iopte;
static bool selftest_running;
struct arm_v7s_io_pgtable {
struct io_pgtable iop;
arm_v7s_iopte *pgd;
struct kmem_cache *l2_tables;
};
static dma_addr_t __arm_v7s_dma_addr(void *pages)
{
return (dma_addr_t)virt_to_phys(pages);
}
static arm_v7s_iopte *iopte_deref(arm_v7s_iopte pte, int lvl)
{
if (ARM_V7S_PTE_IS_TABLE(pte, lvl))
pte &= ARM_V7S_TABLE_MASK;
else
pte &= ARM_V7S_LVL_MASK(lvl);
return phys_to_virt(pte);
}
static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
struct arm_v7s_io_pgtable *data)
{
struct device *dev = data->iop.cfg.iommu_dev;
dma_addr_t dma;
size_t size = ARM_V7S_TABLE_SIZE(lvl);
void *table = NULL;
if (lvl == 1)
table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
else if (lvl == 2)
table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
if (table && !selftest_running) {
dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma))
goto out_free;
/*
* We depend on the IOMMU being able to work with any physical
* address directly, so if the DMA layer suggests otherwise by
* translating or truncating them, that bodes very badly...
*/
if (dma != virt_to_phys(table))
goto out_unmap;
}
kmemleak_ignore(table);
return table;
out_unmap:
dev_err(dev, "Cannot accommodate DMA translation for IOMMU page tables\n");
dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
out_free:
if (lvl == 1)
free_pages((unsigned long)table, get_order(size));
else
kmem_cache_free(data->l2_tables, table);
return NULL;
}
static void __arm_v7s_free_table(void *table, int lvl,
struct arm_v7s_io_pgtable *data)
{
struct device *dev = data->iop.cfg.iommu_dev;
size_t size = ARM_V7S_TABLE_SIZE(lvl);
if (!selftest_running)
dma_unmap_single(dev, __arm_v7s_dma_addr(table), size,
DMA_TO_DEVICE);
if (lvl == 1)
free_pages((unsigned long)table, get_order(size));
else
kmem_cache_free(data->l2_tables, table);
}
static void __arm_v7s_pte_sync(arm_v7s_iopte *ptep, int num_entries,
struct io_pgtable_cfg *cfg)
{
if (selftest_running)
return;
dma_sync_single_for_device(cfg->iommu_dev, __arm_v7s_dma_addr(ptep),
num_entries * sizeof(*ptep), DMA_TO_DEVICE);
}
static void __arm_v7s_set_pte(arm_v7s_iopte *ptep, arm_v7s_iopte pte,
int num_entries, struct io_pgtable_cfg *cfg)
{
int i;
for (i = 0; i < num_entries; i++)
ptep[i] = pte;
__arm_v7s_pte_sync(ptep, num_entries, cfg);
}
static arm_v7s_iopte arm_v7s_prot_to_pte(int prot, int lvl,
struct io_pgtable_cfg *cfg)
{
bool ap = !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS);
arm_v7s_iopte pte = ARM_V7S_ATTR_NG | ARM_V7S_ATTR_S |
ARM_V7S_ATTR_TEX(1);
if (ap) {
pte |= ARM_V7S_PTE_AF | ARM_V7S_PTE_AP_UNPRIV;
if (!(prot & IOMMU_WRITE))
pte |= ARM_V7S_PTE_AP_RDONLY;
}
pte <<= ARM_V7S_ATTR_SHIFT(lvl);
if ((prot & IOMMU_NOEXEC) && ap)
pte |= ARM_V7S_ATTR_XN(lvl);
if (prot & IOMMU_CACHE)
pte |= ARM_V7S_ATTR_B | ARM_V7S_ATTR_C;
return pte;
}
static int arm_v7s_pte_to_prot(arm_v7s_iopte pte, int lvl)
{
int prot = IOMMU_READ;
if (!(pte & (ARM_V7S_PTE_AP_RDONLY << ARM_V7S_ATTR_SHIFT(lvl))))
prot |= IOMMU_WRITE;
if (pte & ARM_V7S_ATTR_C)
prot |= IOMMU_CACHE;
return prot;
}
static arm_v7s_iopte arm_v7s_pte_to_cont(arm_v7s_iopte pte, int lvl)
{
if (lvl == 1) {
pte |= ARM_V7S_CONT_SECTION;
} else if (lvl == 2) {
arm_v7s_iopte xn = pte & ARM_V7S_ATTR_XN(lvl);
arm_v7s_iopte tex = pte & ARM_V7S_CONT_PAGE_TEX_MASK;
pte ^= xn | tex | ARM_V7S_PTE_TYPE_PAGE;
pte |= (xn << ARM_V7S_CONT_PAGE_XN_SHIFT) |
(tex << ARM_V7S_CONT_PAGE_TEX_SHIFT) |
ARM_V7S_PTE_TYPE_CONT_PAGE;
}
return pte;
}
static arm_v7s_iopte arm_v7s_cont_to_pte(arm_v7s_iopte pte, int lvl)
{
if (lvl == 1) {
pte &= ~ARM_V7S_CONT_SECTION;
} else if (lvl == 2) {
arm_v7s_iopte xn = pte & BIT(ARM_V7S_CONT_PAGE_XN_SHIFT);
arm_v7s_iopte tex = pte & (ARM_V7S_CONT_PAGE_TEX_MASK <<
ARM_V7S_CONT_PAGE_TEX_SHIFT);
pte ^= xn | tex | ARM_V7S_PTE_TYPE_CONT_PAGE;
pte |= (xn >> ARM_V7S_CONT_PAGE_XN_SHIFT) |
(tex >> ARM_V7S_CONT_PAGE_TEX_SHIFT) |
ARM_V7S_PTE_TYPE_PAGE;
}
return pte;
}
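The level-2 conversion is self-inverse: applying the pte-to-cont shuffle and then the cont-to-pte shuffle returns the original small-page PTE, since XN and TEX simply swap between the small-page positions (bit 0, bits [8:6]) and the large-page positions (bit 15, bits [14:12]). A user-space sketch of just the level-2 paths (constants copied from above; ARM_V7S_ATTR_XN_L2 stands in for ARM_V7S_ATTR_XN(2)):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)				(1U << (n))
#define ARM_V7S_PTE_TYPE_PAGE		0x2
#define ARM_V7S_PTE_TYPE_CONT_PAGE	0x1
#define ARM_V7S_ATTR_XN_L2		BIT(0)	/* ARM_V7S_ATTR_XN(2) */
#define ARM_V7S_CONT_PAGE_XN_SHIFT	15
#define ARM_V7S_CONT_PAGE_TEX_SHIFT	6
#define ARM_V7S_CONT_PAGE_TEX_MASK	(0x7 << ARM_V7S_CONT_PAGE_TEX_SHIFT)

/* Small-page PTE -> "contiguous" large-page PTE, level 2 only. */
static uint32_t pte_to_cont_l2(uint32_t pte)
{
	uint32_t xn = pte & ARM_V7S_ATTR_XN_L2;
	uint32_t tex = pte & ARM_V7S_CONT_PAGE_TEX_MASK;

	pte ^= xn | tex | ARM_V7S_PTE_TYPE_PAGE;
	pte |= (xn << ARM_V7S_CONT_PAGE_XN_SHIFT) |
	       (tex << ARM_V7S_CONT_PAGE_TEX_SHIFT) |
	       ARM_V7S_PTE_TYPE_CONT_PAGE;
	return pte;
}

/* ...and back again. */
static uint32_t cont_to_pte_l2(uint32_t pte)
{
	uint32_t xn = pte & BIT(ARM_V7S_CONT_PAGE_XN_SHIFT);
	uint32_t tex = pte & (ARM_V7S_CONT_PAGE_TEX_MASK <<
			      ARM_V7S_CONT_PAGE_TEX_SHIFT);

	pte ^= xn | tex | ARM_V7S_PTE_TYPE_CONT_PAGE;
	pte |= (xn >> ARM_V7S_CONT_PAGE_XN_SHIFT) |
	       (tex >> ARM_V7S_CONT_PAGE_TEX_SHIFT) |
	       ARM_V7S_PTE_TYPE_PAGE;
	return pte;
}
```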
static bool arm_v7s_pte_is_cont(arm_v7s_iopte pte, int lvl)
{
if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte, lvl))
return pte & ARM_V7S_CONT_SECTION;
else if (lvl == 2)
return !(pte & ARM_V7S_PTE_TYPE_PAGE);
return false;
}
static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *, unsigned long,
size_t, int, arm_v7s_iopte *);
static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
unsigned long iova, phys_addr_t paddr, int prot,
int lvl, int num_entries, arm_v7s_iopte *ptep)
{
struct io_pgtable_cfg *cfg = &data->iop.cfg;
arm_v7s_iopte pte = arm_v7s_prot_to_pte(prot, lvl, cfg);
int i;
for (i = 0; i < num_entries; i++)
if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
/*
* We need to unmap and free the old table before
* overwriting it with a block entry.
*/
arm_v7s_iopte *tblp;
size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
tblp = ptep - ARM_V7S_LVL_IDX(iova, lvl);
if (WARN_ON(__arm_v7s_unmap(data, iova + i * sz,
sz, lvl, tblp) != sz))
return -EINVAL;
} else if (ptep[i]) {
/* We require an unmap first */
WARN_ON(!selftest_running);
return -EEXIST;
}
pte |= ARM_V7S_PTE_TYPE_PAGE;
if (lvl == 1 && (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS))
pte |= ARM_V7S_ATTR_NS_SECTION;
if (num_entries > 1)
pte = arm_v7s_pte_to_cont(pte, lvl);
pte |= paddr & ARM_V7S_LVL_MASK(lvl);
__arm_v7s_set_pte(ptep, pte, num_entries, cfg);
return 0;
}
static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
phys_addr_t paddr, size_t size, int prot,
int lvl, arm_v7s_iopte *ptep)
{
struct io_pgtable_cfg *cfg = &data->iop.cfg;
arm_v7s_iopte pte, *cptep;
int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
/* Find our entry at the current level */
ptep += ARM_V7S_LVL_IDX(iova, lvl);
/* If we can install a leaf entry at this level, then do so */
if (num_entries)
return arm_v7s_init_pte(data, iova, paddr, prot,
lvl, num_entries, ptep);
/* We can't allocate tables at the final level */
if (WARN_ON(lvl == 2))
return -EINVAL;
/* Grab a pointer to the next level */
pte = *ptep;
if (!pte) {
cptep = __arm_v7s_alloc_table(lvl + 1, GFP_ATOMIC, data);
if (!cptep)
return -ENOMEM;
pte = virt_to_phys(cptep) | ARM_V7S_PTE_TYPE_TABLE;
if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
pte |= ARM_V7S_ATTR_NS_TABLE;
__arm_v7s_set_pte(ptep, pte, 1, cfg);
} else {
cptep = iopte_deref(pte, lvl);
}
/* Rinse, repeat */
return __arm_v7s_map(data, iova, paddr, size, prot, lvl + 1, cptep);
}
static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
struct io_pgtable *iop = &data->iop;
int ret;
/* If no access, then nothing to do */
if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
return 0;
ret = __arm_v7s_map(data, iova, paddr, size, prot, 1, data->pgd);
/*
* Synchronise all PTE updates for the new mapping before there's
* a chance for anything to kick off a table walk for the new iova.
*/
if (iop->cfg.quirks & IO_PGTABLE_QUIRK_TLBI_ON_MAP) {
io_pgtable_tlb_add_flush(iop, iova, size,
ARM_V7S_BLOCK_SIZE(2), false);
io_pgtable_tlb_sync(iop);
} else {
wmb();
}
return ret;
}
static void arm_v7s_free_pgtable(struct io_pgtable *iop)
{
struct arm_v7s_io_pgtable *data = io_pgtable_to_data(iop);
int i;
for (i = 0; i < ARM_V7S_PTES_PER_LVL(1); i++) {
arm_v7s_iopte pte = data->pgd[i];
if (ARM_V7S_PTE_IS_TABLE(pte, 1))
__arm_v7s_free_table(iopte_deref(pte, 1), 2, data);
}
__arm_v7s_free_table(data->pgd, 1, data);
kmem_cache_destroy(data->l2_tables);
kfree(data);
}
static void arm_v7s_split_cont(struct arm_v7s_io_pgtable *data,
unsigned long iova, int idx, int lvl,
arm_v7s_iopte *ptep)
{
struct io_pgtable *iop = &data->iop;
arm_v7s_iopte pte;
size_t size = ARM_V7S_BLOCK_SIZE(lvl);
int i;
ptep -= idx & (ARM_V7S_CONT_PAGES - 1);
pte = arm_v7s_cont_to_pte(*ptep, lvl);
for (i = 0; i < ARM_V7S_CONT_PAGES; i++) {
ptep[i] = pte;
pte += size;
}
__arm_v7s_pte_sync(ptep, ARM_V7S_CONT_PAGES, &iop->cfg);
size *= ARM_V7S_CONT_PAGES;
io_pgtable_tlb_add_flush(iop, iova, size, size, true);
io_pgtable_tlb_sync(iop);
}
static int arm_v7s_split_blk_unmap(struct arm_v7s_io_pgtable *data,
unsigned long iova, size_t size,
arm_v7s_iopte *ptep)
{
unsigned long blk_start, blk_end, blk_size;
phys_addr_t blk_paddr;
arm_v7s_iopte table = 0;
int prot = arm_v7s_pte_to_prot(*ptep, 1);
blk_size = ARM_V7S_BLOCK_SIZE(1);
blk_start = iova & ARM_V7S_LVL_MASK(1);
blk_end = blk_start + ARM_V7S_BLOCK_SIZE(1);
blk_paddr = *ptep & ARM_V7S_LVL_MASK(1);
for (; blk_start < blk_end; blk_start += size, blk_paddr += size) {
arm_v7s_iopte *tablep;
/* Unmap! */
if (blk_start == iova)
continue;
/* __arm_v7s_map expects a pointer to the start of the table */
tablep = &table - ARM_V7S_LVL_IDX(blk_start, 1);
if (__arm_v7s_map(data, blk_start, blk_paddr, size, prot, 1,
tablep) < 0) {
if (table) {
/* Free the table we allocated */
tablep = iopte_deref(table, 1);
__arm_v7s_free_table(tablep, 2, data);
}
return 0; /* Bytes unmapped */
}
}
__arm_v7s_set_pte(ptep, table, 1, &data->iop.cfg);
iova &= ~(blk_size - 1);
io_pgtable_tlb_add_flush(&data->iop, iova, blk_size, blk_size, true);
return size;
}
static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
unsigned long iova, size_t size, int lvl,
arm_v7s_iopte *ptep)
{
arm_v7s_iopte pte[ARM_V7S_CONT_PAGES];
struct io_pgtable *iop = &data->iop;
int idx, i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
/* Something went horribly wrong and we ran out of page table */
if (WARN_ON(lvl > 2))
return 0;
idx = ARM_V7S_LVL_IDX(iova, lvl);
ptep += idx;
do {
if (WARN_ON(!ARM_V7S_PTE_IS_VALID(ptep[i])))
return 0;
pte[i] = ptep[i];
} while (++i < num_entries);
/*
* If we've hit a contiguous 'large page' entry at this level, it
* needs splitting first, unless we're unmapping the whole lot.
*/
if (num_entries <= 1 && arm_v7s_pte_is_cont(pte[0], lvl))
arm_v7s_split_cont(data, iova, idx, lvl, ptep);
/* If the size matches this level, we're in the right place */
if (num_entries) {
size_t blk_size = ARM_V7S_BLOCK_SIZE(lvl);
__arm_v7s_set_pte(ptep, 0, num_entries, &iop->cfg);
for (i = 0; i < num_entries; i++) {
if (ARM_V7S_PTE_IS_TABLE(pte[i], lvl)) {
/* Also flush any partial walks */
io_pgtable_tlb_add_flush(iop, iova, blk_size,
ARM_V7S_BLOCK_SIZE(lvl + 1), false);
io_pgtable_tlb_sync(iop);
ptep = iopte_deref(pte[i], lvl);
__arm_v7s_free_table(ptep, lvl + 1, data);
} else {
io_pgtable_tlb_add_flush(iop, iova, blk_size,
blk_size, true);
}
iova += blk_size;
}
return size;
} else if (lvl == 1 && !ARM_V7S_PTE_IS_TABLE(pte[0], lvl)) {
/*
* Insert a table at the next level to map the old region,
* minus the part we want to unmap
*/
return arm_v7s_split_blk_unmap(data, iova, size, ptep);
}
/* Keep on walkin' */
ptep = iopte_deref(pte[0], lvl);
return __arm_v7s_unmap(data, iova, size, lvl + 1, ptep);
}
static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
size_t size)
{
struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
size_t unmapped;
unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
if (unmapped)
io_pgtable_tlb_sync(&data->iop);
return unmapped;
}
static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
unsigned long iova)
{
struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
arm_v7s_iopte *ptep = data->pgd, pte;
int lvl = 0;
u32 mask;
do {
pte = ptep[ARM_V7S_LVL_IDX(iova, ++lvl)];
ptep = iopte_deref(pte, lvl);
} while (ARM_V7S_PTE_IS_TABLE(pte, lvl));
if (!ARM_V7S_PTE_IS_VALID(pte))
return 0;
mask = ARM_V7S_LVL_MASK(lvl);
if (arm_v7s_pte_is_cont(pte, lvl))
mask *= ARM_V7S_CONT_PAGES;
return (pte & mask) | (iova & ~mask);
}
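The final address composition above is worth spelling out: output bits come from the leaf PTE, offset bits from the IOVA, and multiplying the mask by ARM_V7S_CONT_PAGES widens the offset for contiguous entries because the high bits simply overflow out of the u32. A standalone sketch of that composition:

```c
#include <assert.h>
#include <stdint.h>

#define ARM_V7S_LVL_SHIFT(lvl)	(32 - (4 + 8 * (lvl)))
#define ARM_V7S_LVL_MASK(lvl)	((uint32_t)(~0U << ARM_V7S_LVL_SHIFT(lvl)))
#define ARM_V7S_CONT_PAGES	16

/*
 * Compose the physical address: PTE supplies the output bits covered
 * by the level mask, the IOVA supplies the remaining offset bits.
 * For a contiguous entry, mask * 16 drops four more low bits into
 * the offset (e.g. 0xfffff000 becomes 0xffff0000 for a large page).
 */
static uint32_t v7s_compose(uint32_t pte, uint32_t iova, int lvl, int cont)
{
	uint32_t mask = ARM_V7S_LVL_MASK(lvl);

	if (cont)
		mask *= ARM_V7S_CONT_PAGES;
	return (pte & mask) | (iova & ~mask);
}
```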
static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
void *cookie)
{
struct arm_v7s_io_pgtable *data;
if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS)
return NULL;
if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
IO_PGTABLE_QUIRK_NO_PERMS |
IO_PGTABLE_QUIRK_TLBI_ON_MAP))
return NULL;
data = kmalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return NULL;
data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
ARM_V7S_TABLE_SIZE(2),
ARM_V7S_TABLE_SIZE(2),
SLAB_CACHE_DMA, NULL);
if (!data->l2_tables)
goto out_free_data;
data->iop.ops = (struct io_pgtable_ops) {
.map = arm_v7s_map,
.unmap = arm_v7s_unmap,
.iova_to_phys = arm_v7s_iova_to_phys,
};
/* We have to do this early for __arm_v7s_alloc_table to work... */
data->iop.cfg = *cfg;
/*
* Unless the IOMMU driver indicates supersection support by
* having SZ_16M set in the initial bitmap, they won't be used.
*/
cfg->pgsize_bitmap &= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
/* TCR: T0SZ=0, disable TTBR1 */
cfg->arm_v7s_cfg.tcr = ARM_V7S_TCR_PD1;
/*
* TEX remap: the indices used map to the closest equivalent types
* under the non-TEX-remap interpretation of those attribute bits,
* excepting various implementation-defined aspects of shareability.
*/
cfg->arm_v7s_cfg.prrr = ARM_V7S_PRRR_TR(1, ARM_V7S_PRRR_TYPE_DEVICE) |
ARM_V7S_PRRR_TR(4, ARM_V7S_PRRR_TYPE_NORMAL) |
ARM_V7S_PRRR_TR(7, ARM_V7S_PRRR_TYPE_NORMAL) |
ARM_V7S_PRRR_DS0 | ARM_V7S_PRRR_DS1 |
ARM_V7S_PRRR_NS1 | ARM_V7S_PRRR_NOS(7);
cfg->arm_v7s_cfg.nmrr = ARM_V7S_NMRR_IR(7, ARM_V7S_RGN_WBWA) |
ARM_V7S_NMRR_OR(7, ARM_V7S_RGN_WBWA);
/* Looking good; allocate a pgd */
data->pgd = __arm_v7s_alloc_table(1, GFP_KERNEL, data);
if (!data->pgd)
goto out_free_data;
/* Ensure the empty pgd is visible before any actual TTBR write */
wmb();
/* TTBRs */
cfg->arm_v7s_cfg.ttbr[0] = virt_to_phys(data->pgd) |
ARM_V7S_TTBR_S | ARM_V7S_TTBR_NOS |
ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA);
cfg->arm_v7s_cfg.ttbr[1] = 0;
return &data->iop;
out_free_data:
kmem_cache_destroy(data->l2_tables);
kfree(data);
return NULL;
}
struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns = {
.alloc = arm_v7s_alloc_pgtable,
.free = arm_v7s_free_pgtable,
};
#ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S_SELFTEST
static struct io_pgtable_cfg *cfg_cookie;
static void dummy_tlb_flush_all(void *cookie)
{
WARN_ON(cookie != cfg_cookie);
}
static void dummy_tlb_add_flush(unsigned long iova, size_t size,
size_t granule, bool leaf, void *cookie)
{
WARN_ON(cookie != cfg_cookie);
WARN_ON(!(size & cfg_cookie->pgsize_bitmap));
}
static void dummy_tlb_sync(void *cookie)
{
WARN_ON(cookie != cfg_cookie);
}
static struct iommu_gather_ops dummy_tlb_ops = {
.tlb_flush_all = dummy_tlb_flush_all,
.tlb_add_flush = dummy_tlb_add_flush,
.tlb_sync = dummy_tlb_sync,
};
#define __FAIL(ops) ({ \
WARN(1, "selftest: test failed\n"); \
selftest_running = false; \
-EFAULT; \
})
static int __init arm_v7s_do_selftests(void)
{
struct io_pgtable_ops *ops;
struct io_pgtable_cfg cfg = {
.tlb = &dummy_tlb_ops,
.oas = 32,
.ias = 32,
.quirks = IO_PGTABLE_QUIRK_ARM_NS,
.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
};
unsigned int iova, size, iova_start;
unsigned int i, loopnr = 0;
selftest_running = true;
cfg_cookie = &cfg;
ops = alloc_io_pgtable_ops(ARM_V7S, &cfg, &cfg);
if (!ops) {
pr_err("selftest: failed to allocate io pgtable ops\n");
return -EINVAL;
}
/*
* Initial sanity checks.
* Empty page tables shouldn't provide any translations.
*/
if (ops->iova_to_phys(ops, 42))
return __FAIL(ops);
if (ops->iova_to_phys(ops, SZ_1G + 42))
return __FAIL(ops);
if (ops->iova_to_phys(ops, SZ_2G + 42))
return __FAIL(ops);
/*
* Distinct mappings of different granule sizes.
*/
iova = 0;
i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG);
while (i != BITS_PER_LONG) {
size = 1UL << i;
if (ops->map(ops, iova, iova, size, IOMMU_READ |
IOMMU_WRITE |
IOMMU_NOEXEC |
IOMMU_CACHE))
return __FAIL(ops);
/* Overlapping mappings */
if (!ops->map(ops, iova, iova + size, size,
IOMMU_READ | IOMMU_NOEXEC))
return __FAIL(ops);
if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
return __FAIL(ops);
iova += SZ_16M;
i++;
i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i);
loopnr++;
}
/* Partial unmap */
i = 1;
size = 1UL << __ffs(cfg.pgsize_bitmap);
while (i < loopnr) {
iova_start = i * SZ_16M;
if (ops->unmap(ops, iova_start + size, size) != size)
return __FAIL(ops);
/* Remap of partial unmap */
if (ops->map(ops, iova_start + size, size, size, IOMMU_READ))
return __FAIL(ops);
if (ops->iova_to_phys(ops, iova_start + size + 42)
!= (size + 42))
return __FAIL(ops);
i++;
}
/* Full unmap */
iova = 0;
i = find_first_bit(&cfg.pgsize_bitmap, BITS_PER_LONG);
while (i != BITS_PER_LONG) {
size = 1UL << i;
if (ops->unmap(ops, iova, size) != size)
return __FAIL(ops);
if (ops->iova_to_phys(ops, iova + 42))
return __FAIL(ops);
/* Remap full block */
if (ops->map(ops, iova, iova, size, IOMMU_WRITE))
return __FAIL(ops);
if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
return __FAIL(ops);
iova += SZ_16M;
i++;
i = find_next_bit(&cfg.pgsize_bitmap, BITS_PER_LONG, i);
}
free_io_pgtable_ops(ops);
selftest_running = false;
pr_info("self test ok\n");
return 0;
}
subsys_initcall(arm_v7s_do_selftests);
#endif
@@ -446,7 +446,6 @@ static int arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 	unsigned long blk_start, blk_end;
 	phys_addr_t blk_paddr;
 	arm_lpae_iopte table = 0;
-	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 
 	blk_start = iova & ~(blk_size - 1);
 	blk_end = blk_start + blk_size;
@@ -472,9 +471,9 @@ static int arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 		}
 	}
 
-	__arm_lpae_set_pte(ptep, table, cfg);
+	__arm_lpae_set_pte(ptep, table, &data->iop.cfg);
 	iova &= ~(blk_size - 1);
-	cfg->tlb->tlb_add_flush(iova, blk_size, blk_size, true, data->iop.cookie);
+	io_pgtable_tlb_add_flush(&data->iop, iova, blk_size, blk_size, true);
 	return size;
 }
@@ -483,8 +482,7 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 			    arm_lpae_iopte *ptep)
 {
 	arm_lpae_iopte pte;
-	const struct iommu_gather_ops *tlb = data->iop.cfg.tlb;
-	void *cookie = data->iop.cookie;
+	struct io_pgtable *iop = &data->iop;
 	size_t blk_size = ARM_LPAE_BLOCK_SIZE(lvl, data);
 
 	/* Something went horribly wrong and we ran out of page table */
@@ -498,17 +496,17 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	/* If the size matches this level, we're in the right place */
 	if (size == blk_size) {
-		__arm_lpae_set_pte(ptep, 0, &data->iop.cfg);
+		__arm_lpae_set_pte(ptep, 0, &iop->cfg);
 
 		if (!iopte_leaf(pte, lvl)) {
 			/* Also flush any partial walks */
-			tlb->tlb_add_flush(iova, size, ARM_LPAE_GRANULE(data),
-					   false, cookie);
-			tlb->tlb_sync(cookie);
+			io_pgtable_tlb_add_flush(iop, iova, size,
+						 ARM_LPAE_GRANULE(data), false);
+			io_pgtable_tlb_sync(iop);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
 		} else {
-			tlb->tlb_add_flush(iova, size, size, true, cookie);
+			io_pgtable_tlb_add_flush(iop, iova, size, size, true);
 		}
 
 		return size;
@@ -532,13 +530,12 @@ static int arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 {
 	size_t unmapped;
 	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
-	struct io_pgtable *iop = &data->iop;
 	arm_lpae_iopte *ptep = data->pgd;
 	int lvl = ARM_LPAE_START_LVL(data);
 
 	unmapped = __arm_lpae_unmap(data, iova, size, lvl, ptep);
 	if (unmapped)
-		iop->cfg.tlb->tlb_sync(iop->cookie);
+		io_pgtable_tlb_sync(&data->iop);
 
 	return unmapped;
 }
@@ -662,8 +659,12 @@ static struct io_pgtable *
 arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 {
 	u64 reg;
-	struct arm_lpae_io_pgtable *data = arm_lpae_alloc_pgtable(cfg);
+	struct arm_lpae_io_pgtable *data;
+
+	if (cfg->quirks & ~IO_PGTABLE_QUIRK_ARM_NS)
+		return NULL;
 
+	data = arm_lpae_alloc_pgtable(cfg);
 	if (!data)
 		return NULL;
@@ -746,8 +747,13 @@ static struct io_pgtable *
 arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 {
 	u64 reg, sl;
-	struct arm_lpae_io_pgtable *data = arm_lpae_alloc_pgtable(cfg);
+	struct arm_lpae_io_pgtable *data;
+
+	/* The NS quirk doesn't apply at stage 2 */
+	if (cfg->quirks)
+		return NULL;
 
+	data = arm_lpae_alloc_pgtable(cfg);
 	if (!data)
 		return NULL;
...
@@ -33,6 +33,9 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
 	[ARM_64_LPAE_S1] = &io_pgtable_arm_64_lpae_s1_init_fns,
 	[ARM_64_LPAE_S2] = &io_pgtable_arm_64_lpae_s2_init_fns,
 #endif
+#ifdef CONFIG_IOMMU_IO_PGTABLE_ARMV7S
+	[ARM_V7S] = &io_pgtable_arm_v7s_init_fns,
+#endif
 };
 
 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt,
@@ -72,6 +75,6 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops)
 		return;
 
 	iop = container_of(ops, struct io_pgtable, ops);
-	iop->cfg.tlb->tlb_flush_all(iop->cookie);
+	io_pgtable_tlb_flush_all(iop);
 	io_pgtable_init_table[iop->fmt]->free(iop);
 }
 #ifndef __IO_PGTABLE_H
 #define __IO_PGTABLE_H
+#include <linux/bitops.h>
 
 /*
  * Public API for use by IOMMU drivers
@@ -9,6 +10,7 @@ enum io_pgtable_fmt {
 	ARM_32_LPAE_S2,
 	ARM_64_LPAE_S1,
 	ARM_64_LPAE_S2,
+	ARM_V7S,
 	IO_PGTABLE_NUM_FMTS,
 };
@@ -45,8 +47,24 @@ struct iommu_gather_ops {
  * page table walker.
  */
 struct io_pgtable_cfg {
-	#define IO_PGTABLE_QUIRK_ARM_NS	(1 << 0)	/* Set NS bit in PTEs */
-	int				quirks;
+	/*
+	 * IO_PGTABLE_QUIRK_ARM_NS: (ARM formats) Set NS and NSTABLE bits in
+	 *	stage 1 PTEs, for hardware which insists on validating them
+	 *	even in non-secure state where they should normally be ignored.
+	 *
+	 * IO_PGTABLE_QUIRK_NO_PERMS: Ignore the IOMMU_READ, IOMMU_WRITE and
+	 *	IOMMU_NOEXEC flags and map everything with full access, for
+	 *	hardware which does not implement the permissions of a given
+	 *	format, and/or requires some format-specific default value.
+	 *
+	 * IO_PGTABLE_QUIRK_TLBI_ON_MAP: If the format forbids caching invalid
+	 *	(unmapped) entries but the hardware might do so anyway, perform
+	 *	TLB maintenance when mapping as well as when unmapping.
+	 */
+	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
+	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1)
+	#define IO_PGTABLE_QUIRK_TLBI_ON_MAP	BIT(2)
+	unsigned long			quirks;
 	unsigned long			pgsize_bitmap;
 	unsigned int			ias;
 	unsigned int			oas;
@@ -65,6 +83,13 @@ struct io_pgtable_cfg {
 			u64	vttbr;
 			u64	vtcr;
 		} arm_lpae_s2_cfg;
+
+		struct {
+			u32	ttbr[2];
+			u32	tcr;
+			u32	nmrr;
+			u32	prrr;
+		} arm_v7s_cfg;
 	};
 };
@@ -121,18 +146,41 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops);
  * @fmt:    The page table format.
  * @cookie: An opaque token provided by the IOMMU driver and passed back to
  *          any callback routines.
+ * @tlb_sync_pending: Private flag for optimising out redundant syncs.
  * @cfg:    A copy of the page table configuration.
  * @ops:    The page table operations in use for this set of page tables.
  */
 struct io_pgtable {
 	enum io_pgtable_fmt	fmt;
 	void			*cookie;
+	bool			tlb_sync_pending;
 	struct io_pgtable_cfg	cfg;
 	struct io_pgtable_ops	ops;
 };
 
 #define io_pgtable_ops_to_pgtable(x) container_of((x), struct io_pgtable, ops)
 
+static inline void io_pgtable_tlb_flush_all(struct io_pgtable *iop)
+{
+	iop->cfg.tlb->tlb_flush_all(iop->cookie);
+	iop->tlb_sync_pending = true;
+}
+
+static inline void io_pgtable_tlb_add_flush(struct io_pgtable *iop,
+		unsigned long iova, size_t size, size_t granule, bool leaf)
+{
+	iop->cfg.tlb->tlb_add_flush(iova, size, granule, leaf, iop->cookie);
+	iop->tlb_sync_pending = true;
+}
+
+static inline void io_pgtable_tlb_sync(struct io_pgtable *iop)
+{
+	if (iop->tlb_sync_pending) {
+		iop->cfg.tlb->tlb_sync(iop->cookie);
+		iop->tlb_sync_pending = false;
+	}
+}
+
 /**
  * struct io_pgtable_init_fns - Alloc/free a set of page tables for a
  *                              particular format.
@@ -149,5 +197,6 @@ extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s1_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_32_lpae_s2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s1_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
+extern struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns;
 
 #endif /* __IO_PGTABLE_H */
@@ -1314,6 +1314,7 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 	unsigned long orig_iova = iova;
 	unsigned int min_pagesz;
 	size_t orig_size = size;
+	phys_addr_t orig_paddr = paddr;
 	int ret = 0;
 
 	if (unlikely(domain->ops->map == NULL ||
@@ -1358,7 +1359,7 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 	if (ret)
 		iommu_unmap(domain, orig_iova, orig_size - size);
 	else
-		trace_map(orig_iova, paddr, orig_size);
+		trace_map(orig_iova, orig_paddr, orig_size);
 
 	return ret;
 }
...
/*
* Copyright (c) 2015-2016 MediaTek Inc.
* Author: Yong Wu <yong.wu@mediatek.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/bug.h>
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/device.h>
#include <linux/dma-iommu.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iommu.h>
#include <linux/iopoll.h>
#include <linux/list.h>
#include <linux/of_address.h>
#include <linux/of_iommu.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <asm/barrier.h>
#include <dt-bindings/memory/mt8173-larb-port.h>
#include <soc/mediatek/smi.h>
#include "io-pgtable.h"
#define REG_MMU_PT_BASE_ADDR 0x000
#define REG_MMU_INVALIDATE 0x020
#define F_ALL_INVLD 0x2
#define F_MMU_INV_RANGE 0x1
#define REG_MMU_INVLD_START_A 0x024
#define REG_MMU_INVLD_END_A 0x028
#define REG_MMU_INV_SEL 0x038
#define F_INVLD_EN0 BIT(0)
#define F_INVLD_EN1 BIT(1)
#define REG_MMU_STANDARD_AXI_MODE 0x048
#define REG_MMU_DCM_DIS 0x050
#define REG_MMU_CTRL_REG 0x110
#define F_MMU_PREFETCH_RT_REPLACE_MOD BIT(4)
#define F_MMU_TF_PROTECT_SEL(prot) (((prot) & 0x3) << 5)
#define REG_MMU_IVRP_PADDR 0x114
#define F_MMU_IVRP_PA_SET(pa) ((pa) >> 1)
#define REG_MMU_INT_CONTROL0 0x120
#define F_L2_MULIT_HIT_EN BIT(0)
#define F_TABLE_WALK_FAULT_INT_EN BIT(1)
#define F_PREETCH_FIFO_OVERFLOW_INT_EN BIT(2)
#define F_MISS_FIFO_OVERFLOW_INT_EN BIT(3)
#define F_PREFETCH_FIFO_ERR_INT_EN BIT(5)
#define F_MISS_FIFO_ERR_INT_EN BIT(6)
#define F_INT_CLR_BIT BIT(12)
#define REG_MMU_INT_MAIN_CONTROL 0x124
#define F_INT_TRANSLATION_FAULT BIT(0)
#define F_INT_MAIN_MULTI_HIT_FAULT BIT(1)
#define F_INT_INVALID_PA_FAULT BIT(2)
#define F_INT_ENTRY_REPLACEMENT_FAULT BIT(3)
#define F_INT_TLB_MISS_FAULT BIT(4)
#define F_INT_MISS_TRANSACTION_FIFO_FAULT BIT(5)
#define F_INT_PRETETCH_TRANSATION_FIFO_FAULT BIT(6)
#define REG_MMU_CPE_DONE 0x12C
#define REG_MMU_FAULT_ST1 0x134
#define REG_MMU_FAULT_VA 0x13c
#define F_MMU_FAULT_VA_MSK 0xfffff000
#define F_MMU_FAULT_VA_WRITE_BIT BIT(1)
#define F_MMU_FAULT_VA_LAYER_BIT BIT(0)
#define REG_MMU_INVLD_PA 0x140
#define REG_MMU_INT_ID 0x150
#define F_MMU0_INT_ID_LARB_ID(a) (((a) >> 7) & 0x7)
#define F_MMU0_INT_ID_PORT_ID(a) (((a) >> 2) & 0x1f)
#define MTK_PROTECT_PA_ALIGN 128
struct mtk_iommu_suspend_reg {
u32 standard_axi_mode;
u32 dcm_dis;
u32 ctrl_reg;
u32 int_control0;
u32 int_main_control;
};
struct mtk_iommu_client_priv {
struct list_head client;
unsigned int mtk_m4u_id;
struct device *m4udev;
};
struct mtk_iommu_domain {
spinlock_t pgtlock; /* lock for page table */
struct io_pgtable_cfg cfg;
struct io_pgtable_ops *iop;
struct iommu_domain domain;
};
struct mtk_iommu_data {
void __iomem *base;
int irq;
struct device *dev;
struct clk *bclk;
phys_addr_t protect_base; /* protect memory base */
struct mtk_iommu_suspend_reg reg;
struct mtk_iommu_domain *m4u_dom;
struct iommu_group *m4u_group;
struct mtk_smi_iommu smi_imu; /* SMI larb iommu info */
};
static struct iommu_ops mtk_iommu_ops;
static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom)
{
return container_of(dom, struct mtk_iommu_domain, domain);
}
static void mtk_iommu_tlb_flush_all(void *cookie)
{
struct mtk_iommu_data *data = cookie;
writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0, data->base + REG_MMU_INV_SEL);
writel_relaxed(F_ALL_INVLD, data->base + REG_MMU_INVALIDATE);
	wmb(); /* Make sure the TLB flush-all has completed */
}
static void mtk_iommu_tlb_add_flush_nosync(unsigned long iova, size_t size,
size_t granule, bool leaf,
void *cookie)
{
struct mtk_iommu_data *data = cookie;
writel_relaxed(F_INVLD_EN1 | F_INVLD_EN0, data->base + REG_MMU_INV_SEL);
writel_relaxed(iova, data->base + REG_MMU_INVLD_START_A);
writel_relaxed(iova + size - 1, data->base + REG_MMU_INVLD_END_A);
writel_relaxed(F_MMU_INV_RANGE, data->base + REG_MMU_INVALIDATE);
}
static void mtk_iommu_tlb_sync(void *cookie)
{
struct mtk_iommu_data *data = cookie;
int ret;
u32 tmp;
ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE, tmp,
tmp != 0, 10, 100000);
if (ret) {
dev_warn(data->dev,
"Partial TLB flush timed out, falling back to full flush\n");
mtk_iommu_tlb_flush_all(cookie);
}
/* Clear the CPE status */
writel_relaxed(0, data->base + REG_MMU_CPE_DONE);
}
static const struct iommu_gather_ops mtk_iommu_gather_ops = {
.tlb_flush_all = mtk_iommu_tlb_flush_all,
.tlb_add_flush = mtk_iommu_tlb_add_flush_nosync,
.tlb_sync = mtk_iommu_tlb_sync,
};
static irqreturn_t mtk_iommu_isr(int irq, void *dev_id)
{
struct mtk_iommu_data *data = dev_id;
struct mtk_iommu_domain *dom = data->m4u_dom;
u32 int_state, regval, fault_iova, fault_pa;
unsigned int fault_larb, fault_port;
bool layer, write;
/* Read error info from registers */
int_state = readl_relaxed(data->base + REG_MMU_FAULT_ST1);
fault_iova = readl_relaxed(data->base + REG_MMU_FAULT_VA);
layer = fault_iova & F_MMU_FAULT_VA_LAYER_BIT;
write = fault_iova & F_MMU_FAULT_VA_WRITE_BIT;
fault_iova &= F_MMU_FAULT_VA_MSK;
fault_pa = readl_relaxed(data->base + REG_MMU_INVLD_PA);
regval = readl_relaxed(data->base + REG_MMU_INT_ID);
fault_larb = F_MMU0_INT_ID_LARB_ID(regval);
fault_port = F_MMU0_INT_ID_PORT_ID(regval);
if (report_iommu_fault(&dom->domain, data->dev, fault_iova,
write ? IOMMU_FAULT_WRITE : IOMMU_FAULT_READ)) {
dev_err_ratelimited(
data->dev,
"fault type=0x%x iova=0x%x pa=0x%x larb=%d port=%d layer=%d %s\n",
int_state, fault_iova, fault_pa, fault_larb, fault_port,
layer, write ? "write" : "read");
}
/* Interrupt clear */
regval = readl_relaxed(data->base + REG_MMU_INT_CONTROL0);
regval |= F_INT_CLR_BIT;
writel_relaxed(regval, data->base + REG_MMU_INT_CONTROL0);
mtk_iommu_tlb_flush_all(data);
return IRQ_HANDLED;
}
static void mtk_iommu_config(struct mtk_iommu_data *data,
struct device *dev, bool enable)
{
struct mtk_iommu_client_priv *head, *cur, *next;
struct mtk_smi_larb_iommu *larb_mmu;
unsigned int larbid, portid;
head = dev->archdata.iommu;
list_for_each_entry_safe(cur, next, &head->client, client) {
larbid = MTK_M4U_TO_LARB(cur->mtk_m4u_id);
portid = MTK_M4U_TO_PORT(cur->mtk_m4u_id);
larb_mmu = &data->smi_imu.larb_imu[larbid];
dev_dbg(dev, "%s iommu port: %d\n",
enable ? "enable" : "disable", portid);
if (enable)
larb_mmu->mmu |= MTK_SMI_MMU_EN(portid);
else
larb_mmu->mmu &= ~MTK_SMI_MMU_EN(portid);
}
}
static int mtk_iommu_domain_finalise(struct mtk_iommu_data *data)
{
struct mtk_iommu_domain *dom = data->m4u_dom;
spin_lock_init(&dom->pgtlock);
dom->cfg = (struct io_pgtable_cfg) {
.quirks = IO_PGTABLE_QUIRK_ARM_NS |
IO_PGTABLE_QUIRK_NO_PERMS |
IO_PGTABLE_QUIRK_TLBI_ON_MAP,
.pgsize_bitmap = mtk_iommu_ops.pgsize_bitmap,
.ias = 32,
.oas = 32,
.tlb = &mtk_iommu_gather_ops,
.iommu_dev = data->dev,
};
dom->iop = alloc_io_pgtable_ops(ARM_V7S, &dom->cfg, data);
if (!dom->iop) {
dev_err(data->dev, "Failed to alloc io pgtable\n");
return -EINVAL;
}
	/* Update our supported page sizes bitmap */
mtk_iommu_ops.pgsize_bitmap = dom->cfg.pgsize_bitmap;
writel(data->m4u_dom->cfg.arm_v7s_cfg.ttbr[0],
data->base + REG_MMU_PT_BASE_ADDR);
return 0;
}
static struct iommu_domain *mtk_iommu_domain_alloc(unsigned type)
{
struct mtk_iommu_domain *dom;
if (type != IOMMU_DOMAIN_DMA)
return NULL;
dom = kzalloc(sizeof(*dom), GFP_KERNEL);
if (!dom)
return NULL;
if (iommu_get_dma_cookie(&dom->domain)) {
kfree(dom);
return NULL;
}
dom->domain.geometry.aperture_start = 0;
dom->domain.geometry.aperture_end = DMA_BIT_MASK(32);
dom->domain.geometry.force_aperture = true;
return &dom->domain;
}
static void mtk_iommu_domain_free(struct iommu_domain *domain)
{
iommu_put_dma_cookie(domain);
kfree(to_mtk_domain(domain));
}
static int mtk_iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
struct mtk_iommu_client_priv *priv = dev->archdata.iommu;
struct mtk_iommu_data *data;
int ret;
if (!priv)
return -ENODEV;
data = dev_get_drvdata(priv->m4udev);
if (!data->m4u_dom) {
data->m4u_dom = dom;
ret = mtk_iommu_domain_finalise(data);
if (ret) {
data->m4u_dom = NULL;
return ret;
}
} else if (data->m4u_dom != dom) {
/* All the client devices should be in the same m4u domain */
dev_err(dev, "try to attach into the error iommu domain\n");
return -EPERM;
}
mtk_iommu_config(data, dev, true);
return 0;
}
static void mtk_iommu_detach_device(struct iommu_domain *domain,
struct device *dev)
{
struct mtk_iommu_client_priv *priv = dev->archdata.iommu;
struct mtk_iommu_data *data;
if (!priv)
return;
data = dev_get_drvdata(priv->m4udev);
mtk_iommu_config(data, dev, false);
}
static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
unsigned long flags;
int ret;
spin_lock_irqsave(&dom->pgtlock, flags);
ret = dom->iop->map(dom->iop, iova, paddr, size, prot);
spin_unlock_irqrestore(&dom->pgtlock, flags);
return ret;
}
static size_t mtk_iommu_unmap(struct iommu_domain *domain,
unsigned long iova, size_t size)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
unsigned long flags;
size_t unmapsz;
spin_lock_irqsave(&dom->pgtlock, flags);
unmapsz = dom->iop->unmap(dom->iop, iova, size);
spin_unlock_irqrestore(&dom->pgtlock, flags);
return unmapsz;
}
static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
unsigned long flags;
phys_addr_t pa;
spin_lock_irqsave(&dom->pgtlock, flags);
pa = dom->iop->iova_to_phys(dom->iop, iova);
spin_unlock_irqrestore(&dom->pgtlock, flags);
return pa;
}
static int mtk_iommu_add_device(struct device *dev)
{
struct iommu_group *group;
	if (!dev->archdata.iommu) /* Not an iommu client device */
return -ENODEV;
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group))
return PTR_ERR(group);
iommu_group_put(group);
return 0;
}
static void mtk_iommu_remove_device(struct device *dev)
{
struct mtk_iommu_client_priv *head, *cur, *next;
head = dev->archdata.iommu;
if (!head)
return;
list_for_each_entry_safe(cur, next, &head->client, client) {
list_del(&cur->client);
kfree(cur);
}
kfree(head);
dev->archdata.iommu = NULL;
iommu_group_remove_device(dev);
}
static struct iommu_group *mtk_iommu_device_group(struct device *dev)
{
struct mtk_iommu_data *data;
struct mtk_iommu_client_priv *priv;
priv = dev->archdata.iommu;
if (!priv)
return ERR_PTR(-ENODEV);
/* All the client devices are in the same m4u iommu-group */
data = dev_get_drvdata(priv->m4udev);
if (!data->m4u_group) {
data->m4u_group = iommu_group_alloc();
if (IS_ERR(data->m4u_group))
dev_err(dev, "Failed to allocate M4U IOMMU group\n");
}
return data->m4u_group;
}
static int mtk_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
{
struct mtk_iommu_client_priv *head, *priv, *next;
struct platform_device *m4updev;
if (args->args_count != 1) {
dev_err(dev, "invalid #iommu-cells(%d) property for IOMMU\n",
args->args_count);
return -EINVAL;
}
if (!dev->archdata.iommu) {
/* Get the m4u device */
m4updev = of_find_device_by_node(args->np);
of_node_put(args->np);
if (WARN_ON(!m4updev))
return -EINVAL;
head = kzalloc(sizeof(*head), GFP_KERNEL);
if (!head)
return -ENOMEM;
dev->archdata.iommu = head;
INIT_LIST_HEAD(&head->client);
head->m4udev = &m4updev->dev;
} else {
head = dev->archdata.iommu;
}
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv)
goto err_free_mem;
priv->mtk_m4u_id = args->args[0];
list_add_tail(&priv->client, &head->client);
return 0;
err_free_mem:
list_for_each_entry_safe(priv, next, &head->client, client)
kfree(priv);
kfree(head);
dev->archdata.iommu = NULL;
return -ENOMEM;
}
static struct iommu_ops mtk_iommu_ops = {
.domain_alloc = mtk_iommu_domain_alloc,
.domain_free = mtk_iommu_domain_free,
.attach_dev = mtk_iommu_attach_device,
.detach_dev = mtk_iommu_detach_device,
.map = mtk_iommu_map,
.unmap = mtk_iommu_unmap,
.map_sg = default_iommu_map_sg,
.iova_to_phys = mtk_iommu_iova_to_phys,
.add_device = mtk_iommu_add_device,
.remove_device = mtk_iommu_remove_device,
.device_group = mtk_iommu_device_group,
.of_xlate = mtk_iommu_of_xlate,
.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
};
static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
{
u32 regval;
int ret;
ret = clk_prepare_enable(data->bclk);
if (ret) {
dev_err(data->dev, "Failed to enable iommu bclk(%d)\n", ret);
return ret;
}
regval = F_MMU_PREFETCH_RT_REPLACE_MOD |
F_MMU_TF_PROTECT_SEL(2);
writel_relaxed(regval, data->base + REG_MMU_CTRL_REG);
regval = F_L2_MULIT_HIT_EN |
F_TABLE_WALK_FAULT_INT_EN |
F_PREETCH_FIFO_OVERFLOW_INT_EN |
F_MISS_FIFO_OVERFLOW_INT_EN |
F_PREFETCH_FIFO_ERR_INT_EN |
F_MISS_FIFO_ERR_INT_EN;
writel_relaxed(regval, data->base + REG_MMU_INT_CONTROL0);
regval = F_INT_TRANSLATION_FAULT |
F_INT_MAIN_MULTI_HIT_FAULT |
F_INT_INVALID_PA_FAULT |
F_INT_ENTRY_REPLACEMENT_FAULT |
F_INT_TLB_MISS_FAULT |
F_INT_MISS_TRANSACTION_FIFO_FAULT |
F_INT_PRETETCH_TRANSATION_FIFO_FAULT;
writel_relaxed(regval, data->base + REG_MMU_INT_MAIN_CONTROL);
writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base),
data->base + REG_MMU_IVRP_PADDR);
writel_relaxed(0, data->base + REG_MMU_DCM_DIS);
writel_relaxed(0, data->base + REG_MMU_STANDARD_AXI_MODE);
if (devm_request_irq(data->dev, data->irq, mtk_iommu_isr, 0,
dev_name(data->dev), (void *)data)) {
writel_relaxed(0, data->base + REG_MMU_PT_BASE_ADDR);
clk_disable_unprepare(data->bclk);
		dev_err(data->dev, "Failed to request IRQ %d\n", data->irq);
return -ENODEV;
}
return 0;
}
static int compare_of(struct device *dev, void *data)
{
return dev->of_node == data;
}
static int mtk_iommu_bind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
return component_bind_all(dev, &data->smi_imu);
}
static void mtk_iommu_unbind(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
component_unbind_all(dev, &data->smi_imu);
}
static const struct component_master_ops mtk_iommu_com_ops = {
.bind = mtk_iommu_bind,
.unbind = mtk_iommu_unbind,
};
static int mtk_iommu_probe(struct platform_device *pdev)
{
struct mtk_iommu_data *data;
struct device *dev = &pdev->dev;
struct resource *res;
struct component_match *match = NULL;
void *protect;
int i, larb_nr, ret;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->dev = dev;
	/* Protect memory. HW will access here when a translation fault occurs. */
protect = devm_kzalloc(dev, MTK_PROTECT_PA_ALIGN * 2, GFP_KERNEL);
if (!protect)
return -ENOMEM;
data->protect_base = ALIGN(virt_to_phys(protect), MTK_PROTECT_PA_ALIGN);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->base = devm_ioremap_resource(dev, res);
if (IS_ERR(data->base))
return PTR_ERR(data->base);
data->irq = platform_get_irq(pdev, 0);
if (data->irq < 0)
return data->irq;
data->bclk = devm_clk_get(dev, "bclk");
if (IS_ERR(data->bclk))
return PTR_ERR(data->bclk);
larb_nr = of_count_phandle_with_args(dev->of_node,
"mediatek,larbs", NULL);
if (larb_nr < 0)
return larb_nr;
data->smi_imu.larb_nr = larb_nr;
for (i = 0; i < larb_nr; i++) {
struct device_node *larbnode;
struct platform_device *plarbdev;
larbnode = of_parse_phandle(dev->of_node, "mediatek,larbs", i);
if (!larbnode)
return -EINVAL;
if (!of_device_is_available(larbnode))
continue;
plarbdev = of_find_device_by_node(larbnode);
of_node_put(larbnode);
if (!plarbdev) {
plarbdev = of_platform_device_create(
larbnode, NULL,
platform_bus_type.dev_root);
if (!plarbdev)
return -EPROBE_DEFER;
}
data->smi_imu.larb_imu[i].dev = &plarbdev->dev;
component_match_add(dev, &match, compare_of, larbnode);
}
platform_set_drvdata(pdev, data);
ret = mtk_iommu_hw_init(data);
if (ret)
return ret;
if (!iommu_present(&platform_bus_type))
bus_set_iommu(&platform_bus_type, &mtk_iommu_ops);
return component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
}
static int mtk_iommu_remove(struct platform_device *pdev)
{
struct mtk_iommu_data *data = platform_get_drvdata(pdev);
if (iommu_present(&platform_bus_type))
bus_set_iommu(&platform_bus_type, NULL);
free_io_pgtable_ops(data->m4u_dom->iop);
clk_disable_unprepare(data->bclk);
devm_free_irq(&pdev->dev, data->irq, data);
component_master_del(&pdev->dev, &mtk_iommu_com_ops);
return 0;
}
static int __maybe_unused mtk_iommu_suspend(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
struct mtk_iommu_suspend_reg *reg = &data->reg;
void __iomem *base = data->base;
reg->standard_axi_mode = readl_relaxed(base +
REG_MMU_STANDARD_AXI_MODE);
reg->dcm_dis = readl_relaxed(base + REG_MMU_DCM_DIS);
reg->ctrl_reg = readl_relaxed(base + REG_MMU_CTRL_REG);
reg->int_control0 = readl_relaxed(base + REG_MMU_INT_CONTROL0);
reg->int_main_control = readl_relaxed(base + REG_MMU_INT_MAIN_CONTROL);
return 0;
}
static int __maybe_unused mtk_iommu_resume(struct device *dev)
{
struct mtk_iommu_data *data = dev_get_drvdata(dev);
struct mtk_iommu_suspend_reg *reg = &data->reg;
void __iomem *base = data->base;
writel_relaxed(data->m4u_dom->cfg.arm_v7s_cfg.ttbr[0],
base + REG_MMU_PT_BASE_ADDR);
writel_relaxed(reg->standard_axi_mode,
base + REG_MMU_STANDARD_AXI_MODE);
writel_relaxed(reg->dcm_dis, base + REG_MMU_DCM_DIS);
writel_relaxed(reg->ctrl_reg, base + REG_MMU_CTRL_REG);
writel_relaxed(reg->int_control0, base + REG_MMU_INT_CONTROL0);
writel_relaxed(reg->int_main_control, base + REG_MMU_INT_MAIN_CONTROL);
writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base),
base + REG_MMU_IVRP_PADDR);
return 0;
}
const struct dev_pm_ops mtk_iommu_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(mtk_iommu_suspend, mtk_iommu_resume)
};
static const struct of_device_id mtk_iommu_of_ids[] = {
{ .compatible = "mediatek,mt8173-m4u", },
{}
};
static struct platform_driver mtk_iommu_driver = {
.probe = mtk_iommu_probe,
.remove = mtk_iommu_remove,
.driver = {
.name = "mtk-iommu",
.of_match_table = mtk_iommu_of_ids,
.pm = &mtk_iommu_pm_ops,
}
};
static int mtk_iommu_init_fn(struct device_node *np)
{
int ret;
struct platform_device *pdev;
pdev = of_platform_device_create(np, NULL, platform_bus_type.dev_root);
if (!pdev)
return -ENOMEM;
ret = platform_driver_register(&mtk_iommu_driver);
if (ret) {
pr_err("%s: Failed to register driver\n", __func__);
return ret;
}
of_iommu_set_ops(np, &mtk_iommu_ops);
return 0;
}
IOMMU_OF_DECLARE(mtkm4u, "mediatek,mt8173-m4u", mtk_iommu_init_fn);
@@ -110,6 +110,7 @@ void of_iommu_set_ops(struct device_node *np, struct iommu_ops *ops)
 	if (WARN_ON(!iommu))
 		return;
 
+	of_node_get(np);
 	INIT_LIST_HEAD(&iommu->list);
 	iommu->np = np;
 	iommu->ops = ops;
...
...@@ -86,7 +86,8 @@ struct rk_iommu_domain { ...@@ -86,7 +86,8 @@ struct rk_iommu_domain {
struct rk_iommu { struct rk_iommu {
struct device *dev; struct device *dev;
void __iomem *base; void __iomem **bases;
int num_mmu;
int irq; int irq;
struct list_head node; /* entry in rk_iommu_domain.iommus */ struct list_head node; /* entry in rk_iommu_domain.iommus */
struct iommu_domain *domain; /* domain to which iommu is attached */ struct iommu_domain *domain; /* domain to which iommu is attached */
...@@ -271,47 +272,70 @@ static u32 rk_iova_page_offset(dma_addr_t iova) ...@@ -271,47 +272,70 @@ static u32 rk_iova_page_offset(dma_addr_t iova)
return (u32)(iova & RK_IOVA_PAGE_MASK) >> RK_IOVA_PAGE_SHIFT; return (u32)(iova & RK_IOVA_PAGE_MASK) >> RK_IOVA_PAGE_SHIFT;
} }
static u32 rk_iommu_read(struct rk_iommu *iommu, u32 offset) static u32 rk_iommu_read(void __iomem *base, u32 offset)
{ {
return readl(iommu->base + offset); return readl(base + offset);
} }
static void rk_iommu_write(struct rk_iommu *iommu, u32 offset, u32 value) static void rk_iommu_write(void __iomem *base, u32 offset, u32 value)
{ {
writel(value, iommu->base + offset); writel(value, base + offset);
} }
static void rk_iommu_command(struct rk_iommu *iommu, u32 command) static void rk_iommu_command(struct rk_iommu *iommu, u32 command)
{ {
writel(command, iommu->base + RK_MMU_COMMAND); int i;
for (i = 0; i < iommu->num_mmu; i++)
writel(command, iommu->bases[i] + RK_MMU_COMMAND);
} }
static void rk_iommu_base_command(void __iomem *base, u32 command)
{
writel(command, base + RK_MMU_COMMAND);
}
static void rk_iommu_zap_lines(struct rk_iommu *iommu, dma_addr_t iova,
			       size_t size)
{
	int i;
	dma_addr_t iova_end = iova + size;
	/*
	 * TODO(djkurtz): Figure out when it is more efficient to shootdown the
	 * entire iotlb rather than iterate over individual iovas.
	 */
	for (i = 0; i < iommu->num_mmu; i++) {
		dma_addr_t cur;

		/* Restart from the first iova for each MMU instance. */
		for (cur = iova; cur < iova_end; cur += SPAGE_SIZE)
			rk_iommu_write(iommu->bases[i], RK_MMU_ZAP_ONE_LINE, cur);
	}
}
static bool rk_iommu_is_stall_active(struct rk_iommu *iommu)
{
	bool active = true;
	int i;

	/* Fold the status bit to 0/1 before ANDing it into a bool. */
	for (i = 0; i < iommu->num_mmu; i++)
		active &= !!(rk_iommu_read(iommu->bases[i], RK_MMU_STATUS) &
			     RK_MMU_STATUS_STALL_ACTIVE);

	return active;
}

static bool rk_iommu_is_paging_enabled(struct rk_iommu *iommu)
{
	bool enable = true;
	int i;

	for (i = 0; i < iommu->num_mmu; i++)
		enable &= !!(rk_iommu_read(iommu->bases[i], RK_MMU_STATUS) &
			     RK_MMU_STATUS_PAGING_ENABLED);

	return enable;
}
static int rk_iommu_enable_stall(struct rk_iommu *iommu)
{
	int ret, i;

	if (rk_iommu_is_stall_active(iommu))
		return 0;
@@ -324,15 +348,16 @@ static int rk_iommu_enable_stall(struct rk_iommu *iommu)
	ret = rk_wait_for(rk_iommu_is_stall_active(iommu), 1);
	if (ret)
		for (i = 0; i < iommu->num_mmu; i++)
			dev_err(iommu->dev, "Enable stall request timed out, status: %#08x\n",
				rk_iommu_read(iommu->bases[i], RK_MMU_STATUS));

	return ret;
}

static int rk_iommu_disable_stall(struct rk_iommu *iommu)
{
	int ret, i;

	if (!rk_iommu_is_stall_active(iommu))
		return 0;
@@ -341,15 +366,16 @@ static int rk_iommu_disable_stall(struct rk_iommu *iommu)
	ret = rk_wait_for(!rk_iommu_is_stall_active(iommu), 1);
	if (ret)
		for (i = 0; i < iommu->num_mmu; i++)
			dev_err(iommu->dev, "Disable stall request timed out, status: %#08x\n",
				rk_iommu_read(iommu->bases[i], RK_MMU_STATUS));

	return ret;
}

static int rk_iommu_enable_paging(struct rk_iommu *iommu)
{
	int ret, i;

	if (rk_iommu_is_paging_enabled(iommu))
		return 0;
@@ -358,15 +384,16 @@ static int rk_iommu_enable_paging(struct rk_iommu *iommu)
	ret = rk_wait_for(rk_iommu_is_paging_enabled(iommu), 1);
	if (ret)
		for (i = 0; i < iommu->num_mmu; i++)
			dev_err(iommu->dev, "Enable paging request timed out, status: %#08x\n",
				rk_iommu_read(iommu->bases[i], RK_MMU_STATUS));

	return ret;
}

static int rk_iommu_disable_paging(struct rk_iommu *iommu)
{
	int ret, i;

	if (!rk_iommu_is_paging_enabled(iommu))
		return 0;
@@ -375,41 +402,49 @@ static int rk_iommu_disable_paging(struct rk_iommu *iommu)
	ret = rk_wait_for(!rk_iommu_is_paging_enabled(iommu), 1);
	if (ret)
		for (i = 0; i < iommu->num_mmu; i++)
			dev_err(iommu->dev, "Disable paging request timed out, status: %#08x\n",
				rk_iommu_read(iommu->bases[i], RK_MMU_STATUS));

	return ret;
}

static int rk_iommu_force_reset(struct rk_iommu *iommu)
{
	int ret, i;
	u32 dte_addr;

	/*
	 * Check if register DTE_ADDR is working by writing DTE_ADDR_DUMMY
	 * and verifying that upper 5 nybbles are read back.
	 */
	for (i = 0; i < iommu->num_mmu; i++) {
		rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, DTE_ADDR_DUMMY);

		dte_addr = rk_iommu_read(iommu->bases[i], RK_MMU_DTE_ADDR);
		if (dte_addr != (DTE_ADDR_DUMMY & RK_DTE_PT_ADDRESS_MASK)) {
			dev_err(iommu->dev, "Error during raw reset. MMU_DTE_ADDR is not functioning\n");
			return -EFAULT;
		}
	}

	rk_iommu_command(iommu, RK_MMU_CMD_FORCE_RESET);

	for (i = 0; i < iommu->num_mmu; i++) {
		ret = rk_wait_for(rk_iommu_read(iommu->bases[i], RK_MMU_DTE_ADDR) == 0x00000000,
				  FORCE_RESET_TIMEOUT);
		if (ret) {
			dev_err(iommu->dev, "FORCE_RESET command timed out\n");
			return ret;
		}
	}

	return 0;
}
static void log_iova(struct rk_iommu *iommu, int index, dma_addr_t iova)
{
	void __iomem *base = iommu->bases[index];
	u32 dte_index, pte_index, page_offset;
	u32 mmu_dte_addr;
	phys_addr_t mmu_dte_addr_phys, dte_addr_phys;
@@ -425,7 +460,7 @@ static void log_iova(struct rk_iommu *iommu, dma_addr_t iova)
	pte_index = rk_iova_pte_index(iova);
	page_offset = rk_iova_page_offset(iova);

	mmu_dte_addr = rk_iommu_read(base, RK_MMU_DTE_ADDR);
	mmu_dte_addr_phys = (phys_addr_t)mmu_dte_addr;

	dte_addr_phys = mmu_dte_addr_phys + (4 * dte_index);
@@ -460,51 +495,56 @@ static irqreturn_t rk_iommu_irq(int irq, void *dev_id)
	u32 status;
	u32 int_status;
	dma_addr_t iova;
	irqreturn_t ret = IRQ_NONE;
	int i;

	for (i = 0; i < iommu->num_mmu; i++) {
		int_status = rk_iommu_read(iommu->bases[i], RK_MMU_INT_STATUS);
		if (int_status == 0)
			continue;

		ret = IRQ_HANDLED;
		iova = rk_iommu_read(iommu->bases[i], RK_MMU_PAGE_FAULT_ADDR);

		if (int_status & RK_MMU_IRQ_PAGE_FAULT) {
			int flags;

			status = rk_iommu_read(iommu->bases[i], RK_MMU_STATUS);
			flags = (status & RK_MMU_STATUS_PAGE_FAULT_IS_WRITE) ?
				IOMMU_FAULT_WRITE : IOMMU_FAULT_READ;

			dev_err(iommu->dev, "Page fault at %pad of type %s\n",
				&iova,
				(flags == IOMMU_FAULT_WRITE) ? "write" : "read");

			log_iova(iommu, i, iova);

			/*
			 * Report page fault to any installed handlers.
			 * Ignore the return code, though, since we always zap cache
			 * and clear the page fault anyway.
			 */
			if (iommu->domain)
				report_iommu_fault(iommu->domain, iommu->dev, iova,
						   flags);
			else
				dev_err(iommu->dev, "Page fault while iommu not attached to domain?\n");

			rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE);
			rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_PAGE_FAULT_DONE);
		}

		if (int_status & RK_MMU_IRQ_BUS_ERROR)
			dev_err(iommu->dev, "BUS_ERROR occurred at %pad\n", &iova);

		if (int_status & ~RK_MMU_IRQ_MASK)
			dev_err(iommu->dev, "unexpected int_status: %#08x\n",
				int_status);

		rk_iommu_write(iommu->bases[i], RK_MMU_INT_CLEAR, int_status);
	}

	return ret;
}
static phys_addr_t rk_iommu_iova_to_phys(struct iommu_domain *domain,
@@ -746,7 +786,7 @@ static int rk_iommu_attach_device(struct iommu_domain *domain,
	struct rk_iommu *iommu;
	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
	unsigned long flags;
	int ret, i;
	phys_addr_t dte_addr;

	/*
@@ -773,9 +813,11 @@ static int rk_iommu_attach_device(struct iommu_domain *domain,
		return ret;

	dte_addr = virt_to_phys(rk_domain->dt);
	for (i = 0; i < iommu->num_mmu; i++) {
		rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, dte_addr);
		rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE);
		rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, RK_MMU_IRQ_MASK);
	}

	ret = rk_iommu_enable_paging(iommu);
	if (ret)
@@ -798,6 +840,7 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
	struct rk_iommu *iommu;
	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
	unsigned long flags;
	int i;

	/* Allow 'virtual devices' (eg drm) to detach from domain */
	iommu = rk_iommu_from_dev(dev);
@@ -811,8 +854,10 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
	/* Ignore error while disabling, just keep going */
	rk_iommu_enable_stall(iommu);
	rk_iommu_disable_paging(iommu);
	for (i = 0; i < iommu->num_mmu; i++) {
		rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, 0);
		rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, 0);
	}
	rk_iommu_disable_stall(iommu);

	devm_free_irq(dev, iommu->irq, iommu);
@@ -988,6 +1033,7 @@ static int rk_iommu_probe(struct platform_device *pdev)
	struct device *dev = &pdev->dev;
	struct rk_iommu *iommu;
	struct resource *res;
	int i;

	iommu = devm_kzalloc(dev, sizeof(*iommu), GFP_KERNEL);
	if (!iommu)
@@ -995,11 +1041,21 @@ static int rk_iommu_probe(struct platform_device *pdev)
	platform_set_drvdata(pdev, iommu);
	iommu->dev = dev;
	iommu->num_mmu = 0;

	/* Size the array by the resource count; num_mmu is still 0 here. */
	iommu->bases = devm_kzalloc(dev, sizeof(*iommu->bases) * pdev->num_resources,
				    GFP_KERNEL);
	if (!iommu->bases)
		return -ENOMEM;

	for (i = 0; i < pdev->num_resources; i++) {
		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
		iommu->bases[i] = devm_ioremap_resource(&pdev->dev, res);
		if (IS_ERR(iommu->bases[i]))
			continue;
		iommu->num_mmu++;
	}
	if (iommu->num_mmu == 0)
		return PTR_ERR(iommu->bases[0]);

	iommu->irq = platform_get_irq(pdev, 0);
	if (iommu->irq < 0) {
...
@@ -114,6 +114,14 @@ config JZ4780_NEMC
	  the Ingenic JZ4780. This controller is used to handle external
	  memory devices such as NAND and SRAM.

config MTK_SMI
	bool
	depends on ARCH_MEDIATEK || COMPILE_TEST
	help
	  This driver is for the Memory Controller module in MediaTek SoCs.
	  It mainly helps enable/disable the IOMMU and controls the power
	  domain and clocks for each local arbiter.

source "drivers/memory/tegra/Kconfig"

endif
@@ -15,5 +15,6 @@ obj-$(CONFIG_FSL_IFC) += fsl_ifc.o
obj-$(CONFIG_MVEBU_DEVBUS)	+= mvebu-devbus.o
obj-$(CONFIG_TEGRA20_MC)	+= tegra20-mc.o
obj-$(CONFIG_JZ4780_NEMC)	+= jz4780-nemc.o
obj-$(CONFIG_MTK_SMI)		+= mtk-smi.o
obj-$(CONFIG_TEGRA_MC)		+= tegra/
/*
* Copyright (c) 2015-2016 MediaTek Inc.
* Author: Yong Wu <yong.wu@mediatek.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/clk.h>
#include <linux/component.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <soc/mediatek/smi.h>
#define SMI_LARB_MMU_EN 0xf00
struct mtk_smi {
struct device *dev;
struct clk *clk_apb, *clk_smi;
};
struct mtk_smi_larb { /* larb: local arbiter */
struct mtk_smi smi;
void __iomem *base;
struct device *smi_common_dev;
u32 *mmu;
};
static int mtk_smi_enable(const struct mtk_smi *smi)
{
int ret;
ret = pm_runtime_get_sync(smi->dev);
if (ret < 0)
return ret;
ret = clk_prepare_enable(smi->clk_apb);
if (ret)
goto err_put_pm;
ret = clk_prepare_enable(smi->clk_smi);
if (ret)
goto err_disable_apb;
return 0;
err_disable_apb:
clk_disable_unprepare(smi->clk_apb);
err_put_pm:
pm_runtime_put_sync(smi->dev);
return ret;
}
static void mtk_smi_disable(const struct mtk_smi *smi)
{
clk_disable_unprepare(smi->clk_smi);
clk_disable_unprepare(smi->clk_apb);
pm_runtime_put_sync(smi->dev);
}
int mtk_smi_larb_get(struct device *larbdev)
{
struct mtk_smi_larb *larb = dev_get_drvdata(larbdev);
struct mtk_smi *common = dev_get_drvdata(larb->smi_common_dev);
int ret;
/* Enable the smi-common's power and clocks */
ret = mtk_smi_enable(common);
if (ret)
return ret;
/* Enable the larb's power and clocks */
ret = mtk_smi_enable(&larb->smi);
if (ret) {
mtk_smi_disable(common);
return ret;
}
/* Configure the iommu info for this larb */
writel(*larb->mmu, larb->base + SMI_LARB_MMU_EN);
return 0;
}
void mtk_smi_larb_put(struct device *larbdev)
{
struct mtk_smi_larb *larb = dev_get_drvdata(larbdev);
struct mtk_smi *common = dev_get_drvdata(larb->smi_common_dev);
/*
* Don't de-configure the iommu info for this larb since there may be
* several modules in this larb.
* The iommu info will be reset after power off.
*/
mtk_smi_disable(&larb->smi);
mtk_smi_disable(common);
}
static int
mtk_smi_larb_bind(struct device *dev, struct device *master, void *data)
{
struct mtk_smi_larb *larb = dev_get_drvdata(dev);
struct mtk_smi_iommu *smi_iommu = data;
unsigned int i;
for (i = 0; i < smi_iommu->larb_nr; i++) {
if (dev == smi_iommu->larb_imu[i].dev) {
/* The 'mmu' may be updated in iommu-attach/detach. */
larb->mmu = &smi_iommu->larb_imu[i].mmu;
return 0;
}
}
return -ENODEV;
}
static void
mtk_smi_larb_unbind(struct device *dev, struct device *master, void *data)
{
/* Do nothing as the iommu is always enabled. */
}
static const struct component_ops mtk_smi_larb_component_ops = {
.bind = mtk_smi_larb_bind,
.unbind = mtk_smi_larb_unbind,
};
static int mtk_smi_larb_probe(struct platform_device *pdev)
{
struct mtk_smi_larb *larb;
struct resource *res;
struct device *dev = &pdev->dev;
struct device_node *smi_node;
struct platform_device *smi_pdev;
if (!dev->pm_domain)
return -EPROBE_DEFER;
larb = devm_kzalloc(dev, sizeof(*larb), GFP_KERNEL);
if (!larb)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
larb->base = devm_ioremap_resource(dev, res);
if (IS_ERR(larb->base))
return PTR_ERR(larb->base);
larb->smi.clk_apb = devm_clk_get(dev, "apb");
if (IS_ERR(larb->smi.clk_apb))
return PTR_ERR(larb->smi.clk_apb);
larb->smi.clk_smi = devm_clk_get(dev, "smi");
if (IS_ERR(larb->smi.clk_smi))
return PTR_ERR(larb->smi.clk_smi);
larb->smi.dev = dev;
smi_node = of_parse_phandle(dev->of_node, "mediatek,smi", 0);
if (!smi_node)
return -EINVAL;
smi_pdev = of_find_device_by_node(smi_node);
of_node_put(smi_node);
if (smi_pdev) {
larb->smi_common_dev = &smi_pdev->dev;
} else {
dev_err(dev, "Failed to get the smi_common device\n");
return -EINVAL;
}
pm_runtime_enable(dev);
platform_set_drvdata(pdev, larb);
return component_add(dev, &mtk_smi_larb_component_ops);
}
static int mtk_smi_larb_remove(struct platform_device *pdev)
{
pm_runtime_disable(&pdev->dev);
component_del(&pdev->dev, &mtk_smi_larb_component_ops);
return 0;
}
static const struct of_device_id mtk_smi_larb_of_ids[] = {
{ .compatible = "mediatek,mt8173-smi-larb",},
{}
};
static struct platform_driver mtk_smi_larb_driver = {
.probe = mtk_smi_larb_probe,
.remove = mtk_smi_larb_remove,
.driver = {
.name = "mtk-smi-larb",
.of_match_table = mtk_smi_larb_of_ids,
}
};
static int mtk_smi_common_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mtk_smi *common;
if (!dev->pm_domain)
return -EPROBE_DEFER;
common = devm_kzalloc(dev, sizeof(*common), GFP_KERNEL);
if (!common)
return -ENOMEM;
common->dev = dev;
common->clk_apb = devm_clk_get(dev, "apb");
if (IS_ERR(common->clk_apb))
return PTR_ERR(common->clk_apb);
common->clk_smi = devm_clk_get(dev, "smi");
if (IS_ERR(common->clk_smi))
return PTR_ERR(common->clk_smi);
pm_runtime_enable(dev);
platform_set_drvdata(pdev, common);
return 0;
}
static int mtk_smi_common_remove(struct platform_device *pdev)
{
pm_runtime_disable(&pdev->dev);
return 0;
}
static const struct of_device_id mtk_smi_common_of_ids[] = {
{ .compatible = "mediatek,mt8173-smi-common", },
{}
};
static struct platform_driver mtk_smi_common_driver = {
.probe = mtk_smi_common_probe,
.remove = mtk_smi_common_remove,
.driver = {
.name = "mtk-smi-common",
.of_match_table = mtk_smi_common_of_ids,
}
};
static int __init mtk_smi_init(void)
{
int ret;
ret = platform_driver_register(&mtk_smi_common_driver);
if (ret != 0) {
pr_err("Failed to register SMI driver\n");
return ret;
}
ret = platform_driver_register(&mtk_smi_larb_driver);
if (ret != 0) {
pr_err("Failed to register SMI-LARB driver\n");
goto err_unreg_smi;
}
return ret;
err_unreg_smi:
platform_driver_unregister(&mtk_smi_common_driver);
return ret;
}
subsys_initcall(mtk_smi_init);
/*
* Copyright (c) 2015-2016 MediaTek Inc.
* Author: Yong Wu <yong.wu@mediatek.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __DTS_IOMMU_PORT_MT8173_H
#define __DTS_IOMMU_PORT_MT8173_H
#define MTK_M4U_ID(larb, port) (((larb) << 5) | (port))
/* Local arbiter ID */
#define MTK_M4U_TO_LARB(id) (((id) >> 5) & 0x7)
/* PortID within the local arbiter */
#define MTK_M4U_TO_PORT(id) ((id) & 0x1f)
#define M4U_LARB0_ID 0
#define M4U_LARB1_ID 1
#define M4U_LARB2_ID 2
#define M4U_LARB3_ID 3
#define M4U_LARB4_ID 4
#define M4U_LARB5_ID 5
/* larb0 */
#define M4U_PORT_DISP_OVL0 MTK_M4U_ID(M4U_LARB0_ID, 0)
#define M4U_PORT_DISP_RDMA0 MTK_M4U_ID(M4U_LARB0_ID, 1)
#define M4U_PORT_DISP_WDMA0 MTK_M4U_ID(M4U_LARB0_ID, 2)
#define M4U_PORT_DISP_OD_R MTK_M4U_ID(M4U_LARB0_ID, 3)
#define M4U_PORT_DISP_OD_W MTK_M4U_ID(M4U_LARB0_ID, 4)
#define M4U_PORT_MDP_RDMA0 MTK_M4U_ID(M4U_LARB0_ID, 5)
#define M4U_PORT_MDP_WDMA MTK_M4U_ID(M4U_LARB0_ID, 6)
#define M4U_PORT_MDP_WROT0 MTK_M4U_ID(M4U_LARB0_ID, 7)
/* larb1 */
#define M4U_PORT_HW_VDEC_MC_EXT MTK_M4U_ID(M4U_LARB1_ID, 0)
#define M4U_PORT_HW_VDEC_PP_EXT MTK_M4U_ID(M4U_LARB1_ID, 1)
#define M4U_PORT_HW_VDEC_UFO_EXT MTK_M4U_ID(M4U_LARB1_ID, 2)
#define M4U_PORT_HW_VDEC_VLD_EXT MTK_M4U_ID(M4U_LARB1_ID, 3)
#define M4U_PORT_HW_VDEC_VLD2_EXT MTK_M4U_ID(M4U_LARB1_ID, 4)
#define M4U_PORT_HW_VDEC_AVC_MV_EXT MTK_M4U_ID(M4U_LARB1_ID, 5)
#define M4U_PORT_HW_VDEC_PRED_RD_EXT MTK_M4U_ID(M4U_LARB1_ID, 6)
#define M4U_PORT_HW_VDEC_PRED_WR_EXT MTK_M4U_ID(M4U_LARB1_ID, 7)
#define M4U_PORT_HW_VDEC_PPWRAP_EXT MTK_M4U_ID(M4U_LARB1_ID, 8)
#define M4U_PORT_HW_VDEC_TILE MTK_M4U_ID(M4U_LARB1_ID, 9)
/* larb2 */
#define M4U_PORT_IMGO MTK_M4U_ID(M4U_LARB2_ID, 0)
#define M4U_PORT_RRZO MTK_M4U_ID(M4U_LARB2_ID, 1)
#define M4U_PORT_AAO MTK_M4U_ID(M4U_LARB2_ID, 2)
#define M4U_PORT_LCSO MTK_M4U_ID(M4U_LARB2_ID, 3)
#define M4U_PORT_ESFKO MTK_M4U_ID(M4U_LARB2_ID, 4)
#define M4U_PORT_IMGO_D MTK_M4U_ID(M4U_LARB2_ID, 5)
#define M4U_PORT_LSCI MTK_M4U_ID(M4U_LARB2_ID, 6)
#define M4U_PORT_LSCI_D MTK_M4U_ID(M4U_LARB2_ID, 7)
#define M4U_PORT_BPCI MTK_M4U_ID(M4U_LARB2_ID, 8)
#define M4U_PORT_BPCI_D MTK_M4U_ID(M4U_LARB2_ID, 9)
#define M4U_PORT_UFDI MTK_M4U_ID(M4U_LARB2_ID, 10)
#define M4U_PORT_IMGI MTK_M4U_ID(M4U_LARB2_ID, 11)
#define M4U_PORT_IMG2O MTK_M4U_ID(M4U_LARB2_ID, 12)
#define M4U_PORT_IMG3O MTK_M4U_ID(M4U_LARB2_ID, 13)
#define M4U_PORT_VIPI MTK_M4U_ID(M4U_LARB2_ID, 14)
#define M4U_PORT_VIP2I MTK_M4U_ID(M4U_LARB2_ID, 15)
#define M4U_PORT_VIP3I MTK_M4U_ID(M4U_LARB2_ID, 16)
#define M4U_PORT_LCEI MTK_M4U_ID(M4U_LARB2_ID, 17)
#define M4U_PORT_RB MTK_M4U_ID(M4U_LARB2_ID, 18)
#define M4U_PORT_RP MTK_M4U_ID(M4U_LARB2_ID, 19)
#define M4U_PORT_WR MTK_M4U_ID(M4U_LARB2_ID, 20)
/* larb3 */
#define M4U_PORT_VENC_RCPU MTK_M4U_ID(M4U_LARB3_ID, 0)
#define M4U_PORT_VENC_REC MTK_M4U_ID(M4U_LARB3_ID, 1)
#define M4U_PORT_VENC_BSDMA MTK_M4U_ID(M4U_LARB3_ID, 2)
#define M4U_PORT_VENC_SV_COMV MTK_M4U_ID(M4U_LARB3_ID, 3)
#define M4U_PORT_VENC_RD_COMV MTK_M4U_ID(M4U_LARB3_ID, 4)
#define M4U_PORT_JPGENC_RDMA MTK_M4U_ID(M4U_LARB3_ID, 5)
#define M4U_PORT_JPGENC_BSDMA MTK_M4U_ID(M4U_LARB3_ID, 6)
#define M4U_PORT_JPGDEC_WDMA MTK_M4U_ID(M4U_LARB3_ID, 7)
#define M4U_PORT_JPGDEC_BSDMA MTK_M4U_ID(M4U_LARB3_ID, 8)
#define M4U_PORT_VENC_CUR_LUMA MTK_M4U_ID(M4U_LARB3_ID, 9)
#define M4U_PORT_VENC_CUR_CHROMA MTK_M4U_ID(M4U_LARB3_ID, 10)
#define M4U_PORT_VENC_REF_LUMA MTK_M4U_ID(M4U_LARB3_ID, 11)
#define M4U_PORT_VENC_REF_CHROMA MTK_M4U_ID(M4U_LARB3_ID, 12)
#define M4U_PORT_VENC_NBM_RDMA MTK_M4U_ID(M4U_LARB3_ID, 13)
#define M4U_PORT_VENC_NBM_WDMA MTK_M4U_ID(M4U_LARB3_ID, 14)
/* larb4 */
#define M4U_PORT_DISP_OVL1 MTK_M4U_ID(M4U_LARB4_ID, 0)
#define M4U_PORT_DISP_RDMA1 MTK_M4U_ID(M4U_LARB4_ID, 1)
#define M4U_PORT_DISP_RDMA2 MTK_M4U_ID(M4U_LARB4_ID, 2)
#define M4U_PORT_DISP_WDMA1 MTK_M4U_ID(M4U_LARB4_ID, 3)
#define M4U_PORT_MDP_RDMA1 MTK_M4U_ID(M4U_LARB4_ID, 4)
#define M4U_PORT_MDP_WROT1 MTK_M4U_ID(M4U_LARB4_ID, 5)
/* larb5 */
#define M4U_PORT_VENC_RCPU_SET2 MTK_M4U_ID(M4U_LARB5_ID, 0)
#define M4U_PORT_VENC_REC_FRM_SET2 MTK_M4U_ID(M4U_LARB5_ID, 1)
#define M4U_PORT_VENC_REF_LUMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 2)
#define M4U_PORT_VENC_REC_CHROMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 3)
#define M4U_PORT_VENC_BSDMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 4)
#define M4U_PORT_VENC_CUR_LUMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 5)
#define M4U_PORT_VENC_CUR_CHROMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 6)
#define M4U_PORT_VENC_RD_COMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 7)
#define M4U_PORT_VENC_SV_COMA_SET2 MTK_M4U_ID(M4U_LARB5_ID, 8)
#endif
/*
* Copyright (c) 2015-2016 MediaTek Inc.
* Author: Yong Wu <yong.wu@mediatek.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef MTK_IOMMU_SMI_H
#define MTK_IOMMU_SMI_H
#include <linux/bitops.h>
#include <linux/device.h>
#ifdef CONFIG_MTK_SMI
#define MTK_LARB_NR_MAX 8
#define MTK_SMI_MMU_EN(port) BIT(port)
struct mtk_smi_larb_iommu {
struct device *dev;
unsigned int mmu;
};
struct mtk_smi_iommu {
unsigned int larb_nr;
struct mtk_smi_larb_iommu larb_imu[MTK_LARB_NR_MAX];
};
/*
 * mtk_smi_larb_get: Enable the power domain and clocks for this local arbiter.
 *                   It also initializes some basic settings (like the iommu).
 * mtk_smi_larb_put: Disable the power domain and clocks for this local arbiter.
 * Both should be called in non-atomic context.
 *
 * Returns 0 if successful, negative on failure.
 */
int mtk_smi_larb_get(struct device *larbdev);
void mtk_smi_larb_put(struct device *larbdev);
#else
static inline int mtk_smi_larb_get(struct device *larbdev)
{
return 0;
}
static inline void mtk_smi_larb_put(struct device *larbdev) { }
#endif
#endif