- 05 Oct, 2023 1 commit
-
-
Niklas Schnelle authored
Together with enabling the Function Measurement Block, zpci_fmb_enable_device() also resets the software counters. This allows "echo 0 > /sys/kernel/debug/pci/<dev>/statistics" followed by "echo 1 > /../statistics" to reset all counters. In commit c76c067e ("s390/pci: Use dma-iommu layer") this use of the now obsolete counters in struct zpci_device was missed, as was their removal. Fix this by resetting the new counters and removing the old ones.
Fixes: c76c067e ("s390/pci: Use dma-iommu layer")
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Link: https://lore.kernel.org/r/20231004-dma_iommu_fix-v1-1-129777cd8232@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 02 Oct, 2023 6 commits
-
-
Niklas Schnelle authored
Flush queues currently use a fixed compile-time size of 256 entries. This being a power of 2 allows the compiler to use shift and mask instead of more expensive modulo operations. With per-CPU flush queues, larger queue sizes would hit per-CPU allocation limits; with a single flush queue these limits do not apply. Also, since single queues are particularly suitable for virtualized environments with expensive IOTLB flushes, these environments benefit especially from larger queues and thus fewer flushes.

To this end, re-order struct iova_fq so we can use a dynamic array, and introduce the flush queue size and timeouts as new options in the iommu_dma_options struct. So as not to lose the shift and mask optimization, use a power of 2 for the length and use explicit shift and mask instead of letting the compiler optimize this. A large queue size and a 1 second timeout are then set for the shadow-on-flush case used by s390 paged memory guests. This then brings performance on par with the previous s390-specific DMA API implementation.

Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> #s390
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-6-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
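
As a minimal sketch of the shift-and-mask trick described above (illustrative names, not the kernel's struct iova_fq): with a power-of-two ring length, the modulo reduction becomes a single bitwise AND with length - 1.

    #include <stddef.h>

    /* Illustrative sketch, not the kernel code. */
    #define FQ_LEN_SHIFT 8                    /* ring length = 1 << 8 = 256 */
    #define FQ_LEN       (1u << FQ_LEN_SHIFT)
    #define FQ_MASK      (FQ_LEN - 1)         /* valid only for power-of-two lengths */

    struct fq_sketch {
        unsigned int head;
        unsigned int tail;
        unsigned long entries[];              /* flexible array allows a runtime-chosen size */
    };

    /* Advance an index with an AND instead of the costlier '%' operator. */
    static inline unsigned int fq_ring_next(unsigned int idx)
    {
        return (idx + 1) & FQ_MASK;
    }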
-
Niklas Schnelle authored
In some virtualized environments, including s390 paged memory guests, IOTLB flushes are used to update IOMMU shadow tables. Due to this, they are much more expensive than in typical bare metal environments or non-paged s390 guests. In addition they may parallelize poorly in virtualized environments. This changes the trade-off for flushing IOVAs such that minimizing the number of IOTLB flushes trumps any benefit of cheaper queuing operations or increased parallelism.

In this scenario per-CPU flush queues pose several problems. First, per-CPU memory is often quite limited, prohibiting larger queues. Second, collecting IOVAs per CPU but flushing via a global timeout reduces the number of IOVAs flushed per timeout, especially on s390 where PCI interrupts may not be bound to a specific CPU.

Let's introduce a single flush queue mode that reuses the same queue logic but only allocates a single global queue. This mode is selected by dma-iommu if a newly introduced .shadow_on_flush flag is set in struct dev_iommu. As a first user, the s390 IOMMU driver sets this flag during probe_device. With the unchanged small FQ size and timeouts this setting is worse than per-CPU queues, but a follow-up patch will make the FQ size and timeout variable. Together this allows the common IOVA flushing code to more closely resemble the global flush behavior used by s390's previous internal DMA API implementation.

Link: https://lore.kernel.org/all/9a466109-01c5-96b0-bf03-304123f435ee@arm.com/
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> #s390
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-5-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
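
A hedged sketch of the selection this describes; the .shadow_on_flush flag comes from the commit text, while the queue-type enum and helper are illustrative stand-ins for the dma-iommu internals.

    enum fq_type { FQ_TYPE_PERCPU, FQ_TYPE_SINGLE };

    /* Stand-in for the relevant bit of struct dev_iommu. */
    struct dev_iommu_sketch {
        unsigned int shadow_on_flush : 1;     /* set by the driver at probe time */
    };

    static enum fq_type pick_fq_type(const struct dev_iommu_sketch *param)
    {
        /* Expensive, shadowing IOTLB flushes favor one big global queue. */
        return param->shadow_on_flush ? FQ_TYPE_SINGLE : FQ_TYPE_PERCPU;
    }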
-
Niklas Schnelle authored
ISM devices are virtual PCI devices used for cross-LPAR communication. Unlike real PCI devices, ISM devices do not use the hardware IOMMU but inspect IOMMU translation tables directly on IOTLB flush (s390 RPCIT instruction). ISM devices keep their DMA allocations static and only very rarely unmap at all. For each IOTLB flush that occurs after an unmap, the ISM devices will however inspect the area of the IOVA space indicated by the flush. This means that for the global IOTLB flushes used by the flush queue mechanism the entire IOVA space would be inspected. In principle this would be fine, albeit potentially unnecessarily slow; it turns out, however, that ISM devices are sensitive to seeing IOVA addresses that are currently in use in the IOVA range being flushed. Seeing such in-use IOVA addresses will cause the ISM device to enter an error state and become unusable.

Fix this by claiming IOMMU_CAP_DEFERRED_FLUSH only for non-ISM devices. This makes sure IOTLB flushes only cover IOVAs that have been unmapped and also restricts the range of the IOTLB flush, potentially reducing latency spikes.

Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-4-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
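
The capability gating plausibly looks like the following sketch, modeled on the s390 IOMMU driver; treat the field and constant names as illustrative rather than verbatim.

    static bool s390_iommu_capable_sketch(struct device *dev, enum iommu_cap cap)
    {
        struct zpci_dev *zdev = to_zpci_dev(dev);

        switch (cap) {
        case IOMMU_CAP_DEFERRED_FLUSH:
            /* ISM devices must never observe batched (deferred) flushes. */
            return zdev->pft != PCI_FUNC_TYPE_ISM;
        default:
            return false;
        }
    }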
-
Niklas Schnelle authored
While s390 already has a standard IOMMU driver and previous changes have added I/O TLB flushing operations, this driver is currently only used for user-space PCI access such as vfio-pci. For the DMA API, s390 instead utilizes its own implementation in arch/s390/pci/pci_dma.c which drives the same hardware and shares some code, but requires a complex and fragile hand-over between DMA API and IOMMU API use of a device and, despite the code sharing, still leads to significant duplication and maintenance effort. Let's utilize the common code DMA API implementation from drivers/iommu/dma-iommu.c instead, allowing us to get rid of arch/s390/pci/pci_dma.c.

Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-3-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Niklas Schnelle authored
With the IOMMU always controlled through the IOMMU driver, testing for zdev->s390_domain is not a valid indication of the device being passed through. Instead, test whether zdev->kzdev is set.

Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-2-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Niklas Schnelle authored
On s390, when using a paging hypervisor, .iotlb_sync_map is used to sync mappings by letting the hypervisor inspect the synced IOVA range and update a shadow table. This however means that .iotlb_sync_map can fail as the hypervisor may run out of resources while doing the sync. This can be due to the hypervisor being unable to pin guest pages, a limit on mapped addresses such as vfio_iommu_type1.dma_entry_limit, or a lack of other resources. Either way, such a failure to sync a mapping should result in a DMA_MAPPING_ERROR.

Especially when running with batched IOTLB flushes for unmap, it may be that some IOVAs have already been invalidated but not yet synced via .iotlb_sync_map. Thus, if the hypervisor indicates running out of resources, first do a global flush allowing the hypervisor to free resources associated with these mappings, and retry creating the new mappings; only if that also fails, report the error to callers.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com> # sun50i
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-1-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
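
Sketched in plain C, the recovery sequence reads as follows; sync_map() and flush_all() are hypothetical stand-ins for the driver's .iotlb_sync_map and global-flush operations, not real kernel functions.

    #include <errno.h>

    extern int sync_map(unsigned long iova, unsigned long size);  /* hypothetical */
    extern void flush_all(void);                                  /* hypothetical */

    static int map_with_sync_retry(unsigned long iova, unsigned long size)
    {
        if (sync_map(iova, size) == 0)
            return 0;

        /*
         * The hypervisor may only be out of resources because already
         * invalidated IOVAs were never synced: flush everything so it
         * can free them, then retry the sync exactly once.
         */
        flush_all();
        if (sync_map(iova, size) == 0)
            return 0;

        return -ENOMEM;  /* callers turn this into DMA_MAPPING_ERROR */
    }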
-
- 25 Sep, 2023 33 commits
-
-
Jiapeng Chong authored
./drivers/iommu/iommu.c: iommu-priv.h is included more than once.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6186
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230818092620.91748-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
Automatically scaling the depot up to suit the peak capacity of a workload is all well and good, but it would be nice to have a way to scale it back down again if the workload changes. To that end, add background reclaim that will gradually free surplus magazines if the depot size remains above a reasonable threshold for long enough.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/03170665c56d89c6ce6081246b47f68d4e483308.1694535580.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
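
A sketch of the reclaim policy under stated assumptions: the threshold value, the depot helpers, and all names are illustrative, not the iova rcache code.

    struct magazine;                          /* opaque for this sketch */

    #define DEPOT_LOW_WATER 32                /* hypothetical "reasonable threshold" */

    struct depot_reclaim_sketch {
        unsigned int n_mags;                  /* current depot size */
        struct magazine *(*pop)(void);        /* hypothetical depot helpers */
        void (*free_mag)(struct magazine *);
    };

    /* Runs from a delayed work item; gradual: one magazine per invocation. */
    static void depot_reclaim(struct depot_reclaim_sketch *d)
    {
        if (d->n_mags > DEPOT_LOW_WATER) {
            d->free_mag(d->pop());
            d->n_mags--;
            /* the real code would re-arm the work while surplus remains */
        }
    }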
-
Robin Murphy authored
The algorithm in the original paper specifies the storage of full magazines in the depot as an unbounded list rather than a fixed-size array. It turns out to be pretty straightforward to do this in our implementation with no significant loss of efficiency. This allows the depot to scale up to the working set sizes of larger systems, while also potentially saving some memory on smaller ones too. Since this involves touching struct iova_magazine with the requisite care, we may as well reinforce the comment with a proper assertion too.

Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/f597aa72fc3e1d315bc4574af0ce0ebe5c31cd22.1694535580.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
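
In outline, the unbounded depot is a LIFO list of full magazines. The sketch below keeps an explicit next pointer for clarity; names and the capacity are illustrative, not the kernel's struct iova_magazine.

    struct mag_sketch {
        struct mag_sketch *next;              /* intrusive link for the depot list */
        unsigned int size;                    /* number of PFNs stored */
        unsigned long pfns[128];              /* illustrative capacity */
    };

    struct depot_list_sketch {
        struct mag_sketch *head;              /* unbounded LIFO of full magazines */
        unsigned int n_mags;                  /* tracked so reclaim can spot surplus */
    };

    static void depot_push(struct depot_list_sketch *d, struct mag_sketch *mag)
    {
        mag->next = d->head;
        d->head = mag;
        d->n_mags++;
    }

    static struct mag_sketch *depot_pop(struct depot_list_sketch *d)
    {
        struct mag_sketch *mag = d->head;

        if (mag) {
            d->head = mag->next;
            d->n_mags--;
        }
        return mag;
    }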
-
Robin Murphy authored
The current checks for the __IOMMU_DOMAIN_PAGING capability seem a bit stifled, since it is quite likely now that a non-paging domain won't have a pgsize_bitmap and/or mapping ops, and would thus get caught by the earlier condition anyway. Swap them around to test the more fundamental condition first; then we can reasonably also upgrade the other to a WARN_ON, since if a driver does ever expose a paging domain without the means to actually page, it's clearly very broken.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/524db1ec0139c964d26928a6a264945aa66d010c.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
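
Modeled on the map path, the reordered checks plausibly end up shaped like this condensed sketch (illustrative, not the literal diff):

    static int check_paging_domain(const struct iommu_domain *domain)
    {
        /* The fundamental condition first: is this a paging domain at all? */
        if (!(domain->type & __IOMMU_DOMAIN_PAGING))
            return -EINVAL;

        /* A paging domain without the means to page is a clear driver bug. */
        if (WARN_ON(!domain->ops->map_pages || domain->pgsize_bitmap == 0UL))
            return -ENODEV;

        return 0;
    }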
-
Robin Murphy authored
With everyone now implementing the new interfaces, clean up the last remnants of the old map/unmap ops and simplify the calling logic again.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/d2afdf13b2fbf537713c3ec642dfd49d16dd9e6a.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
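
For reference, the surviving *_pages interface in struct iommu_domain_ops has this shape around this series (indicative, not verbatim from include/linux/iommu.h):

    int (*map_pages)(struct iommu_domain *domain, unsigned long iova,
                     phys_addr_t paddr, size_t pgsize, size_t pgcount,
                     int prot, gfp_t gfp, size_t *mapped);
    size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
                          size_t pgsize, size_t pgcount,
                          struct iommu_iotlb_gather *iotlb_gather);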
-
Robin Murphy authored
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/338c520ed947d6d5b9d0509ccb4588908bd9ce1e.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
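
This and the following "trivially update" commits share one pattern, sketched here with hypothetical foo_* names: keep the existing one-page implementation and report progress through *mapped so the core can iterate.

    /* foo_map_one() stands in for the driver's pre-existing single-page map. */
    extern int foo_map_one(struct iommu_domain *d, unsigned long iova,
                           phys_addr_t pa, size_t size, int prot, gfp_t gfp);

    static int foo_map_pages(struct iommu_domain *domain, unsigned long iova,
                             phys_addr_t paddr, size_t pgsize, size_t pgcount,
                             int prot, gfp_t gfp, size_t *mapped)
    {
        int ret = foo_map_one(domain, iova, paddr, pgsize, prot, gfp);

        if (!ret)
            *mapped = pgsize;  /* still one page per call; the core loops */
        return ret;
    }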
-
Robin Murphy authored
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/395995e5097803f9a65f2fb79e0732d41c2b8a84.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/ccc21bf7d1d0da8989d4d517a13d0846d6b71a38.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/7bad94ffccd4cba32bded72e0860974012881e24.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/579176033e92d49ec9fc9f3d33d7b9d4c474f0b4.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
Use the new helper. For some reason omap will probe its driver even if it doesn't load an iommu driver. Keep this working with a bool that tracks whether the iommu driver was started.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/7-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
Use the new helper. This driver is kind of weird since in ARM mode it pretends it has per-device groups, but in ARM64 mode it does not.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/6-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
Use the new helper.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/5-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
Use the new helper.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/4-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
Use the new helper.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Link: https://lore.kernel.org/r/3-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
This implements the common pattern seen in drivers of a single iommu_group for the entire iommu driver instance. Implement this in core code so the drivers that want this can select it from their ops.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/2-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
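
Opting in, per this series, is a one-line change in a driver's ops, sketched with a hypothetical driver; the preceding "Use the new helper" commits follow this pattern.

    static const struct iommu_ops foo_iommu_ops = {
        /* every device behind this iommu instance lands in one group */
        .device_group = generic_single_device_group,
        /* ... remaining ops elided ... */
    };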
-
Jason Gunthorpe authored
Several functions obtain the group reference and then release it before returning. This gives the impression that the refcount is protecting something for the duration of the function. In truth all of these functions are called in places that know a device driver is probed to the device, and our locking rules already require that dev->iommu_group cannot change while a driver is attached to the struct device. If this were not the case then this code would already be at risk of triggering a UAF, as it would race with dev->iommu_group concurrently going to NULL/being freed. refcount debugging will throw a WARN if kobject_get() is called on a 0-refcount object, highlighting the bug. Remove the confusing refcounting and leave behind a comment about the restriction.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/1-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
These drivers don't support IOMMU_DOMAIN_DMA, so this commit effectively allows them to support that mode. The prior work to require default_domains makes this safe because every one of these drivers is either compilation-incompatible with dma-iommu.c or already establishes a default_domain. In both cases alloc_domain() will never be called with IOMMU_DOMAIN_DMA for these drivers, so it is safe to drop the test.

Removing these tests clarifies that the domain allocation path is only about the functionality of a paging domain and has nothing to do with the policy of how the paging domain is used for UNMANAGED/DMA/DMA_FQ.

Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/24-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
These drivers are all trivially converted since the function is only called if the domain type is going to be IOMMU_DOMAIN_UNMANAGED/DMA.

Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Yong Wu <yong.wu@mediatek.com> #For mtk_iommu.c
Link: https://lore.kernel.org/r/23-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
This callback requests the driver to create only a __IOMMU_DOMAIN_PAGING domain, so it saves a few lines in a lot of drivers that needlessly check the type. More critically, this allows us to sweep out all the IOMMU_DOMAIN_UNMANAGED and IOMMU_DOMAIN_DMA checks from a lot of the drivers, simplifying what is going on in the code and ultimately removing the now-unused special cases in drivers where they did not support IOMMU_DOMAIN_DMA.

domain_alloc_paging() should return a struct iommu_domain that is functionally compatible with ARM_DMA_USE_IOMMU, dma-iommu.c and iommufd. Be forward-looking and pass in a 'struct device *' argument; we can provide this when allocating the default_domain. No drivers will look at this.

Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/22-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
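
A hedged before/after sketch of what this conversion looks like in a driver, with hypothetical foo_* names:

    extern struct iommu_domain *foo_alloc_pagetable_domain(void);  /* hypothetical */

    /* Before: every driver re-checks the requested type. */
    static struct iommu_domain *foo_domain_alloc(unsigned int type)
    {
        if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
            return NULL;
        return foo_alloc_pagetable_domain();
    }

    /* After: only ever asked for a paging domain, so no type check. */
    static struct iommu_domain *foo_domain_alloc_paging(struct device *dev)
    {
        /* 'dev' may be NULL for now; no driver inspects it yet */
        return foo_alloc_pagetable_domain();
    }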
-
Jason Gunthorpe authored
Add a way to allocate a domain from a group, automatically obtaining the iommu_ops to use from the group's device list. Convert the internal callers to use it.

Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/21-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
At this point every iommu driver will cause a default_domain to be selected, so we can finally remove this gap from the core code. The following table explains what each driver supports and what the resulting default_domain will be:

                           ops->default_domain
                        IDENTITY  DMA  PLATFORM        v   ARM32     dma-iommu ARCH
    amd/iommu.c            Y       Y                       N/A       either
    apple-dart.c           Y       Y                       N/A       either
    arm-smmu.c             Y       Y                       IDENTITY  either
    qcom_iommu.c           G       Y                       IDENTITY  either
    arm-smmu-v3.c          Y       Y                       N/A       either
    exynos-iommu.c         G       Y                       IDENTITY  either
    fsl_pamu_domain.c              Y       Y          N/A  N/A       PLATFORM
    intel/iommu.c          Y       Y                       N/A       either
    ipmmu-vmsa.c           G       Y                       IDENTITY  either
    msm_iommu.c            G                               IDENTITY  N/A
    mtk_iommu.c            G       Y                       IDENTITY  either
    mtk_iommu_v1.c         G                               IDENTITY  N/A
    omap-iommu.c           G                               IDENTITY  N/A
    rockchip-iommu.c       G       Y                       IDENTITY  either
    s390-iommu.c                   Y       Y          N/A  N/A       PLATFORM
    sprd-iommu.c                   Y                       N/A       DMA
    sun50i-iommu.c         G       Y                       IDENTITY  either
    tegra-smmu.c           G       Y                       IDENTITY  IDENTITY
    virtio-iommu.c         Y       Y                       N/A       either
    spapr                          Y       Y          N/A  N/A       PLATFORM

    * G means ops->identity_domain is used
    * N/A means the driver will not compile in this configuration

ARM32 drivers select an IDENTITY default domain either through ops->identity_domain or by directly requesting an IDENTITY domain through alloc_domain(). In ARM64 mode tegra-smmu will still block the use of dma-iommu.c and force an IDENTITY domain. S390 uses a PLATFORM domain to represent when the dma_ops are set to the s390 iommu code. fsl_pamu uses a PLATFORM domain. POWER SPAPR uses PLATFORM and blocking to enable its weird VFIO mode. The x86 drivers continue unchanged.

After this patch group->default_domain is only NULL for a short period during bus iommu probing while all the groups are constituted. Otherwise it is always !NULL.

This completes changing the iommu subsystem driver contract to a system where the current iommu_domain always represents some form of translation and the driver is continuously asserting a definable translation mode. It resolves the confusion that the original ops->detach_dev() caused around what translation, exactly, the IOMMU is performing after detach. There were at least three different answers to that question in the tree; they are all now clearly named with domain types.

Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
Prior to commit 1b932ced ("iommu: Remove detach_dev callbacks") the sun50i_iommu_detach_device() function was being called by ops->detach_dev(). This is an IDENTITY domain, so convert sun50i_iommu_detach_device() into sun50i_iommu_identity_attach() backing a full IDENTITY domain, and thus hook it back up the same way as the old ops->detach_dev().

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/19-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
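
The shape of these IDENTITY-domain conversions, sketched with illustrative names (the next three commits apply the same pattern to their respective drivers):

    extern void foo_disable_translation(struct device *dev);  /* hypothetical */

    static int foo_identity_attach(struct iommu_domain *identity_domain,
                                   struct device *dev)
    {
        foo_disable_translation(dev);  /* what ops->detach_dev() used to do */
        return 0;
    }

    static const struct iommu_domain_ops foo_identity_ops = {
        .attach_dev = foo_identity_attach,
    };

    static struct iommu_domain foo_identity_domain = {
        .type = IOMMU_DOMAIN_IDENTITY,
        .ops = &foo_identity_ops,
    };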
-
Jason Gunthorpe authored
This brings back the ops->detach_dev() code that commit 1b932ced ("iommu: Remove detach_dev callbacks") deleted and turns it into an IDENTITY domain.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/18-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
This brings back the ops->detach_dev() code that commit 1b932ced ("iommu: Remove detach_dev callbacks") deleted and turns it into an IDENTITY domain. This also reverts commit 584d334b ("iommu/ipmmu-vmsa: Remove ipmmu_utlb_disable()").

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/17-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
This brings back the ops->detach_dev() code that commit 1b932ced ("iommu: Remove detach_dev callbacks") deleted and turns it into an IDENTITY domain.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/16-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
All drivers now use IDENTITY or PLATFORM domains for what this did, so we can remove it. It is no longer possible to attach to a NULL domain.

Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/15-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
What msm does during msm_iommu_set_platform_dma() is actually putting the iommu into identity mode. Move to the new core support for ARM_DMA_USE_IOMMU by defining ops->identity_domain. This driver does not support IOMMU_DOMAIN_DMA; however, it cannot be compiled on ARM64 either. Most likely it would be fine to support dma-iommu.c.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/14-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
What omap does during omap_iommu_set_platform_dma() is actually putting the iommu into identity mode. Move to the new core support for ARM_DMA_USE_IOMMU by defining ops->identity_domain. This driver does not support IOMMU_DOMAIN_DMA; however, it cannot be compiled on ARM64 either. Most likely it would be fine to support dma-iommu.c.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/13-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
All ARM64 iommu drivers should support IOMMU_DOMAIN_DMA to enable dma-iommu.c. tegra is blocking dma-iommu usage, and also default_domains, because it wants an identity translation. This is needed for some device quirk. The correct way to do this is to support IDENTITY domains and use ops->def_domain_type() to return IOMMU_DOMAIN_IDENTITY for only the quirky devices. Add support for IOMMU_DOMAIN_DMA and force IOMMU_DOMAIN_IDENTITY mode for everything, so no behavior changes.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/12-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
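
The "correct way" named above would look roughly like this def_domain_type sketch; the quirk test is hypothetical.

    static int foo_def_domain_type(struct device *dev)
    {
        if (foo_needs_identity_quirk(dev))  /* hypothetical quirk check */
            return IOMMU_DOMAIN_IDENTITY;
        return 0;  /* no preference: the core picks the normal default */
    }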
-
Jason Gunthorpe authored
What tegra-smmu does during tegra_smmu_set_platform_dma() is actually putting the iommu into identity mode. Move to the new core support for ARM_DMA_USE_IOMMU by defining ops->identity_domain.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/11-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
What exynos calls exynos_iommu_detach_device is actually putting the iommu into identity mode. Move to the new core support for ARM_DMA_USE_IOMMU by defining ops->identity_domain.

Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/10-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
Even though dma-iommu.c and CONFIG_ARM_DMA_USE_IOMMU do approximately the same stuff, the way they relate to the IOMMU core is quite different. dma-iommu.c expects the core code to set up an UNMANAGED domain (of type IOMMU_DOMAIN_DMA) and then configures itself to use that domain. This becomes the default_domain for the group. ARM_DMA_USE_IOMMU does not use the default_domain; instead it directly allocates an UNMANAGED domain and operates it just like an external driver. In this case group->default_domain is NULL.

If the driver provides a global static identity_domain then automatically use it as the default_domain when in ARM_DMA_USE_IOMMU mode. This allows drivers that implemented default_domain == NULL as an IDENTITY translation to trivially get a properly labeled non-NULL default_domain on ARM32 configs.

With this arrangement, when ARM_DMA_USE_IOMMU wants to disconnect from the device, the normal detach_domain flow will restore the IDENTITY domain as the default domain. Overall this means attach_dev() of the IDENTITY domain is called in the same places as detach_dev() was. This effectively migrates these drivers to default_domain mode. Drivers that support ARM64 will gain support for the IDENTITY translation mode for the dma_api and behave in a uniform way.

Drivers use this by setting ops->identity_domain to a static singleton iommu_domain that implements the identity attach. If the core detects ARM_DMA_USE_IOMMU mode then it automatically attaches the IDENTITY domain during probe. Drivers can continue to prevent the use of DMA translation by returning IOMMU_DOMAIN_IDENTITY from def_domain_type; this will completely prevent IOMMU_DMA from running but will not impact ARM_DMA_USE_IOMMU.

This allows removing the set_platform_dma_ops() from every remaining driver. Remove the set_platform_dma_ops from rockchip and mtk_v1, as all it does is set an existing global static identity domain. mtk_v1 does not support IOMMU_DOMAIN_DMA and does not compile on ARM64, so this transformation is safe.

Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/9-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
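
Condensed into a sketch, the core-side behaviour this describes is roughly the following; this is illustrative, the real logic lives in iommu.c and is more involved.

    static struct iommu_domain *
    arm32_default_domain_sketch(const struct iommu_ops *ops)
    {
        /*
         * A driver-provided static identity domain becomes the properly
         * labeled default_domain on ARM32; ARM_DMA_USE_IOMMU then swaps
         * its own UNMANAGED domain in and out around it.
         */
        if (IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU) && ops->identity_domain)
            return ops->identity_domain;

        return NULL;  /* old behaviour: no default_domain in this mode */
    }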
-