- 28 Jun, 2017 36 commits
-
-
Vladimir Murzin authored
This patch introduces a default coherent DMA pool, similar to the default CMA area concept. To keep other users safe, the new code is kept under CONFIG_ARM.

Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Vladimir Murzin authored
dma_declare_coherent_memory() and friends are designed to account for differences between CPU and device addresses. However, when they are used with reserved memory regions, there is an assumption that the CPU and the device share the same view of the address space. This assumption becomes invalid when the reserved memory for coherent DMA allocations is referenced by a device with a non-empty "dma-ranges" property. Simply using rmem->base + dev->dma_pfn_offset as the device address does not work, because the reserved memory region can be shared, so this patch expresses the device address in terms of the CPU address and the device's dma_pfn_offset when the memory reservation has been done via the device tree; non-device-tree users continue to use the old scheme.

Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Roger Quadros <rogerq@ti.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
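A minimal sketch of the translation described above, assuming a hypothetical helper name (rmem_cpu_to_dev_addr is not in the patch); the device-visible address is derived from the CPU physical address and the device's dma_pfn_offset:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/pfn.h>

    /*
     * Hypothetical helper: derive the device-visible DMA address for a chunk
     * of reserved memory from its CPU physical address and the device's
     * dma_pfn_offset (populated from the "dma-ranges" property).
     */
    static dma_addr_t rmem_cpu_to_dev_addr(struct device *dev, phys_addr_t phys)
    {
            return (dma_addr_t)PFN_PHYS(PFN_DOWN(phys) - dev->dma_pfn_offset);
    }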
-
Vladimir Murzin authored
Even though dma-noop-ops assumes a 1:1 memory mapping, the DMA memory range can be different from RAM. For example, the ARM STM32F4 MCU offers the possibility to remap SDRAM from 0xc000_0000 to 0x0 for a CPU performance boost, but DMA continues to see SDRAM at 0xc000_0000. This difference in mapping is handled via the device-tree "dma-ranges" property, which leads to dev->dma_pfn_offset being set to a nonzero value. To handle such cases, take dma_pfn_offset into account.

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Reported-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
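For the streaming API, the same arithmetic shows up in the 1:1 mapping path; a hedged sketch with illustrative names rather than the patch's own code:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/mm.h>
    #include <linux/pfn.h>

    /* Illustrative 1:1 ("no-op") map_page callback that honours dma_pfn_offset. */
    static dma_addr_t noop_map_page_example(struct device *dev, struct page *page,
                                            unsigned long offset, size_t size,
                                            enum dma_data_direction dir,
                                            unsigned long attrs)
    {
            /* Device address = CPU address shifted down by the PFN offset. */
            return PFN_PHYS(page_to_pfn(page) - dev->dma_pfn_offset) + offset;
    }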
-
Christoph Hellwig authored
dmam_alloc_noncoherent is a trivial wrapper around dmam_alloc_attrs that hardcodes one particular flag. Make the devres code more flexible by allowing callers to pass arbitrary flags.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Tejun Heo <tj@kernel.org>
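A hedged usage sketch of the more flexible devres helper; the wrapper function and its parameters are placeholders, only dmam_alloc_attrs() itself comes from the patch:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    /* Managed allocation, released automatically when the driver detaches. */
    static void *example_alloc(struct device *dev, size_t size,
                               dma_addr_t *dma_handle)
    {
            /*
             * DMA_ATTR_NON_CONSISTENT reproduces what dmam_alloc_noncoherent()
             * used to hardcode, but any attribute combination may be passed now.
             */
            return dmam_alloc_attrs(dev, size, dma_handle, GFP_KERNEL,
                                    DMA_ATTR_NON_CONSISTENT);
    }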
-
Christoph Hellwig authored
This function was never used since it was added.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Tejun Heo <tj@kernel.org>
-
Arnd Bergmann authored
After commit 9e442aa6a753 ("x86: remove DMA_ERROR_CODE"), the inlining decisions in the qat driver changed slightly, introducing a new false-positive warning:

  drivers/crypto/qat/qat_common/qat_algs.c: In function 'qat_alg_sgl_to_bufl.isra.6':
  include/linux/dma-mapping.h:228:2: error: 'sz_out' may be used uninitialized in this function [-Werror=maybe-uninitialized]
  drivers/crypto/qat/qat_common/qat_algs.c:676:9: note: 'sz_out' was declared here

The patch that introduced this is correct, so let's just avoid the warning in this driver by rearranging the unwinding after an error to make it more obvious to the compiler what is going on. The problem here is the 'if (unlikely(dma_mapping_error(dev, blp)))' check, in which the 'unlikely' causes gcc to forget what it knew about the state of the variables. Cleaning up the dma state in the reverse order it was created means we can simplify the logic so it doesn't have to know about that state, and also makes it easier to understand.

Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
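The shape of the fix, sketched with generic names rather than the qat driver's own: by unmapping in the reverse order of creation, each error label only has to undo what is already known to have succeeded, so no extra state tracking is needed.

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    /* Illustrative only: map two buffers, unwinding in reverse order on error. */
    static int map_two_buffers(struct device *dev, void *in, void *out, size_t len,
                               dma_addr_t *din, dma_addr_t *dout)
    {
            *din = dma_map_single(dev, in, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, *din))
                    return -ENOMEM;

            *dout = dma_map_single(dev, out, len, DMA_FROM_DEVICE);
            if (dma_mapping_error(dev, *dout))
                    goto err_unmap_in;

            return 0;

    err_unmap_in:
            dma_unmap_single(dev, *din, len, DMA_TO_DEVICE);
            return -ENOMEM;
    }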
-
Christoph Hellwig authored
au1100fb is using managed dma allocations, so it doesn't need to explicitly free the dma memory in the error path (and if it did, it would have to use the managed version).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
-
Christoph Hellwig authored
This code has been spread between getting in through arch trees, the iommu tree, -mm and the drivers tree. There will be a lot of work in this area, including consolidating various arch implementations into more common code, so ensure we have a proper git tree that facilitates cooperation with the architecture maintainers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Besides removing the last instance of the set_dma_mask method, this also reduces code duplication.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
By the time cell_pci_dma_dev_setup calls cell_dma_dev_setup, no device can have the fixed map_ops set yet, as it's only set by the set_dma_mask method. So move the setup for the fixed case to be called only in that place instead of indirecting through cell_dma_dev_setup.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
These just duplicate the default behavior if no method is provided.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
This just duplicates the generic implementation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Same behavior, less code duplication.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Same behavior, less code duplication.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
And instead wire it up as a method for all the dma_map_ops instances. Note that this also means the arch-specific check will be applied fully instead of partially in the AMD iommu driver.

Signed-off-by: Christoph Hellwig <hch@lst.de>
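The resulting pattern, sketched with a made-up ops instance (the callback signature follows the dma_map_ops of that era; the 32-bit check is only an example policy):

    #include <linux/dma-mapping.h>

    /* Example policy: reject DMA masks narrower than 32 bits. */
    static int example_dma_supported(struct device *dev, u64 mask)
    {
            return mask >= DMA_BIT_MASK(32);
    }

    static const struct dma_map_ops example_dma_ops = {
            /* ... .map_page, .map_sg, .unmap_page, ... */
            .dma_supported  = example_dma_supported,
    };

The core dma_supported()/dma_set_mask() helpers then consult this per-ops callback (treating a NULL callback as "supported"), so no arch-wide hook is needed.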
-
Christoph Hellwig authored
And instead wire it up as a method for all the dma_map_ops instances. Note that the code seems a little fishy for dmabounce and iommu, but for now I'd like to preserve the existing behavior 1:1.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
This implementation is simply bogus - openrisc only has a simple direct mapped DMA implementation and thus doesn't care about the address.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
This implementation is simply bogus - hexagon only has a simple direct mapped DMA implementation and thus doesn't care about the address.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Richard Kuo <rkuo@codeaurora.org>
-
Christoph Hellwig authored
These just duplicate the default behavior if no method is provided.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
These just duplicate the default behavior if no method is provided.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Usually dma_supported decisions are made by the dma_map_ops instance. Switch sparc to that model by providing a ->dma_supported instance for sbus that always returns false, implementations tailored to the sun4u and sun4v cases for sparc64, and by leaving it unimplemented for PCI on sparc32, which means it is always supported.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
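A hedged sketch of the sbus end of this (the function and ops names here are illustrative); the sparc32 PCI case simply leaves the callback NULL, which the core treats as "always supported":

    #include <linux/dma-mapping.h>

    /* Historic sbus behaviour: no DMA mask is ever reported as supported. */
    static int sbus_dma_supported_example(struct device *dev, u64 mask)
    {
            return 0;
    }

    static const struct dma_map_ops sbus_ops_example = {
            /* ... */
            .dma_supported  = sbus_dma_supported_example,
    };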
-
Christoph Hellwig authored
We can just use pci32_dma_ops directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
-
Christoph Hellwig authored
And update the documentation - dma_mapping_error has been supported everywhere for a long time.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
All dma_map_ops instances now handle their errors through ->mapping_error.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
DMA_ERROR_CODE is going to go away, so don't rely on it. Instead define a ->mapping_error method for all IOMMU based dma operation instances. The direct ops don't ever return an error and don't need a ->mapping_error method.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
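A hedged sketch of what such a per-ops method looks like; the error cookie and ops names are made up, only the callback signature follows the dma_map_ops of that era:

    #include <linux/dma-mapping.h>

    /* Made-up per-ops error cookie returned by map_page/map_sg on failure. */
    #define EXAMPLE_IOMMU_MAPPING_ERROR     (~(dma_addr_t)0)

    static int example_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
    {
            return dma_addr == EXAMPLE_IOMMU_MAPPING_ERROR;
    }

    static const struct dma_map_ops example_iommu_ops = {
            /* ... .map_page, .map_sg, .unmap_page, ... */
            .mapping_error  = example_iommu_mapping_error,
    };

dma_mapping_error() in the core dispatches to this callback, so each ops instance can pick whatever error cookie suits it.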
-
Christoph Hellwig authored
DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
-
Christoph Hellwig authored
s390 can also use noop_dma_ops, and while that currently does not return errors, it will do so in the future. Implementing the mapping_error method is the proper way to have per-ops error conditions.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
-
Christoph Hellwig authored
DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Richard Kuo <rkuo@codeaurora.org>
-
- 20 Jun, 2017 4 commits
-
-
Christoph Hellwig authored
The dma alloc interface returns an error by returning NULL, and the mapping interfaces rely on the mapping_error method, which the dummy ops already implement correctly. Thus remove the DMA_ERROR_CODE define.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
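In driver-facing terms, the two error conventions that replace the constant look roughly like this (the function name, device and sizes are placeholders):

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static int example_setup(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t coherent_handle, stream_addr;
            void *cpu_addr;

            /* Coherent allocations report failure by returning NULL... */
            cpu_addr = dma_alloc_coherent(dev, len, &coherent_handle, GFP_KERNEL);
            if (!cpu_addr)
                    return -ENOMEM;

            /* ...while streaming mappings are checked via the mapping_error method. */
            stream_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, stream_addr)) {
                    dma_free_coherent(dev, len, cpu_addr, coherent_handle);
                    return -ENOMEM;
            }

            dma_unmap_single(dev, stream_addr, len, DMA_TO_DEVICE);
            dma_free_coherent(dev, len, cpu_addr, coherent_handle);
            return 0;
    }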
-
Christoph Hellwig authored
xtensa already implements the mapping_error method for its only dma_map_ops instance.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
sh does not return errors for dma_map_page.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
openrisc does not return errors for dma_map_page.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-