- 25 Jun, 2015 4 commits
-
-
Vinod Koul authored
-
Vinod Koul authored
-
Vinod Koul authored
-
Vinod Koul authored
-
- 17 Jun, 2015 2 commits
-
-
Robert Jarzmik authored
Add documentation about acking the transfers, and their reusability. Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Robert Jarzmik authored
This patch addresses the case of a transfer that is submitted multiple times and where the cost of building the descriptor chain is not negligible. This happens with big video buffers (several megabytes, i.e. several thousand linked descriptors in one scatter-gather list). In these cases, a video driver would want to do:
- tx = dmaengine_prep_slave_sg()
- dmaengine_submit(tx)
- dma_async_issue_pending()
- wait for video completion
- read the video data (or not; skipping a frame is also possible)
- dmaengine_submit(tx) => here, recalculating the descriptor chain would take time, and the repeated dma coherent allocations might create holes in the dma pool, which is counter-productive
- dma_async_issue_pending()
- etc.
In order to cope with this case, virt-dma is modified to avoid freeing the descriptors upon completion when the DMA_CTRL_ACK flag is set in the transfer. Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
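For illustration, a hedged sketch of the client-side flow this message describes, assuming an already configured slave channel, a scatterlist covering the video buffer, and a completion signalled from the DMA callback (all of these are placeholders):

```c
#include <linux/completion.h>
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

static int capture_frames(struct dma_chan *chan, struct scatterlist *sgl,
			  unsigned int sg_len, struct completion *frame_done,
			  bool (*keep_going)(void))
{
	struct dma_async_tx_descriptor *tx;

	/* Build the (expensive) descriptor chain once; DMA_CTRL_ACK tells
	 * virt-dma to keep it alive after completion so it can be reused. */
	tx = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
				     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx)
		return -ENOMEM;

	while (keep_going()) {
		dmaengine_submit(tx);		/* no chain recalculation */
		dma_async_issue_pending(chan);
		wait_for_completion(frame_done);
		/* read the frame (or skip it), then resubmit the same tx */
	}

	return 0;
}
```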
-
- 12 Jun, 2015 3 commits
-
-
Maxime Ripard authored
This reverts commit 48a9db46. Some platforms actually need support for the memset operations. Bring it back. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Maxime Ripard authored
The AT91 HDMAC controller supports interleaved transfers through what's called the Picture-in-Picture mode, which allows transferring a rectangular portion of a framebuffer. This means the driver only supports interleaved transfers whose chunk size and ICGs are fixed across all the chunks. While this is quite a drastic restriction compared to what the dmaengine API allows, it is still useful, and our driver will simply reject transfers that do not conform to it. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Maxime Ripard authored
Now that ICGs can be set for both the source and destination (using the icg field of struct data_chunk), or for only the source or the destination (using src_icg or dst_icg, respectively), and since these fields can be ignored depending on other parameters (src_inc, src_sgl, etc.), the logic to get the actual ICG value can be quite tricky. The XDMAC driver was already implementing it, but since we will need it in other drivers, move it to the main header file. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
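The resulting helpers in include/linux/dmaengine.h look roughly like the sketch below, reconstructed from the description above (treat the exact names and placement as an assumption):

```c
/* Effective inter-chunk gap for one direction: a direction-specific ICG
 * wins, then the common icg if that direction is scattered, otherwise
 * there is no gap at all. */
static inline size_t dmaengine_get_icg(bool inc, bool sgl, size_t icg,
				       size_t dir_icg)
{
	if (inc) {
		if (dir_icg)
			return dir_icg;
		if (sgl)
			return icg;
	}

	return 0;
}

static inline size_t dmaengine_get_dst_icg(struct dma_interleaved_template *xt,
					   struct data_chunk *chunk)
{
	return dmaengine_get_icg(xt->dst_inc, xt->dst_sgl,
				 chunk->icg, chunk->dst_icg);
}

static inline size_t dmaengine_get_src_icg(struct dma_interleaved_template *xt,
					   struct data_chunk *chunk)
{
	return dmaengine_get_icg(xt->src_inc, xt->src_sgl,
				 chunk->icg, chunk->src_icg);
}
```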
-
- 10 Jun, 2015 5 commits
-
-
Lior Amsalem authored
This patch changes the way free descriptors are marked. Instead of having a per-descriptor "in use" field, all the descriptors in the all_slots list are free for use. This simplifies the allocation method and reduces the locking needed. Signed-off-by: Lior Amsalem <alior@marvell.com> Reviewed-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Lior Amsalem authored
Now that we have 2 channels assigned to 2 CPUs and all requests are chained on the same channels, we need many more descriptors available to satisfy the async_tx workload. 3072 descriptors was found in our lab to be the number that allows the async_tx stack to work without waiting for free descriptors when submitting new requests. Signed-off-by: Lior Amsalem <alior@marvell.com> Reviewed-by: Nadav Haklai <nadavh@marvell.com> Tested-by: Nadav Haklai <nadavh@marvell.com> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Lior Amsalem authored
The Marvell Armada 38x SoC introduces new features to the XOR engine, in particular the fact that the engine mode (MEMCPY/XOR/PQ/etc.) can be part of the descriptor instead of being set through the controller registers. This new feature allows mixing different commands (even PQ) on the same channel/chain without having to stop the engine to reconfigure its mode. Refactor the driver to be able to use that new feature on the Armada 38x, while keeping the old behaviour on older SoCs. Signed-off-by: Lior Amsalem <alior@marvell.com> Reviewed-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Maxime Ripard authored
The current function names aren't very consistent, and functions with the same prefix might operate on either a channel or a descriptor, which is confusing. Rename these functions to a consistent and clearer naming scheme. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Lior Amsalem authored
This patch fixes a bug in the XOR driver where the cleanup function could be called and free descriptors that had never been processed by the engine (which results in data errors). The cleanup function now frees descriptors based on the ownership bit in the descriptors. Fixes: ff7b0479 ("dmaengine: DMA engine driver for Marvell XOR engine") Signed-off-by: Lior Amsalem <alior@marvell.com> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Reviewed-by: Ofer Heifetz <oferh@marvell.com> Cc: stable@vger.kernel.org Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 08 Jun, 2015 2 commits
-
-
Michal Suchanek authored
The kernel is not trying to increase mcbufsz; it merely suggests that the user try doing so. Also print the calculated required size of mcbufsz. Signed-off-by: Michal Suchanek <hramrach@gmail.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
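A hypothetical sketch of what such an adjusted message might look like; the variable names and wording below are placeholders, not the driver's actual code:

```c
#include <linux/device.h>

/* Name the size that would have been needed and suggest the user raise
 * mcbufsz, rather than implying the kernel will grow it by itself. */
static void report_mcbufsz_too_small(struct device *dev, size_t required)
{
	dev_err(dev,
		"microcode does not fit: please increase mcbufsz to at least %zu bytes\n",
		required);
}
```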
-
Hao Liu authored
Add support for the new CSR atlas7 SoC. atlas7 has both V1 and V2 DMA IP. atlas7 DMAv1 is basically moved from marco (which was never delivered to customers) and is renamed in this patch. atlas7 DMAv2 supports chained DMA through a chain table, so this patch also adds chained DMA support for atlas7. atlas7 DMAv1 and DMAv2 co-exist in the same chip. There are some HW configuration differences (register offsets etc.) compared with the old prima2 chips, so we use the compatible string to differentiate the old prima2 from the new atlas7, which results in different HW settings for each. Signed-off-by: Hao Liu <Hao.Liu@csr.com> Signed-off-by: Yanchang Li <Yanchang.Li@csr.com> Signed-off-by: Barry Song <Baohua.Song@csr.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 02 Jun, 2015 2 commits
-
-
Rameshwar Prasad Sahu authored
This patch fixes sparse warnings such as "incorrect type in assignment (different base types)" and "cast to restricted __le64". Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Stefan Agner authored
Fix function names in kernel-doc function comments. Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 29 May, 2015 1 commit
-
-
Robert Jarzmik authored
In order to achieve a smooth transition of pxa drivers from the old legacy dma handling to the new dmaengine framework, introduce a function to "hide" dma physical channels from dmaengine. This is a temporary situation where pxa dma is handled in two places: arch/arm/plat-pxa/dma.c and drivers/dma/pxa_dma.c. The resources, i.e. the dma channels, are controlled by pxa_dma. The legacy code requests or releases a channel with pxad_toggle_reserved_channel(). This is not very pretty, but it ensures both legacy and dmaengine consumers can live in the same kernel until the conversion is done. Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 26 May, 2015 4 commits
-
-
Robert Jarzmik authored
Reuse the debugging features which were available in the pxa architecture. This is a copy of the code from arch/arm/plat-pxa/dma, which is doomed to disappear once the conversion towards dmaengine is completed. This is a transfer of the commit "[ARM] pxa/dma: add debugfs entries" (d294948c). Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Robert Jarzmik authored
This is a new driver for pxa SoCs, which is also compatible with the former mmp_pdma. The rationale behind a new driver (as opposed to incremental patching) was:
- the new driver relies on virt-dma, which obsoletes all the internal structures of mmp_pdma (sw_desc, hw_desc, ...), and as a consequence all the functions
- mmp_pdma allocates dma coherent descriptors containing not only the hardware descriptors but also linked-list information; the new driver only puts the dma hardware descriptors (i.e. 4 u32) into the dma pool allocated memory, which completely changes the way descriptors are handled
- the architecture behind the interrupt/tasklet management was rewritten to conform better to virt-dma
- the buffer alignment is handled differently: the former driver assumed that the DMA channel stopped between each descriptor; the new one chains descriptors to keep the channel running, which is a necessary guarantee for real-time high-bandwidth use cases such as video capture on "old" architectures such as pxa
- hot chaining / cold chaining / no chaining: whenever possible, submitting a descriptor "hot chains" it to a running channel. There is still no guarantee that the descriptor will be issued, as the channel might be stopped just before the descriptor is submitted; yet this allows submitting several video buffers, and resubmitting a buffer while another is being handled. As before, dma_async_issue_pending() is the only guarantee that all the buffers are issued. When an alignment issue is detected (i.e. one address in a descriptor is not a multiple of 8) and the already running channel is in "aligned mode", the channel is stopped and restarted in "misaligned mode" to finish the issued list.
- descriptor reuse: a submitted, issued and completed descriptor can be reused, i.e. resubmitted, if it was prepared with the proper flag (DMA_CTRL_ACK). Only a release of the channel resources will, in that case, release the buffer. This allows a rolling ring of buffers to be reused, where several thousand hardware descriptors are in use (video buffers, for example).
Additionally, a set of smaller features is introduced:
- debugging traces
- a lockless way to know whether a descriptor is terminated or not
The driver was tested on the zylonite board (pxa3xx) and mioa701 (pxa27x), with dmatest, pxa_camera and pxamci. Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Robert Jarzmik authored
Add the pxa dma driver as maintained by the pxa architecture maintainers, as it is part of the core IP. Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Robert Jarzmik authored
Document the new design of the pxa dma driver. Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 25 May, 2015 3 commits
-
-
Joe Perches authored
Use the generic mechanism to declare a bitmap instead of an array of unsigned long. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
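A minimal before/after sketch; NR_SLOTS and the array names are placeholders:

```c
#include <linux/bitops.h>
#include <linux/types.h>

#define NR_SLOTS 32

/* Open-coded: the caller has to size the storage by hand. */
static unsigned long slots_open_coded[BITS_TO_LONGS(NR_SLOTS)];

/* Generic declaration: DECLARE_BITMAP() does the sizing for you. */
static DECLARE_BITMAP(slots, NR_SLOTS);
```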
-
Maninder Singh authored
Currently the pch pointer is dereferenced before the NULL check, which triggers the following warning: warn: variable dereferenced before check 'pch'. So initialize struct pl330_dmac *pl330 only after the NULL check of struct dma_pl330_chan *pch. Signed-off-by: Maninder Singh <maninder1.s@samsung.com> Reviewed-by: Vaneet Narang <v.narang@samsung.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
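A sketch of the corrected ordering; the function and field names below are illustrative rather than the driver's actual code:

```c
static void example_chan_op(struct dma_pl330_chan *pch)
{
	struct pl330_dmac *pl330;

	if (!pch)
		return;

	/* Dereference pch only once the NULL check has passed; previously
	 * pl330 was initialised from pch in its declaration, i.e. before
	 * the check, which is what the static checker flagged. */
	pl330 = pch->dmac;

	/* ... use pl330 ... */
}
```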
-
Geert Uytterhoeven authored
dma_ts_shift[] isn't used outside this source file. All other users use the definition from arch/arm/mach-shmobile/dma-register.h. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 18 May, 2015 5 commits
-
-
Maxime Ripard authored
The XDMAC supports interleaved transfers through its flexible descriptor configuration. Add support for that kind of transfer to the dmaengine driver. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Maxime Ripard authored
So far, we were setting the NDE bit in our descriptors through some logic that tried to work out whether we were the last descriptor in the chain. However, that turned out to be rather complex to get right, while the information is readily available at the point where we actually chain a new descriptor after an existing one. Simplify this by only setting NDE when we actually chain a descriptor. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
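An illustrative chaining helper showing the design choice; the struct, field and bit names below are made up, not the XDMAC driver's:

```c
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_FETCH_NEXT	BIT(0)	/* "next descriptor enable" style bit */

struct example_desc {
	dma_addr_t	phys;	/* bus address of this descriptor	*/
	u32		ctrl;	/* control word written to hardware	*/
	dma_addr_t	next;	/* bus address of the next descriptor	*/
};

/* The fetch-next bit is set only at the moment a new descriptor is
 * linked after an existing one, so there is no need to guess whether a
 * descriptor is the last of the chain when it is prepared. */
static void example_chain(struct example_desc *prev, struct example_desc *new)
{
	prev->next  = new->phys;
	prev->ctrl |= EXAMPLE_FETCH_NEXT;
}
```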
-
Maxime Ripard authored
The code has some logic to compute the burst width according to the alignment of the address we're using. Move that into a function of its own to reduce code duplication. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
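A minimal sketch of such a helper, assuming a controller that supports 1-, 2-, 4- and 8-byte accesses (the actual at_xdmac function and its width encoding may differ):

```c
#include <linux/types.h>

/* Pick the widest data width the address alignment allows. */
static u32 pick_data_width(dma_addr_t addr)
{
	if (!(addr & 7))
		return 8;	/* 64-bit accesses */
	if (!(addr & 3))
		return 4;
	if (!(addr & 1))
		return 2;
	return 1;
}
```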
-
Maxime Ripard authored
The XDMAC DMA controller uses a concept of views to be able to handle descriptors of different sizes. So far, only the views 1 and 2 were handled by the driver. Unfortunately, we need some of the configuration fields found in the view 3 in order to support memset and interleaved transfers. Add the definition for the view 3 registers, and the needed code to handle view 3 descriptors. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Maxime Ripard authored
In interleaved mode, we can expect to have different source and destination strides. Add support for such a case to dmaengine. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
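For example, a chunk describing a packed source copied into a padded destination could be filled as below; the sizes are placeholders, and src_icg/dst_icg are the new per-direction gap fields:

```c
#include <linux/dmaengine.h>

#define LINE_BYTES	1920	/* bytes actually copied per line	*/
#define DST_PITCH	2048	/* padded destination line pitch	*/

/* Source lines are back to back (src_icg = 0) while the destination
 * skips DST_PITCH - LINE_BYTES bytes after each line: different strides. */
static struct data_chunk chunk = {
	.size    = LINE_BYTES,
	.src_icg = 0,
	.dst_icg = DST_PITCH - LINE_BYTES,
};
```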
-
- 14 May, 2015 1 commit
-
-
Peter Ujfalusi authored
The DRA7x has more peripherals with DMA requests than the sDMA can handle: 205 vs 127. All DMA requests are routed through the DMA crossbar, which can be configured to route selected incoming DMA requests to specific sDMA requests. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 09 May, 2015 6 commits
-
-
Peter Ujfalusi authored
Since the mapping between the hardware request lines and channels has been removed, it no longer makes sense to have so many channels. Set the number of channels to match the number of logical channels supported by sDMA. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Peter Ujfalusi authored
Do not directly map the virtual channels to sDMA request numbers. When the sDMA sits behind a crossbar, this direct mapping can lead to situations where certain channels cannot be requested, since the crossbar request number no longer matches the sDMA request line. Direct mapping of virtual channels to HW request lines would also make it harder to implement MEM_TO_MEM mode in the driver. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Peter Ujfalusi authored
Use the dma-requests property from DT to get the number of DMA requests. In case of legacy boot, or if the property cannot be found, fall back to the default of 127 requests. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
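A hedged sketch of how such a fallback might look; the function and parameter names are illustrative:

```c
#include <linux/device.h>
#include <linux/of.h>

static u32 example_get_dma_requests(struct device *dev,
				    struct device_node *node)
{
	u32 dma_requests = 127;	/* default for legacy boot */

	/* of_property_read_u32() returns 0 on success and leaves the
	 * default untouched when the property is absent. */
	if (!node || of_property_read_u32(node, "dma-requests", &dma_requests))
		dev_info(dev, "missing dma-requests property, using %u\n",
			 dma_requests);

	return dma_requests;
}
```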
-
Peter Ujfalusi authored
Instead of magic numbers in the code, use defines for the number of logical DMA channels and DMA requests. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Peter Ujfalusi authored
The DRA7x has more peripherals with DMA requests than the sDMA can handle: 205 vs 127. All DMA requests are routed through the DMA crossbar, which can be configured to route selected incoming DMA requests to specific request lines of the DMA controller. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Peter Ujfalusi authored
DMA routers are transparent devices used to mux DMA requests from peripherals to DMA controllers. They are used when the SoC integrates more devices with DMA requests than its controllers can handle. DRA7x is one example of such a SoC: the sDMA can handle 128 DMA request lines, but at SoC level there are 205 DMA requests. The of_dma_router is registered as an of_dma_controller with a special xlate function and additional parameters. The router driver is responsible for crafting the dma_spec (in the of_dma_route_allocate callback) that can be used to request a DMA channel from the real DMA controller. This way the router is transparent to the system while remaining generic enough to be used in different environments. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
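A hedged sketch of the router side, assuming the callback and registration described above; the body, comments and names are illustrative, not a real driver:

```c
#include <linux/err.h>
#include <linux/of_dma.h>

/* The allocate callback claims a free request line on the real DMA
 * controller, programs the crossbar, and rewrites dma_spec so that the
 * controller's own xlate sees one of its native request numbers. */
static void *example_route_allocate(struct of_phandle_args *dma_spec,
				    struct of_dma *ofdma)
{
	/* 1. pick a free controller request line
	 * 2. program the crossbar register for dma_spec->args[0]
	 * 3. point dma_spec->np at the controller node and replace
	 *    args[0] with the chosen line
	 * 4. return per-route data, handed back later in ->route_free() */
	return ERR_PTR(-ENODEV);	/* placeholder body */
}

/* Registration, typically from the router's probe():
 *	router->dev = dev;
 *	router->route_free = example_route_free;
 *	of_dma_router_register(node, example_route_allocate, router);
 */
```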
-
- 08 May, 2015 2 commits
-
-
Jens Kuske authored
The H3 SoC has the same dma engine as the A31 (sun6i), with a reduced number of endpoints and physical channels. Add the proper config data and compatible string to support it. Signed-off-by: Jens Kuske <jenskuske@gmail.com> Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Geert Uytterhoeven authored
Commit 3cd44dcd ("dmaengine: remove Renesas Audio DMAC peri peri") forgot to remove the header file with the platform data definitions. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-