Commit 23c25876 authored by Linus Torvalds

Merge tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "Updates for this cycle include:

   - new drivers for the Spreadtrum DMA controller, and the ST MDMA and DMAMUX
     controllers

   - PM support for the IMG MDC driver

   - updates to bcm-sba-raid driver and improvements to sun6i driver

   - subsystem-wide conversions:
      - timers converted to use timer_setup()
      - removal of PCI pool API usage in favour of the DMA pool API
      - cleanup of %p format specifier usage (dropping redundant 0x before %pad)

   - minor updates to a bunch of drivers"

* tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (49 commits)
  dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
  dmaengine: dmatest: warn user when dma test times out
  dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue"
  dmaengine: stm32_mdma: activate pack/unpack feature
  dmaengine: at_hdmac: Remove unnecessary 0x prefixes before %pad
  dmaengine: coh901318: Remove unnecessary 0x prefixes before %pad
  MAINTAINERS: Step down from a co-maintainer of DW DMAC driver
  dmaengine: pch_dma: Replace PCI pool old API
  dmaengine: Convert timers to use timer_setup()
  dmaengine: sprd: Add Spreadtrum DMA driver
  dt-bindings: dmaengine: Add Spreadtrum SC9860 DMA controller
  dmaengine: sun6i: Retrieve channel count/max request from devicetree
  dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs
  dmaengine: bcm-sba-raid: Use common GPL comment header
  dmaengine: bcm-sba-raid: Use only single mailbox channel
  dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock
  dmaengine: pl330: fix descriptor allocation fail
  dmaengine: rcar-dmac: use TCRB instead of TCR for residue
  dmaengine: sun6i: Add support for Allwinner A64 and compatibles
  arm64: allwinner: a64: Add devicetree binding for DMA controller
  ...
parents e0ca3826 cecd5fc5
@@ -3,6 +3,8 @@
Required Properties:
- compatible: "renesas,<soctype>-usb-dmac", "renesas,usb-dmac" as fallback.
Examples with soctypes are:
- "renesas,r8a7743-usb-dmac" (RZ/G1M)
- "renesas,r8a7745-usb-dmac" (RZ/G1E)
- "renesas,r8a7790-usb-dmac" (R-Car H2) - "renesas,r8a7790-usb-dmac" (R-Car H2)
- "renesas,r8a7791-usb-dmac" (R-Car M2-W) - "renesas,r8a7791-usb-dmac" (R-Car M2-W)
- "renesas,r8a7793-usb-dmac" (R-Car M2-N) - "renesas,r8a7793-usb-dmac" (R-Car M2-N)
......
* Spreadtrum DMA controller
This binding follows the generic DMA bindings defined in dma.txt.
Required properties:
- compatible: Should be "sprd,sc9860-dma".
- reg: Should contain DMA registers location and length.
- interrupts: Should contain one interrupt shared by all channels.
- #dma-cells: must be <1>. Used to represent the number of integer
cells in the dmas property of client device.
- #dma-channels : Number of DMA channels supported. Should be 32.
- clock-names: Should contain the clock of the DMA controller.
- clocks: Should contain a clock specifier for each entry in clock-names.
Example:
Controller:
apdma: dma-controller@20100000 {
compatible = "sprd,sc9860-dma";
reg = <0x20100000 0x4000>;
interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
#dma-cells = <1>;
#dma-channels = <32>;
clock-names = "enable";
clocks = <&clk_ap_ahb_gates 5>;
};
Client:
DMA clients connected to the Spreadtrum DMA controller must use the format
described in the dma.txt file, using a two-cell specifier for each channel.
The two cells in order are:
1. A phandle pointing to the DMA controller.
2. The channel id.
spi0: spi@70a00000{
...
dma-names = "rx_chn", "tx_chn";
dmas = <&apdma 11>, <&apdma 12>;
...
};
@@ -13,6 +13,7 @@ Required properties:
- #dma-cells : Must be <4>. See DMA client paragraph for more details.
Optional properties:
- dma-requests : Number of DMA requests supported.
- resets: Reference to a reset controller asserting the DMA controller
- st,mem2mem: boolean; if defined, it indicates that the controller supports
memory-to-memory transfer
@@ -34,12 +35,13 @@ Example:
#dma-cells = <4>;
st,mem2mem;
resets = <&rcc 150>;
dma-requests = <8>;
};
* DMA client
DMA clients connected to the STM32 DMA controller must use the format
-described in the dma.txt file, using a five-cell specifier for each
+described in the dma.txt file, using a four-cell specifier for each
channel: a phandle to the DMA controller plus the following four integer cells:
1. The channel id
......
STM32 DMA MUX (DMA request router)
Required properties:
- compatible: "st,stm32h7-dmamux"
- reg: Memory map for accessing module
- #dma-cells: Should be set to <3>.
First parameter is request line number.
Second is DMA channel configuration
Third is Fifo threshold
For more details about the three cells, please see
stm32-dma.txt documentation binding file
- dma-masters: Phandle pointing to the DMA controllers.
Several controllers are allowed. Only DMA controllers compatible
with "st,stm32-dma" are supported.
Optional properties:
- dma-channels : Number of DMAMUX output channels supported.
- dma-requests : Number of DMAMUX requests supported.
- resets: Reference to a reset controller asserting the DMA controller
- clocks: Input clock of the DMAMUX instance.
Example:
/* DMA controller 1 */
dma1: dma-controller@40020000 {
compatible = "st,stm32-dma";
reg = <0x40020000 0x400>;
interrupts = <11>,
<12>,
<13>,
<14>,
<15>,
<16>,
<17>,
<47>;
clocks = <&timer_clk>;
#dma-cells = <4>;
st,mem2mem;
resets = <&rcc 150>;
dma-channels = <8>;
dma-requests = <8>;
};
/* DMA controller 2 */
dma2: dma@40020400 {
compatible = "st,stm32-dma";
reg = <0x40020400 0x400>;
interrupts = <56>,
<57>,
<58>,
<59>,
<60>,
<68>,
<69>,
<70>;
clocks = <&timer_clk>;
#dma-cells = <4>;
st,mem2mem;
resets = <&rcc 150>;
dma-channels = <8>;
dma-requests = <8>;
};
/* DMA mux */
dmamux1: dma-router@40020800 {
compatible = "st,stm32h7-dmamux";
reg = <0x40020800 0x3c>;
#dma-cells = <3>;
dma-requests = <128>;
dma-channels = <16>;
dma-masters = <&dma1 &dma2>;
clocks = <&timer_clk>;
};
/* DMA client */
usart1: serial@40011000 {
compatible = "st,stm32-usart", "st,stm32-uart";
reg = <0x40011000 0x400>;
interrupts = <37>;
clocks = <&timer_clk>;
dmas = <&dmamux1 41 0x414 0>,
<&dmamux1 42 0x414 0>;
dma-names = "rx", "tx";
};
* STMicroelectronics STM32 MDMA controller
The STM32 MDMA is a general-purpose direct memory access controller capable of
supporting 64 independent DMA channels with 256 HW requests.
Required properties:
- compatible: Should be "st,stm32h7-mdma"
- reg: Should contain MDMA registers location and length. This should include
all of the per-channel registers.
- interrupts: Should contain the MDMA interrupt.
- clocks: Should contain the input clock of the DMA instance.
- resets: Reference to a reset controller asserting the DMA controller.
- #dma-cells : Must be <5>. See DMA client paragraph for more details.
Optional properties:
- dma-channels: Number of DMA channels supported by the controller.
- dma-requests: Number of DMA request signals supported by the controller.
- st,ahb-addr-masks: Array of u32 mask to list memory devices addressed via
AHB bus.
Example:
mdma1: dma@52000000 {
compatible = "st,stm32h7-mdma";
reg = <0x52000000 0x1000>;
interrupts = <122>;
clocks = <&timer_clk>;
resets = <&rcc 992>;
#dma-cells = <5>;
dma-channels = <16>;
dma-requests = <32>;
st,ahb-addr-masks = <0x20000000>, <0x00000000>;
};
* DMA client
DMA clients connected to the STM32 MDMA controller must use the format
described in the dma.txt file, using a five-cell specifier for each channel:
a phandle to the MDMA controller plus the following five integer cells:
1. The request line number
2. The priority level
0x00: Low
0x01: Medium
0x10: High
0x11: Very high
3. A 32bit mask specifying the DMA channel configuration
-bit 0-1: Source increment mode
0x00: Source address pointer is fixed
0x10: Source address pointer is incremented after each data transfer
0x11: Source address pointer is decremented after each data transfer
-bit 2-3: Destination increment mode
0x00: Destination address pointer is fixed
0x10: Destination address pointer is incremented after each data
transfer
0x11: Destination address pointer is decremented after each data
transfer
-bit 8-9: Source increment offset size
0x00: byte (8bit)
0x01: half-word (16bit)
0x10: word (32bit)
0x11: double-word (64bit)
-bit 10-11: Destination increment offset size
0x00: byte (8bit)
0x01: half-word (16bit)
0x10: word (32bit)
0x11: double-word (64bit)
-bit 25-18: The number of bytes to be transferred in a single transfer
(min = 1 byte, max = 128 bytes)
-bit 29:28: Trigger Mode
0x00: Each MDMA request triggers a buffer transfer (max 128 bytes)
0x01: Each MDMA request triggers a block transfer (max 64K bytes)
0x10: Each MDMA request triggers a repeated block transfer
0x11: Each MDMA request triggers a linked list transfer
4. A 32bit value specifying the register to be used to acknowledge the request
if no HW ack signal is used by the MDMA client
5. A 32bit mask specifying the value to be written to acknowledge the request
if no HW ack signal is used by the MDMA client
Example:
i2c4: i2c@5c002000 {
compatible = "st,stm32f7-i2c";
reg = <0x5c002000 0x400>;
interrupts = <95>,
<96>;
clocks = <&timer_clk>;
#address-cells = <1>;
#size-cells = <0>;
dmas = <&mdma1 36 0x0 0x40008 0x0 0x0>,
<&mdma1 37 0x0 0x40002 0x0 0x0>;
dma-names = "rx", "tx";
status = "disabled";
};
@@ -27,6 +27,32 @@ Example:
#dma-cells = <1>;
};
------------------------------------------------------------------------------
For A64 DMA controller:
Required properties:
- compatible: "allwinner,sun50i-a64-dma"
- dma-channels: Number of DMA channels supported by the controller.
Refer to Documentation/devicetree/bindings/dma/dma.txt
- all properties above, i.e. reg, interrupts, clocks, resets and #dma-cells
Optional properties:
- dma-requests: Number of DMA request signals supported by the controller.
Refer to Documentation/devicetree/bindings/dma/dma.txt
Example:
dma: dma-controller@1c02000 {
compatible = "allwinner,sun50i-a64-dma";
reg = <0x01c02000 0x1000>;
interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&ccu CLK_BUS_DMA>;
dma-channels = <8>;
dma-requests = <27>;
resets = <&ccu RST_BUS_DMA>;
#dma-cells = <1>;
};
------------------------------------------------------------------------------
Clients:
DMA clients connected to the A31 DMA controller must use the format
......
@@ -12947,7 +12947,7 @@ F: Documentation/devicetree/bindings/arc/axs10*
SYNOPSYS DESIGNWARE DMAC DRIVER
M: Viresh Kumar <vireshk@kernel.org>
-M: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+R: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
S: Maintained
F: include/linux/dma/dw.h
F: include/linux/platform_data/dma-dw.h
......
@@ -115,7 +115,7 @@ config BCM_SBA_RAID
select DMA_ENGINE_RAID
select ASYNC_TX_DISABLE_XOR_VAL_DMA
select ASYNC_TX_DISABLE_PQ_VAL_DMA
-default ARCH_BCM_IPROC
+default m if ARCH_BCM_IPROC
help
Enable support for Broadcom SBA RAID Engine. The SBA RAID
engine is available on most of the Broadcom iProc SoCs. It
@@ -483,6 +483,35 @@ config STM32_DMA
If you have a board based on such a MCU and wish to use DMA say Y
here.
config STM32_DMAMUX
bool "STMicroelectronics STM32 dma multiplexer support"
depends on STM32_DMA || COMPILE_TEST
help
Enable support for the on-chip DMA multiplexer on STMicroelectronics
STM32 MCUs.
If you have a board based on such a MCU and wish to use DMAMUX say Y
here.
config STM32_MDMA
bool "STMicroelectronics STM32 master dma support"
depends on ARCH_STM32 || COMPILE_TEST
depends on OF
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Enable support for the on-chip MDMA controller on STMicroelectronics
STM32 platforms.
If you have a board based on STM32 SoC and wish to use the master DMA
say Y here.
config SPRD_DMA
tristate "Spreadtrum DMA support"
depends on ARCH_SPRD || COMPILE_TEST
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Enable support for the on-chip DMA controller on Spreadtrum platform.
config S3C24XX_DMAC
bool "Samsung S3C24XX DMA support"
depends on ARCH_S3C24XX || COMPILE_TEST
......
@@ -60,6 +60,9 @@ obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
obj-$(CONFIG_STM32_DMA) += stm32-dma.o
obj-$(CONFIG_STM32_DMAMUX) += stm32-dmamux.o
obj-$(CONFIG_STM32_MDMA) += stm32-mdma.o
obj-$(CONFIG_SPRD_DMA) += sprd-dma.o
obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o
obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
......
@@ -385,7 +385,7 @@ static void vdbg_dump_regs(struct at_dma_chan *atchan) {}
static void atc_dump_lli(struct at_dma_chan *atchan, struct at_lli *lli)
{
dev_crit(chan2dev(&atchan->chan_common),
-" desc: s%pad d%pad ctrl0x%x:0x%x l0x%pad\n",
+" desc: s%pad d%pad ctrl0x%x:0x%x l%pad\n",
&lli->saddr, &lli->daddr,
lli->ctrla, lli->ctrlb, &lli->dscr);
}
......
/*
* Copyright (C) 2017 Broadcom
*
-* This program is free software; you can redistribute it and/or modify
-* it under the terms of the GNU General Public License version 2 as
-* published by the Free Software Foundation.
+* This program is free software; you can redistribute it and/or
+* modify it under the terms of the GNU General Public License as
+* published by the Free Software Foundation version 2.
+*
+* This program is distributed "as is" WITHOUT ANY WARRANTY of any
+* kind, whether express or implied; without even the implied warranty
+* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+* GNU General Public License for more details.
*/
/*
@@ -25,11 +30,8 @@
*
* The Broadcom SBA RAID driver does not require any register programming
* except submitting request to SBA hardware device via mailbox channels.
-* This driver implements a DMA device with one DMA channel using a set
-* of mailbox channels provided by Broadcom SoC specific ring manager
-* driver. To exploit parallelism (as described above), all DMA request
-* coming to SBA RAID DMA channel are broken down to smaller requests
-* and submitted to multiple mailbox channels in round-robin fashion.
+* This driver implements a DMA device with one DMA channel using a single
+* mailbox channel provided by Broadcom SoC specific ring manager driver.
* For having more SBA DMA channels, we can create more SBA device nodes
* in Broadcom SoC specific DTS based on number of hardware rings supported
* by Broadcom SoC ring manager.
@@ -85,6 +87,7 @@
#define SBA_CMD_GALOIS 0xe #define SBA_CMD_GALOIS 0xe
#define SBA_MAX_REQ_PER_MBOX_CHANNEL 8192 #define SBA_MAX_REQ_PER_MBOX_CHANNEL 8192
#define SBA_MAX_MSG_SEND_PER_MBOX_CHANNEL 8
/* Driver helper macros */ /* Driver helper macros */
#define to_sba_request(tx) \ #define to_sba_request(tx) \
...@@ -142,9 +145,7 @@ struct sba_device { ...@@ -142,9 +145,7 @@ struct sba_device {
u32 max_cmds_pool_size; u32 max_cmds_pool_size;
/* Maibox client and Mailbox channels */ /* Maibox client and Mailbox channels */
struct mbox_client client; struct mbox_client client;
int mchans_count; struct mbox_chan *mchan;
atomic_t mchans_current;
struct mbox_chan **mchans;
struct device *mbox_dev; struct device *mbox_dev;
/* DMA device and DMA channel */ /* DMA device and DMA channel */
struct dma_device dma_dev; struct dma_device dma_dev;
...@@ -200,14 +201,6 @@ static inline u32 __pure sba_cmd_pq_c_mdata(u32 d, u32 b1, u32 b0) ...@@ -200,14 +201,6 @@ static inline u32 __pure sba_cmd_pq_c_mdata(u32 d, u32 b1, u32 b0)
/* ====== General helper routines ===== */ /* ====== General helper routines ===== */
static void sba_peek_mchans(struct sba_device *sba)
{
int mchan_idx;
for (mchan_idx = 0; mchan_idx < sba->mchans_count; mchan_idx++)
mbox_client_peek_data(sba->mchans[mchan_idx]);
}
static struct sba_request *sba_alloc_request(struct sba_device *sba) static struct sba_request *sba_alloc_request(struct sba_device *sba)
{ {
bool found = false; bool found = false;
...@@ -231,7 +224,7 @@ static struct sba_request *sba_alloc_request(struct sba_device *sba) ...@@ -231,7 +224,7 @@ static struct sba_request *sba_alloc_request(struct sba_device *sba)
* would have completed which will create more * would have completed which will create more
* room for new requests. * room for new requests.
*/ */
sba_peek_mchans(sba); mbox_client_peek_data(sba->mchan);
return NULL; return NULL;
} }
...@@ -369,15 +362,11 @@ static void sba_cleanup_pending_requests(struct sba_device *sba) ...@@ -369,15 +362,11 @@ static void sba_cleanup_pending_requests(struct sba_device *sba)
static int sba_send_mbox_request(struct sba_device *sba, static int sba_send_mbox_request(struct sba_device *sba,
struct sba_request *req) struct sba_request *req)
{ {
int mchans_idx, ret = 0; int ret = 0;
/* Select mailbox channel in round-robin fashion */
mchans_idx = atomic_inc_return(&sba->mchans_current);
mchans_idx = mchans_idx % sba->mchans_count;
/* Send message for the request */ /* Send message for the request */
req->msg.error = 0; req->msg.error = 0;
ret = mbox_send_message(sba->mchans[mchans_idx], &req->msg); ret = mbox_send_message(sba->mchan, &req->msg);
if (ret < 0) { if (ret < 0) {
dev_err(sba->dev, "send message failed with error %d", ret); dev_err(sba->dev, "send message failed with error %d", ret);
return ret; return ret;
...@@ -390,7 +379,7 @@ static int sba_send_mbox_request(struct sba_device *sba, ...@@ -390,7 +379,7 @@ static int sba_send_mbox_request(struct sba_device *sba,
} }
/* Signal txdone for mailbox channel */ /* Signal txdone for mailbox channel */
mbox_client_txdone(sba->mchans[mchans_idx], ret); mbox_client_txdone(sba->mchan, ret);
return ret; return ret;
} }
...@@ -402,13 +391,8 @@ static void _sba_process_pending_requests(struct sba_device *sba) ...@@ -402,13 +391,8 @@ static void _sba_process_pending_requests(struct sba_device *sba)
u32 count; u32 count;
struct sba_request *req; struct sba_request *req;
/* /* Process few pending requests */
* Process few pending requests count = SBA_MAX_MSG_SEND_PER_MBOX_CHANNEL;
*
* For now, we process (<number_of_mailbox_channels> * 8)
* number of requests at a time.
*/
count = sba->mchans_count * 8;
while (!list_empty(&sba->reqs_pending_list) && count) { while (!list_empty(&sba->reqs_pending_list) && count) {
/* Get the first pending request */ /* Get the first pending request */
req = list_first_entry(&sba->reqs_pending_list, req = list_first_entry(&sba->reqs_pending_list,
...@@ -442,7 +426,9 @@ static void sba_process_received_request(struct sba_device *sba, ...@@ -442,7 +426,9 @@ static void sba_process_received_request(struct sba_device *sba,
WARN_ON(tx->cookie < 0); WARN_ON(tx->cookie < 0);
if (tx->cookie > 0) { if (tx->cookie > 0) {
spin_lock_irqsave(&sba->reqs_lock, flags);
dma_cookie_complete(tx); dma_cookie_complete(tx);
spin_unlock_irqrestore(&sba->reqs_lock, flags);
dmaengine_desc_get_callback_invoke(tx, NULL); dmaengine_desc_get_callback_invoke(tx, NULL);
dma_descriptor_unmap(tx); dma_descriptor_unmap(tx);
tx->callback = NULL; tx->callback = NULL;
...@@ -570,7 +556,7 @@ static enum dma_status sba_tx_status(struct dma_chan *dchan, ...@@ -570,7 +556,7 @@ static enum dma_status sba_tx_status(struct dma_chan *dchan,
if (ret == DMA_COMPLETE) if (ret == DMA_COMPLETE)
return ret; return ret;
sba_peek_mchans(sba); mbox_client_peek_data(sba->mchan);
return dma_cookie_status(dchan, cookie, txstate); return dma_cookie_status(dchan, cookie, txstate);
} }
...@@ -1637,7 +1623,7 @@ static int sba_async_register(struct sba_device *sba) ...@@ -1637,7 +1623,7 @@ static int sba_async_register(struct sba_device *sba)
static int sba_probe(struct platform_device *pdev) static int sba_probe(struct platform_device *pdev)
{ {
int i, ret = 0, mchans_count; int ret = 0;
struct sba_device *sba; struct sba_device *sba;
struct platform_device *mbox_pdev; struct platform_device *mbox_pdev;
struct of_phandle_args args; struct of_phandle_args args;
...@@ -1650,12 +1636,11 @@ static int sba_probe(struct platform_device *pdev) ...@@ -1650,12 +1636,11 @@ static int sba_probe(struct platform_device *pdev)
sba->dev = &pdev->dev; sba->dev = &pdev->dev;
platform_set_drvdata(pdev, sba); platform_set_drvdata(pdev, sba);
/* Number of channels equals number of mailbox channels */ /* Number of mailbox channels should be atleast 1 */
ret = of_count_phandle_with_args(pdev->dev.of_node, ret = of_count_phandle_with_args(pdev->dev.of_node,
"mboxes", "#mbox-cells"); "mboxes", "#mbox-cells");
if (ret <= 0) if (ret <= 0)
return -ENODEV; return -ENODEV;
mchans_count = ret;
/* Determine SBA version from DT compatible string */ /* Determine SBA version from DT compatible string */
if (of_device_is_compatible(sba->dev->of_node, "brcm,iproc-sba")) if (of_device_is_compatible(sba->dev->of_node, "brcm,iproc-sba"))
...@@ -1688,7 +1673,7 @@ static int sba_probe(struct platform_device *pdev) ...@@ -1688,7 +1673,7 @@ static int sba_probe(struct platform_device *pdev)
default: default:
return -EINVAL; return -EINVAL;
} }
sba->max_req = SBA_MAX_REQ_PER_MBOX_CHANNEL * mchans_count; sba->max_req = SBA_MAX_REQ_PER_MBOX_CHANNEL;
sba->max_cmd_per_req = sba->max_pq_srcs + 3; sba->max_cmd_per_req = sba->max_pq_srcs + 3;
sba->max_xor_srcs = sba->max_cmd_per_req - 1; sba->max_xor_srcs = sba->max_cmd_per_req - 1;
sba->max_resp_pool_size = sba->max_req * sba->hw_resp_size; sba->max_resp_pool_size = sba->max_req * sba->hw_resp_size;
...@@ -1702,55 +1687,30 @@ static int sba_probe(struct platform_device *pdev) ...@@ -1702,55 +1687,30 @@ static int sba_probe(struct platform_device *pdev)
sba->client.knows_txdone = true; sba->client.knows_txdone = true;
sba->client.tx_tout = 0; sba->client.tx_tout = 0;
/* Allocate mailbox channel array */ /* Request mailbox channel */
sba->mchans = devm_kcalloc(&pdev->dev, mchans_count, sba->mchan = mbox_request_channel(&sba->client, 0);
sizeof(*sba->mchans), GFP_KERNEL); if (IS_ERR(sba->mchan)) {
if (!sba->mchans) ret = PTR_ERR(sba->mchan);
return -ENOMEM; goto fail_free_mchan;
/* Request mailbox channels */
sba->mchans_count = 0;
for (i = 0; i < mchans_count; i++) {
sba->mchans[i] = mbox_request_channel(&sba->client, i);
if (IS_ERR(sba->mchans[i])) {
ret = PTR_ERR(sba->mchans[i]);
goto fail_free_mchans;
}
sba->mchans_count++;
} }
atomic_set(&sba->mchans_current, 0);
/* Find-out underlying mailbox device */ /* Find-out underlying mailbox device */
ret = of_parse_phandle_with_args(pdev->dev.of_node, ret = of_parse_phandle_with_args(pdev->dev.of_node,
"mboxes", "#mbox-cells", 0, &args); "mboxes", "#mbox-cells", 0, &args);
if (ret) if (ret)
goto fail_free_mchans; goto fail_free_mchan;
mbox_pdev = of_find_device_by_node(args.np); mbox_pdev = of_find_device_by_node(args.np);
of_node_put(args.np); of_node_put(args.np);
if (!mbox_pdev) { if (!mbox_pdev) {
ret = -ENODEV; ret = -ENODEV;
goto fail_free_mchans; goto fail_free_mchan;
} }
sba->mbox_dev = &mbox_pdev->dev; sba->mbox_dev = &mbox_pdev->dev;
/* All mailbox channels should be of same ring manager device */
for (i = 1; i < mchans_count; i++) {
ret = of_parse_phandle_with_args(pdev->dev.of_node,
"mboxes", "#mbox-cells", i, &args);
if (ret)
goto fail_free_mchans;
mbox_pdev = of_find_device_by_node(args.np);
of_node_put(args.np);
if (sba->mbox_dev != &mbox_pdev->dev) {
ret = -EINVAL;
goto fail_free_mchans;
}
}
/* Prealloc channel resource */ /* Prealloc channel resource */
ret = sba_prealloc_channel_resources(sba); ret = sba_prealloc_channel_resources(sba);
if (ret) if (ret)
goto fail_free_mchans; goto fail_free_mchan;
/* Check availability of debugfs */ /* Check availability of debugfs */
if (!debugfs_initialized()) if (!debugfs_initialized())
...@@ -1777,24 +1737,22 @@ static int sba_probe(struct platform_device *pdev) ...@@ -1777,24 +1737,22 @@ static int sba_probe(struct platform_device *pdev)
goto fail_free_resources; goto fail_free_resources;
/* Print device info */ /* Print device info */
dev_info(sba->dev, "%s using SBAv%d and %d mailbox channels", dev_info(sba->dev, "%s using SBAv%d mailbox channel from %s",
dma_chan_name(&sba->dma_chan), sba->ver+1, dma_chan_name(&sba->dma_chan), sba->ver+1,
sba->mchans_count); dev_name(sba->mbox_dev));
return 0; return 0;
fail_free_resources: fail_free_resources:
debugfs_remove_recursive(sba->root); debugfs_remove_recursive(sba->root);
sba_freeup_channel_resources(sba); sba_freeup_channel_resources(sba);
fail_free_mchans: fail_free_mchan:
for (i = 0; i < sba->mchans_count; i++) mbox_free_channel(sba->mchan);
mbox_free_channel(sba->mchans[i]);
return ret; return ret;
} }
static int sba_remove(struct platform_device *pdev) static int sba_remove(struct platform_device *pdev)
{ {
int i;
struct sba_device *sba = platform_get_drvdata(pdev); struct sba_device *sba = platform_get_drvdata(pdev);
dma_async_device_unregister(&sba->dma_dev); dma_async_device_unregister(&sba->dma_dev);
...@@ -1803,8 +1761,7 @@ static int sba_remove(struct platform_device *pdev) ...@@ -1803,8 +1761,7 @@ static int sba_remove(struct platform_device *pdev)
sba_freeup_channel_resources(sba); sba_freeup_channel_resources(sba);
for (i = 0; i < sba->mchans_count; i++) mbox_free_channel(sba->mchan);
mbox_free_channel(sba->mchans[i]);
return 0; return 0;
} }
......
@@ -1319,8 +1319,8 @@ static void coh901318_list_print(struct coh901318_chan *cohc,
int i = 0;
while (l) {
-dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src 0x%pad"
-", dst 0x%pad, link 0x%pad virt_link_addr 0x%p\n",
+dev_vdbg(COHC_2_DEV(cohc), "i %d, lli %p, ctrl 0x%x, src %pad"
+", dst %pad, link %pad virt_link_addr 0x%p\n",
i, l, l->control, &l->src_addr, &l->dst_addr,
&l->link_addr, l->virt_link_addr);
i++;
@@ -2231,7 +2231,7 @@ coh901318_prep_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
spin_lock_irqsave(&cohc->lock, flg);
dev_vdbg(COHC_2_DEV(cohc),
-"[%s] channel %d src 0x%pad dest 0x%pad size %zu\n",
+"[%s] channel %d src %pad dest %pad size %zu\n",
__func__, cohc->id, &src, &dest, size);
if (flags & DMA_PREP_INTERRUPT)
......
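The coh901318 hunks above, like the at_hdmac one earlier, drop literal "0x" prefixes because %pad takes a pointer to a dma_addr_t and already prints the value with its own 0x prefix. A minimal sketch of the intended usage (foo_show_mapping is a hypothetical helper, not part of these patches):

#include <linux/device.h>
#include <linux/types.h>

/*
 * %pad dereferences a dma_addr_t pointer and prints a 0x-prefixed value on
 * its own, so a literal "0x" in the format string would come out as "0x0x...".
 */
static void foo_show_mapping(struct device *dev, dma_addr_t addr)
{
	dev_dbg(dev, "mapped at %pad\n", &addr);
}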
...@@ -72,6 +72,9 @@ ...@@ -72,6 +72,9 @@
#define AXI_DMAC_FLAG_CYCLIC BIT(0) #define AXI_DMAC_FLAG_CYCLIC BIT(0)
/* The maximum ID allocated by the hardware is 31 */
#define AXI_DMAC_SG_UNUSED 32U
struct axi_dmac_sg { struct axi_dmac_sg {
dma_addr_t src_addr; dma_addr_t src_addr;
dma_addr_t dest_addr; dma_addr_t dest_addr;
...@@ -80,6 +83,7 @@ struct axi_dmac_sg { ...@@ -80,6 +83,7 @@ struct axi_dmac_sg {
unsigned int dest_stride; unsigned int dest_stride;
unsigned int src_stride; unsigned int src_stride;
unsigned int id; unsigned int id;
bool schedule_when_free;
}; };
struct axi_dmac_desc { struct axi_dmac_desc {
...@@ -200,11 +204,21 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan) ...@@ -200,11 +204,21 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
} }
sg = &desc->sg[desc->num_submitted]; sg = &desc->sg[desc->num_submitted];
/* Already queued in cyclic mode. Wait for it to finish */
if (sg->id != AXI_DMAC_SG_UNUSED) {
sg->schedule_when_free = true;
return;
}
desc->num_submitted++; desc->num_submitted++;
if (desc->num_submitted == desc->num_sgs) if (desc->num_submitted == desc->num_sgs) {
chan->next_desc = NULL; if (desc->cyclic)
desc->num_submitted = 0; /* Start again */
else else
chan->next_desc = NULL;
} else {
chan->next_desc = desc; chan->next_desc = desc;
}
sg->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID); sg->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID);
...@@ -220,9 +234,11 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan) ...@@ -220,9 +234,11 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
/* /*
* If the hardware supports cyclic transfers and there is no callback to * If the hardware supports cyclic transfers and there is no callback to
* call, enable hw cyclic mode to avoid unnecessary interrupts. * call and only a single segment, enable hw cyclic mode to avoid
* unnecessary interrupts.
*/ */
if (chan->hw_cyclic && desc->cyclic && !desc->vdesc.tx.callback) if (chan->hw_cyclic && desc->cyclic && !desc->vdesc.tx.callback &&
desc->num_sgs == 1)
flags |= AXI_DMAC_FLAG_CYCLIC; flags |= AXI_DMAC_FLAG_CYCLIC;
axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->x_len - 1); axi_dmac_write(dmac, AXI_DMAC_REG_X_LENGTH, sg->x_len - 1);
...@@ -237,37 +253,52 @@ static struct axi_dmac_desc *axi_dmac_active_desc(struct axi_dmac_chan *chan) ...@@ -237,37 +253,52 @@ static struct axi_dmac_desc *axi_dmac_active_desc(struct axi_dmac_chan *chan)
struct axi_dmac_desc, vdesc.node); struct axi_dmac_desc, vdesc.node);
} }
static void axi_dmac_transfer_done(struct axi_dmac_chan *chan, static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
unsigned int completed_transfers) unsigned int completed_transfers)
{ {
struct axi_dmac_desc *active; struct axi_dmac_desc *active;
struct axi_dmac_sg *sg; struct axi_dmac_sg *sg;
bool start_next = false;
active = axi_dmac_active_desc(chan); active = axi_dmac_active_desc(chan);
if (!active) if (!active)
return; return false;
if (active->cyclic) {
vchan_cyclic_callback(&active->vdesc);
} else {
do { do {
sg = &active->sg[active->num_completed]; sg = &active->sg[active->num_completed];
if (sg->id == AXI_DMAC_SG_UNUSED) /* Not yet submitted */
break;
if (!(BIT(sg->id) & completed_transfers)) if (!(BIT(sg->id) & completed_transfers))
break; break;
active->num_completed++; active->num_completed++;
sg->id = AXI_DMAC_SG_UNUSED;
if (sg->schedule_when_free) {
sg->schedule_when_free = false;
start_next = true;
}
if (active->cyclic)
vchan_cyclic_callback(&active->vdesc);
if (active->num_completed == active->num_sgs) { if (active->num_completed == active->num_sgs) {
if (active->cyclic) {
active->num_completed = 0; /* wrap around */
} else {
list_del(&active->vdesc.node); list_del(&active->vdesc.node);
vchan_cookie_complete(&active->vdesc); vchan_cookie_complete(&active->vdesc);
active = axi_dmac_active_desc(chan); active = axi_dmac_active_desc(chan);
} }
} while (active);
} }
} while (active);
return start_next;
} }
static irqreturn_t axi_dmac_interrupt_handler(int irq, void *devid) static irqreturn_t axi_dmac_interrupt_handler(int irq, void *devid)
{ {
struct axi_dmac *dmac = devid; struct axi_dmac *dmac = devid;
unsigned int pending; unsigned int pending;
bool start_next = false;
pending = axi_dmac_read(dmac, AXI_DMAC_REG_IRQ_PENDING); pending = axi_dmac_read(dmac, AXI_DMAC_REG_IRQ_PENDING);
if (!pending) if (!pending)
...@@ -281,10 +312,10 @@ static irqreturn_t axi_dmac_interrupt_handler(int irq, void *devid) ...@@ -281,10 +312,10 @@ static irqreturn_t axi_dmac_interrupt_handler(int irq, void *devid)
unsigned int completed; unsigned int completed;
completed = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_DONE); completed = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_DONE);
axi_dmac_transfer_done(&dmac->chan, completed); start_next = axi_dmac_transfer_done(&dmac->chan, completed);
} }
/* Space has become available in the descriptor queue */ /* Space has become available in the descriptor queue */
if (pending & AXI_DMAC_IRQ_SOT) if ((pending & AXI_DMAC_IRQ_SOT) || start_next)
axi_dmac_start_transfer(&dmac->chan); axi_dmac_start_transfer(&dmac->chan);
spin_unlock(&dmac->chan.vchan.lock); spin_unlock(&dmac->chan.vchan.lock);
...@@ -334,12 +365,16 @@ static void axi_dmac_issue_pending(struct dma_chan *c) ...@@ -334,12 +365,16 @@ static void axi_dmac_issue_pending(struct dma_chan *c)
static struct axi_dmac_desc *axi_dmac_alloc_desc(unsigned int num_sgs) static struct axi_dmac_desc *axi_dmac_alloc_desc(unsigned int num_sgs)
{ {
struct axi_dmac_desc *desc; struct axi_dmac_desc *desc;
unsigned int i;
desc = kzalloc(sizeof(struct axi_dmac_desc) + desc = kzalloc(sizeof(struct axi_dmac_desc) +
sizeof(struct axi_dmac_sg) * num_sgs, GFP_NOWAIT); sizeof(struct axi_dmac_sg) * num_sgs, GFP_NOWAIT);
if (!desc) if (!desc)
return NULL; return NULL;
for (i = 0; i < num_sgs; i++)
desc->sg[i].id = AXI_DMAC_SG_UNUSED;
desc->num_sgs = num_sgs; desc->num_sgs = num_sgs;
return desc; return desc;
......
@@ -702,6 +702,7 @@ static int dmatest_func(void *data)
* free it this time?" dancing. For now, just
* leave it dangling.
*/
WARN(1, "dmatest: Kernel stack may be corrupted!!\n");
dmaengine_unmap_put(um);
result("test timed out", total_tests, src_off, dst_off,
len, 0);
......
@@ -891,6 +891,10 @@ static int edma_slave_config(struct dma_chan *chan,
cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;
if (cfg->src_maxburst > chan->device->max_burst ||
cfg->dst_maxburst > chan->device->max_burst)
return -EINVAL;
memcpy(&echan->cfg, cfg, sizeof(echan->cfg));
return 0;
@@ -1868,6 +1872,7 @@ static void edma_dma_init(struct edma_cc *ecc, bool legacy_mode)
s_ddev->dst_addr_widths = EDMA_DMA_BUSWIDTHS;
s_ddev->directions |= (BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV));
s_ddev->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
s_ddev->max_burst = SZ_32K - 1; /* CIDX: 16bit signed */
s_ddev->dev = ecc->dev;
INIT_LIST_HEAD(&s_ddev->channels);
......
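This edma change, together with the matching omap-dma hunk further down, makes slave configuration fail when a requested maxburst exceeds the max_burst the engine now advertises on its dma_device. Clients can query the same limit through dma_get_slave_caps(); a minimal sketch, with foo_clamp_burst() as a hypothetical helper:

#include <linux/dmaengine.h>
#include <linux/kernel.h>

/*
 * Clamp a wanted burst length to what the channel's engine reports;
 * caps.max_burst == 0 means the engine does not report a limit.
 */
static u32 foo_clamp_burst(struct dma_chan *chan, u32 wanted)
{
	struct dma_slave_caps caps;

	if (!dma_get_slave_caps(chan, &caps) && caps.max_burst)
		return min(wanted, caps.max_burst);

	return wanted;
}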
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <linux/of_device.h> #include <linux/of_device.h>
#include <linux/of_dma.h> #include <linux/of_dma.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h> #include <linux/regmap.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
...@@ -730,14 +731,23 @@ static int mdc_slave_config(struct dma_chan *chan, ...@@ -730,14 +731,23 @@ static int mdc_slave_config(struct dma_chan *chan,
return 0; return 0;
} }
static int mdc_alloc_chan_resources(struct dma_chan *chan)
{
struct mdc_chan *mchan = to_mdc_chan(chan);
struct device *dev = mdma2dev(mchan->mdma);
return pm_runtime_get_sync(dev);
}
static void mdc_free_chan_resources(struct dma_chan *chan) static void mdc_free_chan_resources(struct dma_chan *chan)
{ {
struct mdc_chan *mchan = to_mdc_chan(chan); struct mdc_chan *mchan = to_mdc_chan(chan);
struct mdc_dma *mdma = mchan->mdma; struct mdc_dma *mdma = mchan->mdma;
struct device *dev = mdma2dev(mdma);
mdc_terminate_all(chan); mdc_terminate_all(chan);
mdma->soc->disable_chan(mchan); mdma->soc->disable_chan(mchan);
pm_runtime_put(dev);
} }
static irqreturn_t mdc_chan_irq(int irq, void *dev_id) static irqreturn_t mdc_chan_irq(int irq, void *dev_id)
...@@ -854,6 +864,22 @@ static const struct of_device_id mdc_dma_of_match[] = { ...@@ -854,6 +864,22 @@ static const struct of_device_id mdc_dma_of_match[] = {
}; };
MODULE_DEVICE_TABLE(of, mdc_dma_of_match); MODULE_DEVICE_TABLE(of, mdc_dma_of_match);
static int img_mdc_runtime_suspend(struct device *dev)
{
struct mdc_dma *mdma = dev_get_drvdata(dev);
clk_disable_unprepare(mdma->clk);
return 0;
}
static int img_mdc_runtime_resume(struct device *dev)
{
struct mdc_dma *mdma = dev_get_drvdata(dev);
return clk_prepare_enable(mdma->clk);
}
static int mdc_dma_probe(struct platform_device *pdev) static int mdc_dma_probe(struct platform_device *pdev)
{ {
struct mdc_dma *mdma; struct mdc_dma *mdma;
...@@ -883,10 +909,6 @@ static int mdc_dma_probe(struct platform_device *pdev) ...@@ -883,10 +909,6 @@ static int mdc_dma_probe(struct platform_device *pdev)
if (IS_ERR(mdma->clk)) if (IS_ERR(mdma->clk))
return PTR_ERR(mdma->clk); return PTR_ERR(mdma->clk);
ret = clk_prepare_enable(mdma->clk);
if (ret)
return ret;
dma_cap_zero(mdma->dma_dev.cap_mask); dma_cap_zero(mdma->dma_dev.cap_mask);
dma_cap_set(DMA_SLAVE, mdma->dma_dev.cap_mask); dma_cap_set(DMA_SLAVE, mdma->dma_dev.cap_mask);
dma_cap_set(DMA_PRIVATE, mdma->dma_dev.cap_mask); dma_cap_set(DMA_PRIVATE, mdma->dma_dev.cap_mask);
...@@ -919,12 +941,13 @@ static int mdc_dma_probe(struct platform_device *pdev) ...@@ -919,12 +941,13 @@ static int mdc_dma_probe(struct platform_device *pdev)
"img,max-burst-multiplier", "img,max-burst-multiplier",
&mdma->max_burst_mult); &mdma->max_burst_mult);
if (ret) if (ret)
goto disable_clk; return ret;
mdma->dma_dev.dev = &pdev->dev; mdma->dma_dev.dev = &pdev->dev;
mdma->dma_dev.device_prep_slave_sg = mdc_prep_slave_sg; mdma->dma_dev.device_prep_slave_sg = mdc_prep_slave_sg;
mdma->dma_dev.device_prep_dma_cyclic = mdc_prep_dma_cyclic; mdma->dma_dev.device_prep_dma_cyclic = mdc_prep_dma_cyclic;
mdma->dma_dev.device_prep_dma_memcpy = mdc_prep_dma_memcpy; mdma->dma_dev.device_prep_dma_memcpy = mdc_prep_dma_memcpy;
mdma->dma_dev.device_alloc_chan_resources = mdc_alloc_chan_resources;
mdma->dma_dev.device_free_chan_resources = mdc_free_chan_resources; mdma->dma_dev.device_free_chan_resources = mdc_free_chan_resources;
mdma->dma_dev.device_tx_status = mdc_tx_status; mdma->dma_dev.device_tx_status = mdc_tx_status;
mdma->dma_dev.device_issue_pending = mdc_issue_pending; mdma->dma_dev.device_issue_pending = mdc_issue_pending;
...@@ -945,15 +968,14 @@ static int mdc_dma_probe(struct platform_device *pdev) ...@@ -945,15 +968,14 @@ static int mdc_dma_probe(struct platform_device *pdev)
mchan->mdma = mdma; mchan->mdma = mdma;
mchan->chan_nr = i; mchan->chan_nr = i;
mchan->irq = platform_get_irq(pdev, i); mchan->irq = platform_get_irq(pdev, i);
if (mchan->irq < 0) { if (mchan->irq < 0)
ret = mchan->irq; return mchan->irq;
goto disable_clk;
}
ret = devm_request_irq(&pdev->dev, mchan->irq, mdc_chan_irq, ret = devm_request_irq(&pdev->dev, mchan->irq, mdc_chan_irq,
IRQ_TYPE_LEVEL_HIGH, IRQ_TYPE_LEVEL_HIGH,
dev_name(&pdev->dev), mchan); dev_name(&pdev->dev), mchan);
if (ret < 0) if (ret < 0)
goto disable_clk; return ret;
mchan->vc.desc_free = mdc_desc_free; mchan->vc.desc_free = mdc_desc_free;
vchan_init(&mchan->vc, &mdma->dma_dev); vchan_init(&mchan->vc, &mdma->dma_dev);
...@@ -962,14 +984,19 @@ static int mdc_dma_probe(struct platform_device *pdev) ...@@ -962,14 +984,19 @@ static int mdc_dma_probe(struct platform_device *pdev)
mdma->desc_pool = dmam_pool_create(dev_name(&pdev->dev), &pdev->dev, mdma->desc_pool = dmam_pool_create(dev_name(&pdev->dev), &pdev->dev,
sizeof(struct mdc_hw_list_desc), sizeof(struct mdc_hw_list_desc),
4, 0); 4, 0);
if (!mdma->desc_pool) { if (!mdma->desc_pool)
ret = -ENOMEM; return -ENOMEM;
goto disable_clk;
pm_runtime_enable(&pdev->dev);
if (!pm_runtime_enabled(&pdev->dev)) {
ret = img_mdc_runtime_resume(&pdev->dev);
if (ret)
return ret;
} }
ret = dma_async_device_register(&mdma->dma_dev); ret = dma_async_device_register(&mdma->dma_dev);
if (ret) if (ret)
goto disable_clk; goto suspend;
ret = of_dma_controller_register(pdev->dev.of_node, mdc_of_xlate, mdma); ret = of_dma_controller_register(pdev->dev.of_node, mdc_of_xlate, mdma);
if (ret) if (ret)
...@@ -982,8 +1009,10 @@ static int mdc_dma_probe(struct platform_device *pdev) ...@@ -982,8 +1009,10 @@ static int mdc_dma_probe(struct platform_device *pdev)
unregister: unregister:
dma_async_device_unregister(&mdma->dma_dev); dma_async_device_unregister(&mdma->dma_dev);
disable_clk: suspend:
clk_disable_unprepare(mdma->clk); if (!pm_runtime_enabled(&pdev->dev))
img_mdc_runtime_suspend(&pdev->dev);
pm_runtime_disable(&pdev->dev);
return ret; return ret;
} }
...@@ -1004,14 +1033,47 @@ static int mdc_dma_remove(struct platform_device *pdev) ...@@ -1004,14 +1033,47 @@ static int mdc_dma_remove(struct platform_device *pdev)
tasklet_kill(&mchan->vc.task); tasklet_kill(&mchan->vc.task);
} }
clk_disable_unprepare(mdma->clk); pm_runtime_disable(&pdev->dev);
if (!pm_runtime_status_suspended(&pdev->dev))
img_mdc_runtime_suspend(&pdev->dev);
return 0; return 0;
} }
#ifdef CONFIG_PM_SLEEP
static int img_mdc_suspend_late(struct device *dev)
{
struct mdc_dma *mdma = dev_get_drvdata(dev);
int i;
/* Check that all channels are idle */
for (i = 0; i < mdma->nr_channels; i++) {
struct mdc_chan *mchan = &mdma->channels[i];
if (unlikely(mchan->desc))
return -EBUSY;
}
return pm_runtime_force_suspend(dev);
}
static int img_mdc_resume_early(struct device *dev)
{
return pm_runtime_force_resume(dev);
}
#endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops img_mdc_pm_ops = {
SET_RUNTIME_PM_OPS(img_mdc_runtime_suspend,
img_mdc_runtime_resume, NULL)
SET_LATE_SYSTEM_SLEEP_PM_OPS(img_mdc_suspend_late,
img_mdc_resume_early)
};
static struct platform_driver mdc_dma_driver = { static struct platform_driver mdc_dma_driver = {
.driver = { .driver = {
.name = "img-mdc-dma", .name = "img-mdc-dma",
.pm = &img_mdc_pm_ops,
.of_match_table = of_match_ptr(mdc_dma_of_match), .of_match_table = of_match_ptr(mdc_dma_of_match),
}, },
.probe = mdc_dma_probe, .probe = mdc_dma_probe,
......
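The img-mdc-dma changes above move clock handling into runtime PM callbacks and reuse them for system sleep through pm_runtime_force_suspend()/pm_runtime_force_resume(). A minimal sketch of that wiring, assuming the runtime callbacks only gate a functional clock (all foo_* names are hypothetical):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

struct foo_dma {
	struct clk *clk;
};

static int foo_runtime_suspend(struct device *dev)
{
	struct foo_dma *fd = dev_get_drvdata(dev);

	clk_disable_unprepare(fd->clk);
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	struct foo_dma *fd = dev_get_drvdata(dev);

	return clk_prepare_enable(fd->clk);
}

/* System sleep simply forces the device through its runtime PM callbacks. */
static const struct dev_pm_ops foo_pm_ops = {
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				     pm_runtime_force_resume)
};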
@@ -364,9 +364,9 @@ static void imxdma_disable_hw(struct imxdma_channel *imxdmac)
local_irq_restore(flags);
}
-static void imxdma_watchdog(unsigned long data)
+static void imxdma_watchdog(struct timer_list *t)
{
-struct imxdma_channel *imxdmac = (struct imxdma_channel *)data;
+struct imxdma_channel *imxdmac = from_timer(imxdmac, t, watchdog);
struct imxdma_engine *imxdma = imxdmac->imxdma;
int channel = imxdmac->channel;
@@ -1153,9 +1153,7 @@ static int __init imxdma_probe(struct platform_device *pdev)
}
imxdmac->irq = irq + i;
-init_timer(&imxdmac->watchdog);
-imxdmac->watchdog.function = &imxdma_watchdog;
-imxdmac->watchdog.data = (unsigned long)imxdmac;
+timer_setup(&imxdmac->watchdog, imxdma_watchdog, 0);
}
imxdmac->imxdma = imxdma;
......
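The imx-dma hunk above is one instance of the tree-wide timer conversion called out in the pull summary: the timer callback now receives the struct timer_list pointer and recovers its container with from_timer() instead of being handed an unsigned long cookie. A minimal sketch of the pattern (struct foo and its members are hypothetical):

#include <linux/printk.h>
#include <linux/timer.h>

struct foo {
	struct timer_list watchdog;
	int channel;
};

static void foo_watchdog(struct timer_list *t)
{
	/* from_timer() maps the timer_list back to its containing struct foo */
	struct foo *f = from_timer(f, t, watchdog);

	pr_warn("channel %d watchdog expired\n", f->channel);
}

static void foo_init(struct foo *f)
{
	/* replaces init_timer()/setup_timer() plus the .function/.data assignments */
	timer_setup(&f->watchdog, foo_watchdog, 0);
}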
@@ -178,6 +178,14 @@
#define SDMA_WATERMARK_LEVEL_HWE BIT(29)
#define SDMA_WATERMARK_LEVEL_CONT BIT(31)
#define SDMA_DMA_BUSWIDTHS (BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
#define SDMA_DMA_DIRECTIONS (BIT(DMA_DEV_TO_MEM) | \
BIT(DMA_MEM_TO_DEV) | \
BIT(DMA_DEV_TO_DEV))
/*
* Mode/Count of data node descriptors - IPCv2
*/
@@ -1851,9 +1859,9 @@ static int sdma_probe(struct platform_device *pdev)
sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic;
sdma->dma_device.device_config = sdma_config;
sdma->dma_device.device_terminate_all = sdma_disable_channel_with_delay;
-sdma->dma_device.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
-sdma->dma_device.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
-sdma->dma_device.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+sdma->dma_device.src_addr_widths = SDMA_DMA_BUSWIDTHS;
+sdma->dma_device.dst_addr_widths = SDMA_DMA_BUSWIDTHS;
+sdma->dma_device.directions = SDMA_DMA_DIRECTIONS;
sdma->dma_device.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
sdma->dma_device.device_issue_pending = sdma_issue_pending;
sdma->dma_device.dev->dma_parms = &sdma->dma_parms;
......
@@ -474,7 +474,7 @@ int ioat_check_space_lock(struct ioatdma_chan *ioat_chan, int num_descs)
if (time_is_before_jiffies(ioat_chan->timer.expires)
&& timer_pending(&ioat_chan->timer)) {
mod_timer(&ioat_chan->timer, jiffies + COMPLETION_TIMEOUT);
-ioat_timer_event((unsigned long)ioat_chan);
+ioat_timer_event(&ioat_chan->timer);
}
return -ENOMEM;
@@ -862,9 +862,9 @@ static void check_active(struct ioatdma_chan *ioat_chan)
mod_timer(&ioat_chan->timer, jiffies + IDLE_TIMEOUT);
}
-void ioat_timer_event(unsigned long data)
+void ioat_timer_event(struct timer_list *t)
{
-struct ioatdma_chan *ioat_chan = to_ioat_chan((void *)data);
+struct ioatdma_chan *ioat_chan = from_timer(ioat_chan, t, timer);
dma_addr_t phys_complete;
u64 status;
......
@@ -406,10 +406,9 @@ enum dma_status
ioat_tx_status(struct dma_chan *c, dma_cookie_t cookie,
struct dma_tx_state *txstate);
void ioat_cleanup_event(unsigned long data);
-void ioat_timer_event(unsigned long data);
+void ioat_timer_event(struct timer_list *t);
int ioat_check_space_lock(struct ioatdma_chan *ioat_chan, int num_descs);
void ioat_issue_pending(struct dma_chan *chan);
-void ioat_timer_event(unsigned long data);
/* IOAT Init functions */
bool is_bwd_ioat(struct pci_dev *pdev);
......
@@ -760,7 +760,7 @@ ioat_init_channel(struct ioatdma_device *ioat_dma,
dma_cookie_init(&ioat_chan->dma_chan);
list_add_tail(&ioat_chan->dma_chan.device_node, &dma->channels);
ioat_dma->idx[idx] = ioat_chan;
-setup_timer(&ioat_chan->timer, ioat_timer_event, data);
+timer_setup(&ioat_chan->timer, ioat_timer_event, 0);
tasklet_init(&ioat_chan->cleanup_task, ioat_cleanup_event, data);
}
......
@@ -1286,7 +1286,6 @@ MODULE_DEVICE_TABLE(of, nbpf_match);
static int nbpf_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
-const struct of_device_id *of_id = of_match_device(nbpf_match, dev);
struct device_node *np = dev->of_node;
struct nbpf_device *nbpf;
struct dma_device *dma_dev;
@@ -1300,10 +1299,10 @@
BUILD_BUG_ON(sizeof(struct nbpf_desc_page) > PAGE_SIZE);
/* DT only */
-if (!np || !of_id || !of_id->data)
+if (!np)
return -ENODEV;
-cfg = of_id->data;
+cfg = of_device_get_match_data(dev);
num_channels = cfg->num_channels;
nbpf = devm_kzalloc(dev, sizeof(*nbpf) + num_channels *
......
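The nbpf hunk above replaces the open-coded of_match_device() lookup with of_device_get_match_data(), which returns the matching entry's .data pointer (or NULL) in one call. A minimal sketch with a hypothetical foo_config table:

#include <linux/errno.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

struct foo_config {
	int num_channels;
};

static const struct foo_config foo_cfg = { .num_channels = 8 };

static const struct of_device_id foo_of_match[] = {
	{ .compatible = "vendor,foo-dma", .data = &foo_cfg },
	{ /* sentinel */ }
};

static int foo_probe(struct platform_device *pdev)
{
	/* NULL if there was no OF match or the matching entry has no .data */
	const struct foo_config *cfg = of_device_get_match_data(&pdev->dev);

	if (!cfg)
		return -ENODEV;

	return 0;
}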
@@ -1288,6 +1288,10 @@ static int omap_dma_slave_config(struct dma_chan *chan, struct dma_slave_config
cfg->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
return -EINVAL;
if (cfg->src_maxburst > chan->device->max_burst ||
cfg->dst_maxburst > chan->device->max_burst)
return -EINVAL;
memcpy(&c->cfg, cfg, sizeof(c->cfg));
return 0;
@@ -1482,6 +1486,7 @@ static int omap_dma_probe(struct platform_device *pdev)
od->ddev.dst_addr_widths = OMAP_DMA_BUSWIDTHS;
od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
od->ddev.max_burst = SZ_16M - 1; /* CCEN: 24bit unsigned */
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
spin_lock_init(&od->lock);
......
@@ -123,7 +123,7 @@ struct pch_dma_chan {
struct pch_dma {
struct dma_device dma;
void __iomem *membase;
-struct pci_pool *pool;
+struct dma_pool *pool;
struct pch_dma_regs regs;
struct pch_dma_desc_regs ch_regs[MAX_CHAN_NR];
struct pch_dma_chan channels[MAX_CHAN_NR];
@@ -437,7 +437,7 @@ static struct pch_dma_desc *pdc_alloc_desc(struct dma_chan *chan, gfp_t flags)
struct pch_dma *pd = to_pd(chan->device);
dma_addr_t addr;
-desc = pci_pool_zalloc(pd->pool, flags, &addr);
+desc = dma_pool_zalloc(pd->pool, flags, &addr);
if (desc) {
INIT_LIST_HEAD(&desc->tx_list);
dma_async_tx_descriptor_init(&desc->txd, chan);
@@ -549,7 +549,7 @@ static void pd_free_chan_resources(struct dma_chan *chan)
spin_unlock_irq(&pd_chan->lock);
list_for_each_entry_safe(desc, _d, &tmp_list, desc_node)
-pci_pool_free(pd->pool, desc, desc->txd.phys);
+dma_pool_free(pd->pool, desc, desc->txd.phys);
pdc_enable_irq(chan, 0);
}
@@ -880,7 +880,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
goto err_iounmap;
}
-pd->pool = pci_pool_create("pch_dma_desc_pool", pdev,
+pd->pool = dma_pool_create("pch_dma_desc_pool", &pdev->dev,
sizeof(struct pch_dma_desc), 4, 0);
if (!pd->pool) {
dev_err(&pdev->dev, "Failed to alloc DMA descriptors\n");
@@ -931,7 +931,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
return 0;
err_free_pool:
-pci_pool_destroy(pd->pool);
+dma_pool_destroy(pd->pool);
err_free_irq:
free_irq(pdev->irq, pd);
err_iounmap:
@@ -963,7 +963,7 @@ static void pch_dma_remove(struct pci_dev *pdev)
tasklet_kill(&pd_chan->tasklet);
}
-pci_pool_destroy(pd->pool);
+dma_pool_destroy(pd->pool);
pci_iounmap(pdev, pd->membase);
pci_release_regions(pdev);
pci_disable_device(pdev);
......
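pch_dma above is one of the drivers converted away from the pci_pool wrappers named in the pull summary; the dma_pool calls map one-to-one, with a struct device taking the place of the struct pci_dev. A minimal sketch of the dma_pool lifecycle (foo_use_pool and its sizes are hypothetical):

#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int foo_use_pool(struct device *dev)
{
	struct dma_pool *pool;
	dma_addr_t phys;
	void *desc;

	/* pool of 64-byte blocks, 4-byte aligned, no boundary restriction */
	pool = dma_pool_create("foo_desc_pool", dev, 64, 4, 0);
	if (!pool)
		return -ENOMEM;

	/* zeroed block plus its bus address for the hardware */
	desc = dma_pool_zalloc(pool, GFP_KERNEL, &phys);
	if (desc)
		dma_pool_free(pool, desc, phys);

	dma_pool_destroy(pool);
	return 0;
}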
@@ -2390,7 +2390,8 @@ static inline void _init_desc(struct dma_pl330_desc *desc)
 }

 /* Returns the number of descriptors added to the DMAC pool */
-static int add_desc(struct pl330_dmac *pl330, gfp_t flg, int count)
+static int add_desc(struct list_head *pool, spinlock_t *lock,
+		    gfp_t flg, int count)
 {
 	struct dma_pl330_desc *desc;
 	unsigned long flags;

@@ -2400,27 +2401,28 @@ static int add_desc(struct pl330_dmac *pl330, gfp_t flg, int count)
 	if (!desc)
 		return 0;

-	spin_lock_irqsave(&pl330->pool_lock, flags);
+	spin_lock_irqsave(lock, flags);

 	for (i = 0; i < count; i++) {
 		_init_desc(&desc[i]);
-		list_add_tail(&desc[i].node, &pl330->desc_pool);
+		list_add_tail(&desc[i].node, pool);
 	}

-	spin_unlock_irqrestore(&pl330->pool_lock, flags);
+	spin_unlock_irqrestore(lock, flags);

 	return count;
 }

-static struct dma_pl330_desc *pluck_desc(struct pl330_dmac *pl330)
+static struct dma_pl330_desc *pluck_desc(struct list_head *pool,
+					 spinlock_t *lock)
 {
 	struct dma_pl330_desc *desc = NULL;
 	unsigned long flags;

-	spin_lock_irqsave(&pl330->pool_lock, flags);
+	spin_lock_irqsave(lock, flags);

-	if (!list_empty(&pl330->desc_pool)) {
-		desc = list_entry(pl330->desc_pool.next,
+	if (!list_empty(pool)) {
+		desc = list_entry(pool->next,
				struct dma_pl330_desc, node);

 		list_del_init(&desc->node);

@@ -2429,7 +2431,7 @@ static struct dma_pl330_desc *pluck_desc(struct pl330_dmac *pl330)
 		desc->txd.callback = NULL;
 	}

-	spin_unlock_irqrestore(&pl330->pool_lock, flags);
+	spin_unlock_irqrestore(lock, flags);

 	return desc;
 }

@@ -2441,20 +2443,18 @@ static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch)
 	struct dma_pl330_desc *desc;

 	/* Pluck one desc from the pool of DMAC */
-	desc = pluck_desc(pl330);
+	desc = pluck_desc(&pl330->desc_pool, &pl330->pool_lock);

 	/* If the DMAC pool is empty, alloc new */
 	if (!desc) {
-		if (!add_desc(pl330, GFP_ATOMIC, 1))
-			return NULL;
+		DEFINE_SPINLOCK(lock);
+		LIST_HEAD(pool);

-		/* Try again */
-		desc = pluck_desc(pl330);
-		if (!desc) {
-			dev_err(pch->dmac->ddma.dev,
-				"%s:%d ALERT!\n", __func__, __LINE__);
-			return NULL;
-		}
+		if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
+			return NULL;
+
+		desc = pluck_desc(&pool, &lock);
+		WARN_ON(!desc || !list_empty(&pool));
 	}

 	/* Initialize the descriptor */

@@ -2868,7 +2868,8 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
 	spin_lock_init(&pl330->pool_lock);

 	/* Create a descriptor pool of default size */
-	if (!add_desc(pl330, GFP_KERNEL, NR_DEFAULT_DESC))
+	if (!add_desc(&pl330->desc_pool, &pl330->pool_lock,
+		      GFP_KERNEL, NR_DEFAULT_DESC))
 		dev_warn(&adev->dev, "unable to allocate desc\n");

 	INIT_LIST_HEAD(&pd->channels);
...
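The reworked add_desc()/pluck_desc() take the descriptor list and its lock as parameters so that the emergency allocation in pl330_get_desc() can use a private, on-stack pool: the just-added descriptor can no longer be taken from the shared pool by a concurrent caller before the retry, which appears to be the failure the old ALERT path reported. A condensed sketch of the pattern (assuming the new signatures above; the helper name is illustrative):

static struct dma_pl330_desc *example_alloc_one_atomic(void)
{
	DEFINE_SPINLOCK(lock);	/* lock visible only to this call */
	LIST_HEAD(pool);	/* private, initially empty descriptor list */

	if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
		return NULL;

	/* nobody else knows about 'pool', so the descriptor is still there */
	return pluck_desc(&pool, &lock);
}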
@@ -46,6 +46,7 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/of_dma.h>
+#include <linux/circ_buf.h>
 #include <linux/clk.h>
 #include <linux/dmaengine.h>
 #include <linux/pm_runtime.h>

@@ -78,6 +79,8 @@ struct bam_async_desc {
 	struct bam_desc_hw *curr_desc;

+	/* list node for the desc in the bam_chan list of descriptors */
+	struct list_head desc_node;
 	enum dma_transfer_direction dir;
 	size_t length;
 	struct bam_desc_hw desc[0];

@@ -347,6 +350,8 @@ static const struct reg_offset_data bam_v1_7_reg_info[] = {
 #define BAM_DESC_FIFO_SIZE SZ_32K
 #define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
 #define BAM_FIFO_SIZE (SZ_32K - 8)
+#define IS_BUSY(chan) (CIRC_SPACE(bchan->tail, bchan->head,\
+			MAX_DESCRIPTORS + 1) == 0)

 struct bam_chan {
 	struct virt_dma_chan vc;

@@ -356,8 +361,6 @@ struct bam_chan {
 	/* configuration from device tree */
 	u32 id;

-	struct bam_async_desc *curr_txd;	/* current running dma */
-
 	/* runtime configuration */
 	struct dma_slave_config slave;

@@ -372,6 +375,8 @@ struct bam_chan {
 	unsigned int initialized;	/* is the channel hw initialized? */
 	unsigned int paused;		/* is the channel paused? */
 	unsigned int reconfigure;	/* new slave config? */
+	/* list of descriptors currently processed */
+	struct list_head desc_list;

 	struct list_head node;
 };

@@ -539,7 +544,7 @@ static void bam_free_chan(struct dma_chan *chan)
 	vchan_free_chan_resources(to_virt_chan(chan));

-	if (bchan->curr_txd) {
+	if (!list_empty(&bchan->desc_list)) {
 		dev_err(bchan->bdev->dev, "Cannot free busy channel\n");
 		goto err;
 	}

@@ -632,8 +637,6 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
 	if (flags & DMA_PREP_INTERRUPT)
 		async_desc->flags |= DESC_FLAG_EOT;
-	else
-		async_desc->flags |= DESC_FLAG_INT;

 	async_desc->num_desc = num_alloc;
 	async_desc->curr_desc = async_desc->desc;
@@ -684,14 +687,16 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
 static int bam_dma_terminate_all(struct dma_chan *chan)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_async_desc *async_desc, *tmp;
 	unsigned long flag;
 	LIST_HEAD(head);

 	/* remove all transactions, including active transaction */
 	spin_lock_irqsave(&bchan->vc.lock, flag);
-	if (bchan->curr_txd) {
-		list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
-		bchan->curr_txd = NULL;
+	list_for_each_entry_safe(async_desc, tmp,
+				 &bchan->desc_list, desc_node) {
+		list_add(&async_desc->vd.node, &bchan->vc.desc_issued);
+		list_del(&async_desc->desc_node);
 	}

 	vchan_get_all_descriptors(&bchan->vc, &head);

@@ -763,9 +768,9 @@ static int bam_resume(struct dma_chan *chan)
  */
 static u32 process_channel_irqs(struct bam_device *bdev)
 {
-	u32 i, srcs, pipe_stts;
+	u32 i, srcs, pipe_stts, offset, avail;
 	unsigned long flags;
-	struct bam_async_desc *async_desc;
+	struct bam_async_desc *async_desc, *tmp;

 	srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE));

@@ -785,28 +790,41 @@ static u32 process_channel_irqs(struct bam_device *bdev)
 		writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR));

 		spin_lock_irqsave(&bchan->vc.lock, flags);
-		async_desc = bchan->curr_txd;

-		if (async_desc) {
-			async_desc->num_desc -= async_desc->xfer_len;
-			async_desc->curr_desc += async_desc->xfer_len;
-			bchan->curr_txd = NULL;
+		offset = readl_relaxed(bam_addr(bdev, i, BAM_P_SW_OFSTS)) &
+				       P_SW_OFSTS_MASK;
+		offset /= sizeof(struct bam_desc_hw);
+
+		/* Number of bytes available to read */
+		avail = CIRC_CNT(offset, bchan->head, MAX_DESCRIPTORS + 1);
+
+		list_for_each_entry_safe(async_desc, tmp,
+					 &bchan->desc_list, desc_node) {
+			/* Not enough data to read */
+			if (avail < async_desc->xfer_len)
+				break;

 			/* manage FIFO */
 			bchan->head += async_desc->xfer_len;
 			bchan->head %= MAX_DESCRIPTORS;

+			async_desc->num_desc -= async_desc->xfer_len;
+			async_desc->curr_desc += async_desc->xfer_len;
+			avail -= async_desc->xfer_len;
+
 			/*
 			 * if complete, process cookie. Otherwise
 			 * push back to front of desc_issued so that
 			 * it gets restarted by the tasklet
 			 */
-			if (!async_desc->num_desc)
+			if (!async_desc->num_desc) {
 				vchan_cookie_complete(&async_desc->vd);
-			else
+			} else {
 				list_add(&async_desc->vd.node,
 					 &bchan->vc.desc_issued);
+			}
+			list_del(&async_desc->desc_node);
 		}

 		spin_unlock_irqrestore(&bchan->vc.lock, flags);
 	}
@@ -867,6 +885,7 @@ static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 		struct dma_tx_state *txstate)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_async_desc *async_desc;
 	struct virt_dma_desc *vd;
 	int ret;
 	size_t residue = 0;

@@ -882,11 +901,17 @@ static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 	spin_lock_irqsave(&bchan->vc.lock, flags);

 	vd = vchan_find_desc(&bchan->vc, cookie);
-	if (vd)
+	if (vd) {
 		residue = container_of(vd, struct bam_async_desc, vd)->length;
-	else if (bchan->curr_txd && bchan->curr_txd->vd.tx.cookie == cookie)
-		for (i = 0; i < bchan->curr_txd->num_desc; i++)
-			residue += bchan->curr_txd->curr_desc[i].size;
+	} else {
+		list_for_each_entry(async_desc, &bchan->desc_list, desc_node) {
+			if (async_desc->vd.tx.cookie != cookie)
+				continue;
+
+			for (i = 0; i < async_desc->num_desc; i++)
+				residue += async_desc->curr_desc[i].size;
+		}
+	}

 	spin_unlock_irqrestore(&bchan->vc.lock, flags);

@@ -927,26 +952,28 @@ static void bam_start_dma(struct bam_chan *bchan)
 {
 	struct virt_dma_desc *vd = vchan_next_desc(&bchan->vc);
 	struct bam_device *bdev = bchan->bdev;
-	struct bam_async_desc *async_desc;
+	struct bam_async_desc *async_desc = NULL;
 	struct bam_desc_hw *desc;
 	struct bam_desc_hw *fifo = PTR_ALIGN(bchan->fifo_virt,
 					sizeof(struct bam_desc_hw));
 	int ret;
+	unsigned int avail;
+	struct dmaengine_desc_callback cb;

 	lockdep_assert_held(&bchan->vc.lock);

 	if (!vd)
 		return;

-	list_del(&vd->node);
-
-	async_desc = container_of(vd, struct bam_async_desc, vd);
-	bchan->curr_txd = async_desc;
-
 	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return;

+	while (vd && !IS_BUSY(bchan)) {
+		list_del(&vd->node);
+
+		async_desc = container_of(vd, struct bam_async_desc, vd);
+
 	/* on first use, initialize the channel hardware */
 	if (!bchan->initialized)
 		bam_chan_init_hw(bchan, async_desc->dir);

@@ -955,10 +982,12 @@ static void bam_start_dma(struct bam_chan *bchan)
 	if (bchan->reconfigure)
 		bam_apply_new_config(bchan, async_desc->dir);

-	desc = bchan->curr_txd->curr_desc;
+	desc = async_desc->curr_desc;
+	avail = CIRC_SPACE(bchan->tail, bchan->head,
+			   MAX_DESCRIPTORS + 1);

-	if (async_desc->num_desc > MAX_DESCRIPTORS)
-		async_desc->xfer_len = MAX_DESCRIPTORS;
+	if (async_desc->num_desc > avail)
+		async_desc->xfer_len = avail;
 	else
 		async_desc->xfer_len = async_desc->num_desc;
@@ -966,7 +995,22 @@ static void bam_start_dma(struct bam_chan *bchan)
 	if (async_desc->num_desc == async_desc->xfer_len)
 		desc[async_desc->xfer_len - 1].flags |=
 					cpu_to_le16(async_desc->flags);
-	else
+
+	vd = vchan_next_desc(&bchan->vc);
+
+	dmaengine_desc_get_callback(&async_desc->vd.tx, &cb);
+
+	/*
+	 * An interrupt is generated at this desc, if
+	 *  - FIFO is FULL.
+	 *  - No more descriptors to add.
+	 *  - If a callback completion was requested for this DESC,
+	 *    In this case, BAM will deliver the completion callback
+	 *    for this desc and continue processing the next desc.
+	 */
+	if (((avail <= async_desc->xfer_len) || !vd ||
+	     dmaengine_desc_callback_valid(&cb)) &&
+	    !(async_desc->flags & DESC_FLAG_EOT))
 		desc[async_desc->xfer_len - 1].flags |=
 					cpu_to_le16(DESC_FLAG_INT);

@@ -975,15 +1019,19 @@ static void bam_start_dma(struct bam_chan *bchan)
 		memcpy(&fifo[bchan->tail], desc,
 		       partial * sizeof(struct bam_desc_hw));
-		memcpy(fifo, &desc[partial], (async_desc->xfer_len - partial) *
+		memcpy(fifo, &desc[partial],
+		       (async_desc->xfer_len - partial) *
 		       sizeof(struct bam_desc_hw));
 	} else {
 		memcpy(&fifo[bchan->tail], desc,
-		       async_desc->xfer_len * sizeof(struct bam_desc_hw));
+		       async_desc->xfer_len *
+		       sizeof(struct bam_desc_hw));
 	}

 	bchan->tail += async_desc->xfer_len;
 	bchan->tail %= MAX_DESCRIPTORS;
+	list_add_tail(&async_desc->desc_node, &bchan->desc_list);
+	}

 	/* ensure descriptor writes and dma start not reordered */
 	wmb();

@@ -1012,7 +1060,7 @@ static void dma_tasklet(unsigned long data)
 		bchan = &bdev->channels[i];
 		spin_lock_irqsave(&bchan->vc.lock, flags);

-		if (!list_empty(&bchan->vc.desc_issued) && !bchan->curr_txd)
+		if (!list_empty(&bchan->vc.desc_issued) && !IS_BUSY(bchan))
 			bam_start_dma(bchan);
 		spin_unlock_irqrestore(&bchan->vc.lock, flags);
 	}

@@ -1033,7 +1081,7 @@ static void bam_issue_pending(struct dma_chan *chan)
 	spin_lock_irqsave(&bchan->vc.lock, flags);

 	/* if work pending and idle, start a transaction */
-	if (vchan_issue_pending(&bchan->vc) && !bchan->curr_txd)
+	if (vchan_issue_pending(&bchan->vc) && !IS_BUSY(bchan))
 		bam_start_dma(bchan);

 	spin_unlock_irqrestore(&bchan->vc.lock, flags);

@@ -1133,6 +1181,7 @@ static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,
 	vchan_init(&bchan->vc, &bdev->common);
 	bchan->vc.desc_free = bam_dma_free_desc;
+	INIT_LIST_HEAD(&bchan->desc_list);
 }

 static const struct of_device_id bam_of_match[] = {
...
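The changes above replace the single curr_txd pointer with a per-channel desc_list and track FIFO occupancy with the circ_buf helpers; IS_BUSY() and the avail computation in process_channel_irqs() are the standard producer/consumer accounting. A small sketch of that accounting (helper names are illustrative; the ring size, MAX_DESCRIPTORS + 1, is a power of two as the circ_buf macros expect):

#include <linux/circ_buf.h>

/* 'tail' is the producer index (descriptors queued to hardware),
 * 'head' the consumer index (descriptors already reaped). */
static bool example_fifo_full(unsigned int tail, unsigned int head,
			      unsigned int size)
{
	/* same test as IS_BUSY() above, applied to bchan->tail/bchan->head */
	return CIRC_SPACE(tail, head, size) == 0;
}

static unsigned int example_fifo_completed(unsigned int hw_offset,
					   unsigned int head,
					   unsigned int size)
{
	/* descriptors the hardware has advanced past since 'head' */
	return CIRC_CNT(hw_offset, head, size);
}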
@@ -823,6 +823,13 @@ static const struct sa11x0_dma_channel_desc chan_desc[] = {
 	CD(Ser4SSPRc, DDAR_RW),
 };

+static const struct dma_slave_map sa11x0_dma_map[] = {
+	{ "sa11x0-ir", "tx", "Ser2ICPTr" },
+	{ "sa11x0-ir", "rx", "Ser2ICPRc" },
+	{ "sa11x0-ssp", "tx", "Ser4SSPTr" },
+	{ "sa11x0-ssp", "rx", "Ser4SSPRc" },
+};
+
 static int sa11x0_dma_init_dmadev(struct dma_device *dmadev,
 	struct device *dev)
 {

@@ -909,6 +916,10 @@ static int sa11x0_dma_probe(struct platform_device *pdev)
 	spin_lock_init(&d->lock);
 	INIT_LIST_HEAD(&d->chan_pending);

+	d->slave.filter.fn = sa11x0_dma_filter_fn;
+	d->slave.filter.mapcnt = ARRAY_SIZE(sa11x0_dma_map);
+	d->slave.filter.map = sa11x0_dma_map;
+
 	d->base = ioremap(res->start, resource_size(res));
 	if (!d->base) {
 		ret = -ENOMEM;
...
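With the dma_slave_map registered above, clients on this non-DT platform can request their channels by name through the dmaengine core instead of open-coding a filter. A minimal sketch of the client side (illustrative, not part of the patch):

#include <linux/dmaengine.h>
#include <linux/err.h>

/* e.g. from the "sa11x0-ssp" platform device's probe routine */
static int example_request_rx_chan(struct device *dev, struct dma_chan **out)
{
	struct dma_chan *chan = dma_request_chan(dev, "rx");

	if (IS_ERR(chan))
		return PTR_ERR(chan);

	*out = chan;
	return 0;
}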
/*
*
* Copyright (C) STMicroelectronics SA 2017
* Author(s): M'boumba Cedric Madianga <cedric.madianga@gmail.com>
* Pierre-Yves Mordret <pierre-yves.mordret@st.com>
*
* License terms: GPL V2.0.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
* details.
*
* DMA Router driver for STM32 DMA MUX
*
* Based on TI DMA Crossbar driver
*
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
#include <linux/reset.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#define STM32_DMAMUX_CCR(x) (0x4 * (x))
#define STM32_DMAMUX_MAX_DMA_REQUESTS 32
#define STM32_DMAMUX_MAX_REQUESTS 255
struct stm32_dmamux {
u32 master;
u32 request;
u32 chan_id;
};
struct stm32_dmamux_data {
struct dma_router dmarouter;
struct clk *clk;
struct reset_control *rst;
void __iomem *iomem;
u32 dma_requests; /* Number of DMA requests connected to DMAMUX */
u32 dmamux_requests; /* Number of DMA requests routed toward DMAs */
spinlock_t lock; /* Protects register access */
unsigned long *dma_inuse; /* Used DMA channel */
u32 dma_reqs[]; /* Number of DMA requests per DMA master.
* [0] holds the number of DMA masters.
* To be kept at the very end of this structure.
*/
};
static inline u32 stm32_dmamux_read(void __iomem *iomem, u32 reg)
{
return readl_relaxed(iomem + reg);
}
static inline void stm32_dmamux_write(void __iomem *iomem, u32 reg, u32 val)
{
writel_relaxed(val, iomem + reg);
}
static void stm32_dmamux_free(struct device *dev, void *route_data)
{
struct stm32_dmamux_data *dmamux = dev_get_drvdata(dev);
struct stm32_dmamux *mux = route_data;
unsigned long flags;
/* Clear dma request */
spin_lock_irqsave(&dmamux->lock, flags);
stm32_dmamux_write(dmamux->iomem, STM32_DMAMUX_CCR(mux->chan_id), 0);
clear_bit(mux->chan_id, dmamux->dma_inuse);
if (!IS_ERR(dmamux->clk))
clk_disable(dmamux->clk);
spin_unlock_irqrestore(&dmamux->lock, flags);
dev_dbg(dev, "Unmapping DMAMUX(%u) to DMA%u(%u)\n",
mux->request, mux->master, mux->chan_id);
kfree(mux);
}
static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
struct stm32_dmamux_data *dmamux = platform_get_drvdata(pdev);
struct stm32_dmamux *mux;
u32 i, min, max;
int ret;
unsigned long flags;
if (dma_spec->args_count != 3) {
dev_err(&pdev->dev, "invalid number of dma mux args\n");
return ERR_PTR(-EINVAL);
}
if (dma_spec->args[0] > dmamux->dmamux_requests) {
dev_err(&pdev->dev, "invalid mux request number: %d\n",
dma_spec->args[0]);
return ERR_PTR(-EINVAL);
}
mux = kzalloc(sizeof(*mux), GFP_KERNEL);
if (!mux)
return ERR_PTR(-ENOMEM);
spin_lock_irqsave(&dmamux->lock, flags);
mux->chan_id = find_first_zero_bit(dmamux->dma_inuse,
dmamux->dma_requests);
set_bit(mux->chan_id, dmamux->dma_inuse);
spin_unlock_irqrestore(&dmamux->lock, flags);
if (mux->chan_id == dmamux->dma_requests) {
dev_err(&pdev->dev, "Run out of free DMA requests\n");
ret = -ENOMEM;
goto error;
}
/* Look for DMA Master */
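/*
 * dma_reqs[0] holds the number of masters; walking dma_reqs[1..n]
 * accumulates each master's request count so that, on exit, [min, max)
 * is the channel range owned by the matched master and (chan_id - min)
 * is the channel number local to it.
 */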
for (i = 1, min = 0, max = dmamux->dma_reqs[i];
i <= dmamux->dma_reqs[0];
min += dmamux->dma_reqs[i], max += dmamux->dma_reqs[++i])
if (mux->chan_id < max)
break;
mux->master = i - 1;
/* The of_node_put() will be done in of_dma_router_xlate function */
dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", i - 1);
if (!dma_spec->np) {
dev_err(&pdev->dev, "can't get dma master\n");
ret = -EINVAL;
goto error;
}
/* Set dma request */
spin_lock_irqsave(&dmamux->lock, flags);
if (!IS_ERR(dmamux->clk)) {
ret = clk_enable(dmamux->clk);
if (ret < 0) {
spin_unlock_irqrestore(&dmamux->lock, flags);
dev_err(&pdev->dev, "clk_prep_enable issue: %d\n", ret);
goto error;
}
}
spin_unlock_irqrestore(&dmamux->lock, flags);
mux->request = dma_spec->args[0];
/* craft DMA spec */
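/*
 * Rewrite the client's three-cell spec into the master's four-cell one:
 * cell 0 becomes the channel local to the selected master, a 0 is
 * inserted as the new cell 1, and the two remaining client cells are
 * shifted up unchanged.
 */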
dma_spec->args[3] = dma_spec->args[2];
dma_spec->args[2] = dma_spec->args[1];
dma_spec->args[1] = 0;
dma_spec->args[0] = mux->chan_id - min;
dma_spec->args_count = 4;
stm32_dmamux_write(dmamux->iomem, STM32_DMAMUX_CCR(mux->chan_id),
mux->request);
dev_dbg(&pdev->dev, "Mapping DMAMUX(%u) to DMA%u(%u)\n",
mux->request, mux->master, mux->chan_id);
return mux;
error:
clear_bit(mux->chan_id, dmamux->dma_inuse);
kfree(mux);
return ERR_PTR(ret);
}
static const struct of_device_id stm32_stm32dma_master_match[] = {
{ .compatible = "st,stm32-dma", },
{},
};
static int stm32_dmamux_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
const struct of_device_id *match;
struct device_node *dma_node;
struct stm32_dmamux_data *stm32_dmamux;
struct resource *res;
void __iomem *iomem;
int i, count, ret;
u32 dma_req;
if (!node)
return -ENODEV;
count = device_property_read_u32_array(&pdev->dev, "dma-masters",
NULL, 0);
if (count < 0) {
dev_err(&pdev->dev, "Can't get DMA master(s) node\n");
return -ENODEV;
}
stm32_dmamux = devm_kzalloc(&pdev->dev, sizeof(*stm32_dmamux) +
sizeof(u32) * (count + 1), GFP_KERNEL);
if (!stm32_dmamux)
return -ENOMEM;
dma_req = 0;
for (i = 1; i <= count; i++) {
dma_node = of_parse_phandle(node, "dma-masters", i - 1);
match = of_match_node(stm32_stm32dma_master_match, dma_node);
if (!match) {
dev_err(&pdev->dev, "DMA master is not supported\n");
of_node_put(dma_node);
return -EINVAL;
}
if (of_property_read_u32(dma_node, "dma-requests",
&stm32_dmamux->dma_reqs[i])) {
dev_info(&pdev->dev,
"Missing MUX output information, using %u.\n",
STM32_DMAMUX_MAX_DMA_REQUESTS);
stm32_dmamux->dma_reqs[i] =
STM32_DMAMUX_MAX_DMA_REQUESTS;
}
dma_req += stm32_dmamux->dma_reqs[i];
of_node_put(dma_node);
}
if (dma_req > STM32_DMAMUX_MAX_DMA_REQUESTS) {
dev_err(&pdev->dev, "Too many DMA Master Requests to manage\n");
return -ENODEV;
}
stm32_dmamux->dma_requests = dma_req;
stm32_dmamux->dma_reqs[0] = count;
stm32_dmamux->dma_inuse = devm_kcalloc(&pdev->dev,
BITS_TO_LONGS(dma_req),
sizeof(unsigned long),
GFP_KERNEL);
if (!stm32_dmamux->dma_inuse)
return -ENOMEM;
if (device_property_read_u32(&pdev->dev, "dma-requests",
&stm32_dmamux->dmamux_requests)) {
stm32_dmamux->dmamux_requests = STM32_DMAMUX_MAX_REQUESTS;
dev_warn(&pdev->dev, "DMAMUX defaulting on %u requests\n",
stm32_dmamux->dmamux_requests);
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -ENODEV;
iomem = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(iomem))
return PTR_ERR(iomem);
spin_lock_init(&stm32_dmamux->lock);
stm32_dmamux->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(stm32_dmamux->clk)) {
ret = PTR_ERR(stm32_dmamux->clk);
if (ret == -EPROBE_DEFER)
dev_info(&pdev->dev, "Missing controller clock\n");
return ret;
}
stm32_dmamux->rst = devm_reset_control_get(&pdev->dev, NULL);
if (!IS_ERR(stm32_dmamux->rst)) {
reset_control_assert(stm32_dmamux->rst);
udelay(2);
reset_control_deassert(stm32_dmamux->rst);
}
stm32_dmamux->iomem = iomem;
stm32_dmamux->dmarouter.dev = &pdev->dev;
stm32_dmamux->dmarouter.route_free = stm32_dmamux_free;
platform_set_drvdata(pdev, stm32_dmamux);
if (!IS_ERR(stm32_dmamux->clk)) {
ret = clk_prepare_enable(stm32_dmamux->clk);
if (ret < 0) {
dev_err(&pdev->dev, "clk_prep_enable error: %d\n", ret);
return ret;
}
}
/* Reset the dmamux */
for (i = 0; i < stm32_dmamux->dma_requests; i++)
stm32_dmamux_write(stm32_dmamux->iomem, STM32_DMAMUX_CCR(i), 0);
if (!IS_ERR(stm32_dmamux->clk))
clk_disable(stm32_dmamux->clk);
return of_dma_router_register(node, stm32_dmamux_route_allocate,
&stm32_dmamux->dmarouter);
}
static const struct of_device_id stm32_dmamux_match[] = {
{ .compatible = "st,stm32h7-dmamux" },
{},
};
static struct platform_driver stm32_dmamux_driver = {
.probe = stm32_dmamux_probe,
.driver = {
.name = "stm32-dmamux",
.of_match_table = stm32_dmamux_match,
},
};
static int __init stm32_dmamux_init(void)
{
return platform_driver_register(&stm32_dmamux_driver);
}
arch_initcall(stm32_dmamux_init);
MODULE_DESCRIPTION("DMA Router driver for STM32 DMA MUX");
MODULE_AUTHOR("M'boumba Cedric Madianga <cedric.madianga@gmail.com>");
MODULE_AUTHOR("Pierre-Yves Mordret <pierre-yves.mordret@st.com>");
MODULE_LICENSE("GPL v2");