Commit 1bc5e157 authored by Linus Torvalds's avatar Linus Torvalds

Merge tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we have support for few new devices, few new features and
  odd fixes spread thru the subsystem.

  New devices added:
   - support for CSRatlas7 dma controller
   - Allwinner H3(sun8i) controller
   - TI DMA crossbar driver on DRA7x
   - new pxa driver

  New features added:
   - memset support is brought back now that we have a user in the xdmac controller
   - interleaved transfers support different source and destination strides
   - support for DMA routers and configuration through DT
   - support for reusing descriptors
   - xdmac memset and interleaved transfer support
   - hdmac support for interleaved transfers
   - omap-dma support for memcpy

  Others:
   - Constify platform_device_id
   - mv_xor fixes and improvements"

* tag 'dmaengine-4.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
  dmaengine: xgene: fix file permission
  dmaengine: fsl-edma: clear pending interrupts on initialization
  dmaengine: xdmac: Add memset support
  Documentation: dmaengine: document DMA_CTRL_ACK
  dmaengine: virt-dma: don't always free descriptor upon completion
  dmaengine: Revert "drivers/dma: remove unused support for MEMSET operations"
  dmaengine: hdmac: Implement interleaved transfers
  dmaengine: Move icg helpers to global header
  dmaengine: mv_xor: improve descriptors list handling and reduce locking
  dmaengine: mv_xor: Enlarge descriptor pool size
  dmaengine: mv_xor: add support for a38x command in descriptor mode
  dmaengine: mv_xor: Rename function for consistent naming
  dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup
  dmaengine: pl330: fix wording in mcbufsz message
  dmaengine: sirf: add CSRatlas7 SoC support
  dmaengine: xgene-dma: Fix "incorrect type in assignement" warnings
  dmaengine: fix kernel-doc documentation
  dmaengine: pxa_dma: add support for legacy transition
  dmaengine: pxa_dma: add debug information
  dmaengine: pxa: add pxa dmaengine driver
  ...
parents f199b663 657d6127
...@@ -31,6 +31,34 @@ Example: ...@@ -31,6 +31,34 @@ Example:
dma-requests = <127>; dma-requests = <127>;
}; };
* DMA router
DMA routers are transparent IP blocks used to route DMA request lines from
devices to the DMA controller. Some SoCs (like TI DRA7x) have more peripherals
integrated with DMA requests than the DMA controller can handle directly.
Required property:
- dma-masters: phandle of the DMA controller or list of phandles for
the DMA controllers the router can direct the signal to.
- #dma-cells: Must be at least 1. Used to provide DMA router specific
information. See DMA client binding below for more
details.
Optional properties:
- dma-requests: Number of incoming request lines the router can handle.
- In the node pointed by the dma-masters:
- dma-requests: The router driver might need to look for this in order
to configure the routing.
Example:
sdma_xbar: dma-router@4a002b78 {
compatible = "ti,dra7-dma-crossbar";
reg = <0x4a002b78 0xfc>;
#dma-cells = <1>;
dma-requests = <205>;
ti,dma-safe-map = <0>;
dma-masters = <&sdma>;
};
* DMA client * DMA client
......
* Marvell XOR engines * Marvell XOR engines
Required properties: Required properties:
- compatible: Should be "marvell,orion-xor" - compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
- reg: Should contain registers location and length (two sets) - reg: Should contain registers location and length (two sets)
the first set is the low registers, the second set the high the first set is the low registers, the second set the high
registers for the XOR engine. registers for the XOR engine.
......
...@@ -3,7 +3,8 @@ ...@@ -3,7 +3,8 @@
See dma.txt first See dma.txt first
Required properties: Required properties:
- compatible: Should be "sirf,prima2-dmac" or "sirf,marco-dmac" - compatible: Should be "sirf,prima2-dmac", "sirf,atlas7-dmac" or
"sirf,atlas7-dmac-v2"
- reg: Should contain DMA registers location and length. - reg: Should contain DMA registers location and length.
- interrupts: Should contain one interrupt shared by all channel - interrupts: Should contain one interrupt shared by all channel
- #dma-cells: must be <1>. used to represent the number of integer - #dma-cells: must be <1>. used to represent the number of integer
......
...@@ -4,7 +4,10 @@ This driver follows the generic DMA bindings defined in dma.txt. ...@@ -4,7 +4,10 @@ This driver follows the generic DMA bindings defined in dma.txt.
Required properties: Required properties:
- compatible: Must be "allwinner,sun6i-a31-dma" or "allwinner,sun8i-a23-dma" - compatible: Must be one of
"allwinner,sun6i-a31-dma"
"allwinner,sun8i-a23-dma"
"allwinner,sun8i-h3-dma"
- reg: Should contain the registers base address and length - reg: Should contain the registers base address and length
- interrupts: Should contain a reference to the interrupt used by this device - interrupts: Should contain a reference to the interrupt used by this device
- clocks: Should contain a reference to the parent AHB clock - clocks: Should contain a reference to the parent AHB clock
......
Texas Instruments DMA Crossbar (DMA request router)
Required properties:
- compatible: "ti,dra7-dma-crossbar" for DRA7xx DMA crossbar
- reg: Memory map for accessing module
- #dma-cells: Should be set to <1>.
Clients should use the crossbar request number (input)
- dma-requests: Number of DMA requests the crossbar can receive
- dma-masters: phandle pointing to the DMA controller
The DMA controller node needs to have the following properties:
- dma-requests: Number of DMA requests the controller can handle
Optional properties:
- ti,dma-safe-map: Safe routing value for unused request lines
Example:
/* DMA controller */
sdma: dma-controller@4a056000 {
compatible = "ti,omap4430-sdma";
reg = <0x4a056000 0x1000>;
interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
#dma-cells = <1>;
dma-channels = <32>;
dma-requests = <127>;
};
/* DMA crossbar */
sdma_xbar: dma-router@4a002b78 {
compatible = "ti,dra7-dma-crossbar";
reg = <0x4a002b78 0xfc>;
#dma-cells = <1>;
dma-requests = <205>;
ti,dma-safe-map = <0>;
dma-masters = <&sdma>;
};
/* DMA client */
uart1: serial@4806a000 {
compatible = "ti,omap4-uart";
reg = <0x4806a000 0x100>;
interrupts-extended = <&gic GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>;
ti,hwmods = "uart1";
clock-frequency = <48000000>;
status = "disabled";
dmas = <&sdma_xbar 49>, <&sdma_xbar 50>;
dma-names = "tx", "rx";
};
...@@ -345,11 +345,12 @@ where to put them) ...@@ -345,11 +345,12 @@ where to put them)
that abstracts it away. that abstracts it away.
* DMA_CTRL_ACK * DMA_CTRL_ACK
- Undocumented feature - If set, the transfer can be reused after being completed.
- No one really has an idea of what it's about, besides being - There is a guarantee the transfer won't be freed until it is acked
related to reusing the DMA transaction descriptors or having by async_tx_ack().
additional transactions added to it in the async-tx API - As a consequence, if a device driver wants to skip the dma_map_sg() and
- Useless in the case of the slave API dma_unmap_sg() in between 2 transfers, because the DMA'd data wasn't used,
it can resubmit the transfer right after its completion.
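As a rough client-side sketch of the behaviour described above (not taken
from a specific driver; whether a given provider accepts a resubmitted
descriptor is driver specific), a transfer prepared without DMA_CTRL_ACK
survives its completion and can be acked later:

  #include <linux/dmaengine.h>

  /* Prepare without DMA_CTRL_ACK: the descriptor won't be freed on
   * completion, so the client may resubmit it instead of remapping and
   * re-preparing the buffers. */
  static struct dma_async_tx_descriptor *
  prep_reusable(struct dma_chan *chan, struct scatterlist *sgl,
                unsigned int nents)
  {
      return dmaengine_prep_slave_sg(chan, sgl, nents, DMA_DEV_TO_MEM,
                                     DMA_PREP_INTERRUPT);
  }

  /* After completion: either push the same descriptor again, skipping the
   * dma_map_sg()/dma_unmap_sg() pair, or ack it so the engine can free it. */
  static void reuse_or_release(struct dma_chan *chan,
                               struct dma_async_tx_descriptor *tx, bool reuse)
  {
      if (reuse) {
          dmaengine_submit(tx);
          dma_async_issue_pending(chan);
      } else {
          async_tx_ack(tx);
      }
  }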
General Design Notes General Design Notes
-------------------- --------------------
......
PXA/MMP - DMA Slave controller
==============================
Constraints
-----------
a) Transfers hot queuing
A driver submitting a transfer and issuing it should be guaranteed that the
transfer is queued even on a running DMA channel.
This implies that the queuing doesn't wait for the previous transfer to end,
and that the descriptor chaining is not only done in the irq/tasklet code
triggered by the end of the transfer.
A transfer which is submitted and issued on a phy doesn't wait for the phy to
stop and restart, but is submitted on a "running channel". Other drivers,
especially mmp_pdma, waited for the phy to stop before relaunching a new
transfer.
b) All transfers that asked for confirmation should be signaled
Any issued transfer with DMA_PREP_INTERRUPT should trigger a callback call.
This implies that even if the irq/tasklet is triggered by the end of tx1, and
tx2 has already finished by the time the irq/tasklet runs, both tx1->complete()
and tx2->complete() should be called.
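Constraints a) and b) are requirements on the driver, but they are easiest to
picture from the client's usual prep/submit/issue sequence. A minimal
client-side sketch, assuming a slave channel chan and already-mapped
scatterlists; the my_frame type and function names are hypothetical:

  #include <linux/completion.h>
  #include <linux/dmaengine.h>

  struct my_frame {                      /* hypothetical client buffer */
      struct completion done;
      struct scatterlist *sgl;
      unsigned int nents;
  };

  static void frame_done(void *param)
  {
      /* must be called once per finished transfer, even if several
       * transfers are found completed in a single irq/tasklet run */
      complete(&((struct my_frame *)param)->done);
  }

  static dma_cookie_t queue_frame(struct dma_chan *chan, struct my_frame *f)
  {
      struct dma_async_tx_descriptor *tx;

      tx = dmaengine_prep_slave_sg(chan, f->sgl, f->nents, DMA_DEV_TO_MEM,
                                   DMA_PREP_INTERRUPT);
      if (!tx)
          return -ENOMEM;
      tx->callback = frame_done;
      tx->callback_param = f;
      return dmaengine_submit(tx);
  }

  /* Hot queuing: the second call may land while the channel is already
   * running the first transfer; the driver must still chain it. */
  static void queue_two(struct dma_chan *chan, struct my_frame *f1,
                        struct my_frame *f2)
  {
      queue_frame(chan, f1);
      dma_async_issue_pending(chan);     /* channel starts running */
      queue_frame(chan, f2);             /* hot-queued on the running channel */
      dma_async_issue_pending(chan);
  }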
c) Channel running state
A driver should be able to query if a channel is running or not. For the
multimedia case, such as video capture, if a transfer is submitted and then
a check of the DMA channel reports a "stopped channel", the transfer should
not be issued until the next "start of frame interrupt", hence the need to
know if a channel is in running or stopped state.
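A client-side approximation of that query, using only the generic dmaengine
status API (a hedged sketch; the driver may expose a more precise notion of
"running" internally):

  #include <linux/dmaengine.h>

  /* Returns true if the channel still has work in flight for the last
   * issued cookie, i.e. the "running channel" case described above. */
  static bool channel_is_running(struct dma_chan *chan, dma_cookie_t last)
  {
      struct dma_tx_state state;

      return dmaengine_tx_status(chan, last, &state) == DMA_IN_PROGRESS;
  }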
d) Bandwidth guarantee
The PXA architecture has 4 levels of DMA priority: high, normal, low.
The high priorities get twice as much bandwidth as the normal ones, which in
turn get twice as much as the low priorities.
A driver should be able to request a priority, especially for real-time
clients such as pxa_camera with high throughput, as sketched below.
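As a sketch of how a client could ask for such a priority, assuming the
pxad_param structure and PXAD_PRIO_* values exposed by the new driver's
platform header (names and header location are assumptions to be checked
against the tree, e.g. include/linux/dma/pxa-dma.h):

  #include <linux/dmaengine.h>
  #include <linux/dma/pxa-dma.h>        /* assumed location of pxad_param */

  /* Hypothetical: a real-time client such as a camera requests the highest
   * priority channel; drcmr value 68 is only an example request line. */
  static struct dma_chan *request_rt_channel(struct device *dev)
  {
      struct pxad_param param = {
          .drcmr = 68,
          .prio = PXAD_PRIO_HIGHEST,
      };
      dma_cap_mask_t mask;

      dma_cap_zero(mask);
      dma_cap_set(DMA_SLAVE, mask);

      return dma_request_slave_channel_compat(mask, pxad_filter_fn,
                                              &param, dev, "rx");
  }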
Design
------
a) Virtual channels
Same concept as in the sa11x0 driver, i.e. a driver is assigned a "virtual
channel" linked to the requestor line, and the physical DMA channel is
assigned on the fly when the transfer is issued.
b) Transfer anatomy for a scatter-gather transfer
+------------+-----+---------------+----------------+-----------------+
| desc-sg[0] | ... | desc-sg[last] | status updater | finisher/linker |
+------------+-----+---------------+----------------+-----------------+
This structure is pointed to by dma->sg_cpu.
The descriptors are used as follows:
- desc-sg[i]: i-th descriptor, transferring data to/from the i-th element
of the video buffer's scatter-gather list
- status updater
Transfers a single u32 to a well-known DMA coherent memory location to
leave a trace that this transfer is done. That "well-known" location is
unique per physical channel, so a read of this value tells which transfer
finished last at that point in time.
- finisher: has ddadr=DDADR_STOP, dcmd=ENDIRQEN
- linker: has ddadr=desc-sg[0] of next transfer, dcmd=0
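To make those roles concrete, here is an illustrative sketch of how the
trailing descriptors could be filled in; the descriptor layout and the
DDADR_STOP / DCMD_ENDIRQEN bit values follow the classic PXA register
documentation and are assumptions here, not quotes from the driver:

  #include <linux/bitops.h>
  #include <linux/types.h>

  #define DDADR_STOP     BIT(0)   /* assumed: "no next descriptor" stop bit */
  #define DCMD_ENDIRQEN  BIT(21)  /* assumed: raise irq at end of descriptor */

  struct pxa_hw_desc {             /* classic PXA descriptor layout */
      u32 ddadr;                   /* next descriptor address or DDADR_STOP */
      u32 dsadr;                   /* source address */
      u32 dtadr;                   /* target address */
      u32 dcmd;                    /* length and command flags */
  };

  static void finish_chain(struct pxa_hw_desc *updater, u32 finisher_dma,
                           struct pxa_hw_desc *finisher,
                           u32 mark_src_dma, u32 completion_slot_dma)
  {
      /* status updater: copy one u32 "mark" to the per-channel well-known
       * coherent slot, so a lockless read tells which tx finished last */
      updater->ddadr = finisher_dma;
      updater->dsadr = mark_src_dma;
      updater->dtadr = completion_slot_dma;
      updater->dcmd  = sizeof(u32);

      /* finisher: stop the chain and raise the irq if requested */
      finisher->ddadr = DDADR_STOP;
      finisher->dcmd  = DCMD_ENDIRQEN;

      /* a "linker" would instead have ddadr pointing at desc-sg[0] of the
       * next transfer and dcmd = 0, which is what enables hot-chaining */
  }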
c) Transfers hot-chaining
Suppose the running chain is:
Buffer 1 Buffer 2
+---------+----+---+ +----+----+----+---+
| d0 | .. | dN | l | | d0 | .. | dN | f |
+---------+----+-|-+ ^----+----+----+---+
| |
+----+
After a call to dmaengine_submit(b3), the chain will look like:
Buffer 1 Buffer 2 Buffer 3
+---------+----+---+ +----+----+----+---+ +----+----+----+---+
| d0 | .. | dN | l | | d0 | .. | dN | l | | d0 | .. | dN | f |
+---------+----+-|-+ ^----+----+----+-|-+ ^----+----+----+---+
| | | |
+----+ +----+
new_link
If while new_link was created the DMA channel stopped, it is _not_
restarted. Hot-chaining doesn't break the assumption that
dma_async_issue_pending() is to be used to ensure the transfer is actually started.
One exception to this rule:
- if Buffer1 and Buffer2 had all their addresses 8 bytes aligned
- and if Buffer3 has at least one address not 4 bytes aligned
- then hot-chaining cannot happen, as the channel must be stopped, the
"align bit" must be set, and the channel restarted. As a consequence, in
this specific case, if the DMA is already running in aligned mode, such a
transfer's tx_submit() will only queue it on the submitted queue.
d) Transfers completion updater
Each time a transfer is completed on a channel, an interrupt might be
generated or not, up to the client's request. But in each case, the last
descriptor of a transfer, the "status updater", will write the latest
transfer being completed into the physical channel's completion mark.
This speeds up residue calculation for large transfers such as video
buffers, which hold around 6k descriptors or more. It also allows finding
out, without taking any lock, which transfer completed last in a running
DMA chain.
e) Transfers completion, irq and tasklet
When a transfer flagged as "DMA_PREP_INTERRUPT" is finished, the dma irq
is raised. Upon this interrupt, a tasklet is scheduled for the physical
channel.
The tasklet is responsible for:
- reading the physical channel's last updater mark
- calling all the transfer callbacks of finished transfers, based on
that mark and each transfer's flags.
If a transfer is completed while this handling is done, a dma irq will
be raised, and the tasklet will be scheduled once again, having a new
updater mark.
f) Residue
Residue granularity will be descriptor based. The issued but not completed
transfers will be scanned for all of their descriptors against the
currently running descriptor.
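A hedged sketch of what descriptor-granularity residue amounts to, reusing
the illustrative pxa_hw_desc layout from the sketch above; the 0x1fff mask
mirrors the classic DCMD length field and is an assumption:

  /* Sum the lengths of the descriptors at or after the one the channel is
   * currently processing; transfers completed earlier contribute nothing. */
  static u32 residue_of(struct pxa_hw_desc *descs, dma_addr_t first_desc_dma,
                        unsigned int nb_desc, dma_addr_t current_desc_dma)
  {
      unsigned int cur = (current_desc_dma - first_desc_dma) / sizeof(*descs);
      u32 residue = 0;
      unsigned int i;

      for (i = cur; i < nb_desc; i++)
          residue += descs[i].dcmd & 0x1fff;    /* DCMD length field */

      return residue;
  }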
g) Most complicated case of driver's tx queues
The trickiest situation is when:
- there are transfers that are not yet "acked" (tx0)
- a driver submitted an aligned tx1, not chained
- a driver submitted an aligned tx2 => tx2 is cold chained to tx1
- a driver issued tx1+tx2 => channel is running in aligned mode
- a driver submitted an aligned tx3 => tx3 is hot-chained
- a driver submitted an unaligned tx4 => tx4 is put in submitted queue,
not chained
- a driver issued tx4 => tx4 is put in issued queue, not chained
- a driver submitted an aligned tx5 => tx5 is put in submitted queue, not
chained
- a driver submitted an aligned tx6 => tx6 is put in submitted queue,
cold chained to tx5
This translates into (after tx4 is issued):
- issued queue
+-----+ +-----+ +-----+ +-----+
| tx1 | | tx2 | | tx3 | | tx4 |
+---|-+ ^---|-+ ^-----+ +-----+
| | | |
+---+ +---+
- submitted queue
+-----+ +-----+
| tx5 | | tx6 |
+---|-+ ^-----+
| |
+---+
- completed queue: empty
- allocated queue: tx0
It should be noted that after tx3 is completed, the channel is stopped, and
restarted in "unaligned mode" to handle tx4.
Author: Robert Jarzmik <robert.jarzmik@free.fr>
...@@ -8174,6 +8174,7 @@ T: git git://github.com/hzhuang1/linux.git ...@@ -8174,6 +8174,7 @@ T: git git://github.com/hzhuang1/linux.git
T: git git://github.com/rjarzmik/linux.git T: git git://github.com/rjarzmik/linux.git
S: Maintained S: Maintained
F: arch/arm/mach-pxa/ F: arch/arm/mach-pxa/
F: drivers/dma/pxa*
F: drivers/pcmcia/pxa2xx* F: drivers/pcmcia/pxa2xx*
F: drivers/spi/spi-pxa2xx* F: drivers/spi/spi-pxa2xx*
F: drivers/usb/gadget/udc/pxa2* F: drivers/usb/gadget/udc/pxa2*
......
...@@ -162,6 +162,17 @@ config MX3_IPU_IRQS ...@@ -162,6 +162,17 @@ config MX3_IPU_IRQS
To avoid bloating the irq_desc[] array we allocate a sufficient To avoid bloating the irq_desc[] array we allocate a sufficient
number of IRQ slots and map them dynamically to specific sources. number of IRQ slots and map them dynamically to specific sources.
config PXA_DMA
bool "PXA DMA support"
depends on (ARCH_MMP || ARCH_PXA)
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Support the DMA engine for PXA. It is also compatible with MMP PDMA
platform. The internal DMA IP of all PXA variants is supported, with
16 to 32 channels for peripheral to memory or memory to memory
transfers.
config TXX9_DMAC config TXX9_DMAC
tristate "Toshiba TXx9 SoC DMA support" tristate "Toshiba TXx9 SoC DMA support"
depends on MACH_TX49XX || MACH_TX39XX depends on MACH_TX49XX || MACH_TX39XX
...@@ -245,6 +256,9 @@ config TI_EDMA ...@@ -245,6 +256,9 @@ config TI_EDMA
Enable support for the TI EDMA controller. This DMA Enable support for the TI EDMA controller. This DMA
engine is found on TI DaVinci and AM33xx parts. engine is found on TI DaVinci and AM33xx parts.
config TI_DMA_CROSSBAR
bool
config ARCH_HAS_ASYNC_TX_FIND_CHANNEL config ARCH_HAS_ASYNC_TX_FIND_CHANNEL
bool bool
...@@ -330,6 +344,7 @@ config DMA_OMAP ...@@ -330,6 +344,7 @@ config DMA_OMAP
depends on ARCH_OMAP depends on ARCH_OMAP
select DMA_ENGINE select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS select DMA_VIRTUAL_CHANNELS
select TI_DMA_CROSSBAR if SOC_DRA7XX
config DMA_BCM2835 config DMA_BCM2835
tristate "BCM2835 DMA engine support" tristate "BCM2835 DMA engine support"
......
...@@ -25,6 +25,7 @@ obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/ ...@@ -25,6 +25,7 @@ obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_IMX_DMA) += imx-dma.o obj-$(CONFIG_IMX_DMA) += imx-dma.o
obj-$(CONFIG_MXS_DMA) += mxs-dma.o obj-$(CONFIG_MXS_DMA) += mxs-dma.o
obj-$(CONFIG_PXA_DMA) += pxa_dma.o
obj-$(CONFIG_TIMB_DMA) += timb_dma.o obj-$(CONFIG_TIMB_DMA) += timb_dma.o
obj-$(CONFIG_SIRF_DMA) += sirf-dma.o obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
obj-$(CONFIG_TI_EDMA) += edma.o obj-$(CONFIG_TI_EDMA) += edma.o
...@@ -38,6 +39,7 @@ obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o ...@@ -38,6 +39,7 @@ obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
obj-$(CONFIG_DMA_OMAP) += omap-dma.o obj-$(CONFIG_DMA_OMAP) += omap-dma.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
......
...@@ -474,7 +474,7 @@ static void pl08x_terminate_phy_chan(struct pl08x_driver_data *pl08x, ...@@ -474,7 +474,7 @@ static void pl08x_terminate_phy_chan(struct pl08x_driver_data *pl08x,
u32 val = readl(ch->reg_config); u32 val = readl(ch->reg_config);
val &= ~(PL080_CONFIG_ENABLE | PL080_CONFIG_ERR_IRQ_MASK | val &= ~(PL080_CONFIG_ENABLE | PL080_CONFIG_ERR_IRQ_MASK |
PL080_CONFIG_TC_IRQ_MASK); PL080_CONFIG_TC_IRQ_MASK);
writel(val, ch->reg_config); writel(val, ch->reg_config);
......
...@@ -247,6 +247,10 @@ static void atc_dostart(struct at_dma_chan *atchan, struct at_desc *first) ...@@ -247,6 +247,10 @@ static void atc_dostart(struct at_dma_chan *atchan, struct at_desc *first)
channel_writel(atchan, CTRLA, 0); channel_writel(atchan, CTRLA, 0);
channel_writel(atchan, CTRLB, 0); channel_writel(atchan, CTRLB, 0);
channel_writel(atchan, DSCR, first->txd.phys); channel_writel(atchan, DSCR, first->txd.phys);
channel_writel(atchan, SPIP, ATC_SPIP_HOLE(first->src_hole) |
ATC_SPIP_BOUNDARY(first->boundary));
channel_writel(atchan, DPIP, ATC_DPIP_HOLE(first->dst_hole) |
ATC_DPIP_BOUNDARY(first->boundary));
dma_writel(atdma, CHER, atchan->mask); dma_writel(atdma, CHER, atchan->mask);
vdbg_dump_regs(atchan); vdbg_dump_regs(atchan);
...@@ -634,6 +638,104 @@ static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx) ...@@ -634,6 +638,104 @@ static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx)
return cookie; return cookie;
} }
/**
* atc_prep_dma_interleaved - prepare memory to memory interleaved operation
* @chan: the channel to prepare operation on
* @xt: Interleaved transfer template
* @flags: tx descriptor status flags
*/
static struct dma_async_tx_descriptor *
atc_prep_dma_interleaved(struct dma_chan *chan,
struct dma_interleaved_template *xt,
unsigned long flags)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct data_chunk *first = xt->sgl;
struct at_desc *desc = NULL;
size_t xfer_count;
unsigned int dwidth;
u32 ctrla;
u32 ctrlb;
size_t len = 0;
int i;
dev_info(chan2dev(chan),
"%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n",
__func__, xt->src_start, xt->dst_start, xt->numf,
xt->frame_size, flags);
if (unlikely(!xt || xt->numf != 1 || !xt->frame_size))
return NULL;
/*
* The controller can only "skip" X bytes every Y bytes, so we
* need to make sure we are given a template that fit that
* description, ie a template with chunks that always have the
* same size, with the same ICGs.
*/
for (i = 0; i < xt->frame_size; i++) {
struct data_chunk *chunk = xt->sgl + i;
if ((chunk->size != xt->sgl->size) ||
(dmaengine_get_dst_icg(xt, chunk) != dmaengine_get_dst_icg(xt, first)) ||
(dmaengine_get_src_icg(xt, chunk) != dmaengine_get_src_icg(xt, first))) {
dev_err(chan2dev(chan),
"%s: the controller can transfer only identical chunks\n",
__func__);
return NULL;
}
len += chunk->size;
}
dwidth = atc_get_xfer_width(xt->src_start,
xt->dst_start, len);
xfer_count = len >> dwidth;
if (xfer_count > ATC_BTSIZE_MAX) {
dev_err(chan2dev(chan), "%s: buffer is too big\n", __func__);
return NULL;
}
ctrla = ATC_SRC_WIDTH(dwidth) |
ATC_DST_WIDTH(dwidth);
ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN
| ATC_SRC_ADDR_MODE_INCR
| ATC_DST_ADDR_MODE_INCR
| ATC_SRC_PIP
| ATC_DST_PIP
| ATC_FC_MEM2MEM;
/* create the transfer */
desc = atc_desc_get(atchan);
if (!desc) {
dev_err(chan2dev(chan),
"%s: couldn't allocate our descriptor\n", __func__);
return NULL;
}
desc->lli.saddr = xt->src_start;
desc->lli.daddr = xt->dst_start;
desc->lli.ctrla = ctrla | xfer_count;
desc->lli.ctrlb = ctrlb;
desc->boundary = first->size >> dwidth;
desc->dst_hole = (dmaengine_get_dst_icg(xt, first) >> dwidth) + 1;
desc->src_hole = (dmaengine_get_src_icg(xt, first) >> dwidth) + 1;
desc->txd.cookie = -EBUSY;
desc->total_len = desc->len = len;
desc->tx_width = dwidth;
/* set end-of-link to the last link descriptor of list*/
set_desc_eol(desc);
desc->txd.flags = flags; /* client is in control of this ack */
return &desc->txd;
}
/** /**
* atc_prep_dma_memcpy - prepare a memcpy operation * atc_prep_dma_memcpy - prepare a memcpy operation
* @chan: the channel to prepare operation on * @chan: the channel to prepare operation on
...@@ -1609,6 +1711,7 @@ static int __init at_dma_probe(struct platform_device *pdev) ...@@ -1609,6 +1711,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
/* setup platform data for each SoC */ /* setup platform data for each SoC */
dma_cap_set(DMA_MEMCPY, at91sam9rl_config.cap_mask); dma_cap_set(DMA_MEMCPY, at91sam9rl_config.cap_mask);
dma_cap_set(DMA_SG, at91sam9rl_config.cap_mask); dma_cap_set(DMA_SG, at91sam9rl_config.cap_mask);
dma_cap_set(DMA_INTERLEAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask); dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask); dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask); dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask);
...@@ -1713,6 +1816,9 @@ static int __init at_dma_probe(struct platform_device *pdev) ...@@ -1713,6 +1816,9 @@ static int __init at_dma_probe(struct platform_device *pdev)
atdma->dma_common.dev = &pdev->dev; atdma->dma_common.dev = &pdev->dev;
/* set prep routines based on capability */ /* set prep routines based on capability */
if (dma_has_cap(DMA_INTERLEAVE, atdma->dma_common.cap_mask))
atdma->dma_common.device_prep_interleaved_dma = atc_prep_dma_interleaved;
if (dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask)) if (dma_has_cap(DMA_MEMCPY, atdma->dma_common.cap_mask))
atdma->dma_common.device_prep_dma_memcpy = atc_prep_dma_memcpy; atdma->dma_common.device_prep_dma_memcpy = atc_prep_dma_memcpy;
......
...@@ -196,6 +196,11 @@ struct at_desc { ...@@ -196,6 +196,11 @@ struct at_desc {
size_t len; size_t len;
u32 tx_width; u32 tx_width;
size_t total_len; size_t total_len;
/* Interleaved data */
size_t boundary;
size_t dst_hole;
size_t src_hole;
}; };
static inline struct at_desc * static inline struct at_desc *
......
...@@ -235,6 +235,10 @@ struct at_xdmac_lld { ...@@ -235,6 +235,10 @@ struct at_xdmac_lld {
dma_addr_t mbr_sa; /* Source Address Member */ dma_addr_t mbr_sa; /* Source Address Member */
dma_addr_t mbr_da; /* Destination Address Member */ dma_addr_t mbr_da; /* Destination Address Member */
u32 mbr_cfg; /* Configuration Register */ u32 mbr_cfg; /* Configuration Register */
u32 mbr_bc; /* Block Control Register */
u32 mbr_ds; /* Data Stride Register */
u32 mbr_sus; /* Source Microblock Stride Register */
u32 mbr_dus; /* Destination Microblock Stride Register */
}; };
...@@ -358,6 +362,8 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan, ...@@ -358,6 +362,8 @@ static void at_xdmac_start_xfer(struct at_xdmac_chan *atchan,
if (at_xdmac_chan_is_cyclic(atchan)) { if (at_xdmac_chan_is_cyclic(atchan)) {
reg = AT_XDMAC_CNDC_NDVIEW_NDV1; reg = AT_XDMAC_CNDC_NDVIEW_NDV1;
at_xdmac_chan_write(atchan, AT_XDMAC_CC, first->lld.mbr_cfg); at_xdmac_chan_write(atchan, AT_XDMAC_CC, first->lld.mbr_cfg);
} else if (first->lld.mbr_ubc & AT_XDMAC_MBR_UBC_NDV3) {
reg = AT_XDMAC_CNDC_NDVIEW_NDV3;
} else { } else {
/* /*
* No need to write AT_XDMAC_CC reg, it will be done when the * No need to write AT_XDMAC_CC reg, it will be done when the
...@@ -465,6 +471,33 @@ static struct at_xdmac_desc *at_xdmac_get_desc(struct at_xdmac_chan *atchan) ...@@ -465,6 +471,33 @@ static struct at_xdmac_desc *at_xdmac_get_desc(struct at_xdmac_chan *atchan)
return desc; return desc;
} }
static void at_xdmac_queue_desc(struct dma_chan *chan,
struct at_xdmac_desc *prev,
struct at_xdmac_desc *desc)
{
if (!prev || !desc)
return;
prev->lld.mbr_nda = desc->tx_dma_desc.phys;
prev->lld.mbr_ubc |= AT_XDMAC_MBR_UBC_NDE;
dev_dbg(chan2dev(chan), "%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
}
static inline void at_xdmac_increment_block_count(struct dma_chan *chan,
struct at_xdmac_desc *desc)
{
if (!desc)
return;
desc->lld.mbr_bc++;
dev_dbg(chan2dev(chan),
"%s: incrementing the block count of the desc 0x%p\n",
__func__, desc);
}
static struct dma_chan *at_xdmac_xlate(struct of_phandle_args *dma_spec, static struct dma_chan *at_xdmac_xlate(struct of_phandle_args *dma_spec,
struct of_dma *of_dma) struct of_dma *of_dma)
{ {
...@@ -656,19 +689,14 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, ...@@ -656,19 +689,14 @@ at_xdmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV2 /* next descriptor view */ desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV2 /* next descriptor view */
| AT_XDMAC_MBR_UBC_NDEN /* next descriptor dst parameter update */ | AT_XDMAC_MBR_UBC_NDEN /* next descriptor dst parameter update */
| AT_XDMAC_MBR_UBC_NSEN /* next descriptor src parameter update */ | AT_XDMAC_MBR_UBC_NSEN /* next descriptor src parameter update */
| (i == sg_len - 1 ? 0 : AT_XDMAC_MBR_UBC_NDE) /* descriptor fetch */
| (len >> fixed_dwidth); /* microblock length */ | (len >> fixed_dwidth); /* microblock length */
dev_dbg(chan2dev(chan), dev_dbg(chan2dev(chan),
"%s: lld: mbr_sa=%pad, mbr_da=%pad, mbr_ubc=0x%08x\n", "%s: lld: mbr_sa=%pad, mbr_da=%pad, mbr_ubc=0x%08x\n",
__func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc); __func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc);
/* Chain lld. */ /* Chain lld. */
if (prev) { if (prev)
prev->lld.mbr_nda = desc->tx_dma_desc.phys; at_xdmac_queue_desc(chan, prev, desc);
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
}
prev = desc; prev = desc;
if (!first) if (!first)
...@@ -748,7 +776,6 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, ...@@ -748,7 +776,6 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1 desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1
| AT_XDMAC_MBR_UBC_NDEN | AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN | AT_XDMAC_MBR_UBC_NSEN
| AT_XDMAC_MBR_UBC_NDE
| period_len >> at_xdmac_get_dwidth(desc->lld.mbr_cfg); | period_len >> at_xdmac_get_dwidth(desc->lld.mbr_cfg);
dev_dbg(chan2dev(chan), dev_dbg(chan2dev(chan),
...@@ -756,12 +783,8 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, ...@@ -756,12 +783,8 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
__func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc); __func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc);
/* Chain lld. */ /* Chain lld. */
if (prev) { if (prev)
prev->lld.mbr_nda = desc->tx_dma_desc.phys; at_xdmac_queue_desc(chan, prev, desc);
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=%pad\n",
__func__, prev, &prev->lld.mbr_nda);
}
prev = desc; prev = desc;
if (!first) if (!first)
...@@ -783,6 +806,215 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, ...@@ -783,6 +806,215 @@ at_xdmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
return &first->tx_dma_desc; return &first->tx_dma_desc;
} }
static inline u32 at_xdmac_align_width(struct dma_chan *chan, dma_addr_t addr)
{
u32 width;
/*
* Check address alignment to select the greater data width we
* can use.
*
* Some XDMAC implementations don't provide dword transfer, in
* this case selecting dword has the same behavior as
* selecting word transfers.
*/
if (!(addr & 7)) {
width = AT_XDMAC_CC_DWIDTH_DWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: double word\n", __func__);
} else if (!(addr & 3)) {
width = AT_XDMAC_CC_DWIDTH_WORD;
dev_dbg(chan2dev(chan), "%s: dwidth: word\n", __func__);
} else if (!(addr & 1)) {
width = AT_XDMAC_CC_DWIDTH_HALFWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: half word\n", __func__);
} else {
width = AT_XDMAC_CC_DWIDTH_BYTE;
dev_dbg(chan2dev(chan), "%s: dwidth: byte\n", __func__);
}
return width;
}
static struct at_xdmac_desc *
at_xdmac_interleaved_queue_desc(struct dma_chan *chan,
struct at_xdmac_chan *atchan,
struct at_xdmac_desc *prev,
dma_addr_t src, dma_addr_t dst,
struct dma_interleaved_template *xt,
struct data_chunk *chunk)
{
struct at_xdmac_desc *desc;
u32 dwidth;
unsigned long flags;
size_t ublen;
/*
* WARNING: The channel configuration is set here since there is no
* dmaengine_slave_config call in this case. Moreover we don't know the
* direction, it involves we can't dynamically set the source and dest
* interface so we have to use the same one. Only interface 0 allows EBI
* access. Hopefully we can access DDR through both ports (at least on
* SAMA5D4x), so we can use the same interface for source and dest,
* that solves the fact we don't know the direction.
*/
u32 chan_cc = AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_TYPE_MEM_TRAN;
dwidth = at_xdmac_align_width(chan, src | dst | chunk->size);
if (chunk->size >= (AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth)) {
dev_dbg(chan2dev(chan),
"%s: chunk too big (%d, max size %lu)...\n",
__func__, chunk->size,
AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth);
return NULL;
}
if (prev)
dev_dbg(chan2dev(chan),
"Adding items at the end of desc 0x%p\n", prev);
if (xt->src_inc) {
if (xt->src_sgl)
chan_cc |= AT_XDMAC_CC_SAM_UBS_DS_AM;
else
chan_cc |= AT_XDMAC_CC_SAM_INCREMENTED_AM;
}
if (xt->dst_inc) {
if (xt->dst_sgl)
chan_cc |= AT_XDMAC_CC_DAM_UBS_DS_AM;
else
chan_cc |= AT_XDMAC_CC_DAM_INCREMENTED_AM;
}
spin_lock_irqsave(&atchan->lock, flags);
desc = at_xdmac_get_desc(atchan);
spin_unlock_irqrestore(&atchan->lock, flags);
if (!desc) {
dev_err(chan2dev(chan), "can't get descriptor\n");
return NULL;
}
chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth);
ublen = chunk->size >> dwidth;
desc->lld.mbr_sa = src;
desc->lld.mbr_da = dst;
desc->lld.mbr_sus = dmaengine_get_src_icg(xt, chunk);
desc->lld.mbr_dus = dmaengine_get_dst_icg(xt, chunk);
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV3
| AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN
| ublen;
desc->lld.mbr_cfg = chan_cc;
dev_dbg(chan2dev(chan),
"%s: lld: mbr_sa=0x%08x, mbr_da=0x%08x, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n",
__func__, desc->lld.mbr_sa, desc->lld.mbr_da,
desc->lld.mbr_ubc, desc->lld.mbr_cfg);
/* Chain lld. */
if (prev)
at_xdmac_queue_desc(chan, prev, desc);
return desc;
}
static struct dma_async_tx_descriptor *
at_xdmac_prep_interleaved(struct dma_chan *chan,
struct dma_interleaved_template *xt,
unsigned long flags)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *prev = NULL, *first = NULL;
struct data_chunk *chunk, *prev_chunk = NULL;
dma_addr_t dst_addr, src_addr;
size_t dst_skip, src_skip, len = 0;
size_t prev_dst_icg = 0, prev_src_icg = 0;
int i;
if (!xt || (xt->numf != 1) || (xt->dir != DMA_MEM_TO_MEM))
return NULL;
dev_dbg(chan2dev(chan), "%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n",
__func__, xt->src_start, xt->dst_start, xt->numf,
xt->frame_size, flags);
src_addr = xt->src_start;
dst_addr = xt->dst_start;
for (i = 0; i < xt->frame_size; i++) {
struct at_xdmac_desc *desc;
size_t src_icg, dst_icg;
chunk = xt->sgl + i;
dst_icg = dmaengine_get_dst_icg(xt, chunk);
src_icg = dmaengine_get_src_icg(xt, chunk);
src_skip = chunk->size + src_icg;
dst_skip = chunk->size + dst_icg;
dev_dbg(chan2dev(chan),
"%s: chunk size=%d, src icg=%d, dst icg=%d\n",
__func__, chunk->size, src_icg, dst_icg);
/*
* Handle the case where we just have the same
* transfer to setup, we can just increase the
* block number and reuse the same descriptor.
*/
if (prev_chunk && prev &&
(prev_chunk->size == chunk->size) &&
(prev_src_icg == src_icg) &&
(prev_dst_icg == dst_icg)) {
dev_dbg(chan2dev(chan),
"%s: same configuration that the previous chunk, merging the descriptors...\n",
__func__);
at_xdmac_increment_block_count(chan, prev);
continue;
}
desc = at_xdmac_interleaved_queue_desc(chan, atchan,
prev,
src_addr, dst_addr,
xt, chunk);
if (!desc) {
list_splice_init(&first->descs_list,
&atchan->free_descs_list);
return NULL;
}
if (!first)
first = desc;
dev_dbg(chan2dev(chan), "%s: add desc 0x%p to descs_list 0x%p\n",
__func__, desc, first);
list_add_tail(&desc->desc_node, &first->descs_list);
if (xt->src_sgl)
src_addr += src_skip;
if (xt->dst_sgl)
dst_addr += dst_skip;
len += chunk->size;
prev_chunk = chunk;
prev_dst_icg = dst_icg;
prev_src_icg = src_icg;
prev = desc;
}
first->tx_dma_desc.cookie = -EBUSY;
first->tx_dma_desc.flags = flags;
first->xfer_size = len;
return &first->tx_dma_desc;
}
static struct dma_async_tx_descriptor * static struct dma_async_tx_descriptor *
at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long flags) size_t len, unsigned long flags)
...@@ -814,24 +1046,7 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, ...@@ -814,24 +1046,7 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
if (unlikely(!len)) if (unlikely(!len))
return NULL; return NULL;
/* dwidth = at_xdmac_align_width(chan, src_addr | dst_addr);
* Check address alignment to select the greater data width we can use.
* Some XDMAC implementations don't provide dword transfer, in this
* case selecting dword has the same behavior as selecting word transfers.
*/
if (!((src_addr | dst_addr) & 7)) {
dwidth = AT_XDMAC_CC_DWIDTH_DWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: double word\n", __func__);
} else if (!((src_addr | dst_addr) & 3)) {
dwidth = AT_XDMAC_CC_DWIDTH_WORD;
dev_dbg(chan2dev(chan), "%s: dwidth: word\n", __func__);
} else if (!((src_addr | dst_addr) & 1)) {
dwidth = AT_XDMAC_CC_DWIDTH_HALFWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: half word\n", __func__);
} else {
dwidth = AT_XDMAC_CC_DWIDTH_BYTE;
dev_dbg(chan2dev(chan), "%s: dwidth: byte\n", __func__);
}
/* Prepare descriptors. */ /* Prepare descriptors. */
while (remaining_size) { while (remaining_size) {
...@@ -861,19 +1076,8 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, ...@@ -861,19 +1076,8 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
dev_dbg(chan2dev(chan), "%s: xfer_size=%zu\n", __func__, xfer_size); dev_dbg(chan2dev(chan), "%s: xfer_size=%zu\n", __func__, xfer_size);
/* Check remaining length and change data width if needed. */ /* Check remaining length and change data width if needed. */
if (!((src_addr | dst_addr | xfer_size) & 7)) { dwidth = at_xdmac_align_width(chan,
dwidth = AT_XDMAC_CC_DWIDTH_DWORD; src_addr | dst_addr | xfer_size);
dev_dbg(chan2dev(chan), "%s: dwidth: double word\n", __func__);
} else if (!((src_addr | dst_addr | xfer_size) & 3)) {
dwidth = AT_XDMAC_CC_DWIDTH_WORD;
dev_dbg(chan2dev(chan), "%s: dwidth: word\n", __func__);
} else if (!((src_addr | dst_addr | xfer_size) & 1)) {
dwidth = AT_XDMAC_CC_DWIDTH_HALFWORD;
dev_dbg(chan2dev(chan), "%s: dwidth: half word\n", __func__);
} else if ((src_addr | dst_addr | xfer_size) & 1) {
dwidth = AT_XDMAC_CC_DWIDTH_BYTE;
dev_dbg(chan2dev(chan), "%s: dwidth: byte\n", __func__);
}
chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth); chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth);
ublen = xfer_size >> dwidth; ublen = xfer_size >> dwidth;
...@@ -884,7 +1088,6 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, ...@@ -884,7 +1088,6 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV2 desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV2
| AT_XDMAC_MBR_UBC_NDEN | AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN | AT_XDMAC_MBR_UBC_NSEN
| (remaining_size ? AT_XDMAC_MBR_UBC_NDE : 0)
| ublen; | ublen;
desc->lld.mbr_cfg = chan_cc; desc->lld.mbr_cfg = chan_cc;
...@@ -893,12 +1096,8 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, ...@@ -893,12 +1096,8 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
__func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc, desc->lld.mbr_cfg); __func__, &desc->lld.mbr_sa, &desc->lld.mbr_da, desc->lld.mbr_ubc, desc->lld.mbr_cfg);
/* Chain lld. */ /* Chain lld. */
if (prev) { if (prev)
prev->lld.mbr_nda = desc->tx_dma_desc.phys; at_xdmac_queue_desc(chan, prev, desc);
dev_dbg(chan2dev(chan),
"%s: chain lld: prev=0x%p, mbr_nda=0x%08x\n",
__func__, prev, prev->lld.mbr_nda);
}
prev = desc; prev = desc;
if (!first) if (!first)
...@@ -915,6 +1114,93 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, ...@@ -915,6 +1114,93 @@ at_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
return &first->tx_dma_desc; return &first->tx_dma_desc;
} }
static struct at_xdmac_desc *at_xdmac_memset_create_desc(struct dma_chan *chan,
struct at_xdmac_chan *atchan,
dma_addr_t dst_addr,
size_t len,
int value)
{
struct at_xdmac_desc *desc;
unsigned long flags;
size_t ublen;
u32 dwidth;
/*
* WARNING: The channel configuration is set here since there is no
* dmaengine_slave_config call in this case. Moreover we don't know the
* direction, it involves we can't dynamically set the source and dest
* interface so we have to use the same one. Only interface 0 allows EBI
* access. Hopefully we can access DDR through both ports (at least on
* SAMA5D4x), so we can use the same interface for source and dest,
* that solves the fact we don't know the direction.
*/
u32 chan_cc = AT_XDMAC_CC_DAM_INCREMENTED_AM
| AT_XDMAC_CC_SAM_INCREMENTED_AM
| AT_XDMAC_CC_DIF(0)
| AT_XDMAC_CC_SIF(0)
| AT_XDMAC_CC_MBSIZE_SIXTEEN
| AT_XDMAC_CC_MEMSET_HW_MODE
| AT_XDMAC_CC_TYPE_MEM_TRAN;
dwidth = at_xdmac_align_width(chan, dst_addr);
if (len >= (AT_XDMAC_MBR_UBC_UBLEN_MAX << dwidth)) {
dev_err(chan2dev(chan),
"%s: Transfer too large, aborting...\n",
__func__);
return NULL;
}
spin_lock_irqsave(&atchan->lock, flags);
desc = at_xdmac_get_desc(atchan);
spin_unlock_irqrestore(&atchan->lock, flags);
if (!desc) {
dev_err(chan2dev(chan), "can't get descriptor\n");
return NULL;
}
chan_cc |= AT_XDMAC_CC_DWIDTH(dwidth);
ublen = len >> dwidth;
desc->lld.mbr_da = dst_addr;
desc->lld.mbr_ds = value;
desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV3
| AT_XDMAC_MBR_UBC_NDEN
| AT_XDMAC_MBR_UBC_NSEN
| ublen;
desc->lld.mbr_cfg = chan_cc;
dev_dbg(chan2dev(chan),
"%s: lld: mbr_da=0x%08x, mbr_ds=0x%08x, mbr_ubc=0x%08x, mbr_cfg=0x%08x\n",
__func__, desc->lld.mbr_da, desc->lld.mbr_ds, desc->lld.mbr_ubc,
desc->lld.mbr_cfg);
return desc;
}
struct dma_async_tx_descriptor *
at_xdmac_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
size_t len, unsigned long flags)
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *desc;
dev_dbg(chan2dev(chan), "%s: dest=0x%08x, len=%d, pattern=0x%x, flags=0x%lx\n",
__func__, dest, len, value, flags);
if (unlikely(!len))
return NULL;
desc = at_xdmac_memset_create_desc(chan, atchan, dest, len, value);
list_add_tail(&desc->desc_node, &desc->descs_list);
desc->tx_dma_desc.cookie = -EBUSY;
desc->tx_dma_desc.flags = flags;
desc->xfer_size = len;
return &desc->tx_dma_desc;
}
static enum dma_status static enum dma_status
at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie, at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
struct dma_tx_state *txstate) struct dma_tx_state *txstate)
...@@ -1445,7 +1731,9 @@ static int at_xdmac_probe(struct platform_device *pdev) ...@@ -1445,7 +1731,9 @@ static int at_xdmac_probe(struct platform_device *pdev)
} }
dma_cap_set(DMA_CYCLIC, atxdmac->dma.cap_mask); dma_cap_set(DMA_CYCLIC, atxdmac->dma.cap_mask);
dma_cap_set(DMA_INTERLEAVE, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMCPY, atxdmac->dma.cap_mask); dma_cap_set(DMA_MEMCPY, atxdmac->dma.cap_mask);
dma_cap_set(DMA_MEMSET, atxdmac->dma.cap_mask);
dma_cap_set(DMA_SLAVE, atxdmac->dma.cap_mask); dma_cap_set(DMA_SLAVE, atxdmac->dma.cap_mask);
/* /*
* Without DMA_PRIVATE the driver is not able to allocate more than * Without DMA_PRIVATE the driver is not able to allocate more than
...@@ -1458,7 +1746,9 @@ static int at_xdmac_probe(struct platform_device *pdev) ...@@ -1458,7 +1746,9 @@ static int at_xdmac_probe(struct platform_device *pdev)
atxdmac->dma.device_tx_status = at_xdmac_tx_status; atxdmac->dma.device_tx_status = at_xdmac_tx_status;
atxdmac->dma.device_issue_pending = at_xdmac_issue_pending; atxdmac->dma.device_issue_pending = at_xdmac_issue_pending;
atxdmac->dma.device_prep_dma_cyclic = at_xdmac_prep_dma_cyclic; atxdmac->dma.device_prep_dma_cyclic = at_xdmac_prep_dma_cyclic;
atxdmac->dma.device_prep_interleaved_dma = at_xdmac_prep_interleaved;
atxdmac->dma.device_prep_dma_memcpy = at_xdmac_prep_dma_memcpy; atxdmac->dma.device_prep_dma_memcpy = at_xdmac_prep_dma_memcpy;
atxdmac->dma.device_prep_dma_memset = at_xdmac_prep_dma_memset;
atxdmac->dma.device_prep_slave_sg = at_xdmac_prep_slave_sg; atxdmac->dma.device_prep_slave_sg = at_xdmac_prep_slave_sg;
atxdmac->dma.device_config = at_xdmac_device_config; atxdmac->dma.device_config = at_xdmac_device_config;
atxdmac->dma.device_pause = at_xdmac_device_pause; atxdmac->dma.device_pause = at_xdmac_device_pause;
......
...@@ -267,6 +267,13 @@ static void dma_chan_put(struct dma_chan *chan) ...@@ -267,6 +267,13 @@ static void dma_chan_put(struct dma_chan *chan)
/* This channel is not in use anymore, free it */ /* This channel is not in use anymore, free it */
if (!chan->client_count && chan->device->device_free_chan_resources) if (!chan->client_count && chan->device->device_free_chan_resources)
chan->device->device_free_chan_resources(chan); chan->device->device_free_chan_resources(chan);
/* If the channel is used via a DMA request router, free the mapping */
if (chan->router && chan->router->route_free) {
chan->router->route_free(chan->router->dev, chan->route_data);
chan->router = NULL;
chan->route_data = NULL;
}
} }
enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie) enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
...@@ -536,7 +543,7 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask, ...@@ -536,7 +543,7 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
} }
/** /**
* dma_request_slave_channel - try to get specific channel exclusively * dma_get_slave_channel - try to get specific channel exclusively
* @chan: target channel * @chan: target channel
*/ */
struct dma_chan *dma_get_slave_channel(struct dma_chan *chan) struct dma_chan *dma_get_slave_channel(struct dma_chan *chan)
...@@ -648,7 +655,7 @@ struct dma_chan *__dma_request_channel(const dma_cap_mask_t *mask, ...@@ -648,7 +655,7 @@ struct dma_chan *__dma_request_channel(const dma_cap_mask_t *mask,
EXPORT_SYMBOL_GPL(__dma_request_channel); EXPORT_SYMBOL_GPL(__dma_request_channel);
/** /**
* dma_request_slave_channel - try to allocate an exclusive slave channel * dma_request_slave_channel_reason - try to allocate an exclusive slave channel
* @dev: pointer to client device structure * @dev: pointer to client device structure
* @name: slave channel name * @name: slave channel name
* *
...@@ -836,6 +843,8 @@ int dma_async_device_register(struct dma_device *device) ...@@ -836,6 +843,8 @@ int dma_async_device_register(struct dma_device *device)
!device->device_prep_dma_pq); !device->device_prep_dma_pq);
BUG_ON(dma_has_cap(DMA_PQ_VAL, device->cap_mask) && BUG_ON(dma_has_cap(DMA_PQ_VAL, device->cap_mask) &&
!device->device_prep_dma_pq_val); !device->device_prep_dma_pq_val);
BUG_ON(dma_has_cap(DMA_MEMSET, device->cap_mask) &&
!device->device_prep_dma_memset);
BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) && BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) &&
!device->device_prep_dma_interrupt); !device->device_prep_dma_interrupt);
BUG_ON(dma_has_cap(DMA_SG, device->cap_mask) && BUG_ON(dma_has_cap(DMA_SG, device->cap_mask) &&
......
...@@ -1364,7 +1364,7 @@ static int __init ep93xx_dma_probe(struct platform_device *pdev) ...@@ -1364,7 +1364,7 @@ static int __init ep93xx_dma_probe(struct platform_device *pdev)
return ret; return ret;
} }
static struct platform_device_id ep93xx_dma_driver_ids[] = { static const struct platform_device_id ep93xx_dma_driver_ids[] = {
{ "ep93xx-dma-m2p", 0 }, { "ep93xx-dma-m2p", 0 },
{ "ep93xx-dma-m2m", 1 }, { "ep93xx-dma-m2m", 1 },
{ }, { },
......
...@@ -881,10 +881,6 @@ static int fsl_edma_probe(struct platform_device *pdev) ...@@ -881,10 +881,6 @@ static int fsl_edma_probe(struct platform_device *pdev)
} }
ret = fsl_edma_irq_init(pdev, fsl_edma);
if (ret)
return ret;
fsl_edma->big_endian = of_property_read_bool(np, "big-endian"); fsl_edma->big_endian = of_property_read_bool(np, "big-endian");
INIT_LIST_HEAD(&fsl_edma->dma_dev.channels); INIT_LIST_HEAD(&fsl_edma->dma_dev.channels);
...@@ -900,6 +896,11 @@ static int fsl_edma_probe(struct platform_device *pdev) ...@@ -900,6 +896,11 @@ static int fsl_edma_probe(struct platform_device *pdev)
fsl_edma_chan_mux(fsl_chan, 0, false); fsl_edma_chan_mux(fsl_chan, 0, false);
} }
edma_writel(fsl_edma, ~0, fsl_edma->membase + EDMA_INTR);
ret = fsl_edma_irq_init(pdev, fsl_edma);
if (ret)
return ret;
dma_cap_set(DMA_PRIVATE, fsl_edma->dma_dev.cap_mask); dma_cap_set(DMA_PRIVATE, fsl_edma->dma_dev.cap_mask);
dma_cap_set(DMA_SLAVE, fsl_edma->dma_dev.cap_mask); dma_cap_set(DMA_SLAVE, fsl_edma->dma_dev.cap_mask);
dma_cap_set(DMA_CYCLIC, fsl_edma->dma_dev.cap_mask); dma_cap_set(DMA_CYCLIC, fsl_edma->dma_dev.cap_mask);
......
...@@ -193,7 +193,7 @@ struct imxdma_filter_data { ...@@ -193,7 +193,7 @@ struct imxdma_filter_data {
int request; int request;
}; };
static struct platform_device_id imx_dma_devtype[] = { static const struct platform_device_id imx_dma_devtype[] = {
{ {
.name = "imx1-dma", .name = "imx1-dma",
.driver_data = IMX1_DMA, .driver_data = IMX1_DMA,
......
...@@ -420,7 +420,7 @@ static struct sdma_driver_data sdma_imx6q = { ...@@ -420,7 +420,7 @@ static struct sdma_driver_data sdma_imx6q = {
.script_addrs = &sdma_script_imx6q, .script_addrs = &sdma_script_imx6q,
}; };
static struct platform_device_id sdma_devtypes[] = { static const struct platform_device_id sdma_devtypes[] = {
{ {
.name = "imx25-sdma", .name = "imx25-sdma",
.driver_data = (unsigned long)&sdma_imx25, .driver_data = (unsigned long)&sdma_imx25,
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/of_device.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/memory.h> #include <linux/memory.h>
#include <linux/clk.h> #include <linux/clk.h>
...@@ -30,6 +31,11 @@ ...@@ -30,6 +31,11 @@
#include "dmaengine.h" #include "dmaengine.h"
#include "mv_xor.h" #include "mv_xor.h"
enum mv_xor_mode {
XOR_MODE_IN_REG,
XOR_MODE_IN_DESC,
};
static void mv_xor_issue_pending(struct dma_chan *chan); static void mv_xor_issue_pending(struct dma_chan *chan);
#define to_mv_xor_chan(chan) \ #define to_mv_xor_chan(chan) \
...@@ -56,18 +62,30 @@ static void mv_desc_init(struct mv_xor_desc_slot *desc, ...@@ -56,18 +62,30 @@ static void mv_desc_init(struct mv_xor_desc_slot *desc,
hw_desc->byte_count = byte_count; hw_desc->byte_count = byte_count;
} }
static void mv_desc_set_next_desc(struct mv_xor_desc_slot *desc, static void mv_desc_set_mode(struct mv_xor_desc_slot *desc)
u32 next_desc_addr)
{ {
struct mv_xor_desc *hw_desc = desc->hw_desc; struct mv_xor_desc *hw_desc = desc->hw_desc;
BUG_ON(hw_desc->phy_next_desc);
hw_desc->phy_next_desc = next_desc_addr; switch (desc->type) {
case DMA_XOR:
case DMA_INTERRUPT:
hw_desc->desc_command |= XOR_DESC_OPERATION_XOR;
break;
case DMA_MEMCPY:
hw_desc->desc_command |= XOR_DESC_OPERATION_MEMCPY;
break;
default:
BUG();
return;
}
} }
static void mv_desc_clear_next_desc(struct mv_xor_desc_slot *desc) static void mv_desc_set_next_desc(struct mv_xor_desc_slot *desc,
u32 next_desc_addr)
{ {
struct mv_xor_desc *hw_desc = desc->hw_desc; struct mv_xor_desc *hw_desc = desc->hw_desc;
hw_desc->phy_next_desc = 0; BUG_ON(hw_desc->phy_next_desc);
hw_desc->phy_next_desc = next_desc_addr;
} }
static void mv_desc_set_src_addr(struct mv_xor_desc_slot *desc, static void mv_desc_set_src_addr(struct mv_xor_desc_slot *desc,
...@@ -104,7 +122,7 @@ static u32 mv_chan_get_intr_cause(struct mv_xor_chan *chan) ...@@ -104,7 +122,7 @@ static u32 mv_chan_get_intr_cause(struct mv_xor_chan *chan)
return intr_cause; return intr_cause;
} }
static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan) static void mv_chan_clear_eoc_cause(struct mv_xor_chan *chan)
{ {
u32 val; u32 val;
...@@ -114,14 +132,14 @@ static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan) ...@@ -114,14 +132,14 @@ static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan)
writel_relaxed(val, XOR_INTR_CAUSE(chan)); writel_relaxed(val, XOR_INTR_CAUSE(chan));
} }
static void mv_xor_device_clear_err_status(struct mv_xor_chan *chan) static void mv_chan_clear_err_status(struct mv_xor_chan *chan)
{ {
u32 val = 0xFFFF0000 >> (chan->idx * 16); u32 val = 0xFFFF0000 >> (chan->idx * 16);
writel_relaxed(val, XOR_INTR_CAUSE(chan)); writel_relaxed(val, XOR_INTR_CAUSE(chan));
} }
static void mv_set_mode(struct mv_xor_chan *chan, static void mv_chan_set_mode(struct mv_xor_chan *chan,
enum dma_transaction_type type) enum dma_transaction_type type)
{ {
u32 op_mode; u32 op_mode;
u32 config = readl_relaxed(XOR_CONFIG(chan)); u32 config = readl_relaxed(XOR_CONFIG(chan));
...@@ -144,6 +162,25 @@ static void mv_set_mode(struct mv_xor_chan *chan, ...@@ -144,6 +162,25 @@ static void mv_set_mode(struct mv_xor_chan *chan,
config &= ~0x7; config &= ~0x7;
config |= op_mode; config |= op_mode;
if (IS_ENABLED(__BIG_ENDIAN))
config |= XOR_DESCRIPTOR_SWAP;
else
config &= ~XOR_DESCRIPTOR_SWAP;
writel_relaxed(config, XOR_CONFIG(chan));
chan->current_type = type;
}
static void mv_chan_set_mode_to_desc(struct mv_xor_chan *chan)
{
u32 op_mode;
u32 config = readl_relaxed(XOR_CONFIG(chan));
op_mode = XOR_OPERATION_MODE_IN_DESC;
config &= ~0x7;
config |= op_mode;
#if defined(__BIG_ENDIAN) #if defined(__BIG_ENDIAN)
config |= XOR_DESCRIPTOR_SWAP; config |= XOR_DESCRIPTOR_SWAP;
#else #else
...@@ -151,7 +188,6 @@ static void mv_set_mode(struct mv_xor_chan *chan, ...@@ -151,7 +188,6 @@ static void mv_set_mode(struct mv_xor_chan *chan,
#endif #endif
writel_relaxed(config, XOR_CONFIG(chan)); writel_relaxed(config, XOR_CONFIG(chan));
chan->current_type = type;
} }
static void mv_chan_activate(struct mv_xor_chan *chan) static void mv_chan_activate(struct mv_xor_chan *chan)
...@@ -171,28 +207,13 @@ static char mv_chan_is_busy(struct mv_xor_chan *chan) ...@@ -171,28 +207,13 @@ static char mv_chan_is_busy(struct mv_xor_chan *chan)
return (state == 1) ? 1 : 0; return (state == 1) ? 1 : 0;
} }
/**
* mv_xor_free_slots - flags descriptor slots for reuse
* @slot: Slot to free
* Caller must hold &mv_chan->lock while calling this function
*/
static void mv_xor_free_slots(struct mv_xor_chan *mv_chan,
struct mv_xor_desc_slot *slot)
{
dev_dbg(mv_chan_to_devp(mv_chan), "%s %d slot %p\n",
__func__, __LINE__, slot);
slot->slot_used = 0;
}
/* /*
* mv_xor_start_new_chain - program the engine to operate on new chain headed by * mv_chan_start_new_chain - program the engine to operate on new
* sw_desc * chain headed by sw_desc
* Caller must hold &mv_chan->lock while calling this function * Caller must hold &mv_chan->lock while calling this function
*/ */
static void mv_xor_start_new_chain(struct mv_xor_chan *mv_chan, static void mv_chan_start_new_chain(struct mv_xor_chan *mv_chan,
struct mv_xor_desc_slot *sw_desc) struct mv_xor_desc_slot *sw_desc)
{ {
dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: sw_desc %p\n", dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: sw_desc %p\n",
__func__, __LINE__, sw_desc); __func__, __LINE__, sw_desc);
...@@ -205,8 +226,9 @@ static void mv_xor_start_new_chain(struct mv_xor_chan *mv_chan, ...@@ -205,8 +226,9 @@ static void mv_xor_start_new_chain(struct mv_xor_chan *mv_chan,
} }
static dma_cookie_t static dma_cookie_t
mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc, mv_desc_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
struct mv_xor_chan *mv_chan, dma_cookie_t cookie) struct mv_xor_chan *mv_chan,
dma_cookie_t cookie)
{ {
BUG_ON(desc->async_tx.cookie < 0); BUG_ON(desc->async_tx.cookie < 0);
...@@ -230,93 +252,110 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc, ...@@ -230,93 +252,110 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
} }
static int static int
mv_xor_clean_completed_slots(struct mv_xor_chan *mv_chan) mv_chan_clean_completed_slots(struct mv_xor_chan *mv_chan)
{ {
struct mv_xor_desc_slot *iter, *_iter; struct mv_xor_desc_slot *iter, *_iter;
dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__); dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__);
list_for_each_entry_safe(iter, _iter, &mv_chan->completed_slots, list_for_each_entry_safe(iter, _iter, &mv_chan->completed_slots,
completed_node) { node) {
if (async_tx_test_ack(&iter->async_tx)) { if (async_tx_test_ack(&iter->async_tx))
list_del(&iter->completed_node); list_move_tail(&iter->node, &mv_chan->free_slots);
mv_xor_free_slots(mv_chan, iter);
}
} }
return 0; return 0;
} }
static int static int
mv_xor_clean_slot(struct mv_xor_desc_slot *desc, mv_desc_clean_slot(struct mv_xor_desc_slot *desc,
struct mv_xor_chan *mv_chan) struct mv_xor_chan *mv_chan)
{ {
dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: desc %p flags %d\n", dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: desc %p flags %d\n",
__func__, __LINE__, desc, desc->async_tx.flags); __func__, __LINE__, desc, desc->async_tx.flags);
list_del(&desc->chain_node);
/* the client is allowed to attach dependent operations /* the client is allowed to attach dependent operations
* until 'ack' is set * until 'ack' is set
*/ */
if (!async_tx_test_ack(&desc->async_tx)) { if (!async_tx_test_ack(&desc->async_tx))
/* move this slot to the completed_slots */ /* move this slot to the completed_slots */
list_add_tail(&desc->completed_node, &mv_chan->completed_slots); list_move_tail(&desc->node, &mv_chan->completed_slots);
return 0; else
} list_move_tail(&desc->node, &mv_chan->free_slots);
mv_xor_free_slots(mv_chan, desc);
return 0; return 0;
} }
/* This function must be called with the mv_xor_chan spinlock held */ /* This function must be called with the mv_xor_chan spinlock held */
static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan) static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
{ {
struct mv_xor_desc_slot *iter, *_iter; struct mv_xor_desc_slot *iter, *_iter;
dma_cookie_t cookie = 0; dma_cookie_t cookie = 0;
int busy = mv_chan_is_busy(mv_chan); int busy = mv_chan_is_busy(mv_chan);
u32 current_desc = mv_chan_get_current_desc(mv_chan); u32 current_desc = mv_chan_get_current_desc(mv_chan);
int seen_current = 0; int current_cleaned = 0;
struct mv_xor_desc *hw_desc;
dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__); dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__);
dev_dbg(mv_chan_to_devp(mv_chan), "current_desc %x\n", current_desc); dev_dbg(mv_chan_to_devp(mv_chan), "current_desc %x\n", current_desc);
mv_xor_clean_completed_slots(mv_chan); mv_chan_clean_completed_slots(mv_chan);
/* free completed slots from the chain starting with /* free completed slots from the chain starting with
* the oldest descriptor * the oldest descriptor
*/ */
list_for_each_entry_safe(iter, _iter, &mv_chan->chain, list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
chain_node) { node) {
prefetch(_iter);
prefetch(&_iter->async_tx);
/* do not advance past the current descriptor loaded into the /* clean finished descriptors */
* hardware channel, subsequent descriptors are either in hw_desc = iter->hw_desc;
* process or have not been submitted if (hw_desc->status & XOR_DESC_SUCCESS) {
*/ cookie = mv_desc_run_tx_complete_actions(iter, mv_chan,
if (seen_current) cookie);
break;
/* stop the search if we reach the current descriptor and the /* done processing desc, clean slot */
* channel is busy mv_desc_clean_slot(iter, mv_chan);
*/
if (iter->async_tx.phys == current_desc) { /* break if we cleaned the current descriptor */
seen_current = 1; if (iter->async_tx.phys == current_desc) {
if (busy) current_cleaned = 1;
break; break;
}
} else {
if (iter->async_tx.phys == current_desc) {
current_cleaned = 0;
break;
}
} }
cookie = mv_xor_run_tx_complete_actions(iter, mv_chan, cookie);
if (mv_xor_clean_slot(iter, mv_chan))
break;
} }
if ((busy == 0) && !list_empty(&mv_chan->chain)) { if ((busy == 0) && !list_empty(&mv_chan->chain)) {
struct mv_xor_desc_slot *chain_head; if (current_cleaned) {
chain_head = list_entry(mv_chan->chain.next, /*
struct mv_xor_desc_slot, * current descriptor cleaned and removed, run
chain_node); * from list head
*/
mv_xor_start_new_chain(mv_chan, chain_head); iter = list_entry(mv_chan->chain.next,
struct mv_xor_desc_slot,
node);
mv_chan_start_new_chain(mv_chan, iter);
} else {
if (!list_is_last(&iter->node, &mv_chan->chain)) {
/*
* descriptors are still waiting after
* current, trigger them
*/
iter = list_entry(iter->node.next,
struct mv_xor_desc_slot,
node);
mv_chan_start_new_chain(mv_chan, iter);
} else {
/*
* some descriptors are still waiting
* to be cleaned
*/
tasklet_schedule(&mv_chan->irq_tasklet);
}
}
} }
if (cookie > 0) if (cookie > 0)
...@@ -328,56 +367,35 @@ static void mv_xor_tasklet(unsigned long data) ...@@ -328,56 +367,35 @@ static void mv_xor_tasklet(unsigned long data)
struct mv_xor_chan *chan = (struct mv_xor_chan *) data; struct mv_xor_chan *chan = (struct mv_xor_chan *) data;
spin_lock_bh(&chan->lock); spin_lock_bh(&chan->lock);
mv_xor_slot_cleanup(chan); mv_chan_slot_cleanup(chan);
spin_unlock_bh(&chan->lock); spin_unlock_bh(&chan->lock);
} }
static struct mv_xor_desc_slot * static struct mv_xor_desc_slot *
mv_xor_alloc_slot(struct mv_xor_chan *mv_chan) mv_chan_alloc_slot(struct mv_xor_chan *mv_chan)
{ {
struct mv_xor_desc_slot *iter, *_iter; struct mv_xor_desc_slot *iter;
int retry = 0;
/* start search from the last allocated descriptor spin_lock_bh(&mv_chan->lock);
* if a contiguous allocation can not be found start searching
* from the beginning of the list if (!list_empty(&mv_chan->free_slots)) {
*/ iter = list_first_entry(&mv_chan->free_slots,
retry: struct mv_xor_desc_slot,
if (retry == 0) node);
iter = mv_chan->last_used;
else list_move_tail(&iter->node, &mv_chan->allocated_slots);
iter = list_entry(&mv_chan->all_slots,
struct mv_xor_desc_slot, spin_unlock_bh(&mv_chan->lock);
slot_node);
list_for_each_entry_safe_continue(
iter, _iter, &mv_chan->all_slots, slot_node) {
prefetch(_iter);
prefetch(&_iter->async_tx);
if (iter->slot_used) {
/* give up after finding the first busy slot
* on the second pass through the list
*/
if (retry)
break;
continue;
}
/* pre-ack descriptor */ /* pre-ack descriptor */
async_tx_ack(&iter->async_tx); async_tx_ack(&iter->async_tx);
iter->slot_used = 1;
INIT_LIST_HEAD(&iter->chain_node);
iter->async_tx.cookie = -EBUSY; iter->async_tx.cookie = -EBUSY;
mv_chan->last_used = iter;
mv_desc_clear_next_desc(iter);
return iter; return iter;
} }
if (!retry++)
goto retry; spin_unlock_bh(&mv_chan->lock);
/* try to free some slots if the allocation fails */ /* try to free some slots if the allocation fails */
tasklet_schedule(&mv_chan->irq_tasklet); tasklet_schedule(&mv_chan->irq_tasklet);
...@@ -403,14 +421,14 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx) ...@@ -403,14 +421,14 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
cookie = dma_cookie_assign(tx); cookie = dma_cookie_assign(tx);
if (list_empty(&mv_chan->chain)) if (list_empty(&mv_chan->chain))
list_add_tail(&sw_desc->chain_node, &mv_chan->chain); list_move_tail(&sw_desc->node, &mv_chan->chain);
else { else {
new_hw_chain = 0; new_hw_chain = 0;
old_chain_tail = list_entry(mv_chan->chain.prev, old_chain_tail = list_entry(mv_chan->chain.prev,
struct mv_xor_desc_slot, struct mv_xor_desc_slot,
chain_node); node);
list_add_tail(&sw_desc->chain_node, &mv_chan->chain); list_move_tail(&sw_desc->node, &mv_chan->chain);
dev_dbg(mv_chan_to_devp(mv_chan), "Append to last desc %pa\n", dev_dbg(mv_chan_to_devp(mv_chan), "Append to last desc %pa\n",
&old_chain_tail->async_tx.phys); &old_chain_tail->async_tx.phys);
...@@ -431,7 +449,7 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx) ...@@ -431,7 +449,7 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
} }
if (new_hw_chain) if (new_hw_chain)
mv_xor_start_new_chain(mv_chan, sw_desc); mv_chan_start_new_chain(mv_chan, sw_desc);
spin_unlock_bh(&mv_chan->lock); spin_unlock_bh(&mv_chan->lock);
...@@ -463,26 +481,20 @@ static int mv_xor_alloc_chan_resources(struct dma_chan *chan) ...@@ -463,26 +481,20 @@ static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
dma_async_tx_descriptor_init(&slot->async_tx, chan); dma_async_tx_descriptor_init(&slot->async_tx, chan);
slot->async_tx.tx_submit = mv_xor_tx_submit; slot->async_tx.tx_submit = mv_xor_tx_submit;
INIT_LIST_HEAD(&slot->chain_node); INIT_LIST_HEAD(&slot->node);
INIT_LIST_HEAD(&slot->slot_node);
dma_desc = mv_chan->dma_desc_pool; dma_desc = mv_chan->dma_desc_pool;
slot->async_tx.phys = dma_desc + idx * MV_XOR_SLOT_SIZE; slot->async_tx.phys = dma_desc + idx * MV_XOR_SLOT_SIZE;
slot->idx = idx++; slot->idx = idx++;
spin_lock_bh(&mv_chan->lock); spin_lock_bh(&mv_chan->lock);
mv_chan->slots_allocated = idx; mv_chan->slots_allocated = idx;
list_add_tail(&slot->slot_node, &mv_chan->all_slots); list_add_tail(&slot->node, &mv_chan->free_slots);
spin_unlock_bh(&mv_chan->lock); spin_unlock_bh(&mv_chan->lock);
} }
if (mv_chan->slots_allocated && !mv_chan->last_used)
mv_chan->last_used = list_entry(mv_chan->all_slots.next,
struct mv_xor_desc_slot,
slot_node);
dev_dbg(mv_chan_to_devp(mv_chan), dev_dbg(mv_chan_to_devp(mv_chan),
"allocated %d descriptor slots last_used: %p\n", "allocated %d descriptor slots\n",
mv_chan->slots_allocated, mv_chan->last_used); mv_chan->slots_allocated);
return mv_chan->slots_allocated ? : -ENOMEM; return mv_chan->slots_allocated ? : -ENOMEM;
} }
...@@ -503,16 +515,17 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src, ...@@ -503,16 +515,17 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
"%s src_cnt: %d len: %u dest %pad flags: %ld\n", "%s src_cnt: %d len: %u dest %pad flags: %ld\n",
__func__, src_cnt, len, &dest, flags); __func__, src_cnt, len, &dest, flags);
spin_lock_bh(&mv_chan->lock); sw_desc = mv_chan_alloc_slot(mv_chan);
sw_desc = mv_xor_alloc_slot(mv_chan);
if (sw_desc) { if (sw_desc) {
sw_desc->type = DMA_XOR; sw_desc->type = DMA_XOR;
sw_desc->async_tx.flags = flags; sw_desc->async_tx.flags = flags;
mv_desc_init(sw_desc, dest, len, flags); mv_desc_init(sw_desc, dest, len, flags);
if (mv_chan->op_in_desc == XOR_MODE_IN_DESC)
mv_desc_set_mode(sw_desc);
while (src_cnt--) while (src_cnt--)
mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]); mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]);
} }
spin_unlock_bh(&mv_chan->lock);
dev_dbg(mv_chan_to_devp(mv_chan), dev_dbg(mv_chan_to_devp(mv_chan),
"%s sw_desc %p async_tx %p \n", "%s sw_desc %p async_tx %p \n",
__func__, sw_desc, &sw_desc->async_tx); __func__, sw_desc, &sw_desc->async_tx);
...@@ -556,25 +569,29 @@ static void mv_xor_free_chan_resources(struct dma_chan *chan) ...@@ -556,25 +569,29 @@ static void mv_xor_free_chan_resources(struct dma_chan *chan)
spin_lock_bh(&mv_chan->lock); spin_lock_bh(&mv_chan->lock);
mv_xor_slot_cleanup(mv_chan); mv_chan_slot_cleanup(mv_chan);
list_for_each_entry_safe(iter, _iter, &mv_chan->chain, list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
chain_node) { node) {
in_use_descs++; in_use_descs++;
list_del(&iter->chain_node); list_move_tail(&iter->node, &mv_chan->free_slots);
} }
list_for_each_entry_safe(iter, _iter, &mv_chan->completed_slots, list_for_each_entry_safe(iter, _iter, &mv_chan->completed_slots,
completed_node) { node) {
in_use_descs++; in_use_descs++;
list_del(&iter->completed_node); list_move_tail(&iter->node, &mv_chan->free_slots);
}
list_for_each_entry_safe(iter, _iter, &mv_chan->allocated_slots,
node) {
in_use_descs++;
list_move_tail(&iter->node, &mv_chan->free_slots);
} }
list_for_each_entry_safe_reverse( list_for_each_entry_safe_reverse(
iter, _iter, &mv_chan->all_slots, slot_node) { iter, _iter, &mv_chan->free_slots, node) {
list_del(&iter->slot_node); list_del(&iter->node);
kfree(iter); kfree(iter);
mv_chan->slots_allocated--; mv_chan->slots_allocated--;
} }
mv_chan->last_used = NULL;
dev_dbg(mv_chan_to_devp(mv_chan), "%s slots_allocated %d\n", dev_dbg(mv_chan_to_devp(mv_chan), "%s slots_allocated %d\n",
__func__, mv_chan->slots_allocated); __func__, mv_chan->slots_allocated);
...@@ -603,13 +620,13 @@ static enum dma_status mv_xor_status(struct dma_chan *chan, ...@@ -603,13 +620,13 @@ static enum dma_status mv_xor_status(struct dma_chan *chan,
return ret; return ret;
spin_lock_bh(&mv_chan->lock); spin_lock_bh(&mv_chan->lock);
mv_xor_slot_cleanup(mv_chan); mv_chan_slot_cleanup(mv_chan);
spin_unlock_bh(&mv_chan->lock); spin_unlock_bh(&mv_chan->lock);
return dma_cookie_status(chan, cookie, txstate); return dma_cookie_status(chan, cookie, txstate);
} }
static void mv_dump_xor_regs(struct mv_xor_chan *chan) static void mv_chan_dump_regs(struct mv_xor_chan *chan)
{ {
u32 val; u32 val;
...@@ -632,8 +649,8 @@ static void mv_dump_xor_regs(struct mv_xor_chan *chan) ...@@ -632,8 +649,8 @@ static void mv_dump_xor_regs(struct mv_xor_chan *chan)
dev_err(mv_chan_to_devp(chan), "error addr 0x%08x\n", val); dev_err(mv_chan_to_devp(chan), "error addr 0x%08x\n", val);
} }
static void mv_xor_err_interrupt_handler(struct mv_xor_chan *chan, static void mv_chan_err_interrupt_handler(struct mv_xor_chan *chan,
u32 intr_cause) u32 intr_cause)
{ {
if (intr_cause & XOR_INT_ERR_DECODE) { if (intr_cause & XOR_INT_ERR_DECODE) {
dev_dbg(mv_chan_to_devp(chan), "ignoring address decode error\n"); dev_dbg(mv_chan_to_devp(chan), "ignoring address decode error\n");
...@@ -643,7 +660,7 @@ static void mv_xor_err_interrupt_handler(struct mv_xor_chan *chan, ...@@ -643,7 +660,7 @@ static void mv_xor_err_interrupt_handler(struct mv_xor_chan *chan,
dev_err(mv_chan_to_devp(chan), "error on chan %d. intr cause 0x%08x\n", dev_err(mv_chan_to_devp(chan), "error on chan %d. intr cause 0x%08x\n",
chan->idx, intr_cause); chan->idx, intr_cause);
mv_dump_xor_regs(chan); mv_chan_dump_regs(chan);
WARN_ON(1); WARN_ON(1);
} }
...@@ -655,11 +672,11 @@ static irqreturn_t mv_xor_interrupt_handler(int irq, void *data) ...@@ -655,11 +672,11 @@ static irqreturn_t mv_xor_interrupt_handler(int irq, void *data)
dev_dbg(mv_chan_to_devp(chan), "intr cause %x\n", intr_cause); dev_dbg(mv_chan_to_devp(chan), "intr cause %x\n", intr_cause);
if (intr_cause & XOR_INTR_ERRORS) if (intr_cause & XOR_INTR_ERRORS)
mv_xor_err_interrupt_handler(chan, intr_cause); mv_chan_err_interrupt_handler(chan, intr_cause);
tasklet_schedule(&chan->irq_tasklet); tasklet_schedule(&chan->irq_tasklet);
mv_xor_device_clear_eoc_cause(chan); mv_chan_clear_eoc_cause(chan);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
...@@ -678,7 +695,7 @@ static void mv_xor_issue_pending(struct dma_chan *chan) ...@@ -678,7 +695,7 @@ static void mv_xor_issue_pending(struct dma_chan *chan)
* Perform a transaction to verify the HW works. * Perform a transaction to verify the HW works.
*/ */
static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan) static int mv_chan_memcpy_self_test(struct mv_xor_chan *mv_chan)
{ {
int i, ret; int i, ret;
void *src, *dest; void *src, *dest;
...@@ -787,7 +804,7 @@ static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan) ...@@ -787,7 +804,7 @@ static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan)
#define MV_XOR_NUM_SRC_TEST 4 /* must be <= 15 */ #define MV_XOR_NUM_SRC_TEST 4 /* must be <= 15 */
static int static int
mv_xor_xor_self_test(struct mv_xor_chan *mv_chan) mv_chan_xor_self_test(struct mv_xor_chan *mv_chan)
{ {
int i, src_idx, ret; int i, src_idx, ret;
struct page *dest; struct page *dest;
...@@ -951,7 +968,7 @@ static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan) ...@@ -951,7 +968,7 @@ static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
static struct mv_xor_chan * static struct mv_xor_chan *
mv_xor_channel_add(struct mv_xor_device *xordev, mv_xor_channel_add(struct mv_xor_device *xordev,
struct platform_device *pdev, struct platform_device *pdev,
int idx, dma_cap_mask_t cap_mask, int irq) int idx, dma_cap_mask_t cap_mask, int irq, int op_in_desc)
{ {
int ret = 0; int ret = 0;
struct mv_xor_chan *mv_chan; struct mv_xor_chan *mv_chan;
...@@ -963,6 +980,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev, ...@@ -963,6 +980,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
mv_chan->idx = idx; mv_chan->idx = idx;
mv_chan->irq = irq; mv_chan->irq = irq;
mv_chan->op_in_desc = op_in_desc;
dma_dev = &mv_chan->dmadev; dma_dev = &mv_chan->dmadev;
...@@ -1014,7 +1032,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev, ...@@ -1014,7 +1032,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
mv_chan); mv_chan);
/* clear errors before enabling interrupts */ /* clear errors before enabling interrupts */
mv_xor_device_clear_err_status(mv_chan); mv_chan_clear_err_status(mv_chan);
ret = request_irq(mv_chan->irq, mv_xor_interrupt_handler, ret = request_irq(mv_chan->irq, mv_xor_interrupt_handler,
0, dev_name(&pdev->dev), mv_chan); 0, dev_name(&pdev->dev), mv_chan);
...@@ -1023,32 +1041,37 @@ mv_xor_channel_add(struct mv_xor_device *xordev, ...@@ -1023,32 +1041,37 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
mv_chan_unmask_interrupts(mv_chan); mv_chan_unmask_interrupts(mv_chan);
mv_set_mode(mv_chan, DMA_XOR); if (mv_chan->op_in_desc == XOR_MODE_IN_DESC)
mv_chan_set_mode_to_desc(mv_chan);
else
mv_chan_set_mode(mv_chan, DMA_XOR);
spin_lock_init(&mv_chan->lock); spin_lock_init(&mv_chan->lock);
INIT_LIST_HEAD(&mv_chan->chain); INIT_LIST_HEAD(&mv_chan->chain);
INIT_LIST_HEAD(&mv_chan->completed_slots); INIT_LIST_HEAD(&mv_chan->completed_slots);
INIT_LIST_HEAD(&mv_chan->all_slots); INIT_LIST_HEAD(&mv_chan->free_slots);
INIT_LIST_HEAD(&mv_chan->allocated_slots);
mv_chan->dmachan.device = dma_dev; mv_chan->dmachan.device = dma_dev;
dma_cookie_init(&mv_chan->dmachan); dma_cookie_init(&mv_chan->dmachan);
list_add_tail(&mv_chan->dmachan.device_node, &dma_dev->channels); list_add_tail(&mv_chan->dmachan.device_node, &dma_dev->channels);
if (dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask)) { if (dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask)) {
ret = mv_xor_memcpy_self_test(mv_chan); ret = mv_chan_memcpy_self_test(mv_chan);
dev_dbg(&pdev->dev, "memcpy self test returned %d\n", ret); dev_dbg(&pdev->dev, "memcpy self test returned %d\n", ret);
if (ret) if (ret)
goto err_free_irq; goto err_free_irq;
} }
if (dma_has_cap(DMA_XOR, dma_dev->cap_mask)) { if (dma_has_cap(DMA_XOR, dma_dev->cap_mask)) {
ret = mv_xor_xor_self_test(mv_chan); ret = mv_chan_xor_self_test(mv_chan);
dev_dbg(&pdev->dev, "xor self test returned %d\n", ret); dev_dbg(&pdev->dev, "xor self test returned %d\n", ret);
if (ret) if (ret)
goto err_free_irq; goto err_free_irq;
} }
dev_info(&pdev->dev, "Marvell XOR: ( %s%s%s)\n", dev_info(&pdev->dev, "Marvell XOR (%s): ( %s%s%s)\n",
mv_chan->op_in_desc ? "Descriptor Mode" : "Registers Mode",
dma_has_cap(DMA_XOR, dma_dev->cap_mask) ? "xor " : "", dma_has_cap(DMA_XOR, dma_dev->cap_mask) ? "xor " : "",
dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask) ? "cpy " : "", dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask) ? "cpy " : "",
dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask) ? "intr " : ""); dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask) ? "intr " : "");
...@@ -1097,6 +1120,13 @@ mv_xor_conf_mbus_windows(struct mv_xor_device *xordev, ...@@ -1097,6 +1120,13 @@ mv_xor_conf_mbus_windows(struct mv_xor_device *xordev,
writel(0, base + WINDOW_OVERRIDE_CTRL(1)); writel(0, base + WINDOW_OVERRIDE_CTRL(1));
} }
static const struct of_device_id mv_xor_dt_ids[] = {
{ .compatible = "marvell,orion-xor", .data = (void *)XOR_MODE_IN_REG },
{ .compatible = "marvell,armada-380-xor", .data = (void *)XOR_MODE_IN_DESC },
{},
};
MODULE_DEVICE_TABLE(of, mv_xor_dt_ids);
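/* The "marvell,armada-380-xor" entry selects the new per-descriptor mode
 * (XOR_MODE_IN_DESC): the channel is switched with mv_chan_set_mode_to_desc()
 * and the XOR/MEMCPY opcode is written into each descriptor through
 * mv_desc_set_mode(), instead of the single channel-wide setting programmed
 * by mv_chan_set_mode(). */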
static int mv_xor_probe(struct platform_device *pdev) static int mv_xor_probe(struct platform_device *pdev)
{ {
const struct mbus_dram_target_info *dram; const struct mbus_dram_target_info *dram;
...@@ -1104,6 +1134,7 @@ static int mv_xor_probe(struct platform_device *pdev) ...@@ -1104,6 +1134,7 @@ static int mv_xor_probe(struct platform_device *pdev)
struct mv_xor_platform_data *pdata = dev_get_platdata(&pdev->dev); struct mv_xor_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct resource *res; struct resource *res;
int i, ret; int i, ret;
int op_in_desc;
dev_notice(&pdev->dev, "Marvell shared XOR driver\n"); dev_notice(&pdev->dev, "Marvell shared XOR driver\n");
...@@ -1148,11 +1179,15 @@ static int mv_xor_probe(struct platform_device *pdev) ...@@ -1148,11 +1179,15 @@ static int mv_xor_probe(struct platform_device *pdev)
if (pdev->dev.of_node) { if (pdev->dev.of_node) {
struct device_node *np; struct device_node *np;
int i = 0; int i = 0;
const struct of_device_id *of_id =
of_match_device(mv_xor_dt_ids,
&pdev->dev);
for_each_child_of_node(pdev->dev.of_node, np) { for_each_child_of_node(pdev->dev.of_node, np) {
struct mv_xor_chan *chan; struct mv_xor_chan *chan;
dma_cap_mask_t cap_mask; dma_cap_mask_t cap_mask;
int irq; int irq;
op_in_desc = (int)of_id->data;
dma_cap_zero(cap_mask); dma_cap_zero(cap_mask);
if (of_property_read_bool(np, "dmacap,memcpy")) if (of_property_read_bool(np, "dmacap,memcpy"))
...@@ -1169,7 +1204,7 @@ static int mv_xor_probe(struct platform_device *pdev) ...@@ -1169,7 +1204,7 @@ static int mv_xor_probe(struct platform_device *pdev)
} }
chan = mv_xor_channel_add(xordev, pdev, i, chan = mv_xor_channel_add(xordev, pdev, i,
cap_mask, irq); cap_mask, irq, op_in_desc);
if (IS_ERR(chan)) { if (IS_ERR(chan)) {
ret = PTR_ERR(chan); ret = PTR_ERR(chan);
irq_dispose_mapping(irq); irq_dispose_mapping(irq);
...@@ -1198,7 +1233,8 @@ static int mv_xor_probe(struct platform_device *pdev) ...@@ -1198,7 +1233,8 @@ static int mv_xor_probe(struct platform_device *pdev)
} }
chan = mv_xor_channel_add(xordev, pdev, i, chan = mv_xor_channel_add(xordev, pdev, i,
cd->cap_mask, irq); cd->cap_mask, irq,
XOR_MODE_IN_REG);
if (IS_ERR(chan)) { if (IS_ERR(chan)) {
ret = PTR_ERR(chan); ret = PTR_ERR(chan);
goto err_channel_add; goto err_channel_add;
...@@ -1244,14 +1280,6 @@ static int mv_xor_remove(struct platform_device *pdev) ...@@ -1244,14 +1280,6 @@ static int mv_xor_remove(struct platform_device *pdev)
return 0; return 0;
} }
#ifdef CONFIG_OF
static const struct of_device_id mv_xor_dt_ids[] = {
{ .compatible = "marvell,orion-xor", },
{},
};
MODULE_DEVICE_TABLE(of, mv_xor_dt_ids);
#endif
static struct platform_driver mv_xor_driver = { static struct platform_driver mv_xor_driver = {
.probe = mv_xor_probe, .probe = mv_xor_probe,
.remove = mv_xor_remove, .remove = mv_xor_remove,
......
...@@ -19,7 +19,7 @@ ...@@ -19,7 +19,7 @@
#include <linux/dmaengine.h> #include <linux/dmaengine.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#define MV_XOR_POOL_SIZE PAGE_SIZE #define MV_XOR_POOL_SIZE (MV_XOR_SLOT_SIZE * 3072)
#define MV_XOR_SLOT_SIZE 64 #define MV_XOR_SLOT_SIZE 64
#define MV_XOR_THRESHOLD 1 #define MV_XOR_THRESHOLD 1
#define MV_XOR_MAX_CHANNELS 2 #define MV_XOR_MAX_CHANNELS 2
...@@ -30,7 +30,13 @@ ...@@ -30,7 +30,13 @@
/* Values for the XOR_CONFIG register */ /* Values for the XOR_CONFIG register */
#define XOR_OPERATION_MODE_XOR 0 #define XOR_OPERATION_MODE_XOR 0
#define XOR_OPERATION_MODE_MEMCPY 2 #define XOR_OPERATION_MODE_MEMCPY 2
#define XOR_OPERATION_MODE_IN_DESC 7
#define XOR_DESCRIPTOR_SWAP BIT(14) #define XOR_DESCRIPTOR_SWAP BIT(14)
#define XOR_DESC_SUCCESS 0x40000000
#define XOR_DESC_OPERATION_XOR (0 << 24)
#define XOR_DESC_OPERATION_CRC32C (1 << 24)
#define XOR_DESC_OPERATION_MEMCPY (2 << 24)
#define XOR_DESC_DMA_OWNED BIT(31) #define XOR_DESC_DMA_OWNED BIT(31)
#define XOR_DESC_EOD_INT_EN BIT(31) #define XOR_DESC_EOD_INT_EN BIT(31)
...@@ -88,13 +94,14 @@ struct mv_xor_device { ...@@ -88,13 +94,14 @@ struct mv_xor_device {
* @mmr_base: memory mapped register base * @mmr_base: memory mapped register base
* @idx: the index of the xor channel * @idx: the index of the xor channel
* @chain: device chain view of the descriptors * @chain: device chain view of the descriptors
* @free_slots: free slots usable by the channel
* @allocated_slots: slots allocated by the driver
* @completed_slots: slots completed by HW but still need to be acked * @completed_slots: slots completed by HW but still need to be acked
* @device: parent device * @device: parent device
* @common: common dmaengine channel object members * @common: common dmaengine channel object members
* @last_used: place holder for allocation to continue from where it left off
* @all_slots: complete domain of slots usable by the channel
* @slots_allocated: records the actual size of the descriptor slot pool * @slots_allocated: records the actual size of the descriptor slot pool
* @irq_tasklet: bottom half where mv_xor_slot_cleanup runs * @irq_tasklet: bottom half where mv_xor_slot_cleanup runs
* @op_in_desc: new mode of the driver, each op is written to a descriptor.
*/ */
struct mv_xor_chan { struct mv_xor_chan {
int pending; int pending;
...@@ -105,16 +112,17 @@ struct mv_xor_chan { ...@@ -105,16 +112,17 @@ struct mv_xor_chan {
int irq; int irq;
enum dma_transaction_type current_type; enum dma_transaction_type current_type;
struct list_head chain; struct list_head chain;
struct list_head free_slots;
struct list_head allocated_slots;
struct list_head completed_slots; struct list_head completed_slots;
dma_addr_t dma_desc_pool; dma_addr_t dma_desc_pool;
void *dma_desc_pool_virt; void *dma_desc_pool_virt;
size_t pool_size; size_t pool_size;
struct dma_device dmadev; struct dma_device dmadev;
struct dma_chan dmachan; struct dma_chan dmachan;
struct mv_xor_desc_slot *last_used;
struct list_head all_slots;
int slots_allocated; int slots_allocated;
struct tasklet_struct irq_tasklet; struct tasklet_struct irq_tasklet;
int op_in_desc;
char dummy_src[MV_XOR_MIN_BYTE_COUNT]; char dummy_src[MV_XOR_MIN_BYTE_COUNT];
char dummy_dst[MV_XOR_MIN_BYTE_COUNT]; char dummy_dst[MV_XOR_MIN_BYTE_COUNT];
dma_addr_t dummy_src_addr, dummy_dst_addr; dma_addr_t dummy_src_addr, dummy_dst_addr;
...@@ -122,9 +130,7 @@ struct mv_xor_chan { ...@@ -122,9 +130,7 @@ struct mv_xor_chan {
/** /**
* struct mv_xor_desc_slot - software descriptor * struct mv_xor_desc_slot - software descriptor
* @slot_node: node on the mv_xor_chan.all_slots list * @node: node on the mv_xor_chan lists
* @chain_node: node on the mv_xor_chan.chain list
* @completed_node: node on the mv_xor_chan.completed_slots list
* @hw_desc: virtual address of the hardware descriptor chain * @hw_desc: virtual address of the hardware descriptor chain
* @phys: hardware address of the hardware descriptor chain * @phys: hardware address of the hardware descriptor chain
* @slot_used: slot in use or not * @slot_used: slot in use or not
...@@ -133,12 +139,9 @@ struct mv_xor_chan { ...@@ -133,12 +139,9 @@ struct mv_xor_chan {
* @async_tx: support for the async_tx api * @async_tx: support for the async_tx api
*/ */
struct mv_xor_desc_slot { struct mv_xor_desc_slot {
struct list_head slot_node; struct list_head node;
struct list_head chain_node;
struct list_head completed_node;
enum dma_transaction_type type; enum dma_transaction_type type;
void *hw_desc; void *hw_desc;
u16 slot_used;
u16 idx; u16 idx;
struct dma_async_tx_descriptor async_tx; struct dma_async_tx_descriptor async_tx;
}; };
......
...@@ -170,7 +170,7 @@ static struct mxs_dma_type mxs_dma_types[] = { ...@@ -170,7 +170,7 @@ static struct mxs_dma_type mxs_dma_types[] = {
} }
}; };
static struct platform_device_id mxs_dma_ids[] = { static const struct platform_device_id mxs_dma_ids[] = {
{ {
.name = "imx23-dma-apbh", .name = "imx23-dma-apbh",
.driver_data = (kernel_ulong_t) &mxs_dma_types[0], .driver_data = (kernel_ulong_t) &mxs_dma_types[0],
......
...@@ -1455,7 +1455,7 @@ static int nbpf_remove(struct platform_device *pdev) ...@@ -1455,7 +1455,7 @@ static int nbpf_remove(struct platform_device *pdev)
return 0; return 0;
} }
static struct platform_device_id nbpf_ids[] = { static const struct platform_device_id nbpf_ids[] = {
{"nbpfaxi64dmac1b4", (kernel_ulong_t)&nbpf_cfg[NBPF1B4]}, {"nbpfaxi64dmac1b4", (kernel_ulong_t)&nbpf_cfg[NBPF1B4]},
{"nbpfaxi64dmac1b8", (kernel_ulong_t)&nbpf_cfg[NBPF1B8]}, {"nbpfaxi64dmac1b8", (kernel_ulong_t)&nbpf_cfg[NBPF1B8]},
{"nbpfaxi64dmac1b16", (kernel_ulong_t)&nbpf_cfg[NBPF1B16]}, {"nbpfaxi64dmac1b16", (kernel_ulong_t)&nbpf_cfg[NBPF1B16]},
......
...@@ -44,6 +44,50 @@ static struct of_dma *of_dma_find_controller(struct of_phandle_args *dma_spec) ...@@ -44,6 +44,50 @@ static struct of_dma *of_dma_find_controller(struct of_phandle_args *dma_spec)
return NULL; return NULL;
} }
/**
* of_dma_router_xlate - translation function for router devices
* @dma_spec: pointer to DMA specifier as found in the device tree
* @of_dma: pointer to DMA controller data (router information)
*
* The function creates a new dma_spec to be passed to the router driver's
* of_dma_route_allocate() function, which prepares a dma_spec that can be
* used to request a channel from the real DMA controller.
*/
static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct dma_chan *chan;
struct of_dma *ofdma_target;
struct of_phandle_args dma_spec_target;
void *route_data;
/* translate the request for the real DMA controller */
memcpy(&dma_spec_target, dma_spec, sizeof(dma_spec_target));
route_data = ofdma->of_dma_route_allocate(&dma_spec_target, ofdma);
if (IS_ERR(route_data))
return NULL;
ofdma_target = of_dma_find_controller(&dma_spec_target);
if (!ofdma_target)
return NULL;
chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
if (chan) {
chan->router = ofdma->dma_router;
chan->route_data = route_data;
} else {
ofdma->dma_router->route_free(ofdma->dma_router->dev,
route_data);
}
/*
* Need to put the node back since the ofdma->of_dma_route_allocate
* has taken it for generating the new, translated dma_spec
*/
of_node_put(dma_spec_target.np);
return chan;
}
/** /**
* of_dma_controller_register - Register a DMA controller to DT DMA helpers * of_dma_controller_register - Register a DMA controller to DT DMA helpers
* @np: device node of DMA controller * @np: device node of DMA controller
...@@ -109,6 +153,51 @@ void of_dma_controller_free(struct device_node *np) ...@@ -109,6 +153,51 @@ void of_dma_controller_free(struct device_node *np)
} }
EXPORT_SYMBOL_GPL(of_dma_controller_free); EXPORT_SYMBOL_GPL(of_dma_controller_free);
/**
* of_dma_router_register - Register a DMA router to DT DMA helpers as a
* controller
* @np: device node of DMA router
* @of_dma_route_allocate: setup function for the router which needs to
* modify the dma_spec for the DMA controller to
* use and to set up the requested route.
* @dma_router: pointer to the dma_router structure to be used when
* the route needs to be freed.
*
* Returns 0 on success or appropriate errno value on error.
*
* Allocated memory should be freed with an appropriate of_dma_controller_free()
* call.
*/
int of_dma_router_register(struct device_node *np,
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *),
struct dma_router *dma_router)
{
struct of_dma *ofdma;
if (!np || !of_dma_route_allocate || !dma_router) {
pr_err("%s: not enough information provided\n", __func__);
return -EINVAL;
}
ofdma = kzalloc(sizeof(*ofdma), GFP_KERNEL);
if (!ofdma)
return -ENOMEM;
ofdma->of_node = np;
ofdma->of_dma_xlate = of_dma_router_xlate;
ofdma->of_dma_route_allocate = of_dma_route_allocate;
ofdma->dma_router = dma_router;
/* Now queue of_dma controller structure in list */
mutex_lock(&of_dma_lock);
list_add_tail(&ofdma->of_dma_controllers, &of_dma_list);
mutex_unlock(&of_dma_lock);
return 0;
}
EXPORT_SYMBOL_GPL(of_dma_router_register);
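As a rough illustration of the calling convention, a crossbar-style router driver would register itself roughly as below. This is a sketch only: struct xbar, struct xbar_map, xbar_program() and xbar_route_free() are invented names, not part of this series; only of_dma_router_register() and the struct dma_router fields used elsewhere in this file are real.

#include <linux/dmaengine.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct xbar {
	struct dma_router dma_router;
	struct device_node *dma_master_np;	/* node of the real DMA controller */
};

struct xbar_map {
	u16 event_line;		/* incoming request line at the router */
	u16 xbar_out;		/* translated line at the DMA controller */
};

u16 xbar_program(struct xbar *xbar, u16 event_line);		/* invented helper */
void xbar_route_free(struct device *dev, void *route_data);	/* invented helper */

/* Route-allocation callback: program the crossbar and rewrite the spec so
 * of_dma_router_xlate() can forward it to the real controller. */
static void *xbar_route_allocate(struct of_phandle_args *dma_spec,
				 struct of_dma *ofdma)
{
	struct xbar *xbar = dev_get_drvdata(ofdma->dma_router->dev);
	struct xbar_map *map = kzalloc(sizeof(*map), GFP_KERNEL);

	if (!map)
		return ERR_PTR(-ENOMEM);

	map->event_line = dma_spec->args[0];
	map->xbar_out = xbar_program(xbar, map->event_line);

	/* point the spec at the real DMA controller, with the translated line */
	dma_spec->np = of_node_get(xbar->dma_master_np);
	dma_spec->args[0] = map->xbar_out;

	return map;	/* handed back later through ->route_free() */
}

static int xbar_probe(struct platform_device *pdev)
{
	struct xbar *xbar = devm_kzalloc(&pdev->dev, sizeof(*xbar), GFP_KERNEL);

	if (!xbar)
		return -ENOMEM;
	platform_set_drvdata(pdev, xbar);

	xbar->dma_router.dev = &pdev->dev;
	xbar->dma_router.route_free = xbar_route_free;

	return of_dma_router_register(pdev->dev.of_node,
				      xbar_route_allocate,
				      &xbar->dma_router);
}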
/** /**
* of_dma_match_channel - Check if a DMA specifier matches name * of_dma_match_channel - Check if a DMA specifier matches name
* @np: device node to look for DMA channels * @np: device node to look for DMA channels
......
...@@ -22,6 +22,9 @@ ...@@ -22,6 +22,9 @@
#include "virt-dma.h" #include "virt-dma.h"
#define OMAP_SDMA_REQUESTS 127
#define OMAP_SDMA_CHANNELS 32
struct omap_dmadev { struct omap_dmadev {
struct dma_device ddev; struct dma_device ddev;
spinlock_t lock; spinlock_t lock;
...@@ -31,9 +34,10 @@ struct omap_dmadev { ...@@ -31,9 +34,10 @@ struct omap_dmadev {
const struct omap_dma_reg *reg_map; const struct omap_dma_reg *reg_map;
struct omap_system_dma_plat_info *plat; struct omap_system_dma_plat_info *plat;
bool legacy; bool legacy;
unsigned dma_requests;
spinlock_t irq_lock; spinlock_t irq_lock;
uint32_t irq_enable_mask; uint32_t irq_enable_mask;
struct omap_chan *lch_map[32]; struct omap_chan *lch_map[OMAP_SDMA_CHANNELS];
}; };
struct omap_chan { struct omap_chan {
...@@ -362,7 +366,7 @@ static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d, ...@@ -362,7 +366,7 @@ static void omap_dma_start_sg(struct omap_chan *c, struct omap_desc *d,
struct omap_sg *sg = d->sg + idx; struct omap_sg *sg = d->sg + idx;
unsigned cxsa, cxei, cxfi; unsigned cxsa, cxei, cxfi;
if (d->dir == DMA_DEV_TO_MEM) { if (d->dir == DMA_DEV_TO_MEM || d->dir == DMA_MEM_TO_MEM) {
cxsa = CDSA; cxsa = CDSA;
cxei = CDEI; cxei = CDEI;
cxfi = CDFI; cxfi = CDFI;
...@@ -408,7 +412,7 @@ static void omap_dma_start_desc(struct omap_chan *c) ...@@ -408,7 +412,7 @@ static void omap_dma_start_desc(struct omap_chan *c)
if (dma_omap1()) if (dma_omap1())
omap_dma_chan_write(c, CCR2, d->ccr >> 16); omap_dma_chan_write(c, CCR2, d->ccr >> 16);
if (d->dir == DMA_DEV_TO_MEM) { if (d->dir == DMA_DEV_TO_MEM || d->dir == DMA_MEM_TO_MEM) {
cxsa = CSSA; cxsa = CSSA;
cxei = CSEI; cxei = CSEI;
cxfi = CSFI; cxfi = CSFI;
...@@ -589,6 +593,7 @@ static void omap_dma_free_chan_resources(struct dma_chan *chan) ...@@ -589,6 +593,7 @@ static void omap_dma_free_chan_resources(struct dma_chan *chan)
omap_free_dma(c->dma_ch); omap_free_dma(c->dma_ch);
dev_dbg(od->ddev.dev, "freeing channel for %u\n", c->dma_sig); dev_dbg(od->ddev.dev, "freeing channel for %u\n", c->dma_sig);
c->dma_sig = 0;
} }
static size_t omap_dma_sg_size(struct omap_sg *sg) static size_t omap_dma_sg_size(struct omap_sg *sg)
...@@ -948,6 +953,51 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic( ...@@ -948,6 +953,51 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
return vchan_tx_prep(&c->vc, &d->vd, flags); return vchan_tx_prep(&c->vc, &d->vd, flags);
} }
static struct dma_async_tx_descriptor *omap_dma_prep_dma_memcpy(
struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long tx_flags)
{
struct omap_chan *c = to_omap_dma_chan(chan);
struct omap_desc *d;
uint8_t data_type;
d = kzalloc(sizeof(*d) + sizeof(d->sg[0]), GFP_ATOMIC);
if (!d)
return NULL;
data_type = __ffs((src | dest | len));
if (data_type > CSDP_DATA_TYPE_32)
data_type = CSDP_DATA_TYPE_32;
d->dir = DMA_MEM_TO_MEM;
d->dev_addr = src;
d->fi = 0;
d->es = data_type;
d->sg[0].en = len / BIT(data_type);
d->sg[0].fn = 1;
d->sg[0].addr = dest;
d->sglen = 1;
d->ccr = c->ccr;
d->ccr |= CCR_DST_AMODE_POSTINC | CCR_SRC_AMODE_POSTINC;
d->cicr = CICR_DROP_IE;
if (tx_flags & DMA_PREP_INTERRUPT)
d->cicr |= CICR_FRAME_IE;
d->csdp = data_type;
if (dma_omap1()) {
d->cicr |= CICR_TOUT_IE;
d->csdp |= CSDP_DST_PORT_EMIFF | CSDP_SRC_PORT_EMIFF;
} else {
d->csdp |= CSDP_DST_PACKED | CSDP_SRC_PACKED;
d->cicr |= CICR_MISALIGNED_ERR_IE | CICR_TRANS_ERR_IE;
d->csdp |= CSDP_DST_BURST_64 | CSDP_SRC_BURST_64;
}
return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
}
static int omap_dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg) static int omap_dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
{ {
struct omap_chan *c = to_omap_dma_chan(chan); struct omap_chan *c = to_omap_dma_chan(chan);
...@@ -1037,7 +1087,7 @@ static int omap_dma_resume(struct dma_chan *chan) ...@@ -1037,7 +1087,7 @@ static int omap_dma_resume(struct dma_chan *chan)
return 0; return 0;
} }
static int omap_dma_chan_init(struct omap_dmadev *od, int dma_sig) static int omap_dma_chan_init(struct omap_dmadev *od)
{ {
struct omap_chan *c; struct omap_chan *c;
...@@ -1046,7 +1096,6 @@ static int omap_dma_chan_init(struct omap_dmadev *od, int dma_sig) ...@@ -1046,7 +1096,6 @@ static int omap_dma_chan_init(struct omap_dmadev *od, int dma_sig)
return -ENOMEM; return -ENOMEM;
c->reg_map = od->reg_map; c->reg_map = od->reg_map;
c->dma_sig = dma_sig;
c->vc.desc_free = omap_dma_desc_free; c->vc.desc_free = omap_dma_desc_free;
vchan_init(&c->vc, &od->ddev); vchan_init(&c->vc, &od->ddev);
INIT_LIST_HEAD(&c->node); INIT_LIST_HEAD(&c->node);
...@@ -1094,12 +1143,14 @@ static int omap_dma_probe(struct platform_device *pdev) ...@@ -1094,12 +1143,14 @@ static int omap_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_SLAVE, od->ddev.cap_mask); dma_cap_set(DMA_SLAVE, od->ddev.cap_mask);
dma_cap_set(DMA_CYCLIC, od->ddev.cap_mask); dma_cap_set(DMA_CYCLIC, od->ddev.cap_mask);
dma_cap_set(DMA_MEMCPY, od->ddev.cap_mask);
od->ddev.device_alloc_chan_resources = omap_dma_alloc_chan_resources; od->ddev.device_alloc_chan_resources = omap_dma_alloc_chan_resources;
od->ddev.device_free_chan_resources = omap_dma_free_chan_resources; od->ddev.device_free_chan_resources = omap_dma_free_chan_resources;
od->ddev.device_tx_status = omap_dma_tx_status; od->ddev.device_tx_status = omap_dma_tx_status;
od->ddev.device_issue_pending = omap_dma_issue_pending; od->ddev.device_issue_pending = omap_dma_issue_pending;
od->ddev.device_prep_slave_sg = omap_dma_prep_slave_sg; od->ddev.device_prep_slave_sg = omap_dma_prep_slave_sg;
od->ddev.device_prep_dma_cyclic = omap_dma_prep_dma_cyclic; od->ddev.device_prep_dma_cyclic = omap_dma_prep_dma_cyclic;
od->ddev.device_prep_dma_memcpy = omap_dma_prep_dma_memcpy;
od->ddev.device_config = omap_dma_slave_config; od->ddev.device_config = omap_dma_slave_config;
od->ddev.device_pause = omap_dma_pause; od->ddev.device_pause = omap_dma_pause;
od->ddev.device_resume = omap_dma_resume; od->ddev.device_resume = omap_dma_resume;
...@@ -1116,8 +1167,17 @@ static int omap_dma_probe(struct platform_device *pdev) ...@@ -1116,8 +1167,17 @@ static int omap_dma_probe(struct platform_device *pdev)
tasklet_init(&od->task, omap_dma_sched, (unsigned long)od); tasklet_init(&od->task, omap_dma_sched, (unsigned long)od);
for (i = 0; i < 127; i++) { od->dma_requests = OMAP_SDMA_REQUESTS;
rc = omap_dma_chan_init(od, i); if (pdev->dev.of_node && of_property_read_u32(pdev->dev.of_node,
"dma-requests",
&od->dma_requests)) {
dev_info(&pdev->dev,
"Missing dma-requests property, using %u.\n",
OMAP_SDMA_REQUESTS);
}
for (i = 0; i < OMAP_SDMA_CHANNELS; i++) {
rc = omap_dma_chan_init(od);
if (rc) { if (rc) {
omap_dma_free(od); omap_dma_free(od);
return rc; return rc;
...@@ -1208,10 +1268,14 @@ static struct platform_driver omap_dma_driver = { ...@@ -1208,10 +1268,14 @@ static struct platform_driver omap_dma_driver = {
bool omap_dma_filter_fn(struct dma_chan *chan, void *param) bool omap_dma_filter_fn(struct dma_chan *chan, void *param)
{ {
if (chan->device->dev->driver == &omap_dma_driver.driver) { if (chan->device->dev->driver == &omap_dma_driver.driver) {
struct omap_dmadev *od = to_omap_dma_dev(chan->device);
struct omap_chan *c = to_omap_dma_chan(chan); struct omap_chan *c = to_omap_dma_chan(chan);
unsigned req = *(unsigned *)param; unsigned req = *(unsigned *)param;
return req == c->dma_sig; if (req <= od->dma_requests) {
c->dma_sig = req;
return true;
}
} }
return false; return false;
} }
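Since the filter now assigns the requested line to whichever free channel it accepts, a legacy (non-DT) client keeps the familiar dma_request_channel() pattern; a minimal sketch, where the request-line value 53 is an arbitrary example:

static struct dma_chan *example_request_sdma_chan(void)
{
	dma_cap_mask_t mask;
	unsigned int sig = 53;		/* board-specific DMA request line */

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* omap_dma_filter_fn() records 'sig' in the channel it accepts */
	return dma_request_channel(mask, omap_dma_filter_fn, &sig);
}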
......
...@@ -1424,8 +1424,8 @@ static int pl330_submit_req(struct pl330_thread *thrd, ...@@ -1424,8 +1424,8 @@ static int pl330_submit_req(struct pl330_thread *thrd,
goto xfer_exit; goto xfer_exit;
if (ret > pl330->mcbufsz / 2) { if (ret > pl330->mcbufsz / 2) {
dev_info(pl330->ddma.dev, "%s:%d Trying increasing mcbufsz\n", dev_info(pl330->ddma.dev, "%s:%d Try increasing mcbufsz (%i/%i)\n",
__func__, __LINE__); __func__, __LINE__, ret, pl330->mcbufsz / 2);
ret = -ENOMEM; ret = -ENOMEM;
goto xfer_exit; goto xfer_exit;
} }
...@@ -2584,12 +2584,14 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst, ...@@ -2584,12 +2584,14 @@ pl330_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
{ {
struct dma_pl330_desc *desc; struct dma_pl330_desc *desc;
struct dma_pl330_chan *pch = to_pchan(chan); struct dma_pl330_chan *pch = to_pchan(chan);
struct pl330_dmac *pl330 = pch->dmac; struct pl330_dmac *pl330;
int burst; int burst;
if (unlikely(!pch || !len)) if (unlikely(!pch || !len))
return NULL; return NULL;
pl330 = pch->dmac;
desc = __pl330_prep_dma_memcpy(pch, dst, src, len); desc = __pl330_prep_dma_memcpy(pch, dst, src, len);
if (!desc) if (!desc)
return NULL; return NULL;
......
/*
* Copyright 2015 Robert Jarzmik <robert.jarzmik@free.fr>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/err.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/dmaengine.h>
#include <linux/platform_device.h>
#include <linux/device.h>
#include <linux/platform_data/mmp_dma.h>
#include <linux/dmapool.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
#include <linux/of.h>
#include <linux/dma/pxa-dma.h>
#include "dmaengine.h"
#include "virt-dma.h"
#define DCSR(n) (0x0000 + ((n) << 2))
#define DALGN(n) 0x00a0
#define DINT 0x00f0
#define DDADR(n) (0x0200 + ((n) << 4))
#define DSADR(n) (0x0204 + ((n) << 4))
#define DTADR(n) (0x0208 + ((n) << 4))
#define DCMD(n) (0x020c + ((n) << 4))
#define PXA_DCSR_RUN BIT(31) /* Run Bit (read / write) */
#define PXA_DCSR_NODESC BIT(30) /* No-Descriptor Fetch (read / write) */
#define PXA_DCSR_STOPIRQEN BIT(29) /* Stop Interrupt Enable (R/W) */
#define PXA_DCSR_REQPEND BIT(8) /* Request Pending (read-only) */
#define PXA_DCSR_STOPSTATE BIT(3) /* Stop State (read-only) */
#define PXA_DCSR_ENDINTR BIT(2) /* End Interrupt (read / write) */
#define PXA_DCSR_STARTINTR BIT(1) /* Start Interrupt (read / write) */
#define PXA_DCSR_BUSERR BIT(0) /* Bus Error Interrupt (read / write) */
#define PXA_DCSR_EORIRQEN BIT(28) /* End of Receive IRQ Enable (R/W) */
#define PXA_DCSR_EORJMPEN BIT(27) /* Jump to next descriptor on EOR */
#define PXA_DCSR_EORSTOPEN BIT(26) /* STOP on an EOR */
#define PXA_DCSR_SETCMPST BIT(25) /* Set Descriptor Compare Status */
#define PXA_DCSR_CLRCMPST BIT(24) /* Clear Descriptor Compare Status */
#define PXA_DCSR_CMPST BIT(10) /* The Descriptor Compare Status */
#define PXA_DCSR_EORINTR BIT(9) /* The end of Receive */
#define DRCMR_MAPVLD BIT(7) /* Map Valid (read / write) */
#define DRCMR_CHLNUM 0x1f /* mask for Channel Number (read / write) */
#define DDADR_DESCADDR 0xfffffff0 /* Address of next descriptor (mask) */
#define DDADR_STOP BIT(0) /* Stop (read / write) */
#define PXA_DCMD_INCSRCADDR BIT(31) /* Source Address Increment Setting. */
#define PXA_DCMD_INCTRGADDR BIT(30) /* Target Address Increment Setting. */
#define PXA_DCMD_FLOWSRC BIT(29) /* Flow Control by the source. */
#define PXA_DCMD_FLOWTRG BIT(28) /* Flow Control by the target. */
#define PXA_DCMD_STARTIRQEN BIT(22) /* Start Interrupt Enable */
#define PXA_DCMD_ENDIRQEN BIT(21) /* End Interrupt Enable */
#define PXA_DCMD_ENDIAN BIT(18) /* Device Endian-ness. */
#define PXA_DCMD_BURST8 (1 << 16) /* 8 byte burst */
#define PXA_DCMD_BURST16 (2 << 16) /* 16 byte burst */
#define PXA_DCMD_BURST32 (3 << 16) /* 32 byte burst */
#define PXA_DCMD_WIDTH1 (1 << 14) /* 1 byte width */
#define PXA_DCMD_WIDTH2 (2 << 14) /* 2 byte width (HalfWord) */
#define PXA_DCMD_WIDTH4 (3 << 14) /* 4 byte width (Word) */
#define PXA_DCMD_LENGTH 0x01fff /* length mask (max = 8K - 1) */
#define PDMA_ALIGNMENT 3
#define PDMA_MAX_DESC_BYTES (PXA_DCMD_LENGTH & ~((1 << PDMA_ALIGNMENT) - 1))
struct pxad_desc_hw {
u32 ddadr; /* Points to the next descriptor + flags */
u32 dsadr; /* DSADR value for the current transfer */
u32 dtadr; /* DTADR value for the current transfer */
u32 dcmd; /* DCMD value for the current transfer */
} __aligned(16);
struct pxad_desc_sw {
struct virt_dma_desc vd; /* Virtual descriptor */
int nb_desc; /* Number of hw. descriptors */
size_t len; /* Number of bytes xfered */
dma_addr_t first; /* First descriptor's addr */
/* At least one descriptor has a src/dst address that is not a multiple of 8 */
bool misaligned;
bool cyclic;
struct dma_pool *desc_pool; /* Channel's used allocator */
struct pxad_desc_hw *hw_desc[]; /* DMA coherent descriptors */
};
struct pxad_phy {
int idx;
void __iomem *base;
struct pxad_chan *vchan;
};
struct pxad_chan {
struct virt_dma_chan vc; /* Virtual channel */
u32 drcmr; /* Requestor of the channel */
enum pxad_chan_prio prio; /* Required priority of phy */
/*
* At least one desc_sw among the submitted or issued transfers on this
* channel has an address for which addr % 8 != 0. This implies the DALGN
* setting on the phy.
*/
bool misaligned;
struct dma_slave_config cfg; /* Runtime config */
/* protected by vc->lock */
struct pxad_phy *phy;
struct dma_pool *desc_pool; /* Descriptors pool */
};
struct pxad_device {
struct dma_device slave;
int nr_chans;
void __iomem *base;
struct pxad_phy *phys;
spinlock_t phy_lock; /* Phy association */
#ifdef CONFIG_DEBUG_FS
struct dentry *dbgfs_root;
struct dentry *dbgfs_state;
struct dentry **dbgfs_chan;
#endif
};
#define tx_to_pxad_desc(tx) \
container_of(tx, struct pxad_desc_sw, async_tx)
#define to_pxad_chan(dchan) \
container_of(dchan, struct pxad_chan, vc.chan)
#define to_pxad_dev(dmadev) \
container_of(dmadev, struct pxad_device, slave)
#define to_pxad_sw_desc(_vd) \
container_of((_vd), struct pxad_desc_sw, vd)
#define _phy_readl_relaxed(phy, _reg) \
readl_relaxed((phy)->base + _reg((phy)->idx))
#define phy_readl_relaxed(phy, _reg) \
({ \
u32 _v; \
_v = readl_relaxed((phy)->base + _reg((phy)->idx)); \
dev_vdbg(&phy->vchan->vc.chan.dev->device, \
"%s(): readl(%s): 0x%08x\n", __func__, #_reg, \
_v); \
_v; \
})
#define phy_writel(phy, val, _reg) \
do { \
writel((val), (phy)->base + _reg((phy)->idx)); \
dev_vdbg(&phy->vchan->vc.chan.dev->device, \
"%s(): writel(0x%08x, %s)\n", \
__func__, (u32)(val), #_reg); \
} while (0)
#define phy_writel_relaxed(phy, val, _reg) \
do { \
writel_relaxed((val), (phy)->base + _reg((phy)->idx)); \
dev_vdbg(&phy->vchan->vc.chan.dev->device, \
"%s(): writel_relaxed(0x%08x, %s)\n", \
__func__, (u32)(val), #_reg); \
} while (0)
static unsigned int pxad_drcmr(unsigned int line)
{
if (line < 64)
return 0x100 + line * 4;
return 0x1000 + line * 4;
}
/*
* Debug fs
*/
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/seq_file.h>
static int dbg_show_requester_chan(struct seq_file *s, void *p)
{
int pos = 0;
struct pxad_phy *phy = s->private;
int i;
u32 drcmr;
pos += seq_printf(s, "DMA channel %d requester :\n", phy->idx);
for (i = 0; i < 70; i++) {
drcmr = readl_relaxed(phy->base + pxad_drcmr(i));
if ((drcmr & DRCMR_CHLNUM) == phy->idx)
pos += seq_printf(s, "\tRequester %d (MAPVLD=%d)\n", i,
!!(drcmr & DRCMR_MAPVLD));
}
return pos;
}
static inline int dbg_burst_from_dcmd(u32 dcmd)
{
int burst = (dcmd >> 16) & 0x3;
return burst ? 4 << burst : 0;
}
static int is_phys_valid(unsigned long addr)
{
return pfn_valid(__phys_to_pfn(addr));
}
#define PXA_DCSR_STR(flag) (dcsr & PXA_DCSR_##flag ? #flag" " : "")
#define PXA_DCMD_STR(flag) (dcmd & PXA_DCMD_##flag ? #flag" " : "")
static int dbg_show_descriptors(struct seq_file *s, void *p)
{
struct pxad_phy *phy = s->private;
int i, max_show = 20, burst, width;
u32 dcmd;
unsigned long phys_desc, ddadr;
struct pxad_desc_hw *desc;
phys_desc = ddadr = _phy_readl_relaxed(phy, DDADR);
seq_printf(s, "DMA channel %d descriptors :\n", phy->idx);
seq_printf(s, "[%03d] First descriptor unknown\n", 0);
for (i = 1; i < max_show && is_phys_valid(phys_desc); i++) {
desc = phys_to_virt(phys_desc);
dcmd = desc->dcmd;
burst = dbg_burst_from_dcmd(dcmd);
width = (1 << ((dcmd >> 14) & 0x3)) >> 1;
seq_printf(s, "[%03d] Desc at %08lx(virt %p)\n",
i, phys_desc, desc);
seq_printf(s, "\tDDADR = %08x\n", desc->ddadr);
seq_printf(s, "\tDSADR = %08x\n", desc->dsadr);
seq_printf(s, "\tDTADR = %08x\n", desc->dtadr);
seq_printf(s, "\tDCMD = %08x (%s%s%s%s%s%s%sburst=%d width=%d len=%d)\n",
dcmd,
PXA_DCMD_STR(INCSRCADDR), PXA_DCMD_STR(INCTRGADDR),
PXA_DCMD_STR(FLOWSRC), PXA_DCMD_STR(FLOWTRG),
PXA_DCMD_STR(STARTIRQEN), PXA_DCMD_STR(ENDIRQEN),
PXA_DCMD_STR(ENDIAN), burst, width,
dcmd & PXA_DCMD_LENGTH);
phys_desc = desc->ddadr;
}
if (i == max_show)
seq_printf(s, "[%03d] Desc at %08lx ... max display reached\n",
i, phys_desc);
else
seq_printf(s, "[%03d] Desc at %08lx is %s\n",
i, phys_desc, phys_desc == DDADR_STOP ?
"DDADR_STOP" : "invalid");
return 0;
}
static int dbg_show_chan_state(struct seq_file *s, void *p)
{
struct pxad_phy *phy = s->private;
u32 dcsr, dcmd;
int burst, width;
static const char * const str_prio[] = {
"high", "normal", "low", "invalid"
};
dcsr = _phy_readl_relaxed(phy, DCSR);
dcmd = _phy_readl_relaxed(phy, DCMD);
burst = dbg_burst_from_dcmd(dcmd);
width = (1 << ((dcmd >> 14) & 0x3)) >> 1;
seq_printf(s, "DMA channel %d\n", phy->idx);
seq_printf(s, "\tPriority : %s\n",
str_prio[(phy->idx & 0xf) / 4]);
seq_printf(s, "\tUnaligned transfer bit: %s\n",
_phy_readl_relaxed(phy, DALGN) & BIT(phy->idx) ?
"yes" : "no");
seq_printf(s, "\tDCSR = %08x (%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s)\n",
dcsr, PXA_DCSR_STR(RUN), PXA_DCSR_STR(NODESC),
PXA_DCSR_STR(STOPIRQEN), PXA_DCSR_STR(EORIRQEN),
PXA_DCSR_STR(EORJMPEN), PXA_DCSR_STR(EORSTOPEN),
PXA_DCSR_STR(SETCMPST), PXA_DCSR_STR(CLRCMPST),
PXA_DCSR_STR(CMPST), PXA_DCSR_STR(EORINTR),
PXA_DCSR_STR(REQPEND), PXA_DCSR_STR(STOPSTATE),
PXA_DCSR_STR(ENDINTR), PXA_DCSR_STR(STARTINTR),
PXA_DCSR_STR(BUSERR));
seq_printf(s, "\tDCMD = %08x (%s%s%s%s%s%s%sburst=%d width=%d len=%d)\n",
dcmd,
PXA_DCMD_STR(INCSRCADDR), PXA_DCMD_STR(INCTRGADDR),
PXA_DCMD_STR(FLOWSRC), PXA_DCMD_STR(FLOWTRG),
PXA_DCMD_STR(STARTIRQEN), PXA_DCMD_STR(ENDIRQEN),
PXA_DCMD_STR(ENDIAN), burst, width, dcmd & PXA_DCMD_LENGTH);
seq_printf(s, "\tDSADR = %08x\n", _phy_readl_relaxed(phy, DSADR));
seq_printf(s, "\tDTADR = %08x\n", _phy_readl_relaxed(phy, DTADR));
seq_printf(s, "\tDDADR = %08x\n", _phy_readl_relaxed(phy, DDADR));
return 0;
}
static int dbg_show_state(struct seq_file *s, void *p)
{
struct pxad_device *pdev = s->private;
/* basic device status */
seq_puts(s, "DMA engine status\n");
seq_printf(s, "\tChannel number: %d\n", pdev->nr_chans);
return 0;
}
#define DBGFS_FUNC_DECL(name) \
static int dbg_open_##name(struct inode *inode, struct file *file) \
{ \
return single_open(file, dbg_show_##name, inode->i_private); \
} \
static const struct file_operations dbg_fops_##name = { \
.owner = THIS_MODULE, \
.open = dbg_open_##name, \
.llseek = seq_lseek, \
.read = seq_read, \
.release = single_release, \
}
DBGFS_FUNC_DECL(state);
DBGFS_FUNC_DECL(chan_state);
DBGFS_FUNC_DECL(descriptors);
DBGFS_FUNC_DECL(requester_chan);
static struct dentry *pxad_dbg_alloc_chan(struct pxad_device *pdev,
int ch, struct dentry *chandir)
{
char chan_name[11];
struct dentry *chan, *chan_state = NULL, *chan_descr = NULL;
struct dentry *chan_reqs = NULL;
void *dt;
scnprintf(chan_name, sizeof(chan_name), "%d", ch);
chan = debugfs_create_dir(chan_name, chandir);
dt = (void *)&pdev->phys[ch];
if (chan)
chan_state = debugfs_create_file("state", 0400, chan, dt,
&dbg_fops_chan_state);
if (chan_state)
chan_descr = debugfs_create_file("descriptors", 0400, chan, dt,
&dbg_fops_descriptors);
if (chan_descr)
chan_reqs = debugfs_create_file("requesters", 0400, chan, dt,
&dbg_fops_requester_chan);
if (!chan_reqs)
goto err_state;
return chan;
err_state:
debugfs_remove_recursive(chan);
return NULL;
}
static void pxad_init_debugfs(struct pxad_device *pdev)
{
int i;
struct dentry *chandir;
pdev->dbgfs_root = debugfs_create_dir(dev_name(pdev->slave.dev), NULL);
if (IS_ERR(pdev->dbgfs_root) || !pdev->dbgfs_root)
goto err_root;
pdev->dbgfs_state = debugfs_create_file("state", 0400, pdev->dbgfs_root,
pdev, &dbg_fops_state);
if (!pdev->dbgfs_state)
goto err_state;
pdev->dbgfs_chan =
kmalloc_array(pdev->nr_chans, sizeof(*pdev->dbgfs_state),
GFP_KERNEL);
if (!pdev->dbgfs_chan)
goto err_alloc;
chandir = debugfs_create_dir("channels", pdev->dbgfs_root);
if (!chandir)
goto err_chandir;
for (i = 0; i < pdev->nr_chans; i++) {
pdev->dbgfs_chan[i] = pxad_dbg_alloc_chan(pdev, i, chandir);
if (!pdev->dbgfs_chan[i])
goto err_chans;
}
return;
err_chans:
err_chandir:
kfree(pdev->dbgfs_chan);
err_alloc:
err_state:
debugfs_remove_recursive(pdev->dbgfs_root);
err_root:
pr_err("pxad: debugfs is not available\n");
}
static void pxad_cleanup_debugfs(struct pxad_device *pdev)
{
debugfs_remove_recursive(pdev->dbgfs_root);
}
#else
static inline void pxad_init_debugfs(struct pxad_device *pdev) {}
static inline void pxad_cleanup_debugfs(struct pxad_device *pdev) {}
#endif
/*
* In the transition phase where legacy pxa handling is done at the same time as
* mmp_dma, the DMA physical channel split between the 2 DMA providers is done
* through legacy_reserved. Legacy code reserves DMA channels by setting
* the corresponding bits in legacy_reserved.
*/
static u32 legacy_reserved;
static u32 legacy_unavailable;
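For illustration only (the helper below is an invented name, not part of this file), the legacy side of that split boils down to setting a bit before this driver ever considers the channel:

/* Hypothetical legacy-side helper: claim physical channel 'chan' so that
 * lookup_phy() and pxad_int_handler() below leave it alone. */
static void pxad_legacy_reserve(unsigned int chan)
{
	if (chan < 32)
		legacy_reserved |= BIT(chan);
}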
static struct pxad_phy *lookup_phy(struct pxad_chan *pchan)
{
int prio, i;
struct pxad_device *pdev = to_pxad_dev(pchan->vc.chan.device);
struct pxad_phy *phy, *found = NULL;
unsigned long flags;
/*
* dma channel priorities
* ch 0 - 3, 16 - 19 <--> (0)
* ch 4 - 7, 20 - 23 <--> (1)
* ch 8 - 11, 24 - 27 <--> (2)
* ch 12 - 15, 28 - 31 <--> (3)
*/
spin_lock_irqsave(&pdev->phy_lock, flags);
for (prio = pchan->prio; prio >= PXAD_PRIO_HIGHEST; prio--) {
for (i = 0; i < pdev->nr_chans; i++) {
if (prio != (i & 0xf) >> 2)
continue;
if ((i < 32) && (legacy_reserved & BIT(i)))
continue;
phy = &pdev->phys[i];
if (!phy->vchan) {
phy->vchan = pchan;
found = phy;
if (i < 32)
legacy_unavailable |= BIT(i);
goto out_unlock;
}
}
}
out_unlock:
spin_unlock_irqrestore(&pdev->phy_lock, flags);
dev_dbg(&pchan->vc.chan.dev->device,
"%s(): phy=%p(%d)\n", __func__, found,
found ? found->idx : -1);
return found;
}
static void pxad_free_phy(struct pxad_chan *chan)
{
struct pxad_device *pdev = to_pxad_dev(chan->vc.chan.device);
unsigned long flags;
u32 reg;
int i;
dev_dbg(&chan->vc.chan.dev->device,
"%s(): freeing\n", __func__);
if (!chan->phy)
return;
/* clear the channel mapping in DRCMR */
reg = pxad_drcmr(chan->drcmr);
writel_relaxed(0, chan->phy->base + reg);
spin_lock_irqsave(&pdev->phy_lock, flags);
for (i = 0; i < 32; i++)
if (chan->phy == &pdev->phys[i])
legacy_unavailable &= ~BIT(i);
chan->phy->vchan = NULL;
chan->phy = NULL;
spin_unlock_irqrestore(&pdev->phy_lock, flags);
}
static bool is_chan_running(struct pxad_chan *chan)
{
u32 dcsr;
struct pxad_phy *phy = chan->phy;
if (!phy)
return false;
dcsr = phy_readl_relaxed(phy, DCSR);
return dcsr & PXA_DCSR_RUN;
}
static bool is_running_chan_misaligned(struct pxad_chan *chan)
{
u32 dalgn;
BUG_ON(!chan->phy);
dalgn = phy_readl_relaxed(chan->phy, DALGN);
return dalgn & (BIT(chan->phy->idx));
}
static void phy_enable(struct pxad_phy *phy, bool misaligned)
{
u32 reg, dalgn;
if (!phy->vchan)
return;
dev_dbg(&phy->vchan->vc.chan.dev->device,
"%s(); phy=%p(%d) misaligned=%d\n", __func__,
phy, phy->idx, misaligned);
reg = pxad_drcmr(phy->vchan->drcmr);
writel_relaxed(DRCMR_MAPVLD | phy->idx, phy->base + reg);
dalgn = phy_readl_relaxed(phy, DALGN);
if (misaligned)
dalgn |= BIT(phy->idx);
else
dalgn &= ~BIT(phy->idx);
phy_writel_relaxed(phy, dalgn, DALGN);
phy_writel(phy, PXA_DCSR_STOPIRQEN | PXA_DCSR_ENDINTR |
PXA_DCSR_BUSERR | PXA_DCSR_RUN, DCSR);
}
static void phy_disable(struct pxad_phy *phy)
{
u32 dcsr;
if (!phy)
return;
dcsr = phy_readl_relaxed(phy, DCSR);
dev_dbg(&phy->vchan->vc.chan.dev->device,
"%s(): phy=%p(%d)\n", __func__, phy, phy->idx);
phy_writel(phy, dcsr & ~PXA_DCSR_RUN & ~PXA_DCSR_STOPIRQEN, DCSR);
}
static void pxad_launch_chan(struct pxad_chan *chan,
struct pxad_desc_sw *desc)
{
dev_dbg(&chan->vc.chan.dev->device,
"%s(): desc=%p\n", __func__, desc);
if (!chan->phy) {
chan->phy = lookup_phy(chan);
if (!chan->phy) {
dev_dbg(&chan->vc.chan.dev->device,
"%s(): no free dma channel\n", __func__);
return;
}
}
/*
* Program the descriptor's address into the DMA controller,
* then start the DMA transaction
*/
phy_writel(chan->phy, desc->first, DDADR);
phy_enable(chan->phy, chan->misaligned);
}
static void set_updater_desc(struct pxad_desc_sw *sw_desc,
unsigned long flags)
{
struct pxad_desc_hw *updater =
sw_desc->hw_desc[sw_desc->nb_desc - 1];
dma_addr_t dma = sw_desc->hw_desc[sw_desc->nb_desc - 2]->ddadr;
updater->ddadr = DDADR_STOP;
updater->dsadr = dma;
updater->dtadr = dma + 8;
updater->dcmd = PXA_DCMD_WIDTH4 | PXA_DCMD_BURST32 |
(PXA_DCMD_LENGTH & sizeof(u32));
if (flags & DMA_PREP_INTERRUPT)
updater->dcmd |= PXA_DCMD_ENDIRQEN;
}
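/*
 * Completion tracking: the "updater" descriptor set up above copies one
 * 32-bit word from its own ddadr field (which holds DDADR_STOP) over its
 * own dtadr field (dsadr + 8). Once the chain has fully run, dtadr no
 * longer equals dsadr + 8, which is exactly what is_desc_completed()
 * checks below.
 */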
static bool is_desc_completed(struct virt_dma_desc *vd)
{
struct pxad_desc_sw *sw_desc = to_pxad_sw_desc(vd);
struct pxad_desc_hw *updater =
sw_desc->hw_desc[sw_desc->nb_desc - 1];
return updater->dtadr != (updater->dsadr + 8);
}
static void pxad_desc_chain(struct virt_dma_desc *vd1,
struct virt_dma_desc *vd2)
{
struct pxad_desc_sw *desc1 = to_pxad_sw_desc(vd1);
struct pxad_desc_sw *desc2 = to_pxad_sw_desc(vd2);
dma_addr_t dma_to_chain;
dma_to_chain = desc2->first;
desc1->hw_desc[desc1->nb_desc - 1]->ddadr = dma_to_chain;
}
static bool pxad_try_hotchain(struct virt_dma_chan *vc,
struct virt_dma_desc *vd)
{
struct virt_dma_desc *vd_last_issued = NULL;
struct pxad_chan *chan = to_pxad_chan(&vc->chan);
/*
* Attempt to hot chain the tx if the phy is still running. This is
* considered successful only if either the channel is still running
* after the chaining, or if the chained transfer is completed after
* having been hot chained.
* A change of alignment is not allowed, and forbids hotchaining.
*/
if (is_chan_running(chan)) {
BUG_ON(list_empty(&vc->desc_issued));
if (!is_running_chan_misaligned(chan) &&
to_pxad_sw_desc(vd)->misaligned)
return false;
vd_last_issued = list_entry(vc->desc_issued.prev,
struct virt_dma_desc, node);
pxad_desc_chain(vd_last_issued, vd);
if (is_chan_running(chan) || is_desc_completed(vd_last_issued))
return true;
}
return false;
}
static unsigned int clear_chan_irq(struct pxad_phy *phy)
{
u32 dcsr;
u32 dint = readl(phy->base + DINT);
if (!(dint & BIT(phy->idx)))
return PXA_DCSR_RUN;
/* clear irq */
dcsr = phy_readl_relaxed(phy, DCSR);
phy_writel(phy, dcsr, DCSR);
if ((dcsr & PXA_DCSR_BUSERR) && (phy->vchan))
dev_warn(&phy->vchan->vc.chan.dev->device,
"%s(chan=%p): PXA_DCSR_BUSERR\n",
__func__, &phy->vchan);
return dcsr & ~PXA_DCSR_RUN;
}
static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
{
struct pxad_phy *phy = dev_id;
struct pxad_chan *chan = phy->vchan;
struct virt_dma_desc *vd, *tmp;
unsigned int dcsr;
unsigned long flags;
BUG_ON(!chan);
dcsr = clear_chan_irq(phy);
if (dcsr & PXA_DCSR_RUN)
return IRQ_NONE;
spin_lock_irqsave(&chan->vc.lock, flags);
list_for_each_entry_safe(vd, tmp, &chan->vc.desc_issued, node) {
dev_dbg(&chan->vc.chan.dev->device,
"%s(): checking txd %p[%x]: completed=%d\n",
__func__, vd, vd->tx.cookie, is_desc_completed(vd));
if (is_desc_completed(vd)) {
list_del(&vd->node);
vchan_cookie_complete(vd);
} else {
break;
}
}
if (dcsr & PXA_DCSR_STOPSTATE) {
dev_dbg(&chan->vc.chan.dev->device,
"%s(): channel stopped, submitted_empty=%d issued_empty=%d",
__func__,
list_empty(&chan->vc.desc_submitted),
list_empty(&chan->vc.desc_issued));
phy_writel_relaxed(phy, dcsr & ~PXA_DCSR_STOPIRQEN, DCSR);
if (list_empty(&chan->vc.desc_issued)) {
chan->misaligned =
!list_empty(&chan->vc.desc_submitted);
} else {
vd = list_first_entry(&chan->vc.desc_issued,
struct virt_dma_desc, node);
pxad_launch_chan(chan, to_pxad_sw_desc(vd));
}
}
spin_unlock_irqrestore(&chan->vc.lock, flags);
return IRQ_HANDLED;
}
static irqreturn_t pxad_int_handler(int irq, void *dev_id)
{
struct pxad_device *pdev = dev_id;
struct pxad_phy *phy;
u32 dint = readl(pdev->base + DINT);
int i, ret = IRQ_NONE;
while (dint) {
i = __ffs(dint);
dint &= (dint - 1);
phy = &pdev->phys[i];
if ((i < 32) && (legacy_reserved & BIT(i)))
continue;
if (pxad_chan_handler(irq, phy) == IRQ_HANDLED)
ret = IRQ_HANDLED;
}
return ret;
}
static int pxad_alloc_chan_resources(struct dma_chan *dchan)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
struct pxad_device *pdev = to_pxad_dev(chan->vc.chan.device);
if (chan->desc_pool)
return 1;
chan->desc_pool = dma_pool_create(dma_chan_name(dchan),
pdev->slave.dev,
sizeof(struct pxad_desc_hw),
__alignof__(struct pxad_desc_hw),
0);
if (!chan->desc_pool) {
dev_err(&chan->vc.chan.dev->device,
"%s(): unable to allocate descriptor pool\n",
__func__);
return -ENOMEM;
}
return 1;
}
static void pxad_free_chan_resources(struct dma_chan *dchan)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
vchan_free_chan_resources(&chan->vc);
dma_pool_destroy(chan->desc_pool);
chan->desc_pool = NULL;
}
static void pxad_free_desc(struct virt_dma_desc *vd)
{
int i;
dma_addr_t dma;
struct pxad_desc_sw *sw_desc = to_pxad_sw_desc(vd);
BUG_ON(sw_desc->nb_desc == 0);
for (i = sw_desc->nb_desc - 1; i >= 0; i--) {
if (i > 0)
dma = sw_desc->hw_desc[i - 1]->ddadr;
else
dma = sw_desc->first;
dma_pool_free(sw_desc->desc_pool,
sw_desc->hw_desc[i], dma);
}
sw_desc->nb_desc = 0;
kfree(sw_desc);
}
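/*
 * A software descriptor owns an array of hardware descriptors allocated from
 * the per-channel dma_pool. The hardware descriptors are linked through their
 * ddadr field as they are allocated, so the controller can walk the chain on
 * its own; only the bus address of the first one (sw_desc->first) then needs
 * to be handed to the channel.
 */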
static struct pxad_desc_sw *
pxad_alloc_desc(struct pxad_chan *chan, unsigned int nb_hw_desc)
{
struct pxad_desc_sw *sw_desc;
dma_addr_t dma;
int i;
sw_desc = kzalloc(sizeof(*sw_desc) +
nb_hw_desc * sizeof(struct pxad_desc_hw *),
GFP_NOWAIT);
if (!sw_desc)
return NULL;
sw_desc->desc_pool = chan->desc_pool;
for (i = 0; i < nb_hw_desc; i++) {
sw_desc->hw_desc[i] = dma_pool_alloc(sw_desc->desc_pool,
GFP_NOWAIT, &dma);
if (!sw_desc->hw_desc[i]) {
dev_err(&chan->vc.chan.dev->device,
"%s(): Couldn't allocate the %dth hw_desc from dma_pool %p\n",
__func__, i, sw_desc->desc_pool);
goto err;
}
if (i == 0)
sw_desc->first = dma;
else
sw_desc->hw_desc[i - 1]->ddadr = dma;
sw_desc->nb_desc++;
}
return sw_desc;
err:
pxad_free_desc(&sw_desc->vd);
return NULL;
}
static dma_cookie_t pxad_tx_submit(struct dma_async_tx_descriptor *tx)
{
struct virt_dma_chan *vc = to_virt_chan(tx->chan);
struct pxad_chan *chan = to_pxad_chan(&vc->chan);
struct virt_dma_desc *vd_chained = NULL,
*vd = container_of(tx, struct virt_dma_desc, tx);
dma_cookie_t cookie;
unsigned long flags;
set_updater_desc(to_pxad_sw_desc(vd), tx->flags);
spin_lock_irqsave(&vc->lock, flags);
cookie = dma_cookie_assign(tx);
if (list_empty(&vc->desc_submitted) && pxad_try_hotchain(vc, vd)) {
list_move_tail(&vd->node, &vc->desc_issued);
dev_dbg(&chan->vc.chan.dev->device,
"%s(): txd %p[%x]: submitted (hot linked)\n",
__func__, vd, cookie);
goto out;
}
/*
 * Fall back to placing the tx in the submitted queue
*/
if (!list_empty(&vc->desc_submitted)) {
vd_chained = list_entry(vc->desc_submitted.prev,
struct virt_dma_desc, node);
/*
* Only chain the descriptors if no new misalignment is
* introduced. If a new misalignment is chained, let the channel
* stop, and be relaunched in misalign mode from the irq
* handler.
*/
if (chan->misaligned || !to_pxad_sw_desc(vd)->misaligned)
pxad_desc_chain(vd_chained, vd);
else
vd_chained = NULL;
}
dev_dbg(&chan->vc.chan.dev->device,
"%s(): txd %p[%x]: submitted (%s linked)\n",
__func__, vd, cookie, vd_chained ? "cold" : "not");
list_move_tail(&vd->node, &vc->desc_submitted);
chan->misaligned |= to_pxad_sw_desc(vd)->misaligned;
out:
spin_unlock_irqrestore(&vc->lock, flags);
return cookie;
}
static void pxad_issue_pending(struct dma_chan *dchan)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
struct virt_dma_desc *vd_first;
unsigned long flags;
spin_lock_irqsave(&chan->vc.lock, flags);
if (list_empty(&chan->vc.desc_submitted))
goto out;
vd_first = list_first_entry(&chan->vc.desc_submitted,
struct virt_dma_desc, node);
dev_dbg(&chan->vc.chan.dev->device,
"%s(): txd %p[%x]", __func__, vd_first, vd_first->tx.cookie);
vchan_issue_pending(&chan->vc);
if (!pxad_try_hotchain(&chan->vc, vd_first))
pxad_launch_chan(chan, to_pxad_sw_desc(vd_first));
out:
spin_unlock_irqrestore(&chan->vc.lock, flags);
}
static inline struct dma_async_tx_descriptor *
pxad_tx_prep(struct virt_dma_chan *vc, struct virt_dma_desc *vd,
unsigned long tx_flags)
{
struct dma_async_tx_descriptor *tx;
struct pxad_chan *chan = container_of(vc, struct pxad_chan, vc);
tx = vchan_tx_prep(vc, vd, tx_flags);
tx->tx_submit = pxad_tx_submit;
dev_dbg(&chan->vc.chan.dev->device,
"%s(): vc=%p txd=%p[%x] flags=0x%lx\n", __func__,
vc, vd, vd->tx.cookie,
tx_flags);
return tx;
}
static void pxad_get_config(struct pxad_chan *chan,
enum dma_transfer_direction dir,
u32 *dcmd, u32 *dev_src, u32 *dev_dst)
{
u32 maxburst = 0, dev_addr = 0;
enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
*dcmd = 0;
if (chan->cfg.direction == DMA_DEV_TO_MEM) {
maxburst = chan->cfg.src_maxburst;
width = chan->cfg.src_addr_width;
dev_addr = chan->cfg.src_addr;
*dev_src = dev_addr;
*dcmd |= PXA_DCMD_INCTRGADDR | PXA_DCMD_FLOWSRC;
}
if (chan->cfg.direction == DMA_MEM_TO_DEV) {
maxburst = chan->cfg.dst_maxburst;
width = chan->cfg.dst_addr_width;
dev_addr = chan->cfg.dst_addr;
*dev_dst = dev_addr;
*dcmd |= PXA_DCMD_INCSRCADDR | PXA_DCMD_FLOWTRG;
}
if (chan->cfg.direction == DMA_MEM_TO_MEM)
*dcmd |= PXA_DCMD_BURST32 | PXA_DCMD_INCTRGADDR |
PXA_DCMD_INCSRCADDR;
dev_dbg(&chan->vc.chan.dev->device,
"%s(): dev_addr=0x%x maxburst=%d width=%d dir=%d\n",
__func__, dev_addr, maxburst, width, dir);
if (width == DMA_SLAVE_BUSWIDTH_1_BYTE)
*dcmd |= PXA_DCMD_WIDTH1;
else if (width == DMA_SLAVE_BUSWIDTH_2_BYTES)
*dcmd |= PXA_DCMD_WIDTH2;
else if (width == DMA_SLAVE_BUSWIDTH_4_BYTES)
*dcmd |= PXA_DCMD_WIDTH4;
if (maxburst == 8)
*dcmd |= PXA_DCMD_BURST8;
else if (maxburst == 16)
*dcmd |= PXA_DCMD_BURST16;
else if (maxburst == 32)
*dcmd |= PXA_DCMD_BURST32;
/* FIXME: drivers should be ported over to use the filter
* function. Once that's done, the following two lines can
* be removed.
*/
if (chan->cfg.slave_id)
chan->drcmr = chan->cfg.slave_id;
}
static struct dma_async_tx_descriptor *
pxad_prep_memcpy(struct dma_chan *dchan,
dma_addr_t dma_dst, dma_addr_t dma_src,
size_t len, unsigned long flags)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
struct pxad_desc_sw *sw_desc;
struct pxad_desc_hw *hw_desc;
u32 dcmd;
unsigned int i, nb_desc = 0;
size_t copy;
if (!dchan || !len)
return NULL;
dev_dbg(&chan->vc.chan.dev->device,
"%s(): dma_dst=0x%lx dma_src=0x%lx len=%zu flags=%lx\n",
__func__, (unsigned long)dma_dst, (unsigned long)dma_src,
len, flags);
pxad_get_config(chan, DMA_MEM_TO_MEM, &dcmd, NULL, NULL);
nb_desc = DIV_ROUND_UP(len, PDMA_MAX_DESC_BYTES);
sw_desc = pxad_alloc_desc(chan, nb_desc + 1);
if (!sw_desc)
return NULL;
sw_desc->len = len;
if (!IS_ALIGNED(dma_src, 1 << PDMA_ALIGNMENT) ||
!IS_ALIGNED(dma_dst, 1 << PDMA_ALIGNMENT))
sw_desc->misaligned = true;
i = 0;
do {
hw_desc = sw_desc->hw_desc[i++];
copy = min_t(size_t, len, PDMA_MAX_DESC_BYTES);
hw_desc->dcmd = dcmd | (PXA_DCMD_LENGTH & copy);
hw_desc->dsadr = dma_src;
hw_desc->dtadr = dma_dst;
len -= copy;
dma_src += copy;
dma_dst += copy;
} while (len);
set_updater_desc(sw_desc, flags);
return pxad_tx_prep(&chan->vc, &sw_desc->vd, flags);
}
static struct dma_async_tx_descriptor *
pxad_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction dir,
unsigned long flags, void *context)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
struct pxad_desc_sw *sw_desc;
size_t len, avail;
struct scatterlist *sg;
dma_addr_t dma;
u32 dcmd, dsadr = 0, dtadr = 0;
unsigned int nb_desc = 0, i, j = 0;
if ((sgl == NULL) || (sg_len == 0))
return NULL;
pxad_get_config(chan, dir, &dcmd, &dsadr, &dtadr);
dev_dbg(&chan->vc.chan.dev->device,
"%s(): dir=%d flags=%lx\n", __func__, dir, flags);
for_each_sg(sgl, sg, sg_len, i)
nb_desc += DIV_ROUND_UP(sg_dma_len(sg), PDMA_MAX_DESC_BYTES);
sw_desc = pxad_alloc_desc(chan, nb_desc + 1);
if (!sw_desc)
return NULL;
for_each_sg(sgl, sg, sg_len, i) {
dma = sg_dma_address(sg);
avail = sg_dma_len(sg);
sw_desc->len += avail;
do {
len = min_t(size_t, avail, PDMA_MAX_DESC_BYTES);
if (dma & 0x7)
sw_desc->misaligned = true;
sw_desc->hw_desc[j]->dcmd =
dcmd | (PXA_DCMD_LENGTH & len);
sw_desc->hw_desc[j]->dsadr = dsadr ? dsadr : dma;
sw_desc->hw_desc[j++]->dtadr = dtadr ? dtadr : dma;
dma += len;
avail -= len;
} while (avail);
}
set_updater_desc(sw_desc, flags);
return pxad_tx_prep(&chan->vc, &sw_desc->vd, flags);
}
static struct dma_async_tx_descriptor *
pxad_prep_dma_cyclic(struct dma_chan *dchan,
dma_addr_t buf_addr, size_t len, size_t period_len,
enum dma_transfer_direction dir, unsigned long flags)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
struct pxad_desc_sw *sw_desc;
struct pxad_desc_hw **phw_desc;
dma_addr_t dma;
u32 dcmd, dsadr = 0, dtadr = 0;
unsigned int nb_desc = 0;
if (!dchan || !len || !period_len)
return NULL;
if ((dir != DMA_DEV_TO_MEM) && (dir != DMA_MEM_TO_DEV)) {
dev_err(&chan->vc.chan.dev->device,
"Unsupported direction for cyclic DMA\n");
return NULL;
}
/* the buffer length must be a multiple of period_len */
if (len % period_len != 0 || period_len > PDMA_MAX_DESC_BYTES ||
!IS_ALIGNED(period_len, 1 << PDMA_ALIGNMENT))
return NULL;
pxad_get_config(chan, dir, &dcmd, &dsadr, &dtadr);
dcmd |= PXA_DCMD_ENDIRQEN | (PXA_DCMD_LENGTH & period_len);
dev_dbg(&chan->vc.chan.dev->device,
"%s(): buf_addr=0x%lx len=%zu period=%zu dir=%d flags=%lx\n",
__func__, (unsigned long)buf_addr, len, period_len, dir, flags);
nb_desc = DIV_ROUND_UP(period_len, PDMA_MAX_DESC_BYTES);
nb_desc *= DIV_ROUND_UP(len, period_len);
sw_desc = pxad_alloc_desc(chan, nb_desc + 1);
if (!sw_desc)
return NULL;
sw_desc->cyclic = true;
sw_desc->len = len;
phw_desc = sw_desc->hw_desc;
dma = buf_addr;
do {
phw_desc[0]->dsadr = dsadr ? dsadr : dma;
phw_desc[0]->dtadr = dtadr ? dtadr : dma;
phw_desc[0]->dcmd = dcmd;
phw_desc++;
dma += period_len;
len -= period_len;
} while (len);
set_updater_desc(sw_desc, flags);
return pxad_tx_prep(&chan->vc, &sw_desc->vd, flags);
}
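/*
 * Typical cyclic usage from a client driver might look like the sketch below
 * (channel, buffer and period values are assumed, not taken from this file):
 *
 *	dmaengine_slave_config(chan, &cfg);
 *	desc = dmaengine_prep_dma_cyclic(chan, buf_dma, buf_len, period_len,
 *					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
 *	desc->callback = period_elapsed;
 *	dmaengine_submit(desc);
 *	dma_async_issue_pending(chan);
 */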
static int pxad_config(struct dma_chan *dchan,
struct dma_slave_config *cfg)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
if (!dchan)
return -EINVAL;
chan->cfg = *cfg;
return 0;
}
static int pxad_terminate_all(struct dma_chan *dchan)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
struct pxad_device *pdev = to_pxad_dev(chan->vc.chan.device);
struct virt_dma_desc *vd = NULL;
unsigned long flags;
struct pxad_phy *phy;
LIST_HEAD(head);
dev_dbg(&chan->vc.chan.dev->device,
"%s(): vchan %p: terminate all\n", __func__, &chan->vc);
spin_lock_irqsave(&chan->vc.lock, flags);
vchan_get_all_descriptors(&chan->vc, &head);
list_for_each_entry(vd, &head, node) {
dev_dbg(&chan->vc.chan.dev->device,
"%s(): cancelling txd %p[%x] (completed=%d)", __func__,
vd, vd->tx.cookie, is_desc_completed(vd));
}
phy = chan->phy;
if (phy) {
phy_disable(chan->phy);
pxad_free_phy(chan);
chan->phy = NULL;
spin_lock(&pdev->phy_lock);
phy->vchan = NULL;
spin_unlock(&pdev->phy_lock);
}
spin_unlock_irqrestore(&chan->vc.lock, flags);
vchan_dma_desc_free_list(&chan->vc, &head);
return 0;
}
static unsigned int pxad_residue(struct pxad_chan *chan,
dma_cookie_t cookie)
{
struct virt_dma_desc *vd = NULL;
struct pxad_desc_sw *sw_desc = NULL;
struct pxad_desc_hw *hw_desc = NULL;
u32 curr, start, len, end, residue = 0;
unsigned long flags;
bool passed = false;
int i;
/*
* If the channel does not have a phy pointer anymore, it has already
* been completed. Therefore, its residue is 0.
*/
if (!chan->phy)
return 0;
spin_lock_irqsave(&chan->vc.lock, flags);
vd = vchan_find_desc(&chan->vc, cookie);
if (!vd)
goto out;
sw_desc = to_pxad_sw_desc(vd);
if (sw_desc->hw_desc[0]->dcmd & PXA_DCMD_INCSRCADDR)
curr = phy_readl_relaxed(chan->phy, DSADR);
else
curr = phy_readl_relaxed(chan->phy, DTADR);
for (i = 0; i < sw_desc->nb_desc - 1; i++) {
hw_desc = sw_desc->hw_desc[i];
if (sw_desc->hw_desc[0]->dcmd & PXA_DCMD_INCSRCADDR)
start = hw_desc->dsadr;
else
start = hw_desc->dtadr;
len = hw_desc->dcmd & PXA_DCMD_LENGTH;
end = start + len;
/*
 * 'passed' is latched once we find the descriptor that the
 * current transfer pointer lies within. All descriptors that
 * occur in the list _after_ that partially handled descriptor
 * are still to be processed and are therefore added to the
 * residual byte count.
 */
if (passed) {
residue += len;
} else if (curr >= start && curr <= end) {
residue += end - curr;
passed = true;
}
}
if (!passed)
residue = sw_desc->len;
out:
spin_unlock_irqrestore(&chan->vc.lock, flags);
dev_dbg(&chan->vc.chan.dev->device,
"%s(): txd %p[%x] sw_desc=%p: %d\n",
__func__, vd, cookie, sw_desc, residue);
return residue;
}
static enum dma_status pxad_tx_status(struct dma_chan *dchan,
dma_cookie_t cookie,
struct dma_tx_state *txstate)
{
struct pxad_chan *chan = to_pxad_chan(dchan);
enum dma_status ret;
ret = dma_cookie_status(dchan, cookie, txstate);
if (likely(txstate && (ret != DMA_ERROR)))
dma_set_residue(txstate, pxad_residue(chan, cookie));
return ret;
}
static void pxad_free_channels(struct dma_device *dmadev)
{
struct pxad_chan *c, *cn;
list_for_each_entry_safe(c, cn, &dmadev->channels,
vc.chan.device_node) {
list_del(&c->vc.chan.device_node);
tasklet_kill(&c->vc.task);
}
}
static int pxad_remove(struct platform_device *op)
{
struct pxad_device *pdev = platform_get_drvdata(op);
pxad_cleanup_debugfs(pdev);
pxad_free_channels(&pdev->slave);
dma_async_device_unregister(&pdev->slave);
return 0;
}
static int pxad_init_phys(struct platform_device *op,
struct pxad_device *pdev,
unsigned int nb_phy_chans)
{
int irq0, irq, nr_irq = 0, i, ret = 0;
struct pxad_phy *phy;
irq0 = platform_get_irq(op, 0);
if (irq0 < 0)
return irq0;
pdev->phys = devm_kcalloc(&op->dev, nb_phy_chans,
sizeof(pdev->phys[0]), GFP_KERNEL);
if (!pdev->phys)
return -ENOMEM;
for (i = 0; i < nb_phy_chans; i++)
if (platform_get_irq(op, i) > 0)
nr_irq++;
for (i = 0; i < nb_phy_chans; i++) {
phy = &pdev->phys[i];
phy->base = pdev->base;
phy->idx = i;
irq = platform_get_irq(op, i);
if ((nr_irq > 1) && (irq > 0))
ret = devm_request_irq(&op->dev, irq,
pxad_chan_handler,
IRQF_SHARED, "pxa-dma", phy);
if ((nr_irq == 1) && (i == 0))
ret = devm_request_irq(&op->dev, irq0,
pxad_int_handler,
IRQF_SHARED, "pxa-dma", pdev);
if (ret) {
dev_err(pdev->slave.dev,
"%s(): can't request irq %d:%d\n", __func__,
irq, ret);
return ret;
}
}
return 0;
}
static const struct of_device_id pxad_dt_ids[] = {
{ .compatible = "marvell,pdma-1.0", },
{}
};
MODULE_DEVICE_TABLE(of, pxad_dt_ids);
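/*
 * Device-tree clients reference this controller with two cells, consumed
 * below as the DRCMR requestor line and the channel priority. A client node
 * could therefore look like (requestor 75 and priority 1 are illustrative
 * values only):
 *
 *	dmas = <&pdma 75 1>;
 *	dma-names = "rx";
 */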
static struct dma_chan *pxad_dma_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct pxad_device *d = ofdma->of_dma_data;
struct dma_chan *chan;
chan = dma_get_any_slave_channel(&d->slave);
if (!chan)
return NULL;
to_pxad_chan(chan)->drcmr = dma_spec->args[0];
to_pxad_chan(chan)->prio = dma_spec->args[1];
return chan;
}
static int pxad_init_dmadev(struct platform_device *op,
struct pxad_device *pdev,
unsigned int nr_phy_chans)
{
int ret;
unsigned int i;
struct pxad_chan *c;
pdev->nr_chans = nr_phy_chans;
INIT_LIST_HEAD(&pdev->slave.channels);
pdev->slave.device_alloc_chan_resources = pxad_alloc_chan_resources;
pdev->slave.device_free_chan_resources = pxad_free_chan_resources;
pdev->slave.device_tx_status = pxad_tx_status;
pdev->slave.device_issue_pending = pxad_issue_pending;
pdev->slave.device_config = pxad_config;
pdev->slave.device_terminate_all = pxad_terminate_all;
if (op->dev.coherent_dma_mask)
dma_set_mask(&op->dev, op->dev.coherent_dma_mask);
else
dma_set_mask(&op->dev, DMA_BIT_MASK(32));
ret = pxad_init_phys(op, pdev, nr_phy_chans);
if (ret)
return ret;
for (i = 0; i < nr_phy_chans; i++) {
c = devm_kzalloc(&op->dev, sizeof(*c), GFP_KERNEL);
if (!c)
return -ENOMEM;
c->vc.desc_free = pxad_free_desc;
vchan_init(&c->vc, &pdev->slave);
}
return dma_async_device_register(&pdev->slave);
}
static int pxad_probe(struct platform_device *op)
{
struct pxad_device *pdev;
const struct of_device_id *of_id;
struct mmp_dma_platdata *pdata = dev_get_platdata(&op->dev);
struct resource *iores;
int ret, dma_channels = 0;
const enum dma_slave_buswidth widths =
DMA_SLAVE_BUSWIDTH_1_BYTE | DMA_SLAVE_BUSWIDTH_2_BYTES |
DMA_SLAVE_BUSWIDTH_4_BYTES;
pdev = devm_kzalloc(&op->dev, sizeof(*pdev), GFP_KERNEL);
if (!pdev)
return -ENOMEM;
spin_lock_init(&pdev->phy_lock);
iores = platform_get_resource(op, IORESOURCE_MEM, 0);
pdev->base = devm_ioremap_resource(&op->dev, iores);
if (IS_ERR(pdev->base))
return PTR_ERR(pdev->base);
of_id = of_match_device(pxad_dt_ids, &op->dev);
if (of_id)
of_property_read_u32(op->dev.of_node, "#dma-channels",
&dma_channels);
else if (pdata && pdata->dma_channels)
dma_channels = pdata->dma_channels;
else
dma_channels = 32;	/* default to 32 channels */
dma_cap_set(DMA_SLAVE, pdev->slave.cap_mask);
dma_cap_set(DMA_MEMCPY, pdev->slave.cap_mask);
dma_cap_set(DMA_CYCLIC, pdev->slave.cap_mask);
dma_cap_set(DMA_PRIVATE, pdev->slave.cap_mask);
pdev->slave.device_prep_dma_memcpy = pxad_prep_memcpy;
pdev->slave.device_prep_slave_sg = pxad_prep_slave_sg;
pdev->slave.device_prep_dma_cyclic = pxad_prep_dma_cyclic;
pdev->slave.copy_align = PDMA_ALIGNMENT;
pdev->slave.src_addr_widths = widths;
pdev->slave.dst_addr_widths = widths;
pdev->slave.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
pdev->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
pdev->slave.dev = &op->dev;
ret = pxad_init_dmadev(op, pdev, dma_channels);
if (ret) {
dev_err(pdev->slave.dev, "unable to register\n");
return ret;
}
if (op->dev.of_node) {
/* Device-tree DMA controller registration */
ret = of_dma_controller_register(op->dev.of_node,
pxad_dma_xlate, pdev);
if (ret < 0) {
dev_err(pdev->slave.dev,
"of_dma_controller_register failed\n");
return ret;
}
}
platform_set_drvdata(op, pdev);
pxad_init_debugfs(pdev);
dev_info(pdev->slave.dev, "initialized %d channels\n", dma_channels);
return 0;
}
static const struct platform_device_id pxad_id_table[] = {
{ "pxa-dma", },
{ },
};
static struct platform_driver pxad_driver = {
.driver = {
.name = "pxa-dma",
.of_match_table = pxad_dt_ids,
},
.id_table = pxad_id_table,
.probe = pxad_probe,
.remove = pxad_remove,
};
bool pxad_filter_fn(struct dma_chan *chan, void *param)
{
struct pxad_chan *c = to_pxad_chan(chan);
struct pxad_param *p = param;
if (chan->device->dev->driver != &pxad_driver.driver)
return false;
c->drcmr = p->drcmr;
c->prio = p->prio;
return true;
}
EXPORT_SYMBOL_GPL(pxad_filter_fn);
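/*
 * A legacy client that cannot go through the device tree could still grab a
 * channel with the filter above; a minimal sketch, assuming the pxad_param
 * layout from include/linux/dma/pxa-dma.h and an arbitrary requestor line:
 *
 *	struct pxad_param param = { .prio = PXAD_PRIO_LOWEST, .drcmr = 24 };
 *	dma_cap_mask_t mask;
 *
 *	dma_cap_zero(mask);
 *	dma_cap_set(DMA_SLAVE, mask);
 *	chan = dma_request_channel(mask, pxad_filter_fn, &param);
 */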
int pxad_toggle_reserved_channel(int legacy_channel)
{
if (legacy_unavailable & (BIT(legacy_channel)))
return -EBUSY;
legacy_reserved ^= BIT(legacy_channel);
return 0;
}
EXPORT_SYMBOL_GPL(pxad_toggle_reserved_channel);
module_platform_driver(pxad_driver);
MODULE_DESCRIPTION("Marvell PXA Peripheral DMA Driver");
MODULE_AUTHOR("Robert Jarzmik <robert.jarzmik@free.fr>");
MODULE_LICENSE("GPL v2");
...@@ -1168,7 +1168,7 @@ static struct soc_data soc_s3c2443 = { ...@@ -1168,7 +1168,7 @@ static struct soc_data soc_s3c2443 = {
.has_clocks = true, .has_clocks = true,
}; };
static struct platform_device_id s3c24xx_dma_driver_ids[] = { static const struct platform_device_id s3c24xx_dma_driver_ids[] = {
{ {
.name = "s3c2410-dma", .name = "s3c2410-dma",
.driver_data = (kernel_ulong_t)&soc_s3c2410, .driver_data = (kernel_ulong_t)&soc_s3c2410,
......
...@@ -183,7 +183,7 @@ struct rcar_dmac { ...@@ -183,7 +183,7 @@ struct rcar_dmac {
unsigned int n_channels; unsigned int n_channels;
struct rcar_dmac_chan *channels; struct rcar_dmac_chan *channels;
unsigned long modules[256 / BITS_PER_LONG]; DECLARE_BITMAP(modules, 256);
}; };
#define to_rcar_dmac(d) container_of(d, struct rcar_dmac, engine) #define to_rcar_dmac(d) container_of(d, struct rcar_dmac, engine)
......
...@@ -11,7 +11,7 @@ ...@@ -11,7 +11,7 @@
#include "shdma-arm.h" #include "shdma-arm.h"
const unsigned int dma_ts_shift[] = SH_DMAE_TS_SHIFT; static const unsigned int dma_ts_shift[] = SH_DMAE_TS_SHIFT;
static const struct sh_dmae_slave_config dma_slaves[] = { static const struct sh_dmae_slave_config dma_slaves[] = {
{ {
......
...@@ -23,8 +23,13 @@ ...@@ -23,8 +23,13 @@
#include "dmaengine.h" #include "dmaengine.h"
#define SIRFSOC_DMA_VER_A7V1 1
#define SIRFSOC_DMA_VER_A7V2 2
#define SIRFSOC_DMA_VER_A6 4
#define SIRFSOC_DMA_DESCRIPTORS 16 #define SIRFSOC_DMA_DESCRIPTORS 16
#define SIRFSOC_DMA_CHANNELS 16 #define SIRFSOC_DMA_CHANNELS 16
#define SIRFSOC_DMA_TABLE_NUM 256
#define SIRFSOC_DMA_CH_ADDR 0x00 #define SIRFSOC_DMA_CH_ADDR 0x00
#define SIRFSOC_DMA_CH_XLEN 0x04 #define SIRFSOC_DMA_CH_XLEN 0x04
...@@ -35,15 +40,44 @@ ...@@ -35,15 +40,44 @@
#define SIRFSOC_DMA_CH_VALID 0x140 #define SIRFSOC_DMA_CH_VALID 0x140
#define SIRFSOC_DMA_CH_INT 0x144 #define SIRFSOC_DMA_CH_INT 0x144
#define SIRFSOC_DMA_INT_EN 0x148 #define SIRFSOC_DMA_INT_EN 0x148
#define SIRFSOC_DMA_INT_EN_CLR 0x14C #define SIRFSOC_DMA_INT_EN_CLR 0x14C
#define SIRFSOC_DMA_CH_LOOP_CTRL 0x150 #define SIRFSOC_DMA_CH_LOOP_CTRL 0x150
#define SIRFSOC_DMA_CH_LOOP_CTRL_CLR 0x15C #define SIRFSOC_DMA_CH_LOOP_CTRL_CLR 0x154
#define SIRFSOC_DMA_WIDTH_ATLAS7 0x10
#define SIRFSOC_DMA_VALID_ATLAS7 0x14
#define SIRFSOC_DMA_INT_ATLAS7 0x18
#define SIRFSOC_DMA_INT_EN_ATLAS7 0x1c
#define SIRFSOC_DMA_LOOP_CTRL_ATLAS7 0x20
#define SIRFSOC_DMA_CUR_DATA_ADDR 0x34
#define SIRFSOC_DMA_MUL_ATLAS7 0x38
#define SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7 0x158
#define SIRFSOC_DMA_CH_LOOP_CTRL_CLR_ATLAS7 0x15C
#define SIRFSOC_DMA_IOBG_SCMD_EN 0x800
#define SIRFSOC_DMA_EARLY_RESP_SET 0x818
#define SIRFSOC_DMA_EARLY_RESP_CLR 0x81C
#define SIRFSOC_DMA_MODE_CTRL_BIT 4 #define SIRFSOC_DMA_MODE_CTRL_BIT 4
#define SIRFSOC_DMA_DIR_CTRL_BIT 5 #define SIRFSOC_DMA_DIR_CTRL_BIT 5
#define SIRFSOC_DMA_MODE_CTRL_BIT_ATLAS7 2
#define SIRFSOC_DMA_CHAIN_CTRL_BIT_ATLAS7 3
#define SIRFSOC_DMA_DIR_CTRL_BIT_ATLAS7 4
#define SIRFSOC_DMA_TAB_NUM_ATLAS7 7
#define SIRFSOC_DMA_CHAIN_INT_BIT_ATLAS7 5
#define SIRFSOC_DMA_CHAIN_FLAG_SHIFT_ATLAS7 25
#define SIRFSOC_DMA_CHAIN_ADDR_SHIFT 32
#define SIRFSOC_DMA_INT_FINI_INT_ATLAS7 BIT(0)
#define SIRFSOC_DMA_INT_CNT_INT_ATLAS7 BIT(1)
#define SIRFSOC_DMA_INT_PAU_INT_ATLAS7 BIT(2)
#define SIRFSOC_DMA_INT_LOOP_INT_ATLAS7 BIT(3)
#define SIRFSOC_DMA_INT_INV_INT_ATLAS7 BIT(4)
#define SIRFSOC_DMA_INT_END_INT_ATLAS7 BIT(5)
#define SIRFSOC_DMA_INT_ALL_ATLAS7 0x3F
/* xlen and dma_width register is in 4 bytes boundary */ /* xlen and dma_width register is in 4 bytes boundary */
#define SIRFSOC_DMA_WORD_LEN 4 #define SIRFSOC_DMA_WORD_LEN 4
#define SIRFSOC_DMA_XLEN_MAX_V1 0x800
#define SIRFSOC_DMA_XLEN_MAX_V2 0x1000
struct sirfsoc_dma_desc { struct sirfsoc_dma_desc {
struct dma_async_tx_descriptor desc; struct dma_async_tx_descriptor desc;
...@@ -56,7 +90,9 @@ struct sirfsoc_dma_desc { ...@@ -56,7 +90,9 @@ struct sirfsoc_dma_desc {
int width; /* DMA width */ int width; /* DMA width */
int dir; int dir;
bool cyclic; /* is loop DMA? */ bool cyclic; /* is loop DMA? */
bool chain; /* is chain DMA? */
u32 addr; /* DMA buffer address */ u32 addr; /* DMA buffer address */
u64 chain_table[SIRFSOC_DMA_TABLE_NUM]; /* chain tbl */
}; };
struct sirfsoc_dma_chan { struct sirfsoc_dma_chan {
...@@ -87,10 +123,25 @@ struct sirfsoc_dma { ...@@ -87,10 +123,25 @@ struct sirfsoc_dma {
void __iomem *base; void __iomem *base;
int irq; int irq;
struct clk *clk; struct clk *clk;
bool is_marco; int type;
void (*exec_desc)(struct sirfsoc_dma_desc *sdesc,
int cid, int burst_mode, void __iomem *base);
struct sirfsoc_dma_regs regs_save; struct sirfsoc_dma_regs regs_save;
}; };
struct sirfsoc_dmadata {
void (*exec)(struct sirfsoc_dma_desc *sdesc,
int cid, int burst_mode, void __iomem *base);
int type;
};
enum sirfsoc_dma_chain_flag {
SIRFSOC_DMA_CHAIN_NORMAL = 0x01,
SIRFSOC_DMA_CHAIN_PAUSE = 0x02,
SIRFSOC_DMA_CHAIN_LOOP = 0x03,
SIRFSOC_DMA_CHAIN_END = 0x04
};
#define DRV_NAME "sirfsoc_dma" #define DRV_NAME "sirfsoc_dma"
static int sirfsoc_dma_runtime_suspend(struct device *dev); static int sirfsoc_dma_runtime_suspend(struct device *dev);
...@@ -109,48 +160,105 @@ static inline struct sirfsoc_dma *dma_chan_to_sirfsoc_dma(struct dma_chan *c) ...@@ -109,48 +160,105 @@ static inline struct sirfsoc_dma *dma_chan_to_sirfsoc_dma(struct dma_chan *c)
return container_of(schan, struct sirfsoc_dma, channels[c->chan_id]); return container_of(schan, struct sirfsoc_dma, channels[c->chan_id]);
} }
static void sirfsoc_dma_execute_hw_a7v2(struct sirfsoc_dma_desc *sdesc,
int cid, int burst_mode, void __iomem *base)
{
if (sdesc->chain) {
/* DMA v2 HW chain mode */
writel_relaxed((sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT_ATLAS7) |
(sdesc->chain <<
SIRFSOC_DMA_CHAIN_CTRL_BIT_ATLAS7) |
(0x8 << SIRFSOC_DMA_TAB_NUM_ATLAS7) | 0x3,
base + SIRFSOC_DMA_CH_CTRL);
} else {
/* DMA v2 legacy mode */
writel_relaxed(sdesc->xlen, base + SIRFSOC_DMA_CH_XLEN);
writel_relaxed(sdesc->ylen, base + SIRFSOC_DMA_CH_YLEN);
writel_relaxed(sdesc->width, base + SIRFSOC_DMA_WIDTH_ATLAS7);
writel_relaxed((sdesc->width*((sdesc->ylen+1)>>1)),
base + SIRFSOC_DMA_MUL_ATLAS7);
writel_relaxed((sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT_ATLAS7) |
(sdesc->chain <<
SIRFSOC_DMA_CHAIN_CTRL_BIT_ATLAS7) |
0x3, base + SIRFSOC_DMA_CH_CTRL);
}
writel_relaxed(sdesc->chain ? SIRFSOC_DMA_INT_END_INT_ATLAS7 :
(SIRFSOC_DMA_INT_FINI_INT_ATLAS7 |
SIRFSOC_DMA_INT_LOOP_INT_ATLAS7),
base + SIRFSOC_DMA_INT_EN_ATLAS7);
writel(sdesc->addr, base + SIRFSOC_DMA_CH_ADDR);
if (sdesc->cyclic)
writel(0x10001, base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
}
static void sirfsoc_dma_execute_hw_a7v1(struct sirfsoc_dma_desc *sdesc,
int cid, int burst_mode, void __iomem *base)
{
writel_relaxed(1, base + SIRFSOC_DMA_IOBG_SCMD_EN);
writel_relaxed((1 << cid), base + SIRFSOC_DMA_EARLY_RESP_SET);
writel_relaxed(sdesc->width, base + SIRFSOC_DMA_WIDTH_0 + cid * 4);
writel_relaxed(cid | (burst_mode << SIRFSOC_DMA_MODE_CTRL_BIT) |
(sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT),
base + cid * 0x10 + SIRFSOC_DMA_CH_CTRL);
writel_relaxed(sdesc->xlen, base + cid * 0x10 + SIRFSOC_DMA_CH_XLEN);
writel_relaxed(sdesc->ylen, base + cid * 0x10 + SIRFSOC_DMA_CH_YLEN);
writel_relaxed(readl_relaxed(base + SIRFSOC_DMA_INT_EN) |
(1 << cid), base + SIRFSOC_DMA_INT_EN);
writel(sdesc->addr >> 2, base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR);
if (sdesc->cyclic) {
writel((1 << cid) | 1 << (cid + 16) |
readl_relaxed(base + SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7),
base + SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7);
}
}
static void sirfsoc_dma_execute_hw_a6(struct sirfsoc_dma_desc *sdesc,
int cid, int burst_mode, void __iomem *base)
{
writel_relaxed(sdesc->width, base + SIRFSOC_DMA_WIDTH_0 + cid * 4);
writel_relaxed(cid | (burst_mode << SIRFSOC_DMA_MODE_CTRL_BIT) |
(sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT),
base + cid * 0x10 + SIRFSOC_DMA_CH_CTRL);
writel_relaxed(sdesc->xlen, base + cid * 0x10 + SIRFSOC_DMA_CH_XLEN);
writel_relaxed(sdesc->ylen, base + cid * 0x10 + SIRFSOC_DMA_CH_YLEN);
writel_relaxed(readl_relaxed(base + SIRFSOC_DMA_INT_EN) |
(1 << cid), base + SIRFSOC_DMA_INT_EN);
writel(sdesc->addr >> 2, base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR);
if (sdesc->cyclic) {
writel((1 << cid) | 1 << (cid + 16) |
readl_relaxed(base + SIRFSOC_DMA_CH_LOOP_CTRL),
base + SIRFSOC_DMA_CH_LOOP_CTRL);
}
}
/* Execute all queued DMA descriptors */ /* Execute all queued DMA descriptors */
static void sirfsoc_dma_execute(struct sirfsoc_dma_chan *schan) static void sirfsoc_dma_execute(struct sirfsoc_dma_chan *schan)
{ {
struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan); struct sirfsoc_dma *sdma = dma_chan_to_sirfsoc_dma(&schan->chan);
int cid = schan->chan.chan_id; int cid = schan->chan.chan_id;
struct sirfsoc_dma_desc *sdesc = NULL; struct sirfsoc_dma_desc *sdesc = NULL;
void __iomem *base;
/* /*
* lock has been held by functions calling this, so we don't hold * lock has been held by functions calling this, so we don't hold
* lock again * lock again
*/ */
base = sdma->base;
sdesc = list_first_entry(&schan->queued, struct sirfsoc_dma_desc, sdesc = list_first_entry(&schan->queued, struct sirfsoc_dma_desc,
node); node);
/* Move the first queued descriptor to active list */ /* Move the first queued descriptor to active list */
list_move_tail(&sdesc->node, &schan->active); list_move_tail(&sdesc->node, &schan->active);
/* Start the DMA transfer */ if (sdma->type == SIRFSOC_DMA_VER_A7V2)
writel_relaxed(sdesc->width, sdma->base + SIRFSOC_DMA_WIDTH_0 + cid = 0;
cid * 4);
writel_relaxed(cid | (schan->mode << SIRFSOC_DMA_MODE_CTRL_BIT) |
(sdesc->dir << SIRFSOC_DMA_DIR_CTRL_BIT),
sdma->base + cid * 0x10 + SIRFSOC_DMA_CH_CTRL);
writel_relaxed(sdesc->xlen, sdma->base + cid * 0x10 +
SIRFSOC_DMA_CH_XLEN);
writel_relaxed(sdesc->ylen, sdma->base + cid * 0x10 +
SIRFSOC_DMA_CH_YLEN);
writel_relaxed(readl_relaxed(sdma->base + SIRFSOC_DMA_INT_EN) |
(1 << cid), sdma->base + SIRFSOC_DMA_INT_EN);
/* /* Start the DMA transfer */
* writel has an implict memory write barrier to make sure data is sdma->exec_desc(sdesc, cid, schan->mode, base);
* flushed into memory before starting DMA
*/
writel(sdesc->addr >> 2, sdma->base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR);
if (sdesc->cyclic) { if (sdesc->cyclic)
writel((1 << cid) | 1 << (cid + 16) |
readl_relaxed(sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
schan->happened_cyclic = schan->completed_cyclic = 0; schan->happened_cyclic = schan->completed_cyclic = 0;
}
} }
/* Interrupt handler */ /* Interrupt handler */
...@@ -160,27 +268,65 @@ static irqreturn_t sirfsoc_dma_irq(int irq, void *data) ...@@ -160,27 +268,65 @@ static irqreturn_t sirfsoc_dma_irq(int irq, void *data)
struct sirfsoc_dma_chan *schan; struct sirfsoc_dma_chan *schan;
struct sirfsoc_dma_desc *sdesc = NULL; struct sirfsoc_dma_desc *sdesc = NULL;
u32 is; u32 is;
bool chain;
int ch; int ch;
void __iomem *reg;
switch (sdma->type) {
case SIRFSOC_DMA_VER_A6:
case SIRFSOC_DMA_VER_A7V1:
is = readl(sdma->base + SIRFSOC_DMA_CH_INT);
reg = sdma->base + SIRFSOC_DMA_CH_INT;
while ((ch = fls(is) - 1) >= 0) {
is &= ~(1 << ch);
writel_relaxed(1 << ch, reg);
schan = &sdma->channels[ch];
spin_lock(&schan->lock);
sdesc = list_first_entry(&schan->active,
struct sirfsoc_dma_desc, node);
if (!sdesc->cyclic) {
/* Execute queued descriptors */
list_splice_tail_init(&schan->active,
&schan->completed);
dma_cookie_complete(&sdesc->desc);
if (!list_empty(&schan->queued))
sirfsoc_dma_execute(schan);
} else
schan->happened_cyclic++;
spin_unlock(&schan->lock);
}
break;
is = readl(sdma->base + SIRFSOC_DMA_CH_INT); case SIRFSOC_DMA_VER_A7V2:
while ((ch = fls(is) - 1) >= 0) { is = readl(sdma->base + SIRFSOC_DMA_INT_ATLAS7);
is &= ~(1 << ch);
writel_relaxed(1 << ch, sdma->base + SIRFSOC_DMA_CH_INT);
schan = &sdma->channels[ch];
reg = sdma->base + SIRFSOC_DMA_INT_ATLAS7;
writel_relaxed(SIRFSOC_DMA_INT_ALL_ATLAS7, reg);
schan = &sdma->channels[0];
spin_lock(&schan->lock); spin_lock(&schan->lock);
sdesc = list_first_entry(&schan->active,
sdesc = list_first_entry(&schan->active, struct sirfsoc_dma_desc, struct sirfsoc_dma_desc, node);
node);
if (!sdesc->cyclic) { if (!sdesc->cyclic) {
/* Execute queued descriptors */ chain = sdesc->chain;
list_splice_tail_init(&schan->active, &schan->completed); if ((chain && (is & SIRFSOC_DMA_INT_END_INT_ATLAS7)) ||
if (!list_empty(&schan->queued)) (!chain &&
sirfsoc_dma_execute(schan); (is & SIRFSOC_DMA_INT_FINI_INT_ATLAS7))) {
} else /* Execute queued descriptors */
list_splice_tail_init(&schan->active,
&schan->completed);
dma_cookie_complete(&sdesc->desc);
if (!list_empty(&schan->queued))
sirfsoc_dma_execute(schan);
}
} else if (sdesc->cyclic && (is &
SIRFSOC_DMA_INT_LOOP_INT_ATLAS7))
schan->happened_cyclic++; schan->happened_cyclic++;
spin_unlock(&schan->lock); spin_unlock(&schan->lock);
break;
default:
break;
} }
/* Schedule tasklet */ /* Schedule tasklet */
...@@ -227,16 +373,15 @@ static void sirfsoc_dma_process_completed(struct sirfsoc_dma *sdma) ...@@ -227,16 +373,15 @@ static void sirfsoc_dma_process_completed(struct sirfsoc_dma *sdma)
schan->chan.completed_cookie = last_cookie; schan->chan.completed_cookie = last_cookie;
spin_unlock_irqrestore(&schan->lock, flags); spin_unlock_irqrestore(&schan->lock, flags);
} else { } else {
/* for cyclic channel, desc is always in active list */ if (list_empty(&schan->active)) {
sdesc = list_first_entry(&schan->active, struct sirfsoc_dma_desc,
node);
if (!sdesc || (sdesc && !sdesc->cyclic)) {
/* without active cyclic DMA */
spin_unlock_irqrestore(&schan->lock, flags); spin_unlock_irqrestore(&schan->lock, flags);
continue; continue;
} }
/* for cyclic channel, desc is always in active list */
sdesc = list_first_entry(&schan->active,
struct sirfsoc_dma_desc, node);
/* cyclic DMA */ /* cyclic DMA */
happened_cyclic = schan->happened_cyclic; happened_cyclic = schan->happened_cyclic;
spin_unlock_irqrestore(&schan->lock, flags); spin_unlock_irqrestore(&schan->lock, flags);
...@@ -307,20 +452,32 @@ static int sirfsoc_dma_terminate_all(struct dma_chan *chan) ...@@ -307,20 +452,32 @@ static int sirfsoc_dma_terminate_all(struct dma_chan *chan)
spin_lock_irqsave(&schan->lock, flags); spin_lock_irqsave(&schan->lock, flags);
if (!sdma->is_marco) { switch (sdma->type) {
writel_relaxed(readl_relaxed(sdma->base + SIRFSOC_DMA_INT_EN) & case SIRFSOC_DMA_VER_A7V1:
~(1 << cid), sdma->base + SIRFSOC_DMA_INT_EN);
writel_relaxed(readl_relaxed(sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL)
& ~((1 << cid) | 1 << (cid + 16)),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
} else {
writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_INT_EN_CLR); writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_INT_EN_CLR);
writel_relaxed((1 << cid) | 1 << (cid + 16), writel_relaxed((1 << cid) | 1 << (cid + 16),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL_CLR); sdma->base +
SIRFSOC_DMA_CH_LOOP_CTRL_CLR_ATLAS7);
writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_CH_VALID);
break;
case SIRFSOC_DMA_VER_A7V2:
writel_relaxed(0, sdma->base + SIRFSOC_DMA_INT_EN_ATLAS7);
writel_relaxed(0, sdma->base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
writel_relaxed(0, sdma->base + SIRFSOC_DMA_VALID_ATLAS7);
break;
case SIRFSOC_DMA_VER_A6:
writel_relaxed(readl_relaxed(sdma->base + SIRFSOC_DMA_INT_EN) &
~(1 << cid), sdma->base + SIRFSOC_DMA_INT_EN);
writel_relaxed(readl_relaxed(sdma->base +
SIRFSOC_DMA_CH_LOOP_CTRL) &
~((1 << cid) | 1 << (cid + 16)),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_CH_VALID);
break;
default:
break;
} }
writel_relaxed(1 << cid, sdma->base + SIRFSOC_DMA_CH_VALID);
list_splice_tail_init(&schan->active, &schan->free); list_splice_tail_init(&schan->active, &schan->free);
list_splice_tail_init(&schan->queued, &schan->free); list_splice_tail_init(&schan->queued, &schan->free);
...@@ -338,13 +495,25 @@ static int sirfsoc_dma_pause_chan(struct dma_chan *chan) ...@@ -338,13 +495,25 @@ static int sirfsoc_dma_pause_chan(struct dma_chan *chan)
spin_lock_irqsave(&schan->lock, flags); spin_lock_irqsave(&schan->lock, flags);
if (!sdma->is_marco) switch (sdma->type) {
writel_relaxed(readl_relaxed(sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL) case SIRFSOC_DMA_VER_A7V1:
& ~((1 << cid) | 1 << (cid + 16)),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
else
writel_relaxed((1 << cid) | 1 << (cid + 16), writel_relaxed((1 << cid) | 1 << (cid + 16),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL_CLR); sdma->base +
SIRFSOC_DMA_CH_LOOP_CTRL_CLR_ATLAS7);
break;
case SIRFSOC_DMA_VER_A7V2:
writel_relaxed(0, sdma->base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
break;
case SIRFSOC_DMA_VER_A6:
writel_relaxed(readl_relaxed(sdma->base +
SIRFSOC_DMA_CH_LOOP_CTRL) &
~((1 << cid) | 1 << (cid + 16)),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
break;
default:
break;
}
spin_unlock_irqrestore(&schan->lock, flags); spin_unlock_irqrestore(&schan->lock, flags);
...@@ -359,14 +528,25 @@ static int sirfsoc_dma_resume_chan(struct dma_chan *chan) ...@@ -359,14 +528,25 @@ static int sirfsoc_dma_resume_chan(struct dma_chan *chan)
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&schan->lock, flags); spin_lock_irqsave(&schan->lock, flags);
switch (sdma->type) {
if (!sdma->is_marco) case SIRFSOC_DMA_VER_A7V1:
writel_relaxed(readl_relaxed(sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL)
| ((1 << cid) | 1 << (cid + 16)),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
else
writel_relaxed((1 << cid) | 1 << (cid + 16), writel_relaxed((1 << cid) | 1 << (cid + 16),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL); sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL_ATLAS7);
break;
case SIRFSOC_DMA_VER_A7V2:
writel_relaxed(0x10001,
sdma->base + SIRFSOC_DMA_LOOP_CTRL_ATLAS7);
break;
case SIRFSOC_DMA_VER_A6:
writel_relaxed(readl_relaxed(sdma->base +
SIRFSOC_DMA_CH_LOOP_CTRL) |
((1 << cid) | 1 << (cid + 16)),
sdma->base + SIRFSOC_DMA_CH_LOOP_CTRL);
break;
default:
break;
}
spin_unlock_irqrestore(&schan->lock, flags); spin_unlock_irqrestore(&schan->lock, flags);
...@@ -473,14 +653,31 @@ sirfsoc_dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie, ...@@ -473,14 +653,31 @@ sirfsoc_dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
spin_lock_irqsave(&schan->lock, flags); spin_lock_irqsave(&schan->lock, flags);
sdesc = list_first_entry(&schan->active, struct sirfsoc_dma_desc, if (list_empty(&schan->active)) {
node); ret = dma_cookie_status(chan, cookie, txstate);
dma_request_bytes = (sdesc->xlen + 1) * (sdesc->ylen + 1) * dma_set_residue(txstate, 0);
(sdesc->width * SIRFSOC_DMA_WORD_LEN); spin_unlock_irqrestore(&schan->lock, flags);
return ret;
}
sdesc = list_first_entry(&schan->active, struct sirfsoc_dma_desc, node);
if (sdesc->cyclic)
dma_request_bytes = (sdesc->xlen + 1) * (sdesc->ylen + 1) *
(sdesc->width * SIRFSOC_DMA_WORD_LEN);
else
dma_request_bytes = sdesc->xlen * SIRFSOC_DMA_WORD_LEN;
ret = dma_cookie_status(chan, cookie, txstate); ret = dma_cookie_status(chan, cookie, txstate);
dma_pos = readl_relaxed(sdma->base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR)
<< 2; if (sdma->type == SIRFSOC_DMA_VER_A7V2)
cid = 0;
if (sdma->type == SIRFSOC_DMA_VER_A7V2) {
dma_pos = readl_relaxed(sdma->base + SIRFSOC_DMA_CUR_DATA_ADDR);
} else {
dma_pos = readl_relaxed(
sdma->base + cid * 0x10 + SIRFSOC_DMA_CH_ADDR) << 2;
}
residue = dma_request_bytes - (dma_pos - sdesc->addr); residue = dma_request_bytes - (dma_pos - sdesc->addr);
dma_set_residue(txstate, residue); dma_set_residue(txstate, residue);
...@@ -647,6 +844,7 @@ static int sirfsoc_dma_probe(struct platform_device *op) ...@@ -647,6 +844,7 @@ static int sirfsoc_dma_probe(struct platform_device *op)
struct dma_device *dma; struct dma_device *dma;
struct sirfsoc_dma *sdma; struct sirfsoc_dma *sdma;
struct sirfsoc_dma_chan *schan; struct sirfsoc_dma_chan *schan;
struct sirfsoc_dmadata *data;
struct resource res; struct resource res;
ulong regs_start, regs_size; ulong regs_start, regs_size;
u32 id; u32 id;
...@@ -657,9 +855,11 @@ static int sirfsoc_dma_probe(struct platform_device *op) ...@@ -657,9 +855,11 @@ static int sirfsoc_dma_probe(struct platform_device *op)
dev_err(dev, "Memory exhausted!\n"); dev_err(dev, "Memory exhausted!\n");
return -ENOMEM; return -ENOMEM;
} }
data = (struct sirfsoc_dmadata *)
if (of_device_is_compatible(dn, "sirf,marco-dmac")) (of_match_device(op->dev.driver->of_match_table,
sdma->is_marco = true; &op->dev)->data);
sdma->exec_desc = data->exec;
sdma->type = data->type;
if (of_property_read_u32(dn, "cell-index", &id)) { if (of_property_read_u32(dn, "cell-index", &id)) {
dev_err(dev, "Fail to get DMAC index\n"); dev_err(dev, "Fail to get DMAC index\n");
...@@ -816,6 +1016,8 @@ static int sirfsoc_dma_pm_suspend(struct device *dev) ...@@ -816,6 +1016,8 @@ static int sirfsoc_dma_pm_suspend(struct device *dev)
struct sirfsoc_dma_chan *schan; struct sirfsoc_dma_chan *schan;
int ch; int ch;
int ret; int ret;
int count;
u32 int_offset;
/* /*
* if we were runtime-suspended before, resume to enable clock * if we were runtime-suspended before, resume to enable clock
...@@ -827,11 +1029,19 @@ static int sirfsoc_dma_pm_suspend(struct device *dev) ...@@ -827,11 +1029,19 @@ static int sirfsoc_dma_pm_suspend(struct device *dev)
return ret; return ret;
} }
if (sdma->type == SIRFSOC_DMA_VER_A7V2) {
count = 1;
int_offset = SIRFSOC_DMA_INT_EN_ATLAS7;
} else {
count = SIRFSOC_DMA_CHANNELS;
int_offset = SIRFSOC_DMA_INT_EN;
}
/* /*
* DMA controller will lose all registers while suspending * DMA controller will lose all registers while suspending
* so we need to save registers for active channels * so we need to save registers for active channels
*/ */
for (ch = 0; ch < SIRFSOC_DMA_CHANNELS; ch++) { for (ch = 0; ch < count; ch++) {
schan = &sdma->channels[ch]; schan = &sdma->channels[ch];
if (list_empty(&schan->active)) if (list_empty(&schan->active))
continue; continue;
...@@ -841,7 +1051,7 @@ static int sirfsoc_dma_pm_suspend(struct device *dev) ...@@ -841,7 +1051,7 @@ static int sirfsoc_dma_pm_suspend(struct device *dev)
save->ctrl[ch] = readl_relaxed(sdma->base + save->ctrl[ch] = readl_relaxed(sdma->base +
ch * 0x10 + SIRFSOC_DMA_CH_CTRL); ch * 0x10 + SIRFSOC_DMA_CH_CTRL);
} }
save->interrupt_en = readl_relaxed(sdma->base + SIRFSOC_DMA_INT_EN); save->interrupt_en = readl_relaxed(sdma->base + int_offset);
/* Disable clock */ /* Disable clock */
sirfsoc_dma_runtime_suspend(dev); sirfsoc_dma_runtime_suspend(dev);
...@@ -857,14 +1067,27 @@ static int sirfsoc_dma_pm_resume(struct device *dev) ...@@ -857,14 +1067,27 @@ static int sirfsoc_dma_pm_resume(struct device *dev)
struct sirfsoc_dma_chan *schan; struct sirfsoc_dma_chan *schan;
int ch; int ch;
int ret; int ret;
int count;
u32 int_offset;
u32 width_offset;
/* Enable clock before accessing register */ /* Enable clock before accessing register */
ret = sirfsoc_dma_runtime_resume(dev); ret = sirfsoc_dma_runtime_resume(dev);
if (ret < 0) if (ret < 0)
return ret; return ret;
writel_relaxed(save->interrupt_en, sdma->base + SIRFSOC_DMA_INT_EN); if (sdma->type == SIRFSOC_DMA_VER_A7V2) {
for (ch = 0; ch < SIRFSOC_DMA_CHANNELS; ch++) { count = 1;
int_offset = SIRFSOC_DMA_INT_EN_ATLAS7;
width_offset = SIRFSOC_DMA_WIDTH_ATLAS7;
} else {
count = SIRFSOC_DMA_CHANNELS;
int_offset = SIRFSOC_DMA_INT_EN;
width_offset = SIRFSOC_DMA_WIDTH_0;
}
writel_relaxed(save->interrupt_en, sdma->base + int_offset);
for (ch = 0; ch < count; ch++) {
schan = &sdma->channels[ch]; schan = &sdma->channels[ch];
if (list_empty(&schan->active)) if (list_empty(&schan->active))
continue; continue;
...@@ -872,15 +1095,21 @@ static int sirfsoc_dma_pm_resume(struct device *dev) ...@@ -872,15 +1095,21 @@ static int sirfsoc_dma_pm_resume(struct device *dev)
struct sirfsoc_dma_desc, struct sirfsoc_dma_desc,
node); node);
writel_relaxed(sdesc->width, writel_relaxed(sdesc->width,
sdma->base + SIRFSOC_DMA_WIDTH_0 + ch * 4); sdma->base + width_offset + ch * 4);
writel_relaxed(sdesc->xlen, writel_relaxed(sdesc->xlen,
sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_XLEN); sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_XLEN);
writel_relaxed(sdesc->ylen, writel_relaxed(sdesc->ylen,
sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_YLEN); sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_YLEN);
writel_relaxed(save->ctrl[ch], writel_relaxed(save->ctrl[ch],
sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_CTRL); sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_CTRL);
writel_relaxed(sdesc->addr >> 2, if (sdma->type == SIRFSOC_DMA_VER_A7V2) {
sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_ADDR); writel_relaxed(sdesc->addr,
sdma->base + SIRFSOC_DMA_CH_ADDR);
} else {
writel_relaxed(sdesc->addr >> 2,
sdma->base + ch * 0x10 + SIRFSOC_DMA_CH_ADDR);
}
} }
/* if we were runtime-suspended before, suspend again */ /* if we were runtime-suspended before, suspend again */
...@@ -896,9 +1125,25 @@ static const struct dev_pm_ops sirfsoc_dma_pm_ops = { ...@@ -896,9 +1125,25 @@ static const struct dev_pm_ops sirfsoc_dma_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(sirfsoc_dma_pm_suspend, sirfsoc_dma_pm_resume) SET_SYSTEM_SLEEP_PM_OPS(sirfsoc_dma_pm_suspend, sirfsoc_dma_pm_resume)
}; };
struct sirfsoc_dmadata sirfsoc_dmadata_a6 = {
.exec = sirfsoc_dma_execute_hw_a6,
.type = SIRFSOC_DMA_VER_A6,
};
struct sirfsoc_dmadata sirfsoc_dmadata_a7v1 = {
.exec = sirfsoc_dma_execute_hw_a7v1,
.type = SIRFSOC_DMA_VER_A7V1,
};
struct sirfsoc_dmadata sirfsoc_dmadata_a7v2 = {
.exec = sirfsoc_dma_execute_hw_a7v2,
.type = SIRFSOC_DMA_VER_A7V2,
};
static const struct of_device_id sirfsoc_dma_match[] = { static const struct of_device_id sirfsoc_dma_match[] = {
{ .compatible = "sirf,prima2-dmac", }, { .compatible = "sirf,prima2-dmac", .data = &sirfsoc_dmadata_a6,},
{ .compatible = "sirf,marco-dmac", }, { .compatible = "sirf,atlas7-dmac", .data = &sirfsoc_dmadata_a7v1,},
{ .compatible = "sirf,atlas7-dmac-v2", .data = &sirfsoc_dmadata_a7v2,},
{}, {},
}; };
...@@ -925,7 +1170,7 @@ static void __exit sirfsoc_dma_exit(void) ...@@ -925,7 +1170,7 @@ static void __exit sirfsoc_dma_exit(void)
subsys_initcall(sirfsoc_dma_init); subsys_initcall(sirfsoc_dma_init);
module_exit(sirfsoc_dma_exit); module_exit(sirfsoc_dma_exit);
MODULE_AUTHOR("Rongjun Ying <rongjun.ying@csr.com>, " MODULE_AUTHOR("Rongjun Ying <rongjun.ying@csr.com>");
"Barry Song <baohua.song@csr.com>"); MODULE_AUTHOR("Barry Song <baohua.song@csr.com>");
MODULE_DESCRIPTION("SIRFSOC DMA control driver"); MODULE_DESCRIPTION("SIRFSOC DMA control driver");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
...@@ -891,9 +891,21 @@ static struct sun6i_dma_config sun8i_a23_dma_cfg = { ...@@ -891,9 +891,21 @@ static struct sun6i_dma_config sun8i_a23_dma_cfg = {
.nr_max_vchans = 37, .nr_max_vchans = 37,
}; };
/*
* The H3 has 12 physical channels, a maximum DRQ port id of 27,
* and a total of 34 usable source and destination endpoints.
*/
static struct sun6i_dma_config sun8i_h3_dma_cfg = {
.nr_max_channels = 12,
.nr_max_requests = 27,
.nr_max_vchans = 34,
};
static const struct of_device_id sun6i_dma_match[] = { static const struct of_device_id sun6i_dma_match[] = {
{ .compatible = "allwinner,sun6i-a31-dma", .data = &sun6i_a31_dma_cfg }, { .compatible = "allwinner,sun6i-a31-dma", .data = &sun6i_a31_dma_cfg },
{ .compatible = "allwinner,sun8i-a23-dma", .data = &sun8i_a23_dma_cfg }, { .compatible = "allwinner,sun8i-a23-dma", .data = &sun8i_a23_dma_cfg },
{ .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg },
{ /* sentinel */ } { /* sentinel */ }
}; };
......
/*
* Copyright (C) 2015 Texas Instruments Incorporated - http://www.ti.com
* Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/slab.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/io.h>
#include <linux/idr.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
#define TI_XBAR_OUTPUTS 127
#define TI_XBAR_INPUTS 256
static DEFINE_IDR(map_idr);
struct ti_dma_xbar_data {
void __iomem *iomem;
struct dma_router dmarouter;
u16 safe_val; /* Value to reset the crossbar lines */
u32 xbar_requests; /* number of DMA requests connected to XBAR */
u32 dma_requests; /* number of DMA requests forwarded to DMA */
};
struct ti_dma_xbar_map {
u16 xbar_in;
int xbar_out;
};
static inline void ti_dma_xbar_write(void __iomem *iomem, int xbar, u16 val)
{
writew_relaxed(val, iomem + (xbar * 2));
}
static void ti_dma_xbar_free(struct device *dev, void *route_data)
{
struct ti_dma_xbar_data *xbar = dev_get_drvdata(dev);
struct ti_dma_xbar_map *map = route_data;
dev_dbg(dev, "Unmapping XBAR%u (was routed to %d)\n",
map->xbar_in, map->xbar_out);
ti_dma_xbar_write(xbar->iomem, map->xbar_out, xbar->safe_val);
idr_remove(&map_idr, map->xbar_out);
kfree(map);
}
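/*
 * Route allocation: the incoming crossbar event number (dma_spec->args[0])
 * is mapped onto a free sDMA request line taken from the idr, the mapping is
 * programmed into the crossbar register, and the dma_spec is rewritten so the
 * sDMA controller sees the allocated (1-based) request line instead.
 */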
static void *ti_dma_xbar_route_allocate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
struct ti_dma_xbar_data *xbar = platform_get_drvdata(pdev);
struct ti_dma_xbar_map *map;
if (dma_spec->args[0] >= xbar->xbar_requests) {
dev_err(&pdev->dev, "Invalid XBAR request number: %d\n",
dma_spec->args[0]);
return ERR_PTR(-EINVAL);
}
/* The of_node_put() will be done in the core for the node */
dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
if (!dma_spec->np) {
dev_err(&pdev->dev, "Can't get DMA master\n");
return ERR_PTR(-EINVAL);
}
map = kzalloc(sizeof(*map), GFP_KERNEL);
if (!map) {
of_node_put(dma_spec->np);
return ERR_PTR(-ENOMEM);
}
map->xbar_out = idr_alloc(&map_idr, NULL, 0, xbar->dma_requests,
GFP_KERNEL);
map->xbar_in = (u16)dma_spec->args[0];
/* The DMA request line is 1-based in sDMA */
dma_spec->args[0] = map->xbar_out + 1;
dev_dbg(&pdev->dev, "Mapping XBAR%u to DMA%d\n",
map->xbar_in, map->xbar_out);
ti_dma_xbar_write(xbar->iomem, map->xbar_out, map->xbar_in);
return map;
}
static int ti_dma_xbar_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
struct device_node *dma_node;
struct ti_dma_xbar_data *xbar;
struct resource *res;
u32 safe_val;
void __iomem *iomem;
int i, ret;
if (!node)
return -ENODEV;
xbar = devm_kzalloc(&pdev->dev, sizeof(*xbar), GFP_KERNEL);
if (!xbar)
return -ENOMEM;
dma_node = of_parse_phandle(node, "dma-masters", 0);
if (!dma_node) {
dev_err(&pdev->dev, "Can't get DMA master node\n");
return -ENODEV;
}
if (of_property_read_u32(dma_node, "dma-requests",
&xbar->dma_requests)) {
dev_info(&pdev->dev,
"Missing XBAR output information, using %u.\n",
TI_XBAR_OUTPUTS);
xbar->dma_requests = TI_XBAR_OUTPUTS;
}
of_node_put(dma_node);
if (of_property_read_u32(node, "dma-requests", &xbar->xbar_requests)) {
dev_info(&pdev->dev,
"Missing XBAR input information, using %u.\n",
TI_XBAR_INPUTS);
xbar->xbar_requests = TI_XBAR_INPUTS;
}
if (!of_property_read_u32(node, "ti,dma-safe-map", &safe_val))
xbar->safe_val = (u16)safe_val;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -ENODEV;
iomem = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(iomem))
return PTR_ERR(iomem);
xbar->iomem = iomem;
xbar->dmarouter.dev = &pdev->dev;
xbar->dmarouter.route_free = ti_dma_xbar_free;
platform_set_drvdata(pdev, xbar);
/* Reset the crossbar */
for (i = 0; i < xbar->dma_requests; i++)
ti_dma_xbar_write(xbar->iomem, i, xbar->safe_val);
ret = of_dma_router_register(node, ti_dma_xbar_route_allocate,
&xbar->dmarouter);
if (ret) {
/* Restore the defaults for the crossbar */
for (i = 0; i < xbar->dma_requests; i++)
ti_dma_xbar_write(xbar->iomem, i, i);
}
return ret;
}
static const struct of_device_id ti_dma_xbar_match[] = {
{ .compatible = "ti,dra7-dma-crossbar" },
{},
};
static struct platform_driver ti_dma_xbar_driver = {
.driver = {
.name = "ti-dma-crossbar",
.of_match_table = of_match_ptr(ti_dma_xbar_match),
},
.probe = ti_dma_xbar_probe,
};
int omap_dmaxbar_init(void)
{
return platform_driver_register(&ti_dma_xbar_driver);
}
arch_initcall(omap_dmaxbar_init);
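/*
 * A peripheral sitting behind the crossbar then points its dmas property at
 * the router node rather than at the sDMA controller itself; the single cell
 * is the crossbar input number (the value 12 below is purely illustrative):
 *
 *	dmas = <&sdma_xbar 12>;
 *	dma-names = "tx";
 */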
...@@ -29,7 +29,7 @@ dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *tx) ...@@ -29,7 +29,7 @@ dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *tx)
spin_lock_irqsave(&vc->lock, flags); spin_lock_irqsave(&vc->lock, flags);
cookie = dma_cookie_assign(tx); cookie = dma_cookie_assign(tx);
list_add_tail(&vd->node, &vc->desc_submitted); list_move_tail(&vd->node, &vc->desc_submitted);
spin_unlock_irqrestore(&vc->lock, flags); spin_unlock_irqrestore(&vc->lock, flags);
dev_dbg(vc->chan.device->dev, "vchan %p: txd %p[%x]: submitted\n", dev_dbg(vc->chan.device->dev, "vchan %p: txd %p[%x]: submitted\n",
...@@ -83,8 +83,10 @@ static void vchan_complete(unsigned long arg) ...@@ -83,8 +83,10 @@ static void vchan_complete(unsigned long arg)
cb_data = vd->tx.callback_param; cb_data = vd->tx.callback_param;
list_del(&vd->node); list_del(&vd->node);
if (async_tx_test_ack(&vd->tx))
vc->desc_free(vd); list_add(&vd->node, &vc->desc_allocated);
else
vc->desc_free(vd);
if (cb) if (cb)
cb(cb_data); cb(cb_data);
...@@ -96,9 +98,13 @@ void vchan_dma_desc_free_list(struct virt_dma_chan *vc, struct list_head *head) ...@@ -96,9 +98,13 @@ void vchan_dma_desc_free_list(struct virt_dma_chan *vc, struct list_head *head)
while (!list_empty(head)) { while (!list_empty(head)) {
struct virt_dma_desc *vd = list_first_entry(head, struct virt_dma_desc *vd = list_first_entry(head,
struct virt_dma_desc, node); struct virt_dma_desc, node);
list_del(&vd->node); if (async_tx_test_ack(&vd->tx)) {
dev_dbg(vc->chan.device->dev, "txd %p: freeing\n", vd); list_move_tail(&vd->node, &vc->desc_allocated);
vc->desc_free(vd); } else {
dev_dbg(vc->chan.device->dev, "txd %p: freeing\n", vd);
list_del(&vd->node);
vc->desc_free(vd);
}
} }
} }
EXPORT_SYMBOL_GPL(vchan_dma_desc_free_list); EXPORT_SYMBOL_GPL(vchan_dma_desc_free_list);
...@@ -108,6 +114,7 @@ void vchan_init(struct virt_dma_chan *vc, struct dma_device *dmadev) ...@@ -108,6 +114,7 @@ void vchan_init(struct virt_dma_chan *vc, struct dma_device *dmadev)
dma_cookie_init(&vc->chan); dma_cookie_init(&vc->chan);
spin_lock_init(&vc->lock); spin_lock_init(&vc->lock);
INIT_LIST_HEAD(&vc->desc_allocated);
INIT_LIST_HEAD(&vc->desc_submitted); INIT_LIST_HEAD(&vc->desc_submitted);
INIT_LIST_HEAD(&vc->desc_issued); INIT_LIST_HEAD(&vc->desc_issued);
INIT_LIST_HEAD(&vc->desc_completed); INIT_LIST_HEAD(&vc->desc_completed);
......
...@@ -29,6 +29,7 @@ struct virt_dma_chan { ...@@ -29,6 +29,7 @@ struct virt_dma_chan {
spinlock_t lock; spinlock_t lock;
/* protected by vc.lock */ /* protected by vc.lock */
struct list_head desc_allocated;
struct list_head desc_submitted; struct list_head desc_submitted;
struct list_head desc_issued; struct list_head desc_issued;
struct list_head desc_completed; struct list_head desc_completed;
...@@ -55,11 +56,16 @@ static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan ...@@ -55,11 +56,16 @@ static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan
struct virt_dma_desc *vd, unsigned long tx_flags) struct virt_dma_desc *vd, unsigned long tx_flags)
{ {
extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *); extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *);
unsigned long flags;
dma_async_tx_descriptor_init(&vd->tx, &vc->chan); dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
vd->tx.flags = tx_flags; vd->tx.flags = tx_flags;
vd->tx.tx_submit = vchan_tx_submit; vd->tx.tx_submit = vchan_tx_submit;
spin_lock_irqsave(&vc->lock, flags);
list_add_tail(&vd->node, &vc->desc_allocated);
spin_unlock_irqrestore(&vc->lock, flags);
return &vd->tx; return &vd->tx;
} }
...@@ -122,7 +128,8 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc) ...@@ -122,7 +128,8 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
} }
/** /**
* vchan_get_all_descriptors - obtain all submitted and issued descriptors * vchan_get_all_descriptors - obtain all allocated, submitted and issued
* descriptors
* vc: virtual channel to get descriptors from * vc: virtual channel to get descriptors from
* head: list of descriptors found * head: list of descriptors found
* *
...@@ -134,6 +141,7 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc) ...@@ -134,6 +141,7 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc, static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc,
struct list_head *head) struct list_head *head)
{ {
list_splice_tail_init(&vc->desc_allocated, head);
list_splice_tail_init(&vc->desc_submitted, head); list_splice_tail_init(&vc->desc_submitted, head);
list_splice_tail_init(&vc->desc_issued, head); list_splice_tail_init(&vc->desc_issued, head);
list_splice_tail_init(&vc->desc_completed, head); list_splice_tail_init(&vc->desc_completed, head);
...@@ -141,11 +149,14 @@ static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc, ...@@ -141,11 +149,14 @@ static inline void vchan_get_all_descriptors(struct virt_dma_chan *vc,
static inline void vchan_free_chan_resources(struct virt_dma_chan *vc) static inline void vchan_free_chan_resources(struct virt_dma_chan *vc)
{ {
struct virt_dma_desc *vd;
unsigned long flags; unsigned long flags;
LIST_HEAD(head); LIST_HEAD(head);
spin_lock_irqsave(&vc->lock, flags); spin_lock_irqsave(&vc->lock, flags);
vchan_get_all_descriptors(vc, &head); vchan_get_all_descriptors(vc, &head);
list_for_each_entry(vd, &head, node)
async_tx_clear_ack(&vd->tx);
spin_unlock_irqrestore(&vc->lock, flags); spin_unlock_irqrestore(&vc->lock, flags);
vchan_dma_desc_free_list(vc, &head); vchan_dma_desc_free_list(vc, &head);
......
...@@ -124,32 +124,8 @@ ...@@ -124,32 +124,8 @@
#define XGENE_DMA_DESC_ELERR_POS 46 #define XGENE_DMA_DESC_ELERR_POS 46
#define XGENE_DMA_DESC_RTYPE_POS 56 #define XGENE_DMA_DESC_RTYPE_POS 56
#define XGENE_DMA_DESC_LERR_POS 60 #define XGENE_DMA_DESC_LERR_POS 60
#define XGENE_DMA_DESC_FLYBY_POS 4
#define XGENE_DMA_DESC_BUFLEN_POS 48 #define XGENE_DMA_DESC_BUFLEN_POS 48
#define XGENE_DMA_DESC_HOENQ_NUM_POS 48 #define XGENE_DMA_DESC_HOENQ_NUM_POS 48
#define XGENE_DMA_DESC_NV_SET(m) \
(((u64 *)(m))[0] |= XGENE_DMA_DESC_NV_BIT)
#define XGENE_DMA_DESC_IN_SET(m) \
(((u64 *)(m))[0] |= XGENE_DMA_DESC_IN_BIT)
#define XGENE_DMA_DESC_RTYPE_SET(m, v) \
(((u64 *)(m))[0] |= ((u64)(v) << XGENE_DMA_DESC_RTYPE_POS))
#define XGENE_DMA_DESC_BUFADDR_SET(m, v) \
(((u64 *)(m))[0] |= (v))
#define XGENE_DMA_DESC_BUFLEN_SET(m, v) \
(((u64 *)(m))[0] |= ((u64)(v) << XGENE_DMA_DESC_BUFLEN_POS))
#define XGENE_DMA_DESC_C_SET(m) \
(((u64 *)(m))[1] |= XGENE_DMA_DESC_C_BIT)
#define XGENE_DMA_DESC_FLYBY_SET(m, v) \
(((u64 *)(m))[2] |= ((v) << XGENE_DMA_DESC_FLYBY_POS))
#define XGENE_DMA_DESC_MULTI_SET(m, v, i) \
(((u64 *)(m))[2] |= ((u64)(v) << (((i) + 1) * 8)))
#define XGENE_DMA_DESC_DR_SET(m) \
(((u64 *)(m))[2] |= XGENE_DMA_DESC_DR_BIT)
#define XGENE_DMA_DESC_DST_ADDR_SET(m, v) \
(((u64 *)(m))[3] |= (v))
#define XGENE_DMA_DESC_H0ENQ_NUM_SET(m, v) \
(((u64 *)(m))[3] |= ((u64)(v) << XGENE_DMA_DESC_HOENQ_NUM_POS))
#define XGENE_DMA_DESC_ELERR_RD(m) \ #define XGENE_DMA_DESC_ELERR_RD(m) \
(((m) >> XGENE_DMA_DESC_ELERR_POS) & 0x3) (((m) >> XGENE_DMA_DESC_ELERR_POS) & 0x3)
#define XGENE_DMA_DESC_LERR_RD(m) \ #define XGENE_DMA_DESC_LERR_RD(m) \
...@@ -158,14 +134,7 @@ ...@@ -158,14 +134,7 @@
(((elerr) << 4) | (lerr)) (((elerr) << 4) | (lerr))
/* X-Gene DMA descriptor empty s/w signature */ /* X-Gene DMA descriptor empty s/w signature */
#define XGENE_DMA_DESC_EMPTY_INDEX 0
#define XGENE_DMA_DESC_EMPTY_SIGNATURE ~0ULL #define XGENE_DMA_DESC_EMPTY_SIGNATURE ~0ULL
#define XGENE_DMA_DESC_SET_EMPTY(m) \
(((u64 *)(m))[XGENE_DMA_DESC_EMPTY_INDEX] = \
XGENE_DMA_DESC_EMPTY_SIGNATURE)
#define XGENE_DMA_DESC_IS_EMPTY(m) \
(((u64 *)(m))[XGENE_DMA_DESC_EMPTY_INDEX] == \
XGENE_DMA_DESC_EMPTY_SIGNATURE)
/* X-Gene DMA configurable parameters defines */ /* X-Gene DMA configurable parameters defines */
#define XGENE_DMA_RING_NUM 512 #define XGENE_DMA_RING_NUM 512
...@@ -184,7 +153,7 @@ ...@@ -184,7 +153,7 @@
#define XGENE_DMA_XOR_ALIGNMENT 6 /* 64 Bytes */ #define XGENE_DMA_XOR_ALIGNMENT 6 /* 64 Bytes */
#define XGENE_DMA_MAX_XOR_SRC 5 #define XGENE_DMA_MAX_XOR_SRC 5
#define XGENE_DMA_16K_BUFFER_LEN_CODE 0x0 #define XGENE_DMA_16K_BUFFER_LEN_CODE 0x0
#define XGENE_DMA_INVALID_LEN_CODE 0x7800 #define XGENE_DMA_INVALID_LEN_CODE 0x7800000000000000ULL
/* X-Gene DMA descriptor error codes */ /* X-Gene DMA descriptor error codes */
#define ERR_DESC_AXI 0x01 #define ERR_DESC_AXI 0x01
...@@ -214,10 +183,10 @@ ...@@ -214,10 +183,10 @@
#define ERR_DESC_SRC_INT 0xB #define ERR_DESC_SRC_INT 0xB
/* X-Gene DMA flyby operation code */ /* X-Gene DMA flyby operation code */
#define FLYBY_2SRC_XOR 0x8 #define FLYBY_2SRC_XOR 0x80
#define FLYBY_3SRC_XOR 0x9 #define FLYBY_3SRC_XOR 0x90
#define FLYBY_4SRC_XOR 0xA #define FLYBY_4SRC_XOR 0xA0
#define FLYBY_5SRC_XOR 0xB #define FLYBY_5SRC_XOR 0xB0
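These opcodes now carry the 4-bit shift that the removed XGENE_DMA_DESC_FLYBY_POS macro used to apply (0x8 << 4 == 0x80, and so on), which is why they can be OR'd into m2 directly. A compile-time illustration of the equivalence (the helper name is made up; not part of the patch):

static inline void xgene_flyby_encoding_check(void)
{
	/* new constants == old 4-bit opcodes shifted by the dropped
	 * XGENE_DMA_DESC_FLYBY_POS (4)
	 */
	BUILD_BUG_ON(FLYBY_2SRC_XOR != (0x8 << 4));
	BUILD_BUG_ON(FLYBY_3SRC_XOR != (0x9 << 4));
	BUILD_BUG_ON(FLYBY_4SRC_XOR != (0xA << 4));
	BUILD_BUG_ON(FLYBY_5SRC_XOR != (0xB << 4));
}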
/* X-Gene DMA SW descriptor flags */ /* X-Gene DMA SW descriptor flags */
#define XGENE_DMA_FLAG_64B_DESC BIT(0) #define XGENE_DMA_FLAG_64B_DESC BIT(0)
...@@ -238,10 +207,10 @@ ...@@ -238,10 +207,10 @@
dev_err(chan->dev, "%s: " fmt, chan->name, ##arg) dev_err(chan->dev, "%s: " fmt, chan->name, ##arg)
struct xgene_dma_desc_hw { struct xgene_dma_desc_hw {
u64 m0; __le64 m0;
u64 m1; __le64 m1;
u64 m2; __le64 m2;
u64 m3; __le64 m3;
}; };
enum xgene_dma_ring_cfgsize { enum xgene_dma_ring_cfgsize {
...@@ -388,18 +357,11 @@ static bool is_pq_enabled(struct xgene_dma *pdma) ...@@ -388,18 +357,11 @@ static bool is_pq_enabled(struct xgene_dma *pdma)
return !(val & XGENE_DMA_PQ_DISABLE_MASK); return !(val & XGENE_DMA_PQ_DISABLE_MASK);
} }
static void xgene_dma_cpu_to_le64(u64 *desc, int count) static u64 xgene_dma_encode_len(size_t len)
{
int i;
for (i = 0; i < count; i++)
desc[i] = cpu_to_le64(desc[i]);
}
static u16 xgene_dma_encode_len(u32 len)
{ {
return (len < XGENE_DMA_MAX_BYTE_CNT) ? return (len < XGENE_DMA_MAX_BYTE_CNT) ?
len : XGENE_DMA_16K_BUFFER_LEN_CODE; ((u64)len << XGENE_DMA_DESC_BUFLEN_POS) :
XGENE_DMA_16K_BUFFER_LEN_CODE;
} }
static u8 xgene_dma_encode_xor_flyby(u32 src_cnt) static u8 xgene_dma_encode_xor_flyby(u32 src_cnt)
...@@ -424,34 +386,50 @@ static u32 xgene_dma_ring_desc_cnt(struct xgene_dma_ring *ring) ...@@ -424,34 +386,50 @@ static u32 xgene_dma_ring_desc_cnt(struct xgene_dma_ring *ring)
return XGENE_DMA_RING_DESC_CNT(ring_state); return XGENE_DMA_RING_DESC_CNT(ring_state);
} }
static void xgene_dma_set_src_buffer(void *ext8, size_t *len, static void xgene_dma_set_src_buffer(__le64 *ext8, size_t *len,
dma_addr_t *paddr) dma_addr_t *paddr)
{ {
size_t nbytes = (*len < XGENE_DMA_MAX_BYTE_CNT) ? size_t nbytes = (*len < XGENE_DMA_MAX_BYTE_CNT) ?
*len : XGENE_DMA_MAX_BYTE_CNT; *len : XGENE_DMA_MAX_BYTE_CNT;
XGENE_DMA_DESC_BUFADDR_SET(ext8, *paddr); *ext8 |= cpu_to_le64(*paddr);
XGENE_DMA_DESC_BUFLEN_SET(ext8, xgene_dma_encode_len(nbytes)); *ext8 |= cpu_to_le64(xgene_dma_encode_len(nbytes));
*len -= nbytes; *len -= nbytes;
*paddr += nbytes; *paddr += nbytes;
} }
static void xgene_dma_invalidate_buffer(void *ext8) static void xgene_dma_invalidate_buffer(__le64 *ext8)
{ {
XGENE_DMA_DESC_BUFLEN_SET(ext8, XGENE_DMA_INVALID_LEN_CODE); *ext8 |= cpu_to_le64(XGENE_DMA_INVALID_LEN_CODE);
} }
static void *xgene_dma_lookup_ext8(u64 *desc, int idx) static __le64 *xgene_dma_lookup_ext8(struct xgene_dma_desc_hw *desc, int idx)
{ {
return (idx % 2) ? (desc + idx - 1) : (desc + idx + 1); switch (idx) {
case 0:
return &desc->m1;
case 1:
return &desc->m0;
case 2:
return &desc->m3;
case 3:
return &desc->m2;
default:
pr_err("Invalid dma descriptor index\n");
}
return NULL;
} }
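The switch above keeps the word-swapped mapping of the pointer arithmetic it replaces (index 0 -> m1, 1 -> m0, 2 -> m3, 3 -> m2) while gaining typed __le64 members and a defined error path. For reference, an equivalent of the old open-coded lookup expressed against the new struct (illustration only, not part of the patch):

static __le64 *xgene_dma_lookup_ext8_old_style(struct xgene_dma_desc_hw *desc,
					       int idx)
{
	__le64 *m = &desc->m0;

	/* old macro: (idx % 2) ? desc + idx - 1 : desc + idx + 1 */
	return (idx % 2) ? (m + idx - 1) : (m + idx + 1);
}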
static void xgene_dma_init_desc(void *desc, u16 dst_ring_num) static void xgene_dma_init_desc(struct xgene_dma_desc_hw *desc,
u16 dst_ring_num)
{ {
XGENE_DMA_DESC_C_SET(desc); /* Coherent IO */ desc->m0 |= cpu_to_le64(XGENE_DMA_DESC_IN_BIT);
XGENE_DMA_DESC_IN_SET(desc); desc->m0 |= cpu_to_le64((u64)XGENE_DMA_RING_OWNER_DMA <<
XGENE_DMA_DESC_H0ENQ_NUM_SET(desc, dst_ring_num); XGENE_DMA_DESC_RTYPE_POS);
XGENE_DMA_DESC_RTYPE_SET(desc, XGENE_DMA_RING_OWNER_DMA); desc->m1 |= cpu_to_le64(XGENE_DMA_DESC_C_BIT);
desc->m3 |= cpu_to_le64((u64)dst_ring_num <<
XGENE_DMA_DESC_HOENQ_NUM_POS);
} }
static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan, static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
...@@ -459,7 +437,7 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan, ...@@ -459,7 +437,7 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
dma_addr_t dst, dma_addr_t src, dma_addr_t dst, dma_addr_t src,
size_t len) size_t len)
{ {
void *desc1, *desc2; struct xgene_dma_desc_hw *desc1, *desc2;
int i; int i;
/* Get 1st descriptor */ /* Get 1st descriptor */
...@@ -467,23 +445,21 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan, ...@@ -467,23 +445,21 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
xgene_dma_init_desc(desc1, chan->tx_ring.dst_ring_num); xgene_dma_init_desc(desc1, chan->tx_ring.dst_ring_num);
/* Set destination address */ /* Set destination address */
XGENE_DMA_DESC_DR_SET(desc1); desc1->m2 |= cpu_to_le64(XGENE_DMA_DESC_DR_BIT);
XGENE_DMA_DESC_DST_ADDR_SET(desc1, dst); desc1->m3 |= cpu_to_le64(dst);
/* Set 1st source address */ /* Set 1st source address */
xgene_dma_set_src_buffer(desc1 + 8, &len, &src); xgene_dma_set_src_buffer(&desc1->m1, &len, &src);
if (len <= 0) { if (!len)
desc2 = NULL; return;
goto skip_additional_src;
}
/* /*
* We need to split this source buffer, * We need to split this source buffer,
* and need to use 2nd descriptor * and need to use 2nd descriptor
*/ */
desc2 = &desc_sw->desc2; desc2 = &desc_sw->desc2;
XGENE_DMA_DESC_NV_SET(desc1); desc1->m0 |= cpu_to_le64(XGENE_DMA_DESC_NV_BIT);
/* Set 2nd to 5th source address */ /* Set 2nd to 5th source address */
for (i = 0; i < 4 && len; i++) for (i = 0; i < 4 && len; i++)
...@@ -496,12 +472,6 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan, ...@@ -496,12 +472,6 @@ static void xgene_dma_prep_cpy_desc(struct xgene_dma_chan *chan,
/* Updated flag that we have prepared 64B descriptor */ /* Updated flag that we have prepared 64B descriptor */
desc_sw->flags |= XGENE_DMA_FLAG_64B_DESC; desc_sw->flags |= XGENE_DMA_FLAG_64B_DESC;
skip_additional_src:
/* Hardware stores descriptor in little endian format */
xgene_dma_cpu_to_le64(desc1, 4);
if (desc2)
xgene_dma_cpu_to_le64(desc2, 4);
} }
static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan, static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan,
...@@ -510,7 +480,7 @@ static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan, ...@@ -510,7 +480,7 @@ static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan,
u32 src_cnt, size_t *nbytes, u32 src_cnt, size_t *nbytes,
const u8 *scf) const u8 *scf)
{ {
void *desc1, *desc2; struct xgene_dma_desc_hw *desc1, *desc2;
size_t len = *nbytes; size_t len = *nbytes;
int i; int i;
...@@ -521,28 +491,24 @@ static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan, ...@@ -521,28 +491,24 @@ static void xgene_dma_prep_xor_desc(struct xgene_dma_chan *chan,
xgene_dma_init_desc(desc1, chan->tx_ring.dst_ring_num); xgene_dma_init_desc(desc1, chan->tx_ring.dst_ring_num);
/* Set destination address */ /* Set destination address */
XGENE_DMA_DESC_DR_SET(desc1); desc1->m2 |= cpu_to_le64(XGENE_DMA_DESC_DR_BIT);
XGENE_DMA_DESC_DST_ADDR_SET(desc1, *dst); desc1->m3 |= cpu_to_le64(*dst);
/* We have multiple source addresses, so need to set NV bit*/ /* We have multiple source addresses, so need to set NV bit*/
XGENE_DMA_DESC_NV_SET(desc1); desc1->m0 |= cpu_to_le64(XGENE_DMA_DESC_NV_BIT);
/* Set flyby opcode */ /* Set flyby opcode */
XGENE_DMA_DESC_FLYBY_SET(desc1, xgene_dma_encode_xor_flyby(src_cnt)); desc1->m2 |= cpu_to_le64(xgene_dma_encode_xor_flyby(src_cnt));
/* Set 1st to 5th source addresses */ /* Set 1st to 5th source addresses */
for (i = 0; i < src_cnt; i++) { for (i = 0; i < src_cnt; i++) {
len = *nbytes; len = *nbytes;
xgene_dma_set_src_buffer((i == 0) ? (desc1 + 8) : xgene_dma_set_src_buffer((i == 0) ? &desc1->m1 :
xgene_dma_lookup_ext8(desc2, i - 1), xgene_dma_lookup_ext8(desc2, i - 1),
&len, &src[i]); &len, &src[i]);
XGENE_DMA_DESC_MULTI_SET(desc1, scf[i], i); desc1->m2 |= cpu_to_le64((scf[i] << ((i + 1) * 8)));
} }
/* Hardware stores descriptor in little endian format */
xgene_dma_cpu_to_le64(desc1, 4);
xgene_dma_cpu_to_le64(desc2, 4);
/* Update meta data */ /* Update meta data */
*nbytes = len; *nbytes = len;
*dst += XGENE_DMA_MAX_BYTE_CNT; *dst += XGENE_DMA_MAX_BYTE_CNT;
...@@ -738,7 +704,7 @@ static int xgene_chan_xfer_request(struct xgene_dma_ring *ring, ...@@ -738,7 +704,7 @@ static int xgene_chan_xfer_request(struct xgene_dma_ring *ring,
* xgene_chan_xfer_ld_pending - push any pending transactions to hw * xgene_chan_xfer_ld_pending - push any pending transactions to hw
* @chan : X-Gene DMA channel * @chan : X-Gene DMA channel
* *
* LOCKING: must hold chan->desc_lock * LOCKING: must hold chan->lock
*/ */
static void xgene_chan_xfer_ld_pending(struct xgene_dma_chan *chan) static void xgene_chan_xfer_ld_pending(struct xgene_dma_chan *chan)
{ {
...@@ -808,7 +774,8 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan) ...@@ -808,7 +774,8 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan)
desc_hw = &ring->desc_hw[ring->head]; desc_hw = &ring->desc_hw[ring->head];
/* Check if this descriptor has been completed */ /* Check if this descriptor has been completed */
if (unlikely(XGENE_DMA_DESC_IS_EMPTY(desc_hw))) if (unlikely(le64_to_cpu(desc_hw->m0) ==
XGENE_DMA_DESC_EMPTY_SIGNATURE))
break; break;
if (++ring->head == ring->slots) if (++ring->head == ring->slots)
...@@ -842,7 +809,7 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan) ...@@ -842,7 +809,7 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan)
iowrite32(-1, ring->cmd); iowrite32(-1, ring->cmd);
/* Mark this hw descriptor as processed */ /* Mark this hw descriptor as processed */
XGENE_DMA_DESC_SET_EMPTY(desc_hw); desc_hw->m0 = cpu_to_le64(XGENE_DMA_DESC_EMPTY_SIGNATURE);
xgene_dma_run_tx_complete_actions(chan, desc_sw); xgene_dma_run_tx_complete_actions(chan, desc_sw);
...@@ -889,7 +856,7 @@ static int xgene_dma_alloc_chan_resources(struct dma_chan *dchan) ...@@ -889,7 +856,7 @@ static int xgene_dma_alloc_chan_resources(struct dma_chan *dchan)
* @chan: X-Gene DMA channel * @chan: X-Gene DMA channel
* @list: the list to free * @list: the list to free
* *
* LOCKING: must hold chan->desc_lock * LOCKING: must hold chan->lock
*/ */
static void xgene_dma_free_desc_list(struct xgene_dma_chan *chan, static void xgene_dma_free_desc_list(struct xgene_dma_chan *chan,
struct list_head *list) struct list_head *list)
...@@ -900,15 +867,6 @@ static void xgene_dma_free_desc_list(struct xgene_dma_chan *chan, ...@@ -900,15 +867,6 @@ static void xgene_dma_free_desc_list(struct xgene_dma_chan *chan,
xgene_dma_clean_descriptor(chan, desc); xgene_dma_clean_descriptor(chan, desc);
} }
static void xgene_dma_free_tx_desc_list(struct xgene_dma_chan *chan,
struct list_head *list)
{
struct xgene_dma_desc_sw *desc, *_desc;
list_for_each_entry_safe(desc, _desc, list, node)
xgene_dma_clean_descriptor(chan, desc);
}
static void xgene_dma_free_chan_resources(struct dma_chan *dchan) static void xgene_dma_free_chan_resources(struct dma_chan *dchan)
{ {
struct xgene_dma_chan *chan = to_dma_chan(dchan); struct xgene_dma_chan *chan = to_dma_chan(dchan);
...@@ -985,7 +943,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_memcpy( ...@@ -985,7 +943,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_memcpy(
if (!first) if (!first)
return NULL; return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list); xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL; return NULL;
} }
...@@ -1093,7 +1051,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_sg( ...@@ -1093,7 +1051,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_sg(
if (!first) if (!first)
return NULL; return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list); xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL; return NULL;
} }
...@@ -1141,7 +1099,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_xor( ...@@ -1141,7 +1099,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_xor(
if (!first) if (!first)
return NULL; return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list); xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL; return NULL;
} }
...@@ -1218,7 +1176,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_pq( ...@@ -1218,7 +1176,7 @@ static struct dma_async_tx_descriptor *xgene_dma_prep_pq(
if (!first) if (!first)
return NULL; return NULL;
xgene_dma_free_tx_desc_list(chan, &first->tx_list); xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL; return NULL;
} }
...@@ -1316,7 +1274,6 @@ static void xgene_dma_setup_ring(struct xgene_dma_ring *ring) ...@@ -1316,7 +1274,6 @@ static void xgene_dma_setup_ring(struct xgene_dma_ring *ring)
{ {
void *ring_cfg = ring->state; void *ring_cfg = ring->state;
u64 addr = ring->desc_paddr; u64 addr = ring->desc_paddr;
void *desc;
u32 i, val; u32 i, val;
ring->slots = ring->size / XGENE_DMA_RING_WQ_DESC_SIZE; ring->slots = ring->size / XGENE_DMA_RING_WQ_DESC_SIZE;
...@@ -1358,8 +1315,10 @@ static void xgene_dma_setup_ring(struct xgene_dma_ring *ring) ...@@ -1358,8 +1315,10 @@ static void xgene_dma_setup_ring(struct xgene_dma_ring *ring)
/* Set empty signature to DMA Rx ring descriptors */ /* Set empty signature to DMA Rx ring descriptors */
for (i = 0; i < ring->slots; i++) { for (i = 0; i < ring->slots; i++) {
struct xgene_dma_desc_hw *desc;
desc = &ring->desc_hw[i]; desc = &ring->desc_hw[i];
XGENE_DMA_DESC_SET_EMPTY(desc); desc->m0 = cpu_to_le64(XGENE_DMA_DESC_EMPTY_SIGNATURE);
} }
/* Enable DMA Rx ring interrupt */ /* Enable DMA Rx ring interrupt */
......
#ifndef _PXA_DMA_H_
#define _PXA_DMA_H_
enum pxad_chan_prio {
PXAD_PRIO_HIGHEST = 0,
PXAD_PRIO_NORMAL,
PXAD_PRIO_LOW,
PXAD_PRIO_LOWEST,
};
struct pxad_param {
unsigned int drcmr;
enum pxad_chan_prio prio;
};
struct dma_chan;
#ifdef CONFIG_PXA_DMA
bool pxad_filter_fn(struct dma_chan *chan, void *param);
#else
static inline bool pxad_filter_fn(struct dma_chan *chan, void *param)
{
return false;
}
#endif
#endif /* _PXA_DMA_H_ */
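A hedged example of how a client would consume this header: build a pxad_param and hand pxad_filter_fn to the generic channel-request API. The DRCMR number and priority below are made up, and the include path is assumed to be linux/dma/pxa-dma.h.

#include <linux/dmaengine.h>
#include <linux/dma/pxa-dma.h>

static struct dma_chan *example_request_pxa_chan(void)
{
	struct pxad_param param = {
		.drcmr = 97,			/* hypothetical request line */
		.prio  = PXAD_PRIO_LOWEST,
	};
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	return dma_request_channel(mask, pxad_filter_fn, &param);
}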
...@@ -65,6 +65,7 @@ enum dma_transaction_type { ...@@ -65,6 +65,7 @@ enum dma_transaction_type {
DMA_PQ, DMA_PQ,
DMA_XOR_VAL, DMA_XOR_VAL,
DMA_PQ_VAL, DMA_PQ_VAL,
DMA_MEMSET,
DMA_INTERRUPT, DMA_INTERRUPT,
DMA_SG, DMA_SG,
DMA_PRIVATE, DMA_PRIVATE,
...@@ -122,10 +123,18 @@ enum dma_transfer_direction { ...@@ -122,10 +123,18 @@ enum dma_transfer_direction {
* chunk and before first src/dst address for next chunk. * chunk and before first src/dst address for next chunk.
* Ignored for dst(assumed 0), if dst_inc is true and dst_sgl is false. * Ignored for dst(assumed 0), if dst_inc is true and dst_sgl is false.
* Ignored for src(assumed 0), if src_inc is true and src_sgl is false. * Ignored for src(assumed 0), if src_inc is true and src_sgl is false.
* @dst_icg: Number of bytes to jump after last dst address of this
* chunk and before the first dst address for next chunk.
* Ignored if dst_inc is true and dst_sgl is false.
* @src_icg: Number of bytes to jump after last src address of this
* chunk and before the first src address for next chunk.
* Ignored if src_inc is true and src_sgl is false.
*/ */
struct data_chunk { struct data_chunk {
size_t size; size_t size;
size_t icg; size_t icg;
size_t dst_icg;
size_t src_icg;
}; };
/** /**
...@@ -221,6 +230,16 @@ struct dma_chan_percpu { ...@@ -221,6 +230,16 @@ struct dma_chan_percpu {
unsigned long bytes_transferred; unsigned long bytes_transferred;
}; };
/**
* struct dma_router - DMA router structure
* @dev: pointer to the DMA router device
* @route_free: function to be called when the route can be disconnected
*/
struct dma_router {
struct device *dev;
void (*route_free)(struct device *dev, void *route_data);
};
/** /**
* struct dma_chan - devices supply DMA channels, clients use them * struct dma_chan - devices supply DMA channels, clients use them
* @device: ptr to the dma device who supplies this channel, always !%NULL * @device: ptr to the dma device who supplies this channel, always !%NULL
...@@ -232,6 +251,8 @@ struct dma_chan_percpu { ...@@ -232,6 +251,8 @@ struct dma_chan_percpu {
* @local: per-cpu pointer to a struct dma_chan_percpu * @local: per-cpu pointer to a struct dma_chan_percpu
* @client_count: how many clients are using this channel * @client_count: how many clients are using this channel
* @table_count: number of appearances in the mem-to-mem allocation table * @table_count: number of appearances in the mem-to-mem allocation table
* @router: pointer to the DMA router structure
* @route_data: channel specific data for the router
* @private: private data for certain client-channel associations * @private: private data for certain client-channel associations
*/ */
struct dma_chan { struct dma_chan {
...@@ -247,6 +268,11 @@ struct dma_chan { ...@@ -247,6 +268,11 @@ struct dma_chan {
struct dma_chan_percpu __percpu *local; struct dma_chan_percpu __percpu *local;
int client_count; int client_count;
int table_count; int table_count;
/* DMA router */
struct dma_router *router;
void *route_data;
void *private; void *private;
}; };
...@@ -570,6 +596,7 @@ struct dma_tx_state { ...@@ -570,6 +596,7 @@ struct dma_tx_state {
* @copy_align: alignment shift for memcpy operations * @copy_align: alignment shift for memcpy operations
* @xor_align: alignment shift for xor operations * @xor_align: alignment shift for xor operations
* @pq_align: alignment shift for pq operations * @pq_align: alignment shift for pq operations
* @fill_align: alignment shift for memset operations
* @dev_id: unique device ID * @dev_id: unique device ID
* @dev: struct device reference for dma mapping api * @dev: struct device reference for dma mapping api
* @src_addr_widths: bit mask of src addr widths the device supports * @src_addr_widths: bit mask of src addr widths the device supports
...@@ -588,6 +615,7 @@ struct dma_tx_state { ...@@ -588,6 +615,7 @@ struct dma_tx_state {
* @device_prep_dma_xor_val: prepares a xor validation operation * @device_prep_dma_xor_val: prepares a xor validation operation
* @device_prep_dma_pq: prepares a pq operation * @device_prep_dma_pq: prepares a pq operation
* @device_prep_dma_pq_val: prepares a pqzero_sum operation * @device_prep_dma_pq_val: prepares a pqzero_sum operation
* @device_prep_dma_memset: prepares a memset operation
* @device_prep_dma_interrupt: prepares an end of chain interrupt operation * @device_prep_dma_interrupt: prepares an end of chain interrupt operation
* @device_prep_slave_sg: prepares a slave dma operation * @device_prep_slave_sg: prepares a slave dma operation
* @device_prep_dma_cyclic: prepare a cyclic dma operation suitable for audio. * @device_prep_dma_cyclic: prepare a cyclic dma operation suitable for audio.
...@@ -620,6 +648,7 @@ struct dma_device { ...@@ -620,6 +648,7 @@ struct dma_device {
u8 copy_align; u8 copy_align;
u8 xor_align; u8 xor_align;
u8 pq_align; u8 pq_align;
u8 fill_align;
#define DMA_HAS_PQ_CONTINUE (1 << 15) #define DMA_HAS_PQ_CONTINUE (1 << 15)
int dev_id; int dev_id;
...@@ -650,6 +679,9 @@ struct dma_device { ...@@ -650,6 +679,9 @@ struct dma_device {
struct dma_chan *chan, dma_addr_t *pq, dma_addr_t *src, struct dma_chan *chan, dma_addr_t *pq, dma_addr_t *src,
unsigned int src_cnt, const unsigned char *scf, size_t len, unsigned int src_cnt, const unsigned char *scf, size_t len,
enum sum_check_flags *pqres, unsigned long flags); enum sum_check_flags *pqres, unsigned long flags);
struct dma_async_tx_descriptor *(*device_prep_dma_memset)(
struct dma_chan *chan, dma_addr_t dest, int value, size_t len,
unsigned long flags);
struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)( struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)(
struct dma_chan *chan, unsigned long flags); struct dma_chan *chan, unsigned long flags);
struct dma_async_tx_descriptor *(*device_prep_dma_sg)( struct dma_async_tx_descriptor *(*device_prep_dma_sg)(
...@@ -745,6 +777,17 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma( ...@@ -745,6 +777,17 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
return chan->device->device_prep_interleaved_dma(chan, xt, flags); return chan->device->device_prep_interleaved_dma(chan, xt, flags);
} }
static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memset(
struct dma_chan *chan, dma_addr_t dest, int value, size_t len,
unsigned long flags)
{
if (!chan || !chan->device)
return NULL;
return chan->device->device_prep_dma_memset(chan, dest, value,
len, flags);
}
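A minimal, hedged sketch of a client using the new memset helper: prepare a descriptor that clears a DMA-mapped buffer, submit it, and wait synchronously. The function name is hypothetical and error handling is trimmed.

static int example_dma_clear(struct dma_chan *chan, dma_addr_t buf, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* only channels whose device advertises DMA_MEMSET can do this */
	if (!dma_has_cap(DMA_MEMSET, chan->device->cap_mask))
		return -EOPNOTSUPP;

	tx = dmaengine_prep_dma_memset(chan, buf, 0, len, DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);
	return dma_sync_wait(chan, cookie) == DMA_COMPLETE ? 0 : -EIO;
}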
static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg( static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg(
struct dma_chan *chan, struct dma_chan *chan,
struct scatterlist *dst_sg, unsigned int dst_nents, struct scatterlist *dst_sg, unsigned int dst_nents,
...@@ -820,6 +863,12 @@ static inline bool is_dma_pq_aligned(struct dma_device *dev, size_t off1, ...@@ -820,6 +863,12 @@ static inline bool is_dma_pq_aligned(struct dma_device *dev, size_t off1,
return dmaengine_check_align(dev->pq_align, off1, off2, len); return dmaengine_check_align(dev->pq_align, off1, off2, len);
} }
static inline bool is_dma_fill_aligned(struct dma_device *dev, size_t off1,
size_t off2, size_t len)
{
return dmaengine_check_align(dev->fill_align, off1, off2, len);
}
static inline void static inline void
dma_set_maxpq(struct dma_device *dma, int maxpq, int has_pq_continue) dma_set_maxpq(struct dma_device *dma, int maxpq, int has_pq_continue)
{ {
...@@ -874,6 +923,33 @@ static inline int dma_maxpq(struct dma_device *dma, enum dma_ctrl_flags flags) ...@@ -874,6 +923,33 @@ static inline int dma_maxpq(struct dma_device *dma, enum dma_ctrl_flags flags)
BUG(); BUG();
} }
static inline size_t dmaengine_get_icg(bool inc, bool sgl, size_t icg,
size_t dir_icg)
{
if (inc) {
if (dir_icg)
return dir_icg;
else if (sgl)
return icg;
}
return 0;
}
static inline size_t dmaengine_get_dst_icg(struct dma_interleaved_template *xt,
struct data_chunk *chunk)
{
return dmaengine_get_icg(xt->dst_inc, xt->dst_sgl,
chunk->icg, chunk->dst_icg);
}
static inline size_t dmaengine_get_src_icg(struct dma_interleaved_template *xt,
struct data_chunk *chunk)
{
return dmaengine_get_icg(xt->src_inc, xt->src_sgl,
chunk->icg, chunk->src_icg);
}
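A worked illustration of the per-direction ICG fields and the helpers above (all numbers are made up): one 64-byte chunk per frame, a 192-byte gap between source chunks, destination packed.

static void example_icg_usage(void)
{
	struct data_chunk chunk = {
		.size    = 64,
		.icg     = 0,	/* legacy common gap, unused here */
		.src_icg = 192,	/* gap after each source chunk */
		.dst_icg = 0,	/* destination is packed */
	};
	struct dma_interleaved_template xt = {
		.src_inc = true,
		.src_sgl = true,
		.dst_inc = true,
		.dst_sgl = false,
	};

	/* dir_icg takes precedence when set, so the source gap is 192 ... */
	WARN_ON(dmaengine_get_src_icg(&xt, &chunk) != 192);
	/* ... while dst_sgl=false and dst_icg=0 leave the destination packed */
	WARN_ON(dmaengine_get_dst_icg(&xt, &chunk) != 0);
}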
/* --- public DMA engine API --- */ /* --- public DMA engine API --- */
#ifdef CONFIG_DMA_ENGINE #ifdef CONFIG_DMA_ENGINE
......
...@@ -23,6 +23,9 @@ struct of_dma { ...@@ -23,6 +23,9 @@ struct of_dma {
struct device_node *of_node; struct device_node *of_node;
struct dma_chan *(*of_dma_xlate) struct dma_chan *(*of_dma_xlate)
(struct of_phandle_args *, struct of_dma *); (struct of_phandle_args *, struct of_dma *);
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *);
struct dma_router *dma_router;
void *of_dma_data; void *of_dma_data;
}; };
...@@ -37,12 +40,20 @@ extern int of_dma_controller_register(struct device_node *np, ...@@ -37,12 +40,20 @@ extern int of_dma_controller_register(struct device_node *np,
(struct of_phandle_args *, struct of_dma *), (struct of_phandle_args *, struct of_dma *),
void *data); void *data);
extern void of_dma_controller_free(struct device_node *np); extern void of_dma_controller_free(struct device_node *np);
extern int of_dma_router_register(struct device_node *np,
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *),
struct dma_router *dma_router);
#define of_dma_router_free of_dma_controller_free
extern struct dma_chan *of_dma_request_slave_channel(struct device_node *np, extern struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
const char *name); const char *name);
extern struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_spec, extern struct dma_chan *of_dma_simple_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma); struct of_dma *ofdma);
extern struct dma_chan *of_dma_xlate_by_chan_id(struct of_phandle_args *dma_spec, extern struct dma_chan *of_dma_xlate_by_chan_id(struct of_phandle_args *dma_spec,
struct of_dma *ofdma); struct of_dma *ofdma);
#else #else
static inline int of_dma_controller_register(struct device_node *np, static inline int of_dma_controller_register(struct device_node *np,
struct dma_chan *(*of_dma_xlate) struct dma_chan *(*of_dma_xlate)
...@@ -56,6 +67,16 @@ static inline void of_dma_controller_free(struct device_node *np) ...@@ -56,6 +67,16 @@ static inline void of_dma_controller_free(struct device_node *np)
{ {
} }
static inline int of_dma_router_register(struct device_node *np,
void *(*of_dma_route_allocate)
(struct of_phandle_args *, struct of_dma *),
struct dma_router *dma_router)
{
return -ENODEV;
}
#define of_dma_router_free of_dma_controller_free
static inline struct dma_chan *of_dma_request_slave_channel(struct device_node *np, static inline struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
const char *name) const char *name)
{ {
......
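For orientation, a hedged skeleton of a DMA-router driver built on these hooks (the xbar_* names are made up; the TI DRA7x crossbar driver in this pull follows the same shape): allocate a struct dma_router, point route_free at a teardown callback, and register the route-allocate translator against the router's DT node.

static void xbar_free_route(struct device *dev, void *route_data)
{
	/* undo the crossbar mapping described by route_data */
}

static void *xbar_route_allocate(struct of_phandle_args *dma_spec,
				 struct of_dma *ofdma)
{
	/* pick a free crossbar output, rewrite dma_spec so it resolves to
	 * the real DMA controller, and return driver-private route data
	 */
	return ERR_PTR(-ENODEV);	/* placeholder */
}

static int xbar_probe(struct platform_device *pdev)
{
	struct dma_router *router;

	router = devm_kzalloc(&pdev->dev, sizeof(*router), GFP_KERNEL);
	if (!router)
		return -ENOMEM;

	router->dev = &pdev->dev;
	router->route_free = xbar_free_route;

	return of_dma_router_register(pdev->dev.of_node,
				      xbar_route_allocate, router);
}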
/*
* This is for Renesas R-Car Audio-DMAC-peri-peri.
*
* Copyright (C) 2014 Renesas Electronics Corporation
* Copyright (C) 2014 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
*
* This file is based on the include/linux/sh_dma.h
*
* Header for the new SH dmaengine driver
*
* Copyright (C) 2010 Guennadi Liakhovetski <g.liakhovetski@gmx.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef SH_AUDMAPP_H
#define SH_AUDMAPP_H
#include <linux/dmaengine.h>
struct audmapp_slave_config {
int slave_id;
dma_addr_t src;
dma_addr_t dst;
u32 chcr;
};
struct audmapp_pdata {
struct audmapp_slave_config *slave;
int slave_num;
};
#endif /* SH_AUDMAPP_H */