Commit 6c61403a authored by Linus Torvalds

Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma

Pull slave-dmaengine updates from Vinod Koul:
 - New driver for Qcom bam dma
 - New driver for RCAR peri-peri
 - New driver for FSL eDMA
 - Various odd fixes and updates thru the subsystem

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
  dmaengine: add Qualcomm BAM dma driver
  shdma: add R-Car Audio DMAC peri peri driver
  dmaengine: sirf: enable generic dt binding for dma channels
  dma: omap-dma: Implement device_slave_caps callback
  dmaengine: qcom_bam_dma: Add device tree binding
  dma: dw: Add suspend and resume handling for PCI mode DW_DMAC.
  dma: dw: allocate memory in two stages in probe
  Add new line to test result strings produced in verbose mode
  dmaengine: pch_dma: use tasklet_kill in teardown
  dmaengine: at_hdmac: use tasklet_kill in teardown
  dma: cppi41: start tear down only if channel is busy
  usb: musb: musb_cppi41: Dont reprogram DMA if tear down is initiated
  dmaengine: s3c24xx-dma: make phy->irq signed for error handling
  dma: imx-dma: Add missing module owner field
  dma: imx-dma: Replace printk with dev_*
  dma: fsl-edma: fix static checker warning of NULL dereference
  dma: Remove comment about embedding dma_slave_config into custom structs
  dma: mmp_tdma: move to generic device tree binding
  dma: mmp_pdma: add IRQF_SHARED when request irq
  dma: edma: Fix memory leak in edma_prep_dma_cyclic()
  ...
parents edf2377c 8673bcef
* Freescale enhanced Direct Memory Access (eDMA) Controller
The eDMA channels have multiplexing capability controlled by programmable
memory-mapped registers. The channels are split into two groups, called
DMAMUX0 and DMAMUX1; a specific DMA request source can only be multiplexed
by channels of one particular group, DMAMUX0 or DMAMUX1, but not both.
* eDMA Controller
Required properties:
- compatible :
- "fsl,vf610-edma" for eDMA used similar to that on Vybrid vf610 SoC
- reg : Specifies the base physical address(es) and size of the eDMA registers.
The 1st region is the eDMA control registers' address and size.
The 2nd and 3rd regions are the programmable channel multiplexing
control registers' addresses and sizes.
- interrupts : A list of interrupt-specifiers, one for each entry in
interrupt-names.
- interrupt-names : Should contain:
"edma-tx" - the transmission interrupt
"edma-err" - the error interrupt
- #dma-cells : Must be <2>.
The 1st cell specifies the DMAMUX (0 for DMAMUX0 and 1 for DMAMUX1);
a given request source can only be multiplexed by channels of that
specific DMAMUX group.
The 2nd cell specifies the request source (slot) ID.
See the SoC's reference manual for all the supported request sources.
- dma-channels : Number of channels supported by the controller
- clock-names : A list of channel group clock names. Should contain:
"dmamux0" - clock name of mux0 group
"dmamux1" - clock name of mux1 group
- clocks : A list of phandle and clock-specifier pairs, one for each entry in
clock-names.
Optional properties:
- big-endian: If present, the eDMA registers and hardware scatter/gather
descriptors are implemented in big-endian mode; otherwise they are
little-endian.
Examples:
edma0: dma-controller@40018000 {
#dma-cells = <2>;
compatible = "fsl,vf610-edma";
reg = <0x40018000 0x2000>,
<0x40024000 0x1000>,
<0x40025000 0x1000>;
interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>,
<0 9 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "edma-tx", "edma-err";
dma-channels = <32>;
clock-names = "dmamux0", "dmamux1";
clocks = <&clks VF610_CLK_DMAMUX0>,
<&clks VF610_CLK_DMAMUX1>;
};
* DMA clients
DMA client drivers that use the DMA function must use the format described
in the dma.txt file, using a two-cell specifier for each channel: the 1st cell
specifies the channel group (DMAMUX) in which the request can be multiplexed,
and the 2nd cell specifies the request source.
Examples:
sai2: sai@40031000 {
compatible = "fsl,vf610-sai";
reg = <0x40031000 0x1000>;
interrupts = <0 86 IRQ_TYPE_LEVEL_HIGH>;
clock-names = "sai";
clocks = <&clks VF610_CLK_SAI2>;
dma-names = "tx", "rx";
dmas = <&edma0 0 21>,
<&edma0 0 20>;
status = "disabled";
};
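For reference, a client driver obtains these channels by the names listed in dma-names; the two-cell specifier in dmas (DMAMUX group, request slot) is resolved by the eDMA driver's xlate callback. A minimal, illustrative fragment (the helper and its names are hypothetical, not part of this series):

#include <linux/dmaengine.h>

/* Hypothetical probe helper: request the "tx"/"rx" channels declared in the
 * sai2 node above. dma_request_slave_channel() returns NULL on failure. */
static int example_request_sai_channels(struct device *dev,
					struct dma_chan **tx,
					struct dma_chan **rx)
{
	*tx = dma_request_slave_channel(dev, "tx");
	*rx = dma_request_slave_channel(dev, "rx");
	if (!*tx || !*rx)
		return -ENODEV;	/* fall back to PIO or defer probing */
	return 0;
}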
QCOM BAM DMA controller
Required properties:
- compatible: must contain "qcom,bam-v1.4.0" for MSM8974
- reg: Address range for DMA registers
- interrupts: Should contain the one interrupt shared by all channels
- #dma-cells: must be <1>, the cell in the dmas property of the client device
represents the channel number
- clocks: required clock
- clock-names: must contain "bam_clk" entry
- qcom,ee : indicates the active Execution Environment identifier (0-7) used in
the secure world.
Example:
uart-bam: dma@f9984000 {
compatible = "qcom,bam-v1.4.0";
reg = <0xf9984000 0x15000>;
interrupts = <0 94 0>;
clocks = <&gcc GCC_BAM_DMA_AHB_CLK>;
clock-names = "bam_clk";
#dma-cells = <1>;
qcom,ee = <0>;
};
DMA clients must use the format described in the dma.txt file, using a two cell
specifier for each channel.
Example:
serial@f991e000 {
compatible = "qcom,msm-uart";
reg = <0xf991e000 0x1000>,
<0xf9944000 0x19000>;
interrupts = <0 108 0>;
clocks = <&gcc GCC_BLSP1_UART2_APPS_CLK>,
<&gcc GCC_BLSP1_AHB_CLK>;
clock-names = "core", "iface";
dmas = <&uart-bam 0>, <&uart-bam 1>;
dma-names = "rx", "tx";
};
* CSR SiRFSoC DMA controller
See dma.txt first
Required properties:
- compatible: Should be "sirf,prima2-dmac" or "sirf,marco-dmac"
- reg: Should contain DMA registers location and length.
- interrupts: Should contain one interrupt shared by all channels
- #dma-cells: must be <1>. The single cell in a client's dmas property
specifies the DMA request line number.
- clocks: the clock required by the DMA controller
Example:
Controller:
dmac0: dma-controller@b00b0000 {
compatible = "sirf,prima2-dmac";
reg = <0xb00b0000 0x10000>;
interrupts = <12>;
clocks = <&clks 24>;
#dma-cells = <1>;
};
Client:
Fill in the specific DMA request line in dmas. In the example below, the spi0
read channel uses request line 9 of the 2nd DMA controller and its write
channel uses request line 4 of the same controller; the spi1 read channel uses
request line 12 of the 1st DMA controller and its write channel uses request
line 13 of the same controller:
spi0: spi@b00d0000 {
compatible = "sirf,prima2-spi";
dmas = <&dmac1 9>,
<&dmac1 4>;
dma-names = "rx", "tx";
};
spi1: spi@b0170000 {
compatible = "sirf,prima2-spi";
dmas = <&dmac0 12>,
<&dmac0 13>;
dma-names = "rx", "tx";
};
......@@ -271,6 +271,7 @@ dmac0: dma-controller@b00b0000 {
reg = <0xb00b0000 0x10000>;
interrupts = <12>;
clocks = <&clks 24>;
#dma-cells = <1>;
};
dmac1: dma-controller@b0160000 {
......@@ -279,6 +280,7 @@ dmac1: dma-controller@b0160000 {
reg = <0xb0160000 0x10000>;
interrupts = <13>;
clocks = <&clks 25>;
#dma-cells = <1>;
};
vip@b00C0000 {
......
......@@ -287,6 +287,7 @@ dmac0: dma-controller@b00b0000 {
reg = <0xb00b0000 0x10000>;
interrupts = <12>;
clocks = <&clks 24>;
#dma-cells = <1>;
};
dmac1: dma-controller@b0160000 {
......@@ -295,6 +296,7 @@ dmac1: dma-controller@b0160000 {
reg = <0xb0160000 0x10000>;
interrupts = <13>;
clocks = <&clks 25>;
#dma-cells = <1>;
};
vip@b00C0000 {
......
......@@ -308,7 +308,7 @@ config DMA_OMAP
config DMA_BCM2835
tristate "BCM2835 DMA engine support"
depends on (ARCH_BCM2835 || MACH_BCM2708)
depends on ARCH_BCM2835
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
......@@ -350,6 +350,16 @@ config MOXART_DMA
select DMA_VIRTUAL_CHANNELS
help
Enable support for the MOXA ART SoC DMA controller.
config FSL_EDMA
tristate "Freescale eDMA engine support"
depends on OF
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Support the Freescale eDMA engine with programmable channel
multiplexing capability for DMA request sources(slot).
This module can be found on Freescale Vybrid and LS-1 SoCs.
config DMA_ENGINE
bool
......@@ -401,4 +411,13 @@ config DMATEST
config DMA_ENGINE_RAID
bool
config QCOM_BAM_DMA
tristate "QCOM BAM DMA support"
depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
---help---
Enable support for the QCOM BAM DMA controller. This controller
provides DMA capabilities for a variety of on-chip devices.
endif
......@@ -44,3 +44,5 @@ obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
obj-$(CONFIG_TI_CPPI41) += cppi41.o
obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
......@@ -13,6 +13,7 @@
*/
#include <linux/device.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/list.h>
#include <linux/mutex.h>
......@@ -265,7 +266,7 @@ EXPORT_SYMBOL_GPL(devm_acpi_dma_controller_register);
*/
void devm_acpi_dma_controller_free(struct device *dev)
{
WARN_ON(devres_destroy(dev, devm_acpi_dma_release, NULL, NULL));
WARN_ON(devres_release(dev, devm_acpi_dma_release, NULL, NULL));
}
EXPORT_SYMBOL_GPL(devm_acpi_dma_controller_free);
......@@ -343,7 +344,7 @@ static int acpi_dma_parse_fixed_dma(struct acpi_resource *res, void *data)
* @index: index of FixedDMA descriptor for @dev
*
* Return:
* Pointer to appropriate dma channel on success or NULL on error.
* Pointer to appropriate dma channel on success or an error pointer.
*/
struct dma_chan *acpi_dma_request_slave_chan_by_index(struct device *dev,
size_t index)
......@@ -358,10 +359,10 @@ struct dma_chan *acpi_dma_request_slave_chan_by_index(struct device *dev,
/* Check if the device was enumerated by ACPI */
if (!dev || !ACPI_HANDLE(dev))
return NULL;
return ERR_PTR(-ENODEV);
if (acpi_bus_get_device(ACPI_HANDLE(dev), &adev))
return NULL;
return ERR_PTR(-ENODEV);
memset(&pdata, 0, sizeof(pdata));
pdata.index = index;
......@@ -376,7 +377,7 @@ struct dma_chan *acpi_dma_request_slave_chan_by_index(struct device *dev,
acpi_dev_free_resource_list(&resource_list);
if (dma_spec->slave_id < 0 || dma_spec->chan_id < 0)
return NULL;
return ERR_PTR(-ENODEV);
mutex_lock(&acpi_dma_lock);
......@@ -399,7 +400,7 @@ struct dma_chan *acpi_dma_request_slave_chan_by_index(struct device *dev,
}
mutex_unlock(&acpi_dma_lock);
return chan;
return chan ? chan : ERR_PTR(-EPROBE_DEFER);
}
EXPORT_SYMBOL_GPL(acpi_dma_request_slave_chan_by_index);
......@@ -413,7 +414,7 @@ EXPORT_SYMBOL_GPL(acpi_dma_request_slave_chan_by_index);
* the first FixedDMA descriptor is TX and second is RX.
*
* Return:
* Pointer to appropriate dma channel on success or NULL on error.
* Pointer to appropriate dma channel on success or an error pointer.
*/
struct dma_chan *acpi_dma_request_slave_chan_by_name(struct device *dev,
const char *name)
......@@ -425,7 +426,7 @@ struct dma_chan *acpi_dma_request_slave_chan_by_name(struct device *dev,
else if (!strcmp(name, "rx"))
index = 1;
else
return NULL;
return ERR_PTR(-ENODEV);
return acpi_dma_request_slave_chan_by_index(dev, index);
}
......
......@@ -1569,7 +1569,6 @@ static int at_dma_remove(struct platform_device *pdev)
/* Disable interrupts */
atc_disable_chan_irq(atdma, chan->chan_id);
tasklet_disable(&atchan->tasklet);
tasklet_kill(&atchan->tasklet);
list_del(&chan->device_node);
......
......@@ -620,12 +620,15 @@ static int cppi41_stop_chan(struct dma_chan *chan)
u32 desc_phys;
int ret;
desc_phys = lower_32_bits(c->desc_phys);
desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc);
if (!cdd->chan_busy[desc_num])
return 0;
ret = cppi41_tear_down_chan(c);
if (ret)
return ret;
desc_phys = lower_32_bits(c->desc_phys);
desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc);
WARN_ON(!cdd->chan_busy[desc_num]);
cdd->chan_busy[desc_num] = NULL;
......
......@@ -627,18 +627,13 @@ EXPORT_SYMBOL_GPL(__dma_request_channel);
struct dma_chan *dma_request_slave_channel_reason(struct device *dev,
const char *name)
{
struct dma_chan *chan;
/* If device-tree is present get slave info from here */
if (dev->of_node)
return of_dma_request_slave_channel(dev->of_node, name);
/* If device was enumerated by ACPI get slave info from here */
if (ACPI_HANDLE(dev)) {
chan = acpi_dma_request_slave_chan_by_name(dev, name);
if (chan)
return chan;
}
if (ACPI_HANDLE(dev))
return acpi_dma_request_slave_chan_by_name(dev, name);
return ERR_PTR(-ENODEV);
}
......
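With this change, dma_request_slave_channel_reason() (and the ACPI helpers above) report failure as an error pointer, so -EPROBE_DEFER can propagate to the caller instead of being collapsed into NULL. A minimal sketch of a caller under that assumption (the driver function and channel name are illustrative, not taken from this series):

#include <linux/dmaengine.h>
#include <linux/err.h>

/* Hypothetical probe fragment: request a "tx" slave channel and let
 * -EPROBE_DEFER bubble up instead of being treated as "no DMA". */
static int example_get_tx_chan(struct device *dev, struct dma_chan **out)
{
	struct dma_chan *chan;

	chan = dma_request_slave_channel_reason(dev, "tx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);	/* may be -EPROBE_DEFER or -ENODEV */

	*out = chan;
	return 0;
}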
......@@ -340,7 +340,7 @@ static unsigned int min_odd(unsigned int x, unsigned int y)
static void result(const char *err, unsigned int n, unsigned int src_off,
unsigned int dst_off, unsigned int len, unsigned long data)
{
pr_info("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)",
pr_info("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)\n",
current->comm, n, err, src_off, dst_off, len, data);
}
......@@ -348,7 +348,7 @@ static void dbg_result(const char *err, unsigned int n, unsigned int src_off,
unsigned int dst_off, unsigned int len,
unsigned long data)
{
pr_debug("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)",
pr_debug("%s: result #%u: '%s' with src_off=0x%x dst_off=0x%x len=0x%x (%lu)\n",
current->comm, n, err, src_off, dst_off, len, data);
}
......
......@@ -33,8 +33,8 @@
* of which use ARM any more). See the "Databook" from Synopsys for
* information beyond what licensees probably provide.
*
* The driver has currently been tested only with the Atmel AT32AP7000,
* which does not support descriptor writeback.
* The driver has been tested with the Atmel AT32AP7000, which does not
* support descriptor writeback.
*/
static inline bool is_request_line_unset(struct dw_dma_chan *dwc)
......@@ -1479,7 +1479,6 @@ static void dw_dma_off(struct dw_dma *dw)
int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
{
struct dw_dma *dw;
size_t size;
bool autocfg;
unsigned int dw_params;
unsigned int nr_channels;
......@@ -1487,6 +1486,13 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
int err;
int i;
dw = devm_kzalloc(chip->dev, sizeof(*dw), GFP_KERNEL);
if (!dw)
return -ENOMEM;
dw->regs = chip->regs;
chip->dw = dw;
dw_params = dma_read_byaddr(chip->regs, DW_PARAMS);
autocfg = dw_params >> DW_PARAMS_EN & 0x1;
......@@ -1509,9 +1515,9 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
else
nr_channels = pdata->nr_channels;
size = sizeof(struct dw_dma) + nr_channels * sizeof(struct dw_dma_chan);
dw = devm_kzalloc(chip->dev, size, GFP_KERNEL);
if (!dw)
dw->chan = devm_kcalloc(chip->dev, nr_channels, sizeof(*dw->chan),
GFP_KERNEL);
if (!dw->chan)
return -ENOMEM;
dw->clk = devm_clk_get(chip->dev, "hclk");
......@@ -1519,9 +1525,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
return PTR_ERR(dw->clk);
clk_prepare_enable(dw->clk);
dw->regs = chip->regs;
chip->dw = dw;
/* Get hardware configuration parameters */
if (autocfg) {
max_blk_size = dma_readl(dw, MAX_BLK_SIZE);
......
......@@ -75,6 +75,36 @@ static void dw_pci_remove(struct pci_dev *pdev)
dev_warn(&pdev->dev, "can't remove device properly: %d\n", ret);
}
#ifdef CONFIG_PM_SLEEP
static int dw_pci_suspend_late(struct device *dev)
{
struct pci_dev *pci = to_pci_dev(dev);
struct dw_dma_chip *chip = pci_get_drvdata(pci);
return dw_dma_suspend(chip);
};
static int dw_pci_resume_early(struct device *dev)
{
struct pci_dev *pci = to_pci_dev(dev);
struct dw_dma_chip *chip = pci_get_drvdata(pci);
return dw_dma_resume(chip);
};
#else /* !CONFIG_PM_SLEEP */
#define dw_pci_suspend_late NULL
#define dw_pci_resume_early NULL
#endif /* !CONFIG_PM_SLEEP */
static const struct dev_pm_ops dw_pci_dev_pm_ops = {
.suspend_late = dw_pci_suspend_late,
.resume_early = dw_pci_resume_early,
};
static DEFINE_PCI_DEVICE_TABLE(dw_pci_id_table) = {
/* Medfield */
{ PCI_VDEVICE(INTEL, 0x0827), (kernel_ulong_t)&dw_pci_pdata },
......@@ -83,6 +113,9 @@ static DEFINE_PCI_DEVICE_TABLE(dw_pci_id_table) = {
/* BayTrail */
{ PCI_VDEVICE(INTEL, 0x0f06), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x0f40), (kernel_ulong_t)&dw_pci_pdata },
/* Haswell */
{ PCI_VDEVICE(INTEL, 0x9c60), (kernel_ulong_t)&dw_pci_pdata },
{ }
};
MODULE_DEVICE_TABLE(pci, dw_pci_id_table);
......@@ -92,6 +125,9 @@ static struct pci_driver dw_pci_driver = {
.id_table = dw_pci_id_table,
.probe = dw_pci_probe,
.remove = dw_pci_remove,
.driver = {
.pm = &dw_pci_dev_pm_ops,
},
};
module_pci_driver(dw_pci_driver);
......
......@@ -252,13 +252,13 @@ struct dw_dma {
struct tasklet_struct tasklet;
struct clk *clk;
/* channels */
struct dw_dma_chan *chan;
u8 all_chan_mask;
/* hardware configuration */
unsigned char nr_masters;
unsigned char data_width[4];
struct dw_dma_chan chan[0];
};
static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw)
......
......@@ -539,6 +539,7 @@ static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
edma_alloc_slot(EDMA_CTLR(echan->ch_num),
EDMA_SLOT_ANY);
if (echan->slot[i] < 0) {
kfree(edesc);
dev_err(dev, "Failed to allocate slot\n");
return NULL;
}
......@@ -553,8 +554,10 @@ static struct dma_async_tx_descriptor *edma_prep_dma_cyclic(
ret = edma_config_pset(chan, &edesc->pset[i], src_addr,
dst_addr, burst, dev_width, period_len,
direction);
if (ret < 0)
if (ret < 0) {
kfree(edesc);
return NULL;
}
if (direction == DMA_DEV_TO_MEM)
dst_addr += period_len;
......
/*
* drivers/dma/fsl-edma.c
*
* Copyright 2013-2014 Freescale Semiconductor, Inc.
*
* Driver for the Freescale eDMA engine with flexible channel multiplexing
* capability for DMA request sources. The eDMA block can be found on some
* Vybrid and Layerscape SoCs.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_dma.h>
#include "virt-dma.h"
#define EDMA_CR 0x00
#define EDMA_ES 0x04
#define EDMA_ERQ 0x0C
#define EDMA_EEI 0x14
#define EDMA_SERQ 0x1B
#define EDMA_CERQ 0x1A
#define EDMA_SEEI 0x19
#define EDMA_CEEI 0x18
#define EDMA_CINT 0x1F
#define EDMA_CERR 0x1E
#define EDMA_SSRT 0x1D
#define EDMA_CDNE 0x1C
#define EDMA_INTR 0x24
#define EDMA_ERR 0x2C
#define EDMA_TCD_SADDR(x) (0x1000 + 32 * (x))
#define EDMA_TCD_SOFF(x) (0x1004 + 32 * (x))
#define EDMA_TCD_ATTR(x) (0x1006 + 32 * (x))
#define EDMA_TCD_NBYTES(x) (0x1008 + 32 * (x))
#define EDMA_TCD_SLAST(x) (0x100C + 32 * (x))
#define EDMA_TCD_DADDR(x) (0x1010 + 32 * (x))
#define EDMA_TCD_DOFF(x) (0x1014 + 32 * (x))
#define EDMA_TCD_CITER_ELINK(x) (0x1016 + 32 * (x))
#define EDMA_TCD_CITER(x) (0x1016 + 32 * (x))
#define EDMA_TCD_DLAST_SGA(x) (0x1018 + 32 * (x))
#define EDMA_TCD_CSR(x) (0x101C + 32 * (x))
#define EDMA_TCD_BITER_ELINK(x) (0x101E + 32 * (x))
#define EDMA_TCD_BITER(x) (0x101E + 32 * (x))
#define EDMA_CR_EDBG BIT(1)
#define EDMA_CR_ERCA BIT(2)
#define EDMA_CR_ERGA BIT(3)
#define EDMA_CR_HOE BIT(4)
#define EDMA_CR_HALT BIT(5)
#define EDMA_CR_CLM BIT(6)
#define EDMA_CR_EMLM BIT(7)
#define EDMA_CR_ECX BIT(16)
#define EDMA_CR_CX BIT(17)
#define EDMA_SEEI_SEEI(x) ((x) & 0x1F)
#define EDMA_CEEI_CEEI(x) ((x) & 0x1F)
#define EDMA_CINT_CINT(x) ((x) & 0x1F)
#define EDMA_CERR_CERR(x) ((x) & 0x1F)
#define EDMA_TCD_ATTR_DSIZE(x) (((x) & 0x0007))
#define EDMA_TCD_ATTR_DMOD(x) (((x) & 0x001F) << 3)
#define EDMA_TCD_ATTR_SSIZE(x) (((x) & 0x0007) << 8)
#define EDMA_TCD_ATTR_SMOD(x) (((x) & 0x001F) << 11)
#define EDMA_TCD_ATTR_SSIZE_8BIT (0x0000)
#define EDMA_TCD_ATTR_SSIZE_16BIT (0x0100)
#define EDMA_TCD_ATTR_SSIZE_32BIT (0x0200)
#define EDMA_TCD_ATTR_SSIZE_64BIT (0x0300)
#define EDMA_TCD_ATTR_SSIZE_32BYTE (0x0500)
#define EDMA_TCD_ATTR_DSIZE_8BIT (0x0000)
#define EDMA_TCD_ATTR_DSIZE_16BIT (0x0001)
#define EDMA_TCD_ATTR_DSIZE_32BIT (0x0002)
#define EDMA_TCD_ATTR_DSIZE_64BIT (0x0003)
#define EDMA_TCD_ATTR_DSIZE_32BYTE (0x0005)
#define EDMA_TCD_SOFF_SOFF(x) (x)
#define EDMA_TCD_NBYTES_NBYTES(x) (x)
#define EDMA_TCD_SLAST_SLAST(x) (x)
#define EDMA_TCD_DADDR_DADDR(x) (x)
#define EDMA_TCD_CITER_CITER(x) ((x) & 0x7FFF)
#define EDMA_TCD_DOFF_DOFF(x) (x)
#define EDMA_TCD_DLAST_SGA_DLAST_SGA(x) (x)
#define EDMA_TCD_BITER_BITER(x) ((x) & 0x7FFF)
#define EDMA_TCD_CSR_START BIT(0)
#define EDMA_TCD_CSR_INT_MAJOR BIT(1)
#define EDMA_TCD_CSR_INT_HALF BIT(2)
#define EDMA_TCD_CSR_D_REQ BIT(3)
#define EDMA_TCD_CSR_E_SG BIT(4)
#define EDMA_TCD_CSR_E_LINK BIT(5)
#define EDMA_TCD_CSR_ACTIVE BIT(6)
#define EDMA_TCD_CSR_DONE BIT(7)
#define EDMAMUX_CHCFG_DIS 0x0
#define EDMAMUX_CHCFG_ENBL 0x80
#define EDMAMUX_CHCFG_SOURCE(n) ((n) & 0x3F)
#define DMAMUX_NR 2
#define FSL_EDMA_BUSWIDTHS BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_8_BYTES)
struct fsl_edma_hw_tcd {
u32 saddr;
u16 soff;
u16 attr;
u32 nbytes;
u32 slast;
u32 daddr;
u16 doff;
u16 citer;
u32 dlast_sga;
u16 csr;
u16 biter;
};
struct fsl_edma_sw_tcd {
dma_addr_t ptcd;
struct fsl_edma_hw_tcd *vtcd;
};
struct fsl_edma_slave_config {
enum dma_transfer_direction dir;
enum dma_slave_buswidth addr_width;
u32 dev_addr;
u32 burst;
u32 attr;
};
struct fsl_edma_chan {
struct virt_dma_chan vchan;
enum dma_status status;
struct fsl_edma_engine *edma;
struct fsl_edma_desc *edesc;
struct fsl_edma_slave_config fsc;
struct dma_pool *tcd_pool;
};
struct fsl_edma_desc {
struct virt_dma_desc vdesc;
struct fsl_edma_chan *echan;
bool iscyclic;
unsigned int n_tcds;
struct fsl_edma_sw_tcd tcd[];
};
struct fsl_edma_engine {
struct dma_device dma_dev;
void __iomem *membase;
void __iomem *muxbase[DMAMUX_NR];
struct clk *muxclk[DMAMUX_NR];
struct mutex fsl_edma_mutex;
u32 n_chans;
int txirq;
int errirq;
bool big_endian;
struct fsl_edma_chan chans[];
};
/*
* R/W functions for big- or little-endian registers
* the eDMA controller's endian is independent of the CPU core's endian.
*/
static u16 edma_readw(struct fsl_edma_engine *edma, void __iomem *addr)
{
if (edma->big_endian)
return ioread16be(addr);
else
return ioread16(addr);
}
static u32 edma_readl(struct fsl_edma_engine *edma, void __iomem *addr)
{
if (edma->big_endian)
return ioread32be(addr);
else
return ioread32(addr);
}
static void edma_writeb(struct fsl_edma_engine *edma, u8 val, void __iomem *addr)
{
iowrite8(val, addr);
}
static void edma_writew(struct fsl_edma_engine *edma, u16 val, void __iomem *addr)
{
if (edma->big_endian)
iowrite16be(val, addr);
else
iowrite16(val, addr);
}
static void edma_writel(struct fsl_edma_engine *edma, u32 val, void __iomem *addr)
{
if (edma->big_endian)
iowrite32be(val, addr);
else
iowrite32(val, addr);
}
static struct fsl_edma_chan *to_fsl_edma_chan(struct dma_chan *chan)
{
return container_of(chan, struct fsl_edma_chan, vchan.chan);
}
static struct fsl_edma_desc *to_fsl_edma_desc(struct virt_dma_desc *vd)
{
return container_of(vd, struct fsl_edma_desc, vdesc);
}
static void fsl_edma_enable_request(struct fsl_edma_chan *fsl_chan)
{
void __iomem *addr = fsl_chan->edma->membase;
u32 ch = fsl_chan->vchan.chan.chan_id;
edma_writeb(fsl_chan->edma, EDMA_SEEI_SEEI(ch), addr + EDMA_SEEI);
edma_writeb(fsl_chan->edma, ch, addr + EDMA_SERQ);
}
static void fsl_edma_disable_request(struct fsl_edma_chan *fsl_chan)
{
void __iomem *addr = fsl_chan->edma->membase;
u32 ch = fsl_chan->vchan.chan.chan_id;
edma_writeb(fsl_chan->edma, ch, addr + EDMA_CERQ);
edma_writeb(fsl_chan->edma, EDMA_CEEI_CEEI(ch), addr + EDMA_CEEI);
}
static void fsl_edma_chan_mux(struct fsl_edma_chan *fsl_chan,
unsigned int slot, bool enable)
{
u32 ch = fsl_chan->vchan.chan.chan_id;
void __iomem *muxaddr = fsl_chan->edma->muxbase[ch / DMAMUX_NR];
unsigned chans_per_mux, ch_off;
chans_per_mux = fsl_chan->edma->n_chans / DMAMUX_NR;
ch_off = fsl_chan->vchan.chan.chan_id % chans_per_mux;
if (enable)
edma_writeb(fsl_chan->edma,
EDMAMUX_CHCFG_ENBL | EDMAMUX_CHCFG_SOURCE(slot),
muxaddr + ch_off);
else
edma_writeb(fsl_chan->edma, EDMAMUX_CHCFG_DIS, muxaddr + ch_off);
}
static unsigned int fsl_edma_get_tcd_attr(enum dma_slave_buswidth addr_width)
{
switch (addr_width) {
case 1:
return EDMA_TCD_ATTR_SSIZE_8BIT | EDMA_TCD_ATTR_DSIZE_8BIT;
case 2:
return EDMA_TCD_ATTR_SSIZE_16BIT | EDMA_TCD_ATTR_DSIZE_16BIT;
case 4:
return EDMA_TCD_ATTR_SSIZE_32BIT | EDMA_TCD_ATTR_DSIZE_32BIT;
case 8:
return EDMA_TCD_ATTR_SSIZE_64BIT | EDMA_TCD_ATTR_DSIZE_64BIT;
default:
return EDMA_TCD_ATTR_SSIZE_32BIT | EDMA_TCD_ATTR_DSIZE_32BIT;
}
}
static void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
{
struct fsl_edma_desc *fsl_desc;
int i;
fsl_desc = to_fsl_edma_desc(vdesc);
for (i = 0; i < fsl_desc->n_tcds; i++)
dma_pool_free(fsl_desc->echan->tcd_pool,
fsl_desc->tcd[i].vtcd,
fsl_desc->tcd[i].ptcd);
kfree(fsl_desc);
}
static int fsl_edma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
struct dma_slave_config *cfg = (void *)arg;
unsigned long flags;
LIST_HEAD(head);
switch (cmd) {
case DMA_TERMINATE_ALL:
spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
fsl_edma_disable_request(fsl_chan);
fsl_chan->edesc = NULL;
vchan_get_all_descriptors(&fsl_chan->vchan, &head);
spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
return 0;
case DMA_SLAVE_CONFIG:
fsl_chan->fsc.dir = cfg->direction;
if (cfg->direction == DMA_DEV_TO_MEM) {
fsl_chan->fsc.dev_addr = cfg->src_addr;
fsl_chan->fsc.addr_width = cfg->src_addr_width;
fsl_chan->fsc.burst = cfg->src_maxburst;
fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->src_addr_width);
} else if (cfg->direction == DMA_MEM_TO_DEV) {
fsl_chan->fsc.dev_addr = cfg->dst_addr;
fsl_chan->fsc.addr_width = cfg->dst_addr_width;
fsl_chan->fsc.burst = cfg->dst_maxburst;
fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->dst_addr_width);
} else {
return -EINVAL;
}
return 0;
case DMA_PAUSE:
spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
if (fsl_chan->edesc) {
fsl_edma_disable_request(fsl_chan);
fsl_chan->status = DMA_PAUSED;
}
spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
return 0;
case DMA_RESUME:
spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
if (fsl_chan->edesc) {
fsl_edma_enable_request(fsl_chan);
fsl_chan->status = DMA_IN_PROGRESS;
}
spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
return 0;
default:
return -ENXIO;
}
}
static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan,
struct virt_dma_desc *vdesc, bool in_progress)
{
struct fsl_edma_desc *edesc = fsl_chan->edesc;
void __iomem *addr = fsl_chan->edma->membase;
u32 ch = fsl_chan->vchan.chan.chan_id;
enum dma_transfer_direction dir = fsl_chan->fsc.dir;
dma_addr_t cur_addr, dma_addr;
size_t len, size;
int i;
/* calculate the total size in this desc */
for (len = i = 0; i < fsl_chan->edesc->n_tcds; i++)
len += edma_readl(fsl_chan->edma, &(edesc->tcd[i].vtcd->nbytes))
* edma_readw(fsl_chan->edma, &(edesc->tcd[i].vtcd->biter));
if (!in_progress)
return len;
if (dir == DMA_MEM_TO_DEV)
cur_addr = edma_readl(fsl_chan->edma, addr + EDMA_TCD_SADDR(ch));
else
cur_addr = edma_readl(fsl_chan->edma, addr + EDMA_TCD_DADDR(ch));
/* figure out the finished and calculate the residue */
for (i = 0; i < fsl_chan->edesc->n_tcds; i++) {
size = edma_readl(fsl_chan->edma, &(edesc->tcd[i].vtcd->nbytes))
* edma_readw(fsl_chan->edma, &(edesc->tcd[i].vtcd->biter));
if (dir == DMA_MEM_TO_DEV)
dma_addr = edma_readl(fsl_chan->edma,
&(edesc->tcd[i].vtcd->saddr));
else
dma_addr = edma_readl(fsl_chan->edma,
&(edesc->tcd[i].vtcd->daddr));
len -= size;
if (cur_addr > dma_addr && cur_addr < dma_addr + size) {
len += dma_addr + size - cur_addr;
break;
}
}
return len;
}
static enum dma_status fsl_edma_tx_status(struct dma_chan *chan,
dma_cookie_t cookie, struct dma_tx_state *txstate)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
struct virt_dma_desc *vdesc;
enum dma_status status;
unsigned long flags;
status = dma_cookie_status(chan, cookie, txstate);
if (status == DMA_COMPLETE)
return status;
if (!txstate)
return fsl_chan->status;
spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
vdesc = vchan_find_desc(&fsl_chan->vchan, cookie);
if (fsl_chan->edesc && cookie == fsl_chan->edesc->vdesc.tx.cookie)
txstate->residue = fsl_edma_desc_residue(fsl_chan, vdesc, true);
else if (vdesc)
txstate->residue = fsl_edma_desc_residue(fsl_chan, vdesc, false);
else
txstate->residue = 0;
spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
return fsl_chan->status;
}
static void fsl_edma_set_tcd_params(struct fsl_edma_chan *fsl_chan,
u32 src, u32 dst, u16 attr, u16 soff, u32 nbytes,
u32 slast, u16 citer, u16 biter, u32 doff, u32 dlast_sga,
u16 csr)
{
void __iomem *addr = fsl_chan->edma->membase;
u32 ch = fsl_chan->vchan.chan.chan_id;
/*
* TCD parameters have been swapped in fill_tcd_params(),
* so just write them to registers in the cpu endian here
*/
writew(0, addr + EDMA_TCD_CSR(ch));
writel(src, addr + EDMA_TCD_SADDR(ch));
writel(dst, addr + EDMA_TCD_DADDR(ch));
writew(attr, addr + EDMA_TCD_ATTR(ch));
writew(soff, addr + EDMA_TCD_SOFF(ch));
writel(nbytes, addr + EDMA_TCD_NBYTES(ch));
writel(slast, addr + EDMA_TCD_SLAST(ch));
writew(citer, addr + EDMA_TCD_CITER(ch));
writew(biter, addr + EDMA_TCD_BITER(ch));
writew(doff, addr + EDMA_TCD_DOFF(ch));
writel(dlast_sga, addr + EDMA_TCD_DLAST_SGA(ch));
writew(csr, addr + EDMA_TCD_CSR(ch));
}
static void fill_tcd_params(struct fsl_edma_engine *edma,
struct fsl_edma_hw_tcd *tcd, u32 src, u32 dst,
u16 attr, u16 soff, u32 nbytes, u32 slast, u16 citer,
u16 biter, u16 doff, u32 dlast_sga, bool major_int,
bool disable_req, bool enable_sg)
{
u16 csr = 0;
/*
* eDMA hardware SGs require the TCD parameters stored in memory
* the same endian as the eDMA module so that they can be loaded
* automatically by the engine
*/
edma_writel(edma, src, &(tcd->saddr));
edma_writel(edma, dst, &(tcd->daddr));
edma_writew(edma, attr, &(tcd->attr));
edma_writew(edma, EDMA_TCD_SOFF_SOFF(soff), &(tcd->soff));
edma_writel(edma, EDMA_TCD_NBYTES_NBYTES(nbytes), &(tcd->nbytes));
edma_writel(edma, EDMA_TCD_SLAST_SLAST(slast), &(tcd->slast));
edma_writew(edma, EDMA_TCD_CITER_CITER(citer), &(tcd->citer));
edma_writew(edma, EDMA_TCD_DOFF_DOFF(doff), &(tcd->doff));
edma_writel(edma, EDMA_TCD_DLAST_SGA_DLAST_SGA(dlast_sga), &(tcd->dlast_sga));
edma_writew(edma, EDMA_TCD_BITER_BITER(biter), &(tcd->biter));
if (major_int)
csr |= EDMA_TCD_CSR_INT_MAJOR;
if (disable_req)
csr |= EDMA_TCD_CSR_D_REQ;
if (enable_sg)
csr |= EDMA_TCD_CSR_E_SG;
edma_writew(edma, csr, &(tcd->csr));
}
static struct fsl_edma_desc *fsl_edma_alloc_desc(struct fsl_edma_chan *fsl_chan,
int sg_len)
{
struct fsl_edma_desc *fsl_desc;
int i;
fsl_desc = kzalloc(sizeof(*fsl_desc) + sizeof(struct fsl_edma_sw_tcd) * sg_len,
GFP_NOWAIT);
if (!fsl_desc)
return NULL;
fsl_desc->echan = fsl_chan;
fsl_desc->n_tcds = sg_len;
for (i = 0; i < sg_len; i++) {
fsl_desc->tcd[i].vtcd = dma_pool_alloc(fsl_chan->tcd_pool,
GFP_NOWAIT, &fsl_desc->tcd[i].ptcd);
if (!fsl_desc->tcd[i].vtcd)
goto err;
}
return fsl_desc;
err:
while (--i >= 0)
dma_pool_free(fsl_chan->tcd_pool, fsl_desc->tcd[i].vtcd,
fsl_desc->tcd[i].ptcd);
kfree(fsl_desc);
return NULL;
}
static struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
struct fsl_edma_desc *fsl_desc;
dma_addr_t dma_buf_next;
int sg_len, i;
u32 src_addr, dst_addr, last_sg, nbytes;
u16 soff, doff, iter;
if (!is_slave_direction(fsl_chan->fsc.dir))
return NULL;
sg_len = buf_len / period_len;
fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len);
if (!fsl_desc)
return NULL;
fsl_desc->iscyclic = true;
dma_buf_next = dma_addr;
nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst;
iter = period_len / nbytes;
for (i = 0; i < sg_len; i++) {
if (dma_buf_next >= dma_addr + buf_len)
dma_buf_next = dma_addr;
/* get next sg's physical address */
last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd;
if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) {
src_addr = dma_buf_next;
dst_addr = fsl_chan->fsc.dev_addr;
soff = fsl_chan->fsc.addr_width;
doff = 0;
} else {
src_addr = fsl_chan->fsc.dev_addr;
dst_addr = dma_buf_next;
soff = 0;
doff = fsl_chan->fsc.addr_width;
}
fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd, src_addr,
dst_addr, fsl_chan->fsc.attr, soff, nbytes, 0,
iter, iter, doff, last_sg, true, false, true);
dma_buf_next += period_len;
}
return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags);
}
static struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
struct fsl_edma_desc *fsl_desc;
struct scatterlist *sg;
u32 src_addr, dst_addr, last_sg, nbytes;
u16 soff, doff, iter;
int i;
if (!is_slave_direction(fsl_chan->fsc.dir))
return NULL;
fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len);
if (!fsl_desc)
return NULL;
fsl_desc->iscyclic = false;
nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst;
for_each_sg(sgl, sg, sg_len, i) {
/* get next sg's physical address */
last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd;
if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) {
src_addr = sg_dma_address(sg);
dst_addr = fsl_chan->fsc.dev_addr;
soff = fsl_chan->fsc.addr_width;
doff = 0;
} else {
src_addr = fsl_chan->fsc.dev_addr;
dst_addr = sg_dma_address(sg);
soff = 0;
doff = fsl_chan->fsc.addr_width;
}
iter = sg_dma_len(sg) / nbytes;
if (i < sg_len - 1) {
last_sg = fsl_desc->tcd[(i + 1)].ptcd;
fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd,
src_addr, dst_addr, fsl_chan->fsc.attr,
soff, nbytes, 0, iter, iter, doff, last_sg,
false, false, true);
} else {
last_sg = 0;
fill_tcd_params(fsl_chan->edma, fsl_desc->tcd[i].vtcd,
src_addr, dst_addr, fsl_chan->fsc.attr,
soff, nbytes, 0, iter, iter, doff, last_sg,
true, true, false);
}
}
return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags);
}
static void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan)
{
struct fsl_edma_hw_tcd *tcd;
struct virt_dma_desc *vdesc;
vdesc = vchan_next_desc(&fsl_chan->vchan);
if (!vdesc)
return;
fsl_chan->edesc = to_fsl_edma_desc(vdesc);
tcd = fsl_chan->edesc->tcd[0].vtcd;
fsl_edma_set_tcd_params(fsl_chan, tcd->saddr, tcd->daddr, tcd->attr,
tcd->soff, tcd->nbytes, tcd->slast, tcd->citer,
tcd->biter, tcd->doff, tcd->dlast_sga, tcd->csr);
fsl_edma_enable_request(fsl_chan);
fsl_chan->status = DMA_IN_PROGRESS;
}
static irqreturn_t fsl_edma_tx_handler(int irq, void *dev_id)
{
struct fsl_edma_engine *fsl_edma = dev_id;
unsigned int intr, ch;
void __iomem *base_addr;
struct fsl_edma_chan *fsl_chan;
base_addr = fsl_edma->membase;
intr = edma_readl(fsl_edma, base_addr + EDMA_INTR);
if (!intr)
return IRQ_NONE;
for (ch = 0; ch < fsl_edma->n_chans; ch++) {
if (intr & (0x1 << ch)) {
edma_writeb(fsl_edma, EDMA_CINT_CINT(ch),
base_addr + EDMA_CINT);
fsl_chan = &fsl_edma->chans[ch];
spin_lock(&fsl_chan->vchan.lock);
if (!fsl_chan->edesc->iscyclic) {
list_del(&fsl_chan->edesc->vdesc.node);
vchan_cookie_complete(&fsl_chan->edesc->vdesc);
fsl_chan->edesc = NULL;
fsl_chan->status = DMA_COMPLETE;
} else {
vchan_cyclic_callback(&fsl_chan->edesc->vdesc);
}
if (!fsl_chan->edesc)
fsl_edma_xfer_desc(fsl_chan);
spin_unlock(&fsl_chan->vchan.lock);
}
}
return IRQ_HANDLED;
}
static irqreturn_t fsl_edma_err_handler(int irq, void *dev_id)
{
struct fsl_edma_engine *fsl_edma = dev_id;
unsigned int err, ch;
err = edma_readl(fsl_edma, fsl_edma->membase + EDMA_ERR);
if (!err)
return IRQ_NONE;
for (ch = 0; ch < fsl_edma->n_chans; ch++) {
if (err & (0x1 << ch)) {
fsl_edma_disable_request(&fsl_edma->chans[ch]);
edma_writeb(fsl_edma, EDMA_CERR_CERR(ch),
fsl_edma->membase + EDMA_CERR);
fsl_edma->chans[ch].status = DMA_ERROR;
}
}
return IRQ_HANDLED;
}
static irqreturn_t fsl_edma_irq_handler(int irq, void *dev_id)
{
if (fsl_edma_tx_handler(irq, dev_id) == IRQ_HANDLED)
return IRQ_HANDLED;
return fsl_edma_err_handler(irq, dev_id);
}
static void fsl_edma_issue_pending(struct dma_chan *chan)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
unsigned long flags;
spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
if (vchan_issue_pending(&fsl_chan->vchan) && !fsl_chan->edesc)
fsl_edma_xfer_desc(fsl_chan);
spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
}
static struct dma_chan *fsl_edma_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct fsl_edma_engine *fsl_edma = ofdma->of_dma_data;
struct dma_chan *chan, *_chan;
if (dma_spec->args_count != 2)
return NULL;
mutex_lock(&fsl_edma->fsl_edma_mutex);
list_for_each_entry_safe(chan, _chan, &fsl_edma->dma_dev.channels, device_node) {
if (chan->client_count)
continue;
if ((chan->chan_id / DMAMUX_NR) == dma_spec->args[0]) {
chan = dma_get_slave_channel(chan);
if (chan) {
chan->device->privatecnt++;
fsl_edma_chan_mux(to_fsl_edma_chan(chan),
dma_spec->args[1], true);
mutex_unlock(&fsl_edma->fsl_edma_mutex);
return chan;
}
}
}
mutex_unlock(&fsl_edma->fsl_edma_mutex);
return NULL;
}
static int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
fsl_chan->tcd_pool = dma_pool_create("tcd_pool", chan->device->dev,
sizeof(struct fsl_edma_hw_tcd),
32, 0);
return 0;
}
static void fsl_edma_free_chan_resources(struct dma_chan *chan)
{
struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
unsigned long flags;
LIST_HEAD(head);
spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
fsl_edma_disable_request(fsl_chan);
fsl_edma_chan_mux(fsl_chan, 0, false);
fsl_chan->edesc = NULL;
vchan_get_all_descriptors(&fsl_chan->vchan, &head);
spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
dma_pool_destroy(fsl_chan->tcd_pool);
fsl_chan->tcd_pool = NULL;
}
static int fsl_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = FSL_EDMA_BUSWIDTHS;
caps->dstn_addr_widths = FSL_EDMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
return 0;
}
static int
fsl_edma_irq_init(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
{
int ret;
fsl_edma->txirq = platform_get_irq_byname(pdev, "edma-tx");
if (fsl_edma->txirq < 0) {
dev_err(&pdev->dev, "Can't get edma-tx irq.\n");
return fsl_edma->txirq;
}
fsl_edma->errirq = platform_get_irq_byname(pdev, "edma-err");
if (fsl_edma->errirq < 0) {
dev_err(&pdev->dev, "Can't get edma-err irq.\n");
return fsl_edma->errirq;
}
if (fsl_edma->txirq == fsl_edma->errirq) {
ret = devm_request_irq(&pdev->dev, fsl_edma->txirq,
fsl_edma_irq_handler, 0, "eDMA", fsl_edma);
if (ret) {
dev_err(&pdev->dev, "Can't register eDMA IRQ.\n");
return ret;
}
} else {
ret = devm_request_irq(&pdev->dev, fsl_edma->txirq,
fsl_edma_tx_handler, 0, "eDMA tx", fsl_edma);
if (ret) {
dev_err(&pdev->dev, "Can't register eDMA tx IRQ.\n");
return ret;
}
ret = devm_request_irq(&pdev->dev, fsl_edma->errirq,
fsl_edma_err_handler, 0, "eDMA err", fsl_edma);
if (ret) {
dev_err(&pdev->dev, "Can't register eDMA err IRQ.\n");
return ret;
}
}
return 0;
}
static int fsl_edma_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct fsl_edma_engine *fsl_edma;
struct fsl_edma_chan *fsl_chan;
struct resource *res;
int len, chans;
int ret, i;
ret = of_property_read_u32(np, "dma-channels", &chans);
if (ret) {
dev_err(&pdev->dev, "Can't get dma-channels.\n");
return ret;
}
len = sizeof(*fsl_edma) + sizeof(*fsl_chan) * chans;
fsl_edma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
if (!fsl_edma)
return -ENOMEM;
fsl_edma->n_chans = chans;
mutex_init(&fsl_edma->fsl_edma_mutex);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
fsl_edma->membase = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(fsl_edma->membase))
return PTR_ERR(fsl_edma->membase);
for (i = 0; i < DMAMUX_NR; i++) {
char clkname[32];
res = platform_get_resource(pdev, IORESOURCE_MEM, 1 + i);
fsl_edma->muxbase[i] = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(fsl_edma->muxbase[i]))
return PTR_ERR(fsl_edma->muxbase[i]);
sprintf(clkname, "dmamux%d", i);
fsl_edma->muxclk[i] = devm_clk_get(&pdev->dev, clkname);
if (IS_ERR(fsl_edma->muxclk[i])) {
dev_err(&pdev->dev, "Missing DMAMUX block clock.\n");
return PTR_ERR(fsl_edma->muxclk[i]);
}
ret = clk_prepare_enable(fsl_edma->muxclk[i]);
if (ret) {
dev_err(&pdev->dev, "DMAMUX clk block failed.\n");
return ret;
}
}
ret = fsl_edma_irq_init(pdev, fsl_edma);
if (ret)
return ret;
fsl_edma->big_endian = of_property_read_bool(np, "big-endian");
INIT_LIST_HEAD(&fsl_edma->dma_dev.channels);
for (i = 0; i < fsl_edma->n_chans; i++) {
struct fsl_edma_chan *fsl_chan = &fsl_edma->chans[i];
fsl_chan->edma = fsl_edma;
fsl_chan->vchan.desc_free = fsl_edma_free_desc;
vchan_init(&fsl_chan->vchan, &fsl_edma->dma_dev);
edma_writew(fsl_edma, 0x0, fsl_edma->membase + EDMA_TCD_CSR(i));
fsl_edma_chan_mux(fsl_chan, 0, false);
}
dma_cap_set(DMA_PRIVATE, fsl_edma->dma_dev.cap_mask);
dma_cap_set(DMA_SLAVE, fsl_edma->dma_dev.cap_mask);
dma_cap_set(DMA_CYCLIC, fsl_edma->dma_dev.cap_mask);
fsl_edma->dma_dev.dev = &pdev->dev;
fsl_edma->dma_dev.device_alloc_chan_resources
= fsl_edma_alloc_chan_resources;
fsl_edma->dma_dev.device_free_chan_resources
= fsl_edma_free_chan_resources;
fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic;
fsl_edma->dma_dev.device_control = fsl_edma_control;
fsl_edma->dma_dev.device_issue_pending = fsl_edma_issue_pending;
fsl_edma->dma_dev.device_slave_caps = fsl_dma_device_slave_caps;
platform_set_drvdata(pdev, fsl_edma);
ret = dma_async_device_register(&fsl_edma->dma_dev);
if (ret) {
dev_err(&pdev->dev, "Can't register Freescale eDMA engine.\n");
return ret;
}
ret = of_dma_controller_register(np, fsl_edma_xlate, fsl_edma);
if (ret) {
dev_err(&pdev->dev, "Can't register Freescale eDMA of_dma.\n");
dma_async_device_unregister(&fsl_edma->dma_dev);
return ret;
}
/* enable round robin arbitration */
edma_writel(fsl_edma, EDMA_CR_ERGA | EDMA_CR_ERCA, fsl_edma->membase + EDMA_CR);
return 0;
}
static int fsl_edma_remove(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct fsl_edma_engine *fsl_edma = platform_get_drvdata(pdev);
int i;
of_dma_controller_free(np);
dma_async_device_unregister(&fsl_edma->dma_dev);
for (i = 0; i < DMAMUX_NR; i++)
clk_disable_unprepare(fsl_edma->muxclk[i]);
return 0;
}
static const struct of_device_id fsl_edma_dt_ids[] = {
{ .compatible = "fsl,vf610-edma", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids);
static struct platform_driver fsl_edma_driver = {
.driver = {
.name = "fsl-edma",
.owner = THIS_MODULE,
.of_match_table = fsl_edma_dt_ids,
},
.probe = fsl_edma_probe,
.remove = fsl_edma_remove,
};
module_platform_driver(fsl_edma_driver);
MODULE_ALIAS("platform:fsl-edma");
MODULE_DESCRIPTION("Freescale eDMA engine driver");
MODULE_LICENSE("GPL v2");
......@@ -422,12 +422,12 @@ static irqreturn_t imxdma_err_handler(int irq, void *dev_id)
/* Tasklet error handler */
tasklet_schedule(&imxdma->channel[i].dma_tasklet);
printk(KERN_WARNING
"DMA timeout on channel %d -%s%s%s%s\n", i,
errcode & IMX_DMA_ERR_BURST ? " burst" : "",
errcode & IMX_DMA_ERR_REQUEST ? " request" : "",
errcode & IMX_DMA_ERR_TRANSFER ? " transfer" : "",
errcode & IMX_DMA_ERR_BUFFER ? " buffer" : "");
dev_warn(imxdma->dev,
"DMA timeout on channel %d -%s%s%s%s\n", i,
errcode & IMX_DMA_ERR_BURST ? " burst" : "",
errcode & IMX_DMA_ERR_REQUEST ? " request" : "",
errcode & IMX_DMA_ERR_TRANSFER ? " transfer" : "",
errcode & IMX_DMA_ERR_BUFFER ? " buffer" : "");
}
return IRQ_HANDLED;
}
......@@ -1236,6 +1236,7 @@ static int imxdma_remove(struct platform_device *pdev)
static struct platform_driver imxdma_driver = {
.driver = {
.name = "imx-dma",
.owner = THIS_MODULE,
.of_match_table = imx_dma_of_dev_id,
},
.id_table = imx_dma_devtype,
......
......@@ -867,8 +867,8 @@ static int mmp_pdma_chan_init(struct mmp_pdma_device *pdev, int idx, int irq)
phy->base = pdev->base;
if (irq) {
ret = devm_request_irq(pdev->dev, irq, mmp_pdma_chan_handler, 0,
"pdma", phy);
ret = devm_request_irq(pdev->dev, irq, mmp_pdma_chan_handler,
IRQF_SHARED, "pdma", phy);
if (ret) {
dev_err(pdev->dev, "channel request irq fail!\n");
return ret;
......@@ -957,8 +957,8 @@ static int mmp_pdma_probe(struct platform_device *op)
if (irq_num != dma_channels) {
/* all chan share one irq, demux inside */
irq = platform_get_irq(op, 0);
ret = devm_request_irq(pdev->dev, irq, mmp_pdma_int_handler, 0,
"pdma", pdev);
ret = devm_request_irq(pdev->dev, irq, mmp_pdma_int_handler,
IRQF_SHARED, "pdma", pdev);
if (ret)
return ret;
}
......
......@@ -22,6 +22,7 @@
#include <mach/regs-icu.h>
#include <linux/platform_data/dma-mmp_tdma.h>
#include <linux/of_device.h>
#include <linux/of_dma.h>
#include "dmaengine.h"
......@@ -541,6 +542,45 @@ static int mmp_tdma_chan_init(struct mmp_tdma_device *tdev,
return 0;
}
struct mmp_tdma_filter_param {
struct device_node *of_node;
unsigned int chan_id;
};
static bool mmp_tdma_filter_fn(struct dma_chan *chan, void *fn_param)
{
struct mmp_tdma_filter_param *param = fn_param;
struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
struct dma_device *pdma_device = tdmac->chan.device;
if (pdma_device->dev->of_node != param->of_node)
return false;
if (chan->chan_id != param->chan_id)
return false;
return true;
}
struct dma_chan *mmp_tdma_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct mmp_tdma_device *tdev = ofdma->of_dma_data;
dma_cap_mask_t mask = tdev->device.cap_mask;
struct mmp_tdma_filter_param param;
if (dma_spec->args_count != 1)
return NULL;
param.of_node = ofdma->of_node;
param.chan_id = dma_spec->args[0];
if (param.chan_id >= TDMA_CHANNEL_NUM)
return NULL;
return dma_request_channel(mask, mmp_tdma_filter_fn, &param);
}
static struct of_device_id mmp_tdma_dt_ids[] = {
{ .compatible = "marvell,adma-1.0", .data = (void *)MMP_AUD_TDMA},
{ .compatible = "marvell,pxa910-squ", .data = (void *)PXA910_SQU},
......@@ -631,6 +671,16 @@ static int mmp_tdma_probe(struct platform_device *pdev)
return ret;
}
if (pdev->dev.of_node) {
ret = of_dma_controller_register(pdev->dev.of_node,
mmp_tdma_xlate, tdev);
if (ret) {
dev_err(tdev->device.dev,
"failed to register controller\n");
dma_async_device_unregister(&tdev->device);
}
}
dev_info(tdev->device.dev, "initialized\n");
return 0;
}
......
......@@ -1088,6 +1088,23 @@ static void omap_dma_free(struct omap_dmadev *od)
}
}
#define OMAP_DMA_BUSWIDTHS (BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
static int omap_dma_device_slave_caps(struct dma_chan *dchan,
struct dma_slave_caps *caps)
{
caps->src_addr_widths = OMAP_DMA_BUSWIDTHS;
caps->dstn_addr_widths = OMAP_DMA_BUSWIDTHS;
caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
caps->cmd_pause = true;
caps->cmd_terminate = true;
caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
return 0;
}
static int omap_dma_probe(struct platform_device *pdev)
{
struct omap_dmadev *od;
......@@ -1118,6 +1135,7 @@ static int omap_dma_probe(struct platform_device *pdev)
od->ddev.device_prep_slave_sg = omap_dma_prep_slave_sg;
od->ddev.device_prep_dma_cyclic = omap_dma_prep_dma_cyclic;
od->ddev.device_control = omap_dma_control;
od->ddev.device_slave_caps = omap_dma_device_slave_caps;
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
INIT_LIST_HEAD(&od->pending);
......
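The device_slave_caps callbacks added in this series (for omap-dma here, and fsl-edma earlier) let clients query a controller's capabilities through dma_get_slave_caps() before configuring a transfer. A rough sketch of such a caller, with hypothetical names, assuming the channel was already requested:

#include <linux/bitops.h>
#include <linux/dmaengine.h>

/* Hypothetical check: confirm the engine handles 32-bit accesses and
 * supports pause/terminate before relying on those operations. */
static int example_check_slave_caps(struct dma_chan *chan)
{
	struct dma_slave_caps caps;
	int ret;

	ret = dma_get_slave_caps(chan, &caps);
	if (ret)
		return ret;	/* controller does not report its caps */

	if (!(caps.src_addr_widths & BIT(DMA_SLAVE_BUSWIDTH_4_BYTES)))
		return -EINVAL;
	if (!caps.cmd_pause || !caps.cmd_terminate)
		return -EINVAL;

	return 0;
}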
......@@ -964,16 +964,16 @@ static void pch_dma_remove(struct pci_dev *pdev)
if (pd) {
dma_async_device_unregister(&pd->dma);
free_irq(pdev->irq, pd);
list_for_each_entry_safe(chan, _c, &pd->dma.channels,
device_node) {
pd_chan = to_pd_chan(chan);
tasklet_disable(&pd_chan->tasklet);
tasklet_kill(&pd_chan->tasklet);
}
pci_pool_destroy(pd->pool);
free_irq(pdev->irq, pd);
pci_iounmap(pdev, pd->membase);
pci_release_regions(pdev);
pci_disable_device(pdev);
......
/*
* Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
/*
* QCOM BAM DMA engine driver
*
* QCOM BAM DMA blocks are distributed amongst a number of the on-chip
* peripherals on the MSM 8x74. The configuration of the channels is dependent
* on the way they are hard wired to that specific peripheral. The peripheral
* device tree entries specify the configuration of each channel.
*
* The DMA controller requires the use of external memory for storage of the
* hardware descriptors for each channel. The descriptor FIFO is accessed as a
* circular buffer and operations are managed according to the offset within the
* FIFO. After pipe/channel reset, all of the pipe registers and internal state
* are back to defaults.
*
* During DMA operations, we write descriptors to the FIFO, being careful to
* handle wrapping and then write the last FIFO offset to that channel's
* P_EVNT_REG register to kick off the transaction. The P_SW_OFSTS register
* indicates the current FIFO offset that is being processed, so there is some
* indication of where the hardware is currently working.
*/
#include <linux/kernel.h>
#include <linux/io.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_dma.h>
#include <linux/clk.h>
#include <linux/dmaengine.h>
#include "dmaengine.h"
#include "virt-dma.h"
struct bam_desc_hw {
u32 addr; /* Buffer physical address */
u16 size; /* Buffer size in bytes */
u16 flags;
};
#define DESC_FLAG_INT BIT(15)
#define DESC_FLAG_EOT BIT(14)
#define DESC_FLAG_EOB BIT(13)
struct bam_async_desc {
struct virt_dma_desc vd;
u32 num_desc;
u32 xfer_len;
struct bam_desc_hw *curr_desc;
enum dma_transfer_direction dir;
size_t length;
struct bam_desc_hw desc[0];
};
#define BAM_CTRL 0x0000
#define BAM_REVISION 0x0004
#define BAM_SW_REVISION 0x0080
#define BAM_NUM_PIPES 0x003C
#define BAM_TIMER 0x0040
#define BAM_TIMER_CTRL 0x0044
#define BAM_DESC_CNT_TRSHLD 0x0008
#define BAM_IRQ_SRCS 0x000C
#define BAM_IRQ_SRCS_MSK 0x0010
#define BAM_IRQ_SRCS_UNMASKED 0x0030
#define BAM_IRQ_STTS 0x0014
#define BAM_IRQ_CLR 0x0018
#define BAM_IRQ_EN 0x001C
#define BAM_CNFG_BITS 0x007C
#define BAM_IRQ_SRCS_EE(ee) (0x0800 + ((ee) * 0x80))
#define BAM_IRQ_SRCS_MSK_EE(ee) (0x0804 + ((ee) * 0x80))
#define BAM_P_CTRL(pipe) (0x1000 + ((pipe) * 0x1000))
#define BAM_P_RST(pipe) (0x1004 + ((pipe) * 0x1000))
#define BAM_P_HALT(pipe) (0x1008 + ((pipe) * 0x1000))
#define BAM_P_IRQ_STTS(pipe) (0x1010 + ((pipe) * 0x1000))
#define BAM_P_IRQ_CLR(pipe) (0x1014 + ((pipe) * 0x1000))
#define BAM_P_IRQ_EN(pipe) (0x1018 + ((pipe) * 0x1000))
#define BAM_P_EVNT_DEST_ADDR(pipe) (0x182C + ((pipe) * 0x1000))
#define BAM_P_EVNT_REG(pipe) (0x1818 + ((pipe) * 0x1000))
#define BAM_P_SW_OFSTS(pipe) (0x1800 + ((pipe) * 0x1000))
#define BAM_P_DATA_FIFO_ADDR(pipe) (0x1824 + ((pipe) * 0x1000))
#define BAM_P_DESC_FIFO_ADDR(pipe) (0x181C + ((pipe) * 0x1000))
#define BAM_P_EVNT_TRSHLD(pipe) (0x1828 + ((pipe) * 0x1000))
#define BAM_P_FIFO_SIZES(pipe) (0x1820 + ((pipe) * 0x1000))
/* BAM CTRL */
#define BAM_SW_RST BIT(0)
#define BAM_EN BIT(1)
#define BAM_EN_ACCUM BIT(4)
#define BAM_TESTBUS_SEL_SHIFT 5
#define BAM_TESTBUS_SEL_MASK 0x3F
#define BAM_DESC_CACHE_SEL_SHIFT 13
#define BAM_DESC_CACHE_SEL_MASK 0x3
#define BAM_CACHED_DESC_STORE BIT(15)
#define IBC_DISABLE BIT(16)
/* BAM REVISION */
#define REVISION_SHIFT 0
#define REVISION_MASK 0xFF
#define NUM_EES_SHIFT 8
#define NUM_EES_MASK 0xF
#define CE_BUFFER_SIZE BIT(13)
#define AXI_ACTIVE BIT(14)
#define USE_VMIDMT BIT(15)
#define SECURED BIT(16)
#define BAM_HAS_NO_BYPASS BIT(17)
#define HIGH_FREQUENCY_BAM BIT(18)
#define INACTIV_TMRS_EXST BIT(19)
#define NUM_INACTIV_TMRS BIT(20)
#define DESC_CACHE_DEPTH_SHIFT 21
#define DESC_CACHE_DEPTH_1 (0 << DESC_CACHE_DEPTH_SHIFT)
#define DESC_CACHE_DEPTH_2 (1 << DESC_CACHE_DEPTH_SHIFT)
#define DESC_CACHE_DEPTH_3 (2 << DESC_CACHE_DEPTH_SHIFT)
#define DESC_CACHE_DEPTH_4 (3 << DESC_CACHE_DEPTH_SHIFT)
#define CMD_DESC_EN BIT(23)
#define INACTIV_TMR_BASE_SHIFT 24
#define INACTIV_TMR_BASE_MASK 0xFF
/* BAM NUM PIPES */
#define BAM_NUM_PIPES_SHIFT 0
#define BAM_NUM_PIPES_MASK 0xFF
#define PERIPH_NON_PIPE_GRP_SHIFT 16
#define PERIPH_NON_PIP_GRP_MASK 0xFF
#define BAM_NON_PIPE_GRP_SHIFT 24
#define BAM_NON_PIPE_GRP_MASK 0xFF
/* BAM CNFG BITS */
#define BAM_PIPE_CNFG BIT(2)
#define BAM_FULL_PIPE BIT(11)
#define BAM_NO_EXT_P_RST BIT(12)
#define BAM_IBC_DISABLE BIT(13)
#define BAM_SB_CLK_REQ BIT(14)
#define BAM_PSM_CSW_REQ BIT(15)
#define BAM_PSM_P_RES BIT(16)
#define BAM_AU_P_RES BIT(17)
#define BAM_SI_P_RES BIT(18)
#define BAM_WB_P_RES BIT(19)
#define BAM_WB_BLK_CSW BIT(20)
#define BAM_WB_CSW_ACK_IDL BIT(21)
#define BAM_WB_RETR_SVPNT BIT(22)
#define BAM_WB_DSC_AVL_P_RST BIT(23)
#define BAM_REG_P_EN BIT(24)
#define BAM_PSM_P_HD_DATA BIT(25)
#define BAM_AU_ACCUMED BIT(26)
#define BAM_CMD_ENABLE BIT(27)
#define BAM_CNFG_BITS_DEFAULT (BAM_PIPE_CNFG | \
BAM_NO_EXT_P_RST | \
BAM_IBC_DISABLE | \
BAM_SB_CLK_REQ | \
BAM_PSM_CSW_REQ | \
BAM_PSM_P_RES | \
BAM_AU_P_RES | \
BAM_SI_P_RES | \
BAM_WB_P_RES | \
BAM_WB_BLK_CSW | \
BAM_WB_CSW_ACK_IDL | \
BAM_WB_RETR_SVPNT | \
BAM_WB_DSC_AVL_P_RST | \
BAM_REG_P_EN | \
BAM_PSM_P_HD_DATA | \
BAM_AU_ACCUMED | \
BAM_CMD_ENABLE)
/* PIPE CTRL */
#define P_EN BIT(1)
#define P_DIRECTION BIT(3)
#define P_SYS_STRM BIT(4)
#define P_SYS_MODE BIT(5)
#define P_AUTO_EOB BIT(6)
#define P_AUTO_EOB_SEL_SHIFT 7
#define P_AUTO_EOB_SEL_512 (0 << P_AUTO_EOB_SEL_SHIFT)
#define P_AUTO_EOB_SEL_256 (1 << P_AUTO_EOB_SEL_SHIFT)
#define P_AUTO_EOB_SEL_128 (2 << P_AUTO_EOB_SEL_SHIFT)
#define P_AUTO_EOB_SEL_64 (3 << P_AUTO_EOB_SEL_SHIFT)
#define P_PREFETCH_LIMIT_SHIFT 9
#define P_PREFETCH_LIMIT_32 (0 << P_PREFETCH_LIMIT_SHIFT)
#define P_PREFETCH_LIMIT_16 (1 << P_PREFETCH_LIMIT_SHIFT)
#define P_PREFETCH_LIMIT_4 (2 << P_PREFETCH_LIMIT_SHIFT)
#define P_WRITE_NWD BIT(11)
#define P_LOCK_GROUP_SHIFT 16
#define P_LOCK_GROUP_MASK 0x1F
/* BAM_DESC_CNT_TRSHLD */
#define CNT_TRSHLD 0xffff
#define DEFAULT_CNT_THRSHLD 0x4
/* BAM_IRQ_SRCS */
#define BAM_IRQ BIT(31)
#define P_IRQ 0x7fffffff
/* BAM_IRQ_SRCS_MSK */
#define BAM_IRQ_MSK BAM_IRQ
#define P_IRQ_MSK P_IRQ
/* BAM_IRQ_STTS */
#define BAM_TIMER_IRQ BIT(4)
#define BAM_EMPTY_IRQ BIT(3)
#define BAM_ERROR_IRQ BIT(2)
#define BAM_HRESP_ERR_IRQ BIT(1)
/* BAM_IRQ_CLR */
#define BAM_TIMER_CLR BIT(4)
#define BAM_EMPTY_CLR BIT(3)
#define BAM_ERROR_CLR BIT(2)
#define BAM_HRESP_ERR_CLR BIT(1)
/* BAM_IRQ_EN */
#define BAM_TIMER_EN BIT(4)
#define BAM_EMPTY_EN BIT(3)
#define BAM_ERROR_EN BIT(2)
#define BAM_HRESP_ERR_EN BIT(1)
/* BAM_P_IRQ_EN */
#define P_PRCSD_DESC_EN BIT(0)
#define P_TIMER_EN BIT(1)
#define P_WAKE_EN BIT(2)
#define P_OUT_OF_DESC_EN BIT(3)
#define P_ERR_EN BIT(4)
#define P_TRNSFR_END_EN BIT(5)
#define P_DEFAULT_IRQS_EN (P_PRCSD_DESC_EN | P_ERR_EN | P_TRNSFR_END_EN)
/* BAM_P_SW_OFSTS */
#define P_SW_OFSTS_MASK 0xffff
#define BAM_DESC_FIFO_SIZE SZ_32K
#define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
#define BAM_MAX_DATA_SIZE (SZ_32K - 8)
struct bam_chan {
struct virt_dma_chan vc;
struct bam_device *bdev;
/* configuration from device tree */
u32 id;
struct bam_async_desc *curr_txd; /* current running dma */
/* runtime configuration */
struct dma_slave_config slave;
/* fifo storage */
struct bam_desc_hw *fifo_virt;
dma_addr_t fifo_phys;
/* fifo markers */
unsigned short head; /* start of active descriptor entries */
unsigned short tail; /* end of active descriptor entries */
unsigned int initialized; /* is the channel hw initialized? */
unsigned int paused; /* is the channel paused? */
unsigned int reconfigure; /* new slave config? */
struct list_head node;
};
static inline struct bam_chan *to_bam_chan(struct dma_chan *common)
{
return container_of(common, struct bam_chan, vc.chan);
}
struct bam_device {
void __iomem *regs;
struct device *dev;
struct dma_device common;
struct device_dma_parameters dma_parms;
struct bam_chan *channels;
u32 num_channels;
/* execution environment ID, from DT */
u32 ee;
struct clk *bamclk;
int irq;
/* dma start transaction tasklet */
struct tasklet_struct task;
};
/**
* bam_reset_channel - Reset individual BAM DMA channel
* @bchan: bam channel
*
* This function resets a specific BAM channel
*/
static void bam_reset_channel(struct bam_chan *bchan)
{
struct bam_device *bdev = bchan->bdev;
lockdep_assert_held(&bchan->vc.lock);
/* reset channel */
writel_relaxed(1, bdev->regs + BAM_P_RST(bchan->id));
writel_relaxed(0, bdev->regs + BAM_P_RST(bchan->id));
/* don't allow cpu to reorder BAM register accesses done after this */
wmb();
/* make sure hw is initialized when channel is used the first time */
bchan->initialized = 0;
}
/**
 * bam_chan_init_hw - Initialize channel hardware
 * @bchan: bam channel
 * @dir: DMA transfer direction
 *
 * This function resets and initializes the BAM channel
 */
static void bam_chan_init_hw(struct bam_chan *bchan,
enum dma_transfer_direction dir)
{
struct bam_device *bdev = bchan->bdev;
u32 val;
/* Reset the channel to clear internal state of the FIFO */
bam_reset_channel(bchan);
/*
* write out 8 byte aligned address. We have enough space for this
* because we allocated 1 more descriptor (8 bytes) than we can use
*/
writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)),
bdev->regs + BAM_P_DESC_FIFO_ADDR(bchan->id));
writel_relaxed(BAM_DESC_FIFO_SIZE, bdev->regs +
BAM_P_FIFO_SIZES(bchan->id));
/* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */
writel_relaxed(P_DEFAULT_IRQS_EN, bdev->regs + BAM_P_IRQ_EN(bchan->id));
/* unmask the specific pipe and EE combo */
val = readl_relaxed(bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
val |= BIT(bchan->id);
writel_relaxed(val, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
/* don't allow cpu to reorder the channel enable done below */
wmb();
/* set fixed direction and mode, then enable channel */
val = P_EN | P_SYS_MODE;
if (dir == DMA_DEV_TO_MEM)
val |= P_DIRECTION;
writel_relaxed(val, bdev->regs + BAM_P_CTRL(bchan->id));
bchan->initialized = 1;
/* init FIFO pointers */
bchan->head = 0;
bchan->tail = 0;
}
/**
* bam_alloc_chan - Allocate channel resources for DMA channel.
* @chan: specified channel
*
* This function allocates the FIFO descriptor memory
*/
static int bam_alloc_chan(struct dma_chan *chan)
{
struct bam_chan *bchan = to_bam_chan(chan);
struct bam_device *bdev = bchan->bdev;
if (bchan->fifo_virt)
return 0;
/* allocate FIFO descriptor space, but only if necessary */
bchan->fifo_virt = dma_alloc_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
&bchan->fifo_phys, GFP_KERNEL);
if (!bchan->fifo_virt) {
dev_err(bdev->dev, "Failed to allocate desc fifo\n");
return -ENOMEM;
}
return 0;
}
/**
* bam_free_chan - Frees dma resources associated with specific channel
* @chan: specified channel
*
* Free the allocated fifo descriptor memory and channel resources
*
*/
static void bam_free_chan(struct dma_chan *chan)
{
struct bam_chan *bchan = to_bam_chan(chan);
struct bam_device *bdev = bchan->bdev;
u32 val;
unsigned long flags;
vchan_free_chan_resources(to_virt_chan(chan));
if (bchan->curr_txd) {
dev_err(bchan->bdev->dev, "Cannot free busy channel\n");
return;
}
spin_lock_irqsave(&bchan->vc.lock, flags);
bam_reset_channel(bchan);
spin_unlock_irqrestore(&bchan->vc.lock, flags);
dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE, bchan->fifo_virt,
bchan->fifo_phys);
bchan->fifo_virt = NULL;
/* mask irq for pipe/channel */
val = readl_relaxed(bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
val &= ~BIT(bchan->id);
writel_relaxed(val, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
/* disable irq */
writel_relaxed(0, bdev->regs + BAM_P_IRQ_EN(bchan->id));
}
/**
 * bam_slave_config - set slave configuration for channel
 * @bchan: bam dma channel
 * @cfg: slave configuration
 *
 * Saves the slave configuration and flags the channel so that the new
 * settings are applied before the next transfer is started.
 */
static void bam_slave_config(struct bam_chan *bchan,
struct dma_slave_config *cfg)
{
memcpy(&bchan->slave, cfg, sizeof(*cfg));
bchan->reconfigure = 1;
}
/**
* bam_prep_slave_sg - Prep slave sg transaction
*
* @chan: dma channel
* @sgl: scatter gather list
 * @sg_len: number of entries in the scatter gather list
* @direction: DMA transfer direction
* @flags: DMA flags
* @context: transfer context (unused)
*/
static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
struct scatterlist *sgl, unsigned int sg_len,
enum dma_transfer_direction direction, unsigned long flags,
void *context)
{
struct bam_chan *bchan = to_bam_chan(chan);
struct bam_device *bdev = bchan->bdev;
struct bam_async_desc *async_desc;
struct scatterlist *sg;
u32 i;
struct bam_desc_hw *desc;
unsigned int num_alloc = 0;
if (!is_slave_direction(direction)) {
dev_err(bdev->dev, "invalid dma direction\n");
return NULL;
}
/* calculate number of required entries */
for_each_sg(sgl, sg, sg_len, i)
num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_MAX_DATA_SIZE);
/* allocate enough room to accommodate the number of entries */
async_desc = kzalloc(sizeof(*async_desc) +
(num_alloc * sizeof(struct bam_desc_hw)), GFP_NOWAIT);
if (!async_desc)
goto err_out;
async_desc->num_desc = num_alloc;
async_desc->curr_desc = async_desc->desc;
async_desc->dir = direction;
/* fill in temporary descriptors */
desc = async_desc->desc;
for_each_sg(sgl, sg, sg_len, i) {
unsigned int remainder = sg_dma_len(sg);
unsigned int curr_offset = 0;
do {
desc->addr = sg_dma_address(sg) + curr_offset;
if (remainder > BAM_MAX_DATA_SIZE) {
desc->size = BAM_MAX_DATA_SIZE;
remainder -= BAM_MAX_DATA_SIZE;
curr_offset += BAM_MAX_DATA_SIZE;
} else {
desc->size = remainder;
remainder = 0;
}
async_desc->length += desc->size;
desc++;
} while (remainder > 0);
}
return vchan_tx_prep(&bchan->vc, &async_desc->vd, flags);
err_out:
kfree(async_desc);
return NULL;
}
/**
* bam_dma_terminate_all - terminate all transactions on a channel
* @bchan: bam dma channel
*
 * Dequeues and frees all queued and in-flight transactions.
 * No completion callbacks are invoked.
*
*/
static void bam_dma_terminate_all(struct bam_chan *bchan)
{
unsigned long flag;
LIST_HEAD(head);
/* remove all transactions, including active transaction */
spin_lock_irqsave(&bchan->vc.lock, flag);
if (bchan->curr_txd) {
list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
bchan->curr_txd = NULL;
}
vchan_get_all_descriptors(&bchan->vc, &head);
spin_unlock_irqrestore(&bchan->vc.lock, flag);
vchan_dma_desc_free_list(&bchan->vc, &head);
}
/**
* bam_control - DMA device control
* @chan: dma channel
* @cmd: control cmd
* @arg: cmd argument
*
* Perform DMA control command
*
*/
static int bam_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
unsigned long arg)
{
struct bam_chan *bchan = to_bam_chan(chan);
struct bam_device *bdev = bchan->bdev;
int ret = 0;
unsigned long flag;
switch (cmd) {
case DMA_PAUSE:
spin_lock_irqsave(&bchan->vc.lock, flag);
writel_relaxed(1, bdev->regs + BAM_P_HALT(bchan->id));
bchan->paused = 1;
spin_unlock_irqrestore(&bchan->vc.lock, flag);
break;
case DMA_RESUME:
spin_lock_irqsave(&bchan->vc.lock, flag);
writel_relaxed(0, bdev->regs + BAM_P_HALT(bchan->id));
bchan->paused = 0;
spin_unlock_irqrestore(&bchan->vc.lock, flag);
break;
case DMA_TERMINATE_ALL:
bam_dma_terminate_all(bchan);
break;
case DMA_SLAVE_CONFIG:
spin_lock_irqsave(&bchan->vc.lock, flag);
bam_slave_config(bchan, (struct dma_slave_config *)arg);
spin_unlock_irqrestore(&bchan->vc.lock, flag);
break;
default:
ret = -ENXIO;
break;
}
return ret;
}
/**
* process_channel_irqs - processes the channel interrupts
* @bdev: bam controller
*
 * Checks each pipe's interrupt status, retires completed descriptors and
 * requeues any partially transferred work for the tasklet to restart.
*
*/
static u32 process_channel_irqs(struct bam_device *bdev)
{
u32 i, srcs, pipe_stts;
unsigned long flags;
struct bam_async_desc *async_desc;
srcs = readl_relaxed(bdev->regs + BAM_IRQ_SRCS_EE(bdev->ee));
/* return early if no pipe/channel interrupts are present */
if (!(srcs & P_IRQ))
return srcs;
for (i = 0; i < bdev->num_channels; i++) {
struct bam_chan *bchan = &bdev->channels[i];
if (!(srcs & BIT(i)))
continue;
/* clear pipe irq */
pipe_stts = readl_relaxed(bdev->regs +
BAM_P_IRQ_STTS(i));
writel_relaxed(pipe_stts, bdev->regs +
BAM_P_IRQ_CLR(i));
spin_lock_irqsave(&bchan->vc.lock, flags);
async_desc = bchan->curr_txd;
if (async_desc) {
async_desc->num_desc -= async_desc->xfer_len;
async_desc->curr_desc += async_desc->xfer_len;
bchan->curr_txd = NULL;
/* manage FIFO */
bchan->head += async_desc->xfer_len;
bchan->head %= MAX_DESCRIPTORS;
/*
* if complete, process cookie. Otherwise
* push back to front of desc_issued so that
* it gets restarted by the tasklet
*/
if (!async_desc->num_desc)
vchan_cookie_complete(&async_desc->vd);
else
list_add(&async_desc->vd.node,
&bchan->vc.desc_issued);
}
spin_unlock_irqrestore(&bchan->vc.lock, flags);
}
return srcs;
}
/**
* bam_dma_irq - irq handler for bam controller
* @irq: IRQ of interrupt
* @data: callback data
*
* IRQ handler for the bam controller
*/
static irqreturn_t bam_dma_irq(int irq, void *data)
{
struct bam_device *bdev = data;
u32 clr_mask = 0, srcs = 0;
srcs |= process_channel_irqs(bdev);
/* kick off tasklet to start next dma transfer */
if (srcs & P_IRQ)
tasklet_schedule(&bdev->task);
if (srcs & BAM_IRQ)
clr_mask = readl_relaxed(bdev->regs + BAM_IRQ_STTS);
/* don't allow reorder of the various accesses to the BAM registers */
mb();
writel_relaxed(clr_mask, bdev->regs + BAM_IRQ_CLR);
return IRQ_HANDLED;
}
/**
* bam_tx_status - returns status of transaction
* @chan: dma channel
* @cookie: transaction cookie
* @txstate: DMA transaction state
*
* Return status of dma transaction
*/
static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
struct dma_tx_state *txstate)
{
struct bam_chan *bchan = to_bam_chan(chan);
struct virt_dma_desc *vd;
int ret;
size_t residue = 0;
unsigned int i;
unsigned long flags;
ret = dma_cookie_status(chan, cookie, txstate);
if (ret == DMA_COMPLETE)
return ret;
if (!txstate)
return bchan->paused ? DMA_PAUSED : ret;
spin_lock_irqsave(&bchan->vc.lock, flags);
vd = vchan_find_desc(&bchan->vc, cookie);
if (vd)
residue = container_of(vd, struct bam_async_desc, vd)->length;
else if (bchan->curr_txd && bchan->curr_txd->vd.tx.cookie == cookie)
for (i = 0; i < bchan->curr_txd->num_desc; i++)
residue += bchan->curr_txd->curr_desc[i].size;
spin_unlock_irqrestore(&bchan->vc.lock, flags);
dma_set_residue(txstate, residue);
if (ret == DMA_IN_PROGRESS && bchan->paused)
ret = DMA_PAUSED;
return ret;
}
/**
 * bam_apply_new_config - apply any queued slave configuration to the channel
* @bchan: bam dma channel
* @dir: DMA direction
*/
static void bam_apply_new_config(struct bam_chan *bchan,
enum dma_transfer_direction dir)
{
struct bam_device *bdev = bchan->bdev;
u32 maxburst;
if (dir == DMA_DEV_TO_MEM)
maxburst = bchan->slave.src_maxburst;
else
maxburst = bchan->slave.dst_maxburst;
writel_relaxed(maxburst, bdev->regs + BAM_DESC_CNT_TRSHLD);
bchan->reconfigure = 0;
}
/**
* bam_start_dma - start next transaction
 * @bchan: bam dma channel
*/
static void bam_start_dma(struct bam_chan *bchan)
{
struct virt_dma_desc *vd = vchan_next_desc(&bchan->vc);
struct bam_device *bdev = bchan->bdev;
struct bam_async_desc *async_desc;
struct bam_desc_hw *desc;
struct bam_desc_hw *fifo = PTR_ALIGN(bchan->fifo_virt,
sizeof(struct bam_desc_hw));
lockdep_assert_held(&bchan->vc.lock);
if (!vd)
return;
list_del(&vd->node);
async_desc = container_of(vd, struct bam_async_desc, vd);
bchan->curr_txd = async_desc;
/* on first use, initialize the channel hardware */
if (!bchan->initialized)
bam_chan_init_hw(bchan, async_desc->dir);
/* apply new slave config changes, if necessary */
if (bchan->reconfigure)
bam_apply_new_config(bchan, async_desc->dir);
desc = bchan->curr_txd->curr_desc;
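	/*
	 * The hardware FIFO can hold at most MAX_DESCRIPTORS entries, so only
	 * submit that many now; process_channel_irqs() pushes the remainder
	 * back onto desc_issued and the tasklet restarts it.
	 */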
if (async_desc->num_desc > MAX_DESCRIPTORS)
async_desc->xfer_len = MAX_DESCRIPTORS;
else
async_desc->xfer_len = async_desc->num_desc;
/* set INT on last descriptor */
desc[async_desc->xfer_len - 1].flags |= DESC_FLAG_INT;
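	/*
	 * The descriptor FIFO is circular: if this batch would run past the
	 * end, copy it in two pieces and wrap around to the start.
	 */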
if (bchan->tail + async_desc->xfer_len > MAX_DESCRIPTORS) {
u32 partial = MAX_DESCRIPTORS - bchan->tail;
memcpy(&fifo[bchan->tail], desc,
partial * sizeof(struct bam_desc_hw));
memcpy(fifo, &desc[partial], (async_desc->xfer_len - partial) *
sizeof(struct bam_desc_hw));
} else {
memcpy(&fifo[bchan->tail], desc,
async_desc->xfer_len * sizeof(struct bam_desc_hw));
}
bchan->tail += async_desc->xfer_len;
bchan->tail %= MAX_DESCRIPTORS;
/* ensure descriptor writes and dma start not reordered */
wmb();
writel_relaxed(bchan->tail * sizeof(struct bam_desc_hw),
bdev->regs + BAM_P_EVNT_REG(bchan->id));
}
/**
* dma_tasklet - DMA IRQ tasklet
* @data: tasklet argument (bam controller structure)
*
 * Walks the channels and starts the next queued transaction on each idle channel
*/
static void dma_tasklet(unsigned long data)
{
struct bam_device *bdev = (struct bam_device *)data;
struct bam_chan *bchan;
unsigned long flags;
unsigned int i;
/* go through the channels and kick off transactions */
for (i = 0; i < bdev->num_channels; i++) {
bchan = &bdev->channels[i];
spin_lock_irqsave(&bchan->vc.lock, flags);
if (!list_empty(&bchan->vc.desc_issued) && !bchan->curr_txd)
bam_start_dma(bchan);
spin_unlock_irqrestore(&bchan->vc.lock, flags);
}
}
/**
* bam_issue_pending - starts pending transactions
* @chan: dma channel
*
 * Starts the next queued transaction immediately if the channel is idle
*/
static void bam_issue_pending(struct dma_chan *chan)
{
struct bam_chan *bchan = to_bam_chan(chan);
unsigned long flags;
spin_lock_irqsave(&bchan->vc.lock, flags);
/* if work pending and idle, start a transaction */
if (vchan_issue_pending(&bchan->vc) && !bchan->curr_txd)
bam_start_dma(bchan);
spin_unlock_irqrestore(&bchan->vc.lock, flags);
}
/**
* bam_dma_free_desc - free descriptor memory
* @vd: virtual descriptor
*
*/
static void bam_dma_free_desc(struct virt_dma_desc *vd)
{
struct bam_async_desc *async_desc = container_of(vd,
struct bam_async_desc, vd);
kfree(async_desc);
}
static struct dma_chan *bam_dma_xlate(struct of_phandle_args *dma_spec,
struct of_dma *of)
{
struct bam_device *bdev = container_of(of->of_dma_data,
struct bam_device, common);
unsigned int request;
if (dma_spec->args_count != 1)
return NULL;
request = dma_spec->args[0];
if (request >= bdev->num_channels)
return NULL;
return dma_get_slave_channel(&(bdev->channels[request].vc.chan));
}
/**
 * bam_init - initialization helper for global bam registers
 * @bdev: bam device
 *
 * Resets and enables the BAM, then programs the default descriptor count
 * threshold, error interrupts and interrupt masking for this execution
 * environment.
 */
static int bam_init(struct bam_device *bdev)
{
u32 val;
/* read revision and configuration information */
val = readl_relaxed(bdev->regs + BAM_REVISION) >> NUM_EES_SHIFT;
val &= NUM_EES_MASK;
/* check that configured EE is within range */
if (bdev->ee >= val)
return -EINVAL;
val = readl_relaxed(bdev->regs + BAM_NUM_PIPES);
bdev->num_channels = val & BAM_NUM_PIPES_MASK;
/* s/w reset bam */
/* after reset all pipes are disabled and idle */
val = readl_relaxed(bdev->regs + BAM_CTRL);
val |= BAM_SW_RST;
writel_relaxed(val, bdev->regs + BAM_CTRL);
val &= ~BAM_SW_RST;
writel_relaxed(val, bdev->regs + BAM_CTRL);
/* make sure previous stores are visible before enabling BAM */
wmb();
/* enable bam */
val |= BAM_EN;
writel_relaxed(val, bdev->regs + BAM_CTRL);
/* set descriptor threshold, start with 4 bytes */
writel_relaxed(DEFAULT_CNT_THRSHLD, bdev->regs + BAM_DESC_CNT_TRSHLD);
/* Enable default set of h/w workarounds, i.e. all except BAM_FULL_PIPE */
writel_relaxed(BAM_CNFG_BITS_DEFAULT, bdev->regs + BAM_CNFG_BITS);
/* enable irqs for errors */
writel_relaxed(BAM_ERROR_EN | BAM_HRESP_ERR_EN,
bdev->regs + BAM_IRQ_EN);
/* unmask global bam interrupt */
writel_relaxed(BAM_IRQ_MSK, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
return 0;
}
static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,
u32 index)
{
bchan->id = index;
bchan->bdev = bdev;
vchan_init(&bchan->vc, &bdev->common);
bchan->vc.desc_free = bam_dma_free_desc;
}
static int bam_dma_probe(struct platform_device *pdev)
{
struct bam_device *bdev;
struct resource *iores;
int ret, i;
bdev = devm_kzalloc(&pdev->dev, sizeof(*bdev), GFP_KERNEL);
if (!bdev)
return -ENOMEM;
bdev->dev = &pdev->dev;
iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
bdev->regs = devm_ioremap_resource(&pdev->dev, iores);
if (IS_ERR(bdev->regs))
return PTR_ERR(bdev->regs);
bdev->irq = platform_get_irq(pdev, 0);
if (bdev->irq < 0)
return bdev->irq;
ret = of_property_read_u32(pdev->dev.of_node, "qcom,ee", &bdev->ee);
if (ret) {
dev_err(bdev->dev, "Execution environment unspecified\n");
return ret;
}
bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
if (IS_ERR(bdev->bamclk))
return PTR_ERR(bdev->bamclk);
ret = clk_prepare_enable(bdev->bamclk);
if (ret) {
dev_err(bdev->dev, "failed to prepare/enable clock\n");
return ret;
}
ret = bam_init(bdev);
if (ret)
goto err_disable_clk;
tasklet_init(&bdev->task, dma_tasklet, (unsigned long)bdev);
bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
sizeof(*bdev->channels), GFP_KERNEL);
if (!bdev->channels) {
ret = -ENOMEM;
goto err_disable_clk;
}
/* allocate and initialize channels */
INIT_LIST_HEAD(&bdev->common.channels);
for (i = 0; i < bdev->num_channels; i++)
bam_channel_init(bdev, &bdev->channels[i], i);
ret = devm_request_irq(bdev->dev, bdev->irq, bam_dma_irq,
IRQF_TRIGGER_HIGH, "bam_dma", bdev);
if (ret)
goto err_disable_clk;
/* set max dma segment size */
bdev->common.dev = bdev->dev;
bdev->common.dev->dma_parms = &bdev->dma_parms;
ret = dma_set_max_seg_size(bdev->common.dev, BAM_MAX_DATA_SIZE);
if (ret) {
dev_err(bdev->dev, "cannot set maximum segment size\n");
goto err_disable_clk;
}
platform_set_drvdata(pdev, bdev);
/* set capabilities */
dma_cap_zero(bdev->common.cap_mask);
dma_cap_set(DMA_SLAVE, bdev->common.cap_mask);
/* initialize dmaengine apis */
bdev->common.device_alloc_chan_resources = bam_alloc_chan;
bdev->common.device_free_chan_resources = bam_free_chan;
bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
bdev->common.device_control = bam_control;
bdev->common.device_issue_pending = bam_issue_pending;
bdev->common.device_tx_status = bam_tx_status;
bdev->common.dev = bdev->dev;
ret = dma_async_device_register(&bdev->common);
if (ret) {
dev_err(bdev->dev, "failed to register dma async device\n");
goto err_disable_clk;
}
ret = of_dma_controller_register(pdev->dev.of_node, bam_dma_xlate,
&bdev->common);
if (ret)
goto err_unregister_dma;
return 0;
err_unregister_dma:
dma_async_device_unregister(&bdev->common);
err_disable_clk:
clk_disable_unprepare(bdev->bamclk);
return ret;
}
static int bam_dma_remove(struct platform_device *pdev)
{
struct bam_device *bdev = platform_get_drvdata(pdev);
u32 i;
of_dma_controller_free(pdev->dev.of_node);
dma_async_device_unregister(&bdev->common);
/* mask all interrupts for this execution environment */
writel_relaxed(0, bdev->regs + BAM_IRQ_SRCS_MSK_EE(bdev->ee));
devm_free_irq(bdev->dev, bdev->irq, bdev);
for (i = 0; i < bdev->num_channels; i++) {
bam_dma_terminate_all(&bdev->channels[i]);
tasklet_kill(&bdev->channels[i].vc.task);
dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
bdev->channels[i].fifo_virt,
bdev->channels[i].fifo_phys);
}
tasklet_kill(&bdev->task);
clk_disable_unprepare(bdev->bamclk);
return 0;
}
static const struct of_device_id bam_of_match[] = {
{ .compatible = "qcom,bam-v1.4.0", },
{}
};
MODULE_DEVICE_TABLE(of, bam_of_match);
static struct platform_driver bam_dma_driver = {
.probe = bam_dma_probe,
.remove = bam_dma_remove,
.driver = {
.name = "bam-dma-engine",
.owner = THIS_MODULE,
.of_match_table = bam_of_match,
},
};
module_platform_driver(bam_dma_driver);
MODULE_AUTHOR("Andy Gross <agross@codeaurora.org>");
MODULE_DESCRIPTION("QCOM BAM DMA engine driver");
MODULE_LICENSE("GPL v2");
@@ -192,7 +192,7 @@ struct s3c24xx_dma_phy {
unsigned int id;
bool valid;
void __iomem *base;
unsigned int irq;
int irq;
struct clk *clk;
spinlock_t lock;
struct s3c24xx_dma_chan *serving;
......
@@ -29,6 +29,12 @@ config RCAR_HPB_DMAE
help
Enable support for the Renesas R-Car series DMA controllers.
config RCAR_AUDMAC_PP
tristate "Renesas R-Car Audio DMAC Peripheral Peripheral support"
depends on SH_DMAE_BASE
help
Enable support for the Renesas R-Car Audio DMAC Peripheral Peripheral controllers.
config SHDMA_R8A73A4
def_bool y
depends on ARCH_R8A73A4 && SH_DMAE != n
@@ -7,3 +7,4 @@ endif
shdma-objs := $(shdma-y)
obj-$(CONFIG_SUDMAC) += sudmac.o
obj-$(CONFIG_RCAR_HPB_DMAE) += rcar-hpbdma.o
obj-$(CONFIG_RCAR_AUDMAC_PP) += rcar-audmapp.o
/*
* This is for Renesas R-Car Audio-DMAC-peri-peri.
*
* Copyright (C) 2014 Renesas Electronics Corporation
* Copyright (C) 2014 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
*
* based on the drivers/dma/sh/shdma.c
*
* Copyright (C) 2011-2012 Guennadi Liakhovetski <g.liakhovetski@gmx.de>
* Copyright (C) 2009 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
* Copyright (C) 2009 Renesas Solutions, Inc. All rights reserved.
* Copyright (C) 2007 Freescale Semiconductor, Inc. All rights reserved.
*
* This is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
*/
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/dma-rcar-audmapp.h>
#include <linux/platform_device.h>
#include <linux/shdma-base.h>
/*
* DMA register
*/
#define PDMASAR 0x00
#define PDMADAR 0x04
#define PDMACHCR 0x0c
/* PDMACHCR */
#define PDMACHCR_DE (1 << 0)
#define AUDMAPP_MAX_CHANNELS 29
/* Default MEMCPY transfer size = 2^2 = 4 bytes */
#define LOG2_DEFAULT_XFER_SIZE 2
#define AUDMAPP_SLAVE_NUMBER 256
#define AUDMAPP_LEN_MAX (16 * 1024 * 1024)
struct audmapp_chan {
struct shdma_chan shdma_chan;
struct audmapp_slave_config *config;
void __iomem *base;
};
struct audmapp_device {
struct shdma_dev shdma_dev;
struct audmapp_pdata *pdata;
struct device *dev;
void __iomem *chan_reg;
};
#define to_chan(chan) container_of(chan, struct audmapp_chan, shdma_chan)
#define to_dev(chan) container_of(chan->shdma_chan.dma_chan.device, \
struct audmapp_device, shdma_dev.dma_dev)
static void audmapp_write(struct audmapp_chan *auchan, u32 data, u32 reg)
{
struct audmapp_device *audev = to_dev(auchan);
struct device *dev = audev->dev;
dev_dbg(dev, "w %p : %08x\n", auchan->base + reg, data);
iowrite32(data, auchan->base + reg);
}
static u32 audmapp_read(struct audmapp_chan *auchan, u32 reg)
{
return ioread32(auchan->base + reg);
}
static void audmapp_halt(struct shdma_chan *schan)
{
struct audmapp_chan *auchan = to_chan(schan);
int i;
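	/* disable the channel, then poll for up to ~1ms until it reads back idle */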
audmapp_write(auchan, 0, PDMACHCR);
for (i = 0; i < 1024; i++) {
if (0 == audmapp_read(auchan, PDMACHCR))
return;
udelay(1);
}
}
static void audmapp_start_xfer(struct shdma_chan *schan,
struct shdma_desc *sdecs)
{
struct audmapp_chan *auchan = to_chan(schan);
struct audmapp_device *audev = to_dev(auchan);
struct audmapp_slave_config *cfg = auchan->config;
struct device *dev = audev->dev;
u32 chcr = cfg->chcr | PDMACHCR_DE;
dev_dbg(dev, "src/dst/chcr = %pad/%pad/%x\n",
&cfg->src, &cfg->dst, cfg->chcr);
audmapp_write(auchan, cfg->src, PDMASAR);
audmapp_write(auchan, cfg->dst, PDMADAR);
audmapp_write(auchan, chcr, PDMACHCR);
}
static struct audmapp_slave_config *
audmapp_find_slave(struct audmapp_chan *auchan, int slave_id)
{
struct audmapp_device *audev = to_dev(auchan);
struct audmapp_pdata *pdata = audev->pdata;
struct audmapp_slave_config *cfg;
int i;
if (slave_id >= AUDMAPP_SLAVE_NUMBER)
return NULL;
for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++)
if (cfg->slave_id == slave_id)
return cfg;
return NULL;
}
static int audmapp_set_slave(struct shdma_chan *schan, int slave_id,
dma_addr_t slave_addr, bool try)
{
struct audmapp_chan *auchan = to_chan(schan);
struct audmapp_slave_config *cfg =
audmapp_find_slave(auchan, slave_id);
if (!cfg)
return -ENODEV;
if (try)
return 0;
auchan->config = cfg;
return 0;
}
static int audmapp_desc_setup(struct shdma_chan *schan,
struct shdma_desc *sdecs,
dma_addr_t src, dma_addr_t dst, size_t *len)
{
struct audmapp_chan *auchan = to_chan(schan);
struct audmapp_slave_config *cfg = auchan->config;
if (!cfg)
return -ENODEV;
if (*len > (size_t)AUDMAPP_LEN_MAX)
*len = (size_t)AUDMAPP_LEN_MAX;
return 0;
}
static void audmapp_setup_xfer(struct shdma_chan *schan,
int slave_id)
{
}
static dma_addr_t audmapp_slave_addr(struct shdma_chan *schan)
{
return 0; /* always fixed address */
}
static bool audmapp_channel_busy(struct shdma_chan *schan)
{
struct audmapp_chan *auchan = to_chan(schan);
u32 chcr = audmapp_read(auchan, PDMACHCR);
return chcr & ~PDMACHCR_DE;
}
static bool audmapp_desc_completed(struct shdma_chan *schan,
struct shdma_desc *sdesc)
{
return true;
}
static struct shdma_desc *audmapp_embedded_desc(void *buf, int i)
{
return &((struct shdma_desc *)buf)[i];
}
static const struct shdma_ops audmapp_shdma_ops = {
.halt_channel = audmapp_halt,
.desc_setup = audmapp_desc_setup,
.set_slave = audmapp_set_slave,
.start_xfer = audmapp_start_xfer,
.embedded_desc = audmapp_embedded_desc,
.setup_xfer = audmapp_setup_xfer,
.slave_addr = audmapp_slave_addr,
.channel_busy = audmapp_channel_busy,
.desc_completed = audmapp_desc_completed,
};
static int audmapp_chan_probe(struct platform_device *pdev,
struct audmapp_device *audev, int id)
{
struct shdma_dev *sdev = &audev->shdma_dev;
struct audmapp_chan *auchan;
struct shdma_chan *schan;
struct device *dev = audev->dev;
auchan = devm_kzalloc(dev, sizeof(*auchan), GFP_KERNEL);
if (!auchan)
return -ENOMEM;
schan = &auchan->shdma_chan;
schan->max_xfer_len = AUDMAPP_LEN_MAX;
shdma_chan_probe(sdev, schan, id);
auchan->base = audev->chan_reg + 0x20 + (0x10 * id);
dev_dbg(dev, "%02d : %p / %p", id, auchan->base, audev->chan_reg);
return 0;
}
static void audmapp_chan_remove(struct audmapp_device *audev)
{
struct dma_device *dma_dev = &audev->shdma_dev.dma_dev;
struct shdma_chan *schan;
int i;
shdma_for_each_chan(schan, &audev->shdma_dev, i) {
BUG_ON(!schan);
shdma_chan_remove(schan);
}
dma_dev->chancnt = 0;
}
static int audmapp_probe(struct platform_device *pdev)
{
struct audmapp_pdata *pdata = pdev->dev.platform_data;
struct audmapp_device *audev;
struct shdma_dev *sdev;
struct dma_device *dma_dev;
struct resource *res;
int err, i;
if (!pdata)
return -ENODEV;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
audev = devm_kzalloc(&pdev->dev, sizeof(*audev), GFP_KERNEL);
if (!audev)
return -ENOMEM;
audev->dev = &pdev->dev;
audev->pdata = pdata;
audev->chan_reg = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(audev->chan_reg))
return PTR_ERR(audev->chan_reg);
sdev = &audev->shdma_dev;
sdev->ops = &audmapp_shdma_ops;
sdev->desc_size = sizeof(struct shdma_desc);
dma_dev = &sdev->dma_dev;
dma_dev->copy_align = LOG2_DEFAULT_XFER_SIZE;
dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
err = shdma_init(&pdev->dev, sdev, AUDMAPP_MAX_CHANNELS);
if (err < 0)
return err;
platform_set_drvdata(pdev, audev);
/* Create DMA Channel */
for (i = 0; i < AUDMAPP_MAX_CHANNELS; i++) {
err = audmapp_chan_probe(pdev, audev, i);
if (err)
goto chan_probe_err;
}
err = dma_async_device_register(dma_dev);
if (err < 0)
goto chan_probe_err;
return err;
chan_probe_err:
audmapp_chan_remove(audev);
shdma_cleanup(sdev);
return err;
}
static int audmapp_remove(struct platform_device *pdev)
{
struct audmapp_device *audev = platform_get_drvdata(pdev);
struct dma_device *dma_dev = &audev->shdma_dev.dma_dev;
dma_async_device_unregister(dma_dev);
audmapp_chan_remove(audev);
shdma_cleanup(&audev->shdma_dev);
return 0;
}
static struct platform_driver audmapp_driver = {
.probe = audmapp_probe,
.remove = audmapp_remove,
.driver = {
.owner = THIS_MODULE,
.name = "rcar-audmapp-engine",
},
};
module_platform_driver(audmapp_driver);
MODULE_AUTHOR("Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>");
MODULE_DESCRIPTION("Renesas R-Car Audio DMAC peri-peri driver");
MODULE_LICENSE("GPL");
@@ -227,7 +227,7 @@ bool shdma_chan_filter(struct dma_chan *chan, void *arg)
struct shdma_chan *schan = to_shdma_chan(chan);
struct shdma_dev *sdev = to_shdma_dev(schan->dma_chan.device);
const struct shdma_ops *ops = sdev->ops;
int match = (int)arg;
int match = (long)arg;
int ret;
if (match < 0)
@@ -491,8 +491,8 @@ static struct shdma_desc *shdma_add_desc(struct shdma_chan *schan,
}
dev_dbg(schan->dev,
"chaining (%u/%u)@%x -> %x with %p, cookie %d\n",
copy_size, *len, *src, *dst, &new->async_tx,
"chaining (%zu/%zu)@%pad -> %pad with %p, cookie %d\n",
copy_size, *len, src, dst, &new->async_tx,
new->async_tx.cookie);
new->mark = DESC_PREPARED;
@@ -555,8 +555,8 @@ static struct dma_async_tx_descriptor *shdma_prep_sg(struct shdma_chan *schan,
goto err_get_desc;
do {
dev_dbg(schan->dev, "Add SG #%d@%p[%d], dma %llx\n",
i, sg, len, (unsigned long long)sg_addr);
dev_dbg(schan->dev, "Add SG #%d@%p[%zu], dma %pad\n",
i, sg, len, &sg_addr);
if (direction == DMA_DEV_TO_MEM)
new = shdma_add_desc(schan, flags,
......
@@ -33,7 +33,8 @@ static struct dma_chan *shdma_of_xlate(struct of_phandle_args *dma_spec,
/* Only slave DMA channels can be allocated via DT */
dma_cap_set(DMA_SLAVE, mask);
chan = dma_request_channel(mask, shdma_chan_filter, (void *)id);
chan = dma_request_channel(mask, shdma_chan_filter,
(void *)(uintptr_t)id);
if (chan)
to_shdma_chan(chan)->hw_req = id;
......
@@ -443,6 +443,7 @@ static bool sh_dmae_reset(struct sh_dmae_device *shdev)
return ret;
}
#if defined(CONFIG_CPU_SH4) || defined(CONFIG_ARM)
static irqreturn_t sh_dmae_err(int irq, void *data)
{
struct sh_dmae_device *shdev = data;
@@ -453,6 +454,7 @@ static irqreturn_t sh_dmae_err(int irq, void *data)
sh_dmae_reset(shdev);
return IRQ_HANDLED;
}
#endif
static bool sh_dmae_desc_completed(struct shdma_chan *schan,
struct shdma_desc *sdesc)
@@ -637,7 +639,7 @@ static int sh_dmae_resume(struct device *dev)
#define sh_dmae_resume NULL
#endif
const struct dev_pm_ops sh_dmae_pm = {
static const struct dev_pm_ops sh_dmae_pm = {
.suspend = sh_dmae_suspend,
.resume = sh_dmae_resume,
.runtime_suspend = sh_dmae_runtime_suspend,
@@ -685,9 +687,12 @@ MODULE_DEVICE_TABLE(of, sh_dmae_of_match);
static int sh_dmae_probe(struct platform_device *pdev)
{
const struct sh_dmae_pdata *pdata;
unsigned long irqflags = 0,
chan_flag[SH_DMAE_MAX_CHANNELS] = {};
int errirq, chan_irq[SH_DMAE_MAX_CHANNELS];
unsigned long chan_flag[SH_DMAE_MAX_CHANNELS] = {};
int chan_irq[SH_DMAE_MAX_CHANNELS];
#if defined(CONFIG_CPU_SH4) || defined(CONFIG_ARM)
unsigned long irqflags = 0;
int errirq;
#endif
int err, i, irq_cnt = 0, irqres = 0, irq_cap = 0;
struct sh_dmae_device *shdev;
struct dma_device *dma_dev;
......
@@ -178,8 +178,8 @@ static int sudmac_desc_setup(struct shdma_chan *schan,
struct sudmac_chan *sc = to_chan(schan);
struct sudmac_desc *sd = to_desc(sdesc);
dev_dbg(sc->shdma_chan.dev, "%s: src=%x, dst=%x, len=%d\n",
__func__, src, dst, *len);
dev_dbg(sc->shdma_chan.dev, "%s: src=%pad, dst=%pad, len=%zu\n",
__func__, &src, &dst, *len);
if (*len > schan->max_xfer_len)
*len = schan->max_xfer_len;
......
@@ -18,6 +18,7 @@
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/clk.h>
#include <linux/of_dma.h>
#include <linux/sirfsoc_dma.h>
#include "dmaengine.h"
@@ -659,6 +660,18 @@ static int sirfsoc_dma_device_slave_caps(struct dma_chan *dchan,
return 0;
}
static struct dma_chan *of_dma_sirfsoc_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct sirfsoc_dma *sdma = ofdma->of_dma_data;
unsigned int request = dma_spec->args[0];
if (request > SIRFSOC_DMA_CHANNELS)
return NULL;
return dma_get_slave_channel(&sdma->channels[request].chan);
}
static int sirfsoc_dma_probe(struct platform_device *op)
{
struct device_node *dn = op->dev.of_node;
@@ -764,11 +777,20 @@ static int sirfsoc_dma_probe(struct platform_device *op)
if (ret)
goto free_irq;
/* Device-tree DMA controller registration */
ret = of_dma_controller_register(dn, of_dma_sirfsoc_xlate, sdma);
if (ret) {
dev_err(dev, "failed to register DMA controller\n");
goto unreg_dma_dev;
}
pm_runtime_enable(&op->dev);
dev_info(dev, "initialized SIRFSOC DMAC driver\n");
return 0;
unreg_dma_dev:
dma_async_device_unregister(dma);
free_irq:
free_irq(sdma->irq, sdma);
irq_dispose:
@@ -781,6 +803,7 @@ static int sirfsoc_dma_remove(struct platform_device *op)
struct device *dev = &op->dev;
struct sirfsoc_dma *sdma = dev_get_drvdata(dev);
of_dma_controller_free(op->dev.of_node);
dma_async_device_unregister(&sdma->dma);
free_irq(sdma->irq, sdma);
irq_dispose_mapping(sdma->irq);
......
@@ -16,6 +16,7 @@
#include <linux/list.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/dmaengine.h>
/**
@@ -103,12 +104,12 @@ static inline void devm_acpi_dma_controller_free(struct device *dev)
static inline struct dma_chan *acpi_dma_request_slave_chan_by_index(
struct device *dev, size_t index)
{
return NULL;
return ERR_PTR(-ENODEV);
}
static inline struct dma_chan *acpi_dma_request_slave_chan_by_name(
struct device *dev, const char *name)
{
return NULL;
return ERR_PTR(-ENODEV);
}
#define acpi_dma_simple_xlate NULL
......
@@ -341,15 +341,11 @@ enum dma_slave_buswidth {
* and this struct will then be passed in as an argument to the
* DMA engine device_control() function.
*
* The rationale for adding configuration information to this struct
* is as follows: if it is likely that most DMA slave controllers in
* the world will support the configuration option, then make it
* generic. If not: if it is fixed so that it be sent in static from
* the platform data, then prefer to do that. Else, if it is neither
* fixed at runtime, nor generic enough (such as bus mastership on
* some CPU family and whatnot) then create a custom slave config
* struct and pass that, then make this config a member of that
* struct, if applicable.
* The rationale for adding configuration information to this struct is as
* follows: if it is likely that more than one DMA slave controllers in
* the world will support the configuration option, then make it generic.
* If not: if it is fixed so that it be sent in static from the platform
* data, then prefer to do that.
*/
struct dma_slave_config {
enum dma_transfer_direction direction;
......
/*
* Driver for the Synopsys DesignWare DMA Controller (aka DMACA on
* AVR32 systems.)
* Driver for the Synopsys DesignWare DMA Controller
*
* Copyright (C) 2007 Atmel Corporation
* Copyright (C) 2010-2011 ST Microelectronics
@@ -44,8 +43,6 @@ struct dw_dma_slave {
* @nr_masters: Number of AHB masters supported by the controller
* @data_width: Maximum data width supported by hardware per AHB master
* (0 - 8bits, 1 - 16bits, ..., 5 - 256bits)
* @sd: slave specific data. Used for configuring channels
* @sd_count: count of slave data structures passed.
*/
struct dw_dma_platform_data {
unsigned int nr_channels;
......
/*
* This is for Renesas R-Car Audio-DMAC-peri-peri.
*
* Copyright (C) 2014 Renesas Electronics Corporation
* Copyright (C) 2014 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
*
* This file is based on the include/linux/sh_dma.h
*
* Header for the new SH dmaengine driver
*
* Copyright (C) 2010 Guennadi Liakhovetski <g.liakhovetski@gmx.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef SH_AUDMAPP_H
#define SH_AUDMAPP_H
#include <linux/dmaengine.h>
struct audmapp_slave_config {
int slave_id;
dma_addr_t src;
dma_addr_t dst;
u32 chcr;
};
struct audmapp_pdata {
struct audmapp_slave_config *slave;
int slave_num;
};
#endif /* SH_AUDMAPP_H */
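Since the audmapp driver above takes its slave table from platform data rather than DT, a board file would provide something along these lines. This is only a sketch under assumed values: the slave_id, FIFO addresses and CHCR setting are placeholders, not values from this commit.

#include <linux/kernel.h>
#include <linux/platform_data/dma-rcar-audmapp.h>

/* Placeholder values for illustration only. */
static struct audmapp_slave_config example_audmapp_slaves[] = {
	{
		.slave_id	= 0x2d,		/* assumed slave ID */
		.src		= 0xec304000,	/* assumed source FIFO address */
		.dst		= 0xec720000,	/* assumed destination FIFO address */
		.chcr		= 0x00001000,	/* assumed PDMACHCR setting */
	},
};

static struct audmapp_pdata example_audmapp_pdata = {
	.slave		= example_audmapp_slaves,
	.slave_num	= ARRAY_SIZE(example_audmapp_slaves),
};

Attached as platform_data of the "rcar-audmapp-engine" device, such a table is what audmapp_find_slave() searches when a client sets a slave with a matching slave_id.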