Commit 08ba9306 authored by Mark Brown

Merge series "spi: dw: Add generic DW DMA controller support" from Serge Semin...

Merge series "spi: dw: Add generic DW DMA controller support" from Serge Semin <Sergey.Semin@baikalelectronics.ru>:

The Baikal-T1 SoC provides a DW DMA controller to perform Mem-to-Dev and
Dev-to-Mem transactions for its low-speed peripherals. This also applies to
the DW APB SSI devices embedded into the SoC. Currently the DW APB SPI driver
supports DMA-based transfers only as middle-layer code for the Intel
MID/Elkhart PCI devices. Since the same code can be used with a normal
platform DMAC device, this series introduces a set of patches to fix that.

First of all we need to add Tx and Rx DMA channel support to the DW APB SSI
binding. Then several fixes and cleanups are provided as an initial
preparation for the generic DMA support integration: add Tx/Rx finish wait
methods, clear the DMAC register when done or stopped, fix the native CS
being unset, enable interrupts in accordance with the DMA transfer mode,
discard the static DW DMA slave structures, discard the unused void priv
pointer and the dma_width member of the dw_spi structure, and parameterize
the DMA Tx/Rx burst length, making sure it is optionally capped by the DMA
max-burst capability (see the sketch right after this paragraph).
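As an illustration, the burst parametrisation amounts to capping a FIFO-based
default with whatever the DMA channel advertises. A minimal sketch of the Rx
side, mirroring the dw_spi_dma_maxburst_init() helper added in the diff below
(caps, ret, def_burst and max_burst are locals of that helper; the Tx channel
is capped the same way against TX_BURST_LEVEL):

	def_burst = dws->fifo_len / 2;

	ret = dma_get_slave_caps(dws->rxchan, &caps);
	if (!ret && caps.max_burst)
		max_burst = caps.max_burst;
	else
		max_burst = RX_BURST_LEVEL;	/* default of 16 FIFO entries */

	dws->rxburst = min(max_burst, def_burst);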

In order to have the DW APB SSI MMIO driver working with DMA we need to
initialize the paddr field with the physical base address of the DW APB SSI
register space (a short illustrative sketch follows this paragraph). Then we
unpin the Intel MID specific code from the generic DMA code and place it into
the spi-dw-pci.c driver, which is a better place for it anyway. After that,
naming cleanups are performed since the code is going to be used for a
generic DMAC device. Finally, the generic DMA initialization can be added to
the generic version of the DW APB SSI IP.
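For the MMIO driver the paddr initialization boils down to a one-liner in the
probe path. This is a rough, illustrative sketch only, assuming the register
resource is obtained via platform_get_resource()/devm_ioremap_resource(); the
surrounding lines are not quoted from the patch:

	struct resource *mem;

	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	dwsmmio->dws.regs = devm_ioremap_resource(&pdev->dev, mem);
	if (IS_ERR(dwsmmio->dws.regs))
		return PTR_ERR(dwsmmio->dws.regs);

	/* Physical base of the DW APB SSI register space; the DMA code
	 * derives the Data register address (dma_addr) from it.
	 */
	dwsmmio->dws.paddr = mem->start;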

Last but not least, we traditionally convert the legacy plain-text dt-binding
file to a YAML-based one and, as a cherry on the cake, replace the manually
written DebugFS register-dump read method with the ready-made regset32
DebugFS interface, which serves the same purpose (condensed snippet below).
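Condensed from the debugfs hunk in the diff below, the conversion replaces
the hand-rolled dw_spi_show_regs() read method with a debugfs_reg32 table and
a single call:

	dws->regset.regs = dw_spi_dbgfs_regs;
	dws->regset.nregs = ARRAY_SIZE(dw_spi_dbgfs_regs);
	dws->regset.base = dws->regs;
	debugfs_create_regset32("registers", 0400, dws->debugfs, &dws->regset);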

This patchset is rebased and tested on the spi/for-next (5.7-rc5):
base-commit: fe9fce6b2cf3 ("Merge remote-tracking branch 'spi/for-5.8' into spi-next")

Link: https://lore.kernel.org/linux-spi/20200508132943.9826-1-Sergey.Semin@baikalelectronics.ru/
Changelog v2:
- Rebase on top of the spi repository for-next branch.
- Move bindings conversion patch to the tail of the series.
- Move fixes to the head of the series.
- Apply as many changes as possible before the generic DMA functionality
  support is added and spi-dw-mid is moved to the spi-dw-dma driver.
- Discard patch "spi: dw: Fix dma_slave_config used partly uninitialized"
  since the problem has already been fixed.
- Add new patch "spi: dw: Discard unused void priv pointer".
- Add new patch "spi: dw: Discard dma_width member of the dw_spi structure".
  n_bytes member of the DW SPI data can be used instead.
- Build the DMA functionality into the DW APB SSI core if required instead
  of creating a separate kernel module.
- Use a conditional statement instead of the ternary operator in the ref
  clock getter.

Link: https://lore.kernel.org/linux-spi/20200515104758.6934-1-Sergey.Semin@baikalelectronics.ru/
Changelog v3:
- Use spi_delay_exec() method to wait for the DMA operation completion.
- Explicitly initialize the dw_dma_slave members on stack.
- Discard the dws->fifo_len utilization in the Tx FIFO DMA threshold
  setting from the patch where we just add the default burst length
  constants.
- Use min() method to calculate the optimal burst values.
- Add new patch which moves the spi-dw.c source file to spi-dw-core.c in
  order to preserve the DW APB SSI core driver name.
- Add commas in the debugfs_reg32 structure initializer and after the last
  entry of the dw_spi_dbgfs_regs array.

Link: https://lore.kernel.org/linux-spi/20200521012206.14472-1-Sergey.Semin@baikalelectronics.ru
Changelog v4:
- Bring back the ndelay() method to wait for the SPI transfer completion,
  since spi_delay_exec() isn't suitable for the atomic context.

Link: https://lore.kernel.org/linux-spi/20200522000806.7381-1-Sergey.Semin@baikalelectronics.ru
Changelog v5:
- Refactor the Tx/Rx DMA-based SPI transfers wait methods.
- Add a new patch "spi: dw: Set xfer effective_speed_hz".
- Add a new patch "spi: dw: Return any value retrieved from the
  dma_transfer callback" as a preparation patch before implementing
  the local DMA, Tx SPI and Rx SPI transfers wait methods.
- Add a new patch "spi: dw: Locally wait for the DMA transactions
  completion", which provides a local DMA transaction completion wait
  method.
- Create a dedicated patch which adds the Rx-done wait method:
  "spi: dw: Add SPI Rx-done wait method to DMA-based transfer".
- Add more detailed description of the problems the Tx/Rx-wait
  methods-related patches fix.
- Wait for the SPI Tx and Rx transfers to finish in the
  mid_spi_dma_transfer() method, which is executed in the task context.
- Use spi_delay_exec() to wait for the SPI Tx/Rx completion, since the driver
  now calls the wait methods in the kernel thread context.
- Use the SPI_DELAY_UNIT_SCK spi_delay unit for the Tx-wait delay, since SPI
  xfers now have effective_speed_hz initialized.
- Rx-wait for a delay correlated with the APB/SSI synchronous clock
  rate instead of using the SPI bus clock rate.

Link: https://lore.kernel.org/linux-spi/20200529035915.20790-1-Sergey.Semin@baikalelectronics.ru
Changelog v6:
- Provide a more detailed description of the patch:
  2901db35bea1 ("spi: dw: Locally wait for the DMA transfers completion")
- Calculate the Rx delay with better accuracy by moving the multiplication
  by 4 to the head of the formula (worked out just below):
  ns = 4U * NSEC_PER_SEC / dws->max_freq * nents.
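  For example, with a hypothetical 150 MHz APB/SSI clock and one Rx FIFO
  entry left, dividing first truncates early:
	NSEC_PER_SEC / 150000000 * 4 = 6 * 4 = 24 ns
  while multiplying by 4 first stays closer to the exact 26.67 ns:
	4U * NSEC_PER_SEC / 150000000 = 26 ns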
Co-developed-by: Georgy Vlasov <Georgy.Vlasov@baikalelectronics.ru>
Signed-off-by: Georgy Vlasov <Georgy.Vlasov@baikalelectronics.ru>
Co-developed-by: Ramil Zaripov <Ramil.Zaripov@baikalelectronics.ru>
Signed-off-by: Ramil Zaripov <Ramil.Zaripov@baikalelectronics.ru>
Signed-off-by: default avatarSerge Semin <Sergey.Semin@baikalelectronics.ru>
Cc: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru>
Cc: Maxim Kaurkin <Maxim.Kaurkin@baikalelectronics.ru>
Cc: Pavel Parkhomenko <Pavel.Parkhomenko@baikalelectronics.ru>
Cc: Ekaterina Skachko <Ekaterina.Skachko@baikalelectronics.ru>
Cc: Vadim Vlasov <V.Vlasov@baikalelectronics.ru>
Cc: Alexey Kolotnikov <Alexey.Kolotnikov@baikalelectronics.ru>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-spi@vger.kernel.org
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Serge Semin (16):
  spi: dw: Set xfer effective_speed_hz
  spi: dw: Return any value retrieved from the dma_transfer callback
  spi: dw: Locally wait for the DMA transfers completion
  spi: dw: Add SPI Tx-done wait method to DMA-based transfer
  spi: dw: Add SPI Rx-done wait method to DMA-based transfer
  spi: dw: Parameterize the DMA Rx/Tx burst length
  spi: dw: Use DMA max burst to set the request thresholds
  spi: dw: Fix Rx-only DMA transfers
  spi: dw: Add core suffix to the DW APB SSI core source file
  spi: dw: Move Non-DMA code to the DW PCIe-SPI driver
  spi: dw: Remove DW DMA code dependency from DW_DMAC_PCI
  spi: dw: Add DW SPI DMA/PCI/MMIO dependency on the DW SPI core
  spi: dw: Cleanup generic DW DMA code namings
  spi: dw: Add DMA support to the DW SPI MMIO driver
  spi: dw: Use regset32 DebugFS method to create regdump file
  dt-bindings: spi: Convert DW SPI binding to DT schema

 .../bindings/spi/snps,dw-apb-ssi.txt          |  44 --
 .../bindings/spi/snps,dw-apb-ssi.yaml         | 127 +++++
 .../devicetree/bindings/spi/spi-dw.txt        |  24 -
 drivers/spi/Kconfig                           |  15 +-
 drivers/spi/Makefile                          |   5 +-
 drivers/spi/{spi-dw.c => spi-dw-core.c}       |  95 ++--
 drivers/spi/spi-dw-dma.c                      | 482 ++++++++++++++++++
 drivers/spi/spi-dw-mid.c                      | 382 --------------
 drivers/spi/spi-dw-mmio.c                     |   4 +
 drivers/spi/spi-dw-pci.c                      |  50 +-
 drivers/spi/spi-dw.h                          |  20 +-
 11 files changed, 719 insertions(+), 529 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.txt
 create mode 100644 Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
 delete mode 100644 Documentation/devicetree/bindings/spi/spi-dw.txt
 rename drivers/spi/{spi-dw.c => spi-dw-core.c} (82%)
 create mode 100644 drivers/spi/spi-dw-dma.c
 delete mode 100644 drivers/spi/spi-dw-mid.c

--
2.26.2
parents 2604d487 8378449d
......@@ -226,17 +226,20 @@ config SPI_DESIGNWARE
help
general driver for SPI controller core from DesignWare
if SPI_DESIGNWARE
config SPI_DW_DMA
bool "DMA support for DW SPI controller"
config SPI_DW_PCI
tristate "PCI interface driver for DW SPI core"
depends on SPI_DESIGNWARE && PCI
config SPI_DW_MID_DMA
bool "DMA support for DW SPI controller on Intel MID platform"
depends on SPI_DW_PCI && DW_DMAC_PCI
depends on PCI
config SPI_DW_MMIO
tristate "Memory-mapped io interface driver for DW SPI core"
depends on SPI_DESIGNWARE
depends on HAS_IOMEM
endif
config SPI_DLN2
tristate "Diolan DLN-2 USB SPI adapter"
......
......@@ -36,9 +36,10 @@ obj-$(CONFIG_SPI_COLDFIRE_QSPI) += spi-coldfire-qspi.o
obj-$(CONFIG_SPI_DAVINCI) += spi-davinci.o
obj-$(CONFIG_SPI_DLN2) += spi-dln2.o
obj-$(CONFIG_SPI_DESIGNWARE) += spi-dw.o
spi-dw-y := spi-dw-core.o
spi-dw-$(CONFIG_SPI_DW_DMA) += spi-dw-dma.o
obj-$(CONFIG_SPI_DW_MMIO) += spi-dw-mmio.o
obj-$(CONFIG_SPI_DW_PCI) += spi-dw-midpci.o
spi-dw-midpci-objs := spi-dw-pci.o spi-dw-mid.o
obj-$(CONFIG_SPI_DW_PCI) += spi-dw-pci.o
obj-$(CONFIG_SPI_EFM32) += spi-efm32.o
obj-$(CONFIG_SPI_EP93XX) += spi-ep93xx.o
obj-$(CONFIG_SPI_FALCON) += spi-falcon.o
......
......@@ -29,66 +29,29 @@ struct chip_data {
};
#ifdef CONFIG_DEBUG_FS
#define SPI_REGS_BUFSIZE 1024
static ssize_t dw_spi_show_regs(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
{
struct dw_spi *dws = file->private_data;
char *buf;
u32 len = 0;
ssize_t ret;
buf = kzalloc(SPI_REGS_BUFSIZE, GFP_KERNEL);
if (!buf)
return 0;
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"%s registers:\n", dev_name(&dws->master->dev));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"=================================\n");
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"CTRLR0: \t0x%08x\n", dw_readl(dws, DW_SPI_CTRLR0));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"CTRLR1: \t0x%08x\n", dw_readl(dws, DW_SPI_CTRLR1));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"SSIENR: \t0x%08x\n", dw_readl(dws, DW_SPI_SSIENR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"SER: \t\t0x%08x\n", dw_readl(dws, DW_SPI_SER));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"BAUDR: \t\t0x%08x\n", dw_readl(dws, DW_SPI_BAUDR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"TXFTLR: \t0x%08x\n", dw_readl(dws, DW_SPI_TXFTLR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"RXFTLR: \t0x%08x\n", dw_readl(dws, DW_SPI_RXFTLR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"TXFLR: \t\t0x%08x\n", dw_readl(dws, DW_SPI_TXFLR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"RXFLR: \t\t0x%08x\n", dw_readl(dws, DW_SPI_RXFLR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"SR: \t\t0x%08x\n", dw_readl(dws, DW_SPI_SR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"IMR: \t\t0x%08x\n", dw_readl(dws, DW_SPI_IMR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"ISR: \t\t0x%08x\n", dw_readl(dws, DW_SPI_ISR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"DMACR: \t\t0x%08x\n", dw_readl(dws, DW_SPI_DMACR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"DMATDLR: \t0x%08x\n", dw_readl(dws, DW_SPI_DMATDLR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"DMARDLR: \t0x%08x\n", dw_readl(dws, DW_SPI_DMARDLR));
len += scnprintf(buf + len, SPI_REGS_BUFSIZE - len,
"=================================\n");
ret = simple_read_from_buffer(user_buf, count, ppos, buf, len);
kfree(buf);
return ret;
#define DW_SPI_DBGFS_REG(_name, _off) \
{ \
.name = _name, \
.offset = _off, \
}
static const struct file_operations dw_spi_regs_ops = {
.owner = THIS_MODULE,
.open = simple_open,
.read = dw_spi_show_regs,
.llseek = default_llseek,
static const struct debugfs_reg32 dw_spi_dbgfs_regs[] = {
DW_SPI_DBGFS_REG("CTRLR0", DW_SPI_CTRLR0),
DW_SPI_DBGFS_REG("CTRLR1", DW_SPI_CTRLR1),
DW_SPI_DBGFS_REG("SSIENR", DW_SPI_SSIENR),
DW_SPI_DBGFS_REG("SER", DW_SPI_SER),
DW_SPI_DBGFS_REG("BAUDR", DW_SPI_BAUDR),
DW_SPI_DBGFS_REG("TXFTLR", DW_SPI_TXFTLR),
DW_SPI_DBGFS_REG("RXFTLR", DW_SPI_RXFTLR),
DW_SPI_DBGFS_REG("TXFLR", DW_SPI_TXFLR),
DW_SPI_DBGFS_REG("RXFLR", DW_SPI_RXFLR),
DW_SPI_DBGFS_REG("SR", DW_SPI_SR),
DW_SPI_DBGFS_REG("IMR", DW_SPI_IMR),
DW_SPI_DBGFS_REG("ISR", DW_SPI_ISR),
DW_SPI_DBGFS_REG("DMACR", DW_SPI_DMACR),
DW_SPI_DBGFS_REG("DMATDLR", DW_SPI_DMATDLR),
DW_SPI_DBGFS_REG("DMARDLR", DW_SPI_DMARDLR),
};
static int dw_spi_debugfs_init(struct dw_spi *dws)
......@@ -100,8 +63,11 @@ static int dw_spi_debugfs_init(struct dw_spi *dws)
if (!dws->debugfs)
return -ENOMEM;
debugfs_create_file("registers", S_IFREG | S_IRUGO,
dws->debugfs, (void *)dws, &dw_spi_regs_ops);
dws->regset.regs = dw_spi_dbgfs_regs;
dws->regset.nregs = ARRAY_SIZE(dw_spi_dbgfs_regs);
dws->regset.base = dws->regs;
debugfs_create_regset32("registers", 0400, dws->debugfs, &dws->regset);
return 0;
}
......@@ -352,6 +318,7 @@ static int dw_spi_transfer_one(struct spi_controller *master,
spi_set_clk(dws, chip->clk_div);
}
transfer->effective_speed_hz = dws->max_freq / chip->clk_div;
dws->n_bytes = DIV_ROUND_UP(transfer->bits_per_word, BITS_PER_BYTE);
cr0 = dws->update_cr0(master, spi, transfer);
......@@ -388,11 +355,8 @@ static int dw_spi_transfer_one(struct spi_controller *master,
spi_enable_chip(dws, 1);
if (dws->dma_mapped) {
ret = dws->dma_ops->dma_transfer(dws, transfer);
if (ret < 0)
return ret;
}
if (dws->dma_mapped)
return dws->dma_ops->dma_transfer(dws, transfer);
return 1;
}
......@@ -517,6 +481,7 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
dev_warn(dev, "DMA init failed\n");
} else {
master->can_dma = dws->dma_ops->can_dma;
master->flags |= SPI_CONTROLLER_MUST_TX;
}
}
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Special handling for DW core on Intel MID platform
* Special handling for DW DMA core
*
* Copyright (c) 2009, 2014 Intel Corporation.
*/
#include <linux/spi/spi.h>
#include <linux/types.h>
#include "spi-dw.h"
#ifdef CONFIG_SPI_DW_MID_DMA
#include <linux/completion.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/irqreturn.h>
#include <linux/jiffies.h>
#include <linux/pci.h>
#include <linux/platform_data/dma-dw.h>
#include <linux/spi/spi.h>
#include <linux/types.h>
#include "spi-dw.h"
#define WAIT_RETRIES 5
#define RX_BUSY 0
#define RX_BURST_LEVEL 16
#define TX_BUSY 1
#define TX_BURST_LEVEL 16
static bool mid_spi_dma_chan_filter(struct dma_chan *chan, void *param)
static bool dw_spi_dma_chan_filter(struct dma_chan *chan, void *param)
{
struct dw_dma_slave *s = param;
......@@ -31,7 +34,32 @@ static bool mid_spi_dma_chan_filter(struct dma_chan *chan, void *param)
return true;
}
static int mid_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
{
struct dma_slave_caps caps;
u32 max_burst, def_burst;
int ret;
def_burst = dws->fifo_len / 2;
ret = dma_get_slave_caps(dws->rxchan, &caps);
if (!ret && caps.max_burst)
max_burst = caps.max_burst;
else
max_burst = RX_BURST_LEVEL;
dws->rxburst = min(max_burst, def_burst);
ret = dma_get_slave_caps(dws->txchan, &caps);
if (!ret && caps.max_burst)
max_burst = caps.max_burst;
else
max_burst = TX_BURST_LEVEL;
dws->txburst = min(max_burst, def_burst);
}
static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
{
struct dw_dma_slave slave = {
.src_id = 0,
......@@ -53,19 +81,23 @@ static int mid_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
/* 1. Init rx channel */
slave.dma_dev = &dma_dev->dev;
dws->rxchan = dma_request_channel(mask, mid_spi_dma_chan_filter, &slave);
dws->rxchan = dma_request_channel(mask, dw_spi_dma_chan_filter, &slave);
if (!dws->rxchan)
goto err_exit;
/* 2. Init tx channel */
slave.dst_id = 1;
dws->txchan = dma_request_channel(mask, mid_spi_dma_chan_filter, &slave);
dws->txchan = dma_request_channel(mask, dw_spi_dma_chan_filter, &slave);
if (!dws->txchan)
goto free_rxchan;
dws->master->dma_rx = dws->rxchan;
dws->master->dma_tx = dws->txchan;
init_completion(&dws->dma_completion);
dw_spi_dma_maxburst_init(dws);
return 0;
free_rxchan:
......@@ -75,7 +107,7 @@ static int mid_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
return -EBUSY;
}
static int mid_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
static int dw_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
{
dws->rxchan = dma_request_slave_channel(dev, "rx");
if (!dws->rxchan)
......@@ -91,10 +123,14 @@ static int mid_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
dws->master->dma_rx = dws->rxchan;
dws->master->dma_tx = dws->txchan;
init_completion(&dws->dma_completion);
dw_spi_dma_maxburst_init(dws);
return 0;
}
static void mid_spi_dma_exit(struct dw_spi *dws)
static void dw_spi_dma_exit(struct dw_spi *dws)
{
if (dws->txchan) {
dmaengine_terminate_sync(dws->txchan);
......@@ -109,7 +145,7 @@ static void mid_spi_dma_exit(struct dw_spi *dws)
dw_writel(dws, DW_SPI_DMACR, 0);
}
static irqreturn_t dma_transfer(struct dw_spi *dws)
static irqreturn_t dw_spi_dma_transfer_handler(struct dw_spi *dws)
{
u16 irq_status = dw_readl(dws, DW_SPI_ISR);
......@@ -121,19 +157,20 @@ static irqreturn_t dma_transfer(struct dw_spi *dws)
dev_err(&dws->master->dev, "%s: FIFO overrun/underrun\n", __func__);
dws->master->cur_msg->status = -EIO;
spi_finalize_current_transfer(dws->master);
complete(&dws->dma_completion);
return IRQ_HANDLED;
}
static bool mid_spi_can_dma(struct spi_controller *master,
struct spi_device *spi, struct spi_transfer *xfer)
static bool dw_spi_can_dma(struct spi_controller *master,
struct spi_device *spi, struct spi_transfer *xfer)
{
struct dw_spi *dws = spi_controller_get_devdata(master);
return xfer->len > dws->fifo_len;
}
static enum dma_slave_buswidth convert_dma_width(u8 n_bytes) {
static enum dma_slave_buswidth dw_spi_dma_convert_width(u8 n_bytes)
{
if (n_bytes == 1)
return DMA_SLAVE_BUSWIDTH_1_BYTE;
else if (n_bytes == 2)
......@@ -142,6 +179,56 @@ static enum dma_slave_buswidth convert_dma_width(u8 n_bytes) {
return DMA_SLAVE_BUSWIDTH_UNDEFINED;
}
static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
{
unsigned long long ms;
ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
do_div(ms, xfer->effective_speed_hz);
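/* Give the transfer twice its nominal duration plus a 200 ms margin */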
ms += ms + 200;
if (ms > UINT_MAX)
ms = UINT_MAX;
ms = wait_for_completion_timeout(&dws->dma_completion,
msecs_to_jiffies(ms));
if (ms == 0) {
dev_err(&dws->master->cur_msg->spi->dev,
"DMA transaction timed out\n");
return -ETIMEDOUT;
}
return 0;
}
static inline bool dw_spi_dma_tx_busy(struct dw_spi *dws)
{
return !(dw_readl(dws, DW_SPI_SR) & SR_TF_EMPT);
}
static int dw_spi_dma_wait_tx_done(struct dw_spi *dws,
struct spi_transfer *xfer)
{
int retry = WAIT_RETRIES;
struct spi_delay delay;
u32 nents;
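/* Wait the number of SCK periods needed to shift out the words still left in the Tx FIFO */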
nents = dw_readl(dws, DW_SPI_TXFLR);
delay.unit = SPI_DELAY_UNIT_SCK;
delay.value = nents * dws->n_bytes * BITS_PER_BYTE;
while (dw_spi_dma_tx_busy(dws) && retry--)
spi_delay_exec(&delay, xfer);
if (retry < 0) {
dev_err(&dws->master->dev, "Tx hanged up\n");
return -EIO;
}
return 0;
}
/*
* dws->dma_chan_busy is set before the dma transfer starts, callback for tx
* channel will clear a corresponding bit.
......@@ -155,11 +242,11 @@ static void dw_spi_dma_tx_done(void *arg)
return;
dw_writel(dws, DW_SPI_DMACR, 0);
spi_finalize_current_transfer(dws->master);
complete(&dws->dma_completion);
}
static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws,
struct spi_transfer *xfer)
static struct dma_async_tx_descriptor *
dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
{
struct dma_slave_config txconf;
struct dma_async_tx_descriptor *txdesc;
......@@ -170,9 +257,9 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws,
memset(&txconf, 0, sizeof(txconf));
txconf.direction = DMA_MEM_TO_DEV;
txconf.dst_addr = dws->dma_addr;
txconf.dst_maxburst = 16;
txconf.dst_maxburst = dws->txburst;
txconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
txconf.dst_addr_width = convert_dma_width(dws->n_bytes);
txconf.dst_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
txconf.device_fc = false;
dmaengine_slave_config(dws->txchan, &txconf);
......@@ -191,6 +278,49 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws,
return txdesc;
}
static inline bool dw_spi_dma_rx_busy(struct dw_spi *dws)
{
return !!(dw_readl(dws, DW_SPI_SR) & SR_RF_NOT_EMPT);
}
static int dw_spi_dma_wait_rx_done(struct dw_spi *dws)
{
int retry = WAIT_RETRIES;
struct spi_delay delay;
unsigned long ns, us;
u32 nents;
/*
* It's unlikely that DMA engine is still doing the data fetching, but
* if it's let's give it some reasonable time. The timeout calculation
* is based on the synchronous APB/SSI reference clock rate, on a
* number of data entries left in the Rx FIFO, times a number of clock
* periods normally needed for a single APB read/write transaction
* without PREADY signal utilized (which is true for the DW APB SSI
* controller).
*/
nents = dw_readl(dws, DW_SPI_RXFLR);
ns = 4U * NSEC_PER_SEC / dws->max_freq * nents;
if (ns <= NSEC_PER_USEC) {
delay.unit = SPI_DELAY_UNIT_NSECS;
delay.value = ns;
} else {
us = DIV_ROUND_UP(ns, NSEC_PER_USEC);
delay.unit = SPI_DELAY_UNIT_USECS;
delay.value = clamp_val(us, 0, USHRT_MAX);
}
while (dw_spi_dma_rx_busy(dws) && retry--)
spi_delay_exec(&delay, NULL);
if (retry < 0) {
dev_err(&dws->master->dev, "Rx hanged up\n");
return -EIO;
}
return 0;
}
/*
* dws->dma_chan_busy is set before the dma transfer starts, callback for rx
* channel will clear a corresponding bit.
......@@ -204,7 +334,7 @@ static void dw_spi_dma_rx_done(void *arg)
return;
dw_writel(dws, DW_SPI_DMACR, 0);
spi_finalize_current_transfer(dws->master);
complete(&dws->dma_completion);
}
static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
......@@ -219,9 +349,9 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
memset(&rxconf, 0, sizeof(rxconf));
rxconf.direction = DMA_DEV_TO_MEM;
rxconf.src_addr = dws->dma_addr;
rxconf.src_maxburst = 16;
rxconf.src_maxburst = dws->rxburst;
rxconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
rxconf.src_addr_width = convert_dma_width(dws->n_bytes);
rxconf.src_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
rxconf.device_fc = false;
dmaengine_slave_config(dws->rxchan, &rxconf);
......@@ -240,12 +370,12 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
return rxdesc;
}
static int mid_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
{
u16 imr = 0, dma_ctrl = 0;
dw_writel(dws, DW_SPI_DMARDLR, 0xf);
dw_writel(dws, DW_SPI_DMATDLR, 0x10);
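/* Raise the Rx DMA request when at least rxburst entries are in the FIFO, and the Tx DMA request when at least txburst slots are free */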
dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
dw_writel(dws, DW_SPI_DMATDLR, dws->fifo_len - dws->txburst);
if (xfer->tx_buf) {
dma_ctrl |= SPI_DMA_TDMAE;
......@@ -260,14 +390,17 @@ static int mid_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
/* Set the interrupt mask */
spi_umask_intr(dws, imr);
dws->transfer_handler = dma_transfer;
reinit_completion(&dws->dma_completion);
dws->transfer_handler = dw_spi_dma_transfer_handler;
return 0;
}
static int mid_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
{
struct dma_async_tx_descriptor *txdesc, *rxdesc;
int ret;
/* Prepare the TX dma transfer */
txdesc = dw_spi_dma_prepare_tx(dws, xfer);
......@@ -288,10 +421,23 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
dma_async_issue_pending(dws->txchan);
}
return 0;
ret = dw_spi_dma_wait(dws, xfer);
if (ret)
return ret;
if (txdesc && dws->master->cur_msg->status == -EINPROGRESS) {
ret = dw_spi_dma_wait_tx_done(dws, xfer);
if (ret)
return ret;
}
if (rxdesc && dws->master->cur_msg->status == -EINPROGRESS)
ret = dw_spi_dma_wait_rx_done(dws);
return ret;
}
static void mid_spi_dma_stop(struct dw_spi *dws)
static void dw_spi_dma_stop(struct dw_spi *dws)
{
if (test_bit(TX_BUSY, &dws->dma_chan_busy)) {
dmaengine_terminate_sync(dws->txchan);
......@@ -305,78 +451,32 @@ static void mid_spi_dma_stop(struct dw_spi *dws)
dw_writel(dws, DW_SPI_DMACR, 0);
}
static const struct dw_spi_dma_ops mfld_dma_ops = {
.dma_init = mid_spi_dma_init_mfld,
.dma_exit = mid_spi_dma_exit,
.dma_setup = mid_spi_dma_setup,
.can_dma = mid_spi_can_dma,
.dma_transfer = mid_spi_dma_transfer,
.dma_stop = mid_spi_dma_stop,
static const struct dw_spi_dma_ops dw_spi_dma_mfld_ops = {
.dma_init = dw_spi_dma_init_mfld,
.dma_exit = dw_spi_dma_exit,
.dma_setup = dw_spi_dma_setup,
.can_dma = dw_spi_can_dma,
.dma_transfer = dw_spi_dma_transfer,
.dma_stop = dw_spi_dma_stop,
};
static void dw_spi_mid_setup_dma_mfld(struct dw_spi *dws)
void dw_spi_dma_setup_mfld(struct dw_spi *dws)
{
dws->dma_ops = &mfld_dma_ops;
dws->dma_ops = &dw_spi_dma_mfld_ops;
}
static const struct dw_spi_dma_ops generic_dma_ops = {
.dma_init = mid_spi_dma_init_generic,
.dma_exit = mid_spi_dma_exit,
.dma_setup = mid_spi_dma_setup,
.can_dma = mid_spi_can_dma,
.dma_transfer = mid_spi_dma_transfer,
.dma_stop = mid_spi_dma_stop,
EXPORT_SYMBOL_GPL(dw_spi_dma_setup_mfld);
static const struct dw_spi_dma_ops dw_spi_dma_generic_ops = {
.dma_init = dw_spi_dma_init_generic,
.dma_exit = dw_spi_dma_exit,
.dma_setup = dw_spi_dma_setup,
.can_dma = dw_spi_can_dma,
.dma_transfer = dw_spi_dma_transfer,
.dma_stop = dw_spi_dma_stop,
};
static void dw_spi_mid_setup_dma_generic(struct dw_spi *dws)
{
dws->dma_ops = &generic_dma_ops;
}
#else /* CONFIG_SPI_DW_MID_DMA */
static inline void dw_spi_mid_setup_dma_mfld(struct dw_spi *dws) {}
static inline void dw_spi_mid_setup_dma_generic(struct dw_spi *dws) {}
#endif
/* Some specific info for SPI0 controller on Intel MID */
/* HW info for MRST Clk Control Unit, 32b reg per controller */
#define MRST_SPI_CLK_BASE 100000000 /* 100m */
#define MRST_CLK_SPI_REG 0xff11d86c
#define CLK_SPI_BDIV_OFFSET 0
#define CLK_SPI_BDIV_MASK 0x00000007
#define CLK_SPI_CDIV_OFFSET 9
#define CLK_SPI_CDIV_MASK 0x00000e00
#define CLK_SPI_DISABLE_OFFSET 8
int dw_spi_mid_init_mfld(struct dw_spi *dws)
{
void __iomem *clk_reg;
u32 clk_cdiv;
clk_reg = ioremap(MRST_CLK_SPI_REG, 16);
if (!clk_reg)
return -ENOMEM;
/* Get SPI controller operating freq info */
clk_cdiv = readl(clk_reg + dws->bus_num * sizeof(u32));
clk_cdiv &= CLK_SPI_CDIV_MASK;
clk_cdiv >>= CLK_SPI_CDIV_OFFSET;
dws->max_freq = MRST_SPI_CLK_BASE / (clk_cdiv + 1);
iounmap(clk_reg);
/* Register hook to configure CTRLR0 */
dws->update_cr0 = dw_spi_update_cr0;
dw_spi_mid_setup_dma_mfld(dws);
return 0;
}
int dw_spi_mid_init_generic(struct dw_spi *dws)
void dw_spi_dma_setup_generic(struct dw_spi *dws)
{
/* Register hook to configure CTRLR0 */
dws->update_cr0 = dw_spi_update_cr0;
dw_spi_mid_setup_dma_generic(dws);
return 0;
dws->dma_ops = &dw_spi_dma_generic_ops;
}
EXPORT_SYMBOL_GPL(dw_spi_dma_setup_generic);
......@@ -151,6 +151,8 @@ static int dw_spi_dw_apb_init(struct platform_device *pdev,
/* Register hook to configure CTRLR0 */
dwsmmio->dws.update_cr0 = dw_spi_update_cr0;
dw_spi_dma_setup_generic(&dwsmmio->dws);
return 0;
}
......@@ -160,6 +162,8 @@ static int dw_spi_dwc_ssi_init(struct platform_device *pdev,
/* Register hook to configure CTRLR0 */
dwsmmio->dws.update_cr0 = dw_spi_update_cr0_v1_01a;
dw_spi_dma_setup_generic(&dwsmmio->dws);
return 0;
}
......
......@@ -15,6 +15,15 @@
#define DRIVER_NAME "dw_spi_pci"
/* HW info for MRST Clk Control Unit, 32b reg per controller */
#define MRST_SPI_CLK_BASE 100000000 /* 100m */
#define MRST_CLK_SPI_REG 0xff11d86c
#define CLK_SPI_BDIV_OFFSET 0
#define CLK_SPI_BDIV_MASK 0x00000007
#define CLK_SPI_CDIV_OFFSET 9
#define CLK_SPI_CDIV_MASK 0x00000e00
#define CLK_SPI_DISABLE_OFFSET 8
struct spi_pci_desc {
int (*setup)(struct dw_spi *);
u16 num_cs;
......@@ -22,20 +31,55 @@ struct spi_pci_desc {
u32 max_freq;
};
static int spi_mid_init(struct dw_spi *dws)
{
void __iomem *clk_reg;
u32 clk_cdiv;
clk_reg = ioremap(MRST_CLK_SPI_REG, 16);
if (!clk_reg)
return -ENOMEM;
/* Get SPI controller operating freq info */
clk_cdiv = readl(clk_reg + dws->bus_num * sizeof(u32));
clk_cdiv &= CLK_SPI_CDIV_MASK;
clk_cdiv >>= CLK_SPI_CDIV_OFFSET;
dws->max_freq = MRST_SPI_CLK_BASE / (clk_cdiv + 1);
iounmap(clk_reg);
/* Register hook to configure CTRLR0 */
dws->update_cr0 = dw_spi_update_cr0;
dw_spi_dma_setup_mfld(dws);
return 0;
}
static int spi_generic_init(struct dw_spi *dws)
{
/* Register hook to configure CTRLR0 */
dws->update_cr0 = dw_spi_update_cr0;
dw_spi_dma_setup_generic(dws);
return 0;
}
static struct spi_pci_desc spi_pci_mid_desc_1 = {
.setup = dw_spi_mid_init_mfld,
.setup = spi_mid_init,
.num_cs = 5,
.bus_num = 0,
};
static struct spi_pci_desc spi_pci_mid_desc_2 = {
.setup = dw_spi_mid_init_mfld,
.setup = spi_mid_init,
.num_cs = 2,
.bus_num = 1,
};
static struct spi_pci_desc spi_pci_ehl_desc = {
.setup = dw_spi_mid_init_generic,
.setup = spi_generic_init,
.num_cs = 2,
.bus_num = -1,
.max_freq = 100000000,
......
......@@ -2,6 +2,8 @@
#ifndef DW_SPI_HEADER_H
#define DW_SPI_HEADER_H
#include <linux/completion.h>
#include <linux/debugfs.h>
#include <linux/irqreturn.h>
#include <linux/io.h>
#include <linux/scatterlist.h>
......@@ -141,13 +143,17 @@ struct dw_spi {
/* DMA info */
struct dma_chan *txchan;
u32 txburst;
struct dma_chan *rxchan;
u32 rxburst;
unsigned long dma_chan_busy;
dma_addr_t dma_addr; /* phy address of the Data register */
const struct dw_spi_dma_ops *dma_ops;
struct completion dma_completion;
#ifdef CONFIG_DEBUG_FS
struct dentry *debugfs;
struct debugfs_regset32 regset;
#endif
};
......@@ -253,8 +259,16 @@ extern u32 dw_spi_update_cr0_v1_01a(struct spi_controller *master,
struct spi_device *spi,
struct spi_transfer *transfer);
/* platform related setup */
extern int dw_spi_mid_init_mfld(struct dw_spi *dws);
extern int dw_spi_mid_init_generic(struct dw_spi *dws);
#ifdef CONFIG_SPI_DW_DMA
extern void dw_spi_dma_setup_mfld(struct dw_spi *dws);
extern void dw_spi_dma_setup_generic(struct dw_spi *dws);
#else
static inline void dw_spi_dma_setup_mfld(struct dw_spi *dws) {}
static inline void dw_spi_dma_setup_generic(struct dw_spi *dws) {}
#endif /* !CONFIG_SPI_DW_DMA */
#endif /* DW_SPI_HEADER_H */