    spi: dw-dma: Add one-by-one SG list entries transfer · ad4fe126
    Serge Semin authored
    If at least one of the requested DMA engine channels doesn't support
    hardware accelerated traversal of the SG list entries, the DMA driver
    will most likely work around that by resubmitting the SG list entries
    from its IRQ handler. That causes a problem if the DMA Tx channel is
    recharged and re-executed before the DMA Rx channel. Due to the
    non-deterministic IRQ-handler execution latency, the DMA Tx channel may
    start pushing data to the SPI bus before the DMA Rx channel has even
    been reinitialized with the next inbound SG list entry. The DMA Tx
    channel thereby implicitly starts filling the DW APB SSI Rx FIFO up,
    which will eventually overflow while the DMA Rx channel is still being
    recharged and re-executed.
    
    In order to solve the problem we have to feed the DMA engine with the
    SG list entries one-by-one. That keeps the DW APB SSI Tx and Rx FIFOs
    synchronized and prevents the Rx FIFO overflow. Since in general the
    SPI tx_sg and rx_sg lists may have different numbers of entries with
    different lengths (though the total lengths must match), we virtually
    split the SG lists into a set of DMA transfers, each with a length
    equal to the minimum of the current Tx and Rx SG entries' remainders.
    For example, Tx entries of {8, 8} bytes against Rx entries of {4, 12}
    bytes are executed as three transfers of 4, 4 and 8 bytes.
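
    A minimal sketch of such a one-by-one traversal loop is given below. It
    assumes the driver-internal struct dw_spi from spi-dw.h; the
    dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() and dw_spi_dma_wait()
    helpers are hypothetical stand-ins for the actual descriptor submission
    and completion wait, while the scatterlist and dmaengine calls are the
    standard kernel APIs.

    #include <linux/dmaengine.h>
    #include <linux/minmax.h>
    #include <linux/scatterlist.h>
    #include <linux/spi/spi.h>

    #include "spi-dw.h"    /* driver-internal struct dw_spi */

    /* Sketch: virtually split tx_sg/rx_sg into min-length chunks and execute
     * them one-by-one so the Tx and Rx FIFOs stay synchronized. The
     * dw_spi_dma_submit_*() and dw_spi_dma_wait() helpers are hypothetical.
     */
    static int dw_spi_dma_transfer_one(struct dw_spi *dws,
                                       struct spi_transfer *xfer)
    {
        struct scatterlist *tx_sg = NULL, *rx_sg = NULL, tx_tmp, rx_tmp;
        unsigned int tx_len = 0, rx_len = 0;
        unsigned int base, len;
        int ret = 0;

        sg_init_table(&tx_tmp, 1);
        sg_init_table(&rx_tmp, 1);

        for (base = 0, len = 0; base < xfer->len; base += len) {
            /* Fetch the next Tx SG entry once the previous one is drained */
            if (!tx_len) {
                tx_sg = !tx_sg ? &xfer->tx_sg.sgl[0] : sg_next(tx_sg);
                sg_dma_address(&tx_tmp) = sg_dma_address(tx_sg);
                tx_len = sg_dma_len(tx_sg);
            }

            /* Likewise for the next Rx SG entry */
            if (!rx_len) {
                rx_sg = !rx_sg ? &xfer->rx_sg.sgl[0] : sg_next(rx_sg);
                sg_dma_address(&rx_tmp) = sg_dma_address(rx_sg);
                rx_len = sg_dma_len(rx_sg);
            }

            /* The chunk length is the minimum of the entries' remainders */
            len = min(tx_len, rx_len);
            sg_dma_len(&tx_tmp) = len;
            sg_dma_len(&rx_tmp) = len;

            ret = dw_spi_dma_submit_tx(dws, &tx_tmp);    /* hypothetical */
            if (ret)
                break;

            ret = dw_spi_dma_submit_rx(dws, &rx_tmp);    /* hypothetical */
            if (ret)
                break;

            /* Issue Rx before Tx so no inbound data is ever dropped */
            dma_async_issue_pending(dws->rxchan);
            dma_async_issue_pending(dws->txchan);

            /* Wait for this chunk to finish before recharging the channels */
            ret = dw_spi_dma_wait(dws, len);             /* hypothetical */
            if (ret)
                break;

            /* Advance within the current entries for the next chunk */
            sg_dma_address(&tx_tmp) += len;
            sg_dma_address(&rx_tmp) += len;
            tx_len -= len;
            rx_len -= len;
        }

        return ret;
    }

    With the {8, 8}/{4, 12} example above, this loop executes three chunks
    of 4, 4 and 8 bytes, recharging both channels between the chunks so the
    FIFOs never diverge.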
    
    The solution described above is only executed if a full-duplex SPI
    transfer is requested and the DMA engine hasn't provided channels with
    the hardware accelerated SG list traversal capability needed to handle
    both SG lists at once.
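
    As a rough sketch of that mode selection: the dmaengine
    dma_get_slave_caps() call reports a max_sg_burst capability, where zero
    means there is no limit on the number of SG entries the channel can
    traverse in hardware; dw_spi_dma_transfer_all() below is a hypothetical
    name for the "whole SG lists at once" path.

    /* Sketch: take the one-by-one path only for full-duplex transfers when
     * either channel can't traverse its whole SG list in hardware.
     */
    static bool dw_spi_dma_sg_fits(struct dma_chan *chan, unsigned int nents)
    {
        struct dma_slave_caps caps;

        /* Assume no HW SG list traversal if the caps can't be fetched */
        if (dma_get_slave_caps(chan, &caps))
            return false;

        /* max_sg_burst == 0 means there is no SG entries limit */
        return !caps.max_sg_burst || nents <= caps.max_sg_burst;
    }

    static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
    {
        if (!xfer->rx_buf ||
            (dw_spi_dma_sg_fits(dws->txchan, xfer->tx_sg.nents) &&
             dw_spi_dma_sg_fits(dws->rxchan, xfer->rx_sg.nents)))
            return dw_spi_dma_transfer_all(dws, xfer);    /* hypothetical */

        return dw_spi_dma_transfer_one(dws, xfer);
    }
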
    Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
    Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Link: https://lore.kernel.org/r/20200920112322.24585-12-Sergey.Semin@baikalelectronics.ru
    Signed-off-by: Mark Brown <broonie@kernel.org>