  1. 20 Aug, 2015 1 commit
  2. 19 Aug, 2015 6 commits
    • dmaengine: mv_xor: optimize performance by using a subset of the XOR channels · 77757291
      Thomas Petazzoni authored
      Due to how async_tx behaves internally, having more XOR channels than
      CPUs is actually hurting performance more than it improves it, because
      memcpy requests get scheduled on a different channel than the XOR
      requests, but async_tx will still wait for the completion of the
      memcpy requests before scheduling the XOR requests.
      
      It is in fact more efficient to have at most one channel per CPU,
      which this patch implements by limiting the number of channels per
      engine, and the number of engines registered depending on the number
      of available CPUs.
      
      Marvell platforms are currently available in one-CPU, two-CPU and
      four-CPU configurations:
      
       - in the configurations with one CPU, only one channel from one
         engine is used.
      
       - in the configurations with two CPUs, only one channel from each
         engine is used (there are two XOR engines)
      
       - in the configurations with four CPUs, both channels of both engines
         are used.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      77757291
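
      A minimal sketch of the channel-selection policy described above; the
      helper name mv_xor_channels_to_use() and the per-engine constant are
      illustrative assumptions, not the driver's actual code:

        #include <linux/cpumask.h>

        #define MV_XOR_CHANNELS_PER_ENGINE 2

        /* Illustrative only: decide how many channels of a given engine to
         * register so the total never exceeds the number of CPUs. */
        static int mv_xor_channels_to_use(int engine_index)
        {
            int cpus = num_online_cpus();

            if (cpus == 1)      /* one CPU: one channel of the first engine */
                return engine_index == 0 ? 1 : 0;
            if (cpus == 2)      /* two CPUs: one channel of each engine */
                return 1;
            /* four CPUs: both channels of both engines */
            return MV_XOR_CHANNELS_PER_ENGINE;
        }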
    • dmaengine: mv_xor: remove support for dmacap,* DT properties · 6d8f7abd
      Thomas Petazzoni authored
      The only reason we had the dmacap,* properties is that, back when
      DMA_MEMSET was supported, only one of the two channels per engine
      could do a memset operation. But this is something the driver
      already knows anyway, and DMA_MEMSET support has since been
      removed.
      
      The driver already knows what each channel supports, and the
      one-to-one mapping between Linux-specific implementation details (such
      as dmacap,interrupt enabling DMA_INTERRUPT) and DT properties is a
      good indication that these DT properties are wrong.
      
      Therefore, this commit simply gets rid of these dmacap,* properties:
      they are now ignored, and the driver is responsible for knowing the
      capabilities of the hardware with regard to the dmaengine subsystem's
      expectations.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      6d8f7abd
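
      With the properties gone, the driver advertises the capabilities
      itself. A simplified sketch of that idea using the standard dmaengine
      capability macros (the dma_dev variable stands in for the struct
      dma_device being registered; this is not the driver's exact probe
      code):

        #include <linux/dmaengine.h>

        /* The driver declares what the hardware can do, instead of reading
         * dmacap,* properties from the device tree. */
        dma_cap_zero(dma_dev->cap_mask);
        dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
        dma_cap_set(DMA_XOR, dma_dev->cap_mask);
        dma_cap_set(DMA_INTERRUPT, dma_dev->cap_mask);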
    • dmaengine: pl330: do not emit loop for 1 byte transfer. · 31495d60
      Michal Suchanek authored
      When there is only one burst required, do not emit loop instructions to
      loop exactly once. Emit just the body of the loop.
      Signed-off-by: Michal Suchanek <hramrach@gmail.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      31495d60
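
      The control flow, as a rough sketch; the emit_* helper names below are
      placeholders, not the driver's real PL330 microcode emitters:

        /* Illustrative only: when the body would execute exactly once, skip
         * the DMALP/DMALPEND loop framing and emit the body alone. */
        static unsigned int emit_bursts(u8 *buf, unsigned int bursts)
        {
            unsigned int off = 0;

            if (bursts == 1)
                return emit_body(buf);              /* no loop instructions */

            off += emit_lp(buf + off, bursts);      /* DMALP: loop counter  */
            off += emit_body(buf + off);
            off += emit_lpend(buf + off);           /* DMALPEND: jump back  */
            return off;
        }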
    • dmaengine: kill off set_irq_flags usage · 2f27b81c
      Rob Herring authored
      set_irq_flags is ARM-specific with custom flags which have genirq
      equivalents. Convert drivers to use the genirq interfaces directly, so we
      can kill off set_irq_flags. The translation of flags is as follows:
      
      IRQF_VALID -> !IRQ_NOREQUEST
      IRQF_PROBE -> !IRQ_NOPROBE
      IRQF_NOAUTOEN -> IRQ_NOAUTOEN
      
      For IRQs managed by an irqdomain, the irqdomain core code handles clearing
      and setting IRQ_NOREQUEST already, so there is no need to do this in
      .map() functions and we can simply remove the set_irq_flags calls. Some
      users also modify IRQ_NOPROBE and this has been maintained although it
      is not clear that is really needed. There appears to be a great deal of
      blind copy and paste of this code.
      Signed-off-by: Rob Herring <robh@kernel.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: dmaengine@vger.kernel.org
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      2f27b81c
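
      A before/after sketch of the conversion for one driver-owned interrupt,
      following the translation listed above (the irq variable is
      illustrative):

        #include <linux/irq.h>

        /* Before: ARM-specific helper with custom flags */
        set_irq_flags(irq, IRQF_VALID | IRQF_NOAUTOEN);

        /* After: genirq interfaces used directly */
        irq_clear_status_flags(irq, IRQ_NOREQUEST); /* IRQF_VALID    -> !IRQ_NOREQUEST */
        irq_set_status_flags(irq, IRQ_NOAUTOEN);    /* IRQF_NOAUTOEN ->  IRQ_NOAUTOEN  */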
    • dmaengine: imx-sdma: Add imx6sx platform support · d078cd1b
      Zidan Wang authored
      The new Solo X has more requirements for SDMA events. So it creates
      an event mux in the GPR (General Purpose Register) to remap most of
      the event numbers. If we want to use SDMA support for those modules
      that do not get the event number by default, we need to configure the
      GPR first.
      
      Thus this patch adds support for GPR event remapping configuration
      to the SDMA driver.
      Signed-off-by: Zidan Wang <zidan.wang@freescale.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      d078cd1b
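
      A hedged sketch of the GPR remapping step through the syscon/regmap
      interface; the phandle property name, register offset and bit below
      are placeholders, not the actual i.MX6SX binding:

        #include <linux/bitops.h>
        #include <linux/err.h>
        #include <linux/mfd/syscon.h>
        #include <linux/of.h>
        #include <linux/regmap.h>

        /* Illustrative only: route one SDMA event through the GPR event mux
         * before the corresponding channel is used. */
        static int sdma_remap_event(struct device_node *np)
        {
            struct regmap *gpr;

            gpr = syscon_regmap_lookup_by_phandle(np, "gpr");
            if (IS_ERR(gpr))
                return PTR_ERR(gpr);

            /* select the non-default event routing (placeholder reg/bit) */
            return regmap_update_bits(gpr, 0x0c, BIT(5), BIT(5));
        }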
    • dmaengine: at_xdmac: fix bug in prep_dma_cyclic · e900c30d
      Ludovic Desroches authored
      In cyclic mode, the circular chaining has been broken by the
      introduction of at_xdmac_queue_desc(): AT_XDMAC_MBR_UBC_NDE is set for
      all descriptors except the last one. at_xdmac_queue_desc() has to be
      called one more time to chain the last and the first descriptors.
      Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
      Fixes: 0d0ee751 ("dmaengine: xdmac: Rework the chaining logic")
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      e900c30d
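
      The shape of the fix, as a hedged sketch of the cyclic preparation
      loop (descriptor allocation and variable names are illustrative; only
      at_xdmac_queue_desc() and AT_XDMAC_MBR_UBC_NDE come from the commit
      message):

        struct at_xdmac_desc *first = NULL, *prev = NULL, *desc;
        unsigned int i;

        for (i = 0; i < periods; i++) {
            desc = at_xdmac_get_desc(chan);         /* illustrative helper */
            if (!first)
                first = desc;
            else
                /* links prev to desc, setting AT_XDMAC_MBR_UBC_NDE on prev */
                at_xdmac_queue_desc(chan, prev, desc);
            prev = desc;
        }

        /* The fix: one extra call so the last descriptor chains back to the
         * first, restoring the ring in cyclic mode. */
        at_xdmac_queue_desc(chan, prev, first);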
  3. 18 Aug, 2015 14 commits
  4. 17 Aug, 2015 19 commits