1. 28 Aug, 2015 2 commits
  2. 26 Aug, 2015 5 commits
  3. 24 Aug, 2015 3 commits
  4. 23 Aug, 2015 6 commits
  5. 21 Aug, 2015 3 commits
  6. 20 Aug, 2015 8 commits
    • dmaengine: tegra-apb: Simplify locking for device using global pause · 23a1ec30
      Jon Hunter authored
      Sparse reports the following with regard to locking in the
      tegra_dma_global_pause() and tegra_dma_global_resume() functions:
      
      drivers/dma/tegra20-apb-dma.c:362:9: warning: context imbalance in
      	'tegra_dma_global_pause' - wrong count at exit
      drivers/dma/tegra20-apb-dma.c:366:13: warning: context imbalance in
      	'tegra_dma_global_resume' - unexpected unlock
      
      The warning is caused because tegra_dma_global_pause() acquires a lock
      but does not release it. However, the lock is released by
      tegra_dma_global_resume(). These pause/resume functions are called
      in pairs, and so in practice the locking does work.
      
      This global pause is used on early tegra devices that do not have an
      individual pause for each channel. The lock appears to be used to ensure
      that multiple channels do not attempt to assert/de-assert the global pause
      at the same time which could cause the DMA controller to be in the wrong
      paused state. Rather than locking around the entire code between the pause
      and resume, employ a simple counter to keep track of the global
      pause requests. With a counter, it is only necessary to hold the lock
      while pausing and unpausing the DMA controller, which fixes the
      sparse warning.
      
      Please note that for devices that support individual channel pausing, the
      DMA controller lock is not held between pausing and unpausing the channel.
      Hence, this change will make the devices that use the global pause behave
      in the same way, with regard to locking, as those that don't.
      Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
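The counter scheme described in this commit message can be modelled in a few lines of user-space C. The pthread mutex stands in for the driver's spinlock, and all names here are illustrative, not the driver's actual symbols:

```c
/* Sketch of the counter-based global pause: the lock is only held
 * while the counter is updated and the (mock) pause state toggled,
 * never across a pause/resume pair. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int global_pause_count;
static bool controller_paused;

static void global_pause(void)
{
    pthread_mutex_lock(&global_lock);
    if (global_pause_count++ == 0)
        controller_paused = true;   /* would write the pause register */
    pthread_mutex_unlock(&global_lock);
}

static void global_resume(void)
{
    pthread_mutex_lock(&global_lock);
    if (--global_pause_count == 0)
        controller_paused = false;  /* would clear the pause register */
    pthread_mutex_unlock(&global_lock);
}
```

Because each function acquires and releases the lock itself, the context-imbalance warning disappears, and overlapping pause/resume pairs from different channels still leave the controller paused until the last resume.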
    • dmaengine: tegra-apb: Remove unnecessary return statements and variables · dc1ff4b3
      Jon Hunter authored
      Some void functions have unnecessary return statements at the end
      (reported by sparse) and so remove these. Also remove the return variables
      from functions tegra_dma_prep_slave_sg() and tegra_dma_prep_slave_cyclic()
      because the value is not used.
      Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: tegra-apb: Avoid unnecessary channel base address calculation · 13a33286
      Jon Hunter authored
      Every time a DMA channel register is accessed, the channel base address
      is calculated by adding the DMA base address and the channel register
      offset. Avoid this calculation and simply calculate the channel base
      address once at probe time for each DMA channel.
      Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
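The optimisation amounts to caching `base + channel_offset` once at probe time instead of recomputing it on every register access. A hedged user-space sketch (struct and function names are invented for illustration, not the driver's fields):

```c
/* Model of a per-channel register window whose base address is
 * computed once, at "probe" time. */
#include <stdint.h>
#include <stddef.h>

struct dma_chan_regs {
    uint8_t *chan_base;   /* dma_base + channel offset, computed once */
};

static uint8_t regs_mock[256];      /* stand-in for the mapped MMIO area */
static struct dma_chan_regs chan0;

static void chan_init(struct dma_chan_regs *c, uint8_t *dma_base,
                      size_t chan_offset)
{
    c->chan_base = dma_base + chan_offset;   /* done once at probe */
}

static uint8_t *chan_reg(struct dma_chan_regs *c, size_t reg)
{
    return c->chan_base + reg;   /* no repeated base+offset arithmetic */
}
```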
    • dmaengine: tegra-apb: Remove unused variables · c67886f5
      Jon Hunter authored
      The callback and callback_param members of the tegra_dma_sg_req structure
      are never used. The dmaengine structure dma_async_tx_descriptor
      defines the same members, and these are the ones the driver uses. Therefore,
      remove the unused versions from the tegra_dma_sg_req structure.
      
      The half_done member of tegra_dma_channel structure is configured but
      never used and so remove it.
      Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: Stricter legacy checking in dma_request_slave_channel_compat() · 7dfffb95
      Geert Uytterhoeven authored
      dma_request_slave_channel_compat() is meant for drivers that support
      both DT and legacy platform device based probing: if DT channel DMA
      setup fails, it will fall back to platform data based DMA channel setup,
      using hardcoded DMA channel IDs and a filter function.
      
      However, if the DTS doesn't provide a "dmas" property for the device,
      the fallback is also used. If the legacy filter function is not
      hardcoded in the DMA slave driver, but comes from platform data, it will
      be NULL. Then dma_request_slave_channel_compat() will succeed
      incorrectly, and return a DMA channel, as a NULL legacy filter function
      actually means "all channels are OK", not "do not match".
      
      Later, when trying to use that DMA channel, it will fail with:
      
          rcar-dmac e6700000.dma-controller: rcar_dmac_prep_slave_sg: bad parameter: len=1, id=-22
      
      To fix this, ensure that both the filter function and the DMA channel ID
      are not NULL before using the legacy fallback.
      
      Note that some DMA slave drivers can handle this failure, and will fall
      back to PIO.
      
      See also commit 056f6c87 ("dmaengine: shdma: Make dummy
      shdma_chan_filter() always return false"), which fixed the same issue
      for the case where shdma_chan_filter() is hardcoded in a DMA slave
      driver.
      Suggested-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
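The stricter fallback check can be modelled as below. The types are simplified stand-ins for the dmaengine API, and `request_channel_compat` is a hypothetical user-space model of the behaviour, not the real helper:

```c
/* Model: only take the legacy filter-function path when both the
 * filter and its parameter (the channel ID) are non-NULL; a NULL
 * filter must mean "no match", not "match anything". */
#include <stddef.h>
#include <stdbool.h>

typedef bool (*dma_filter_fn)(void *chan, void *param);

static int legacy_chan;   /* stand-in for a hardware channel */

static void *request_channel_compat(void *dt_chan, dma_filter_fn fn,
                                    void *param)
{
    if (dt_chan)
        return dt_chan;            /* DT-based lookup succeeded */
    if (!fn || !param)
        return NULL;               /* the stricter check: no usable fallback */
    return fn(&legacy_chan, param) ? &legacy_chan : NULL;
}

/* A permissive filter, mimicking how a NULL filter used to behave. */
static bool match_all(void *chan, void *param)
{
    (void)chan;
    (void)param;
    return true;
}
```

Without the `!fn || !param` guard, the NULL-filter case would fall through and "match" a channel that later fails at prep time, exactly the rcar-dmac error quoted above.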
    • dmaengine: xgene-dma: Add ACPI support for X-Gene DMA engine driver · 89079493
      Rameshwar Prasad Sahu authored
      This patch adds ACPI support for the APM X-Gene DMA engine driver.
      Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: imx-sdma: Check for clk_enable() errors · b93edcdd
      Fabio Estevam authored
      clk_enable() may fail, so we should check the return value and
      propagate it in the case of an error.
      Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
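The pattern being introduced is the usual check-and-propagate idiom, sketched here with a mock clock API (the function names are invented, and `-EIO` is just an example error code):

```c
/* Check a clk_enable()-style return value and propagate it instead
 * of silently ignoring a failure. */
#include <errno.h>

static int mock_clk_enable(int should_fail)
{
    return should_fail ? -EIO : 0;
}

static int driver_probe_step(int clk_broken)
{
    int ret = mock_clk_enable(clk_broken);

    if (ret)
        return ret;   /* propagate the error to the caller */

    /* ... continue with setup that needs the clock running ... */
    return 0;
}
```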
    • dmaengine: sun4i: Add support for the DMA engine on sun[457]i SoCs · b096c137
      Emilio López authored
      This patch adds support for the DMA engine present on Allwinner A10,
      A13, A10S and A20 SoCs. This engine has two kinds of channels: normal
      and dedicated. The main difference is in the mode of operation;
      while a single normal channel may be operating at any given time,
      dedicated channels may operate simultaneously provided there is no
      overlap of source or destination.
      
      Hardware documentation can be found in the A10 User Manual (section
      12), the A13 User Manual (section 14) and the A20 User Manual
      (section 1.12).
      Signed-off-by: Emilio López <emilio@elopez.com.ar>
      Signed-off-by: Hans de Goede <hdegoede@redhat.com>
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  7. 19 Aug, 2015 6 commits
    • dmaengine: mv_xor: optimize performance by using a subset of the XOR channels · 77757291
      Thomas Petazzoni authored
      Due to how async_tx behaves internally, having more XOR channels than
      CPUs is actually hurting performance more than it improves it, because
      memcpy requests get scheduled on a different channel than the XOR
      requests, but async_tx will still wait for the completion of the
      memcpy requests before scheduling the XOR requests.
      
      It is in fact more efficient to have at most one channel per CPU,
      which this patch implements by limiting the number of channels per
      engine and the number of engines registered, depending on the number
      of available CPUs.
      
      Marvell platforms are currently available in one-CPU, two-CPU and
      four-CPU configurations:
      
       - in the configurations with one CPU, only one channel from one
         engine is used.
      
       - in the configurations with two CPUs, only one channel from each
         engine is used (there are two XOR engines).
      
       - in the configurations with four CPUs, both channels of both engines
         are used.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: mv_xor: remove support for dmacap,* DT properties · 6d8f7abd
      Thomas Petazzoni authored
      The only reason why we had dmacap,* properties is because back when
      DMA_MEMSET was supported, only one out of the two channels per engine
      could do a memset operation. But this is something that the driver
      already knows anyway, and since then, the DMA_MEMSET support has been
      removed.
      
      The driver is already well aware of what each channel supports, and
      the one-to-one mapping between Linux-specific implementation details
      (such as dmacap,interrupt enabling DMA_INTERRUPT) and DT properties
      is a good indication that these DT properties are wrong.
      
      Therefore, this commit simply gets rid of these dmacap,* properties,
      they are now ignored, and the driver is responsible for knowing the
      capabilities of the hardware with regard to the dmaengine subsystem
      expectations.
      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: pl330: do not emit loop for 1 byte transfer. · 31495d60
      Michal Suchanek authored
      When only one burst is required, do not emit loop instructions that
      loop exactly once; emit just the body of the loop.
      Signed-off-by: Michal Suchanek <hramrach@gmail.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
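The idea can be modelled with a toy microcode emitter: for a single burst only the body "instruction" is written, with no loop setup or end-of-loop branch around it. The opcodes below are invented placeholders, not real PL330 microcode:

```c
/* Toy emitter: a multi-burst transfer becomes loop-setup, body,
 * loop-end; a single burst becomes just the body. */
static char prog[8];   /* emitted "program" */

static int emit_transfer(unsigned int bursts)
{
    int n = 0;

    if (bursts > 1)
        prog[n++] = 'L';   /* loop-setup placeholder */
    prog[n++] = 'B';       /* loop body: one burst */
    if (bursts > 1)
        prog[n++] = 'E';   /* loop-end branch placeholder */
    return n;              /* program length */
}
```

Skipping the loop scaffolding for single-burst transfers shortens the generated program without changing what the channel actually does.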
    • dmaengine: kill off set_irq_flags usage · 2f27b81c
      Rob Herring authored
      set_irq_flags is ARM specific with custom flags which have genirq
      equivalents. Convert drivers to use the genirq interfaces directly, so we
      can kill off set_irq_flags. The translation of flags is as follows:
      
      IRQF_VALID -> !IRQ_NOREQUEST
      IRQF_PROBE -> !IRQ_NOPROBE
      IRQF_NOAUTOEN -> IRQ_NOAUTOEN
      
      For IRQs managed by an irqdomain, the irqdomain core code handles clearing
      and setting IRQ_NOREQUEST already, so there is no need to do this in
      .map() functions and we can simply remove the set_irq_flags calls. Some
      users also modify IRQ_NOPROBE and this has been maintained although it
      is not clear that is really needed. There appears to be a great deal of
      blind copy and paste of this code.
      Signed-off-by: Rob Herring <robh@kernel.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: dmaengine@vger.kernel.org
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
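The translation table above can be modelled with plain bit operations. The flag values and helper names below are invented for the model; the real genirq helpers are irq_set_status_flags()/irq_clear_status_flags(), which also take the irq number:

```c
/* Model of genirq status flags: "valid" under the old ARM API means
 * the IRQ_NOREQUEST bit is CLEAR, so setting IRQF_VALID translates
 * to clearing IRQ_NOREQUEST (and likewise for the other flags). */
#include <stdbool.h>

enum {                       /* model values, not the kernel's */
    IRQ_NOREQUEST = 1u << 0,
    IRQ_NOPROBE   = 1u << 1,
    IRQ_NOAUTOEN  = 1u << 2,
};

static unsigned int irq_status = IRQ_NOREQUEST | IRQ_NOPROBE;

static void model_clear_status_flags(unsigned int clr)
{
    irq_status &= ~clr;      /* old: set_irq_flags(irq, IRQF_VALID...) */
}

static void model_set_status_flags(unsigned int set)
{
    irq_status |= set;       /* e.g. keep an IRQ disabled at request time */
}

static bool irq_requestable(void)
{
    return !(irq_status & IRQ_NOREQUEST);
}
```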
    • dmaengine: imx-sdma: Add imx6sx platform support · d078cd1b
      Zidan Wang authored
      The new Solo X has more requirements for SDMA events, so it adds an
      event mux in the GPR (General Purpose Register) block to remap most
      of the event numbers. To use SDMA with modules that do not get an
      event number by default, the GPR must be configured first.
      
      Thus this patch adds support for this GPR event-remapping
      configuration to the SDMA driver.
      Signed-off-by: Zidan Wang <zidan.wang@freescale.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: at_xdmac: fix bug in prep_dma_cyclic · e900c30d
      Ludovic Desroches authored
      In cyclic mode, the circular chaining has been broken by the
      introduction of at_xdmac_queue_desc(): AT_XDMAC_MBR_UBC_NDE is set
      for all descriptors except the last one. at_xdmac_queue_desc() has to
      be called one more time to chain the last and the first descriptors.
      Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
      Fixes: 0d0ee751 ("dmaengine: xdmac: Rework the chaining logic")
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
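The fix amounts to closing the descriptor ring with one extra chaining call, roughly as in this simplified model (real descriptors carry much more state than a next pointer):

```c
/* Model of cyclic descriptor chaining: link each descriptor to the
 * next, then close the ring by linking the last back to the first,
 * which is the step the buggy code omitted. */
struct desc {
    struct desc *next;
};

static struct desc ring[4];

static void build_cyclic_chain(struct desc *d, int n)
{
    int i;

    for (i = 0; i < n - 1; i++)
        d[i].next = &d[i + 1];   /* forward chaining, as before the fix */
    d[n - 1].next = &d[0];       /* the previously missing final link */
}
```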
  8. 18 Aug, 2015 7 commits