path: root/drivers/dma
Commit message    Author    Age
...
| | * | | dmaengine: pl330: Acquire dmac's spinlock in pl330_tx_status    Hsin-Yu Chao    2016-09-09
    There is a race when accessing the dmac thread in pl330_tx_status:
    pl330_update may be handling an active request at the same time and
    changing the status of descriptors. This could cause an invalid
    transferred count from a BUSY descriptor to be added to the residue.
    Fix the bug by taking the dmac's spinlock in pl330_tx_status to protect
    the thread resources from changing underneath us. Note that the nesting
    order of the dmac's and dma_chan's spinlocks is consistent with the rest
    of the driver: dma_chan first and then dmac, so no deadlock scenario is
    introduced.

    Signed-off-by: Hsin-Yu Chao <hychao@chromium.org>
    Reviewed-by: Guenter Roeck <groeck@chromium.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

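    Illustrative sketch (not part of the commit) of the lock nesting
    described above, using generic lock pointers rather than the driver's
    actual structures:

        #include <linux/spinlock.h>

        /* Locking order used by the driver: channel lock first, then dmac lock. */
        static void sketch_tx_status_locking(spinlock_t *chan_lock,
                                             spinlock_t *dmac_lock)
        {
                unsigned long flags;

                spin_lock_irqsave(chan_lock, flags);    /* dma_chan's lock */
                spin_lock(dmac_lock);                   /* then the dmac's lock */

                /* ... walk the work list and add up transferred counts ... */

                spin_unlock(dmac_lock);
                spin_unlock_irqrestore(chan_lock, flags);
        }
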
| | * | | dmaengine: pl330: fix residual for non-running BUSY descriptors    Stephen Barber    2016-09-09
| | |/ /
    Only one descriptor in the work list should be running at any given
    time, but it's possible to have an enqueued BUSY descriptor that has not
    yet transferred any data, or for a BUSY descriptor to linger briefly
    before transitioning to DONE. These cases should be handled to keep
    residual calculations consistent even with the non-running BUSY
    descriptors in the work list.

    Signed-off-by: Stephen Barber <smbarber@chromium.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| * | | Merge branch 'topic/omap' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: omap-dma: Enable burst and data pack for SG    Misael Lopez Cruz    2016-10-01
    Enable the burst and data pack modes for the scatter-gather in order to
    improve the throughput of the data transfers. The improvement has been
    verified with MMC HS200 mode in the DRA72 EVM using the iozone tool to
    compare the read throughput (in kB/s) with and without burst/pack for
    different reclens (in kB):

                               With
    reclen    Baseline    sDMA burst/pack
    ------    --------    ---------------
        64       46568              50820
       128       57564              63413
       256       65634              74937
       512       72427              83483
      1024       74563              84504
      2048       76265              86079
      4096       78045              87335
      8192       78989              88154
     16384       81265              91034

    Signed-off-by: Misael Lopez Cruz <misael.lopez@ti.com>
    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: omap-dma: Correct type2 descriptor's member types    Peter Ujfalusi    2016-09-14
    The type of CDEI, CSEI, CDFI and CSFI is signed. This has not caused
    issues so far because we only use unsigned values.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: omap-dma: Support for LinkedList transfer of slave_sg    Peter Ujfalusi    2016-08-10
    The sDMA in OMAP3630 and newer SoCs has support for LinkedList
    transfers. When the LinkedList or Descriptor load feature is present we
    can create a descriptor for each SG element and program the sDMA to walk
    through the list of descriptors, instead of the current approach of sDMA
    stop, sDMA reconfiguration and sDMA start after each SG transfer.

    By using LinkedList transfers in sDMA the number of DMA interrupts
    decreases dramatically. Booting up the board with the filesystem on an
    SD card, for example:

    W/o LinkedList support:
    27:       4436          0      WUGEN  13  Level     omap-dma-engine

    Same board/filesystem with this patch:
    27:       1027          0      WUGEN  13  Level     omap-dma-engine

    Or copying files from SD card to eMMC (2.1G /usr/ 232001):
    w/o LinkedList we see ~761069 DMA interrupts; with LinkedList support it
    is down to ~269314 DMA interrupts.

    With the decreased DMA interrupt count the CPU load drops significantly
    as well.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: omap-dma: Use pointer to omap_sg in slave_sg setup's loop    Peter Ujfalusi    2016-08-10
    Instead of accessing the array via index, take the pointer first and use
    it to set up the omap_sg struct.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: omap-dma: Add more debug information when freeing channel    Peter Ujfalusi    2016-08-10
    Print the same information regarding the sDMA channel that the driver
    prints when allocating the channel resources.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: omap-dma: Dynamically allocate memory for lch_map    Peter Ujfalusi    2016-08-10
    On OMAP1 platforms we do not have 32 channels available. Allocate the
    lch_map based on the number of available channels. This way we will not
    expose more channels than are available on the platform.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

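    Illustrative sketch (not from the commit) of sizing a channel map from
    the runtime channel count; the function and parameter names are
    hypothetical:

        #include <linux/device.h>
        #include <linux/slab.h>

        /* Allocate one map slot per channel actually present, not a fixed 32. */
        static void **sketch_alloc_lch_map(struct device *dev,
                                           unsigned int nr_channels)
        {
                return devm_kcalloc(dev, nr_channels, sizeof(void *),
                                    GFP_KERNEL);
        }
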
| | * | | dmaengine: omap-dma: Simplify omap_dma_callback    Peter Ujfalusi    2016-08-10
    Flatten the indentation level of the function, which gives a better view
    of the cases we handle here.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: omap-dma: Simplify omap_dma_start_sg parameter list    Peter Ujfalusi    2016-08-10
| | |/ /
    We can drop the (sg)idx parameter of omap_dma_start_sg() and increment
    the sgidx inside the function itself.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| * | | Merge branch 'topic/no_irq' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: ipu: remove bogus NO_IRQ reference    Arnd Bergmann    2016-09-05
    A workaround for a warning introduced a use of the NO_IRQ macro that
    should have been gone for a long time. It is clear from the code that
    the value cannot actually be used, but apparently there was a
    configuration at some point that caused a warning, so instead of just
    reverting that patch, this rearranges the code in a way that the warning
    cannot reappear.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Fixes: 6ef41cf6f721 ("dmaengine :ipu: change ipu_irq_handler() to remove compile warning")
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: sirf: fix irq number error check    Arnd Bergmann    2016-09-05
    irq_of_parse_and_map() returns 0 on error, not NO_IRQ, so the failure
    condition can never be met. Change the comparison to check for zero
    instead.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

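    Illustrative sketch (not the driver's actual code) of the corrected
    check, with a hypothetical helper name:

        #include <linux/errno.h>
        #include <linux/of_irq.h>

        static int sketch_get_irq(struct device_node *dn)
        {
                unsigned int irq = irq_of_parse_and_map(dn, 0);

                /* 0 means "no mapping"; NO_IRQ is not involved here. */
                if (!irq)
                        return -EINVAL;

                return irq;
        }
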
| | * | | dmaengine: mxs: remove NO_IRQ check    Arnd Bergmann    2016-09-05
    The mxs_chan->chan_irq variable is guaranteed to never be NO_IRQ, as it
    gets assigned the result of platform_get_irq(), which returns either a
    valid positive interrupt number or a negative failure code that leads to
    the channel not being used. This removes the redundant check,
    eliminating one more instance of NO_IRQ.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: moxart: remove NO_IRQ    Arnd Bergmann    2016-09-05
| | |/ /
    The use of NO_IRQ is incorrect here and should never have been there, as
    irq_of_parse_and_map() returns '0' on failure, not NO_IRQ.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| * | | Merge branch 'topic/mv_xor' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: mv_xor: Add support for IO (PCIe) src/dst areas    Stefan Roese    2016-09-15
| | |/ /
    To enable access to a specific area, the MVEBU XOR controller needs to
    have this area enabled / mapped via an address window. Right now, only
    the DRAM memory area is enabled via such memory windows, so using this
    driver to DMA to / from e.g. a PCIe memory region is currently not
    supported.

    This patch adds support for such PCIe / IO regions by checking whether
    the src / dst address is located in an IO memory area, as opposed to
    DRAM. This is done using the newly introduced MBus function
    mvebu_mbus_get_io_win_info(). If the src / dst address is located in
    such an IO area, a new address window is created in the XOR DMA
    controller, enabling the controller to access this area.

    Signed-off-by: Stefan Roese <sr@denx.de>
    Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Cc: Marcin Wojtas <mw@semihalf.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Andrew Lunn <andrew@lunn.ch>
    Cc: Vinod Koul <vinod.koul@intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| * | | Merge branch 'topic/k3' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: k3dma: use correct format string for debug output    Arnd Bergmann    2016-09-07
    The newly added k3_dma_prep_dma_cyclic function has some debug output
    that uses incorrect typecasts, some of which cause a warning like:

    drivers/dma/k3dma.c: In function 'k3_dma_prep_dma_cyclic':
    drivers/dma/k3dma.c:589:671: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]

    In general, we have to print 'dma_addr_t' values using the special
    '%pad' format to get the correct behavior on kernels that have a 64-bit
    dma_addr_t type but 32-bit pointers. Similarly, printing size_t values
    should be done using the %z modifier to get the correct behavior on
    64-bit kernels.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Fixes: a7e08fa6cc78 ("k3dma: Add cyclic mode for audio")
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

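    Illustrative sketch (not from the commit) of the format specifiers
    described above:

        #include <linux/device.h>
        #include <linux/types.h>

        static void sketch_debug_print(struct device *dev, dma_addr_t buf,
                                       size_t len)
        {
                /*
                 * %pad takes a pointer to the dma_addr_t and prints it
                 * correctly even when dma_addr_t is 64-bit on a 32-bit
                 * kernel; %zu matches size_t.
                 */
                dev_dbg(dev, "cyclic buffer %pad, length %zu bytes\n",
                        &buf, len);
        }
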
| | * | | Kconfig: Allow k3dma driver to be selected for more than HISI3xx platforms    John Stultz    2016-08-31
    This allows the k3dma driver to be selected on HiKey via the ARCH_HISI
    dependency.

    Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
    Cc: Jingoo Han <jg1.han@samsung.com>
    Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Andy Green <andy@warmcat.com>
    Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
    Signed-off-by: John Stultz <john.stultz@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | k3dma: Add cyclic mode for audio    Andy Green    2016-08-31
    Currently the k3dma driver doesn't offer the cyclic mode necessary for
    handling audio. This patch adds it.

    Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
    Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Andy Green <andy@warmcat.com>
    Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
    Signed-off-by: Andy Green <andy.green@linaro.org>
    [jstultz: Forward ported to mainline, removed a few bits of logic that
    didn't seem to have much effect]
    Signed-off-by: John Stultz <john.stultz@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | k3dma: Fix memory handling in preparation for cyclic mode    John Stultz    2016-08-31
    With cyclic mode, the shared virt-dma logic doesn't actually manage the
    descriptor state, nor the calling of the descriptor free callback. This
    results in leaking a desc structure every time we start an audio
    transfer. Thus we must manage it ourselves.

    The k3dma driver already keeps track of the active and finished
    descriptors via the ds_run and ds_done pointers, so clean up how we
    handle those two values, and when we tear down everything in
    terminate_all, call free_desc on the ds_run and ds_done pointers if they
    are not null.

    NOTE: HiKey doesn't use the non-cyclic dma modes, so I haven't been able
    to test those modes. But with this patch we no longer leak the desc
    structures.

    Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
    Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Andy Green <andy@warmcat.com>
    Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
    Signed-off-by: John Stultz <john.stultz@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | k3dma: Fix occasional DMA ERR issue by using proper dma api    John Stultz    2016-08-31
    After lots of debugging of an occasional DMA ERR issue, I realized that
    the desc structures which we point the dma hardware at are being
    allocated out of regular memory. This means that when we fill the desc
    structures, the data doesn't always get flushed out to memory by the
    time we start the dma transfer, resulting in the dma engine reading some
    null values and raising a DMA ERR on the first irq.

    Thus, this patch adopts a mechanism similar to the zx296702_dma driver
    of allocating the desc structures from a dma pool, so the memory caching
    rules are properly set to avoid this issue.

    Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
    Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Andy Green <andy@warmcat.com>
    Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
    Signed-off-by: John Stultz <john.stultz@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

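    Illustrative sketch (not the driver's actual code) of allocating
    hardware descriptors from a DMA pool; the structure and function names
    are hypothetical:

        #include <linux/dmapool.h>
        #include <linux/types.h>

        /* Hypothetical hardware descriptor layout. */
        struct sketch_hw_desc {
                u32 lli[4];
        };

        /* Created once, e.g. at probe time. */
        static struct dma_pool *sketch_create_pool(struct device *dev)
        {
                return dma_pool_create("sketch-desc", dev,
                                       sizeof(struct sketch_hw_desc), 32, 0);
        }

        /* Descriptors the device fetches come from the pool, not kmalloc(). */
        static struct sketch_hw_desc *sketch_desc_get(struct dma_pool *pool,
                                                      dma_addr_t *phys)
        {
                return dma_pool_zalloc(pool, GFP_NOWAIT, phys);
        }
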
| | * | | k3dma: Fix "nobody cared" message seen on any errorAndy Green2016-08-31
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | As it was before, as soon as the DMAC IP felt there was an error he would return IRQ_NONE since no actual transfer had completed. After spinning on that for 100K interrupts, Linux yanks the IRQ with a "nobody cared" error. This patch lets it handle the interrupt and keep the IRQ alive. Cc: Zhangfei Gao <zhangfei.gao@linaro.org> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: Maxime Ripard <maxime.ripard@free-electrons.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Mark Brown <broonie@kernel.org> Cc: Andy Green <andy@warmcat.com> Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org> Signed-off-by: Andy Green <andy.green@linaro.org> [jstultz: Forward ported to mainline] Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | | k3dma: Fix dma err offsets    Andy Green    2016-08-31
    The offsets for ERR1 and ERR2 are actually wrong, which is why an error
    can never be cleared.

    Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
    Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Andy Green <andy@warmcat.com>
    Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
    Signed-off-by: Andy Green <andy.green@linaro.org>
    [jstultz: Forward ported to mainline]
    Signed-off-by: John Stultz <john.stultz@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | k3dma: Fix hisi burst clipping    Andy Green    2016-08-31
| | |/ /
    The max burst length is a 4-bit field, but at the moment it is clipped
    with a 5-bit constant. Reduce it to what the field can actually express.

    Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
    Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Andy Green <andy@warmcat.com>
    Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
    Signed-off-by: Andy Green <andy.green@linaro.org>
    [jstultz: Forward ported to mainline]
    Signed-off-by: John Stultz <john.stultz@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

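    Illustrative sketch (not from the commit) of clipping a requested burst
    to a 4-bit field; the mask name is hypothetical:

        #include <linux/kernel.h>

        #define SKETCH_BURST_MAX  0xf   /* largest value a 4-bit field holds */

        static u32 sketch_clip_burst(u32 maxburst)
        {
                /* Clip to 0xf, not a 5-bit 0x1f that would overflow the field. */
                return min_t(u32, maxburst, SKETCH_BURST_MAX);
        }
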
| * | | Merge branch 'topic/iommu' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: rcar-dmac: add iommu support for slave transfers    Niklas Söderlund    2016-09-26
    Enable slave transfers to a device behind an IPMMU by mapping the slave
    addresses using the dma-mapping API.

    Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

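    A minimal sketch of the idea (not the driver's actual code), assuming
    the dma-mapping API's dma_map_resource() helper and hypothetical names:

        #include <linux/dma-mapping.h>
        #include <linux/errno.h>

        /*
         * Map a slave's FIFO (a physical MMIO address) through the IOMMU so
         * the DMAC is handed a bus address it can actually reach.
         */
        static int sketch_map_slave_addr(struct device *dmac_dev,
                                         phys_addr_t slave_phys, size_t size,
                                         enum dma_data_direction dir,
                                         dma_addr_t *slave_dma)
        {
                *slave_dma = dma_map_resource(dmac_dev, slave_phys, size,
                                              dir, 0);
                if (dma_mapping_error(dmac_dev, *slave_dma))
                        return -EIO;

                return 0;
        }
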
| | * | | dmaengine: rcar-dmac: group slave configuration    Niklas Söderlund    2016-09-26
| | |/ /
    Group the slave address and transfer size in their own structs for
    source and destination. This is in preparation for hooking up the
    dma-mapping API to the slave addresses.

    Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
    Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| * | | Merge branch 'topic/ioatdma' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: ioatdma: fix uninitialized array usage    Dave Jiang    2016-08-07
| | |/ /
    Static analysis showed that an uninitialized array is being used in a
    comparison. At line 850, when a dma_mapping_error() occurs, the code
    jumps to dma_unmap. At this point dma_srcs has not been initialized, yet
    the code after the dma_unmap label checks dma_srcs in a comparison and
    is thus comparing against random garbage in the array. Given that
    mapping dest_dma is the first instance of mapping DMA memory, a failure
    there leaves nothing to be cleaned up, so the code should jump to the
    free_resources label instead.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| * | | Merge branch 'topic/imx' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: imx-sdma: Add i.MX7 support    Fabio Estevam    2016-08-31
    Allow i.MX7 to work with the imx-sdma driver.

    Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
    Tested-by: Stefan Agner <stefan@agner.ch>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: imx-sdma: (trivial) fix a typo    Martin Kaiser    2016-08-10
| | |/ /
    Signed-off-by: Martin Kaiser <martin@kaiser.cx>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| * | | Merge branch 'topic/hsu' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
| | * | | dmaengine: hsu: refactor hsu_dma_do_irq() to return int    Andy Shevchenko    2016-09-15
| | |/ /
    Since we have the nice IRQ_RETVAL() macro, use it to convert the
    "interrupt handled" flag from int to irqreturn_t. The rationale for
    doing this is: a) it implicitly marks hsu_dma_do_irq() as an auxiliary
    function that can't be used as an interrupt handler directly, and b) it
    aligns with the serial driver, which uses serial8250_handle_irq() that
    returns a plain int by design.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

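    Illustrative sketch (not the driver's actual code) of wrapping such an
    auxiliary handler with IRQ_RETVAL(); the helper here is a hypothetical
    stub:

        #include <linux/interrupt.h>

        /* Stub auxiliary handler: returns 1 if it serviced the device, else 0. */
        static int sketch_do_irq(void *ctx)
        {
                /* ... read status registers, service channels ... */
                return 0;
        }

        static irqreturn_t sketch_irq_handler(int irq, void *dev_id)
        {
                int handled = sketch_do_irq(dev_id);

                /* IRQ_RETVAL() maps non-zero to IRQ_HANDLED, zero to IRQ_NONE. */
                return IRQ_RETVAL(handled);
        }
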
| * | | Merge branch 'topic/err_reporting' into for-linus    Vinod Koul    2016-10-02
| |\ \ \
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

    Conflicts:
        drivers/dma/cppi41.c

| | * | | dmaengine: qcom_hidma: add error reporting for tx_status    Sinan Kaya    2016-08-31
    The HIDMA driver is capable of error detection, but the error was not
    being passed back to the client when the tx_status API is called. Change
    the error handling behavior to follow this order:

    1. dmaengine asserts the error interrupt
    2. Driver receives it and marks the txn as errored
    3. Driver completes the txn and informs the client; no further
       submissions. Drop the locks before calling the callback, as
       subsequent processing by the client may be in the callback thread.
    4. Client invokes status and the error can be returned
    5. On error, client calls terminate_all. The channel can be reset and
       all descriptors in the active, pending and completed lists freed
    6. Client prepares a new txn and so on

    As part of this work, get rid of the reset in the interrupt handler when
    an error happens and the HW is put into the disabled state. The only way
    to recover is for the client to terminate the channel.

    Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

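    A hypothetical client-side flow matching the sequence above (generic
    dmaengine calls, not HIDMA-specific code):

        #include <linux/dmaengine.h>
        #include <linux/errno.h>

        static int sketch_check_and_recover(struct dma_chan *chan,
                                            dma_cookie_t cookie)
        {
                struct dma_tx_state state;
                enum dma_status status;

                /* Step 4: query the transaction's status. */
                status = dmaengine_tx_status(chan, cookie, &state);
                if (status == DMA_ERROR) {
                        /* Step 5: reset the channel, dropping queued descriptors. */
                        dmaengine_terminate_sync(chan);
                        return -EIO;
                }

                return status == DMA_COMPLETE ? 0 : -EINPROGRESS;
        }
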
| | * | | dmaengine: qcom_hidma: report transfer errors with new interface    Sinan Kaya    2016-08-31
    Pass DMA errors to the client via the result argument. The HW only
    reports a generic error when something goes wrong, which is why
    DMA_TRANS_ABORTED is used all the time.

    Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | | dmaengine: qcom_hidma: release the descriptor before the callback    Sinan Kaya    2016-08-31
| | | |/
| | |/|
    There is a race condition between the data transfer callback and the
    descriptor free code. The callback routine may decide to clear the
    resources even though the descriptor has not yet been freed. Instead of
    calling the callback first and then releasing the memory, this code
    changes the order to return the descriptor back to the free pool first
    and then call the user-provided callback.

    Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: sh_shdma-base: convert callback to helper function    Dave Jiang    2016-08-07
    This is in preparation for moving to a callback that provides results
    for the transaction. The conversion maintains the current behavior, and
    the driver must convert to the new callback mechanism at a later time in
    order to receive results.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: xilinx_dma: convert callback to helper function    Vinod Koul    2016-08-07
    Move the xilinx driver to the new dmaengine callback.

    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: ioatdma: add error strings to chanerr output    Dave Jiang    2016-08-07
    Provide a mechanism to translate CHANERR bits to English strings in
    order to allow users to report more concise errors.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: ioatdma: Add error handling to ioat driver    Dave Jiang    2016-08-07
    Add error handling to the ioatdma driver so that when a read/write error
    occurs, the error results are reported back and all the remaining
    descriptors are aborted. This utilizes the new dmaengine callback
    function that allows reporting of results.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: add support to provide error result from a DMA transaction    Dave Jiang    2016-08-07
    Add a new callback that provides the error result for a transaction. The
    result is allocated on the stack, and the callback should create a copy
    if it wishes to retain the information after exiting. The result
    parameter is now defined and takes over the dummy void pointer we placed
    in the helper functions previously. dmaengine drivers should start
    converting to the new "callback_result" callback in order to receive
    transaction results.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

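    A minimal client-side sketch of the "callback_result" interface,
    assuming the struct dmaengine_result layout introduced by this series
    (result code plus residue); the handler name is hypothetical:

        #include <linux/dmaengine.h>
        #include <linux/printk.h>

        /* Completion handler receiving the per-transaction result. */
        static void sketch_xfer_done(void *param,
                                     const struct dmaengine_result *result)
        {
                /*
                 * 'result' lives on the issuer's stack; copy out anything
                 * needed before returning.
                 */
                if (result->result != DMA_TRANS_NOERROR)
                        pr_err("transfer failed (%d), residue %u\n",
                               (int)result->result, result->residue);
        }

        /* Wiring it up on a prepared descriptor (tx is a dma_async_tx_descriptor *):
         *   tx->callback_result = sketch_xfer_done;
         *   tx->callback_param = ctx;
         *   dmaengine_submit(tx);
         */
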
| | * | dmaengine: xgene-dma: convert callback to helper function    Dave Jiang    2016-08-07
    This is in preparation for moving to a callback that provides results
    for the transaction. The conversion maintains the current behavior, and
    the driver must convert to the new callback mechanism at a later time in
    order to receive results.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: virt-dma: convert callback to helper function    Dave Jiang    2016-08-07
    This is in preparation for moving to a callback that provides results
    for the transaction. The conversion maintains the current behavior, and
    the driver must convert to the new callback mechanism at a later time in
    order to receive results.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: txx9dmac: convert callback to helper function    Dave Jiang    2016-08-07
    This is in preparation for moving to a callback that provides results
    for the transaction. The conversion maintains the current behavior, and
    the driver must convert to the new callback mechanism at a later time in
    order to receive results.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>

| | * | dmaengine: timb_dma: convert callback to helper function    Dave Jiang    2016-08-07
    This is in preparation for moving to a callback that provides results
    for the transaction. The conversion maintains the current behavior, and
    the driver must convert to the new callback mechanism at a later time in
    order to receive results.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
