path: root/drivers/dma
Commit message (Author, Date)
* I/OAT: I/OAT version 3.0 support (Maciej Sosnowski, 2008-07-22)
This patch adds to ioatdma and dca modules support for Intel I/OAT DMA engine ver.3 (aka CB3 device). The main features of I/OAT ver.3 are:
* 8 single channel DMA devices (8 channels total)
* 8 DCA providers, each can accept 2 requesters
* 8-bit TAG values and 32-bit extended APIC IDs
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* I/OAT: tcp_dma_copybreak default value dependent on I/OAT version (Maciej Sosnowski, 2008-07-22)
I/OAT DMA performance tuning showed different optimal values of tcp_dma_copybreak for different I/OAT versions (4096 for 1.2 and 2048 for 2.0). This patch lets the ioatdma driver set the tcp_dma_copybreak value according to these results.
[dan.j.williams@intel.com: remove some ifdefs]
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
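The following is a minimal, hedged sketch of the idea, not the driver code itself: the IOAT_VER_* constants, the 'version' field, and the helper name are assumptions; only the 4096/2048 values come from the patch text.

    /* Hedged sketch: pick the NET DMA copybreak threshold from the detected
     * device version; names other than the tuned values are assumptions. */
    static void ioat_set_tcp_copy_break(struct ioatdma_device *device)
    {
    #ifdef CONFIG_NET_DMA
            switch (device->version) {
            case IOAT_VER_1_2:
                    sysctl_tcp_dma_copybreak = 4096;    /* tuned for I/OAT 1.2 */
                    break;
            case IOAT_VER_2_0:
                    sysctl_tcp_dma_copybreak = 2048;    /* tuned for I/OAT 2.0 */
                    break;
            }
    #endif
    }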
* I/OAT: Add watchdog/reset functionality to ioatdma (Maciej Sosnowski, 2008-07-22)
Due to occasional DMA channel hangs observed for I/OAT versions 1.2 and 2.0, a watchdog has been introduced to check every 2 seconds whether all channels are progressing normally. If a stuck channel is detected, the driver resets it. The reset is done in two parts; the second part is scheduled by the first one to reinitialize the channel after the restart.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop_adma: document how to calculate the minimum descriptor pool size (Dan Williams, 2008-07-17)
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop_adma: directly reclaim descriptors on allocation failure (Dan Williams, 2008-07-17)
Force callers that trigger an "out of descriptors" condition to run the cleanup loop directly. Alleviates the requirement to have soft-irqs enabled when polling for a descriptor in async_xor.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: Driver for the Synopsys DesignWare DMA controller (Haavard Skinnemoen, 2008-07-08)
This adds a driver for the Synopsys DesignWare DMA controller (aka DMACA on AVR32 systems). This DMA controller can be found integrated on the AT32AP7000 chip and is primarily meant for peripheral DMA transfer, but can also be used for memory-to-memory transfers.
This patch is based on a driver from David Brownell which was based on an older version of the DMA Engine framework. It also implements the proposed extensions to the DMA Engine API for slave DMA operations.
The dmatest client shows no problems, but there may still be room for improvement performance-wise. DMA slave transfer performance is definitely "good enough"; reading 100 MiB from an SD card running at ~20 MHz yields ~7.2 MiB/s average transfer rate.
Full documentation for this controller can be found in the Synopsys DW AHB DMAC Databook:
http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf
The controller has lots of implementation options, so it's usually a good idea to check the data sheet of the chip it's integrated on as well. The AT32AP7000 data sheet can be found here:
http://www.atmel.com/dyn/products/datasheets.asp?family_id=682
Changes since v4:
* Use client_count instead of dma_chan_is_in_use()
* Add missing include
* Unmap buffers unless client told us not to
Changes since v3:
* Update to latest DMA engine and DMA slave APIs
* Embed the hw descriptor into the sw descriptor
* Clean up and update MODULE_DESCRIPTION, copyright date, etc.
Changes since v2:
* Dequeue all pending transfers in terminate_all()
* Rename dw_dmac.h -> dw_dmac_regs.h
* Define and use controller-specific dma_slave data
* Fix up a few outdated comments
* Define hardware registers as structs (doesn't generate better code, unfortunately, but it looks nicer)
* Get number of channels from platform_data instead of hardcoding it based on CONFIG_WHATEVER_CPU
* Give slave clients exclusive access to the channel
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: Add slave DMA interface (Haavard Skinnemoen, 2008-07-08)
This patch adds the necessary interfaces to the DMA Engine framework to use functionality found on most embedded DMA controllers: DMA from and to I/O registers with hardware handshaking.
In this context, hardware handshaking means that the peripheral that owns the I/O registers in question is able to tell the DMA controller when more data is available for reading, or when there is room for more data to be written. This usually happens internally on the chip, but these signals may also be exported outside the chip for things like IDE DMA, etc.
A new struct dma_slave is introduced. This contains information that the DMA engine driver needs to set up slave transfers to and from a slave device. Most engines supporting DMA slave transfers will want to extend this structure with controller-specific parameters. This additional information is usually passed from the platform/board code through the client driver.
A "slave" pointer is added to the dma_client struct. This must point to a valid dma_slave structure iff the DMA_SLAVE capability is requested. The DMA engine driver may use this information in its device_alloc_chan_resources hook to configure the DMA controller for slave transfers from and to the given slave device.
A new operation for preparing slave DMA transfers is added to struct dma_device. This takes a scatterlist and returns a single descriptor representing the whole transfer.
Another new operation for terminating all pending transfers is added as well. The latter is needed because there may be errors outside the scope of the DMA Engine framework that may require DMA operations to be terminated prematurely.
DMA Engine drivers may extend the dma_device, dma_chan and/or dma_slave_descriptor structures to allow controller-specific operations. The client driver can detect such extensions by looking at the DMA Engine's struct device, or it can request a specific DMA Engine device by setting the dma_dev field in struct dma_slave. A hedged client-side sketch of this interface follows this entry.
dmaslave interface changes since v4:
* Fix checkpatch errors
* Fix changelog (there are no slave descriptors anymore)
dmaslave interface changes since v3:
* Use dma_data_direction instead of a new enum
* Submit slave transfers as scatterlists
* Remove the DMA slave descriptor struct
dmaslave interface changes since v2:
* Add a dma_dev field to struct dma_slave. If set, the client can only be bound to the DMA controller that corresponds to this device. This allows controller-specific extensions of the dma_slave structure; if the device matches, the controller may safely assume its extensions are present.
* Move reg_width into struct dma_slave as there are currently no users that need to be able to set the width on a per-transfer basis.
dmaslave interface changes since v1:
* Drop the set_direction and set_width descriptor hooks. Pass the direction and width to the prep function instead.
* Declare a dma_slave struct with fixed information about a slave, i.e. register addresses, handshake interfaces and such.
* Add pointer to a dma_slave struct to dma_client. Can be NULL if the DMA_SLAVE capability isn't requested.
* Drop the set_slave device hook since the alloc_chan_resources hook now has enough information to set up the channel for slave transfers.
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
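A minimal client-side sketch of the interface described above, under stated assumptions: only the dma_dev field, the client 'slave' pointer, and the DMA_SLAVE capability are confirmed by the changelog; the register-address fields and the peripheral names are illustrative.

    /* Hedged sketch: a peripheral driver publishing its slave parameters and
     * requesting a DMA_SLAVE-capable channel. Field names tx_reg/rx_reg and the
     * PDC_* constants are assumptions, not the exact API. */
    static struct dma_slave pdc_slave;

    static void pdc_request_dma(struct dma_client *client, struct device *dmac)
    {
            pdc_slave.dma_dev = dmac;              /* optional: bind to one controller */
            pdc_slave.tx_reg  = PDC_TX_FIFO_ADDR;  /* peripheral I/O registers (assumed) */
            pdc_slave.rx_reg  = PDC_RX_FIFO_ADDR;

            client->slave = &pdc_slave;            /* must be valid iff DMA_SLAVE is set */
            dma_cap_set(DMA_SLAVE, client->cap_mask);
            dma_async_client_register(client);
    }

Once a channel is handed to the client, the driver would build the transfer through the new scatterlist prep operation and fall back to the terminate-all operation if an error outside the DMA Engine framework requires aborting.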
* dmaengine: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap (Dan Williams, 2008-07-08)
In some cases client code may need the dma-driver to skip the unmap of source and/or destination buffers. Setting these flags indicates to the driver to skip the unmap step. In this regard async_xor is currently broken in that it allows the destination buffer to be unmapped while an operation is still in progress, i.e. when the number of sources exceeds the hardware channel's maximum (fixed in a subsequent patch).
Acked-by: Saeed Bishara <saeed@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
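A hedged usage sketch: a client that immediately reuses the destination can ask the driver to leave it mapped. Only the SKIP flags come from this patch; combining them with DMA_CTRL_ACK and the xor prep call shown here is illustrative.

    /* Hedged sketch: keep the destination mapped across back-to-back operations. */
    unsigned long flags = DMA_CTRL_ACK | DMA_COMPL_SKIP_DEST_UNMAP;
    struct dma_async_tx_descriptor *tx;

    tx = dev->device_prep_dma_xor(chan, dma_dest, dma_src, src_cnt, len, flags);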
* dmaengine: Add dma_client parameter to device_alloc_chan_resources (Haavard Skinnemoen, 2008-07-08)
A DMA controller capable of doing slave transfers may need to know a few things about the slave when preparing the channel. We don't want to add this information to struct dma_channel since the channel hasn't yet been bound to a client at this point.
Instead, pass a reference to the client requesting the channel to the driver's device_alloc_chan_resources hook so that it can pick the necessary information from the dma_client struct by itself.
[dan.j.williams@intel.com: fixed up fsldma and mv_xor]
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
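A hedged sketch of what a driver-side hook could look like with the new argument; all names here (my_chan, the slave stash, the pool helper) are hypothetical, only the hook's purpose comes from the changelog.

    /* Hedged sketch: pick up slave parameters before allocating descriptors. */
    static int my_alloc_chan_resources(struct dma_chan *chan,
                                       struct dma_client *client)
    {
            struct my_chan *mc = to_my_chan(chan);   /* hypothetical container_of helper */

            if (client->slave)
                    mc->slave = client->slave;       /* keep slave config for later preps */

            return my_alloc_descriptor_pool(mc);     /* returns number of descriptors */
    }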
* dmatest: Simple DMA memcpy test client (Haavard Skinnemoen, 2008-07-08)
This client tests DMA memcpy using various lengths and various offsets into the source and destination buffers. It will initialize both buffers with a repeatable pattern and verify that the DMA engine copies the requested region and nothing more. It will also verify that the bytes aren't swapped around, and that the source buffer isn't modified.
The dmatest module can be configured to test a specific device or a specific channel. It can also test multiple channels at the same time, and it can start multiple threads competing for the same channel.
Changes since v2:
* Support testing multiple channels at the same time
* Support testing with multiple threads competing for the same channel
* Use counting test patterns in order to catch byte ordering issues
Changes since v1:
* Remove extra dashes around "help"
* Remove "default n" from Kconfig
* Turn TEST_BUF_SIZE into a module parameter
* Return DMA_NAK instead of DMA_DUP
* Print unhandled events
* Support testing specific channels and devices
* Move to the end of the Makefile
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: DMA engine driver for Marvell XOR engine (Saeed Bishara, 2008-07-08)
The XOR engine found in Marvell's SoCs and system controllers provides XOR and DMA operation, iSCSI CRC32C calculation, memory initialization, and memory ECC error cleanup operation support.
This driver implements the DMA engine API and supports the following capabilities:
- memcpy
- xor
- memset
The XOR engine can be used by DMA engine clients implemented in the kernel, one of those clients being the RAID module. In that case, I observed a 20% improvement in the raid5 write throughput and a 40% decrease in CPU utilization when doing array construction; those results were obtained on a 5182 running at 500 MHz.
When enabling the NET DMA client, the performance decreased, so meanwhile it is recommended to keep this client off.
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma: fix platform driver hotplug/coldplug (Kay Sievers, 2008-07-08)
Since 43cc71eed1250755986da4c0f9898f9a635cb3bf, the platform modalias is prefixed with "platform:". Add MODULE_ALIAS() to most of the hotpluggable platform drivers, to re-enable auto loading.
Cc: <stable@kernel.org>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: track the number of clients using a channel (Dan Williams, 2008-07-08)
Haavard's dma-slave interface would like to test for exclusive access to a channel. The standard channel refcounting is not sufficient in that it tracks more than just client references; it is also inaccurate, as reference counts are percpu until the channel is removed.
This change also enables a future fix to deallocate resources when a client declines to use a capable channel.
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: remove arch dependency from DMADEVICES (Dan Williams, 2008-07-08)
The dependency is redundant since all drivers set their specific arch dependencies. The NET_DMA option is modified to be enabled only on platforms where it is known to have a positive effect. HAS_DMA is added as an explicit dependency for the DMADEVICES menu.
Acked-by: Adrian Bunk <bunk@kernel.org>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: Couple DMA channels to their physical DMA device (Haavard Skinnemoen, 2008-07-08)
Set the 'parent' field of channel class devices to point to the physical DMA device initialized by the DMA engine driver. This allows drivers to use chan->dev.parent for syncing DMA buffers and adds a 'device' symlink to the real device in /sys/class/dma/dmaXchanY.
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
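For example, with the parent pointer in place a client can sync its buffers against the DMA engine's physical device; the buffer variable names below are illustrative.

    /* Hedged example: sync a completed buffer using the channel's parent device. */
    dma_sync_single_for_cpu(chan->dev.parent, buf_dma, buf_len, DMA_FROM_DEVICE);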
* fsldma: fix incorrect exit path for initialization (Li Yang, 2008-07-08)
Signed-off-by: Li Yang <leoli@freescale.com>
Acked-by: Zhang Wei <zw@zh-kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma: fixup some kzalloc/memset confusions (Christophe Jaillet, 2008-05-20)
1) Remove an explicit memset(.., 0, ...) on a variable allocated with kzalloc (i.e. 'dest').
2) Allocate 'src' with kmalloc instead of kzalloc, as all elements of the 'src' buffer are initialized in a 'for(...)' loop just after.
3) Remove the useless 'sizeof(u8)', which always returns 1, when computing the size of the memory to be allocated.
Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
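A hedged before/after illustration of the three items; the buffer size (PAGE_SIZE) is an assumption, only 'dest', 'src', and the kzalloc/kmalloc/memset changes come from the changelog.

    /* before: */
    dest = kzalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL);
    memset(dest, 0, PAGE_SIZE);             /* redundant: kzalloc already zeroes */
    src  = kzalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL);

    /* after: */
    dest = kzalloc(PAGE_SIZE, GFP_KERNEL);  /* no memset, no sizeof(u8) */
    src  = kmalloc(PAGE_SIZE, GFP_KERNEL);  /* fully initialized by the loop below */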
* DMA engine: typo fixes (Sebastian Siewior, 2008-04-21)
Spelling fixes for dmaengine.[ch]
Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
* dmaengine: ack to flags: make use of the unused bits in the 'ack' field (Dan Williams, 2008-04-17)
'ack' is currently a simple integer that flags whether or not a client is done touching fields in the given descriptor. It is effectively just a single bit of information. Converting this to a flags parameter allows the other bits to be put to use to control completion actions, like dma-unmap, and capture results, like xor-zero-sum == 0.
Changes are one of:
1/ convert all open-coded ->ack manipulations to use async_tx_ack and async_tx_test_ack
2/ set the ack bit at prep time where possible
3/ make drivers store the flags at prep time
4/ add flags to the device_prep_dma_interrupt prototype
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
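A hedged before/after for item 1/ above; free_desc() is a stand-in for whatever the driver does with an acked descriptor, while the two helpers are named in the changelog.

    /* before: open-coded ack manipulation */
    depend_tx->ack = 1;
    if (tx->ack)
            free_desc(tx);

    /* after: use the helpers so 'ack' can live inside the flags word */
    async_tx_ack(depend_tx);
    if (async_tx_test_ack(tx))
            free_desc(tx);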
* iop-adma: remove the workaround for missed interrupts on iop3xx (Dan Williams, 2008-04-17)
This workaround was covering the dependency submission bug in async_tx.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* async_tx: kill ->device_dependency_added (Dan Williams, 2008-04-17)
DMA drivers no longer need to be notified of dependency submission events as async_tx_run_dependencies and async_tx_channel_switch will handle the scheduling and execution of dependent operations.
[sfr@canb.auug.org.au: extend this for fsldma]
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* async_tx: fix multiple dependency submission (Dan Williams, 2008-04-17)
Shrink struct dma_async_tx_descriptor and introduce async_tx_channel_switch to properly inject a channel switch interrupt in the descriptor stream. This simplifies the locking model as drivers no longer need to handle dma_async_tx_descriptor.lock.
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* fsldma: Split the MPC83xx event from MPC85xx and refine irq codes. (Zhang Wei, 2008-04-17)
Split the MPC83xx EOCDI event from the MPC85xx EOLNI event; the MPC83xx event also needs to update the cookie and start the next transfer. The DMA channel irq handler code is refined. The patch is tested on an MPC8377MDS board.
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* fsldma: Remove CONFIG_FSL_DMA_SELFTEST, keep fsl_dma_self_test() running always. (Zhang Wei, 2008-04-17)
Always enable fsl_dma_self_test() to ensure the DMA controller works correctly after the driver is probed.
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* [POWERPC] fsldma: Use compatible binding as spec (Kumar Gala, 2008-03-31)
Documentation/powerpc/booting-without-of.txt specifies the compatibles we should bind to for this driver (elo, eloplus). Use these instead of the extremely specific 'mpc8540' and 'mpc8349' compatibles.
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* fix the broken annotations in fsldma (Al Viro, 2008-03-30)
a) every bitwise declaration will give a unique type; use typedefs.
b) no need to bother with the stuff pointed to by iomem pointers, unless it's accessed directly. noderef will force us to use helpers anyway.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ioat_dca __iomem annotations (Al Viro, 2008-03-30)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* fsldma: Fix the DMA halt when using DMA_INTERRUPT async_tx transfer. (Zhang Wei, 2008-03-18)
The DMA_INTERRUPT async_tx is a NULL transfer, thus the BCR (count register) is 0. When a transfer starts with a byte count of zero, the DMA controller triggers a PE (programming error) event and halts instead of raising a normal interrupt. Add special handling for the PE event and for DMA_INTERRUPT async_tx testing.
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma.c: replace remaining __FUNCTION__ occurrences (Harvey Harrison, 2008-03-13)
__FUNCTION__ is gcc-specific, use __func__
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
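The substitution is mechanical; __func__ is standard C99 while __FUNCTION__ is a gcc extension. The message text below is illustrative.

    dev_dbg(dev, "%s: channel busy\n", __FUNCTION__);   /* before (gcc-specific) */
    dev_dbg(dev, "%s: channel busy\n", __func__);       /* after  (standard C99) */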
* fsldma: Add a completed cookie updated action in DMA finish interrupt. (Zhang Wei, 2008-03-13)
The patch 'fsldma: do not cleanup descriptors in hardirq context' (commit 222ccf9ab838a1ca7163969fabd2cddc10403fb5) moved the descriptor cleanup function to the tasklet, but the completed cookie was not updated. Thus, the DMA controller gets lots of duplicated transfer interrupts. Update the completed cookie in the interrupt handler and keep the other cleanup jobs in the tasklet function.
Tested-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* fsldma: Add device_prep_dma_interrupt support to fsldma.c (Zhang Wei, 2008-03-13)
The DMA_INTERRUPT capability was assigned to fsldma without providing a device_prep_dma_interrupt function. Because of a bug in dmaengine.c the driver still passed the BUG_ON() check. This patch fixes it.
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: Fix a bug about BUG_ON() on DMA engine capability DMA_INTERRUPT. (Zhang Wei, 2008-03-13)
The device->device_prep_dma_interrupt function is used by DMA_INTERRUPT capability, not DMA_ZERO_SUM.
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* fsldma: Fix fsldma.c warning messages when it's compiled under PPC64. (Zhang Wei, 2008-03-13)
There are warning messages reported by Stephen Rothwell with an ARCH=powerpc allmodconfig build:
drivers/dma/fsldma.c: In function 'fsl_dma_prep_memcpy':
drivers/dma/fsldma.c:439: warning: comparison of distinct pointer types lacks a cast
drivers/dma/fsldma.c: In function 'fsl_chan_xfer_ld_queue':
drivers/dma/fsldma.c:584: warning: format '%016llx' expects type 'long long unsigned int', but argument 4 has type 'dma_addr_t'
drivers/dma/fsldma.c: In function 'fsl_dma_chan_do_interrupt':
drivers/dma/fsldma.c:668: warning: format '%x' expects type 'unsigned int', but argument 5 has type 'dma_addr_t'
drivers/dma/fsldma.c:684: warning: format '%016llx' expects type 'long long unsigned int', but argument 4 has type 'dma_addr_t'
drivers/dma/fsldma.c:684: warning: format '%016llx' expects type 'long long unsigned int', but argument 5 has type 'dma_addr_t'
drivers/dma/fsldma.c:701: warning: format '%02x' expects type 'unsigned int', but argument 4 has type 'dma_addr_t'
drivers/dma/fsldma.c: In function 'fsl_dma_self_test':
drivers/dma/fsldma.c:840: warning: format '%d' expects type 'int', but argument 5 has type 'size_t'
drivers/dma/fsldma.c: In function 'of_fsl_dma_probe':
drivers/dma/fsldma.c:1010: warning: format '%08x' expects type 'unsigned int', but argument 5 has type 'resource_size_t'
This patch fixes the above warning messages.
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
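A hedged example of the usual fix for such format warnings: dma_addr_t (and resource_size_t) can be 32 or 64 bits depending on the configuration, so cast explicitly for %llx and use %zu for size_t. Variable names below are illustrative, not the actual fsldma statements.

    dev_dbg(dev, "copied %zu bytes to 0x%016llx\n",
            len, (unsigned long long)dma_dest);   /* explicit cast silences %llx warning */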
* ioat: fix 'ack' handling, driver must ensure that 'ack' is zero (Dan Williams, 2008-03-04)
Initialize 'ack' to zero in case the descriptor has been recycled. Prevents "kernel BUG at crypto/async_tx/async_xor.c:185!"
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: stable@kernel.org
* fsldma: do not cleanup descriptors in hardirq context (Dan Williams, 2008-03-04)
"Cleaning" descriptors involves calling pending callbacks, and clients assume that their callback will only ever happen in softirq context. Delay cleanup to the tasklet.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Zhang Wei <wei.zhang@freescale.com>
* dmaengine: add driver for Freescale MPC85xx DMA controller (Zhang Wei, 2008-03-04)
The driver implements the DMA engine API for the Freescale MPC85xx DMA controller, which can be used by devices in the silicon. The driver supports the Basic mode of the Freescale MPC85xx DMA controller. The supported MPC85xx processors include MPC8540/60, MPC8555, MPC8548, MPC8641 and so on. The MPC83xx (MPC8349, MPC8360) are also supported.
[kamalesh@linux.vnet.ibm.com: build fix]
[dan.j.williams@intel.com: merge mm fixes, rebase on async_tx-2.6.25]
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Ebony Zhu <ebony.zhu@freescale.com>
Acked-by: Kumar Gala <galak@gate.crashing.org>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* async_tx: replace 'int_en' with operation preparation flags (Dan Williams, 2008-02-06)
Pass a full set of flags to drivers' per-operation 'prep' routines. Currently the only flag passed is DMA_PREP_INTERRUPT. The expectation is that arch-specific async_tx_find_channel() implementations can exploit this capability to find the best channel for an operation.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
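A hedged sketch of the resulting call shape: the interrupt request now travels in the prep flags rather than a dedicated int_en argument. The memcpy call and variable names are illustrative.

    tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len,
                                     DMA_PREP_INTERRUPT);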
* async_tx: kill tx_set_src and tx_set_dest methods (Dan Williams, 2008-02-06)
The tx_set_src and tx_set_dest methods were originally implemented to allow an array of addresses to be passed down from async_xor to the dmaengine driver while minimizing stack overhead. Removing these methods allows drivers to have all transaction parameters available at 'prep' time, saves two function pointers in struct dma_async_tx_descriptor, and reduces the number of indirect branches.
A consequence of moving this data to the 'prep' routine is that multi-source routines like async_xor need temporary storage to convert an array of linear addresses into an array of dma addresses. In order to keep the same stack footprint of the previous implementation the input array is reused as storage for the dma addresses. This requires that sizeof(dma_addr_t) be less than or equal to sizeof(void *). As a consequence CONFIG_DMADEVICES now depends on !CONFIG_HIGHMEM64G.
It also requires that drivers be able to make descriptor resources available when the 'prep' routine is polled.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
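A hedged sketch of the storage-reuse trick described above: the struct page array is overwritten in place with bus addresses, which is only safe because Kconfig now guarantees sizeof(dma_addr_t) <= sizeof(void *). The helper name is hypothetical.

    static dma_addr_t *map_xor_sources(struct device *dev, struct page **src_list,
                                       int src_cnt, unsigned int offset, size_t len)
    {
            dma_addr_t *dma_src = (dma_addr_t *)src_list;   /* reuse the input array */
            int i;

            for (i = 0; i < src_cnt; i++)
                    dma_src[i] = dma_map_page(dev, src_list[i], offset, len,
                                              DMA_TO_DEVICE);
            return dma_src;
    }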
* iop-adma: use LIST_HEAD instead of LIST_HEAD_INIT (Denis Cheng, 2008-02-06)
These three list_head variables are all local, so they can use LIST_HEAD.
Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
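The conversion is a one-liner per variable; the variable name below is illustrative.

    struct list_head chain = LIST_HEAD_INIT(chain);   /* before */
    LIST_HEAD(chain);                                 /* after: declares and initializes */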
* DMA: Convert from class_device to device for DMA engine (Tony Jones, 2008-01-24)
Signed-off-by: Tony Jones <tonyj@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* I/OAT: fix null device in call to dev_err() (Shannon Nelson, 2007-12-17)
We can't use the device in a dev_err() after a kzalloc failure or after the kfree, so simplify it to the pdev that was originally passed in.
Cc: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: fixups from code comments (Shannon Nelson, 2007-12-17)
A few fixups from Andrew's code comments:
- removed "static inline" forward-declares
- changed use of min() to min_t()
- removed some unnecessary NULL initializations
- removed a couple of BUG() calls
Fixes this:
drivers/dma/ioat_dma.c: In function `ioat1_tx_submit':
drivers/dma/ioat_dma.c:177: sorry, unimplemented: inlining failed in call to '__ioat1_dma_memcpy_issue_pending': function body not available
drivers/dma/ioat_dma.c:268: sorry, unimplemented: called from here
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: "Williams, Dan J" <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
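A hedged example of the min() to min_t() item: when the two operands have different types, min()'s type check complains, while min_t() casts both operands to the named type. The field and variable names below are illustrative.

    copy = min_t(size_t, len, ioat_chan->xfercap);   /* explicit comparison type */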
* dmaengine: correct invalid assumptions in the Kconfig text (Haavard Skinnemoen, 2007-11-29)
This patch corrects recently changed (and now invalid) Kconfig descriptions for the DMA engine framework:
- Non-Intel(R) hardware also has DMA engines;
- DMA is used for more than memcpy and RAID offloading.
In fact, on most platforms memcpy and RAID aren't factors, and DMA exists so that peripherals can transfer data to/from memory while the CPU does other work.
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: Add support for version 2 of ioatdma device (Shannon Nelson, 2007-11-14)
Add support for version 2 of the ioatdma device. This device handles the descriptor chain and DCA services slightly differently:
- Instead of moving the dma descriptors between a busy and an idle chain, this new version uses a single circular chain so that we don't have to rewrite the next_descriptor pointers as we add new requests, and the device doesn't need to re-read the last descriptor.
- The new device has the DCA tags defined internally instead of needing them defined statically.
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: "Williams, Dan J" <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* dmaengine: fix broken device refcounting (Haavard Skinnemoen, 2007-11-14)
When a DMA device is unregistered, its reference count is decremented twice for each channel: once in dma_class_dev_release() and once in dma_chan_cleanup(). This may result in the DMA device driver's remove() function completing before all channels have been cleaned up, causing lots of use-after-free fun.
Fix it by incrementing the device's reference count twice for each channel during registration.
[dan.j.williams@intel.com: kill unnecessary client refcounting]
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Remove bogus default y for DMAR and NET_DMA (Andi Kleen, 2007-10-30)
No reason I can think of for making them default y. Most people don't have the hardware, and with default y they just pollute lots of configs during make oldconfig.
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jeff Garzik <jeff@garzik.org>
Acked-by: "Nelson, Shannon" <shannon.nelson@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: Add completion callback for async_tx interface use (Shannon Nelson, 2007-10-18)
The async_tx interface includes a completion callback. This adds support for using that callback, including using interrupts on completion.
[akpm@linux-foundation.org: various fixes]
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
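A hedged sketch of how an async_tx client typically attaches the completion callback; the function and variable names, and the wait-for-completion pattern, are illustrative rather than driver code.

    static DECLARE_COMPLETION(copy_done);

    static void my_copy_callback(void *param)
    {
            complete(param);                     /* wake up the submitter below */
    }

    static void submit_and_wait(struct dma_async_tx_descriptor *tx)
    {
            tx->callback = my_copy_callback;     /* invoked when the copy completes */
            tx->callback_param = &copy_done;
            tx->tx_submit(tx);
            wait_for_completion(&copy_done);
    }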
* I/OAT: Tighten descriptor setup performance (Shannon Nelson, 2007-10-18)
The change to the async_tx interface cost this driver some performance by spreading the descriptor setup across several functions, including multiple passes over the new descriptor chain. Here we bring the work back into one primary function and only do one pass.
[akpm@linux-foundation.org: cleanups, uninline]
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: clean up error handling and some print messages (Shannon Nelson, 2007-10-18)
Make better use of dev_err(), and catch an error where the transaction creation might fail.
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: clean up of dca provider start and stop (Shannon Nelson, 2007-10-18)
Don't start ioat_dca if ioat_dma didn't start, and then stop ioat_dca before stopping ioat_dma. Since the ioat_dma side does the pci device work, this takes care of ioat_dca trying to use a bad device reference.
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>