path: root/drivers/infiniband/hw/mthca/mthca_cq.c
Commit log (most recent first): subject (author, date)
* RDMA/core: Add memory management extensions support (Steve Wise, 2008-07-15)

This patch adds support for the IB "base memory management extensions" (BMME) and the equivalent iWARP operations (which the iWARP verbs mandate all devices must implement). The new operations are:

- Allocate an ib_mr for use in fast register work requests.
- Allocate/free physical buffer lists for use in fast register work requests. This allows device drivers to allocate this memory as needed for use in posting send requests (eg via dma_alloc_coherent).
- New send queue work requests:
  * send with remote invalidate
  * fast register memory region
  * local invalidate memory region
  * RDMA read with invalidate local memory region (iWARP only)

Consumer interface details:

- A new device capability flag IB_DEVICE_MEM_MGT_EXTENSIONS is added to indicate device support for these features.
- New send work request opcodes IB_WR_FAST_REG_MR, IB_WR_LOCAL_INV and IB_WR_RDMA_READ_WITH_INV are added.
- A new consumer API function, ib_alloc_mr(), is added to allocate fast register memory regions.
- New consumer API functions, ib_alloc_fast_reg_page_list() and ib_free_fast_reg_page_list(), are added to allocate and free device-specific memory for fast registration page lists.
- A new consumer API function, ib_update_fast_reg_key(), is added to allow the key portion of the R_Key and L_Key of a fast registration MR to be updated. Consumers call this if desired before posting an IB_WR_FAST_REG_MR work request.

Consumers can use this as follows:

- MR is allocated with ib_alloc_mr().
- Page list memory is allocated with ib_alloc_fast_reg_page_list().
- MR R_Key/L_Key "key" field is updated with ib_update_fast_reg_key().
- MR is made VALID and bound to a specific page list via ib_post_send(IB_WR_FAST_REG_MR).
- MR is made INVALID via ib_post_send(IB_WR_LOCAL_INV), ib_post_send(IB_WR_RDMA_READ_WITH_INV) or an incoming send with invalidate operation.
- MR is deallocated with ib_dereg_mr().
- Page lists are deallocated via ib_free_fast_reg_page_list().

Applications can allocate a fast register MR once, and then repeatedly bind the MR to different physical block lists (PBLs) by posting work requests to a send queue (SQ). For each outstanding MR-to-PBL binding in the SQ pipe, a fast_reg_page_list needs to be allocated (the fast_reg_page_list is owned by the low-level driver from the time the consumer posts a work request until the request completes). Thus pipelining can be achieved while still allowing device-specific page_list processing.

The 32-bit fast register memory key/STag is composed of a 24-bit index and an 8-bit key. The application can change the key each time it fast registers, thus allowing more control over the peer's use of the key/STag (ie it can effectively be changed each time the rkey is rebound to a page list).

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

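For illustration, a minimal consumer-side sketch of the flow described above. The allocation and invalidation calls are those named in this log entry; the work-request field names (wr.fast_reg.*, ex.invalidate_rkey) and the argument lists are assumptions based on the verbs API of this era, not text from this commit.

    static int fastreg_example(struct ib_pd *pd, struct ib_qp *qp, int npages)
    {
        struct ib_mr *mr;
        struct ib_fast_reg_page_list *pl;
        struct ib_send_wr wr, *bad_wr;
        u8 key = 0;

        /* One-time setup: a fast-reg MR plus a device-specific page list. */
        mr = ib_alloc_mr(pd);                        /* as named above; real argument list may differ */
        pl = ib_alloc_fast_reg_page_list(pd->device, npages);

        /* Rebind cycle: refresh the 8-bit key, then post a fast-reg WR. */
        ib_update_fast_reg_key(mr, ++key);
        memset(&wr, 0, sizeof wr);
        wr.opcode                = IB_WR_FAST_REG_MR;
        wr.wr.fast_reg.rkey      = mr->rkey;         /* assumed field names */
        wr.wr.fast_reg.page_list = pl;
        ib_post_send(qp, &wr, &bad_wr);              /* MR becomes VALID */

        /* Later: invalidate the registration, then clean up. */
        memset(&wr, 0, sizeof wr);
        wr.opcode             = IB_WR_LOCAL_INV;
        wr.ex.invalidate_rkey = mr->rkey;            /* assumed field name */
        ib_post_send(qp, &wr, &bad_wr);              /* MR becomes INVALID */

        ib_free_fast_reg_page_list(pl);
        ib_dereg_mr(mr);
        return 0;
    }
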
* RDMA: Remove subversion $Id tags (Roland Dreier, 2008-07-15)

They don't get updated by git and so they're worse than useless.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Add IPoIB checksum offload support (Eli Cohen, 2008-04-17)

Arbel and Sinai devices support checksum generation and verification of TCP and UDP packets for UD IPoIB messages. This patch checks if the HCA supports this and sets the IB_DEVICE_UD_IP_CSUM capability flag if it does. It implements support for handling the IB_SEND_IP_CSUM send flag and setting the csum_ok field in receive work completions.

Signed-off-by: Eli Cohen <eli@mellnaox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Convert to use be16_add_cpu() (Marcin Slusarz, 2008-02-13)

Replace:

    big_endian_variable = cpu_to_beX(beX_to_cpu(big_endian_variable) +
                                     expression_in_cpu_byteorder);

with:

    beX_add_cpu(&big_endian_variable, expression_in_cpu_byteorder);

Generated with a semantic patch.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

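As a concrete instance of the pattern, with a hypothetical __be16 counter field:

    static void be16_add_cpu_example(void)
    {
        __be16 db_cnt = cpu_to_be16(7);   /* hypothetical big-endian counter */
        int nfreed = 3;

        /* before: open-coded byte-order round trip */
        db_cnt = cpu_to_be16(be16_to_cpu(db_cnt) + nfreed);

        /* after: the helper hides the conversion */
        be16_add_cpu(&db_cnt, nfreed);
    }
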
* IB/mthca: Avoid alignment traps when writing doorbells (Roland Dreier, 2007-10-15)

Architectures such as ia64 see alignment traps when doing a 64-bit read from __be32 doorbell[2] arrays to do doorbell writes in mthca_write64(). Fix this by just passing the two halves of the doorbell value into mthca_write64(). This actually improves the generated code by allowing the compiler to see what's going on better.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

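A minimal sketch of the idea on a 64-bit build, where a single 64-bit MMIO store is available (the in-tree helper also has a 32-bit fallback that takes a doorbell spinlock; treat the exact signature as an assumption):

    static inline void mthca_write64(u32 hi, u32 lo, void __iomem *dest)
    {
        /* Build the big-endian doorbell from two 32-bit halves instead of
         * doing a 64-bit load from a __be32 doorbell[2] array. */
        __raw_writeq((__force u64) cpu_to_be64((u64) hi << 32 | lo), dest);
    }
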
* Detach sched.h from mm.h (Alexey Dobriyan, 2007-05-21)

The first thing mm.h does is include sched.h, solely for the can_do_mlock() inline function, which has a "current" dereference inside. By dealing with can_do_mlock(), mm.h can be detached from sched.h, which is good. See below for why.

This patch
a) removes the unconditional inclusion of sched.h from mm.h,
b) makes can_do_mlock() a normal function in mm/mlock.c,
c) exports can_do_mlock() to not break compilation,
d) adds sched.h inclusions back to files that were getting it indirectly,
e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were getting them indirectly.

The net result is:
a) mm.h users get less code to open, read, preprocess, parse, ... if they don't need sched.h,
b) sched.h stops being a dependency for a significant number of files: on x86_64 allmodconfig, touching sched.h results in a recompile of 4083 files; after this patch it's only 3744 (-8.3%).

Cross-compile tested on all arm defconfigs, all mips defconfigs, all powerpc defconfigs, alpha alpha-up arm i386 i386-up i386-defconfig i386-allnoconfig ia64 ia64-up m68k mips parisc parisc-up powerpc powerpc-up s390 s390-up sparc sparc-up sparc64 sparc64-up um-x86_64 x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig, as well as my two usual configs.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* IB/mthca: Set cleaned CQEs back to HW ownership when cleaning CQ (Michael S. Tsirkin, 2007-05-14)

mthca_cq_clean() updates the CQ consumer index without moving CQEs back to HW ownership. As a result, the same WRID might get reported twice, resulting in a use-after-free. This was observed in IPoIB CM. Fix by moving all freed CQEs to HW ownership.

This fixes <https://bugs.openfabrics.org/show_bug.cgi?id=617>

Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

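The shape of the fix, as a sketch: every CQE freed while compacting the queue is handed back to hardware before the consumer index moves on. The helper and constant names below follow mthca's conventions but are reproduced from memory, so treat them as assumptions.

    static inline void set_cqe_hw(struct mthca_cqe *cqe)
    {
        /* The top bit of the owner byte marks a CQE as hardware-owned,
         * so it can never be reported to the consumer again. */
        cqe->owner = MTHCA_CQ_ENTRY_OWNER_HW;
    }
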
* IB: Return "maybe missed event" hint from ib_req_notify_cq() (Roland Dreier, 2007-05-07)

The semantics defined by the InfiniBand specification say that completion events are only generated when a completion is added to a completion queue (CQ) after completion notification is requested. In other words, this means that the following race is possible:

    while (CQ is not empty)
        ib_poll_cq(CQ);

    // new completion is added after while loop is exited

    ib_req_notify_cq(CQ);

    // no event is generated for the existing completion

To close this race, the IB spec recommends doing another poll of the CQ after requesting notification. However, it is not always possible to arrange code this way (for example, we have found that NAPI for IPoIB cannot poll after requesting notification). Also, some hardware (eg Mellanox HCAs) actually will generate an event for completions added before the call to ib_req_notify_cq() -- which is allowed by the spec, since there's no way for any upper-layer consumer to know exactly when a completion was really added -- so the extra poll of the CQ is just a waste.

Motivated by this, we add a new flag "IB_CQ_REPORT_MISSED_EVENTS" for ib_req_notify_cq() so that it can return a hint about whether a completion may have been added before the request for notification. The return value of ib_req_notify_cq() is extended so:

     < 0   means an error occurred while requesting notification
    == 0   means notification was requested successfully, and if
           IB_CQ_REPORT_MISSED_EVENTS was passed in, then no events were
           missed and it is safe to wait for another event.
     > 0   is only returned if IB_CQ_REPORT_MISSED_EVENTS was passed in. It
           means that the consumer must poll the CQ again to make sure it is
           empty to avoid the race described above.

We add a flag to enable this behavior rather than turning it on unconditionally, because checking for missed events may incur significant overhead for some low-level drivers, and consumers that don't care about the results of this test shouldn't be forced to pay for the test.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

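A minimal sketch of the intended consumer pattern (the completion handler is hypothetical):

    static void poll_then_rearm(struct ib_cq *cq)
    {
        struct ib_wc wc;

        do {
            while (ib_poll_cq(cq, 1, &wc) > 0)
                handle_completion(&wc);   /* hypothetical handler */
        } while (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
                                      IB_CQ_REPORT_MISSED_EVENTS) > 0);
    }
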
* IB: Return qp pointer as part of ib_wc (Michael S. Tsirkin, 2007-02-04)

struct ib_wc currently only includes the local QP number: this matches the IB spec, but seems mostly useless. The following patch replaces this with the pointer to qp itself, and updates all low level drivers and all users.

This has the following advantages:
- Ability to get a per-qp context through wc->qp->qp_context.
- Existing drivers already have the qp pointer ready in poll cq, so this change actually saves a tiny bit (extra memory read) on data path. (For ehca it would actually be expensive to find the QP pointer when polling a CQ, but ehca does not support SRQ so we can leave wc->qp as NULL for ehca.)
- Users that need the QP number can still get it through wc->qp->qp_num.

Use case: in IPoIB connected mode code, I have a common CQ shared by multiple QPs. To track connection usage, I need a way to get at some per-QP context upon the completion, and I would like to avoid allocating context object per work request just to stick a QP pointer into it. With this code, I can just use wc->qp->qp_context.

Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

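For example, a poll loop over a CQ shared by many QPs can recover its per-connection state directly from the completion (the context structure is hypothetical):

    static void poll_shared_cq(struct ib_cq *cq)
    {
        struct ib_wc wc;

        while (ib_poll_cq(cq, 1, &wc) > 0) {
            struct my_conn *conn = wc.qp->qp_context;  /* hypothetical per-QP context */

            /* handle the completion using conn; no per-WR context allocation needed */
            (void) conn;
        }
    }
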
* IB/mthca: Fix PRM compliance problem in atomic-send completions (Jack Morgenstein, 2007-01-07)

According to the Tavor and Arbel programmer's reference manuals, the number of bytes transferred is not provided in the byte_cnt field of the CQ entry for atomic operation completions. For atomic operations, the number of bytes transferred is always 8 (when the status is "success"), and this constant value should always be used by the driver in the ib_wc entry returned, rather than using the CQE.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Fix section mismatches (Roland Dreier, 2006-11-29)

Commit b3b30f5e ("IB/mthca: Recover from catastrophic errors") introduced some section mismatch breakage, because the error recovery code tears down and reinitializes the device, which calls into lots of code originally marked __devinit and __devexit from regular .text. Fix this by getting rid of these now-incorrect section markers.

Reported by Randy Dunlap <randy.dunlap@oracle.com>.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Use mmiowb after doorbell ring (Arthur Kepner, 2006-10-16)

We discovered a problem when running IPoIB applications on multiple CPUs on an Altix system. Many messages such as:

    ib_mthca 0002:01:00.0: SQ 000014 full (19941644 head, 19941707 tail, 64 max, 0 nreq)

appear in syslog, and the driver wedges up.

Apparently this is because writes to the doorbells from different CPUs reach the device out of order. The following patch adds mmiowb() calls after doorbell rings to ensure the doorbell writes are ordered.

Signed-off-by: Arthur Kepner <akepner@sgi.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: Whitespace fixes (Roland Dreier, 2006-09-22)

Remove some trailing whitespace that has snuck in despite the best efforts of whitespace=error-all. Also fix a few other whitespace bogosities.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Make all device methods truly reentrant (Roland Dreier, 2006-06-17)

Documentation/infiniband/core_locking.txt says:

    All of the methods in struct ib_device exported by a low-level driver
    must be fully reentrant. The low-level driver is required to perform
    all synchronization necessary to maintain consistency, even if
    multiple function calls using the same object are run simultaneously.

However, mthca's modify_qp, modify_srq and resize_cq methods are currently not reentrant. Add a mutex to the QP, SRQ and CQ structures so that these calls can be properly serialized.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

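A minimal sketch of the serialization pattern for one of these methods, assuming a mutex member added to the CQ structure (the helper and member names are assumptions):

    static int mthca_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
    {
        struct mthca_cq *cq = to_mcq(ibcq);   /* helper name assumed */
        int ret = 0;

        mutex_lock(&cq->mutex);               /* per-CQ mutex (assumed member name) */
        /* allocate the new buffer, issue the RESIZE_CQ firmware command and
         * swap buffers while no concurrent resize of this CQ can run */
        mutex_unlock(&cq->mutex);

        return ret;
    }
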
* IB/mthca: memfree completion with error FW bug workaround (Michael S. Tsirkin, 2006-06-17)

Memfree firmware in rare cases reports WQE index == base - 1 in a receive completion with error, instead of (rq size - 1); base is 0 in mthca. Here is a patch to avoid a kernel crash and report a correct WR ID in this case.

Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Fix race in reference counting (Roland Dreier, 2006-05-09)

Fix races in destroying various objects. If a destroy routine waits for an object to become free by doing

    wait_event(&obj->wait, !atomic_read(&obj->refcount));
    /* now clean up and destroy the object */

and another place drops a reference to the object by doing

    if (atomic_dec_and_test(&obj->refcount))
        wake_up(&obj->wait);

then this is susceptible to a race where the wait_event() and final freeing of the object occur between the atomic_dec_and_test() and the wake_up(). And this is a use-after-free, since wake_up() will be called on part of the already-freed object.

Fix this in mthca by replacing the atomic_t refcounts with plain old integers protected by a spinlock. This makes it possible to do the decrement of the reference count and the wake_up() so that it appears as a single atomic operation to the code waiting on the wait queue.

While touching this code, also simplify mthca_cq_clean(): the CQ being cleaned cannot go away, because it still has a QP attached to it. So there's no reason to be paranoid and look up the CQ by number; it's perfectly safe to use the pointer that the callers already have.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

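A minimal sketch of the replacement pattern, with illustrative names; the point is that the decrement and the wake_up() sit in the same critical section that the waiter's re-check of the count must enter:

    static DEFINE_SPINLOCK(obj_table_lock);   /* illustrative */

    struct obj {
        int               refcount;          /* plain int, not atomic_t */
        wait_queue_head_t wait;
    };

    static int obj_busy(struct obj *obj)
    {
        int count;

        spin_lock_irq(&obj_table_lock);
        count = obj->refcount;
        spin_unlock_irq(&obj_table_lock);

        return count;
    }

    static void obj_put(struct obj *obj)
    {
        spin_lock_irq(&obj_table_lock);
        if (!--obj->refcount)
            wake_up(&obj->wait);              /* object is still alive here */
        spin_unlock_irq(&obj_table_lock);
    }

    static void obj_destroy(struct obj *obj)
    {
        /* cannot return before obj_put() has released the lock */
        wait_event(obj->wait, !obj_busy(obj));
        /* now it is safe to free obj */
        kfree(obj);
    }
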
* IB/mthca: Fix section mismatch problems (Roland Dreier, 2006-03-29)

Quite a few cleanup functions in mthca were marked as __devexit. However, they could also be called from error paths during initialization, so they cannot be marked that way. Just delete all of the incorrect annotations.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Add device-specific support for resizing CQs (Roland Dreier, 2006-03-20)

Add low-level driver support for resizing CQs (both kernel and userspace) to mthca.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Get rid of might_sleep() annotations (Roland Dreier, 2006-03-20)

The might_sleep() annotations in mthca are silly -- they all occur shortly before calls that will end up in core functions like kmalloc() that will print the same warning in an unsafe context anyway. In fact, beyond cluttering the source, we're actually bloating text with CONFIG_DEBUG_SPINLOCK_SLEEP and/or CONFIG_PREEMPT_VOLUNTARY set. With both options set, getting rid of the might_sleep()s saves a lot:

    add/remove: 0/0 grow/shrink: 0/7 up/down: 0/-171 (-171)
    function              old     new   delta
    mthca_pd_alloc        132     109     -23
    mthca_init_cq         969     946     -23
    mthca_mr_alloc        592     568     -24
    mthca_pd_free          67      42     -25
    mthca_free_mr         219     194     -25
    mthca_free_cq         570     545     -25
    mthca_fmr_alloc       742     716     -26

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Make functions that never fail return void (Roland Dreier, 2006-03-20)

The function mthca_free_err_wqe() can never fail, so get rid of its return value. That means handle_error_cqe() doesn't have to check what mthca_free_err_wqe() returns, which means it can't fail either and doesn't have to return anything either. All this results in simpler source code and a slight object code improvement:

    add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-10 (-10)
    function              old     new   delta
    mthca_free_err_wqe     83      81      -2
    mthca_poll_cq        1758    1750      -8

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Fill in vendor_err field in completion with error (Michael S. Tsirkin, 2006-01-06)

Fill in the vendor_err field in completions that have an error status.

Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Fix SRQ cleanup during QP destroy (Jack Morgenstein, 2005-12-15)

When cleaning up a CQ for a QP attached to an SRQ, we need to free an SRQ WQE only if the CQE is a receive completion.

Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [IB] mthca: fix wraparound handling in mthca_cq_clean() (Roland Dreier, 2005-11-10)

Handle the case where prod_index has wrapped around and become less than cq->cons_index by checking that their difference as a signed int is positive, rather than comparing the two values directly.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

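The idea in miniature (the helper is illustrative, not taken from the driver):

    static int entries_pending(u32 prod_index, u32 cons_index)
    {
        /* A direct "cons_index < prod_index" test breaks once prod_index
         * wraps around past cons_index; the unsigned difference viewed as
         * a signed int stays correct as long as the gap is below 2^31. */
        return (int) (prod_index - cons_index) > 0;
    }
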
* [IB] mthca: report asynchronous CQ events (Michael S. Tsirkin, 2005-10-29)

Implement reporting asynchronous CQ events in Mellanox HCA driver.

Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [PATCH] IB: move include files to include/rdma (Roland Dreier, 2005-08-26)

Move the InfiniBand headers from drivers/infiniband/include to include/rdma. This allows InfiniBand-using code to live elsewhere, and lets us remove the ugly EXTRA_CFLAGS include path from the InfiniBand Makefiles.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [PATCH] IB/mthca: Add SRQ implementation (Roland Dreier, 2005-08-26)

Add mthca support for shared receive queues (SRQs), including userspace SRQs.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [PATCH] IB/mthca: Simplify handling of completions with error (Roland Dreier, 2005-08-26)

Mem-free HCAs never generate error CQEs that complete multiple WQEs, so just skip the call to mthca_free_err_wqe() for them rather than having logic to handle the mem-free case in mthca_free_err_wqe().

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [PATCH] IB/mthca: Factor out common queue alloc code (Roland Dreier, 2005-08-26)

Clean up the allocation of memory for queues by factoring out the common code into mthca_buf_alloc() and mthca_buf_free(). Now CQs and QPs share the same queue allocation code, which we'll also use for SRQs.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [PATCH] IB: sparse endianness cleanup (Sean Hefty, 2005-08-26)

Fix sparse warnings. Use __be* where appropriate.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [PATCH] IB: Add copyright notices (Roland Dreier, 2005-08-26)

Make some lawyers happy and add copyright notices for people who forgot to include them when they actually touched the code.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [IB/mthca]: Fix error CQ entry handling on mem-free HCAs (Roland Dreier, 2005-07-27)

Fix handling of error CQ entries on mem-free HCAs: the doorbell count is never valid so we shouldn't look at it. This fixes problems exposed by new HCA firmware.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [PATCH] IB uverbs: add mthca user CQ support (Roland Dreier, 2005-07-07)

Add support for userspace completion queues (CQs) to mthca.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: Align FW command mailboxes to 4K (Roland Dreier, 2005-06-27)

Future versions of Mellanox HCA firmware will require command mailboxes to be aligned to 4K. Support this by using a pci_pool to allocate all mailboxes. This has the added benefit of shrinking the source and text of mthca.

Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: Move mthca_is_memfree checks (Roland Dreier, 2005-06-27)

Make mthca_table_put() and mthca_table_put_range() NOPs if the device is not mem-free, so that we don't have to have "if (mthca_is_memfree())" tests in the callers of these functions. This makes our code more readable and maintainable, and saves a couple dozen bytes of text in ib_mthca.ko as well.

Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: Use dma_alloc_coherent instead of pci_alloc_consistent (Roland Dreier, 2005-06-27)

Switch all allocations of coherent memory from pci_alloc_consistent() to dma_alloc_coherent(), so that we can pass GFP_KERNEL. This should help when the system is low on memory.

Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: Clean up CQ debug (Roland Dreier, 2005-06-27)

Clean up CQ debugging code: make dump_cqe print on one line, and only dump error CQ entries for local operation errors.

Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: Add Sun copyright notice (Tom Duffy, 2005-06-27)

Add Sun copyright to files modified by Tom Duffy.

Signed-off-by: Tom Duffy <tduffy@sun.com>
Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: encapsulate mem-free check into mthca_is_memfree() (Roland Dreier, 2005-04-16)

Clean up mem-free mode support by introducing the mthca_is_memfree() function, which encapsulates the logic of deciding if a device is mem-free.

Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: fill in opcode field for send completions (Michael S. Tsirkin, 2005-04-16)

Fill in missing fields in send completions.

Signed-off-by: Itamar Rabenstein <itamar@mellanox.co.il>
Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] IB/mthca: only free doorbell records in mem-free mode (Roland Dreier, 2005-04-16)

On error path, only free doorbell records if we're in mem-free mode.

Signed-off-by: Roland Dreier <roland@topspin.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) (Linus Torvalds, 2005-04-16)

Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.

Let it rip!