path: root/drivers/infiniband

* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband (Linus Torvalds, 2006-06-19)

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband: (46 commits)
      IB/uverbs: Don't serialize with ib_uverbs_idr_mutex
      IB/mthca: Make all device methods truly reentrant
      IB/mthca: Fix memory leak on modify_qp error paths
      IB/uverbs: Factor out common idr code
      IB/uverbs: Don't decrement usecnt on error paths
      IB/uverbs: Release lock on error path
      IB/cm: Use address handle helpers
      IB/sa: Add ib_init_ah_from_path()
      IB: Add ib_init_ah_from_wc()
      IB/ucm: Get rid of duplicate P_Key parameter
      IB/srp: Factor out common request reset code
      IB/srp: Support SRP rev. 10 targets
      [SCSI] srp.h: Add I/O Class values
      IB/fmr: Use device's max_map_per_fmr attribute in FMR pool.
      IB/mthca: Fill in max_map_per_fmr device attribute
      IB/ipath: Add client reregister event generation
      IB/mthca: Add client reregister event generation
      IB: Move struct port_info from ipath to <rdma/ib_smi.h>
      IPoIB: Handle client reregister events
      IB: Add client reregister event type
      ...

* IB/uverbs: Don't serialize with ib_uverbs_idr_mutex (Roland Dreier, 2006-06-17)

    Currently, all userspace verbs operations that call into the kernel are
    serialized by ib_uverbs_idr_mutex.  This can be a scalability issue for
    some workloads, especially for devices driven by the ipath driver,
    which needs to call into the kernel even for datapath operations.

    Fix this by adding reference counts to the userspace objects, and then
    converting ib_uverbs_idr_mutex into a spinlock that only protects the
    idrs long enough to take a reference on the object being looked up.
    Because remove operations may fail, we have to do a slightly funky
    two-step deletion, which is described in the comments at the top of
    uverbs_cmd.c.

    This also still leaves ib_uverbs_idr_lock as a single lock that is
    possibly subject to contention.  However, the lock hold time will only
    be a single idr operation, so multiple threads should still be able to
    make progress, even if ib_uverbs_idr_lock is being ping-ponged.

    Surprisingly, these changes even shrink the object code:

    add/remove: 23/5 grow/shrink: 4/21 up/down: 633/-693 (-60)

    Signed-off-by: Roland Dreier <rolandd@cisco.com>
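
    A minimal sketch of the new lookup pattern (the helper and field names
    here are illustrative, not the exact uverbs_cmd.c code): the idr
    spinlock is held only long enough to find the object and take a
    reference on it.

        static struct ib_uobject *lookup_uobj_ref(struct idr *idr, int id)
        {
                struct ib_uobject *uobj;

                spin_lock(&ib_uverbs_idr_lock);
                uobj = idr_find(idr, id);
                if (uobj)
                        kref_get(&uobj->ref);   /* assumes a kref member */
                spin_unlock(&ib_uverbs_idr_lock);

                return uobj;
        }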

* IB/mthca: Make all device methods truly reentrant (Roland Dreier, 2006-06-17)

    Documentation/infiniband/core_locking.txt says:

        All of the methods in struct ib_device exported by a low-level
        driver must be fully reentrant.  The low-level driver is required
        to perform all synchronization necessary to maintain consistency,
        even if multiple function calls using the same object are run
        simultaneously.

    However, mthca's modify_qp, modify_srq and resize_cq methods are
    currently not reentrant.  Add a mutex to the QP, SRQ and CQ structures
    so that these calls can be properly serialized.

    Signed-off-by: Roland Dreier <rolandd@cisco.com>
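
    A simplified sketch of the serialization this adds (generic structure
    and function names, not the real mthca ones): each object carries a
    mutex that is held for the duration of the method.

        struct hw_object {
                struct mutex mutex;     /* serializes modify/resize calls */
                /* ... hardware state ... */
        };

        static int modify_object(struct hw_object *obj)
        {
                int err;

                mutex_lock(&obj->mutex);
                err = 0;        /* validate, issue firmware command, update state */
                mutex_unlock(&obj->mutex);

                return err;
        }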

* IB/mthca: Fix memory leak on modify_qp error paths (Roland Dreier, 2006-06-17)

    Some error paths after the mthca_alloc_mailbox() call in
    mthca_modify_qp() just do a "return -EINVAL" without freeing the
    mailbox.  Convert these returns to "goto out" to avoid leaking the
    mailbox storage.

    Signed-off-by: Roland Dreier <rolandd@cisco.com>
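
    The shape of the fix, as a sketch (the validity check shown is an
    illustrative placeholder, not the actual test in mthca_modify_qp()):

        mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
        if (IS_ERR(mailbox))
                return PTR_ERR(mailbox);

        if (!transition_is_valid(cur_state, new_state)) {
                err = -EINVAL;
                goto out;       /* was "return -EINVAL", leaking the mailbox */
        }

        /* ... */

        out:
                mthca_free_mailbox(dev, mailbox);
                return err;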

* IB/uverbs: Factor out common idr code (Roland Dreier, 2006-06-17)

    Factor out common code for adding a userspace object to an idr into a
    function idr_add_uobj().  This shrinks both the source and object code:

    add/remove: 1/0 grow/shrink: 0/6 up/down: 57/-220 (-163)
    function                               old     new   delta
    idr_add_uobj                             -      57     +57
    ib_uverbs_create_ah                    543     512     -31
    ib_uverbs_create_srq                   662     630     -32
    ib_uverbs_reg_mr                       737     699     -38
    ib_uverbs_create_cq                    639     600     -39
    ib_uverbs_alloc_pd                     485     446     -39
    ib_uverbs_create_qp                   1020     979     -41

    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/uverbs: Don't decrement usecnt on error paths (Roland Dreier, 2006-06-17)

    In error paths when destroying an object, uverbs should not decrement
    associated objects' usecnt, since ib_dereg_mr(), ib_destroy_qp(), etc.
    already do that.

    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/uverbs: Release lock on error path (Ganapathi CH, 2006-06-17)

    If ibdev->alloc_ucontext() fails then ib_uverbs_get_context() does not
    unlock file->mutex before returning an error.

    Signed-off-by: Ganapathi CH <cganapathi@novell.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>
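
    A sketch of the corrected error path (simplified; the surrounding
    ib_uverbs_get_context() code and its success path are not shown):

        mutex_lock(&file->mutex);
        /* ... */
        ucontext = ibdev->alloc_ucontext(ibdev, &udata);
        if (IS_ERR(ucontext)) {
                ret = PTR_ERR(ucontext);
                goto err;       /* previously returned here with the mutex held */
        }
        /* ... */

        err:
                mutex_unlock(&file->mutex);
                return ret;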

* IB/cm: Use address handle helpers (Sean Hefty, 2006-06-17)

    Use new ib_init_ah_from_wc() and ib_init_ah_from_path() helper
    functions to clean up the IB CM.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/sa: Add ib_init_ah_from_path() (Sean Hefty, 2006-06-17)

    Add a call to initialize address handle attributes given a path record.
    This is used by the CM, and would be useful for users of UD QPs.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: Add ib_init_ah_from_wc() (Sean Hefty, 2006-06-17)

    Add a function to initialize address handle attributes from a work
    completion.  This functionality is duplicated by both verbs and the CM.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/ucm: Get rid of duplicate P_Key parameter (Sean Hefty, 2006-06-17)

    The P_Key is provided into a SIDR REQ in two places, once as a
    parameter, and again in the path record.  Remove the P_Key as a
    parameter and always use the one given in the path record.

    This change has no practical effect on ABI functionality.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Factor out common request reset code (Ishai Rabinovitz, 2006-06-17)

    Misc cleanups in ib_srp:

    1) I think that it is more efficient to move the req entries from
       req_list to free_list in srp_reconnect_target (rather than rebuild
       the free_list).  (In any case this code is shorter).
    2) This allows us to reuse code in srp_reset_device and
       srp_reconnect_target and call a new function srp_reset_req.

    Signed-off-by: Ishai Rabinovitz <ishai@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Support SRP rev. 10 targets (Ramachandra K, 2006-06-17)

    There has been a change in the format of port identifiers between
    revision 10 of the SRP specification and the current revision 16A.

    Revision 10 specifies the port identifier format as

        lower 8 bytes : GUID
        upper 8 bytes : Extension

    whereas revision 16A specifies it as

        lower 8 bytes : Extension
        upper 8 bytes : GUID

    There are older targets (e.g. SilverStorm Virtual Fibre Channel Bridge)
    which conform to revision 10 of the SRP specification.  The I/O class
    of revision 10 is 0xFF00 and the I/O class of revision 16A is 0x0100.
    For supporting older targets, this patch:

    1) Adds a new optional target creation parameter "io_class".  Default
       value of io_class is 0x0100 (i.e. revision 16A)
    2) Uses the correct port identifier format for targets with IO class
       of 0xFF00 (i.e. conforming to revision 10)

    Signed-off-by: Ramachandra K <rkuchimanchi@silverstorm.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/fmr: Use device's max_map_per_fmr attribute in FMR pool. (Or Gerlitz, 2006-06-17)

    When creating a FMR pool, query the IB device and use the returned
    max_map_per_fmr attribute as the maximum number of FMR remaps.  If the
    device does not support querying this attribute, use the original
    IB_FMR_MAX_REMAPS (32) default.

    Signed-off-by: Or Gerlitz <ogerlitz@voltaire.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>
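
    A sketch of the fallback described above (illustrative, not the exact
    ib_create_fmr_pool() code):

        struct ib_device_attr attr;
        int max_remaps;

        if (ib_query_device(pd->device, &attr) || !attr.max_map_per_fmr)
                max_remaps = IB_FMR_MAX_REMAPS; /* attribute not reported */
        else
                max_remaps = attr.max_map_per_fmr;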

* IB/mthca: Fill in max_map_per_fmr device attribute (Or Gerlitz, 2006-06-17)

    Report the true max_map_per_fmr value from mthca_query_device(), taking
    into account the change in FMR remapping introduced by the Sinai
    performance optimization.

    Signed-off-by: Or Gerlitz <ogerlitz@voltaire.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/ipath: Add client reregister event generation (Roland Dreier, 2006-06-17)

    Generate a client reregister event instead of a LID change event when
    the client reregister bit is set.

    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Add client reregister event generation (Leonid Arsh, 2006-06-17)

    Change the mthca snoop of MADs that set PortInfo to check if the SM has
    set the client reregister bit, and if it has, generate a client
    reregister event.  If the bit is not set, just generate a LID change
    event as usual.

    Signed-off-by: Leonid Arsh <leonida@voltaire.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: Move struct port_info from ipath to <rdma/ib_smi.h> (Leonid Arsh, 2006-06-17)

    Move ipath's struct port_info into <rdma/ib_smi.h>, so that it can be
    used by mthca to implement client reregister support.  Remove the
    __attribute__((packed)) because all the members of the struct are
    naturally aligned anyway.

    Signed-off-by: Leonid Arsh <leonida@voltaire.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IPoIB: Handle client reregister events (Leonid Arsh, 2006-06-17)

    Handle client reregister events by treating them just like LID or SM
    changes -- flush all cached paths and rejoin multicast groups.

    Signed-off-by: Leonid Arsh <leonida@voltaire.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IPoIB: Fix kernel unaligned access on ia64 (Jack Morgenstein, 2006-06-17)

    Fix misaligned access faults on ia64: never cast a misaligned
    neighbour->ha + 4 pointer to union ib_gid type; pass a void * pointer
    instead.  The memcpy was being optimized to use full word accesses
    because the compiler thought that union ib_gid is always aligned.

    The cast in IPOIB_GID_ARG is safe, since it accesses each byte
    separately.

    Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
    Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IPoIB: Avoid using stale last_send counter when reaping AHs (Roland Dreier, 2006-06-17)

    The comparisons of priv->tx_tail to ah->last_send in ipoib_free_ah()
    and ipoib_post_receive() are slightly unsafe, because priv->tx_lock is
    not held and hence a stale value of ah->last_send might be used, which
    would lead to freeing an AH before the driver was really done with it.

    The simple way to fix this is to remove the optimization of early free
    from ipoib_free_ah() and unconditionally queue AHs for reaping, and
    then take priv->tx_lock in __ipoib_reap_ah().

    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mad: Check GID/LID when matching requests (Jack Morgenstein, 2006-06-17)

    Check the GID/LID on the requester side when searching for the request
    which matches a received response.  This is in order to guarantee
    uniqueness if the same TID is used when requesting via multiple source
    LIDs (when LMC is not zero).  Use the ports' cached LMC to perform the
    check.

    Further, do not perform the LID check for direct-routed packets, since
    the permissive LID makes a proper check impossible.

    Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: Add caching of ports' LMC (Jack Morgenstein, 2006-06-17)

    Add an LMC cache to struct ib_device, and add a function
    ib_get_cached_lmc() to query the cache.

    Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>
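
    A usage sketch for the new query (the caller-side names here are
    illustrative; the helper takes the device, a port number, and a pointer
    that receives the cached LMC, returning 0 on success):

        u8 lmc;

        if (!ib_get_cached_lmc(device, port_num, &lmc))
                use_lmc(lmc);   /* e.g. mask the path bits of a source LID */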

* IB/cm: remove unneeded flush_workqueue (Michael S. Tsirkin, 2006-06-17)

    destroy_workqueue() already does flush_workqueue().

    Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
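
    In other words, a sequence like the following (with wq standing in for
    the CM's workqueue) can drop its explicit flush:

        flush_workqueue(wq);    /* redundant; removed by this patch */
        destroy_workqueue(wq);  /* flushes remaining work, then frees wq */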

* IB/ucm: convert semaphore to mutex (Sean Hefty, 2006-06-17)

    Convert semaphore in ib_ucm_file to a real mutex.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Get rid of "Target has req_lim 0" messages (Roland Dreier, 2006-06-17)

    It's perfectly valid for a connection to an SRP target to have a
    request limit of 0, so get rid of the message about it, which can spam
    kernel logs even with printk_ratelimit().  Keep a count of such events
    in a "zero_req_lim" SCSI host attribute instead, so someone who cares
    can look at the statistics.

    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Handle DREQ events from CM (Ishai Rabinovitz, 2006-06-17)

    Handle IB_CM_DREQ_ERROR and IB_CM_DREQ_RECEIVED events from the CM,
    instead of just printing "Unhandled CM event".  In the case of
    DREQ_ERROR, just ignore the event -- a TIMEWAIT_EXIT will be generated
    also.  For DREQ_RECEIVED, send a DREP in response to shut the
    connection down cleanly.

    Signed-off-by: Ishai Rabinovitz <ishai@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Allow sg_tablesize to be adjusted (Vu Pham, 2006-06-17)

    Make the sg_tablesize used by SRP adjustable at module load time via a
    module parameter.  Calculate the corresponding IU length required to
    support this.

    Signed-off-by: Vu Pham <vu@mellanox.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Allow cmd_per_lun to be set per target port (Vu Pham, 2006-06-17)

    Allow userspace to throttle traffic on a given connection to a target
    port by adding "max_cmd_per_lun=xyz" to lower the cmd_per_lun value set
    for that scsi_host.

    Signed-off-by: Vu Pham <vu@mellanox.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Clean up loop in srp_remove_one() (Ishai Rabinovitz, 2006-06-17)

    Interrupts will always be enabled in srp_remove_one(), so
    spin_lock_irq() can be used instead of spin_lock_irqsave().  Also, the
    loop takes target->scsi_host->host_lock, so target->state can just be
    set to SRP_TARGET_REMOVED without testing the old value.

    Signed-off-by: Ishai Rabinovitz <ishai@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>
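
    A simplified sketch of the resulting critical section (interrupts are
    known to be enabled here, so no flags need to be saved or restored):

        spin_lock_irq(target->scsi_host->host_lock);
        target->state = SRP_TARGET_REMOVED;
        spin_unlock_irq(target->scsi_host->host_lock);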

* IB: Make needlessly global ib_mad_cache static (Roland Dreier, 2006-06-17)

    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/srp: Change target_mutex to a spinlock (Matthew Wilcox, 2006-06-17)

    The SRP driver never sleeps while holding target_mutex, and it's just
    used to protect some simple list operations, so hold times will be
    short.  So just convert it to a spinlock, which is smaller and faster.

    Signed-off-by: Matthew Wilcox <matthew@wil.cx>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>
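
    With only short, non-sleeping list manipulations in the critical
    section, the lock use can look like this (a sketch with illustrative
    field names, not the exact srp.c code):

        spin_lock(&host->target_lock);
        list_add_tail(&target->list, &host->target_list);
        spin_unlock(&host->target_lock);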

* IB/srp: Get rid of unneeded use of list_for_each_entry_safe() (Matthew Wilcox, 2006-06-17)

    list_for_each_entry_safe() is used in one place where the list isn't
    modified.  So just change it to list_for_each_entry().

    Signed-off-by: Matthew Wilcox <matthew@wil.cx>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>
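
    The _safe variant keeps a second cursor so that the current entry may
    be deleted while iterating; when nothing is removed, the plain iterator
    is enough.  A generic sketch (the type and list names are placeholders,
    not the SRP code itself):

        struct item *it;

        list_for_each_entry(it, &item_list, list)
                process(it);    /* no list_del() in the body, so no _safe needed */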

* IB/srp: Use SCAN_WILD_CARD from SCSI headers (Matthew Wilcox, 2006-06-17)

    SCAN_WILD_CARD is indeed available from <scsi/scsi.h>, which is already
    included.  So get rid of the private hack.

    Signed-off-by: Matthew Wilcox <matthew@wil.cx>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Convert FW commands to use wait_for_completion_timeout() (Roland Dreier, 2006-06-17)

    The kernel has had wait_for_completion_timeout() for a long time now.
    mthca should use it to handle FW commands timing out, instead of
    implementing the same thing in a much more complicated way by using
    wait_for_completion() along with a timer that does complete().

    Signed-off-by: Roland Dreier <rolandd@cisco.com>
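
    The single call replaces the completion-plus-timer pair; a sketch (the
    context structure and timeout variable names are illustrative):

        if (!wait_for_completion_timeout(&context->done,
                                         msecs_to_jiffies(timeout_ms))) {
                err = -EBUSY;   /* firmware command timed out */
                goto out;
        }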

* IB/srp: Use FMRs to map gather/scatter lists (Roland Dreier, 2006-06-17)

    Create an SRP FMR pool on HCAs that support FMRs, and use FMRs to map
    gather/scatter lists that have more than one entry into a single memory
    region that appears virtually contiguous to the SRP target (which is
    the RDMA initiator).

    This patch bails out on FMR mapping for SCSI commands where the
    gather/scatter list cannot be mapped into a single FMR because there
    are sub-page-sized entries in the middle of the list.  An unaligned
    start or end of the list is OK.

    Based on a patch by Vu Pham <vuhuong@mellanox.com>.

    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Remove dead code (Michael S. Tsirkin, 2006-06-17)

    Kill some dead code in mthca_eq.c

    Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: IP address based RDMA connection manager (Sean Hefty, 2006-06-17)

    Kernel connection management agent over InfiniBand that connects based
    on IP addresses.  The agent defines a generic RDMA connection
    abstraction to support clients wanting to connect over different RDMA
    devices.

    The agent also handles RDMA device hotplug events on behalf of clients.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: address translation to map IP to IB addresses (GIDs) (Sean Hefty, 2006-06-17)

    Add an address translation service that maps IP addresses to InfiniBand
    GID addresses using IPoIB.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/cm: Match connection requests based on private data (Sean Hefty, 2006-06-17)

    Extend matching connection requests to listens in the InfiniBand CM to
    include private data checks.  This allows applications to listen on the
    same service identifier, with private data directing the request to the
    appropriate application.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: common handling for marshalling parameters to/from userspace (Sean Hefty, 2006-06-17)

    Provide common handling for marshalling data between userspace clients
    and kernel InfiniBand drivers.

    Signed-off-by: Sean Hefty <sean.hefty@intel.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: memfree completion with error FW bug workaround (Michael S. Tsirkin, 2006-06-17)

    Memfree firmware is in rare cases reporting WQE index == base - 1 in
    receive completion with error, instead of (rq size - 1); base is 0 in
    mthca.  Here is a patch to avoid kernel crash and report a correct WR
    id in this case.

    Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: restore missing PCI registers after reset (Michael S. Tsirkin, 2006-06-17)

    mthca does not restore the following PCI-X/PCI Express registers after
    reset:

        PCI-X device:  PCI-X command register
        PCI-X bridge:  upstream and downstream split transaction registers
        PCI Express:   PCI Express device control and link control registers

    This causes instability and/or bad performance on systems where one of
    these registers is set to a non-default value by BIOS.

    Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* [NET]: Add netif_tx_lock (Herbert Xu, 2006-06-18)

    Various drivers use xmit_lock internally to synchronise with their
    transmission routines.  They do so without setting xmit_lock_owner.
    This is fine as long as netpoll is not in use.

    With netpoll it is possible for deadlocks to occur if xmit_lock_owner
    isn't set.  This is because a printk that occurs while xmit_lock is
    held and xmit_lock_owner is not set can cause netpoll to attempt to
    take xmit_lock recursively.

    While it is possible to resolve this by getting netpoll to use trylock,
    it is suboptimal because netpoll's sole objective is to maximise the
    chance of getting the printk out on the wire.  So delaying or dropping
    the message is to be avoided as much as possible.

    So the only alternative is to always set xmit_lock_owner.  The
    following patch does this by introducing the netif_tx_lock family of
    functions that take care of setting/unsetting xmit_lock_owner.

    I renamed xmit_lock to _xmit_lock to indicate that it should not be
    used directly.  I didn't provide irq versions of the netif_tx_lock
    functions since xmit_lock is meant to be a BH-disabling lock.

    This is pretty much a straight text substitution except for a small bug
    fix in winbond.  It currently uses netif_stop_queue/spin_unlock_wait to
    stop transmission.  This is unsafe as an IRQ can potentially wake up
    the queue.  So it is safer to use netif_tx_disable.

    The hamradio bits used spin_lock_irq but it is unnecessary as xmit_lock
    must never be taken in an IRQ handler.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
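
    The approximate shape of the new helpers (simplified; see
    include/linux/netdevice.h in this kernel for the real definitions,
    including the _bh variants):

        static inline void netif_tx_lock(struct net_device *dev)
        {
                spin_lock(&dev->_xmit_lock);
                dev->xmit_lock_owner = smp_processor_id();
        }

        static inline void netif_tx_unlock(struct net_device *dev)
        {
                dev->xmit_lock_owner = -1;
                spin_unlock(&dev->_xmit_lock);
        }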

* IPoIB: Fix AH leak at interface down (Eli Cohen, 2006-06-05)

    When ipoib_stop() is called it first calls netif_stop_queue() to stop
    the kernel from passing more packets to the network driver.  However,
    the completion handler may call netif_wake_queue(), re-enabling packet
    transfer.

    This might result in leaks (we see AH leaks which we think can be
    attributed to this bug) as new packets get posted while the interface
    is going down.

    Signed-off-by: Eli Cohen <eli@mellanox.co.il>
    Signed-off-by: Michael Tsirkin <mst@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/mthca: Fix posting lists of 256 receive requests to SRQ for Tavor (Michael S. Tsirkin, 2006-05-24)

    If we post a list of length exactly a multiple of 256, nreq in the
    doorbell gets set to 256 which is wrong: it should be encoded by 0.
    This is because we only zero it out on the next WR, which may not be
    there.  The solution is to ring the doorbell after posting a WQE, not
    before posting the next one.

    This is the same bug that we just fixed for QPs with non-shared RQ.

    Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/ipath: deref correct pointer when using kernel SMA (Bryan O'Sullivan, 2006-05-23)

    At this point, the core QP structure hasn't been initialized, so what's
    in there isn't valid.  Get the same information elsewhere.

    Signed-off-by: Bryan O'Sullivan <bos@pathscale.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/ipath: fix null deref during rdma ops (Bryan O'Sullivan, 2006-05-23)

    The problem was that node A's sending thread, which handles sending
    RDMA read response data, would write the trigger word, the last packet
    would be sent, node B would send a new RDMA read request, node A's
    interrupt handler would initialize s_rdma_sge, then node A's sending
    thread would update s_rdma_sge.  This didn't happen very often
    naturally but was more frequent with 1 byte RDMA reads.

    Rather than adding more locking or increasing the QP structure size and
    copying sge data, I modified the copy routine to update the pointers
    before writing the trigger word to avoid the update race.

    Signed-off-by: Ralph Campbell <ralphc@pathscale.com>
    Signed-off-by: Bryan O'Sullivan <bos@pathscale.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/ipath: register as IB device owner (Bryan O'Sullivan, 2006-05-23)

    This fixes an oops.

    Signed-off-by: Bryan O'Sullivan <bos@pathscale.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/ipath: enable PE800 receive interrupts on user ports (Bryan O'Sullivan, 2006-05-23)

    Fixed so it works on the PE-800.  It had not previously been updated to
    match PE-800 receive interrupt differences from HT-400.

    Signed-off-by: Bryan O'Sullivan <bos@pathscale.com>
    Signed-off-by: Roland Dreier <rolandd@cisco.com>