| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
| |
- Add a new rdma ctl command called RDMA_GET_MIB to the cxgb3 low
level driver to obtain the protocol MIB from the RNIC hardware.
- Add new iw_cxgb3 provider method to get the MIB from the low level
driver.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
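A minimal sketch of how the two pieces fit together; apart from RDMA_GET_MIB,
every name below (the provider method, the low-level ctl hook, the MIB
structure and its fields) is an illustrative assumption, not the literal patch:

    /* iw_cxgb3 provider method (hypothetical shape): fetch the hardware
     * protocol MIB from the low-level driver via the new ctl command. */
    static int iwch_get_mib(struct ib_device *ibdev,
                            union rdma_protocol_stats *stats)
    {
        struct iwch_dev *dev = to_iwch_dev(ibdev);   /* assumed helper        */
        struct tp_mib_stats m;                       /* assumed HW MIB layout */
        int ret;

        ret = dev->rdev.t3cdev_p->ctl(dev->rdev.t3cdev_p, RDMA_GET_MIB, &m);
        if (ret)
            return ret;

        memset(stats, 0, sizeof(*stats));
        stats->iw.tcpInSegs  = m.tcp_in_segs;        /* assumed field names   */
        stats->iw.tcpOutSegs = m.tcp_out_segs;
        return 0;
    }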
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch adds a sysfs attribute group called "proto_stats" under
/sys/class/infiniband/$device/ and populates this group with protocol
statistics if they exist for a given device. Currently, only iWARP
stats are defined, but the code is designed to allow InfiniBand
protocol stats if they become available. These stats are per-device
and more importantly -not- per port.
Details:
- Add union rdma_protocol_stats in ib_verbs.h. This union allows
defining transport-specific stats. Currently only iwarp stats are
defined.
- Add struct iw_protocol_stats to define the current set of iwarp
protocol stats.
- Add new ib_device method called get_proto_stats() to return protocol
statistics.
- Add logic in core/sysfs.c to create iwarp protocol stats attributes
if the device is an RNIC and has a get_proto_stats() method.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
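A rough sketch of the pieces described above; the method name follows the text
(get_proto_stats), while the exact counter names and the signature are
assumptions based on the usual TCP/IP MIB counters:

    /* ib_verbs.h sketch: transport-specific stats union (iWARP only for now)
     * and the per-device method that fills it. */
    struct iw_protocol_stats {
        u64 tcpInSegs;                 /* assumed counter set */
        u64 tcpOutSegs;
        u64 tcpRetransSegs;
        u64 tcpInErrs;
        /* ... further iWARP/TCP MIB counters ... */
    };

    union rdma_protocol_stats {
        struct iw_protocol_stats iw;   /* only iWARP stats defined for now */
    };

    /* new ib_device method (assumed signature): fill *stats, 0 on success */
    int (*get_proto_stats)(struct ib_device *device,
                           union rdma_protocol_stats *stats);

core/sysfs.c then registers the "proto_stats" attribute group only when the
device is an RNIC and its get_proto_stats() method is non-NULL.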
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For devices that don't support SRQs, ipoib_cm_post_receive_nonsrq() is
called from both ipoib_cm_handle_rx_wc() and ipoib_cm_nonsrq_init_rx(),
and these two callers are not synchronized against each other.
However, ipoib_cm_post_receive_nonsrq() always reuses the same receive
work request and scatter list structures, so multiple callers can end
up stepping on each other, which leads to posting garbled work
requests.
Fix this by having the caller pass in the ib_recv_wr and ib_sge
structures to use, and allocating new local structures in
ipoib_cm_nonsrq_init_rx().
Based on a patch by Pradeep Satyanarayana <pradeep@us.ibm.com> and
David Wilder <dwilder@us.ibm.com>, with debugging help from Hoang-Nam
Nguyen <hnguyen@de.ibm.com>.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
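A sketch of the shape of the fix (signatures and field names are assumptions
based on the description, not the literal patch):

    /* The WR and SG list are now supplied by the caller, so the WC handler
     * and the init path no longer share one static structure. */
    static void ipoib_cm_post_receive_nonsrq(struct net_device *dev,
                                             struct ipoib_cm_rx *rx,
                                             struct ib_recv_wr *wr,
                                             struct ib_sge *sge, int id)
    {
        struct ib_recv_wr *bad_wr;
        int i;

        for (i = 0; i < IPOIB_CM_RX_SG; ++i)          /* assumed SG count   */
            sge[i].addr = rx->rx_ring[id].mapping[i]; /* assumed ring layout */

        wr->wr_id = id | IPOIB_OP_CM | IPOIB_OP_RECV; /* assumed encoding   */
        if (ib_post_recv(rx->qp, wr, &bad_wr))
            ipoib_warn(netdev_priv(dev), "post recv failed for buf %d\n", id);
    }

    /* ipoib_cm_nonsrq_init_rx() now allocates its own local structures
     * instead of reusing the device-global ones, e.g.:
     *     wr  = kmalloc(sizeof(*wr), GFP_KERNEL);
     *     sge = kcalloc(IPOIB_CM_RX_SG, sizeof(*sge), GFP_KERNEL);
     */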
|
|
|
|
|
| |
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Sean Hefty <sean.hefty@intel.com>
|
|
|
|
|
|
|
|
| |
The members struct iwch_rnic_attributes.vendor_id and .vendor_part_id
are write-only, so we might as well get rid of them.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
|
|
|
|
|
|
|
|
|
|
| |
- set fw_ver
- set hw_ver
- set max_qp_wr to something reasonable
- set max_cqe to something reasonable
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
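A sketch of where those values land; the iwch_query_device() name and the
value sources are assumptions, but struct ib_device_attr does carry fw_ver,
hw_ver, max_qp_wr and max_cqe fields:

    static int iwch_query_device(struct ib_device *ibdev,
                                 struct ib_device_attr *props)
    {
        struct iwch_dev *dev = to_iwch_dev(ibdev);  /* assumed helper        */

        memset(props, 0, sizeof(*props));
        props->fw_ver    = dev->fw_version;         /* assumed source fields */
        props->hw_ver    = dev->hw_revision;
        props->max_qp_wr = dev->attr.max_wrs;       /* "something reasonable" */
        props->max_cqe   = dev->attr.max_cq_depth;
        /* ... remaining attributes unchanged ... */
        return 0;
    }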
|
|
|
|
|
|
|
|
|
|
|
| |
During corner case testing, we noticed that some versions of ehca do
not properly transition to interrupt done in special load situations.
This can be resolved by periodically triggering EOI through H_EOI, if
EQEs are pending.
Signed-off-by: Stefan Roscher <stefan.roscher@de.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
| |
Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
|
|
|
|
|
| |
Commit 65adfa91 ("IB/mlx4: Fix RESET to RESET and RESET to ERROR
transitions") added some extra code to handle a QP state transition
from RESET to ERROR. However, the latest 1.2.1 version of the IB spec
has clarified that this transition is actually not allowed, so we can
remove this extra code again.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
|
|
|
|
|
| |
Commit b18aad71 ("IB/mthca: Fix RESET to ERROR transition") added some
extra code to handle a QP state transition from RESET to ERROR.
However, the latest 1.2.1 version of the IB spec has clarified that
this transition is actually not allowed, so we can remove this extra
code again.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I was reviewing the QP state transition diagram in the IB 1.2.1 spec
and the code for qp_state_table[], and noticed that the code allows a
QP to be modified from IB_QPS_RESET to IB_QPS_ERR whereas the notes
for figure 124 (pg 457) specifically says that this transition isn't
allowed. This is a clarification from earlier versions of the IB
spec, which were ambiguous in this area and suggested that the RESET
to ERR transition was allowed.
Fix up the qp_state_table[] to make RESET->ERR not allowed.
Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
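For reference, a simplified sketch of the table in
drivers/infiniband/core/verbs.c (the real entries also carry required and
optional attribute masks, elided here):

    static const struct {
        int valid;
        /* ... req_param/opt_param masks elided ... */
    } qp_state_table[IB_QPS_ERR + 1][IB_QPS_ERR + 1] = {
        [IB_QPS_RESET] = {
            [IB_QPS_RESET] = { .valid = 1 },
            [IB_QPS_INIT]  = { .valid = 1 /* , required attrs */ },
            /* no [IB_QPS_ERR] entry: RESET -> ERR is now rejected, per the
             * notes to figure 124 in the IB 1.2.1 spec. */
        },
        /* ... other source states ... */
    };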
|
|
|
|
|
|
|
|
| |
ConnectX HCAs support the IB_MGMT_CLASS_CONG_MGMT management class, so
process MADs of this class through the MAD_IFC firmware command.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
ConnectX returns the max message size it supports through the
QUERY_DEV_CAP firmware command. When modifying a QP to RTR, the max
message size for the QP must be specified. This value must not exceed
the value declared through QUERY_DEV_CAP. The current code ignores
the max allowed size and unconditionally sets the value to 2^31. This
patch sets all QPs to the max value allowed as returned from firmware.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
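A sketch of the idea at the INIT-to-RTR transition; the mlx4 QP-context field
names here are assumptions, not the exact patch:

    /* Use the limit reported by QUERY_DEV_CAP instead of a hard-coded 2^31. */
    if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR)
        context->mtu_msgmax = (attr->path_mtu << 5) |
                              ilog2(dev->dev->caps.max_msg_sz);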
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- set IB_DEVICE_MEM_MGT_EXTENSIONS capability bit if fw supports it.
- set max_fast_reg_page_list_len device attribute.
- add iwch_alloc_fast_reg_mr function.
- add iwch_alloc_fastreg_pbl
- add iwch_free_fastreg_pbl
- adjust the WQ depth for kernel mode work queues to account for
fastreg possibly taking 2 WR slots.
- add fastreg_mr work request support.
- add local_inv work request support.
- add send_with_inv and send_with_se_inv work request support.
- remove useless duplicate enums/defines for TPT/MW/MR stuff.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
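Roughly how the new provider entry points might be wired up at registration
time; the method names follow the bullets above, but the surrounding code and
the firmware check are assumptions:

    dev->ibdev.alloc_fast_reg_mr        = iwch_alloc_fast_reg_mr;
    dev->ibdev.alloc_fast_reg_page_list = iwch_alloc_fastreg_pbl;
    dev->ibdev.free_fast_reg_page_list  = iwch_free_fastreg_pbl;
    if (fw_supports_fastreg)                           /* assumed check */
        dev->device_cap_flags |= IB_DEVICE_MEM_MGT_EXTENSIONS;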
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch adds support for the IB "base memory management extension"
(BMME) and the equivalent iWARP operations (which the iWARP verbs
mandate all devices must implement). The new operations are:
- Allocate an ib_mr for use in fast register work requests.
- Allocate/free physical buffer lists for use in fast register work
requests. This allows device drivers to allocate this memory as
needed for use in posting send requests (eg via dma_alloc_coherent).
- New send queue work requests:
* send with remote invalidate
* fast register memory region
* local invalidate memory region
* RDMA read with invalidate local memory region (iWARP only)
Consumer interface details:
- A new device capability flag IB_DEVICE_MEM_MGT_EXTENSIONS is added
to indicate device support for these features.
- New send work request opcodes IB_WR_FAST_REG_MR, IB_WR_LOCAL_INV,
IB_WR_RDMA_READ_WITH_INV are added.
- A new consumer API function, ib_alloc_mr() is added to allocate
fast register memory regions.
- New consumer API functions, ib_alloc_fast_reg_page_list() and
ib_free_fast_reg_page_list() are added to allocate and free
device-specific memory for fast registration page lists.
- A new consumer API function, ib_update_fast_reg_key(), is added to
allow the key portion of the R_Key and L_Key of a fast registration
MR to be updated. Consumers call this if desired before posting
an IB_WR_FAST_REG_MR work request.
Consumers can use this as follows:
- MR is allocated with ib_alloc_mr().
- Page list memory is allocated with ib_alloc_fast_reg_page_list().
- MR R_Key/L_Key "key" field is updated with ib_update_fast_reg_key().
- MR made VALID and bound to a specific page list via
ib_post_send(IB_WR_FAST_REG_MR)
- MR made INVALID via ib_post_send(IB_WR_LOCAL_INV),
ib_post_send(IB_WR_RDMA_READ_WITH_INV) or an incoming send with
invalidate operation.
- MR is deallocated with ib_dereg_mr().
- Page lists are deallocated via ib_free_fast_reg_page_list().
Applications can allocate a fast register MR once, and then can
repeatedly bind the MR to different physical block lists (PBLs) via
posting work requests to a send queue (SQ). For each outstanding
MR-to-PBL binding in the SQ pipe, a fast_reg_page_list needs to be
allocated (the fast_reg_page_list is owned by the low-level driver
from the consumer posting a work request until the request completes).
Thus pipelining can be achieved while still allowing device-specific
page_list processing.
The 32-bit fast register memory key/STag is composed of a 24-bit index
and an 8-bit key. The application can change the key each time it
fast registers thus allowing more control over the peer's use of the
key/STag (ie it can effectively be changed each time the rkey is
rebound to a page list).
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
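A condensed consumer-side sketch of the flow listed above (pd, qp, key and
max_page_list_len are assumed to exist in the caller; function and opcode
names follow the text, while the work-request field layout is an assumption
and error handling is omitted):

    struct ib_mr *mr;
    struct ib_fast_reg_page_list *pl;
    struct ib_send_wr wr, inv_wr, *bad_wr;

    mr = ib_alloc_mr(pd, max_page_list_len);           /* allocate fast-reg MR */
    pl = ib_alloc_fast_reg_page_list(pd->device, max_page_list_len);

    /* ... fill pl->page_list[] with the DMA addresses to register ... */

    ib_update_fast_reg_key(mr, ++key);                 /* pick a fresh 8-bit key */

    memset(&wr, 0, sizeof(wr));                        /* bind MR to the PBL */
    wr.opcode                = IB_WR_FAST_REG_MR;
    wr.wr.fast_reg.page_list = pl;                     /* assumed field names */
    wr.wr.fast_reg.rkey      = mr->rkey;
    /* ... plus iova_start, length, page_list_len, access_flags ... */
    ib_post_send(qp, &wr, &bad_wr);

    memset(&inv_wr, 0, sizeof(inv_wr));                /* later: invalidate */
    inv_wr.opcode             = IB_WR_LOCAL_INV;
    inv_wr.ex.invalidate_rkey = mr->rkey;              /* assumed field name */
    ib_post_send(qp, &inv_wr, &bad_wr);

    ib_free_fast_reg_page_list(pl);                    /* clean up */
    ib_dereg_mr(mr);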
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The connected mode implementation in the IPoIB driver has a large
overhead in the way SKBs are handled in the receive flow. It usually
allocates an SKB as big as the one used for the currently received
packet and moves unused fragments from the old SKB to the new one.
This involves a loop over all the remaining fragments and incurs
overhead on the CPU. With this patch, for small SKBs, an SKB just
large enough to contain the received data is allocated, and the data
is copied into it from the received SKB. The newly allocated SKB is
passed up the stack and the old SKB is reposted.
When running netperf with small UDP messages, without this patch I get:

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
14.4.3.178 (14.4.3.178) port 0 AF_INET

Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

114688     128   10.00     5142034      0     526.31
114688           10.00     1130489            115.71
With this patch I get both send and receive at ~315 Mbps.
The reason that send performance actually slows down is as follows:
when using this patch, the CPU overhead for handling RX packets is
dramatically reduced. As a result, we do not experience RNR NAK
messages from the receiver which cause the connection to be closed and
reopened again; when the patch is not used, the receiver cannot handle
the packets fast enough, so there is less time to post new buffers and
hence the mentioned RNR NAKs. So what happens is that the
application *thinks* it posted a certain number of packets for
transmission, but these packets are flushed and do not really get
transmitted. Since the connection gets opened and closed many times,
each time netperf gets the CPU time that otherwise would have been
given to IPoIB to actually transmit the packets. This can be verified
by looking at the port counters -- the output of ifconfig and the
output of netperf (this is for the case without the patch):
tx packets
==========
port counter: 1,543,996
ifconfig: 1,581,426
netperf: 5,142,034
rx packets
==========
netperf: 1,130,489
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
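A sketch of the receive-path change described in the first paragraph above;
this is a fragment of the completion handler, and the copy-break threshold
name and surrounding ring handling are assumptions:

    if (wc->byte_len < IPOIB_CM_COPYBREAK) {        /* assumed threshold name */
        struct sk_buff *small;

        small = dev_alloc_skb(wc->byte_len + NET_IP_ALIGN);
        if (small) {
            skb_reserve(small, NET_IP_ALIGN);
            skb_copy_from_linear_data(skb, skb_put(small, wc->byte_len),
                                      wc->byte_len);
            /* the right-sized copy goes up the stack ... */
            skb = small;
            /* ... and the original receive buffer is simply reposted */
            goto copied;
        }
    }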
|
|
|
|
|
|
| |
They don't get updated by git and so they're worse than useless.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
| |
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The idea is that for QPs with fixed size work requests (eg selective
signaling QPs), before stamping the WQE, we read the value of the DS
field, which gives the effective size of the descriptor as used in the
previous post. Then we stamp only that area, since the rest of the
descriptor is already stamped.
When initializing the send queue buffer, make sure the DS field is
initialized to the max descriptor size so that the subsequent stamping
will be done on the entire descriptor area.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
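A sketch of the optimization; the control-segment layout and stamp value are
assumptions, with the DS field holding the descriptor size in 16-byte units:

    static void stamp_send_wqe(struct mlx4_ib_qp *qp, int n)
    {
        struct mlx4_wqe_ctrl_seg *ctrl = get_send_wqe(qp, n); /* assumed helper */
        int ds = (ctrl->fence_size & 0x3f) << 4;   /* bytes used by last post  */
        __be32 *p;
        int i;

        /* stamp only the portion the previous post actually used;
         * the rest of the descriptor is still stamped from before */
        for (i = 16; i < ds; i += 16) {
            p = (void *)ctrl + i;
            *p = cpu_to_be32(0x7fffffff);          /* assumed stamp value */
        }
    }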
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch solves a race that occurs after an event that causes
the SA query module to flush its SM address handle (AH). When the SM AH
becomes invalid and needs an update, this is handled by the global
workqueue. On the other hand, this event is also handled in the IPoIB
driver by queuing work on the ipoib_workqueue that does multicast
joins. Although the queuing happens in the right order, it goes to two
different workqueues, so there is no guarantee that the first to be
queued is the first to be executed.
This causes a problem because IPoIB may end up sending a request to
the old SM, which will take a long time to time out (since the old SM
is gone); this leads to a much longer than necessary interruption in
multicast traffic.
The patch sets the SA query module's SM AH to NULL when the event
occurs, and until update_sm_ah() is done, any request that needs sm_ah
fails with a -EAGAIN return status.
For consumers, the patch doesn't make things worse. Before the patch,
MADs were sent to the wrong SM, so the request got lost. Consumers can
be improved by examining the return code and responding to -EAGAIN
properly, but even without such an improvement the situation does not
get worse.
Signed-off-by: Moni Levy <monil@voltaire.com>
Signed-off-by: Moni Shoua <monis@voltaire.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
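How a consumer might adapt; the helper and retry policy below are
hypothetical, not part of the patch. A query issued while the SM AH is being
refreshed now fails quickly with -EAGAIN instead of going to the dead SM, so
the caller can back off and retry:

    static int issue_query_with_retry(void *ctx)    /* hypothetical wrapper */
    {
        int ret, tries = 5;

        do {
            ret = issue_sa_path_query(ctx);         /* wraps ib_sa_path_rec_get() */
            if (ret != -EAGAIN)
                return ret;
            msleep(100);                            /* SM AH update is in flight */
        } while (--tries);

        return -EAGAIN;
    }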
|
|
|
|
|
|
|
|
|
| |
The license text for several files references a third software license
that was inadvertently copied in. Update the license to what was
intended. This update was based on a request from HP.
Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
|
|
|
|
|
|
|
|
|
| |
Remove an explicit memset(..., 0, ...) of a 'listener' structure
allocated with kzalloc().
Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
Acked-by: Faisal Latif <faisal@neteffect.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
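For reference, the pattern being cleaned up ("listener" is the structure named
above; the surrounding code is paraphrased): kzalloc() already returns zeroed
memory, so the explicit memset() was redundant:

    listener = kzalloc(sizeof(*listener), GFP_KERNEL);
    if (!listener)
        return NULL;
    /* memset(listener, 0, sizeof(*listener));  <-- dropped: kzalloc zeroes it */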
|
|
|
|
|
|
|
|
|
|
|
| |
The SRP initiator is currently using ib_find_cached_pkey() and
ib_get_cached_gid() in situations where the uncached ib_find_pkey()
and ib_query_gid() functions serve just as well: sleeping is allowed
and performance is not an issue. Since we want to eliminate the
cached operations in the long term, convert SRP to use the uncached
variants.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
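The substitution, sketched with placeholder ibdev/port/pkey variables standing
in for SRP's target fields; both uncached core calls may sleep, which is fine
in SRP's connection-setup path:

    u16 pkey_index;
    union ib_gid gid;

    /* was: ib_find_cached_pkey(ibdev, port, pkey, &pkey_index); */
    if (ib_find_pkey(ibdev, port, pkey, &pkey_index))
        return -EIO;

    /* was: ib_get_cached_gid(ibdev, port, 0, &gid); */
    if (ib_query_gid(ibdev, port, 0, &gid))
        return -EIO;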
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* 'for-2.6.27' of git://git.infradead.org/users/dwmw2/firmware-2.6: (64 commits)
firmware: convert sb16_csp driver to use firmware loader exclusively
dsp56k: use request_firmware
edgeport-ti: use request_firmware()
edgeport: use request_firmware()
vicam: use request_firmware()
dabusb: use request_firmware()
cpia2: use request_firmware()
ip2: use request_firmware()
firmware: convert Ambassador ATM driver to request_firmware()
whiteheat: use request_firmware()
ti_usb_3410_5052: use request_firmware()
emi62: use request_firmware()
emi26: use request_firmware()
keyspan_pda: use request_firmware()
keyspan: use request_firmware()
ttusb-budget: use request_firmware()
kaweth: use request_firmware()
smctr: use request_firmware()
firmware: convert ymfpci driver to use firmware loader exclusively
firmware: convert maestro3 driver to use firmware loader exclusively
...
Fix up trivial conflicts with BKL removal in drivers/char/dsp56k.c and
drivers/char/ip2/ip2main.c manually.
|
| |\
| | |
| | |
| | |
| | |
| | | |
Conflicts:
sound/pci/Kconfig
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Firmware blob looks like this...
uint8_t MajorVersion
uint8_t MinorVersion
__le16 BuildNumber
uint8_t data[]
Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
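The same layout expressed as a C struct for clarity; the struct name is made
up here, only the field order and types come from the list above:

    struct fw_blob_header {
        uint8_t major_version;      /* MajorVersion */
        uint8_t minor_version;      /* MinorVersion */
        __le16  build_number;       /* BuildNumber, little-endian */
        uint8_t data[];             /* firmware payload follows the header */
    } __attribute__((packed));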
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Version number provided in first HEX record.
Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Although it wasn't actually using ihex records before, we use the Intel
HEX record format for this firmware -- because that gives us a simple
way to split it into separate chunks internally as we need, without
loading each part as a separate file.
Signed-off-by: Jaswinder Singh <jaswinder@infradead.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
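Typical driver-side usage of the ihex helpers this series introduces in
<linux/ihex.h>; the firmware name and the device-write step below are
placeholders:

    static int load_chunked_fw(struct device *dev)
    {
        const struct firmware *fw;
        const struct ihex_binrec *rec;
        int err;

        err = request_ihex_firmware(&fw, "vendor/device.fw", dev);
        if (err)
            return err;

        for (rec = (const struct ihex_binrec *)fw->data; rec;
             rec = ihex_next_binrec(rec)) {
            /* each record carries its own load address and length */
            write_chunk(dev, be32_to_cpu(rec->addr),
                        rec->data, be16_to_cpu(rec->len));  /* placeholder */
        }

        release_firmware(fw);
        return 0;
    }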
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
| | |
| | |
| | |
| | |
| | |
| | | |
Thanks to Jaswinder Singh for converting the firmware blob itself to ihex.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Converted with help from Jaswinder Singh
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Alan Cox <alan@redhat.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Since it had various regions to be loaded to separate addresses, and it
wanted to do them in fairly small chunks anyway, switch it to use the
new ihex code. Encode the start address in the first record.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Chas Williams <chas@cmf.nrl.navy.mil>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Mauro Carvalho Chehab <mchehab@infradead.org>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | | |
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This will control whether we build firmware into the kernel image for
_every_ driver which we convert to request_firmware(), to avoid a
proliferation of 'CONFIG_XXX_FIRMWARE' options for each one.
Default to 'y' for now, which is the wrong thing to do but people seem
to be insisting on it and refusing to even review patches until it's
done. And it does preserve the existing behaviour for built-in drivers.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This allows arbitrary firmware files to be included in the static kernel
where the firmware loader can find them without requiring userspace to
be alive.
(Updated and CONFIG_EXTRA_FIRMWARE_DIR added with lots of help from
Johannes Berg).
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Some drivers have their own hacks to bypass the kernel's firmware loader
and build their firmware into the kernel; this renders those unnecessary.
Other drivers don't use the firmware loader at all, because they always
want the firmware to be available. This allows them to start using the
firmware loader.
A third set of drivers already use the firmware loader, but can't be
used without help from userspace, which sometimes requires an initrd.
This allows them to work in a static kernel.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
In preparation for supporting firmware files linked into the static
kernel, make fw->data const to ensure that users aren't modifying it (so
that we can pass a pointer to the original in-kernel copy, rather than
having to copy it).
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
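With that change, a typical consumer looks like this (the firmware name and
the device-write helper are illustrative); fw->data is now a const u8 * that
may point straight at an in-kernel copy:

    static int load_fw(struct device *dev)
    {
        const struct firmware *fw;
        int err;

        err = request_firmware(&fw, "vendor/device.fw", dev);
        if (err)
            return err;

        /* fw->data is const: treat the image as read-only */
        write_image_to_device(dev, fw->data, fw->size);   /* placeholder */
        release_firmware(fw);
        return 0;
    }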
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Fix a const pointer to non-const pointer assignment error in the Conexant
cx23418 MPEG encoder driver.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Fix an assignment of a const pointer to a non-const pointer in moxa_load_fw().
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|
| | |
| | |
| | |
| | |
| | | |
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
|