author    David S. Miller <davem@davemloft.net>  2008-07-15 01:30:17 -0400
committer David S. Miller <davem@davemloft.net>  2008-07-15 01:30:17 -0400
commit    925068dcdc746236264d1877d3d5df656e87882a (patch)
tree      dc7615e1e87a1ca26ee31510c240a1c85fb6f1ad /Documentation/networking
parent    83aa2e964b9b04effa304aaf3c1090b46812a04b (diff)
parent    67fbbe1551b24d1bcab8478407f9b8c713d5596e (diff)
Merge branch 'davem-next' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6
Diffstat (limited to 'Documentation/networking')
-rw-r--r--  Documentation/networking/ixgb.txt  419
1 files changed, 320 insertions, 99 deletions
diff --git a/Documentation/networking/ixgb.txt b/Documentation/networking/ixgb.txt
index 7c98277777eb..a0d0ffb5e584 100644
--- a/Documentation/networking/ixgb.txt
+++ b/Documentation/networking/ixgb.txt
@@ -1,7 +1,7 @@
1Linux* Base Driver for the Intel(R) PRO/10GbE Family of Adapters 1Linux Base Driver for 10 Gigabit Intel(R) Network Connection
2================================================================ 2=============================================================
3 3
4November 17, 2004 4October 9, 2007
5 5
6 6
7Contents 7Contents
@@ -9,94 +9,151 @@ Contents
9 9
10- In This Release 10- In This Release
11- Identifying Your Adapter 11- Identifying Your Adapter
12- Building and Installation
12- Command Line Parameters 13- Command Line Parameters
13- Improving Performance 14- Improving Performance
15- Additional Configurations
16- Known Issues/Troubleshooting
14- Support 17- Support
15 18
16 19
20
17In This Release 21In This Release
18=============== 22===============
19 23
20This file describes the Linux* Base Driver for the Intel(R) PRO/10GbE Family 24This file describes the ixgb Linux Base Driver for the 10 Gigabit Intel(R)
21of Adapters, version 1.0.x. 25Network Connection. This driver includes support for Itanium(R)2-based
26systems.
27
28For questions related to hardware requirements, refer to the documentation
29supplied with your 10 Gigabit adapter. All hardware requirements listed apply
30to use with Linux.
31
32The following features are available in this kernel:
33 - Native VLANs
34 - Channel Bonding (teaming)
35 - SNMP
36
37Channel Bonding documentation can be found in the Linux kernel source:
38/Documentation/networking/bonding.txt
39
40The driver information previously displayed in the /proc filesystem is not
41supported in this release. Alternatively, you can use ethtool (version 1.6
42or later), lspci, and ifconfig to obtain the same information.
43
44Instructions on updating ethtool can be found in the section "Additional
45Configurations" later in this document.
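
For example, assuming the interface came up as eth1 (a placeholder; your
interface name may differ), the following standard commands report much of
the information formerly found under /proc:

    ethtool -i eth1            # driver name and version
    lspci | grep -i ethernet   # list the installed network adapters
    ifconfig eth1              # interface state and basic statistics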
22 46
23For questions related to hardware requirements, refer to the documentation
24supplied with your Intel PRO/10GbE adapter. All hardware requirements listed
25apply to use with Linux.
26 47
27Identifying Your Adapter 48Identifying Your Adapter
28======================== 49========================
29 50
30To verify your Intel adapter is supported, find the board ID number on the 51The following Intel network adapters are compatible with the drivers in this
31adapter. Look for a label that has a barcode and a number in the format 52release:
32A12345-001. 53
54Controller Adapter Name Physical Layer
55---------- ------------ --------------
5682597EX Intel(R) PRO/10GbE LR/SR/CX4 10G Base-LR (1310 nm optical fiber)
57 Server Adapters 10G Base-SR (850 nm optical fiber)
58 10G Base-CX4 (twin-axial copper cabling)
59
60For more information on how to identify your adapter, go to the Adapter &
61Driver ID Guide at:
62
63 http://support.intel.com/support/network/sb/CS-012904.htm
64
65
66Building and Installation
67=========================
68
69Select m for "Intel(R) PRO/10GbE support" located at:
70 Location:
71 -> Device Drivers
72 -> Network device support (NETDEVICES [=y])
73 -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
741. make modules && make modules_install
75
762. Load the module:
77
78    modprobe ixgb <parameter>=<value>
79
80 The insmod command can be used if the full
81 path to the driver module is specified. For example:
82
83 insmod /lib/modules/<KERNEL VERSION>/kernel/drivers/net/ixgb/ixgb.ko
84
85 With 2.6-based kernels, also make sure that older ixgb drivers are
86 removed from the kernel before loading the new module:
33 87
34Use the above information and the Adapter & Driver ID Guide at: 88 rmmod ixgb; modprobe ixgb
35 89
36 http://support.intel.com/support/network/adapter/pro100/21397.htm 903. Assign an IP address to the interface by entering the following, where
91 x is the interface number:
37 92
38For the latest Intel network drivers for Linux, go to: 93 ifconfig ethx <IP_address>
94
954. Verify that the interface works. Enter the following, where <IP_address>
96 is the IP address for another machine on the same subnet as the interface
97 that is being tested:
98
99 ping <IP_address>
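
 If the interface does not respond, it can help to verify that the module
 loaded and that the driver detected the adapter (a general suggestion, not
 part of the original steps; adjust the interface name as above):

    lsmod | grep ixgb
    dmesg | grep -i ixgb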
39 100
40 http://downloadfinder.intel.com/scripts-df/support_intel.asp
41 101
42Command Line Parameters 102Command Line Parameters
43======================= 103=======================
44 104
45If the driver is built as a module, the following optional parameters are 105If the driver is built as a module, the following optional parameters are
46used by entering them on the command line with the modprobe or insmod command 106used by entering them on the command line with the modprobe command using
47using this syntax: 107this syntax:
48 108
49 modprobe ixgb [<option>=<VAL1>,<VAL2>,...] 109 modprobe ixgb [<option>=<VAL1>,<VAL2>,...]
50 110
51 insmod ixgb [<option>=<VAL1>,<VAL2>,...] 111For example, with two 10GbE PCI adapters, entering:
52 112
53For example, with two PRO/10GbE PCI adapters, entering: 113 modprobe ixgb TxDescriptors=80,128
54 114
55 insmod ixgb TxDescriptors=80,128 115loads the ixgb driver with 80 TX resources for the first adapter and 128 TX
56
57loads the ixgb driver with 80 TX resources for the first adapter and 128 TX
58resources for the second adapter. 116resources for the second adapter.
59 117
60The default value for each parameter is generally the recommended setting, 118The default value for each parameter is generally the recommended setting,
61unless otherwise noted. Also, if the driver is statically built into the 119unless otherwise noted.
62kernel, the driver is loaded with the default values for all the parameters.
63Ethtool can be used to change some of the parameters at runtime.
64 120
65FlowControl 121FlowControl
66Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx) 122Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
67Default: Read from the EEPROM 123Default: Read from the EEPROM
68 If EEPROM is not detected, default is 3 124 If EEPROM is not detected, default is 1
69 This parameter controls the automatic generation (Tx) and response (Rx) to 125 This parameter controls the automatic generation (Tx) and response (Rx) to
70 Ethernet PAUSE frames. 126 Ethernet PAUSE frames. There are hardware bugs associated with enabling
 127 Tx flow control, so beware.
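
 Given the note above about Tx flow control, one possible setting (shown
 only as an illustration) is to respond to PAUSE frames without generating
 them:

    modprobe ixgb FlowControl=1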
71 128
72RxDescriptors 129RxDescriptors
73Valid Range: 64-512 130Valid Range: 64-512
74Default Value: 512 131Default Value: 512
75 This value is the number of receive descriptors allocated by the driver. 132 This value is the number of receive descriptors allocated by the driver.
76 Increasing this value allows the driver to buffer more incoming packets. 133 Increasing this value allows the driver to buffer more incoming packets.
77 Each descriptor is 16 bytes. A receive buffer is also allocated for 134 Each descriptor is 16 bytes. A receive buffer is also allocated for
78 each descriptor and can be either 2048, 4056, 8192, or 16384 bytes, 135 each descriptor and can be either 2048, 4056, 8192, or 16384 bytes,
79 depending on the MTU setting. When the MTU size is 1500 or less, the 136 depending on the MTU setting. When the MTU size is 1500 or less, the
80 receive buffer size is 2048 bytes. When the MTU is greater than 1500 the 137 receive buffer size is 2048 bytes. When the MTU is greater than 1500 the
81 receive buffer size will be either 4056, 8192, or 16384 bytes. The 138 receive buffer size will be either 4056, 8192, or 16384 bytes. The
82 maximum MTU size is 16114. 139 maximum MTU size is 16114.
83 140
84RxIntDelay 141RxIntDelay
85Valid Range: 0-65535 (0=off) 142Valid Range: 0-65535 (0=off)
86Default Value: 6 143Default Value: 72
87 This value delays the generation of receive interrupts in units of 144 This value delays the generation of receive interrupts in units of
88 0.8192 microseconds. Receive interrupt reduction can improve CPU 145 0.8192 microseconds. Receive interrupt reduction can improve CPU
89 efficiency if properly tuned for specific network traffic. Increasing 146 efficiency if properly tuned for specific network traffic. Increasing
90 this value adds extra latency to frame reception and can end up 147 this value adds extra latency to frame reception and can end up
91 decreasing the throughput of TCP traffic. If the system is reporting 148 decreasing the throughput of TCP traffic. If the system is reporting
92 dropped receives, this value may be set too high, causing the driver to 149 dropped receives, this value may be set too high, causing the driver to
93 run out of available receive descriptors. 150 run out of available receive descriptors.
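
 For latency-sensitive traffic the interrupt delay can be turned off
 entirely, for example (an illustration only):

    modprobe ixgb RxIntDelay=0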
94 151
95TxDescriptors 152TxDescriptors
96Valid Range: 64-4096 153Valid Range: 64-4096
97Default Value: 256 154Default Value: 256
98 This value is the number of transmit descriptors allocated by the driver. 155 This value is the number of transmit descriptors allocated by the driver.
99 Increasing this value allows the driver to queue more transmits. Each 156 Increasing this value allows the driver to queue more transmits. Each
100 descriptor is 16 bytes. 157 descriptor is 16 bytes.
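
 As an illustration (values chosen arbitrarily within the documented
 ranges), a host with two adapters could enlarge both descriptor rings at
 load time:

    modprobe ixgb TxDescriptors=1024,1024 RxDescriptors=512,512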
101 158
102XsumRX 159XsumRX
@@ -105,51 +162,49 @@ Default Value: 1
105 A value of '1' indicates that the driver should enable IP checksum 162 A value of '1' indicates that the driver should enable IP checksum
106 offload for received packets (both UDP and TCP) to the adapter hardware. 163 offload for received packets (both UDP and TCP) to the adapter hardware.
107 164
108XsumTX
109Valid Range: 0-1
110Default Value: 1
111 A value of '1' indicates that the driver should enable IP checksum
112 offload for transmitted packets (both UDP and TCP) to the adapter
113 hardware.
114 165
115Improving Performance 166Improving Performance
116===================== 167=====================
117 168
118With the Intel PRO/10 GbE adapter, the default Linux configuration will very 169With the 10 Gigabit server adapters, the default Linux configuration will
119likely limit the total available throughput artificially. There is a set of 170very likely limit the total available throughput artificially. There is a set
120things that when applied together increase the ability of Linux to transmit 171of configuration changes that, when applied together, will increase the ability
121and receive data. The following enhancements were originally acquired from 172of Linux to transmit and receive data. The following enhancements were
122settings published at http://www.spec.org/web99 for various submitted results 173originally acquired from settings published at http://www.spec.org/web99/ for
123using Linux. 174various submitted results using Linux.
124 175
125NOTE: These changes are only suggestions, and serve as a starting point for 176NOTE: These changes are only suggestions, and serve as a starting point for
126tuning your network performance. 177 tuning your network performance.
127 178
128The changes are made in three major ways, listed in order of greatest effect: 179The changes are made in three major ways, listed in order of greatest effect:
129- Use ifconfig to modify the mtu (maximum transmission unit) and the txqueuelen 180- Use ifconfig to modify the mtu (maximum transmission unit) and the txqueuelen
130 parameter. 181 parameter.
131- Use sysctl to modify /proc parameters (essentially kernel tuning) 182- Use sysctl to modify /proc parameters (essentially kernel tuning)
132- Use setpci to modify the MMRBC field in PCI-X configuration space to increase 183- Use setpci to modify the MMRBC field in PCI-X configuration space to increase
133 transmit burst lengths on the bus. 184 transmit burst lengths on the bus.
134 185
135NOTE: setpci modifies the adapter's configuration registers to allow it to read 186NOTE: setpci modifies the adapter's configuration registers to allow it to read
136up to 4k bytes at a time (for transmits). However, for some systems the 187up to 4k bytes at a time (for transmits). However, for some systems the
137behavior after modifying this register may be undefined (possibly errors of some 188behavior after modifying this register may be undefined (possibly errors of
138kind). A power-cycle, hard reset or explicitly setting the e6 register back to 189some kind). A power-cycle, hard reset or explicitly setting the e6 register
13922 (setpci -d 8086:1048 e6.b=22) may be required to get back to a stable 190back to 22 (setpci -d 8086:1a48 e6.b=22) may be required to get back to a
140configuration. 191stable configuration.
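
Before modifying the MMRBC field it may be worth recording the current value
so it can be restored later. The device ID 1a48 matches the example script
below; substitute the ID reported for your adapter by lspci if it differs:

    lspci -d 8086: -nn          # locate the Intel 10GbE device and its ID
    setpci -d 8086:1a48 e6.b    # read the current value of register e6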
141 192
142- COPY these lines and paste them into ixgb_perf.sh: 193- COPY these lines and paste them into ixgb_perf.sh:
143#!/bin/bash 194#!/bin/bash
144echo "configuring network performance, edit this file to change the interface" 195echo "configuring network performance, edit this file to change the interface
196or device ID of 10GbE card"
145# set mmrbc to 4k reads, modify only Intel 10GbE device IDs 197# set mmrbc to 4k reads, modify only Intel 10GbE device IDs
146setpci -d 8086:1048 e6.b=2e 198# replace 1a48 with the appropriate 10GbE device ID installed on the system,
147# set the MTU (max transmission unit) - it requires your switch and clients to change too! 199# if needed.
200setpci -d 8086:1a48 e6.b=2e
201# set the MTU (max transmission unit) - it requires your switch and clients
202# to change as well.
148# set the txqueuelen 203# set the txqueuelen
149# your ixgb adapter should be loaded as eth1 for this to work, change if needed 204# your ixgb adapter should be loaded as eth1 for this to work, change if needed
150ifconfig eth1 mtu 9000 txqueuelen 1000 up 205ifconfig eth1 mtu 9000 txqueuelen 1000 up
151# call the sysctl utility to modify /proc/sys entries 206# call the sysctl utility to modify /proc/sys entries
152sysctl -p ./sysctl_ixgb.conf 207sysctl -p ./sysctl_ixgb.conf
153- END ixgb_perf.sh 208- END ixgb_perf.sh
154 209
155- COPY these lines and paste them into sysctl_ixgb.conf: 210- COPY these lines and paste them into sysctl_ixgb.conf:
@@ -159,54 +214,220 @@ sysctl -p ./sysctl_ixgb.conf
159# several network benchmark tests, your mileage may vary 214# several network benchmark tests, your mileage may vary
160 215
161### IPV4 specific settings 216### IPV4 specific settings
162net.ipv4.tcp_timestamps = 0 # turns TCP timestamp support off, default 1, reduces CPU use 217# turn TCP timestamp support off, default 1, reduces CPU use
163net.ipv4.tcp_sack = 0 # turn SACK support off, default on 218net.ipv4.tcp_timestamps = 0
164# on systems with a VERY fast bus -> memory interface this is the big gainer 219# turn SACK support off, default on
165net.ipv4.tcp_rmem = 10000000 10000000 10000000 # sets min/default/max TCP read buffer, default 4096 87380 174760 220# on systems with a VERY fast bus -> memory interface this is the big gainer
166net.ipv4.tcp_wmem = 10000000 10000000 10000000 # sets min/pressure/max TCP write buffer, default 4096 16384 131072 221net.ipv4.tcp_sack = 0
167net.ipv4.tcp_mem = 10000000 10000000 10000000 # sets min/pressure/max TCP buffer space, default 31744 32256 32768 222# set min/default/max TCP read buffer, default 4096 87380 174760
223net.ipv4.tcp_rmem = 10000000 10000000 10000000
224# set min/pressure/max TCP write buffer, default 4096 16384 131072
225net.ipv4.tcp_wmem = 10000000 10000000 10000000
226# set min/pressure/max TCP buffer space, default 31744 32256 32768
227net.ipv4.tcp_mem = 10000000 10000000 10000000
168 228
169### CORE settings (mostly for socket and UDP effect) 229### CORE settings (mostly for socket and UDP effect)
170net.core.rmem_max = 524287 # maximum receive socket buffer size, default 131071 230# set maximum receive socket buffer size, default 131071
171net.core.wmem_max = 524287 # maximum send socket buffer size, default 131071 231net.core.rmem_max = 524287
172net.core.rmem_default = 524287 # default receive socket buffer size, default 65535 232# set maximum send socket buffer size, default 131071
173net.core.wmem_default = 524287 # default send socket buffer size, default 65535 233net.core.wmem_max = 524287
174net.core.optmem_max = 524287 # maximum amount of option memory buffers, default 10240 234# set default receive socket buffer size, default 65535
175net.core.netdev_max_backlog = 300000 # number of unprocessed input packets before kernel starts dropping them, default 300 235net.core.rmem_default = 524287
236# set default send socket buffer size, default 65535
237net.core.wmem_default = 524287
238# set maximum amount of option memory buffers, default 10240
239net.core.optmem_max = 524287
240# set number of unprocessed input packets before kernel starts dropping them; default 300
241net.core.netdev_max_backlog = 300000
176- END sysctl_ixgb.conf 242- END sysctl_ixgb.conf
177 243
178Edit the ixgb_perf.sh script if necessary to change eth1 to whatever interface 244Edit the ixgb_perf.sh script if necessary to change eth1 to whatever interface
179your ixgb driver is using. 245your ixgb driver is using and/or replace '1a48' with appropriate 10GbE device's
246ID installed on the system.
180 247
181NOTE: Unless these scripts are added to the boot process, these changes will 248NOTE: Unless these scripts are added to the boot process, these changes will
182only last until the next system reboot. 249 only last until the next system reboot.
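
One way (distribution dependent, shown only as a sketch) to reapply the
settings at every boot is to call the script from a startup file such as
/etc/rc.local, using whatever path you saved it under:

    # in /etc/rc.local or an equivalent startup script (path is hypothetical)
    /usr/local/sbin/ixgb_perf.sh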
183 250
184 251
185Resolving Slow UDP Traffic 252Resolving Slow UDP Traffic
186-------------------------- 253--------------------------
254If your server does not seem to be able to receive UDP traffic as fast as it
255can receive TCP traffic, it could be because Linux, by default, does not set
256the network stack buffers as large as they need to be to support high UDP
257transfer rates. One way to alleviate this problem is to allow more memory to
258be used by the IP stack to store incoming data.
187 259
188If your server does not seem to be able to receive UDP traffic as fast as it 260For instance, use the commands:
189can receive TCP traffic, it could be because Linux, by default, does not set
190the network stack buffers as large as they need to be to support high UDP
191transfer rates. One way to alleviate this problem is to allow more memory to
192be used by the IP stack to store incoming data.
193
194For instance, use the commands:
195 sysctl -w net.core.rmem_max=262143 261 sysctl -w net.core.rmem_max=262143
196and 262and
197 sysctl -w net.core.rmem_default=262143 263 sysctl -w net.core.rmem_default=262143
198to increase the read buffer memory max and default to 262143 (256k - 1) from 264to increase the read buffer memory max and default to 262143 (256k - 1) from
199defaults of max=131071 (128k - 1) and default=65535 (64k - 1). These variables 265defaults of max=131071 (128k - 1) and default=65535 (64k - 1). These variables
200will increase the amount of memory used by the network stack for receives, and 266will increase the amount of memory used by the network stack for receives, and
201can be increased significantly more if necessary for your application. 267can be increased significantly more if necessary for your application.
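
To keep these larger defaults across reboots (a common convention, not a
driver requirement), the same values can be placed in /etc/sysctl.conf:

    net.core.rmem_max = 262143
    net.core.rmem_default = 262143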
202 268
269
270Additional Configurations
271=========================
272
273 Configuring the Driver on Different Distributions
274 -------------------------------------------------
275 Configuring a network driver to load properly when the system is started is
276 distribution dependent. Typically, the configuration process involves adding
277 an alias line to /etc/modprobe.conf as well as editing other system startup
278 scripts and/or configuration files. Many popular Linux distributions ship
279 with tools to make these changes for you. To learn the proper way to
280 configure a network device for your system, refer to your distribution
281 documentation. If during this process you are asked for the driver or module
282 name, the name for the Linux Base Driver for the Intel 10GbE Family of
283 Adapters is ixgb.
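
 As an illustration of the kind of entry involved (the exact file and syntax
 vary by distribution; eth1 is a placeholder), /etc/modprobe.conf might
 contain:

    alias eth1 ixgb
    options ixgb RxIntDelay=72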
284
285 Viewing Link Messages
286 ---------------------
287 Link messages will not be displayed to the console if the distribution is
288 restricting system messages. In order to see network driver link messages on
289 your console, set dmesg to eight by entering the following:
290
291 dmesg -n 8
292
293 NOTE: This setting is not saved across reboots.
294
295
296 Jumbo Frames
297 ------------
298 The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
299 enabled by changing the MTU to a value larger than the default of 1500.
300 The maximum value for the MTU is 16114. Use the ifconfig command to
301 increase the MTU size. For example:
302
303 ifconfig ethx mtu 9000 up
304
305 The maximum MTU setting for Jumbo Frames is 16114. This value coincides
306 with the maximum Jumbo Frames size of 16128.
307
308
309 Ethtool
310 -------
311 The driver utilizes the ethtool interface for driver configuration and
312 diagnostics, as well as displaying statistical information. Ethtool
313 version 1.6 or later is required for this functionality.
314
315 The latest release of ethtool can be found from
316 http://sourceforge.net/projects/gkernel
317
318 NOTE: Ethtool 1.6 only supports a limited set of ethtool options. Support
319 for a more complete ethtool feature set can be enabled by upgrading
320 to the latest version.
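
 Typical invocations look like the following (eth1 is a placeholder; which
 options work depends on the ethtool version, as noted above):

    ethtool eth1       # link settings
    ethtool -i eth1    # driver and version information
    ethtool -S eth1    # adapter statistics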
321
322
323 NAPI
324 ----
325
326 NAPI (Rx polling mode) is supported in the ixgb driver. NAPI is enabled
327 or disabled based on the configuration of the kernel. see CONFIG_IXGB_NAPI
328
329 See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.
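
 Whether the running kernel was built with this option can usually be
 checked against the kernel configuration file, for example (the config
 file location varies by distribution):

    grep CONFIG_IXGB_NAPI /boot/config-$(uname -r)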
330
331
332Known Issues/Troubleshooting
333============================
334
335 NOTE: After installing the driver, if your Intel Network Connection is not
336 working, verify in the "In This Release" section of the readme that you have
337 installed the correct driver.
338
339 Intel(R) PRO/10GbE CX4 Server Adapter Cable Interoperability Issue with
340 Fujitsu XENPAK Module in SmartBits Chassis
341 ---------------------------------------------------------------------
342 Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4
343 Server adapter is connected to a Fujitsu XENPAK CX4 module in a SmartBits
344 chassis using 15 m/24AWG cable assemblies manufactured by Fujitsu or Leoni.
345 The CRC errors may be received either by the Intel(R) PRO/10GbE CX4
346 Server adapter or the SmartBits. If this situation occurs, using a different
347 cable assembly may resolve the issue.
348
349 CX4 Server Adapter Cable Interoperability Issues with HP Procurve 3400cl
350 Switch Port
351 ------------------------------------------------------------------------
352 Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4 Server
353 adapter is connected to an HP Procurve 3400cl switch port using short cables
354 (1 m or shorter). If this situation occurs, using a longer cable may resolve
355 the issue.
356
357 Excessive CRC errors may be observed using Fujitsu 24AWG cable assemblies that
358 are 10 m or longer or when using a Leoni 15 m/24AWG cable assembly. The CRC
359 errors may be received either by the CX4 Server adapter or at the switch. If
360 this situation occurs, using a different cable assembly may resolve the issue.
361
362
363 Jumbo Frames System Requirement
364 -------------------------------
365 Memory allocation failures have been observed on Linux systems with 64 MB
366 of RAM or less that are running Jumbo Frames. If you are using Jumbo
367 Frames, your system may require more than the advertised minimum
368 requirement of 64 MB of system memory.
369
370
371 Performance Degradation with Jumbo Frames
372 -----------------------------------------
373 Degradation in throughput performance may be observed in some Jumbo frames
374 environments. If this is observed, increasing the application's socket buffer
375 size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
376 See the specific application manual and /usr/src/linux*/Documentation/
377 networking/ip-sysctl.txt for more details.
378
379
380 Allocating Rx Buffers when Using Jumbo Frames
381 ---------------------------------------------
382 Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if
383 the available memory is heavily fragmented. This issue may be seen with PCI-X
384 adapters or with packet split disabled. This can be reduced or eliminated
385 by changing the amount of available memory for receive buffer allocation, by
386 increasing /proc/sys/vm/min_free_kbytes.
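
 For example (the value below is only a starting point and should be tuned
 for the system):

    echo 65536 > /proc/sys/vm/min_free_kbytes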
387
388
389 Multiple Interfaces on Same Ethernet Broadcast Network
390 ------------------------------------------------------
391 Due to the default ARP behavior on Linux, it is not possible to have
392 one system on two IP networks in the same Ethernet broadcast domain
393 (non-partitioned switch) behave as expected. All Ethernet interfaces
394 will respond to IP traffic for any IP address assigned to the system.
395 This results in unbalanced receive traffic.
396
397 If you have multiple interfaces in a server, do either of the following:
398
399 - Turn on ARP filtering by entering:
400 echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
401
402 - Install the interfaces in separate broadcast domains - either in
403 different switches or in a switch partitioned to VLANs.
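
 The arp_filter change shown in the first option can also be made with
 sysctl, which makes it easy to persist in /etc/sysctl.conf:

    sysctl -w net.ipv4.conf.all.arp_filter=1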
404
405
406 UDP Stress Test Dropped Packet Issue
407 --------------------------------------
408 Under a small-packet UDP stress test with the 10GbE driver, the Linux system
409 may drop UDP packets when the socket buffers fill up. You may want
410 to change the driver's Flow Control variables to the minimum value for
411 controlling packet reception.
412
413
414 Tx Hangs Possible Under Stress
415 ------------------------------
416 Under stress conditions, if TX hangs occur, turning off TSO with
417 "ethtool -K eth0 tso off" may resolve the problem.
418
419
203Support 420Support
204======= 421=======
205 422
206For general information and support, go to the Intel support website at: 423For general information, go to the Intel support website at:
207 424
208 http://support.intel.com 425 http://support.intel.com
209 426
427or the Intel Wired Networking project hosted by Sourceforge at:
428
429 http://sourceforge.net/projects/e1000
430
210If an issue is identified with the released source code on the supported 431If an issue is identified with the released source code on the supported
211kernel with a supported adapter, email the specific information related to 432kernel with a supported adapter, email the specific information related
212the issue to linux.nics@intel.com. 433to the issue to e1000-devel@lists.sf.net.