Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/ABI/testing/sysfs-ata | 99
-rw-r--r--  Documentation/ABI/testing/sysfs-devices-power | 88
-rw-r--r--  Documentation/ABI/testing/sysfs-power | 29
-rw-r--r--  Documentation/DocBook/device-drivers.tmpl | 1
-rw-r--r--  Documentation/DocBook/drm.tmpl | 1
-rw-r--r--  Documentation/DocBook/genericirq.tmpl | 84
-rw-r--r--  Documentation/DocBook/kernel-api.tmpl | 4
-rw-r--r--  Documentation/DocBook/kernel-locking.tmpl | 20
-rw-r--r--  Documentation/DocBook/tracepoint.tmpl | 5
-rw-r--r--  Documentation/RCU/checklist.txt | 46
-rw-r--r--  Documentation/RCU/stallwarn.txt | 18
-rw-r--r--  Documentation/RCU/trace.txt | 13
-rw-r--r--  Documentation/arm/00-INDEX | 2
-rw-r--r--  Documentation/arm/msm/gpiomux.txt | 176
-rw-r--r--  Documentation/block/00-INDEX | 4
-rw-r--r--  Documentation/block/barrier.txt | 261
-rw-r--r--  Documentation/block/cfq-iosched.txt | 45
-rw-r--r--  Documentation/block/writeback_cache_control.txt | 86
-rw-r--r--  Documentation/cgroups/blkio-controller.txt | 134
-rw-r--r--  Documentation/cputopology.txt | 23
-rw-r--r--  Documentation/feature-removal-schedule.txt | 28
-rw-r--r--  Documentation/filesystems/ocfs2.txt | 7
-rw-r--r--  Documentation/gpio.txt | 22
-rw-r--r--  Documentation/hwmon/sysfs-interface | 7
-rw-r--r--  Documentation/kernel-doc-nano-HOWTO.txt | 5
-rw-r--r--  Documentation/kernel-parameters.txt | 33
-rw-r--r--  Documentation/kprobes.txt | 8
-rw-r--r--  Documentation/lguest/lguest.c | 29
-rw-r--r--  Documentation/mutex-design.txt | 3
-rw-r--r--  Documentation/networking/e1000.txt | 373
-rw-r--r--  Documentation/networking/e1000e.txt | 302
-rw-r--r-- [-rwxr-xr-x]  Documentation/networking/ixgbevf.txt | 40
-rw-r--r--  Documentation/pcmcia/driver-changes.txt | 25
-rw-r--r--  Documentation/power/00-INDEX | 2
-rw-r--r--  Documentation/power/interface.txt | 2
-rw-r--r--  Documentation/power/opp.txt | 375
-rw-r--r--  Documentation/power/regulator/overview.txt | 2
-rw-r--r--  Documentation/power/runtime_pm.txt | 227
-rw-r--r--  Documentation/power/s2ram.txt | 7
-rw-r--r--  Documentation/power/swsusp.txt | 3
-rw-r--r--  Documentation/powerpc/dts-bindings/fsl/spi.txt | 24
-rw-r--r--  Documentation/sound/alsa/HD-Audio-Models.txt | 1
-rw-r--r--  Documentation/vm/page-types.c | 2
-rw-r--r--  Documentation/workqueue.txt | 381
-rw-r--r--  Documentation/x86/x86_64/kernel-stacks | 6
45 files changed, 2330 insertions, 723 deletions
diff --git a/Documentation/ABI/testing/sysfs-ata b/Documentation/ABI/testing/sysfs-ata
new file mode 100644
index 00000000000..0a932155cbb
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-ata
@@ -0,0 +1,99 @@
1What: /sys/class/ata_...
2Date: August 2008
 3Contact:	Gwendal Grignou <gwendal@google.com>
4Description:
5
 6Provide a place in sysfs for storing the ATA topology of the system. This allows
 7retrieving various kinds of information about ATA objects.
8
9Files under /sys/class/ata_port
10-------------------------------
11
12 For each port, a directory ataX is created where X is the ata_port_id of
13 the port. The device parent is the ata host device.
14
15idle_irq (read)
16
 17	Number of IRQs received by the port while idle [some ATA HBAs only].
18
19nr_pmp_links (read)
20
 21	If a SATA Port Multiplier (PM) is connected, number of links behind it.
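
	These attributes are ordinary sysfs text files and can be read with
	any file API.  A minimal userspace sketch in C (the "ata1" port name
	below is only an example, not a guaranteed directory name):

	#include <stdio.h>

	int main(void)
	{
		char buf[64];
		FILE *f = fopen("/sys/class/ata_port/ata1/nr_pmp_links", "r");

		if (!f)
			return 1;	/* no such port, or no permission */
		if (fgets(buf, sizeof(buf), f))
			printf("links behind the PMP: %s", buf);
		fclose(f);
		return 0;
	}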
22
23Files under /sys/class/ata_link
24-------------------------------
25
 26	Behind each port, there is an ata_link. If there is a SATA PM in the
27 topology, 15 ata_link objects are created.
28
29 If a link is behind a port, the directory name is linkX, where X is
 30	the ata_port_id of the port.
 31	If a link is behind a PM, its name is linkX.Y where X is the ata_port_id
32 of the parent port and Y the PM port.
33
34hw_sata_spd_limit
35
36 Maximum speed supported by the connected SATA device.
37
38sata_spd_limit
39
40 Maximum speed imposed by libata.
41
42sata_spd
43
 44	Current speed of the link [1.5 Gbps, 3 Gbps, ...].
45
46Files under /sys/class/ata_device
47---------------------------------
48
 49	Behind each link, up to two ata devices are created.
50 The name of the directory is devX[.Y].Z where:
51 - X is ata_port_id of the port where the device is connected,
52 - Y the port of the PM if any, and
 53	- Z the device id: for PATA, there are usually 2 devices [0,1],
54 only 1 for SATA.
55
56class
57 Device class. Can be "ata" for disk, "atapi" for packet device,
58 "pmp" for PM, or "none" if no device was found behind the link.
59
60dma_mode
61
62 Transfer modes supported by the device when in DMA mode.
 63	Mostly used by PATA devices.
64
65pio_mode
66
67 Transfer modes supported by the device when in PIO mode.
 68	Mostly used by PATA devices.
69
70xfer_mode
71
72 Current transfer mode.
73
74id
75
 76	Cached result of the IDENTIFY command, as described in ATA8 7.16 and 7.17.
77 Only valid if the device is not a PM.
78
79gscr
80
81 Cached result of the dump of PM GSCR register.
82 Valid registers are:
83 0: SATA_PMP_GSCR_PROD_ID,
84 1: SATA_PMP_GSCR_REV,
85 2: SATA_PMP_GSCR_PORT_INFO,
86 32: SATA_PMP_GSCR_ERROR,
87 33: SATA_PMP_GSCR_ERROR_EN,
88 64: SATA_PMP_GSCR_FEAT,
89 96: SATA_PMP_GSCR_FEAT_EN,
90 130: SATA_PMP_GSCR_SII_GPIO
91 Only valid if the device is a PM.
92
93spdn_cnt
94
 95	Number of times libata decided to lower the speed of the link due to errors.
96
97ering
98
99 Formatted output of the error ring of the device.
diff --git a/Documentation/ABI/testing/sysfs-devices-power b/Documentation/ABI/testing/sysfs-devices-power
index 6123c523bfd..7628cd1bc36 100644
--- a/Documentation/ABI/testing/sysfs-devices-power
+++ b/Documentation/ABI/testing/sysfs-devices-power
@@ -77,3 +77,91 @@ Description:
77 devices this attribute is set to "enabled" by bus type code or 77 devices this attribute is set to "enabled" by bus type code or
78 device drivers and in that cases it should be safe to leave the 78 device drivers and in that cases it should be safe to leave the
79 default value. 79 default value.
80
81What: /sys/devices/.../power/wakeup_count
82Date: September 2010
83Contact: Rafael J. Wysocki <rjw@sisk.pl>
84Description:
85 The /sys/devices/.../wakeup_count attribute contains the number
86 of signaled wakeup events associated with the device. This
87 attribute is read-only. If the device is not enabled to wake up
88 the system from sleep states, this attribute is empty.
89
90What: /sys/devices/.../power/wakeup_active_count
91Date: September 2010
92Contact: Rafael J. Wysocki <rjw@sisk.pl>
93Description:
94 The /sys/devices/.../wakeup_active_count attribute contains the
95 number of times the processing of wakeup events associated with
96 the device was completed (at the kernel level). This attribute
97 is read-only. If the device is not enabled to wake up the
98 system from sleep states, this attribute is empty.
99
100What: /sys/devices/.../power/wakeup_hit_count
101Date: September 2010
102Contact: Rafael J. Wysocki <rjw@sisk.pl>
103Description:
104 The /sys/devices/.../wakeup_hit_count attribute contains the
105 number of times the processing of a wakeup event associated with
106 the device might prevent the system from entering a sleep state.
107 This attribute is read-only. If the device is not enabled to
108 wake up the system from sleep states, this attribute is empty.
109
110What: /sys/devices/.../power/wakeup_active
111Date: September 2010
112Contact: Rafael J. Wysocki <rjw@sisk.pl>
113Description:
 114		The /sys/devices/.../wakeup_active attribute contains either 1
115 or 0, depending on whether or not a wakeup event associated with
116 the device is being processed (1). This attribute is read-only.
117 If the device is not enabled to wake up the system from sleep
118 states, this attribute is empty.
119
120What: /sys/devices/.../power/wakeup_total_time_ms
121Date: September 2010
122Contact: Rafael J. Wysocki <rjw@sisk.pl>
123Description:
124 The /sys/devices/.../wakeup_total_time_ms attribute contains
125 the total time of processing wakeup events associated with the
126 device, in milliseconds. This attribute is read-only. If the
127 device is not enabled to wake up the system from sleep states,
128 this attribute is empty.
129
130What: /sys/devices/.../power/wakeup_max_time_ms
131Date: September 2010
132Contact: Rafael J. Wysocki <rjw@sisk.pl>
133Description:
134 The /sys/devices/.../wakeup_max_time_ms attribute contains
135 the maximum time of processing a single wakeup event associated
136 with the device, in milliseconds. This attribute is read-only.
137 If the device is not enabled to wake up the system from sleep
138 states, this attribute is empty.
139
140What: /sys/devices/.../power/wakeup_last_time_ms
141Date: September 2010
142Contact: Rafael J. Wysocki <rjw@sisk.pl>
143Description:
144 The /sys/devices/.../wakeup_last_time_ms attribute contains
145 the value of the monotonic clock corresponding to the time of
146 signaling the last wakeup event associated with the device, in
147 milliseconds. This attribute is read-only. If the device is
148 not enabled to wake up the system from sleep states, this
149 attribute is empty.
150
151What: /sys/devices/.../power/autosuspend_delay_ms
152Date: September 2010
153Contact: Alan Stern <stern@rowland.harvard.edu>
154Description:
155 The /sys/devices/.../power/autosuspend_delay_ms attribute
156 contains the autosuspend delay value (in milliseconds). Some
157 drivers do not want their device to suspend as soon as it
158 becomes idle at run time; they want the device to remain
159 inactive for a certain minimum period of time first. That
160 period is called the autosuspend delay. Negative values will
161 prevent the device from being suspended at run time (similar
162 to writing "on" to the power/control attribute). Values >=
163 1000 will cause the autosuspend timer expiration to be rounded
164 up to the nearest second.
165
166 Not all drivers support this attribute. If it isn't supported,
167 attempts to read or write it will yield I/O errors.
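
		The same delay can also be set from a driver in C through the
		runtime PM helpers described in Documentation/power/runtime_pm.txt.
		A minimal, illustrative sketch, assuming the autosuspend helpers
		added alongside this attribute (the foo_probe() name and the
		2000 ms value are made up for this example):

		#include <linux/pm_runtime.h>

		static int foo_probe(struct device *dev)
		{
			/* require 2 seconds of idleness before a runtime suspend */
			pm_runtime_set_autosuspend_delay(dev, 2000);
			pm_runtime_use_autosuspend(dev);
			pm_runtime_enable(dev);
			return 0;
		}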
diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
index 2875f1f74a0..194ca446ac2 100644
--- a/Documentation/ABI/testing/sysfs-power
+++ b/Documentation/ABI/testing/sysfs-power
@@ -99,9 +99,38 @@ Description:
99 99
100 dmesg -s 1000000 | grep 'hash matches' 100 dmesg -s 1000000 | grep 'hash matches'
101 101
102 If you do not get any matches (or they appear to be false
103 positives), it is possible that the last PM event point
104 referred to a device created by a loadable kernel module. In
 105		this case, cat /sys/power/pm_trace_dev_match (see below) after
106 your system is started up and the kernel modules are loaded.
107
102 CAUTION: Using it will cause your machine's real-time (CMOS) 108 CAUTION: Using it will cause your machine's real-time (CMOS)
103 clock to be set to a random invalid time after a resume. 109 clock to be set to a random invalid time after a resume.
104 110
 111What:		/sys/power/pm_trace_dev_match
112Date: October 2010
113Contact: James Hogan <james@albanarts.com>
114Description:
115 The /sys/power/pm_trace_dev_match file contains the name of the
116 device associated with the last PM event point saved in the RTC
117 across reboots when pm_trace has been used. More precisely it
118 contains the list of current devices (including those
119 registered by loadable kernel modules since boot) which match
120 the device hash in the RTC at boot, with a newline after each
121 one.
122
123 The advantage of this file over the hash matches printed to the
 124		kernel log (see /sys/power/pm_trace) is that it includes
125 devices created after boot by loadable kernel modules.
126
127 Due to the small hash size necessary to fit in the RTC, it is
128 possible that more than one device matches the hash, in which
129 case further investigation is required to determine which
130 device is causing the problem. Note that genuine RTC clock
 131		values (such as when pm_trace has not been used) can still
 132		match a device and output its name here.
133
105What: /sys/power/pm_async 134What: /sys/power/pm_async
106Date: January 2009 135Date: January 2009
107Contact: Rafael J. Wysocki <rjw@sisk.pl> 136Contact: Rafael J. Wysocki <rjw@sisk.pl>
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index ecd35e9d441..feca0758391 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -46,7 +46,6 @@
46 46
47 <sect1><title>Atomic and pointer manipulation</title> 47 <sect1><title>Atomic and pointer manipulation</title>
48!Iarch/x86/include/asm/atomic.h 48!Iarch/x86/include/asm/atomic.h
49!Iarch/x86/include/asm/unaligned.h
50 </sect1> 49 </sect1>
51 50
52 <sect1><title>Delaying, scheduling, and timer routines</title> 51 <sect1><title>Delaying, scheduling, and timer routines</title>
diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl
index 910c923a9b8..2861055afd7 100644
--- a/Documentation/DocBook/drm.tmpl
+++ b/Documentation/DocBook/drm.tmpl
@@ -136,6 +136,7 @@
136#ifdef CONFIG_COMPAT 136#ifdef CONFIG_COMPAT
137 .compat_ioctl = i915_compat_ioctl, 137 .compat_ioctl = i915_compat_ioctl,
138#endif 138#endif
139 .llseek = noop_llseek,
139 }, 140 },
140 .pci_driver = { 141 .pci_driver = {
141 .name = DRIVER_NAME, 142 .name = DRIVER_NAME,
diff --git a/Documentation/DocBook/genericirq.tmpl b/Documentation/DocBook/genericirq.tmpl
index 1448b33fd22..fb10fd08c05 100644
--- a/Documentation/DocBook/genericirq.tmpl
+++ b/Documentation/DocBook/genericirq.tmpl
@@ -28,7 +28,7 @@
28 </authorgroup> 28 </authorgroup>
29 29
30 <copyright> 30 <copyright>
31 <year>2005-2006</year> 31 <year>2005-2010</year>
32 <holder>Thomas Gleixner</holder> 32 <holder>Thomas Gleixner</holder>
33 </copyright> 33 </copyright>
34 <copyright> 34 <copyright>
@@ -100,6 +100,10 @@
100 <listitem><para>Edge type</para></listitem> 100 <listitem><para>Edge type</para></listitem>
101 <listitem><para>Simple type</para></listitem> 101 <listitem><para>Simple type</para></listitem>
102 </itemizedlist> 102 </itemizedlist>
103 During the implementation we identified another type:
104 <itemizedlist>
105 <listitem><para>Fast EOI type</para></listitem>
106 </itemizedlist>
103 In the SMP world of the __do_IRQ() super-handler another type 107 In the SMP world of the __do_IRQ() super-handler another type
104 was identified: 108 was identified:
105 <itemizedlist> 109 <itemizedlist>
@@ -153,6 +157,7 @@
153 is still available. This leads to a kind of duality for the time 157 is still available. This leads to a kind of duality for the time
154 being. Over time the new model should be used in more and more 158 being. Over time the new model should be used in more and more
155 architectures, as it enables smaller and cleaner IRQ subsystems. 159 architectures, as it enables smaller and cleaner IRQ subsystems.
 160	 It has been deprecated for three years now and is about to be removed.
156 </para> 161 </para>
157 </chapter> 162 </chapter>
158 <chapter id="bugs"> 163 <chapter id="bugs">
@@ -217,6 +222,7 @@
217 <itemizedlist> 222 <itemizedlist>
218 <listitem><para>handle_level_irq</para></listitem> 223 <listitem><para>handle_level_irq</para></listitem>
219 <listitem><para>handle_edge_irq</para></listitem> 224 <listitem><para>handle_edge_irq</para></listitem>
225 <listitem><para>handle_fasteoi_irq</para></listitem>
220 <listitem><para>handle_simple_irq</para></listitem> 226 <listitem><para>handle_simple_irq</para></listitem>
221 <listitem><para>handle_percpu_irq</para></listitem> 227 <listitem><para>handle_percpu_irq</para></listitem>
222 </itemizedlist> 228 </itemizedlist>
@@ -233,33 +239,33 @@
233 are used by the default flow implementations. 239 are used by the default flow implementations.
234 The following helper functions are implemented (simplified excerpt): 240 The following helper functions are implemented (simplified excerpt):
235 <programlisting> 241 <programlisting>
236default_enable(irq) 242default_enable(struct irq_data *data)
237{ 243{
238 desc->chip->unmask(irq); 244 desc->chip->irq_unmask(data);
239} 245}
240 246
241default_disable(irq) 247default_disable(struct irq_data *data)
242{ 248{
243 if (!delay_disable(irq)) 249 if (!delay_disable(data))
244 desc->chip->mask(irq); 250 desc->chip->irq_mask(data);
245} 251}
246 252
247default_ack(irq) 253default_ack(struct irq_data *data)
248{ 254{
249 chip->ack(irq); 255 chip->irq_ack(data);
250} 256}
251 257
252default_mask_ack(irq) 258default_mask_ack(struct irq_data *data)
253{ 259{
254 if (chip->mask_ack) { 260 if (chip->irq_mask_ack) {
255 chip->mask_ack(irq); 261 chip->irq_mask_ack(data);
256 } else { 262 } else {
257 chip->mask(irq); 263 chip->irq_mask(data);
258 chip->ack(irq); 264 chip->irq_ack(data);
259 } 265 }
260} 266}
261 267
 262noop(irq) 268noop(struct irq_data *data)
263{ 269{
264} 270}
265 271
@@ -278,12 +284,27 @@ noop(irq)
278 <para> 284 <para>
279 The following control flow is implemented (simplified excerpt): 285 The following control flow is implemented (simplified excerpt):
280 <programlisting> 286 <programlisting>
281desc->chip->start(); 287desc->chip->irq_mask();
282handle_IRQ_event(desc->action); 288handle_IRQ_event(desc->action);
283desc->chip->end(); 289desc->chip->irq_unmask();
284 </programlisting> 290 </programlisting>
285 </para> 291 </para>
286 </sect3> 292 </sect3>
293 <sect3 id="Default_FASTEOI_IRQ_flow_handler">
294 <title>Default Fast EOI IRQ flow handler</title>
295 <para>
 296	 handle_fasteoi_irq provides a generic implementation
 297	 for interrupts which only need an EOI at the end of
 298	 the handler.
299 </para>
300 <para>
301 The following control flow is implemented (simplified excerpt):
302 <programlisting>
303handle_IRQ_event(desc->action);
304desc->chip->irq_eoi();
305 </programlisting>
306 </para>
307 </sect3>
287 <sect3 id="Default_Edge_IRQ_flow_handler"> 308 <sect3 id="Default_Edge_IRQ_flow_handler">
288 <title>Default Edge IRQ flow handler</title> 309 <title>Default Edge IRQ flow handler</title>
289 <para> 310 <para>
@@ -294,20 +315,19 @@ desc->chip->end();
294 The following control flow is implemented (simplified excerpt): 315 The following control flow is implemented (simplified excerpt):
295 <programlisting> 316 <programlisting>
296if (desc->status &amp; running) { 317if (desc->status &amp; running) {
297 desc->chip->hold(); 318 desc->chip->irq_mask();
298 desc->status |= pending | masked; 319 desc->status |= pending | masked;
299 return; 320 return;
300} 321}
301desc->chip->start(); 322desc->chip->irq_ack();
302desc->status |= running; 323desc->status |= running;
303do { 324do {
304 if (desc->status &amp; masked) 325 if (desc->status &amp; masked)
305 desc->chip->enable(); 326 desc->chip->irq_unmask();
306 desc->status &amp;= ~pending; 327 desc->status &amp;= ~pending;
307 handle_IRQ_event(desc->action); 328 handle_IRQ_event(desc->action);
308} while (status &amp; pending); 329} while (status &amp; pending);
309desc->status &amp;= ~running; 330desc->status &amp;= ~running;
310desc->chip->end();
311 </programlisting> 331 </programlisting>
312 </para> 332 </para>
313 </sect3> 333 </sect3>
@@ -342,9 +362,9 @@ handle_IRQ_event(desc->action);
342 <para> 362 <para>
343 The following control flow is implemented (simplified excerpt): 363 The following control flow is implemented (simplified excerpt):
344 <programlisting> 364 <programlisting>
345desc->chip->start();
346handle_IRQ_event(desc->action); 365handle_IRQ_event(desc->action);
347desc->chip->end(); 366if (desc->chip->irq_eoi)
367 desc->chip->irq_eoi();
348 </programlisting> 368 </programlisting>
349 </para> 369 </para>
350 </sect3> 370 </sect3>
@@ -375,8 +395,7 @@ desc->chip->end();
375 mechanism. (It's necessary to enable CONFIG_HARDIRQS_SW_RESEND when 395 mechanism. (It's necessary to enable CONFIG_HARDIRQS_SW_RESEND when
376 you want to use the delayed interrupt disable feature and your 396 you want to use the delayed interrupt disable feature and your
377 hardware is not capable of retriggering an interrupt.) 397 hardware is not capable of retriggering an interrupt.)
378 The delayed interrupt disable can be runtime enabled, per interrupt, 398 The delayed interrupt disable is not configurable.
379 by setting the IRQ_DELAYED_DISABLE flag in the irq_desc status field.
380 </para> 399 </para>
381 </sect2> 400 </sect2>
382 </sect1> 401 </sect1>
@@ -387,13 +406,13 @@ desc->chip->end();
387 contains all the direct chip relevant functions, which 406 contains all the direct chip relevant functions, which
388 can be utilized by the irq flow implementations. 407 can be utilized by the irq flow implementations.
389 <itemizedlist> 408 <itemizedlist>
390 <listitem><para>ack()</para></listitem> 409 <listitem><para>irq_ack()</para></listitem>
391 <listitem><para>mask_ack() - Optional, recommended for performance</para></listitem> 410 <listitem><para>irq_mask_ack() - Optional, recommended for performance</para></listitem>
392 <listitem><para>mask()</para></listitem> 411 <listitem><para>irq_mask()</para></listitem>
393 <listitem><para>unmask()</para></listitem> 412 <listitem><para>irq_unmask()</para></listitem>
394 <listitem><para>retrigger() - Optional</para></listitem> 413 <listitem><para>irq_retrigger() - Optional</para></listitem>
395 <listitem><para>set_type() - Optional</para></listitem> 414 <listitem><para>irq_set_type() - Optional</para></listitem>
396 <listitem><para>set_wake() - Optional</para></listitem> 415 <listitem><para>irq_set_wake() - Optional</para></listitem>
397 </itemizedlist> 416 </itemizedlist>
398 These primitives are strictly intended to mean what they say: ack means 417 These primitives are strictly intended to mean what they say: ack means
399 ACK, masking means masking of an IRQ line, etc. It is up to the flow 418 ACK, masking means masking of an IRQ line, etc. It is up to the flow
@@ -458,6 +477,7 @@ desc->chip->end();
458 <para> 477 <para>
459 This chapter contains the autogenerated documentation of the internal functions. 478 This chapter contains the autogenerated documentation of the internal functions.
460 </para> 479 </para>
480!Ikernel/irq/irqdesc.c
461!Ikernel/irq/handle.c 481!Ikernel/irq/handle.c
462!Ikernel/irq/chip.c 482!Ikernel/irq/chip.c
463 </chapter> 483 </chapter>
diff --git a/Documentation/DocBook/kernel-api.tmpl b/Documentation/DocBook/kernel-api.tmpl
index a20c6f6fffc..6b4e07f28b6 100644
--- a/Documentation/DocBook/kernel-api.tmpl
+++ b/Documentation/DocBook/kernel-api.tmpl
@@ -57,7 +57,6 @@
57 </para> 57 </para>
58 58
59 <sect1><title>String Conversions</title> 59 <sect1><title>String Conversions</title>
60!Ilib/vsprintf.c
61!Elib/vsprintf.c 60!Elib/vsprintf.c
62 </sect1> 61 </sect1>
63 <sect1><title>String Manipulation</title> 62 <sect1><title>String Manipulation</title>
@@ -258,7 +257,8 @@ X!Earch/x86/kernel/mca_32.c
258!Iblock/blk-sysfs.c 257!Iblock/blk-sysfs.c
259!Eblock/blk-settings.c 258!Eblock/blk-settings.c
260!Eblock/blk-exec.c 259!Eblock/blk-exec.c
261!Eblock/blk-barrier.c 260!Eblock/blk-flush.c
261!Eblock/blk-lib.c
262!Eblock/blk-tag.c 262!Eblock/blk-tag.c
263!Iblock/blk-tag.c 263!Iblock/blk-tag.c
264!Eblock/blk-integrity.c 264!Eblock/blk-integrity.c
diff --git a/Documentation/DocBook/kernel-locking.tmpl b/Documentation/DocBook/kernel-locking.tmpl
index 0b1a3f97f28..f66f4df1869 100644
--- a/Documentation/DocBook/kernel-locking.tmpl
+++ b/Documentation/DocBook/kernel-locking.tmpl
@@ -1645,7 +1645,9 @@ the amount of locking which needs to be done.
1645 all the readers who were traversing the list when we deleted the 1645 all the readers who were traversing the list when we deleted the
1646 element are finished. We use <function>call_rcu()</function> to 1646 element are finished. We use <function>call_rcu()</function> to
1647 register a callback which will actually destroy the object once 1647 register a callback which will actually destroy the object once
1648 the readers are finished. 1648 all pre-existing readers are finished. Alternatively,
1649 <function>synchronize_rcu()</function> may be used to block until
 1650 all pre-existing readers are finished.
1649 </para> 1651 </para>
1650 <para> 1652 <para>
1651 But how does Read Copy Update know when the readers are 1653 But how does Read Copy Update know when the readers are
@@ -1714,7 +1716,7 @@ the amount of locking which needs to be done.
1714- object_put(obj); 1716- object_put(obj);
1715+ list_del_rcu(&amp;obj-&gt;list); 1717+ list_del_rcu(&amp;obj-&gt;list);
1716 cache_num--; 1718 cache_num--;
1717+ call_rcu(&amp;obj-&gt;rcu, cache_delete_rcu, obj); 1719+ call_rcu(&amp;obj-&gt;rcu, cache_delete_rcu);
1718 } 1720 }
1719 1721
1720 /* Must be holding cache_lock */ 1722 /* Must be holding cache_lock */
@@ -1725,14 +1727,6 @@ the amount of locking which needs to be done.
1725 if (++cache_num > MAX_CACHE_SIZE) { 1727 if (++cache_num > MAX_CACHE_SIZE) {
1726 struct object *i, *outcast = NULL; 1728 struct object *i, *outcast = NULL;
1727 list_for_each_entry(i, &amp;cache, list) { 1729 list_for_each_entry(i, &amp;cache, list) {
1728@@ -85,6 +94,7 @@
1729 obj-&gt;popularity = 0;
1730 atomic_set(&amp;obj-&gt;refcnt, 1); /* The cache holds a reference */
1731 spin_lock_init(&amp;obj-&gt;lock);
1732+ INIT_RCU_HEAD(&amp;obj-&gt;rcu);
1733
1734 spin_lock_irqsave(&amp;cache_lock, flags);
1735 __cache_add(obj);
1736@@ -104,12 +114,11 @@ 1730@@ -104,12 +114,11 @@
1737 struct object *cache_find(int id) 1731 struct object *cache_find(int id)
1738 { 1732 {
@@ -1961,6 +1955,12 @@ machines due to caching.
1961 </sect1> 1955 </sect1>
1962 </chapter> 1956 </chapter>
1963 1957
1958 <chapter id="apiref">
1959 <title>Mutex API reference</title>
1960!Iinclude/linux/mutex.h
1961!Ekernel/mutex.c
1962 </chapter>
1963
1964 <chapter id="references"> 1964 <chapter id="references">
1965 <title>Further reading</title> 1965 <title>Further reading</title>
1966 1966
diff --git a/Documentation/DocBook/tracepoint.tmpl b/Documentation/DocBook/tracepoint.tmpl
index e8473eae2a2..b57a9ede322 100644
--- a/Documentation/DocBook/tracepoint.tmpl
+++ b/Documentation/DocBook/tracepoint.tmpl
@@ -104,4 +104,9 @@
104 <title>Block IO</title> 104 <title>Block IO</title>
105!Iinclude/trace/events/block.h 105!Iinclude/trace/events/block.h
106 </chapter> 106 </chapter>
107
108 <chapter id="workqueue">
109 <title>Workqueue</title>
110!Iinclude/trace/events/workqueue.h
111 </chapter>
107</book> 112</book>
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index 790d1a81237..0c134f8afc6 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -218,13 +218,22 @@ over a rather long period of time, but improvements are always welcome!
218 include: 218 include:
219 219
220 a. Keeping a count of the number of data-structure elements 220 a. Keeping a count of the number of data-structure elements
221 used by the RCU-protected data structure, including those 221 used by the RCU-protected data structure, including
222 waiting for a grace period to elapse. Enforce a limit 222 those waiting for a grace period to elapse. Enforce a
223 on this number, stalling updates as needed to allow 223 limit on this number, stalling updates as needed to allow
224 previously deferred frees to complete. 224 previously deferred frees to complete. Alternatively,
225 225 limit only the number awaiting deferred free rather than
226 Alternatively, limit only the number awaiting deferred 226 the total number of elements.
227 free rather than the total number of elements. 227
228 One way to stall the updates is to acquire the update-side
229 mutex. (Don't try this with a spinlock -- other CPUs
230 spinning on the lock could prevent the grace period
231 from ever ending.) Another way to stall the updates
232 is for the updates to use a wrapper function around
233 the memory allocator, so that this wrapper function
234 simulates OOM when there is too much memory awaiting an
235 RCU grace period. There are of course many other
236 variations on this theme.
228 237
229 b. Limiting update rate. For example, if updates occur only 238 b. Limiting update rate. For example, if updates occur only
230 once per hour, then no explicit rate limiting is required, 239 once per hour, then no explicit rate limiting is required,
@@ -365,3 +374,26 @@ over a rather long period of time, but improvements are always welcome!
365 and the compiler to freely reorder code into and out of RCU 374 and the compiler to freely reorder code into and out of RCU
366 read-side critical sections. It is the responsibility of the 375 read-side critical sections. It is the responsibility of the
367 RCU update-side primitives to deal with this. 376 RCU update-side primitives to deal with this.
377
37817. Use CONFIG_PROVE_RCU, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and
379 the __rcu sparse checks to validate your RCU code. These
380 can help find problems as follows:
381
382 CONFIG_PROVE_RCU: check that accesses to RCU-protected data
383 structures are carried out under the proper RCU
384 read-side critical section, while holding the right
385 combination of locks, or whatever other conditions
386 are appropriate.
387
388 CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the
389 same object to call_rcu() (or friends) before an RCU
390 grace period has elapsed since the last time that you
391 passed that same object to call_rcu() (or friends).
392
393 __rcu sparse checks: tag the pointer to the RCU-protected data
394 structure with __rcu, and sparse will warn you if you
395 access that pointer without the services of one of the
396 variants of rcu_dereference().
397
398 These debugging aids can help you find problems that are
399 otherwise extremely difficult to spot.
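
	As a small illustration of the __rcu annotation (the structure and
	function names below are made up for this example), readers access
	the tagged pointer only through rcu_dereference() within an RCU
	read-side critical section, and updaters publish new versions with
	rcu_assign_pointer():

	#include <linux/rcupdate.h>

	struct foo {
		int a;
	};

	static struct foo __rcu *global_foo;	/* sparse checks accesses */

	int read_foo_a(void)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();
		p = rcu_dereference(global_foo);
		if (p)
			ret = p->a;
		rcu_read_unlock();
		return ret;
	}

	void publish_foo(struct foo *newp)
	{
		rcu_assign_pointer(global_foo, newp);
	}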
diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt
index 44c6dcc93d6..862c08ef1fd 100644
--- a/Documentation/RCU/stallwarn.txt
+++ b/Documentation/RCU/stallwarn.txt
@@ -80,6 +80,24 @@ o A CPU looping with bottom halves disabled. This condition can
80o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel 80o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
81 without invoking schedule(). 81 without invoking schedule().
82 82
83o A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
84 happen to preempt a low-priority task in the middle of an RCU
85 read-side critical section. This is especially damaging if
86 that low-priority task is not permitted to run on any other CPU,
87 in which case the next RCU grace period can never complete, which
88 will eventually cause the system to run out of memory and hang.
89 While the system is in the process of running itself out of
90 memory, you might see stall-warning messages.
91
92o A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
93 is running at a higher priority than the RCU softirq threads.
94 This will prevent RCU callbacks from ever being invoked,
95 and in a CONFIG_TREE_PREEMPT_RCU kernel will further prevent
96 RCU grace periods from ever completing. Either way, the
97 system will eventually run out of memory and hang. In the
98 CONFIG_TREE_PREEMPT_RCU case, you might see stall-warning
99 messages.
100
83o A bug in the RCU implementation. 101o A bug in the RCU implementation.
84 102
85o A hardware failure. This is quite unlikely, but has occurred 103o A hardware failure. This is quite unlikely, but has occurred
diff --git a/Documentation/RCU/trace.txt b/Documentation/RCU/trace.txt
index efd8cc95c06..a851118775d 100644
--- a/Documentation/RCU/trace.txt
+++ b/Documentation/RCU/trace.txt
@@ -125,6 +125,17 @@ o "b" is the batch limit for this CPU. If more than this number
125 of RCU callbacks is ready to invoke, then the remainder will 125 of RCU callbacks is ready to invoke, then the remainder will
126 be deferred. 126 be deferred.
127 127
128o "ci" is the number of RCU callbacks that have been invoked for
129 this CPU. Note that ci+ql is the number of callbacks that have
 130	been registered in the absence of CPU-hotplug activity.
131
132o "co" is the number of RCU callbacks that have been orphaned due to
133 this CPU going offline.
134
135o "ca" is the number of RCU callbacks that have been adopted due to
136 other CPUs going offline. Note that ci+co-ca+ql is the number of
137 RCU callbacks registered on this CPU.
138
128There is also an rcu/rcudata.csv file with the same information in 139There is also an rcu/rcudata.csv file with the same information in
129comma-separated-variable spreadsheet format. 140comma-separated-variable spreadsheet format.
130 141
@@ -180,7 +191,7 @@ o "s" is the "signaled" state that drives force_quiescent_state()'s
180 191
181o "jfq" is the number of jiffies remaining for this grace period 192o "jfq" is the number of jiffies remaining for this grace period
182 before force_quiescent_state() is invoked to help push things 193 before force_quiescent_state() is invoked to help push things
183 along. Note that CPUs in dyntick-idle mode thoughout the grace 194 along. Note that CPUs in dyntick-idle mode throughout the grace
184 period will not report on their own, but rather must be check by 195 period will not report on their own, but rather must be check by
185 some other CPU via force_quiescent_state(). 196 some other CPU via force_quiescent_state().
186 197
diff --git a/Documentation/arm/00-INDEX b/Documentation/arm/00-INDEX
index 7f5fc3ba9c9..ecf7d04bca2 100644
--- a/Documentation/arm/00-INDEX
+++ b/Documentation/arm/00-INDEX
@@ -6,6 +6,8 @@ Interrupts
6 - ARM Interrupt subsystem documentation 6 - ARM Interrupt subsystem documentation
7IXP2000 7IXP2000
8 - Release Notes for Linux on Intel's IXP2000 Network Processor 8 - Release Notes for Linux on Intel's IXP2000 Network Processor
9msm
10 - MSM specific documentation
9Netwinder 11Netwinder
10 - Netwinder specific documentation 12 - Netwinder specific documentation
11Porting 13Porting
diff --git a/Documentation/arm/msm/gpiomux.txt b/Documentation/arm/msm/gpiomux.txt
new file mode 100644
index 00000000000..67a81620adf
--- /dev/null
+++ b/Documentation/arm/msm/gpiomux.txt
@@ -0,0 +1,176 @@
1This document provides an overview of the msm_gpiomux interface, which
2is used to provide gpio pin multiplexing and configuration on mach-msm
3targets.
4
5History
6=======
7
8The first-generation API for gpio configuration & multiplexing on msm
9is the function gpio_tlmm_config(). This function has a few notable
10shortcomings, which led to its deprecation and replacement by gpiomux:
11
12The 'disable' parameter: Setting the second parameter to
13gpio_tlmm_config to GPIO_CFG_DISABLE tells the peripheral
14processor in charge of the subsystem to perform a look-up into a
15low-power table and apply the low-power/sleep setting for the pin.
16As the msm family evolved this became problematic. Not all pins
17have sleep settings, not all peripheral processors will accept requests
18to apply said sleep settings, and not all msm targets have their gpio
19subsystems managed by a peripheral processor. In order to get consistent
20behavior on all targets, drivers are forced to ignore this parameter,
21rendering it useless.
22
23The 'direction' flag: for all mux-settings other than raw-gpio (0),
24the output-enable bit of a gpio is hard-wired to a known
25input (usually VDD or ground). For those settings, the direction flag
26is meaningless at best, and deceptive at worst. In addition, using the
27direction flag to change output-enable (OE) directly can cause trouble in
28gpiolib, which has no visibility into gpio direction changes made
29in this way. Direction control in gpio mode should be made through gpiolib.
30
31Key Features of gpiomux
32=======================
33
34- A consistent interface across all generations of msm. Drivers can expect
35the same results on every target.
36- gpiomux plays nicely with gpiolib. Functions that should belong to gpiolib
37are left to gpiolib and not duplicated here. gpiomux is written with the
38intent that gpio_chips will call gpiomux reference-counting methods
39from their request() and free() hooks, providing full integration.
40- Tabular configuration. Instead of having to call gpio_tlmm_config
41hundreds of times, gpio configuration is placed in a single table.
42- Per-gpio sleep. Each gpio is individually reference counted, allowing only
43those lines which are in use to be put in high-power states.
44- 0 means 'do nothing': all flags are designed so that the default memset-zero
45equates to a sensible default of 'no configuration', preventing users
46from having to provide hundreds of 'no-op' configs for unused or
47unwanted lines.
48
49Usage
50=====
51
52To use gpiomux, provide configuration information for relevant gpio lines
53in the msm_gpiomux_configs table. Since a 0 equates to "unconfigured",
54only those lines to be managed by gpiomux need to be specified. Here
55is a completely fictional example:
56
57struct msm_gpiomux_config msm_gpiomux_configs[GPIOMUX_NGPIOS] = {
58 [12] = {
59 .active = GPIOMUX_VALID | GPIOMUX_DRV_8MA | GPIOMUX_FUNC_1,
60 .suspended = GPIOMUX_VALID | GPIOMUX_PULL_DOWN,
61 },
62 [34] = {
63 .suspended = GPIOMUX_VALID | GPIOMUX_PULL_DOWN,
64 },
65};
66
67To indicate that a gpio is in use, call msm_gpiomux_get() to increase
68its reference count. To decrease the reference count, call msm_gpiomux_put().
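
A driver that manages a line directly (rather than through gpiolib) would
bracket its use of the line roughly like this; the function names and the
use of gpio 12 are fictional and refer to the table above:

static int foo_start(void)
{
	int rc = msm_gpiomux_get(12);	/* applies gpio 12's 'active' config */

	if (rc)
		return rc;
	/* ... use the line ... */
	return 0;
}

static void foo_stop(void)
{
	msm_gpiomux_put(12);		/* drops back to the 'suspended' config */
}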
69
70The effect of this configuration is as follows:
71
72When the system boots, gpios 12 and 34 will be initialized with their
73'suspended' configurations. All other gpios, which were left unconfigured,
74will not be touched.
75
76When msm_gpiomux_get() is called on gpio 12 to raise its reference count
77above 0, its active configuration will be applied. Since no other gpio
78line has a valid active configuration, msm_gpiomux_get() will have no
79effect on any other line.
80
81When msm_gpiomux_put() is called on gpio 12 or 34 to drop their reference
82count to 0, their suspended configurations will be applied.
83Since no other gpio line has a valid suspended configuration, no other
 84gpio line will be affected by msm_gpiomux_put(). Since gpio 34 has no valid
85active configuration, this is effectively a no-op for gpio 34 as well,
 86with one small caveat; see the section "About Output-Enable Settings".
87
88All of the GPIOMUX_VALID flags may seem like unnecessary overhead, but
89they address some important issues. As unused entries (all those
90except 12 and 34) are zero-filled, gpiomux needs a way to distinguish
91the used fields from the unused. In addition, the all-zero pattern
92is a valid configuration! Therefore, gpiomux defines an additional bit
93which is used to indicate when a field is used. This has the pleasant
94side-effect of allowing calls to msm_gpiomux_write to use '0' to indicate
95that a value should not be changed:
96
97 msm_gpiomux_write(0, GPIOMUX_VALID, 0);
98
99replaces the active configuration of gpio 0 with an all-zero configuration,
100but leaves the suspended configuration as it was.
101
102Static Configurations
103=====================
104
105To install a static configuration, which is applied at boot and does
106not change after that, install a configuration with a suspended component
107but no active component, as in the previous example:
108
109 [34] = {
110 .suspended = GPIOMUX_VALID | GPIOMUX_PULL_DOWN,
111 },
112
113The suspended setting is applied during boot, and the lack of any valid
114active setting prevents any other setting from being applied at runtime.
 115If the possibility of other subsystems accessing the line is a concern, one could
116*really* anchor the configuration down by calling msm_gpiomux_get on the
117line at initialization to move the line into active mode. With the line
118held, it will never be re-suspended, and with no valid active configuration,
119no new configurations will be applied.
120
121But then, if having other subsystems grabbing for the line is truly a concern,
122it should be reserved with gpio_request instead, which carries an implicit
123msm_gpiomux_get.
124
125gpiomux and gpiolib
126===================
127
128It is expected that msm gpio_chips will call msm_gpiomux_get() and
129msm_gpiomux_put() from their request and free hooks, like this fictional
130example:
131
132static int request(struct gpio_chip *chip, unsigned offset)
133{
134 return msm_gpiomux_get(chip->base + offset);
135}
136
137static void free(struct gpio_chip *chip, unsigned offset)
138{
139 msm_gpiomux_put(chip->base + offset);
140}
141
142 ...somewhere in a gpio_chip declaration...
143 .request = request,
144 .free = free,
145
146This provides important functionality:
147- It guarantees that a gpio line will have its 'active' config applied
148 when the line is requested, and will not be suspended while the line
149 remains requested; and
150- It guarantees that gpio-direction settings from gpiolib behave sensibly.
151 See "About Output-Enable Settings."
152
153This mechanism allows for "auto-request" of gpiomux lines via gpiolib
 154when it is suitable. Drivers wishing for more exact control are, of course,
155free to also use msm_gpiomux_set and msm_gpiomux_get.
156
157About Output-Enable Settings
158============================
159
160Some msm targets do not have the ability to query the current gpio
161configuration setting. This means that changes made to the output-enable
162(OE) bit by gpiolib cannot be consistently detected and preserved by gpiomux.
163Therefore, when gpiomux applies a configuration setting, any direction
164settings which may have been applied by gpiolib are lost and the default
165input settings are re-applied.
166
167For this reason, drivers should not assume that gpio direction settings
168continue to hold if they free and then re-request a gpio. This seems like
169common sense - after all, anybody could have obtained the line in the
170meantime - but it needs saying.
171
172This also means that calls to msm_gpiomux_write will reset the OE bit,
173which means that if the gpio line is held by a client of gpiolib and
 174msm_gpiomux_write is called, the direction setting will be lost and
 175gpiolib's internal state will be broken.
176Release gpio lines before reconfiguring them.
diff --git a/Documentation/block/00-INDEX b/Documentation/block/00-INDEX
index a406286f6f3..d111e3b23db 100644
--- a/Documentation/block/00-INDEX
+++ b/Documentation/block/00-INDEX
@@ -1,7 +1,5 @@
100-INDEX 100-INDEX
2 - This file 2 - This file
3barrier.txt
4 - I/O Barriers
5biodoc.txt 3biodoc.txt
6 - Notes on the Generic Block Layer Rewrite in Linux 2.5 4 - Notes on the Generic Block Layer Rewrite in Linux 2.5
7capability.txt 5capability.txt
@@ -16,3 +14,5 @@ stat.txt
16 - Block layer statistics in /sys/block/<dev>/stat 14 - Block layer statistics in /sys/block/<dev>/stat
17switching-sched.txt 15switching-sched.txt
18 - Switching I/O schedulers at runtime 16 - Switching I/O schedulers at runtime
17writeback_cache_control.txt
18 - Control of volatile write back caches
diff --git a/Documentation/block/barrier.txt b/Documentation/block/barrier.txt
deleted file mode 100644
index 2c2f24f634e..00000000000
--- a/Documentation/block/barrier.txt
+++ /dev/null
@@ -1,261 +0,0 @@
1I/O Barriers
2============
3Tejun Heo <htejun@gmail.com>, July 22 2005
4
5I/O barrier requests are used to guarantee ordering around the barrier
6requests. Unless you're crazy enough to use disk drives for
7implementing synchronization constructs (wow, sounds interesting...),
8the ordering is meaningful only for write requests for things like
9journal checkpoints. All requests queued before a barrier request
10must be finished (made it to the physical medium) before the barrier
11request is started, and all requests queued after the barrier request
12must be started only after the barrier request is finished (again,
13made it to the physical medium).
14
15In other words, I/O barrier requests have the following two properties.
16
171. Request ordering
18
19Requests cannot pass the barrier request. Preceding requests are
20processed before the barrier and following requests after.
21
22Depending on what features a drive supports, this can be done in one
23of the following three ways.
24
25i. For devices which have queue depth greater than 1 (TCQ devices) and
26support ordered tags, block layer can just issue the barrier as an
27ordered request and the lower level driver, controller and drive
28itself are responsible for making sure that the ordering constraint is
29met. Most modern SCSI controllers/drives should support this.
30
31NOTE: SCSI ordered tag isn't currently used due to limitation in the
32 SCSI midlayer, see the following random notes section.
33
34ii. For devices which have queue depth greater than 1 but don't
35support ordered tags, block layer ensures that the requests preceding
36a barrier request finishes before issuing the barrier request. Also,
37it defers requests following the barrier until the barrier request is
38finished. Older SCSI controllers/drives and SATA drives fall in this
39category.
40
41iii. Devices which have queue depth of 1. This is a degenerate case
42of ii. Just keeping issue order suffices. Ancient SCSI
43controllers/drives and IDE drives are in this category.
44
452. Forced flushing to physical medium
46
47Again, if you're not gonna do synchronization with disk drives (dang,
48it sounds even more appealing now!), the reason you use I/O barriers
49is mainly to protect filesystem integrity when power failure or some
50other events abruptly stop the drive from operating and possibly make
51the drive lose data in its cache. So, I/O barriers need to guarantee
52that requests actually get written to non-volatile medium in order.
53
54There are four cases,
55
56i. No write-back cache. Keeping requests ordered is enough.
57
58ii. Write-back cache but no flush operation. There's no way to
59guarantee physical-medium commit order. This kind of devices can't to
60I/O barriers.
61
62iii. Write-back cache and flush operation but no FUA (forced unit
63access). We need two cache flushes - before and after the barrier
64request.
65
66iv. Write-back cache, flush operation and FUA. We still need one
67flush to make sure requests preceding a barrier are written to medium,
68but post-barrier flush can be avoided by using FUA write on the
69barrier itself.
70
71
72How to support barrier requests in drivers
73------------------------------------------
74
75All barrier handling is done inside block layer proper. All low level
76drivers have to are implementing its prepare_flush_fn and using one
77the following two functions to indicate what barrier type it supports
78and how to prepare flush requests. Note that the term 'ordered' is
79used to indicate the whole sequence of performing barrier requests
80including draining and flushing.
81
82typedef void (prepare_flush_fn)(struct request_queue *q, struct request *rq);
83
84int blk_queue_ordered(struct request_queue *q, unsigned ordered,
85 prepare_flush_fn *prepare_flush_fn);
86
87@q : the queue in question
88@ordered : the ordered mode the driver/device supports
89@prepare_flush_fn : this function should prepare @rq such that it
90 flushes cache to physical medium when executed
91
92For example, SCSI disk driver's prepare_flush_fn looks like the
93following.
94
95static void sd_prepare_flush(struct request_queue *q, struct request *rq)
96{
97 memset(rq->cmd, 0, sizeof(rq->cmd));
98 rq->cmd_type = REQ_TYPE_BLOCK_PC;
99 rq->timeout = SD_TIMEOUT;
100 rq->cmd[0] = SYNCHRONIZE_CACHE;
101 rq->cmd_len = 10;
102}
103
104The following seven ordered modes are supported. The following table
105shows which mode should be used depending on what features a
106device/driver supports. In the leftmost column of table,
107QUEUE_ORDERED_ prefix is omitted from the mode names to save space.
108
109The table is followed by description of each mode. Note that in the
110descriptions of QUEUE_ORDERED_DRAIN*, '=>' is used whereas '->' is
111used for QUEUE_ORDERED_TAG* descriptions. '=>' indicates that the
112preceding step must be complete before proceeding to the next step.
113'->' indicates that the next step can start as soon as the previous
114step is issued.
115
116 write-back cache ordered tag flush FUA
117-----------------------------------------------------------------------
118NONE yes/no N/A no N/A
119DRAIN no no N/A N/A
120DRAIN_FLUSH yes no yes no
121DRAIN_FUA yes no yes yes
122TAG no yes N/A N/A
123TAG_FLUSH yes yes yes no
124TAG_FUA yes yes yes yes
125
126
127QUEUE_ORDERED_NONE
128 I/O barriers are not needed and/or supported.
129
130 Sequence: N/A
131
132QUEUE_ORDERED_DRAIN
133 Requests are ordered by draining the request queue and cache
134 flushing isn't needed.
135
136 Sequence: drain => barrier
137
138QUEUE_ORDERED_DRAIN_FLUSH
139 Requests are ordered by draining the request queue and both
140 pre-barrier and post-barrier cache flushings are needed.
141
142 Sequence: drain => preflush => barrier => postflush
143
144QUEUE_ORDERED_DRAIN_FUA
145 Requests are ordered by draining the request queue and
146 pre-barrier cache flushing is needed. By using FUA on barrier
147 request, post-barrier flushing can be skipped.
148
149 Sequence: drain => preflush => barrier
150
151QUEUE_ORDERED_TAG
152 Requests are ordered by ordered tag and cache flushing isn't
153 needed.
154
155 Sequence: barrier
156
157QUEUE_ORDERED_TAG_FLUSH
158 Requests are ordered by ordered tag and both pre-barrier and
159 post-barrier cache flushings are needed.
160
161 Sequence: preflush -> barrier -> postflush
162
163QUEUE_ORDERED_TAG_FUA
164 Requests are ordered by ordered tag and pre-barrier cache
165 flushing is needed. By using FUA on barrier request,
166 post-barrier flushing can be skipped.
167
168 Sequence: preflush -> barrier
169
170
171Random notes/caveats
172--------------------
173
174* SCSI layer currently can't use TAG ordering even if the drive,
175controller and driver support it. The problem is that SCSI midlayer
176request dispatch function is not atomic. It releases queue lock and
177switch to SCSI host lock during issue and it's possible and likely to
178happen in time that requests change their relative positions. Once
179this problem is solved, TAG ordering can be enabled.
180
181* Currently, no matter which ordered mode is used, there can be only
182one barrier request in progress. All I/O barriers are held off by
183block layer until the previous I/O barrier is complete. This doesn't
184make any difference for DRAIN ordered devices, but, for TAG ordered
185devices with very high command latency, passing multiple I/O barriers
186to low level *might* be helpful if they are very frequent. Well, this
187certainly is a non-issue. I'm writing this just to make clear that no
188two I/O barrier is ever passed to low-level driver.
189
190* Completion order. Requests in ordered sequence are issued in order
191but not required to finish in order. Barrier implementation can
192handle out-of-order completion of ordered sequence. IOW, the requests
193MUST be processed in order but the hardware/software completion paths
194are allowed to reorder completion notifications - eg. current SCSI
195midlayer doesn't preserve completion order during error handling.
196
197* Requeueing order. Low-level drivers are free to requeue any request
198after they removed it from the request queue with
199blkdev_dequeue_request(). As barrier sequence should be kept in order
200when requeued, generic elevator code takes care of putting requests in
201order around barrier. See blk_ordered_req_seq() and
202ELEVATOR_INSERT_REQUEUE handling in __elv_add_request() for details.
203
204Note that block drivers must not requeue preceding requests while
205completing latter requests in an ordered sequence. Currently, no
206error checking is done against this.
207
208* Error handling. Currently, block layer will report error to upper
209layer if any of requests in an ordered sequence fails. Unfortunately,
210this doesn't seem to be enough. Look at the following request flow.
211QUEUE_ORDERED_TAG_FLUSH is in use.
212
213 [0] [1] [2] [3] [pre] [barrier] [post] < [4] [5] [6] ... >
214 still in elevator
215
216Let's say request [2], [3] are write requests to update file system
217metadata (journal or whatever) and [barrier] is used to mark that
218those updates are valid. Consider the following sequence.
219
220 i. Requests [0] ~ [post] leaves the request queue and enters
221 low-level driver.
222 ii. After a while, unfortunately, something goes wrong and the
223 drive fails [2]. Note that any of [0], [1] and [3] could have
224 completed by this time, but [pre] couldn't have been finished
225 as the drive must process it in order and it failed before
226 processing that command.
227 iii. Error handling kicks in and determines that the error is
228 unrecoverable and fails [2], and resumes operation.
229 iv. [pre] [barrier] [post] gets processed.
230 v. *BOOM* power fails
231
232The problem here is that the barrier request is *supposed* to indicate
233that filesystem update requests [2] and [3] made it safely to the
234physical medium and, if the machine crashes after the barrier is
235written, filesystem recovery code can depend on that. Sadly, that
236isn't true in this case anymore. IOW, the success of a I/O barrier
237should also be dependent on success of some of the preceding requests,
238where only upper layer (filesystem) knows what 'some' is.
239
240This can be solved by implementing a way to tell the block layer which
241requests affect the success of the following barrier request and
242making lower lever drivers to resume operation on error only after
243block layer tells it to do so.
244
245As the probability of this happening is very low and the drive should
246be faulty, implementing the fix is probably an overkill. But, still,
247it's there.
248
249* In previous drafts of barrier implementation, there was fallback
250mechanism such that, if FUA or ordered TAG fails, less fancy ordered
251mode can be selected and the failed barrier request is retried
252automatically. The rationale for this feature was that as FUA is
253pretty new in ATA world and ordered tag was never used widely, there
254could be devices which report to support those features but choke when
255actually given such requests.
256
257 This was removed for two reasons 1. it's an overkill 2. it's
258impossible to implement properly when TAG ordering is used as low
259level drivers resume after an error automatically. If it's ever
260needed adding it back and modifying low level drivers accordingly
261shouldn't be difficult.
diff --git a/Documentation/block/cfq-iosched.txt b/Documentation/block/cfq-iosched.txt
new file mode 100644
index 00000000000..e578feed6d8
--- /dev/null
+++ b/Documentation/block/cfq-iosched.txt
@@ -0,0 +1,45 @@
1CFQ ioscheduler tunables
2========================
3
4slice_idle
5----------
 6This specifies how long CFQ should idle for the next request on certain cfq queues
 7(for sequential workloads) and service trees (for random workloads) before the
 8queue is expired and CFQ selects the next queue to dispatch from.
9
10By default slice_idle is a non-zero value. That means by default we idle on
11queues/service trees. This can be very helpful on highly seeky media like
 12single spindle SATA/SAS disks where we can cut down on the overall number of
13seeks and see improved throughput.
14
 15Setting slice_idle to 0 will remove all the idling at the queue/service tree
 16level and one should see overall improved throughput on faster storage
 17devices like multiple SATA/SAS disks in a hardware RAID configuration. The down
 18side is that the isolation provided from WRITES also goes down and the notion of
19IO priority becomes weaker.
20
21So depending on storage and workload, it might be useful to set slice_idle=0.
 22In general I think that for SATA/SAS disks and software RAID of SATA/SAS disks
 23keeping slice_idle enabled should be useful. For any configuration where
 24there are multiple spindles behind a single LUN (host-based hardware RAID
 25controller or storage arrays), setting slice_idle=0 might result in better
26throughput and acceptable latencies.
27
28CFQ IOPS Mode for group scheduling
29===================================
 30The basic CFQ design is to provide priority-based time slices. A higher priority
 31process gets a bigger time slice and a lower priority process gets a smaller time
 32slice. Measuring time becomes harder if the storage is fast and supports NCQ, where
 33it would be better to dispatch multiple requests from multiple cfq queues in the
 34request queue at a time. In such a scenario, it is not possible to measure the time
 35consumed by a single queue accurately.
36
 37What is possible, though, is to measure the number of requests dispatched from a
 38single queue and also allow dispatch from multiple cfq queues at the same time.
 39This effectively becomes fairness in terms of IOPS (IO operations per
40second).
41
42If one sets slice_idle=0 and if storage supports NCQ, CFQ internally switches
43to IOPS mode and starts providing fairness in terms of number of requests
44dispatched. Note that this mode switching takes effect only for group
45scheduling. For non-cgroup users nothing should change.
diff --git a/Documentation/block/writeback_cache_control.txt b/Documentation/block/writeback_cache_control.txt
new file mode 100644
index 00000000000..83407d36630
--- /dev/null
+++ b/Documentation/block/writeback_cache_control.txt
@@ -0,0 +1,86 @@
1
2Explicit volatile write back cache control
3==========================================
4
5Introduction
6------------
7
8Many storage devices, especially in the consumer market, come with volatile
9write back caches. That means the devices signal I/O completion to the
10operating system before data actually has hit the non-volatile storage. This
11behavior obviously speeds up various workloads, but it means the operating
12system needs to force data out to the non-volatile storage when it performs
13a data integrity operation like fsync, sync or an unmount.
14
15The Linux block layer provides two simple mechanisms that let filesystems
16control the caching behavior of the storage device. These mechanisms are
17a forced cache flush, and the Force Unit Access (FUA) flag for requests.
18
19
20Explicit cache flushes
21----------------------
22
23The REQ_FLUSH flag can be ORed into the r/w flags of a bio submitted from
24the filesystem and will make sure the volatile cache of the storage device
25has been flushed before the actual I/O operation is started. This explicitly
26guarantees that previously completed write requests are on non-volatile
27storage before the flagged bio starts. In addition the REQ_FLUSH flag can be
28set on an otherwise empty bio structure, which causes only an explicit cache
29flush without any dependent I/O. It is recommended to use
30the blkdev_issue_flush() helper for a pure cache flush.
31
32
33Forced Unit Access
34-----------------
35
36The REQ_FUA flag can be ORed into the r/w flags of a bio submitted from the
37filesystem and will make sure that I/O completion for this request is only
38signaled after the data has been committed to non-volatile storage.
39
40
41Implementation details for filesystems
42--------------------------------------
43
44Filesystems can simply set the REQ_FLUSH and REQ_FUA bits and do not have to
45worry about whether the underlying devices need any explicit cache flushing or how
46Force Unit Access is implemented. The REQ_FLUSH and REQ_FUA flags
47may both be set on a single bio.
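
As a rough illustration only (not lifted from any real filesystem), a journal
commit path might tag its commit block along the following lines; the my_*
names, the completion-based waiting and the single-page payload are all
assumptions of this sketch:

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/completion.h>
    #include <linux/fs.h>

    static void my_commit_end_io(struct bio *bio, int error)
    {
            /* Hypothetical completion hook: wake up the waiter. */
            complete(bio->bi_private);
            bio_put(bio);
    }

    static void my_submit_commit_block(struct block_device *bdev,
                                       struct page *page, sector_t sector,
                                       struct completion *done)
    {
            struct bio *bio = bio_alloc(GFP_NOIO, 1);

            bio->bi_bdev    = bdev;
            bio->bi_sector  = sector;
            bio->bi_end_io  = my_commit_end_io;
            bio->bi_private = done;
            bio_add_page(bio, page, PAGE_SIZE, 0);

            /*
             * REQ_FLUSH: previously completed writes are on stable storage
             * before this bio starts.  REQ_FUA: the bio itself is on stable
             * storage before its completion is signaled.
             */
            submit_bio(WRITE | REQ_FLUSH | REQ_FUA, bio);
    }

A pure cache flush with no data can instead be issued as an otherwise empty
bio carrying only REQ_FLUSH, or via the blkdev_issue_flush() helper mentioned
above.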
48
49
50Implementation details for make_request_fn based block drivers
51--------------------------------------------------------------
52
53These drivers will always see the REQ_FLUSH and REQ_FUA bits as they sit
54directly below the submit_bio interface. For remapping drivers the REQ_FUA
55bit needs to be propagated to underlying devices, and a global flush needs
56to be implemented for bios with the REQ_FLUSH bit set. For real device
57drivers that do not have a volatile cache, the REQ_FLUSH and REQ_FUA bits
58on non-empty bios can simply be ignored, and REQ_FLUSH requests without
59data can be completed successfully without doing any work. Drivers for
60devices with volatile caches need to implement the support for these
61flags themselves without any help from the block layer.
62
63
64Implementation details for request_fn based block drivers
65--------------------------------------------------------------
66
67For devices that do not support volatile write caches there is no driver
68support required; the block layer completes empty REQ_FLUSH requests before
69entering the driver and strips off the REQ_FLUSH and REQ_FUA bits from
70requests that have a payload. For devices with volatile write caches the
71driver needs to tell the block layer that it supports flushing caches by
72doing:
73
74 blk_queue_flush(sdkp->disk->queue, REQ_FLUSH);
75
76and handle empty REQ_FLUSH requests in its prep_fn/request_fn. Note that
77REQ_FLUSH requests with a payload are automatically turned into a sequence
78of an empty REQ_FLUSH request followed by the actual write by the block
79layer. For devices that also support the FUA bit the block layer needs
80to be told to pass through the REQ_FUA bit using:
81
82 blk_queue_flush(sdkp->disk->queue, REQ_FLUSH | REQ_FUA);
83
84and the driver must handle write requests that have the REQ_FUA bit set
85in prep_fn/request_fn. If the FUA bit is not natively supported the block
86layer turns it into an empty REQ_FLUSH request after the actual write.
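
As a rough sketch (not a complete driver), a request_fn that honours these
rules could be structured as below; my_hw_flush_cache() and my_hw_start_io()
stand in for hypothetical device-specific operations, and q->queuedata is
assumed to hold the driver's private state:

    static void my_request_fn(struct request_queue *q)
    {
            struct request *rq;

            while ((rq = blk_fetch_request(q)) != NULL) {
                    if (rq->cmd_flags & REQ_FLUSH) {
                            /* Empty flush request: just drain the write cache. */
                            __blk_end_request_all(rq,
                                    my_hw_flush_cache(q->queuedata));
                            continue;
                    }
                    /*
                     * Regular read/write.  If REQ_FUA was passed through
                     * (blk_queue_flush(q, REQ_FLUSH | REQ_FUA)), the write
                     * must bypass or write through the volatile cache.
                     */
                    my_hw_start_io(q->queuedata, rq,
                                   rq->cmd_flags & REQ_FUA);
            }
    }

During initialization the driver would still call blk_queue_flush() with the
combination of flags it actually implements, exactly as described above.
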
diff --git a/Documentation/cgroups/blkio-controller.txt b/Documentation/cgroups/blkio-controller.txt
index 48e0b21b005..d6da611f8f6 100644
--- a/Documentation/cgroups/blkio-controller.txt
+++ b/Documentation/cgroups/blkio-controller.txt
@@ -8,12 +8,17 @@ both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
8Plan is to use the same cgroup based management interface for blkio controller 8Plan is to use the same cgroup based management interface for blkio controller
9and based on user options switch IO policies in the background. 9and based on user options switch IO policies in the background.
10 10
11In the first phase, this patchset implements proportional weight time based 11Currently two IO control policies are implemented. First one is proportional
12division of disk policy. It is implemented in CFQ. Hence this policy takes 12weight time based division of disk policy. It is implemented in CFQ. Hence
13effect only on leaf nodes when CFQ is being used. 13this policy takes effect only on leaf nodes when CFQ is being used. The second
14one is a throttling policy which can be used to specify upper IO rate limits
15on devices. This policy is implemented in the generic block layer and can be
16used on leaf nodes as well as higher level logical devices like device mapper.
14 17
15HOWTO 18HOWTO
16===== 19=====
20Proportional Weight division of bandwidth
21-----------------------------------------
17You can do a very simple testing of running two dd threads in two different 22You can do a very simple testing of running two dd threads in two different
18cgroups. Here is what you can do. 23cgroups. Here is what you can do.
19 24
@@ -55,6 +60,35 @@ cgroups. Here is what you can do.
55 group dispatched to the disk. We provide fairness in terms of disk time, so 60 group dispatched to the disk. We provide fairness in terms of disk time, so
56 ideally io.disk_time of cgroups should be in proportion to the weight. 61 ideally io.disk_time of cgroups should be in proportion to the weight.
57 62
63Throttling/Upper Limit policy
64-----------------------------
65- Enable Block IO controller
66 CONFIG_BLK_CGROUP=y
67
68- Enable throttling in block layer
69 CONFIG_BLK_DEV_THROTTLING=y
70
71- Mount blkio controller
72 mount -t cgroup -o blkio none /cgroup/blkio
73
74- Specify a bandwidth rate on a particular device for the root group. The format
75 for policy is "<major>:<minor> <bytes_per_second>".
76
77 echo "8:16 1048576" > /cgroup/blkio/blkio.read_bps_device
78
79 The above will put a limit of 1MB/second on reads happening for the root group
80 on the device having major/minor number 8:16.
81
82- Run dd to read a file and see whether the rate is throttled to 1MB/s.
83
84 # dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 \
85        iflag=direct
86 1024+0 records in
87 1024+0 records out
88 4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s
89
90 Limits for writes can be set using the blkio.write_bps_device file.
91
58Various user visible config options 92Various user visible config options
59=================================== 93===================================
60CONFIG_BLK_CGROUP 94CONFIG_BLK_CGROUP
@@ -68,8 +102,13 @@ CONFIG_CFQ_GROUP_IOSCHED
68 - Enables group scheduling in CFQ. Currently only 1 level of group 102 - Enables group scheduling in CFQ. Currently only 1 level of group
69 creation is allowed. 103 creation is allowed.
70 104
105CONFIG_BLK_DEV_THROTTLING
106 - Enable block device throttling support in block layer.
107
71Details of cgroup files 108Details of cgroup files
72======================= 109=======================
110Proportional weight policy files
111--------------------------------
73- blkio.weight 112- blkio.weight
74 - Specifies per cgroup weight. This is default weight of the group 113 - Specifies per cgroup weight. This is default weight of the group
75 on all the devices until and unless overridden by per device rule. 114 on all the devices until and unless overridden by per device rule.
@@ -210,6 +249,67 @@ Details of cgroup files
210 and minor number of the device and third field specifies the number 249 and minor number of the device and third field specifies the number
211 of times a group was dequeued from a particular device. 250 of times a group was dequeued from a particular device.
212 251
252Throttling/Upper limit policy files
253-----------------------------------
254- blkio.throttle.read_bps_device
255 - Specifies upper limit on READ rate from the device. IO rate is
256 specified in bytes per second. Rules are per device. Following is
257 the format.
258
259 echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.read_bps_device
260
261- blkio.throttle.write_bps_device
262 - Specifies upper limit on WRITE rate to the device. IO rate is
263 specified in bytes per second. Rules are per device. Following is
264 the format.
265
266 echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.write_bps_device
267
268- blkio.throttle.read_iops_device
269 - Specifies upper limit on READ rate from the device. IO rate is
270 specified in IO per second. Rules are per device. Following is
271 the format.
272
273 echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.read_iops_device
274
275- blkio.throttle.write_iops_device
276 - Specifies upper limit on WRITE rate to the device. IO rate is
277 specified in IO per second. Rules are per device. Following is
278 the format.
279
280 echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.write_iops_device
281
282Note: If both BW and IOPS rules are specified for a device, then IO is
283 subjected to both constraints.
284
285- blkio.throttle.io_serviced
286 - Number of IOs (bio) completed to/from the disk by the group (as
287 seen by throttling policy). These are further divided by the type
288 of operation - read or write, sync or async. First two fields specify
289 the major and minor number of the device, third field specifies the
290 operation type and the fourth field specifies the number of IOs.
291
292 blkio.io_serviced does accounting as seen by CFQ and counts are in
293 number of requests (struct request). On the other hand,
294 blkio.throttle.io_serviced counts the number of IOs in terms of the number
295 of bios as seen by the throttling policy. These bios can later be
296 merged by the elevator, so the total number of requests completed can be
297 smaller.
298
299- blkio.throttle.io_service_bytes
300 - Number of bytes transferred to/from the disk by the group. These
301 are further divided by the type of operation - read or write, sync
302 or async. First two fields specify the major and minor number of the
303 device, third field specifies the operation type and the fourth field
304 specifies the number of bytes.
305
306 These numbers should be roughly the same as blkio.io_service_bytes as
307 updated by CFQ. The difference between the two is that
308 blkio.io_service_bytes will not be updated if CFQ is not operating
309 on the request queue.
310
311Common files among various policies
312-----------------------------------
213- blkio.reset_stats 313- blkio.reset_stats
214 - Writing an int to this file will result in resetting all the stats 314 - Writing an int to this file will result in resetting all the stats
215 for that cgroup. 315 for that cgroup.
@@ -217,6 +317,7 @@ Details of cgroup files
217CFQ sysfs tunable 317CFQ sysfs tunable
218================= 318=================
219/sys/block/<disk>/queue/iosched/group_isolation 319/sys/block/<disk>/queue/iosched/group_isolation
320-----------------------------------------------
220 321
221If group_isolation=1, it provides stronger isolation between groups at the 322If group_isolation=1, it provides stronger isolation between groups at the
222expense of throughput. By default group_isolation is 0. In general that 323expense of throughput. By default group_isolation is 0. In general that
@@ -243,6 +344,33 @@ By default one should run with group_isolation=0. If that is not sufficient
243and one wants stronger isolation between groups, then set group_isolation=1 344and one wants stronger isolation between groups, then set group_isolation=1
244but this will come at cost of reduced throughput. 345but this will come at cost of reduced throughput.
245 346
347/sys/block/<disk>/queue/iosched/slice_idle
348------------------------------------------
349On faster hardware CFQ can be slow, especially with sequential workloads.
350This happens because CFQ idles on a single queue and a single queue might not
351drive deep enough request queue depths to keep the storage busy. In such scenarios
352one can try setting slice_idle=0; that switches CFQ to IOPS
353(IO operations per second) mode on NCQ-supporting hardware.
354
355That means CFQ will not idle between cfq queues of a cfq group and hence will be
356able to drive higher queue depths and achieve better throughput. That also
357means that cfq provides fairness among groups in terms of IOPS and not in
358terms of disk time.
359
360/sys/block/<disk>/queue/iosched/group_idle
361------------------------------------------
362If one disables idling on individual cfq queues and cfq service trees by
363setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
364on the group in an attempt to provide fairness among groups.
365
366By default group_idle is the same as slice_idle and does not do anything if
367slice_idle is enabled.
368
369One can experience an overall throughput drop if one has created multiple
370groups and put applications in those groups which are not driving enough
371IO to keep the disk busy. In that case set group_idle=0, and CFQ will not idle
372on individual groups and throughput should improve.
373
246What works 374What works
247========== 375==========
248- Currently only sync IO queues are supported. All the buffered writes are 376
diff --git a/Documentation/cputopology.txt b/Documentation/cputopology.txt
index f1c5c4bccd3..902d3151f52 100644
--- a/Documentation/cputopology.txt
+++ b/Documentation/cputopology.txt
@@ -14,25 +14,39 @@ to /proc/cpuinfo.
14 identifier (rather than the kernel's). The actual value is 14 identifier (rather than the kernel's). The actual value is
15 architecture and platform dependent. 15 architecture and platform dependent.
16 16
173) /sys/devices/system/cpu/cpuX/topology/thread_siblings: 173) /sys/devices/system/cpu/cpuX/topology/book_id:
18
19 the book ID of cpuX. Typically it is the hardware platform's
20 identifier (rather than the kernel's). The actual value is
21 architecture and platform dependent.
22
234) /sys/devices/system/cpu/cpuX/topology/thread_siblings:
18 24
19 internal kernel map of cpuX's hardware threads within the same 25
20 core as cpuX 26 core as cpuX
21 27
224) /sys/devices/system/cpu/cpuX/topology/core_siblings: 285) /sys/devices/system/cpu/cpuX/topology/core_siblings:
23 29
24 internal kernel map of cpuX's hardware threads within the same 30 internal kernel map of cpuX's hardware threads within the same
25 physical_package_id. 31 physical_package_id.
26 32
336) /sys/devices/system/cpu/cpuX/topology/book_siblings:
34
35 internal kernel map of cpuX's hardware threads within the same
36 book_id.
37
27To implement it in an architecture-neutral way, a new source file, 38To implement it in an architecture-neutral way, a new source file,
28drivers/base/topology.c, is to export the 4 attributes. 39drivers/base/topology.c, is to export the 4 or 6 attributes. The two book
40related sysfs files will only be created if CONFIG_SCHED_BOOK is selected.
29 41
30For an architecture to support this feature, it must define some of 42For an architecture to support this feature, it must define some of
31these macros in include/asm-XXX/topology.h: 43these macros in include/asm-XXX/topology.h:
32#define topology_physical_package_id(cpu) 44#define topology_physical_package_id(cpu)
33#define topology_core_id(cpu) 45#define topology_core_id(cpu)
46#define topology_book_id(cpu)
34#define topology_thread_cpumask(cpu) 47#define topology_thread_cpumask(cpu)
35#define topology_core_cpumask(cpu) 48#define topology_core_cpumask(cpu)
49#define topology_book_cpumask(cpu)
36 50
37The type of **_id is int. 51The type of **_id is int.
38The type of siblings is (const) struct cpumask *. 52The type of siblings is (const) struct cpumask *.
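
Purely as an illustration of the expected shapes (not code from any real
architecture), such a header could map the macros onto an array that the
architecture maintains; the cpu_topo structure and array below are
hypothetical:

    /* include/asm-XXX/topology.h -- hypothetical sketch */
    struct cpu_topo {
            int             physical_package_id;
            int             core_id;
            int             book_id;        /* only meaningful with CONFIG_SCHED_BOOK */
            cpumask_t       thread_siblings;
            cpumask_t       core_siblings;
            cpumask_t       book_siblings;
    };

    extern struct cpu_topo cpu_topo[NR_CPUS];       /* filled in at boot by the arch */

    #define topology_physical_package_id(cpu) (cpu_topo[cpu].physical_package_id)
    #define topology_core_id(cpu)             (cpu_topo[cpu].core_id)
    #define topology_book_id(cpu)             (cpu_topo[cpu].book_id)
    #define topology_thread_cpumask(cpu)      (&cpu_topo[cpu].thread_siblings)
    #define topology_core_cpumask(cpu)        (&cpu_topo[cpu].core_siblings)
    #define topology_book_cpumask(cpu)        (&cpu_topo[cpu].book_siblings)
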
@@ -45,6 +59,9 @@ not defined by include/asm-XXX/topology.h:
453) thread_siblings: just the given CPU 593) thread_siblings: just the given CPU
464) core_siblings: just the given CPU 604) core_siblings: just the given CPU
47 61
62For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
63default definitions for topology_book_id() and topology_book_cpumask().
64
48Additionally, CPU topology information is provided under 65Additionally, CPU topology information is provided under
49/sys/devices/system/cpu and includes these files. The internal 66/sys/devices/system/cpu and includes these files. The internal
50source for the output is in brackets ("[]"). 67source for the output is in brackets ("[]").
diff --git a/Documentation/feature-removal-schedule.txt b/Documentation/feature-removal-schedule.txt
index 842aa9de84a..5e2bc4ab897 100644
--- a/Documentation/feature-removal-schedule.txt
+++ b/Documentation/feature-removal-schedule.txt
@@ -386,34 +386,6 @@ Who: Tejun Heo <tj@kernel.org>
386 386
387---------------------------- 387----------------------------
388 388
389What: Support for VMware's guest paravirtualization technique [VMI] will be
390 dropped.
391When: 2.6.37 or earlier.
392Why: With the recent innovations in CPU hardware acceleration technologies
393 from Intel and AMD, VMware ran a few experiments to compare these
394 techniques to guest paravirtualization technique on VMware's platform.
395 These hardware assisted virtualization techniques have outperformed the
396 performance benefits provided by VMI in most of the workloads. VMware
397 expects that these hardware features will be ubiquitous in a couple of
398 years, as a result, VMware has started a phased retirement of this
399 feature from the hypervisor. We will be removing this feature from the
400 Kernel too. Right now we are targeting 2.6.37 but can retire earlier if
401 technical reasons (read opportunity to remove major chunk of pvops)
402 arise.
403
404 Please note that VMI has always been an optimization and non-VMI kernels
405 still work fine on VMware's platform.
406 Latest versions of VMware's product which support VMI are,
407 Workstation 7.0 and VSphere 4.0 on ESX side, future maintenance
408 releases for these products will continue supporting VMI.
409
410 For more details about VMI retirement take a look at this,
411 http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html
412
413Who: Alok N Kataria <akataria@vmware.com>
414
415----------------------------
416
417What: Support for lcd_switch and display_get in asus-laptop driver 389What: Support for lcd_switch and display_get in asus-laptop driver
418When: March 2010 390When: March 2010
419Why: These two features use non-standard interfaces. There are the 391Why: These two features use non-standard interfaces. There are the
diff --git a/Documentation/filesystems/ocfs2.txt b/Documentation/filesystems/ocfs2.txt
index 1f7ae144f6d..5393e661169 100644
--- a/Documentation/filesystems/ocfs2.txt
+++ b/Documentation/filesystems/ocfs2.txt
@@ -87,3 +87,10 @@ dir_resv_level= (*) By default, directory reservations will scale with file
87 reservations - users should rarely need to change this 87 reservations - users should rarely need to change this
88 value. If allocation reservations are turned off, this 88 value. If allocation reservations are turned off, this
89 option will have no effect. 89 option will have no effect.
90coherency=full (*) Disallow concurrent O_DIRECT writes; the cluster inode
91 lock will be taken to force other nodes to drop their caches,
92 so full cluster coherency is guaranteed even
93 for O_DIRECT writes.
94coherency=buffered Allow concurrent O_DIRECT writes without an EX lock among
95 nodes, which gains high performance at the risk of getting
96 stale data on other nodes.
diff --git a/Documentation/gpio.txt b/Documentation/gpio.txt
index d96a6dba574..9633da01ff4 100644
--- a/Documentation/gpio.txt
+++ b/Documentation/gpio.txt
@@ -109,17 +109,19 @@ use numbers 2000-2063 to identify GPIOs in a bank of I2C GPIO expanders.
109 109
110If you want to initialize a structure with an invalid GPIO number, use 110If you want to initialize a structure with an invalid GPIO number, use
111some negative number (perhaps "-EINVAL"); that will never be valid. To 111some negative number (perhaps "-EINVAL"); that will never be valid. To
112test if a number could reference a GPIO, you may use this predicate: 112test if such a number from such a structure could reference a GPIO, you
113may use this predicate:
113 114
114 int gpio_is_valid(int number); 115 int gpio_is_valid(int number);
115 116
116A number that's not valid will be rejected by calls which may request 117A number that's not valid will be rejected by calls which may request
117or free GPIOs (see below). Other numbers may also be rejected; for 118or free GPIOs (see below). Other numbers may also be rejected; for
118example, a number might be valid but unused on a given board. 119example, a number might be valid but temporarily unused on a given board.
119
120Whether a platform supports multiple GPIO controllers is currently a
121platform-specific implementation issue.
122 120
121Whether a platform supports multiple GPIO controllers is a platform-specific
122implementation issue, as are whether that support can leave "holes" in the space
123of GPIO numbers, and whether new controllers can be added at runtime. Such issues
124can affect things including whether adjacent GPIO numbers are both valid.
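
As a small, hypothetical example of the gpio_is_valid() pattern (the
my_platform_data structure, its reset_gpio field and the "my-reset" label are
made up for illustration), a driver with an optional GPIO line might do:

    #include <linux/gpio.h>

    /* Boards without the line initialize reset_gpio to e.g. -EINVAL. */
    static int my_setup_reset_line(struct my_platform_data *pdata)
    {
            int err;

            if (!gpio_is_valid(pdata->reset_gpio))
                    return 0;       /* optional line not wired up on this board */

            err = gpio_request(pdata->reset_gpio, "my-reset");
            if (err)
                    return err;

            return gpio_direction_output(pdata->reset_gpio, 1);
    }
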
123 125
124Using GPIOs 126Using GPIOs
125----------- 127-----------
@@ -480,12 +482,16 @@ To support this framework, a platform's Kconfig will "select" either
480ARCH_REQUIRE_GPIOLIB or ARCH_WANT_OPTIONAL_GPIOLIB 482ARCH_REQUIRE_GPIOLIB or ARCH_WANT_OPTIONAL_GPIOLIB
481and arrange that its <asm/gpio.h> includes <asm-generic/gpio.h> and defines 483and arrange that its <asm/gpio.h> includes <asm-generic/gpio.h> and defines
482three functions: gpio_get_value(), gpio_set_value(), and gpio_cansleep(). 484three functions: gpio_get_value(), gpio_set_value(), and gpio_cansleep().
483They may also want to provide a custom value for ARCH_NR_GPIOS.
484 485
485ARCH_REQUIRE_GPIOLIB means that the gpio-lib code will always get compiled 486It may also provide a custom value for ARCH_NR_GPIOS, so that it better
487reflects the number of GPIOs in actual use on that platform, without
488wasting static table space. (It should count both built-in/SoC GPIOs and
489also ones on GPIO expanders.)
490
491ARCH_REQUIRE_GPIOLIB means that the gpiolib code will always get compiled
486into the kernel on that architecture. 492into the kernel on that architecture.
487 493
488ARCH_WANT_OPTIONAL_GPIOLIB means the gpio-lib code defaults to off and the user 494ARCH_WANT_OPTIONAL_GPIOLIB means the gpiolib code defaults to off and the user
489can enable it and build it into the kernel optionally. 495can enable it and build it into the kernel optionally.
490 496
491If neither of these options are selected, the platform does not support 497If neither of these options are selected, the platform does not support
diff --git a/Documentation/hwmon/sysfs-interface b/Documentation/hwmon/sysfs-interface
index ff45d1f837c..48ceabedf55 100644
--- a/Documentation/hwmon/sysfs-interface
+++ b/Documentation/hwmon/sysfs-interface
@@ -91,12 +91,11 @@ name The chip name.
91 I2C devices get this attribute created automatically. 91 I2C devices get this attribute created automatically.
92 RO 92 RO
93 93
94update_rate The rate at which the chip will update readings. 94update_interval The interval at which the chip will update readings.
95 Unit: millisecond 95 Unit: millisecond
96 RW 96 RW
97 Some devices have a variable update rate. This attribute 97 Some devices have a variable update rate or interval.
98 can be used to change the update rate to the desired 98 This attribute can be used to change it to the desired value.
99 frequency.
100 99
101 100
102************ 101************
diff --git a/Documentation/kernel-doc-nano-HOWTO.txt b/Documentation/kernel-doc-nano-HOWTO.txt
index 27a52b35d55..3d8a97747f7 100644
--- a/Documentation/kernel-doc-nano-HOWTO.txt
+++ b/Documentation/kernel-doc-nano-HOWTO.txt
@@ -345,5 +345,10 @@ documentation, in <filename>, for the functions listed.
345section titled <section title> from <filename>. 345section titled <section title> from <filename>.
346Spaces are allowed in <section title>; do not quote the <section title>. 346Spaces are allowed in <section title>; do not quote the <section title>.
347 347
348!C<filename> is replaced by nothing, but makes the tools check that
349all DOC: sections and documented functions, symbols, etc. are used.
350This is useful when you use only !F/!P and want to verify
351that all documentation is included.
352
348Tim. 353Tim.
349*/ <twaugh@redhat.com> 354*/ <twaugh@redhat.com>
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index f084af0cb8e..02f21d9220c 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -455,7 +455,7 @@ and is between 256 and 4096 characters. It is defined in the file
455 [ARM] imx_timer1,OSTS,netx_timer,mpu_timer2, 455 [ARM] imx_timer1,OSTS,netx_timer,mpu_timer2,
456 pxa_timer,timer3,32k_counter,timer0_1 456 pxa_timer,timer3,32k_counter,timer0_1
457 [AVR32] avr32 457 [AVR32] avr32
458 [X86-32] pit,hpet,tsc,vmi-timer; 458 [X86-32] pit,hpet,tsc;
459 scx200_hrt on Geode; cyclone on IBM x440 459 scx200_hrt on Geode; cyclone on IBM x440
460 [MIPS] MIPS 460 [MIPS] MIPS
461 [PARISC] cr16 461 [PARISC] cr16
@@ -1974,15 +1974,18 @@ and is between 256 and 4096 characters. It is defined in the file
1974 force Enable ASPM even on devices that claim not to support it. 1974 force Enable ASPM even on devices that claim not to support it.
1975 WARNING: Forcing ASPM on may cause system lockups. 1975 WARNING: Forcing ASPM on may cause system lockups.
1976 1976
1977 pcie_ports= [PCIE] PCIe ports handling:
1978 auto Ask the BIOS whether or not to use native PCIe services
1979 associated with PCIe ports (PME, hot-plug, AER). Use
1980 them only if that is allowed by the BIOS.
1981 native Use native PCIe services associated with PCIe ports
1982 unconditionally.
1983 compat Treat PCIe ports as PCI-to-PCI bridges, disable the PCIe
1984 ports driver.
1985
1977 pcie_pme= [PCIE,PM] Native PCIe PME signaling options: 1986 pcie_pme= [PCIE,PM] Native PCIe PME signaling options:
1978 Format: {auto|force}[,nomsi]
1979 auto Use native PCIe PME signaling if the BIOS allows the
1980 kernel to control PCIe config registers of root ports.
1981 force Use native PCIe PME signaling even if the BIOS refuses
1982 to allow the kernel to control the relevant PCIe config
1983 registers.
1984 nomsi Do not use MSI for native PCIe PME signaling (this makes 1987 nomsi Do not use MSI for native PCIe PME signaling (this makes
1985 all PCIe root ports use INTx for everything). 1988 all PCIe root ports use INTx for all services).
1986 1989
1987 pcmv= [HW,PCMCIA] BadgePAD 4 1990 pcmv= [HW,PCMCIA] BadgePAD 4
1988 1991
@@ -2150,6 +2153,11 @@ and is between 256 and 4096 characters. It is defined in the file
2150 Reserves a hole at the top of the kernel virtual 2153 Reserves a hole at the top of the kernel virtual
2151 address space. 2154 address space.
2152 2155
2156 reservelow= [X86]
2157 Format: nn[K]
2158 Set the amount of memory to reserve for BIOS at
2159 the bottom of the address space.
2160
2153 reset_devices [KNL] Force drivers to reset the underlying device 2161 reset_devices [KNL] Force drivers to reset the underlying device
2154 during initialization. 2162 during initialization.
2155 2163
@@ -2162,6 +2170,11 @@ and is between 256 and 4096 characters. It is defined in the file
2162 in <PAGE_SIZE> units (needed only for swap files). 2170 in <PAGE_SIZE> units (needed only for swap files).
2163 See Documentation/power/swsusp-and-swap-files.txt 2171 See Documentation/power/swsusp-and-swap-files.txt
2164 2172
2173 hibernate= [HIBERNATION]
2174 noresume Don't check if there's a hibernation image
2175 present during boot.
2176 nocompress Don't compress/decompress hibernation images.
2177
2165 retain_initrd [RAM] Keep initrd memory after extraction 2178 retain_initrd [RAM] Keep initrd memory after extraction
2166 2179
2167 rhash_entries= [KNL,NET] 2180 rhash_entries= [KNL,NET]
@@ -2432,6 +2445,10 @@ and is between 256 and 4096 characters. It is defined in the file
2432 disables clocksource verification at runtime. 2445 disables clocksource verification at runtime.
2433 Used to enable high-resolution timer mode on older 2446 Used to enable high-resolution timer mode on older
2434 hardware, and in virtualized environment. 2447 hardware, and in virtualized environment.
2448 [x86] noirqtime: Do not use TSC to do irq accounting.
2449 Used to disable IRQ_TIME_ACCOUNTING at run time on
2450 platforms where RDTSC is slow and this accounting
2451 can add overhead.
2435 2452
2436 turbografx.map[2|3]= [HW,JOY] 2453 turbografx.map[2|3]= [HW,JOY]
2437 TurboGraFX parallel port interface 2454 TurboGraFX parallel port interface
diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 1762b81fcdf..741fe66d6ec 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -542,9 +542,11 @@ Kprobes does not use mutexes or allocate memory except during
542registration and unregistration. 542registration and unregistration.
543 543
544Probe handlers are run with preemption disabled. Depending on the 544Probe handlers are run with preemption disabled. Depending on the
545architecture, handlers may also run with interrupts disabled. In any 545architecture and optimization state, handlers may also run with
546case, your handler should not yield the CPU (e.g., by attempting to 546interrupts disabled (e.g., kretprobe handlers and optimized kprobe
547acquire a semaphore). 547handlers run without interrupts disabled on x86/x86-64). In any case,
548your handler should not yield the CPU (e.g., by attempting to acquire
549a semaphore).
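
For instance, a minimal probe that stays within these constraints might look
like the sketch below; the probed symbol is just an example target, the my_*
names are invented, and the handler restricts itself to non-sleeping work such
as printk():

    #include <linux/module.h>
    #include <linux/kprobes.h>

    /*
     * Runs with preemption (and possibly interrupts) disabled: no
     * mutex_lock(), down(), kmalloc(GFP_KERNEL) or anything else that
     * could sleep or yield the CPU.
     */
    static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
    {
            printk(KERN_INFO "kprobe hit at %s\n", p->symbol_name);
            return 0;
    }

    static struct kprobe my_kp = {
            .symbol_name = "do_fork",       /* example target */
            .pre_handler = my_pre_handler,
    };

    static int __init my_init(void)
    {
            return register_kprobe(&my_kp);
    }

    static void __exit my_exit(void)
    {
            unregister_kprobe(&my_kp);
    }

    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");
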
548 550
549Since a return probe is implemented by replacing the return 551Since a return probe is implemented by replacing the return
550address with the trampoline's address, stack backtraces and calls 552address with the trampoline's address, stack backtraces and calls
diff --git a/Documentation/lguest/lguest.c b/Documentation/lguest/lguest.c
index 8a6a8c6d498..dc73bc54cc4 100644
--- a/Documentation/lguest/lguest.c
+++ b/Documentation/lguest/lguest.c
@@ -1640,15 +1640,6 @@ static void blk_request(struct virtqueue *vq)
1640 off = out->sector * 512; 1640 off = out->sector * 512;
1641 1641
1642 /* 1642 /*
1643 * The block device implements "barriers", where the Guest indicates
1644 * that it wants all previous writes to occur before this write. We
1645 * don't have a way of asking our kernel to do a barrier, so we just
1646 * synchronize all the data in the file. Pretty poor, no?
1647 */
1648 if (out->type & VIRTIO_BLK_T_BARRIER)
1649 fdatasync(vblk->fd);
1650
1651 /*
1652 * In general the virtio block driver is allowed to try SCSI commands. 1643 * In general the virtio block driver is allowed to try SCSI commands.
1653 * It'd be nice if we supported eject, for example, but we don't. 1644 * It'd be nice if we supported eject, for example, but we don't.
1654 */ 1645 */
@@ -1680,6 +1671,13 @@ static void blk_request(struct virtqueue *vq)
1680 /* Die, bad Guest, die. */ 1671 /* Die, bad Guest, die. */
1681 errx(1, "Write past end %llu+%u", off, ret); 1672 errx(1, "Write past end %llu+%u", off, ret);
1682 } 1673 }
1674
1675 wlen = sizeof(*in);
1676 *in = (ret >= 0 ? VIRTIO_BLK_S_OK : VIRTIO_BLK_S_IOERR);
1677 } else if (out->type & VIRTIO_BLK_T_FLUSH) {
1678 /* Flush */
1679 ret = fdatasync(vblk->fd);
1680 verbose("FLUSH fdatasync: %i\n", ret);
1683 wlen = sizeof(*in); 1681 wlen = sizeof(*in);
1684 *in = (ret >= 0 ? VIRTIO_BLK_S_OK : VIRTIO_BLK_S_IOERR); 1682 *in = (ret >= 0 ? VIRTIO_BLK_S_OK : VIRTIO_BLK_S_IOERR);
1685 } else { 1683 } else {
@@ -1703,15 +1701,6 @@ static void blk_request(struct virtqueue *vq)
1703 } 1701 }
1704 } 1702 }
1705 1703
1706 /*
1707 * OK, so we noted that it was pretty poor to use an fdatasync as a
1708 * barrier. But Christoph Hellwig points out that we need a sync
1709 * *afterwards* as well: "Barriers specify no reordering to the front
1710 * or the back." And Jens Axboe confirmed it, so here we are:
1711 */
1712 if (out->type & VIRTIO_BLK_T_BARRIER)
1713 fdatasync(vblk->fd);
1714
1715 /* Finished that request. */ 1704 /* Finished that request. */
1716 add_used(vq, head, wlen); 1705 add_used(vq, head, wlen);
1717} 1706}
@@ -1736,8 +1725,8 @@ static void setup_block_file(const char *filename)
1736 vblk->fd = open_or_die(filename, O_RDWR|O_LARGEFILE); 1725 vblk->fd = open_or_die(filename, O_RDWR|O_LARGEFILE);
1737 vblk->len = lseek64(vblk->fd, 0, SEEK_END); 1726 vblk->len = lseek64(vblk->fd, 0, SEEK_END);
1738 1727
1739 /* We support barriers. */ 1728 /* We support FLUSH. */
1740 add_feature(dev, VIRTIO_BLK_F_BARRIER); 1729 add_feature(dev, VIRTIO_BLK_F_FLUSH);
1741 1730
1742 /* Tell Guest how many sectors this device has. */ 1731 /* Tell Guest how many sectors this device has. */
1743 conf.capacity = cpu_to_le64(vblk->len / 512); 1732 conf.capacity = cpu_to_le64(vblk->len / 512);
diff --git a/Documentation/mutex-design.txt b/Documentation/mutex-design.txt
index c91ccc0720f..38c10fd7f41 100644
--- a/Documentation/mutex-design.txt
+++ b/Documentation/mutex-design.txt
@@ -9,7 +9,7 @@ firstly, there's nothing wrong with semaphores. But if the simpler
9mutex semantics are sufficient for your code, then there are a couple 9mutex semantics are sufficient for your code, then there are a couple
10of advantages of mutexes: 10of advantages of mutexes:
11 11
12 - 'struct mutex' is smaller on most architectures: .e.g on x86, 12 - 'struct mutex' is smaller on most architectures: E.g. on x86,
13 'struct semaphore' is 20 bytes, 'struct mutex' is 16 bytes. 13 'struct semaphore' is 20 bytes, 'struct mutex' is 16 bytes.
14 A smaller structure size means less RAM footprint, and better 14 A smaller structure size means less RAM footprint, and better
15 CPU-cache utilization. 15 CPU-cache utilization.
@@ -136,3 +136,4 @@ the APIs of 'struct mutex' have been streamlined:
136 void mutex_lock_nested(struct mutex *lock, unsigned int subclass); 136 void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
137 int mutex_lock_interruptible_nested(struct mutex *lock, 137 int mutex_lock_interruptible_nested(struct mutex *lock,
138 unsigned int subclass); 138 unsigned int subclass);
139 int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
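
The last helper is typically used for "drop a reference and tear down on the
final put" patterns. A hypothetical sketch (my_obj, obj_list_lock and the list
bookkeeping are invented for illustration, and obj->node is assumed to live on
a list protected by obj_list_lock):

    static void my_obj_put(struct my_obj *obj)
    {
            /*
             * atomic_dec_and_mutex_lock() returns holding obj_list_lock
             * only when the count has dropped to zero; otherwise it just
             * decrements the count and returns 0.
             */
            if (atomic_dec_and_mutex_lock(&obj->refcount, &obj_list_lock)) {
                    list_del(&obj->node);
                    mutex_unlock(&obj_list_lock);
                    kfree(obj);
            }
    }
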
diff --git a/Documentation/networking/e1000.txt b/Documentation/networking/e1000.txt
index 2df71861e57..d9271e74e48 100644
--- a/Documentation/networking/e1000.txt
+++ b/Documentation/networking/e1000.txt
@@ -1,82 +1,35 @@
1Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters 1Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters
2=============================================================== 2===============================================================
3 3
4September 26, 2006 4Intel Gigabit Linux driver.
5 5Copyright(c) 1999 - 2010 Intel Corporation.
6 6
7Contents 7Contents
8======== 8========
9 9
10- In This Release
11- Identifying Your Adapter 10- Identifying Your Adapter
12- Building and Installation
13- Command Line Parameters 11- Command Line Parameters
14- Speed and Duplex Configuration 12- Speed and Duplex Configuration
15- Additional Configurations 13- Additional Configurations
16- Known Issues
17- Support 14- Support
18 15
19
20In This Release
21===============
22
23This file describes the Linux* Base Driver for the Intel(R) PRO/1000 Family
24of Adapters. This driver includes support for Itanium(R)2-based systems.
25
26For questions related to hardware requirements, refer to the documentation
27supplied with your Intel PRO/1000 adapter. All hardware requirements listed
28apply to use with Linux.
29
30The following features are now available in supported kernels:
31 - Native VLANs
32 - Channel Bonding (teaming)
33 - SNMP
34
35Channel Bonding documentation can be found in the Linux kernel source:
36/Documentation/networking/bonding.txt
37
38The driver information previously displayed in the /proc filesystem is not
39supported in this release. Alternatively, you can use ethtool (version 1.6
40or later), lspci, and ifconfig to obtain the same information.
41
42Instructions on updating ethtool can be found in the section "Additional
43Configurations" later in this document.
44
45NOTE: The Intel(R) 82562v 10/100 Network Connection only provides 10/100
46support.
47
48
49Identifying Your Adapter 16Identifying Your Adapter
50======================== 17========================
51 18
52For more information on how to identify your adapter, go to the Adapter & 19For more information on how to identify your adapter, go to the Adapter &
53Driver ID Guide at: 20Driver ID Guide at:
54 21
55 http://support.intel.com/support/network/adapter/pro100/21397.htm 22 http://support.intel.com/support/go/network/adapter/idguide.htm
56 23
57For the latest Intel network drivers for Linux, refer to the following 24For the latest Intel network drivers for Linux, refer to the following
58website. In the search field, enter your adapter name or type, or use the 25website. In the search field, enter your adapter name or type, or use the
59networking link on the left to search for your adapter: 26networking link on the left to search for your adapter:
60 27
61 http://downloadfinder.intel.com/scripts-df/support_intel.asp 28 http://support.intel.com/support/go/network/adapter/home.htm
62
63 29
64Command Line Parameters 30Command Line Parameters
65======================= 31=======================
66 32
67If the driver is built as a module, the following optional parameters
68are used by entering them on the command line with the modprobe command
69using this syntax:
70
71 modprobe e1000 [<option>=<VAL1>,<VAL2>,...]
72
73For example, with two PRO/1000 PCI adapters, entering:
74
75 modprobe e1000 TxDescriptors=80,128
76
77loads the e1000 driver with 80 TX descriptors for the first adapter and
78128 TX descriptors for the second adapter.
79
80The default value for each parameter is generally the recommended setting, 33The default value for each parameter is generally the recommended setting,
81unless otherwise noted. 34unless otherwise noted.
82 35
@@ -89,10 +42,6 @@ NOTES: For more information about the AutoNeg, Duplex, and Speed
89 parameters, see the application note at: 42 parameters, see the application note at:
90 http://www.intel.com/design/network/applnots/ap450.htm 43 http://www.intel.com/design/network/applnots/ap450.htm
91 44
92 A descriptor describes a data buffer and attributes related to
93 the data buffer. This information is accessed by the hardware.
94
95
96AutoNeg 45AutoNeg
97------- 46-------
98(Supported only on adapters with copper connections) 47(Supported only on adapters with copper connections)
@@ -106,7 +55,6 @@ Duplex parameters must not be specified.
106NOTE: Refer to the Speed and Duplex section of this readme for more 55NOTE: Refer to the Speed and Duplex section of this readme for more
107 information on the AutoNeg parameter. 56 information on the AutoNeg parameter.
108 57
109
110Duplex 58Duplex
111------ 59------
112(Supported only on adapters with copper connections) 60(Supported only on adapters with copper connections)
@@ -119,7 +67,6 @@ set to auto-negotiate, the board auto-detects the correct duplex. If the
119link partner is forced (either full or half), Duplex defaults to half- 67link partner is forced (either full or half), Duplex defaults to half-
120duplex. 68duplex.
121 69
122
123FlowControl 70FlowControl
124----------- 71-----------
125Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx) 72Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
@@ -128,16 +75,16 @@ Default Value: Reads flow control settings from the EEPROM
128This parameter controls the automatic generation(Tx) and response(Rx) 75This parameter controls the automatic generation(Tx) and response(Rx)
129to Ethernet PAUSE frames. 76to Ethernet PAUSE frames.
130 77
131
132InterruptThrottleRate 78InterruptThrottleRate
133--------------------- 79---------------------
134(not supported on Intel(R) 82542, 82543 or 82544-based adapters) 80(not supported on Intel(R) 82542, 82543 or 82544-based adapters)
135Valid Range: 0,1,3,100-100000 (0=off, 1=dynamic, 3=dynamic conservative) 81Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
82 4=simplified balancing)
136Default Value: 3 83Default Value: 3
137 84
138The driver can limit the amount of interrupts per second that the adapter 85The driver can limit the amount of interrupts per second that the adapter
139will generate for incoming packets. It does this by writing a value to the 86will generate for incoming packets. It does this by writing a value to the
140adapter that is based on the maximum amount of interrupts that the adapter 87adapter that is based on the maximum amount of interrupts that the adapter
141will generate per second. 88will generate per second.
142 89
143Setting InterruptThrottleRate to a value greater or equal to 100 90Setting InterruptThrottleRate to a value greater or equal to 100
@@ -146,37 +93,43 @@ per second, even if more packets have come in. This reduces interrupt
146load on the system and can lower CPU utilization under heavy load, 93load on the system and can lower CPU utilization under heavy load,
147but will increase latency as packets are not processed as quickly. 94but will increase latency as packets are not processed as quickly.
148 95
149The default behaviour of the driver previously assumed a static 96The default behaviour of the driver previously assumed a static
150InterruptThrottleRate value of 8000, providing a good fallback value for 97InterruptThrottleRate value of 8000, providing a good fallback value for
151all traffic types, but lacking in small packet performance and latency. 98
152The hardware can handle many more small packets per second however, and 99The hardware can handle many more small packets per second however, and
153for this reason an adaptive interrupt moderation algorithm was implemented. 100for this reason an adaptive interrupt moderation algorithm was implemented.
154 101
155Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in which 102Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in which
156it dynamically adjusts the InterruptThrottleRate value based on the traffic 103it dynamically adjusts the InterruptThrottleRate value based on the traffic
157that it receives. After determining the type of incoming traffic in the last 104that it receives. After determining the type of incoming traffic in the last
158timeframe, it will adjust the InterruptThrottleRate to an appropriate value 105timeframe, it will adjust the InterruptThrottleRate to an appropriate value
159for that traffic. 106for that traffic.
160 107
161The algorithm classifies the incoming traffic every interval into 108The algorithm classifies the incoming traffic every interval into
162classes. Once the class is determined, the InterruptThrottleRate value is 109classes. Once the class is determined, the InterruptThrottleRate value is
163adjusted to suit that traffic type the best. There are three classes defined: 110adjusted to suit that traffic type the best. There are three classes defined:
164"Bulk traffic", for large amounts of packets of normal size; "Low latency", 111"Bulk traffic", for large amounts of packets of normal size; "Low latency",
165for small amounts of traffic and/or a significant percentage of small 112for small amounts of traffic and/or a significant percentage of small
166packets; and "Lowest latency", for almost completely small packets or 113packets; and "Lowest latency", for almost completely small packets or
167minimal traffic. 114minimal traffic.
168 115
169In dynamic conservative mode, the InterruptThrottleRate value is set to 4000 116In dynamic conservative mode, the InterruptThrottleRate value is set to 4000
170for traffic that falls in class "Bulk traffic". If traffic falls in the "Low 117for traffic that falls in class "Bulk traffic". If traffic falls in the "Low
171latency" or "Lowest latency" class, the InterruptThrottleRate is increased 118latency" or "Lowest latency" class, the InterruptThrottleRate is increased
172stepwise to 20000. This default mode is suitable for most applications. 119stepwise to 20000. This default mode is suitable for most applications.
173 120
174For situations where low latency is vital such as cluster or 121For situations where low latency is vital such as cluster or
175grid computing, the algorithm can reduce latency even more when 122grid computing, the algorithm can reduce latency even more when
176InterruptThrottleRate is set to mode 1. In this mode, which operates 123InterruptThrottleRate is set to mode 1. In this mode, which operates
177the same as mode 3, the InterruptThrottleRate will be increased stepwise to 124the same as mode 3, the InterruptThrottleRate will be increased stepwise to
17870000 for traffic in class "Lowest latency". 12570000 for traffic in class "Lowest latency".
179 126
127In simplified mode the interrupt rate is based on the ratio of Tx and
128Rx traffic. If the bytes per second rate is approximately equal, the
129interrupt rate will drop as low as 2000 interrupts per second. If the
130traffic is mostly transmit or mostly receive, the interrupt rate could
131be as high as 8000.
132
180Setting InterruptThrottleRate to 0 turns off any interrupt moderation 133Setting InterruptThrottleRate to 0 turns off any interrupt moderation
181and may improve small packet latency, but is generally not suitable 134and may improve small packet latency, but is generally not suitable
182for bulk throughput traffic. 135for bulk throughput traffic.
@@ -212,8 +165,6 @@ NOTE: When e1000 is loaded with default settings and multiple adapters
212 be platform-specific. If CPU utilization is not a concern, use 165 be platform-specific. If CPU utilization is not a concern, use
213 RX_POLLING (NAPI) and default driver settings. 166 RX_POLLING (NAPI) and default driver settings.
214 167
215
216
217RxDescriptors 168RxDescriptors
218------------- 169-------------
219Valid Range: 80-256 for 82542 and 82543-based adapters 170Valid Range: 80-256 for 82542 and 82543-based adapters
@@ -225,15 +176,14 @@ by the driver. Increasing this value allows the driver to buffer more
225incoming packets, at the expense of increased system memory utilization. 176incoming packets, at the expense of increased system memory utilization.
226 177
227Each descriptor is 16 bytes. A receive buffer is also allocated for each 178Each descriptor is 16 bytes. A receive buffer is also allocated for each
228descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending 179descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending
229on the MTU setting. The maximum MTU size is 16110. 180on the MTU setting. The maximum MTU size is 16110.
230 181
231NOTE: MTU designates the frame size. It only needs to be set for Jumbo 182NOTE: MTU designates the frame size. It only needs to be set for Jumbo
232 Frames. Depending on the available system resources, the request 183 Frames. Depending on the available system resources, the request
233 for a higher number of receive descriptors may be denied. In this 184 for a higher number of receive descriptors may be denied. In this
234 case, use a lower number. 185 case, use a lower number.
235 186
236
237RxIntDelay 187RxIntDelay
238---------- 188----------
239Valid Range: 0-65535 (0=off) 189Valid Range: 0-65535 (0=off)
@@ -254,7 +204,6 @@ CAUTION: When setting RxIntDelay to a value other than 0, adapters may
254 restoring the network connection. To eliminate the potential 204 restoring the network connection. To eliminate the potential
255 for the hang ensure that RxIntDelay is set to 0. 205 for the hang ensure that RxIntDelay is set to 0.
256 206
257
258RxAbsIntDelay 207RxAbsIntDelay
259------------- 208-------------
260(This parameter is supported only on 82540, 82545 and later adapters.) 209(This parameter is supported only on 82540, 82545 and later adapters.)
@@ -268,7 +217,6 @@ packet is received within the set amount of time. Proper tuning,
268along with RxIntDelay, may improve traffic throughput in specific network 217along with RxIntDelay, may improve traffic throughput in specific network
269conditions. 218conditions.
270 219
271
272Speed 220Speed
273----- 221-----
274(This parameter is supported only on adapters with copper connections.) 222(This parameter is supported only on adapters with copper connections.)
@@ -280,7 +228,6 @@ Speed forces the line speed to the specified value in megabits per second
280partner is set to auto-negotiate, the board will auto-detect the correct 228partner is set to auto-negotiate, the board will auto-detect the correct
281speed. Duplex should also be set when Speed is set to either 10 or 100. 229speed. Duplex should also be set when Speed is set to either 10 or 100.
282 230
283
284TxDescriptors 231TxDescriptors
285------------- 232-------------
286Valid Range: 80-256 for 82542 and 82543-based adapters 233Valid Range: 80-256 for 82542 and 82543-based adapters
@@ -295,6 +242,36 @@ NOTE: Depending on the available system resources, the request for a
295 higher number of transmit descriptors may be denied. In this case, 242 higher number of transmit descriptors may be denied. In this case,
296 use a lower number. 243 use a lower number.
297 244
245TxDescriptorStep
246----------------
247Valid Range: 1 (use every Tx Descriptor)
248 4 (use every 4th Tx Descriptor)
249
250Default Value: 1 (use every Tx Descriptor)
251
252On certain non-Intel architectures, it has been observed that intense TX
253traffic bursts of short packets may result in an improper descriptor
254writeback. If this occurs, the driver will report a "TX Timeout" and reset
255the adapter, after which the transmit flow will restart, though data may
256have stalled for as much as 10 seconds before it resumes.
257
258The improper writeback does not occur on the first descriptor in a system
259memory cache-line, which is typically 32 bytes, or 4 descriptors long.
260
261Setting TxDescriptorStep to a value of 4 will ensure that all TX descriptors
262are aligned to the start of a system memory cache line, and so this problem
263will not occur.
264
265NOTES: Setting TxDescriptorStep to 4 effectively reduces the number of
266 TxDescriptors available for transmits to 1/4 of the normal allocation.
267 This has a possible negative performance impact, which may be
268 compensated for by allocating more descriptors using the TxDescriptors
269 module parameter.
270
271 There are other conditions which may result in "TX Timeout", which will
272 not be resolved by the use of the TxDescriptorStep parameter. As the
273 issue addressed by this parameter has never been observed on Intel
274 Architecture platforms, it should not be used on Intel platforms.
298 275
299TxIntDelay 276TxIntDelay
300---------- 277----------
@@ -307,7 +284,6 @@ efficiency if properly tuned for specific network traffic. If the
307system is reporting dropped transmits, this value may be set too high 284system is reporting dropped transmits, this value may be set too high
308causing the driver to run out of available transmit descriptors. 285causing the driver to run out of available transmit descriptors.
309 286
310
311TxAbsIntDelay 287TxAbsIntDelay
312------------- 288-------------
313(This parameter is supported only on 82540, 82545 and later adapters.) 289(This parameter is supported only on 82540, 82545 and later adapters.)
@@ -330,6 +306,35 @@ Default Value: 1
330A value of '1' indicates that the driver should enable IP checksum 306A value of '1' indicates that the driver should enable IP checksum
331offload for received packets (both UDP and TCP) to the adapter hardware. 307offload for received packets (both UDP and TCP) to the adapter hardware.
332 308
309Copybreak
310---------
311Valid Range: 0-xxxxxxx (0=off)
312Default Value: 256
313Usage: insmod e1000.ko copybreak=128
314
315Driver copies all packets below or equaling this size to a fresh Rx
316buffer before handing it up the stack.
317
318This parameter is different from other parameters in that it is a
319single parameter (not a 1,1,1 etc. list) applied to all driver instances, and
320it is also available at runtime at
321/sys/module/e1000/parameters/copybreak
322
323SmartPowerDownEnable
324--------------------
325Valid Range: 0-1
326Default Value: 0 (disabled)
327
328Allows PHY to turn off in lower power states. The user can turn off
329this parameter in supported chipsets.
330
331KumeranLockLoss
332---------------
333Valid Range: 0-1
334Default Value: 1 (enabled)
335
336This workaround skips resetting the PHY at shutdown for the initial
337silicon releases of ICH8 systems.
333 338
334Speed and Duplex Configuration 339Speed and Duplex Configuration
335============================== 340==============================
@@ -385,40 +390,9 @@ If the link partner is forced to a specific speed and duplex, then this
385parameter should not be used. Instead, use the Speed and Duplex parameters 390parameter should not be used. Instead, use the Speed and Duplex parameters
386previously mentioned to force the adapter to the same speed and duplex. 391previously mentioned to force the adapter to the same speed and duplex.
387 392
388
389Additional Configurations 393Additional Configurations
390========================= 394=========================
391 395
392 Configuring the Driver on Different Distributions
393 -------------------------------------------------
394 Configuring a network driver to load properly when the system is started
395 is distribution dependent. Typically, the configuration process involves
396 adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well
397 as editing other system startup scripts and/or configuration files. Many
398 popular Linux distributions ship with tools to make these changes for you.
399 To learn the proper way to configure a network device for your system,
400 refer to your distribution documentation. If during this process you are
401 asked for the driver or module name, the name for the Linux Base Driver
402 for the Intel(R) PRO/1000 Family of Adapters is e1000.
403
404 As an example, if you install the e1000 driver for two PRO/1000 adapters
405 (eth0 and eth1) and set the speed and duplex to 10full and 100half, add
406 the following to modules.conf or or modprobe.conf:
407
408 alias eth0 e1000
409 alias eth1 e1000
410 options e1000 Speed=10,100 Duplex=2,1
411
412 Viewing Link Messages
413 ---------------------
414 Link messages will not be displayed to the console if the distribution is
415 restricting system messages. In order to see network driver link messages
416 on your console, set dmesg to eight by entering the following:
417
418 dmesg -n 8
419
420 NOTE: This setting is not saved across reboots.
421
422 Jumbo Frames 396 Jumbo Frames
423 ------------ 397 ------------
424 Jumbo Frames support is enabled by changing the MTU to a value larger than 398 Jumbo Frames support is enabled by changing the MTU to a value larger than
@@ -437,9 +411,11 @@ Additional Configurations
437 setting in a different location. 411 setting in a different location.
438 412
439 Notes: 413 Notes:
440 414 Degradation in throughput performance may be observed in some Jumbo frames
441 - To enable Jumbo Frames, increase the MTU size on the interface beyond 415 environments. If this is observed, increasing the application's socket buffer
442 1500. 416 size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
417 See the specific application manual and /usr/src/linux*/Documentation/
418 networking/ip-sysctl.txt for more details.
443 419
444 - The maximum MTU setting for Jumbo Frames is 16110. This value coincides 420 - The maximum MTU setting for Jumbo Frames is 16110. This value coincides
445 with the maximum Jumbo Frames size of 16128. 421 with the maximum Jumbo Frames size of 16128.
@@ -447,40 +423,11 @@ Additional Configurations
447 - Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or 423 - Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or
448 loss of link. 424 loss of link.
449 425
450 - Some Intel gigabit adapters that support Jumbo Frames have a frame size
451 limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
452 The adapters with this limitation are based on the Intel(R) 82571EB,
453 82572EI, 82573L and 80003ES2LAN controller. These correspond to the
454 following product names:
455 Intel(R) PRO/1000 PT Server Adapter
456 Intel(R) PRO/1000 PT Desktop Adapter
457 Intel(R) PRO/1000 PT Network Connection
458 Intel(R) PRO/1000 PT Dual Port Server Adapter
459 Intel(R) PRO/1000 PT Dual Port Network Connection
460 Intel(R) PRO/1000 PF Server Adapter
461 Intel(R) PRO/1000 PF Network Connection
462 Intel(R) PRO/1000 PF Dual Port Server Adapter
463 Intel(R) PRO/1000 PB Server Connection
464 Intel(R) PRO/1000 PL Network Connection
465 Intel(R) PRO/1000 EB Network Connection with I/O Acceleration
466 Intel(R) PRO/1000 EB Backplane Connection with I/O Acceleration
467 Intel(R) PRO/1000 PT Quad Port Server Adapter
468
469 - Adapters based on the Intel(R) 82542 and 82573V/E controller do not 426 - Adapters based on the Intel(R) 82542 and 82573V/E controller do not
470 support Jumbo Frames. These correspond to the following product names: 427 support Jumbo Frames. These correspond to the following product names:
471 Intel(R) PRO/1000 Gigabit Server Adapter 428 Intel(R) PRO/1000 Gigabit Server Adapter
472 Intel(R) PRO/1000 PM Network Connection 429 Intel(R) PRO/1000 PM Network Connection
473 430
474 - The following adapters do not support Jumbo Frames:
475 Intel(R) 82562V 10/100 Network Connection
476 Intel(R) 82566DM Gigabit Network Connection
477 Intel(R) 82566DC Gigabit Network Connection
478 Intel(R) 82566MM Gigabit Network Connection
479 Intel(R) 82566MC Gigabit Network Connection
480 Intel(R) 82562GT 10/100 Network Connection
481 Intel(R) 82562G 10/100 Network Connection
482
483
484 Ethtool 431 Ethtool
485 ------- 432 -------
486 The driver utilizes the ethtool interface for driver configuration and 433 The driver utilizes the ethtool interface for driver configuration and
@@ -490,142 +437,14 @@ Additional Configurations
490 The latest release of ethtool can be found from 437 The latest release of ethtool can be found from
491 http://sourceforge.net/projects/gkernel. 438 http://sourceforge.net/projects/gkernel.
492 439
493 NOTE: Ethtool 1.6 only supports a limited set of ethtool options. Support
494 for a more complete ethtool feature set can be enabled by upgrading
495 ethtool to ethtool-1.8.1.
496
497 Enabling Wake on LAN* (WoL) 440 Enabling Wake on LAN* (WoL)
498 --------------------------- 441 ---------------------------
499 WoL is configured through the Ethtool* utility. Ethtool is included with 442 WoL is configured through the Ethtool* utility.
500 all versions of Red Hat after Red Hat 7.2. For other Linux distributions,
501 download and install Ethtool from the following website:
502 http://sourceforge.net/projects/gkernel.
503
504 For instructions on enabling WoL with Ethtool, refer to the website listed
505 above.
506 443
507 WoL will be enabled on the system during the next shut down or reboot. 444 WoL will be enabled on the system during the next shut down or reboot.
508 For this driver version, in order to enable WoL, the e1000 driver must be 445 For this driver version, in order to enable WoL, the e1000 driver must be
509 loaded when shutting down or rebooting the system. 446 loaded when shutting down or rebooting the system.
510 447
511 Wake On LAN is only supported on port A for the following devices:
512 Intel(R) PRO/1000 PT Dual Port Network Connection
513 Intel(R) PRO/1000 PT Dual Port Server Connection
514 Intel(R) PRO/1000 PT Dual Port Server Adapter
515 Intel(R) PRO/1000 PF Dual Port Server Adapter
516 Intel(R) PRO/1000 PT Quad Port Server Adapter
517
518 NAPI
519 ----
520 NAPI (Rx polling mode) is enabled in the e1000 driver.
521
522 See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.
523
524
525Known Issues
526============
527
528Dropped Receive Packets on Half-duplex 10/100 Networks
529------------------------------------------------------
530If you have an Intel PCI Express adapter running at 10mbps or 100mbps, half-
531duplex, you may observe occasional dropped receive packets. There are no
532workarounds for this problem in this network configuration. The network must
533be updated to operate in full-duplex, and/or 1000mbps only.
534
535Jumbo Frames System Requirement
536-------------------------------
537Memory allocation failures have been observed on Linux systems with 64 MB
538of RAM or less that are running Jumbo Frames. If you are using Jumbo
539Frames, your system may require more than the advertised minimum
540requirement of 64 MB of system memory.
541
542Performance Degradation with Jumbo Frames
543-----------------------------------------
544Degradation in throughput performance may be observed in some Jumbo frames
545environments. If this is observed, increasing the application's socket
546buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values
547may help. See the specific application manual and
548/usr/src/linux*/Documentation/
549networking/ip-sysctl.txt for more details.
550
551Jumbo Frames on Foundry BigIron 8000 switch
552-------------------------------------------
553There is a known issue using Jumbo frames when connected to a Foundry
554BigIron 8000 switch. This is a 3rd party limitation. If you experience
555loss of packets, lower the MTU size.
556
557Allocating Rx Buffers when Using Jumbo Frames
558---------------------------------------------
559Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if
560the available memory is heavily fragmented. This issue may be seen with PCI-X
561adapters or with packet split disabled. This can be reduced or eliminated
562by changing the amount of available memory for receive buffer allocation, by
563increasing /proc/sys/vm/min_free_kbytes.
564
565Multiple Interfaces on Same Ethernet Broadcast Network
566------------------------------------------------------
567Due to the default ARP behavior on Linux, it is not possible to have
568one system on two IP networks in the same Ethernet broadcast domain
569(non-partitioned switch) behave as expected. All Ethernet interfaces
570will respond to IP traffic for any IP address assigned to the system.
571This results in unbalanced receive traffic.
572
573If you have multiple interfaces in a server, either turn on ARP
574filtering by entering:
575
576 echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
577(this only works if your kernel's version is higher than 2.4.5),
578
579NOTE: This setting is not saved across reboots. The configuration
580change can be made permanent by adding the line:
581 net.ipv4.conf.all.arp_filter = 1
582to the file /etc/sysctl.conf
583
584 or,
585
586install the interfaces in separate broadcast domains (either in
587different switches or in a switch partitioned to VLANs).
588
58982541/82547 can't link or are slow to link with some link partners
590-----------------------------------------------------------------
591There is a known compatibility issue with 82541/82547 and some
592low-end switches where the link will not be established, or will
593be slow to establish. In particular, these switches are known to
594be incompatible with 82541/82547:
595
596 Planex FXG-08TE
597 I-O Data ETG-SH8
598
599To workaround this issue, the driver can be compiled with an override
600of the PHY's master/slave setting. Forcing master or forcing slave
601mode will improve time-to-link.
602
603 # make CFLAGS_EXTRA=-DE1000_MASTER_SLAVE=<n>
604
605Where <n> is:
606
607 0 = Hardware default
608 1 = Master mode
609 2 = Slave mode
610 3 = Auto master/slave
611
612Disable rx flow control with ethtool
613------------------------------------
614In order to disable receive flow control using ethtool, you must turn
615off auto-negotiation on the same command line.
616
617For example:
618
619 ethtool -A eth? autoneg off rx off
620
621Unplugging network cable while ethtool -p is running
622----------------------------------------------------
623In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging
624the network cable while ethtool -p is running will cause the system to
625become unresponsive to keyboard commands, except for control-alt-delete.
626Restarting the system appears to be the only remedy.
627
628
629Support 448Support
630======= 449=======
631 450
diff --git a/Documentation/networking/e1000e.txt b/Documentation/networking/e1000e.txt
new file mode 100644
index 00000000000..6aa048badf3
--- /dev/null
+++ b/Documentation/networking/e1000e.txt
@@ -0,0 +1,302 @@
1Linux* Driver for Intel(R) Network Connection
2===============================================================
3
4Intel Gigabit Linux driver.
5Copyright(c) 1999 - 2010 Intel Corporation.
6
7Contents
8========
9
10- Identifying Your Adapter
11- Command Line Parameters
12- Additional Configurations
13- Support
14
15Identifying Your Adapter
16========================
17
18The e1000e driver supports all PCI Express Intel(R) Gigabit Network
19Connections, except those that are 82575, 82576 and 82580-based*.
20
21* NOTE: The Intel(R) PRO/1000 P Dual Port Server Adapter is supported by
22 the e1000 driver, not the e1000e driver due to the 82546 part being used
23 behind a PCI Express bridge.
24
25For more information on how to identify your adapter, go to the Adapter &
26Driver ID Guide at:
27
28 http://support.intel.com/support/go/network/adapter/idguide.htm
29
30For the latest Intel network drivers for Linux, refer to the following
31website. In the search field, enter your adapter name or type, or use the
32networking link on the left to search for your adapter:
33
34 http://support.intel.com/support/go/network/adapter/home.htm
35
36Command Line Parameters
37=======================
38
39The default value for each parameter is generally the recommended setting,
40unless otherwise noted.
41
42NOTES: For more information about the InterruptThrottleRate,
43 RxIntDelay, TxIntDelay, RxAbsIntDelay, and TxAbsIntDelay
44 parameters, see the application note at:
45 http://www.intel.com/design/network/applnots/ap450.htm
46
47InterruptThrottleRate
48---------------------
49Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative,
50 4=simplified balancing)
51Default Value: 3
52
53The driver can limit the number of interrupts per second that the adapter
54will generate for incoming packets. It does this by writing a value to the
55adapter that is based on the maximum number of interrupts that the adapter
56will generate per second.
57
58Setting InterruptThrottleRate to a value greater than or equal to 100
59will program the adapter to send out a maximum of that many interrupts
60per second, even if more packets have come in. This reduces interrupt
61load on the system and can lower CPU utilization under heavy load,
62but will increase latency as packets are not processed as quickly.
63
64The driver has two adaptive modes (setting 1 or 3) in which
65it dynamically adjusts the InterruptThrottleRate value based on the traffic
66that it receives. After determining the type of incoming traffic in the last
67timeframe, it will adjust the InterruptThrottleRate to an appropriate value
68for that traffic.
69
70The algorithm classifies the incoming traffic every interval into
71classes. Once the class is determined, the InterruptThrottleRate value is
72adjusted to suit that traffic type the best. There are three classes defined:
73"Bulk traffic", for large amounts of packets of normal size; "Low latency",
74for small amounts of traffic and/or a significant percentage of small
75packets; and "Lowest latency", for almost completely small packets or
76minimal traffic.
77
78In dynamic conservative mode, the InterruptThrottleRate value is set to 4000
79for traffic that falls in class "Bulk traffic". If traffic falls in the "Low
80latency" or "Lowest latency" class, the InterruptThrottleRate is increased
81stepwise to 20000. This default mode is suitable for most applications.
82
83For situations where low latency is vital such as cluster or
84grid computing, the algorithm can reduce latency even more when
85InterruptThrottleRate is set to mode 1. In this mode, which operates
86the same as mode 3, the InterruptThrottleRate will be increased stepwise to
8770000 for traffic in class "Lowest latency".
88
89In simplified mode the interrupt rate is based on the ratio of Tx and
90Rx traffic. If the bytes per second rate is approximately equal the
91interrupt rate will drop as low as 2000 interrupts per second. If the
92traffic is mostly transmit or mostly receive, the interrupt rate could
93be as high as 8000.
94
95Setting InterruptThrottleRate to 0 turns off any interrupt moderation
96and may improve small packet latency, but is generally not suitable
97for bulk throughput traffic.
98
99NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and
100 RxAbsIntDelay parameters. In other words, minimizing the receive
101 and/or transmit absolute delays does not force the controller to
102 generate more interrupts than what the Interrupt Throttle Rate
103 allows.
104
105NOTE: When e1000e is loaded with default settings and multiple adapters
106 are in use simultaneously, the CPU utilization may increase non-
107 linearly. In order to limit the CPU utilization without impacting
108 the overall throughput, we recommend that you load the driver as
109 follows:
110
111 modprobe e1000e InterruptThrottleRate=3000,3000,3000
112
113 This sets the InterruptThrottleRate to 3000 interrupts/sec for
114 the first, second, and third instances of the driver. The range
115 of 2000 to 3000 interrupts per second works on a majority of
116 systems and is a good starting point, but the optimal value will
117 be platform-specific. If CPU utilization is not a concern, use
118 RX_POLLING (NAPI) and default driver settings.
119
120RxIntDelay
121----------
122Valid Range: 0-65535 (0=off)
123Default Value: 0
124
125This value delays the generation of receive interrupts in units of 1.024
126microseconds. Receive interrupt reduction can improve CPU efficiency if
127properly tuned for specific network traffic. Increasing this value adds
128extra latency to frame reception and can end up decreasing the throughput
129of TCP traffic. If the system is reporting dropped receives, this value
130may be set too high, causing the driver to run out of available receive
131descriptors.
132
133CAUTION: When setting RxIntDelay to a value other than 0, adapters may
134 hang (stop transmitting) under certain network conditions. If
135 this occurs a NETDEV WATCHDOG message is logged in the system
136 event log. In addition, the controller is automatically reset,
137 restoring the network connection. To eliminate the potential
138 for the hang ensure that RxIntDelay is set to 0.
139
140RxAbsIntDelay
141-------------
142Valid Range: 0-65535 (0=off)
143Default Value: 8
144
145This value, in units of 1.024 microseconds, limits the delay in which a
146receive interrupt is generated. Useful only if RxIntDelay is non-zero,
147this value ensures that an interrupt is generated after the initial
148packet is received within the set amount of time. Proper tuning,
149along with RxIntDelay, may improve traffic throughput in specific network
150conditions.
151
152TxIntDelay
153----------
154Valid Range: 0-65535 (0=off)
155Default Value: 8
156
157This value delays the generation of transmit interrupts in units of
1581.024 microseconds. Transmit interrupt reduction can improve CPU
159efficiency if properly tuned for specific network traffic. If the
160system is reporting dropped transmits, this value may be set too high
161causing the driver to run out of available transmit descriptors.
162
163TxAbsIntDelay
164-------------
165Valid Range: 0-65535 (0=off)
166Default Value: 32
167
168This value, in units of 1.024 microseconds, limits the delay in which a
169transmit interrupt is generated. Useful only if TxIntDelay is non-zero,
170this value ensures that an interrupt is generated after the initial
171packet is sent on the wire within the set amount of time. Proper tuning,
172along with TxIntDelay, may improve traffic throughput in specific
173network conditions.
174
175Copybreak
176---------
177Valid Range: 0-xxxxxxx (0=off)
178Default Value: 256
179
180The driver copies all packets at or below this size to a fresh Rx
181buffer before handing them up the stack.
182
183This parameter is different from other parameters, in that it is a
184single (not 1,1,1 etc.) parameter applied to all driver instances and
185it is also available during runtime at
186/sys/module/e1000e/parameters/copybreak
187
188SmartPowerDownEnable
189--------------------
190Valid Range: 0-1
191Default Value: 0 (disabled)
192
193Allows the PHY to turn off in lower power states. The user can set this parameter
194in supported chipsets.
195
196KumeranLockLoss
197---------------
198Valid Range: 0-1
199Default Value: 1 (enabled)
200
201This workaround skips resetting the PHY at shutdown for the initial
202silicon releases of ICH8 systems.
203
204IntMode
205-------
206Valid Range: 0-2 (0=legacy, 1=MSI, 2=MSI-X)
207Default Value: 2
208
209Allows changing the interrupt mode at module load time, without requiring a
210recompile. If the driver load fails to enable a specific interrupt mode, the
211driver will try other interrupt modes, from least to most compatible. The
212interrupt order is MSI-X, MSI, Legacy. If specifying MSI (IntMode=1)
213interrupts, only MSI and Legacy will be attempted.
214
215CrcStripping
216------------
217Valid Range: 0-1
218Default Value: 1 (enabled)
219
220Strip the CRC from received packets before sending up the network stack. If
221you have a machine with a BMC enabled but cannot receive IPMI traffic after
222loading or enabling the driver, try disabling this feature.
223
224WriteProtectNVM
225---------------
226Valid Range: 0-1
227Default Value: 1 (enabled)
228
229Set the hardware to ignore all write/erase cycles to the GbE region in the
230ICHx NVM (non-volatile memory). This feature can be disabled by the
231WriteProtectNVM module parameter (enabled by default) only after a hardware
232reset, but the machine must be power cycled before trying to enable writes.
233
234Note: the kernel boot option iomem=relaxed may need to be set if the kernel
235config option CONFIG_STRICT_DEVMEM=y and the root user wants to write the
236NVM from user space via ethtool.
237
238Additional Configurations
239=========================
240
241 Jumbo Frames
242 ------------
243 Jumbo Frames support is enabled by changing the MTU to a value larger than
244 the default of 1500. Use the ifconfig command to increase the MTU size.
245 For example:
246
247 ifconfig eth<x> mtu 9000 up
248
249 This setting is not saved across reboots.
250
251 Notes:
252
253 - The maximum MTU setting for Jumbo Frames is 9216. This value coincides
254 with the maximum Jumbo Frames size of 9234 bytes.
255
256 - Using Jumbo Frames at 10 or 100 Mbps is not supported and may result in
257 poor performance or loss of link.
258
259 - Some adapters limit Jumbo Frames sized packets to a maximum of
260 4096 bytes and some adapters do not support Jumbo Frames.
261
262
263 Ethtool
264 -------
265 The driver utilizes the ethtool interface for driver configuration and
266 diagnostics, as well as displaying statistical information. We
267 strongly recommend downloading the latest version of Ethtool at:
268
269 http://sourceforge.net/projects/gkernel.
270
271 Speed and Duplex
272 ----------------
273 Speed and Duplex are configured through the Ethtool* utility. For
274 instructions, refer to the Ethtool man page.
275
276 Enabling Wake on LAN* (WoL)
277 ---------------------------
278 WoL is configured through the Ethtool* utility. For instructions on
279 enabling WoL with Ethtool, refer to the Ethtool man page.
280
281 WoL will be enabled on the system during the next shut down or reboot.
282 For this driver version, in order to enable WoL, the e1000e driver must be
283 loaded when shutting down or rebooting the system.
284
285 In most cases Wake On LAN is only supported on port A for multiple port
286 adapters. To verify if a port supports Wake on LAN, run ethtool eth<X>.
287
288
289Support
290=======
291
292For general information, go to the Intel support website at:
293
294 www.intel.com/support/
295
296or the Intel Wired Networking project hosted by Sourceforge at:
297
298 http://sourceforge.net/projects/e1000
299
300If an issue is identified with the released source code on the supported
301kernel with a supported adapter, email the specific information related
302to the issue to e1000-devel@lists.sf.net
diff --git a/Documentation/networking/ixgbevf.txt b/Documentation/networking/ixgbevf.txt
index 19015de6725..21dd5d15b6b 100755..100644
--- a/Documentation/networking/ixgbevf.txt
+++ b/Documentation/networking/ixgbevf.txt
@@ -1,19 +1,16 @@
1Linux* Base Driver for Intel(R) Network Connection 1Linux* Base Driver for Intel(R) Network Connection
2================================================== 2==================================================
3 3
4November 24, 2009 4Intel Gigabit Linux driver.
5Copyright(c) 1999 - 2010 Intel Corporation.
5 6
6Contents 7Contents
7======== 8========
8 9
9- In This Release
10- Identifying Your Adapter 10- Identifying Your Adapter
11- Known Issues/Troubleshooting 11- Known Issues/Troubleshooting
12- Support 12- Support
13 13
14In This Release
15===============
16
17This file describes the ixgbevf Linux* Base Driver for Intel Network 14This file describes the ixgbevf Linux* Base Driver for Intel Network
18Connection. 15Connection.
19 16
@@ -33,7 +30,7 @@ Identifying Your Adapter
33For more information on how to identify your adapter, go to the Adapter & 30For more information on how to identify your adapter, go to the Adapter &
34Driver ID Guide at: 31Driver ID Guide at:
35 32
36 http://support.intel.com/support/network/sb/CS-008441.htm 33 http://support.intel.com/support/go/network/adapter/idguide.htm
37 34
38Known Issues/Troubleshooting 35Known Issues/Troubleshooting
39============================ 36============================
@@ -57,34 +54,3 @@ or the Intel Wired Networking project hosted by Sourceforge at:
57If an issue is identified with the released source code on the supported 54If an issue is identified with the released source code on the supported
58kernel with a supported adapter, email the specific information related 55kernel with a supported adapter, email the specific information related
59to the issue to e1000-devel@lists.sf.net 56to the issue to e1000-devel@lists.sf.net
60
61License
62=======
63
64Intel 10 Gigabit Linux driver.
65Copyright(c) 1999 - 2009 Intel Corporation.
66
67This program is free software; you can redistribute it and/or modify it
68under the terms and conditions of the GNU General Public License,
69version 2, as published by the Free Software Foundation.
70
71This program is distributed in the hope it will be useful, but WITHOUT
72ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
73FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
74more details.
75
76You should have received a copy of the GNU General Public License along with
77this program; if not, write to the Free Software Foundation, Inc.,
7851 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
79
80The full GNU General Public License is included in this distribution in
81the file called "COPYING".
82
83Trademarks
84==========
85
86Intel, Itanium, and Pentium are trademarks or registered trademarks of
87Intel Corporation or its subsidiaries in the United States and other
88countries.
89
90* Other names and brands may be claimed as the property of others.
diff --git a/Documentation/pcmcia/driver-changes.txt b/Documentation/pcmcia/driver-changes.txt
index 26c0f9c0054..dd04361dd36 100644
--- a/Documentation/pcmcia/driver-changes.txt
+++ b/Documentation/pcmcia/driver-changes.txt
@@ -1,4 +1,29 @@
1This file details changes in 2.6 which affect PCMCIA card driver authors: 1This file details changes in 2.6 which affect PCMCIA card driver authors:
2* pcmcia_loop_config() and autoconfiguration (as of 2.6.36)
3 If struct pcmcia_device *p_dev->config_flags is set accordingly,
4 pcmcia_loop_config() now sets up certain configuration values
5 automatically, though the driver may still override the settings
6 in the callback function. The following autoconfiguration options
7   are provided at the moment (see the sketch after these entries):
8 CONF_AUTO_CHECK_VCC : check for matching Vcc
9 CONF_AUTO_SET_VPP : set Vpp
10 CONF_AUTO_AUDIO : auto-enable audio line, if required
11 CONF_AUTO_SET_IO : set ioport resources (->resource[0,1])
12 CONF_AUTO_SET_IOMEM : set first iomem resource (->resource[2])
13
14* pcmcia_request_configuration -> pcmcia_enable_device (as of 2.6.36)
15 pcmcia_request_configuration() got renamed to pcmcia_enable_device(),
16 as it mirrors pcmcia_disable_device(). Configuration settings are now
17 stored in struct pcmcia_device, e.g. in the fields config_flags,
18 config_index, config_base, vpp.
19
20* pcmcia_request_window changes (as of 2.6.36)
21 Instead of win_req_t, drivers are now requested to fill out
22   struct pcmcia_device *p_dev->resource[2,3,4,5] for up to four iomem
23 ranges. After a call to pcmcia_request_window(), the regions found there
24 are reserved and may be used immediately -- until pcmcia_release_window()
25 is called.
26
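   As an illustration of how these pieces fit together, below is a minimal,
   hypothetical probe-time sketch (the foo_* names are made up, and the
   two-argument conf_check callback form is assumed); consult an in-tree
   driver for the authoritative pattern:

	static int foo_check_config(struct pcmcia_device *p_dev, void *priv_data)
	{
		/* accept the first configuration the core has prepared for us */
		return 0;
	}

	static int foo_config(struct pcmcia_device *p_dev)
	{
		int ret;

		/* let the PCMCIA core pick matching ioport resources and Vpp */
		p_dev->config_flags |= CONF_AUTO_SET_IO | CONF_AUTO_SET_VPP;

		ret = pcmcia_loop_config(p_dev, foo_check_config, NULL);
		if (ret)
			return ret;

		/* replaces the former pcmcia_request_configuration() call */
		return pcmcia_enable_device(p_dev);
	}
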
2* pcmcia_request_io changes (as of 2.6.36) 27* pcmcia_request_io changes (as of 2.6.36)
3 Instead of io_req_t, drivers are now requested to fill out 28 Instead of io_req_t, drivers are now requested to fill out
4 struct pcmcia_device *p_dev->resource[0,1] for up to two ioport 29 struct pcmcia_device *p_dev->resource[0,1] for up to two ioport
diff --git a/Documentation/power/00-INDEX b/Documentation/power/00-INDEX
index fb742c213c9..45e9d4a9128 100644
--- a/Documentation/power/00-INDEX
+++ b/Documentation/power/00-INDEX
@@ -14,6 +14,8 @@ interface.txt
14 - Power management user interface in /sys/power 14 - Power management user interface in /sys/power
15notifiers.txt 15notifiers.txt
16 - Registering suspend notifiers in device drivers 16 - Registering suspend notifiers in device drivers
17opp.txt
18 - Operating Performance Point library
17pci.txt 19pci.txt
18 - How the PCI Subsystem Does Power Management 20 - How the PCI Subsystem Does Power Management
19pm_qos_interface.txt 21pm_qos_interface.txt
diff --git a/Documentation/power/interface.txt b/Documentation/power/interface.txt
index e67211fe0ee..c537834af00 100644
--- a/Documentation/power/interface.txt
+++ b/Documentation/power/interface.txt
@@ -57,7 +57,7 @@ smallest image possible. In particular, if "0" is written to this file, the
57suspend image will be as small as possible. 57suspend image will be as small as possible.
58 58
59Reading from this file will display the current image size limit, which 59Reading from this file will display the current image size limit, which
60is set to 500 MB by default. 60is set to 2/5 of available RAM by default.
61 61
62/sys/power/pm_trace controls the code which saves the last PM event point in 62/sys/power/pm_trace controls the code which saves the last PM event point in
63the RTC across reboots, so that you can debug a machine that just hangs 63the RTC across reboots, so that you can debug a machine that just hangs
diff --git a/Documentation/power/opp.txt b/Documentation/power/opp.txt
new file mode 100644
index 00000000000..44d87ad3cea
--- /dev/null
+++ b/Documentation/power/opp.txt
@@ -0,0 +1,375 @@
1*=============*
2* OPP Library *
3*=============*
4
5(C) 2009-2010 Nishanth Menon <nm@ti.com>, Texas Instruments Incorporated
6
7Contents
8--------
91. Introduction
102. Initial OPP List Registration
113. OPP Search Functions
124. OPP Availability Control Functions
135. OPP Data Retrieval Functions
146. Cpufreq Table Generation
157. Data Structures
16
171. Introduction
18===============
19Complex SoCs of today consist of multiple sub-modules working in conjunction.
20In an operational system executing varied use cases, not all modules in the SoC
21need to function at their highest performing frequency all the time. To
22facilitate this, sub-modules in a SoC are grouped into domains, allowing some
23domains to run at lower voltage and frequency while other domains are loaded
24more. The set of discrete tuples consisting of frequency and voltage pairs that
25the device will support per domain are called Operating Performance Points or
26OPPs.
27
28OPP library provides a set of helper functions to organize and query the OPP
29information. The library is located in drivers/base/power/opp.c and the header
30is located in include/linux/opp.h. OPP library can be enabled by enabling
31CONFIG_PM_OPP from power management menuconfig menu. OPP library depends on
32CONFIG_PM as certain SoCs such as Texas Instrument's OMAP framework allows to
33optionally boot at a certain OPP without needing cpufreq.
34
35Typical usage of the OPP library is as follows:
36(users) -> registers a set of default OPPs -> (library)
37SoC framework -> modifies certain OPPs when required -> OPP layer
38 -> queries to search/retrieve information ->
39
40OPP layer expects each domain to be represented by a unique device pointer. SoC
41framework registers a set of initial OPPs per device with the OPP layer. This
42list is expected to be an optimally small number typically around 5 per device.
43This initial list contains a set of OPPs that the framework expects to be safely
44enabled by default in the system.
45
46Note on OPP Availability:
47------------------------
48As the system proceeds to operate, SoC framework may choose to make certain
49OPPs available or not available on each device based on various external
50factors. Example usage: Thermal management or other exceptional situations where
51SoC framework might choose to disable a higher frequency OPP to safely continue
52operations until that OPP could be re-enabled if possible.
53
54OPP library facilitates this concept in its implementation. The following
55operational functions operate only on available opps:
56opp_find_freq_{ceil, floor}, opp_get_voltage, opp_get_freq, opp_get_opp_count
57and opp_init_cpufreq_table
58
59opp_find_freq_exact is meant to be used to find the opp pointer which can then
60be used for opp_enable/disable functions to make an opp available as required.
61
62WARNING: Users of OPP library should refresh their availability count using
63get_opp_count if the opp_enable/disable functions are invoked for a device.
64The exact mechanism to trigger these updates, or the notification mechanism
65for other dependent subsystems such as cpufreq, is left to the discretion of
66the SoC-specific framework which uses the OPP library. Similar care needs to
67be taken to refresh the cpufreq table after these operations.
68
69WARNING on OPP List locking mechanism:
70-------------------------------------------------
71OPP library uses RCU for exclusivity. RCU allows the query functions to operate
72in multiple contexts, and this synchronization mechanism is optimal for the
73read-intensive operations on data structures that the OPP library caters to.
74
75To ensure that the data retrieved are sane, users such as the SoC framework
76should ensure that the sections of code operating on OPP queries are locked
77using RCU read locks. The opp_find_freq_{exact,ceil,floor} and
78opp_get_{voltage, freq, opp_count} functions fall into this category.
79
80opp_{add,enable,disable} are updaters which use a mutex and implement their own
81RCU locking mechanisms. opp_init_cpufreq_table acts as an updater and uses a
82mutex to implement the RCU updater strategy. These functions should *NOT* be
83called under RCU locks or in other contexts that prevent blocking functions in
84RCU or mutex operations from working.
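
As a condensed, hypothetical reader-side sketch of the rule above (the 'dev'
pointer and frequency value are placeholders), a query plus a retrieval are
wrapped in a single RCU read-side critical section:

	unsigned long freq = 1000000000, volt = 0;
	struct opp *opp;

	rcu_read_lock();
	opp = opp_find_freq_ceil(dev, &freq);	/* search: needs the read lock */
	if (!IS_ERR(opp))
		volt = opp_get_voltage(opp);	/* retrieval: same critical section */
	rcu_read_unlock();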
85
862. Initial OPP List Registration
87================================
88The SoC implementation calls the opp_add function iteratively to add OPPs per
89device. It is expected that the SoC framework will register the OPP entries
90optimally; typical numbers are expected to be less than 5. The list generated by
91registering the OPPs is maintained by OPP library throughout the device
92operation. The SoC framework can subsequently control the availability of the
93OPPs dynamically using the opp_enable / disable functions.
94
95opp_add - Add a new OPP for a specific domain represented by the device pointer.
96 The OPP is defined using the frequency and voltage. Once added, the OPP
97	is assumed to be available and control of its availability can be done
98 with the opp_enable/disable functions. OPP library internally stores
99 and manages this information in the opp struct. This function may be
100	used by SoC framework to define an optimal list as per the demands of
101 SoC usage environment.
102
103 WARNING: Do not use this function in interrupt context.
104
105 Example:
106 soc_pm_init()
107 {
108 /* Do things */
109 r = opp_add(mpu_dev, 1000000, 900000);
110		if (r) {
111			pr_err("%s: unable to register mpu opp(%d)\n", __func__, r);
112 goto no_cpufreq;
113 }
114 /* Do cpufreq things */
115 no_cpufreq:
116 /* Do remaining things */
117 }
118
1193. OPP Search Functions
120=======================
121High level framework such as cpufreq operates on frequencies. To map the
122frequency back to the corresponding OPP, OPP library provides handy functions
123to search the OPP list that OPP library internally manages. These search
124functions return the matching pointer representing the opp if a match is
125found, else returns error. These errors are expected to be handled by standard
126error checks such as IS_ERR() and appropriate actions taken by the caller.
127
128opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
129 availability. This function is especially useful to enable an OPP which
130 is not available by default.
131 Example: In a case when SoC framework detects a situation where a
132 higher frequency could be made available, it can use this function to
133	find the OPP prior to calling opp_enable to actually make it available.
134 rcu_read_lock();
135 opp = opp_find_freq_exact(dev, 1000000000, false);
136 rcu_read_unlock();
137	/* don't operate on the pointer, just do a sanity check */
138 if (IS_ERR(opp)) {
139 pr_err("frequency not disabled!\n");
140 /* trigger appropriate actions.. */
141 } else {
142 opp_enable(dev,1000000000);
143 }
144
145 NOTE: This is the only search function that operates on OPPs which are
146 not available.
147
148opp_find_freq_floor - Search for an available OPP which is *at most* the
149 provided frequency. This function is useful while searching for a lesser
150 match OR operating on OPP information in the order of decreasing
151 frequency.
152 Example: To find the highest opp for a device:
153 freq = ULONG_MAX;
154 rcu_read_lock();
155 opp_find_freq_floor(dev, &freq);
156 rcu_read_unlock();
157
158opp_find_freq_ceil - Search for an available OPP which is *at least* the
159 provided frequency. This function is useful while searching for a
160 higher match OR operating on OPP information in the order of increasing
161 frequency.
162 Example 1: To find the lowest opp for a device:
163 freq = 0;
164 rcu_read_lock();
165 opp_find_freq_ceil(dev, &freq);
166 rcu_read_unlock();
167 Example 2: A simplified implementation of a SoC cpufreq_driver->target:
168 soc_cpufreq_target(..)
169 {
170 /* Do stuff like policy checks etc. */
171 /* Find the best frequency match for the req */
172 rcu_read_lock();
173 opp = opp_find_freq_ceil(dev, &freq);
174 rcu_read_unlock();
175 if (!IS_ERR(opp))
176 soc_switch_to_freq_voltage(freq);
177 else
178 /* do something when we cant satisfy the req */
179 /* do other stuff */
180 }
181
1824. OPP Availability Control Functions
183=====================================
184A default OPP list registered with the OPP library may not cater to all possible
185situations. The OPP library provides a set of functions to modify the
186availability of an OPP within the OPP list. This allows SoC frameworks to have
187fine grained dynamic control of which sets of OPPs are operationally available.
188These functions are intended to *temporarily* remove an OPP in conditions such
189as thermal considerations (e.g. don't use OPPx until the temperature drops).
190
191WARNING: Do not use these functions in interrupt context.
192
193opp_enable - Make an OPP available for operation.
194	Example: Let's say that the 1GHz OPP is to be made available only if the
195 SoC temperature is lower than a certain threshold. The SoC framework
196 implementation might choose to do something as follows:
197 if (cur_temp < temp_low_thresh) {
198 /* Enable 1GHz if it was disabled */
199 rcu_read_lock();
200 opp = opp_find_freq_exact(dev, 1000000000, false);
201 rcu_read_unlock();
202 /* just error check */
203 if (!IS_ERR(opp))
204 ret = opp_enable(dev, 1000000000);
205 else
206 goto try_something_else;
207 }
208
209opp_disable - Make an OPP unavailable for operation.
210	Example: Let's say that the 1GHz OPP is to be disabled if the temperature
211 exceeds a threshold value. The SoC framework implementation might
212 choose to do something as follows:
213 if (cur_temp > temp_high_thresh) {
214 /* Disable 1GHz if it was enabled */
215 rcu_read_lock();
216 opp = opp_find_freq_exact(dev, 1000000000, true);
217 rcu_read_unlock();
218 /* just error check */
219 if (!IS_ERR(opp))
220 ret = opp_disable(dev, 1000000000);
221 else
222 goto try_something_else;
223 }
224
2255. OPP Data Retrieval Functions
226===============================
227Since OPP library abstracts away the OPP information, a set of functions to pull
228information from the OPP structure is necessary. Once an OPP pointer is
229retrieved using the search functions, the following functions can be used by SoC
230framework to retrieve the information represented inside the OPP layer.
231
232opp_get_voltage - Retrieve the voltage represented by the opp pointer.
233	Example: At a cpufreq transition to a different frequency, the SoC
234	framework needs to set the voltage represented by the OPP, using
235	the regulator framework, on the Power Management chip providing the
236	voltage.
237 soc_switch_to_freq_voltage(freq)
238 {
239 /* do things */
240 rcu_read_lock();
241 opp = opp_find_freq_ceil(dev, &freq);
242 v = opp_get_voltage(opp);
243 rcu_read_unlock();
244 if (v)
245 regulator_set_voltage(.., v);
246 /* do other things */
247 }
248
249opp_get_freq - Retrieve the freq represented by the opp pointer.
250	Example: Let's say the SoC framework uses a couple of helper functions;
251	we could pass opp pointers instead of additional parameters to
252	handle quite a bit of data.
253 soc_cpufreq_target(..)
254 {
255 /* do things.. */
256 max_freq = ULONG_MAX;
257 rcu_read_lock();
258 max_opp = opp_find_freq_floor(dev,&max_freq);
259 requested_opp = opp_find_freq_ceil(dev,&freq);
260 if (!IS_ERR(max_opp) && !IS_ERR(requested_opp))
261 r = soc_test_validity(max_opp, requested_opp);
262 rcu_read_unlock();
263 /* do other things */
264 }
265 soc_test_validity(..)
266 {
267 if(opp_get_voltage(max_opp) < opp_get_voltage(requested_opp))
268 return -EINVAL;
269 if(opp_get_freq(max_opp) < opp_get_freq(requested_opp))
270 return -EINVAL;
271 /* do things.. */
272 }
273
274opp_get_opp_count - Retrieve the number of available opps for a device
275	Example: Let's say a co-processor in the SoC needs to know the available
276	frequencies in a table; the main processor can notify it as follows:
277 soc_notify_coproc_available_frequencies()
278 {
279 /* Do things */
280 rcu_read_lock();
281 num_available = opp_get_opp_count(dev);
282 speeds = kzalloc(sizeof(u32) * num_available, GFP_KERNEL);
283 /* populate the table in increasing order */
284 freq = 0;
285 while (!IS_ERR(opp = opp_find_freq_ceil(dev, &freq))) {
286 speeds[i] = freq;
287 freq++;
288 i++;
289 }
290 rcu_read_unlock();
291
292 soc_notify_coproc(AVAILABLE_FREQs, speeds, num_available);
293 /* Do other things */
294 }
295
2966. Cpufreq Table Generation
297===========================
298opp_init_cpufreq_table - cpufreq framework typically is initialized with
299 cpufreq_frequency_table_cpuinfo which is provided with the list of
300 frequencies that are available for operation. This function provides
301 a ready to use conversion routine to translate the OPP layer's internal
302 information about the available frequencies into a format readily
303 providable to cpufreq.
304
305 WARNING: Do not use this function in interrupt context.
306
307 Example:
308 soc_pm_init()
309 {
310 /* Do things */
311 r = opp_init_cpufreq_table(dev, &freq_table);
312 if (!r)
313 cpufreq_frequency_table_cpuinfo(policy, freq_table);
314 /* Do other things */
315 }
316
317 NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in
318 addition to CONFIG_PM as power management feature is required to
319 dynamically scale voltage and frequency in a system.
320
3217. Data Structures
322==================
323Typically an SoC contains multiple voltage domains which are variable. Each
324domain is represented by a device pointer. The relationship to OPP can be
325represented as follows:
326SoC
327 |- device 1
328 | |- opp 1 (availability, freq, voltage)
329 | |- opp 2 ..
330 ... ...
331 | `- opp n ..
332 |- device 2
333 ...
334 `- device m
335
336OPP library maintains an internal list that the SoC framework populates and
337that is accessed by various functions as described above. However, the structures
338representing the actual OPPs and domains are internal to the OPP library itself
339to allow for suitable abstraction reusable across systems.
340
341struct opp - The internal data structure of OPP library which is used to
342 represent an OPP. In addition to the freq, voltage, availability
343	information, it also contains internal bookkeeping information required
344	for the OPP library to operate on. A pointer to this structure is
345	provided back to users such as the SoC framework to be used as an
346	identifier for the OPP in interactions with the OPP layer.
347
348 WARNING: The struct opp pointer should not be parsed or modified by the
349	users. The defaults for an instance are populated by opp_add, but the
350 availability of the OPP can be modified by opp_enable/disable functions.
351
352struct device - This is used to identify a domain to the OPP layer. The
353	nature of the device and its implementation is left to the user of
354 OPP library such as the SoC framework.
355
356Overall, in a simplistic view, the data structure operations are represented as
357follows:
358
359Initialization / modification:
360 +-----+ /- opp_enable
361opp_add --> | opp | <-------
362 | +-----+ \- opp_disable
363 \-------> domain_info(device)
364
365Search functions:
366 /-- opp_find_freq_ceil ---\ +-----+
367domain_info<---- opp_find_freq_exact -----> | opp |
368 \-- opp_find_freq_floor ---/ +-----+
369
370Retrieval functions:
371+-----+ /- opp_get_voltage
372| opp | <---
373+-----+ \- opp_get_freq
374
375domain_info <- opp_get_opp_count
diff --git a/Documentation/power/regulator/overview.txt b/Documentation/power/regulator/overview.txt
index 9363e056188..8ed17587a74 100644
--- a/Documentation/power/regulator/overview.txt
+++ b/Documentation/power/regulator/overview.txt
@@ -13,7 +13,7 @@ regulators (where voltage output is controllable) and current sinks (where
13current limit is controllable). 13current limit is controllable).
14 14
15(C) 2008 Wolfson Microelectronics PLC. 15(C) 2008 Wolfson Microelectronics PLC.
16Author: Liam Girdwood <lg@opensource.wolfsonmicro.com> 16Author: Liam Girdwood <lrg@slimlogic.co.uk>
17 17
18 18
19Nomenclature 19Nomenclature
diff --git a/Documentation/power/runtime_pm.txt b/Documentation/power/runtime_pm.txt
index 55b859b3bc7..489e9bacd16 100644
--- a/Documentation/power/runtime_pm.txt
+++ b/Documentation/power/runtime_pm.txt
@@ -1,6 +1,7 @@
1Run-time Power Management Framework for I/O Devices 1Run-time Power Management Framework for I/O Devices
2 2
3(C) 2009 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc. 3(C) 2009 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
4(C) 2010 Alan Stern <stern@rowland.harvard.edu>
4 5
51. Introduction 61. Introduction
6 7
@@ -157,7 +158,8 @@ rules:
157 to execute it, the other callbacks will not be executed for the same device. 158 to execute it, the other callbacks will not be executed for the same device.
158 159
159 * A request to execute ->runtime_resume() will cancel any pending or 160 * A request to execute ->runtime_resume() will cancel any pending or
160 scheduled requests to execute the other callbacks for the same device. 161 scheduled requests to execute the other callbacks for the same device,
162 except for scheduled autosuspends.
161 163
1623. Run-time PM Device Fields 1643. Run-time PM Device Fields
163 165
@@ -165,7 +167,7 @@ The following device run-time PM fields are present in 'struct dev_pm_info', as
165defined in include/linux/pm.h: 167defined in include/linux/pm.h:
166 168
167 struct timer_list suspend_timer; 169 struct timer_list suspend_timer;
168 - timer used for scheduling (delayed) suspend request 170 - timer used for scheduling (delayed) suspend and autosuspend requests
169 171
170 unsigned long timer_expires; 172 unsigned long timer_expires;
171 - timer expiration time, in jiffies (if this is different from zero, the 173 - timer expiration time, in jiffies (if this is different from zero, the
@@ -230,6 +232,28 @@ defined in include/linux/pm.h:
230 interface; it may only be modified with the help of the pm_runtime_allow() 232 interface; it may only be modified with the help of the pm_runtime_allow()
231 and pm_runtime_forbid() helper functions 233 and pm_runtime_forbid() helper functions
232 234
235 unsigned int no_callbacks;
236 - indicates that the device does not use the run-time PM callbacks (see
237 Section 8); it may be modified only by the pm_runtime_no_callbacks()
238 helper function
239
240 unsigned int use_autosuspend;
241 - indicates that the device's driver supports delayed autosuspend (see
242 Section 9); it may be modified only by the
243 pm_runtime{_dont}_use_autosuspend() helper functions
244
245 unsigned int timer_autosuspends;
246 - indicates that the PM core should attempt to carry out an autosuspend
247 when the timer expires rather than a normal suspend
248
249 int autosuspend_delay;
250 - the delay time (in milliseconds) to be used for autosuspend
251
252 unsigned long last_busy;
253 - the time (in jiffies) when the pm_runtime_mark_last_busy() helper
254 function was last called for this device; used in calculating inactivity
255 periods for autosuspend
256
233All of the above fields are members of the 'power' member of 'struct device'. 257All of the above fields are members of the 'power' member of 'struct device'.
234 258
2354. Run-time PM Device Helper Functions 2594. Run-time PM Device Helper Functions
@@ -255,6 +279,12 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
255 error code on failure, where -EAGAIN or -EBUSY means it is safe to attempt 279 error code on failure, where -EAGAIN or -EBUSY means it is safe to attempt
256 to suspend the device again in future 280 to suspend the device again in future
257 281
282 int pm_runtime_autosuspend(struct device *dev);
283 - same as pm_runtime_suspend() except that the autosuspend delay is taken
284 into account; if pm_runtime_autosuspend_expiration() says the delay has
285 not yet expired then an autosuspend is scheduled for the appropriate time
286 and 0 is returned
287
258 int pm_runtime_resume(struct device *dev); 288 int pm_runtime_resume(struct device *dev);
259 - execute the subsystem-level resume callback for the device; returns 0 on 289 - execute the subsystem-level resume callback for the device; returns 0 on
260 success, 1 if the device's run-time PM status was already 'active' or 290 success, 1 if the device's run-time PM status was already 'active' or
@@ -267,6 +297,11 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
267 device (the request is represented by a work item in pm_wq); returns 0 on 297 device (the request is represented by a work item in pm_wq); returns 0 on
268 success or error code if the request has not been queued up 298 success or error code if the request has not been queued up
269 299
300 int pm_request_autosuspend(struct device *dev);
301 - schedule the execution of the subsystem-level suspend callback for the
302 device when the autosuspend delay has expired; if the delay has already
303 expired then the work item is queued up immediately
304
270 int pm_schedule_suspend(struct device *dev, unsigned int delay); 305 int pm_schedule_suspend(struct device *dev, unsigned int delay);
271 - schedule the execution of the subsystem-level suspend callback for the 306 - schedule the execution of the subsystem-level suspend callback for the
272 device in future, where 'delay' is the time to wait before queuing up a 307 device in future, where 'delay' is the time to wait before queuing up a
@@ -298,12 +333,20 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
298 - decrement the device's usage counter 333 - decrement the device's usage counter
299 334
300 int pm_runtime_put(struct device *dev); 335 int pm_runtime_put(struct device *dev);
301 - decrement the device's usage counter, run pm_request_idle(dev) and return 336 - decrement the device's usage counter; if the result is 0 then run
302 its result 337 pm_request_idle(dev) and return its result
338
339 int pm_runtime_put_autosuspend(struct device *dev);
340 - decrement the device's usage counter; if the result is 0 then run
341 pm_request_autosuspend(dev) and return its result
303 342
304 int pm_runtime_put_sync(struct device *dev); 343 int pm_runtime_put_sync(struct device *dev);
305 - decrement the device's usage counter, run pm_runtime_idle(dev) and return 344 - decrement the device's usage counter; if the result is 0 then run
306 its result 345 pm_runtime_idle(dev) and return its result
346
347 int pm_runtime_put_sync_autosuspend(struct device *dev);
348 - decrement the device's usage counter; if the result is 0 then run
349 pm_runtime_autosuspend(dev) and return its result
307 350
308 void pm_runtime_enable(struct device *dev); 351 void pm_runtime_enable(struct device *dev);
309 - enable the run-time PM helper functions to run the device bus type's 352 - enable the run-time PM helper functions to run the device bus type's
@@ -349,19 +392,51 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
349 counter (used by the /sys/devices/.../power/control interface to 392 counter (used by the /sys/devices/.../power/control interface to
350 effectively prevent the device from being power managed at run time) 393 effectively prevent the device from being power managed at run time)
351 394
395 void pm_runtime_no_callbacks(struct device *dev);
396 - set the power.no_callbacks flag for the device and remove the run-time
397 PM attributes from /sys/devices/.../power (or prevent them from being
398 added when the device is registered)
399
400 void pm_runtime_mark_last_busy(struct device *dev);
401 - set the power.last_busy field to the current time
402
403 void pm_runtime_use_autosuspend(struct device *dev);
404 - set the power.use_autosuspend flag, enabling autosuspend delays
405
406 void pm_runtime_dont_use_autosuspend(struct device *dev);
407 - clear the power.use_autosuspend flag, disabling autosuspend delays
408
409 void pm_runtime_set_autosuspend_delay(struct device *dev, int delay);
410 - set the power.autosuspend_delay value to 'delay' (expressed in
411 milliseconds); if 'delay' is negative then run-time suspends are
412 prevented
413
414 unsigned long pm_runtime_autosuspend_expiration(struct device *dev);
415 - calculate the time when the current autosuspend delay period will expire,
416 based on power.last_busy and power.autosuspend_delay; if the delay time
417 is 1000 ms or larger then the expiration time is rounded up to the
418 nearest second; returns 0 if the delay period has already expired or
419 power.use_autosuspend isn't set, otherwise returns the expiration time
420 in jiffies
421
352It is safe to execute the following helper functions from interrupt context: 422It is safe to execute the following helper functions from interrupt context:
353 423
354pm_request_idle() 424pm_request_idle()
425pm_request_autosuspend()
355pm_schedule_suspend() 426pm_schedule_suspend()
356pm_request_resume() 427pm_request_resume()
357pm_runtime_get_noresume() 428pm_runtime_get_noresume()
358pm_runtime_get() 429pm_runtime_get()
359pm_runtime_put_noidle() 430pm_runtime_put_noidle()
360pm_runtime_put() 431pm_runtime_put()
432pm_runtime_put_autosuspend()
433pm_runtime_enable()
361pm_suspend_ignore_children() 434pm_suspend_ignore_children()
362pm_runtime_set_active() 435pm_runtime_set_active()
363pm_runtime_set_suspended() 436pm_runtime_set_suspended()
364pm_runtime_enable() 437pm_runtime_suspended()
438pm_runtime_mark_last_busy()
439pm_runtime_autosuspend_expiration()
365 440
3665. Run-time PM Initialization, Device Probing and Removal 4415. Run-time PM Initialization, Device Probing and Removal
367 442
@@ -524,3 +599,141 @@ poweroff and run-time suspend callback, and similarly for system resume, thaw,
524restore, and run-time resume, can achieve this with the help of the 599restore, and run-time resume, can achieve this with the help of the
525UNIVERSAL_DEV_PM_OPS macro defined in include/linux/pm.h (possibly setting its 600UNIVERSAL_DEV_PM_OPS macro defined in include/linux/pm.h (possibly setting its
526last argument to NULL). 601last argument to NULL).
602
6038. "No-Callback" Devices
604
605Some "devices" are only logical sub-devices of their parent and cannot be
606power-managed on their own. (The prototype example is a USB interface. Entire
607USB devices can go into low-power mode or send wake-up requests, but neither is
608possible for individual interfaces.) The drivers for these devices have no
609need of run-time PM callbacks; if the callbacks did exist, ->runtime_suspend()
610and ->runtime_resume() would always return 0 without doing anything else and
611->runtime_idle() would always call pm_runtime_suspend().
612
613Subsystems can tell the PM core about these devices by calling
614pm_runtime_no_callbacks(). This should be done after the device structure is
615initialized and before it is registered (although after device registration is
616also okay). The routine will set the device's power.no_callbacks flag and
617prevent the non-debugging run-time PM sysfs attributes from being created.
618
619When power.no_callbacks is set, the PM core will not invoke the
620->runtime_idle(), ->runtime_suspend(), or ->runtime_resume() callbacks.
621Instead it will assume that suspends and resumes always succeed and that idle
622devices should be suspended.
623
624As a consequence, the PM core will never directly inform the device's subsystem
625or driver about run-time power changes. Instead, the driver for the device's
626parent must take responsibility for telling the device's driver when the
627parent's power state changes.
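
As a minimal sketch (with hypothetical foo_* names) of how a subsystem might
register such a logical sub-device, assuming the sub-device is represented by
its own struct device:

	static int foo_add_subdevice(struct foo_parent *parent,
				     struct foo_subdev *sub)
	{
		device_initialize(&sub->dev);
		sub->dev.parent = &parent->dev;

		/* no run-time PM callbacks will ever be invoked for sub->dev */
		pm_runtime_no_callbacks(&sub->dev);

		return device_add(&sub->dev);
	}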
628
6299. Autosuspend, or automatically-delayed suspends
630
631Changing a device's power state isn't free; it requires both time and energy.
632A device should be put in a low-power state only when there's some reason to
633think it will remain in that state for a substantial time. A common heuristic
634says that a device which hasn't been used for a while is liable to remain
635unused; following this advice, drivers should not allow devices to be suspended
636at run-time until they have been inactive for some minimum period. Even when
637the heuristic ends up being non-optimal, it will still prevent devices from
638"bouncing" too rapidly between low-power and full-power states.
639
640The term "autosuspend" is an historical remnant. It doesn't mean that the
641device is automatically suspended (the subsystem or driver still has to call
642the appropriate PM routines); rather it means that run-time suspends will
643automatically be delayed until the desired period of inactivity has elapsed.
644
645Inactivity is determined based on the power.last_busy field. Drivers should
646call pm_runtime_mark_last_busy() to update this field after carrying out I/O,
647typically just before calling pm_runtime_put_autosuspend(). The desired length
648of the inactivity period is a matter of policy. Subsystems can set this length
649initially by calling pm_runtime_set_autosuspend_delay(), but after device
650registration the length should be controlled by user space, using the
651/sys/devices/.../power/autosuspend_delay_ms attribute.
652
653In order to use autosuspend, subsystems or drivers must call
654pm_runtime_use_autosuspend() (preferably before registering the device), and
655thereafter they should use the various *_autosuspend() helper functions instead
656of the non-autosuspend counterparts:
657
658 Instead of: pm_runtime_suspend use: pm_runtime_autosuspend;
659 Instead of: pm_schedule_suspend use: pm_request_autosuspend;
660 Instead of: pm_runtime_put use: pm_runtime_put_autosuspend;
661 Instead of: pm_runtime_put_sync use: pm_runtime_put_sync_autosuspend.
662
663Drivers may also continue to use the non-autosuspend helper functions; they
664will behave normally, not taking the autosuspend delay into account.
665Similarly, if the power.use_autosuspend field isn't set then the autosuspend
666helper functions will behave just like the non-autosuspend counterparts.
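
A minimal probe-time sketch, assuming a hypothetical foo_probe() and an arbitrary
5000 ms delay; only the pm_runtime_* calls are real, and a real subsystem may well
call pm_runtime_use_autosuspend() earlier, before the device is registered, as
recommended above:

	#include <linux/pm_runtime.h>

	static int foo_probe(struct device *dev)
	{
		/* ... set up the hardware ... */

		pm_runtime_get_noresume(dev);	/* hold a reference while finishing probe */
		pm_runtime_set_active(dev);	/* the device is powered up at this point */
		pm_runtime_enable(dev);

		pm_runtime_set_autosuspend_delay(dev, 5000);	/* 5 s of inactivity */
		pm_runtime_use_autosuspend(dev);	/* switch to the *_autosuspend helpers */

		pm_runtime_mark_last_busy(dev);
		pm_runtime_put_autosuspend(dev);	/* balances the _get_noresume above */
		return 0;
	}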
667
668The implementation is well suited for asynchronous use in interrupt contexts.
669However such use inevitably involves races, because the PM core can't
670synchronize ->runtime_suspend() callbacks with the arrival of I/O requests.
671This synchronization must be handled by the driver, using its private lock.
672Here is a schematic pseudo-code example:
673
674 foo_read_or_write(struct foo_priv *foo, void *data)
675 {
676 lock(&foo->private_lock);
677 add_request_to_io_queue(foo, data);
678 if (foo->num_pending_requests++ == 0)
679 pm_runtime_get(&foo->dev);
680 if (!foo->is_suspended)
681 foo_process_next_request(foo);
682 unlock(&foo->private_lock);
683 }
684
685 foo_io_completion(struct foo_priv *foo, void *req)
686 {
687 lock(&foo->private_lock);
688 if (--foo->num_pending_requests == 0) {
689 pm_runtime_mark_last_busy(&foo->dev);
690 pm_runtime_put_autosuspend(&foo->dev);
691 } else {
692 foo_process_next_request(foo);
693 }
694 unlock(&foo->private_lock);
695 /* Send req result back to the user ... */
696 }
697
698 int foo_runtime_suspend(struct device *dev)
699 {
700	struct foo_priv *foo = container_of(dev, ...);
701 int ret = 0;
702
703 lock(&foo->private_lock);
704 if (foo->num_pending_requests > 0) {
705 ret = -EBUSY;
706 } else {
707 /* ... suspend the device ... */
708 foo->is_suspended = 1;
709 }
710 unlock(&foo->private_lock);
711 return ret;
712 }
713
714 int foo_runtime_resume(struct device *dev)
715 {
716	struct foo_priv *foo = container_of(dev, ...);
717
718 lock(&foo->private_lock);
719 /* ... resume the device ... */
720 foo->is_suspended = 0;
721 pm_runtime_mark_last_busy(&foo->dev);
722 if (foo->num_pending_requests > 0)
723		foo_process_next_request(foo);
724 unlock(&foo->private_lock);
725 return 0;
726 }
727
728The important point is that after foo_io_completion() asks for an autosuspend,
729the foo_runtime_suspend() callback may race with foo_read_or_write().
730Therefore foo_runtime_suspend() has to check whether there are any pending I/O
731requests (while holding the private lock) before allowing the suspend to
732proceed.
733
734In addition, the power.autosuspend_delay field can be changed by user space at
735any time. If a driver cares about this, it can call
736pm_runtime_autosuspend_expiration() from within the ->runtime_suspend()
737callback while holding its private lock. If the function returns a nonzero
738value then the delay has not yet expired and the callback should return
739-EAGAIN.
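
A driver that cares could fold that check into the hypothetical foo_runtime_suspend()
shown earlier; this is only a sketch of one possible variation, not part of the
reference example:

	int foo_runtime_suspend(struct device *dev)
	{
		struct foo_priv *foo = container_of(dev, ...);
		int ret = 0;

		lock(&foo->private_lock);
		if (pm_runtime_autosuspend_expiration(dev) != 0) {
			ret = -EAGAIN;	/* the (possibly updated) delay has not expired */
		} else if (foo->num_pending_requests > 0) {
			ret = -EBUSY;
		} else {
			/* ... suspend the device ... */
			foo->is_suspended = 1;
		}
		unlock(&foo->private_lock);
		return ret;
	}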
diff --git a/Documentation/power/s2ram.txt b/Documentation/power/s2ram.txt
index 514b94fc931..1bdfa044377 100644
--- a/Documentation/power/s2ram.txt
+++ b/Documentation/power/s2ram.txt
@@ -49,6 +49,13 @@ machine that doesn't boot) is:
49 device (lspci and /sys/devices/pci* is your friend), and see if you can 49 device (lspci and /sys/devices/pci* is your friend), and see if you can
50 fix it, disable it, or trace into its resume function. 50 fix it, disable it, or trace into its resume function.
51 51
52 If no device matches the hash (or any matches appear to be false positives),
53 the culprit may be a device from a loadable kernel module that is not loaded
54 until after the hash is checked. You can check the hash against the current
55 devices again after more modules are loaded using sysfs:
56
57 cat /sys/power/pm_trace_dev_match
58
52For example, the above happens to be the VGA device on my EVO, which I 59For example, the above happens to be the VGA device on my EVO, which I
53used to run with "radeonfb" (it's an ATI Radeon mobility). It turns out 60used to run with "radeonfb" (it's an ATI Radeon mobility). It turns out
54that "radeonfb" simply cannot resume that device - it tries to set the 61that "radeonfb" simply cannot resume that device - it tries to set the
diff --git a/Documentation/power/swsusp.txt b/Documentation/power/swsusp.txt
index 9d60ab717a7..ea718891a66 100644
--- a/Documentation/power/swsusp.txt
+++ b/Documentation/power/swsusp.txt
@@ -66,7 +66,8 @@ swsusp saves the state of the machine into active swaps and then reboots or
66powerdowns. You must explicitly specify the swap partition to resume from with 66powerdowns. You must explicitly specify the swap partition to resume from with
67``resume='' kernel option. If signature is found it loads and restores saved 67``resume='' kernel option. If signature is found it loads and restores saved
68state. If the option ``noresume'' is specified as a boot parameter, it skips 68state. If the option ``noresume'' is specified as a boot parameter, it skips
69the resuming. 69the resuming. If the option ``hibernate=nocompress'' is specified as a boot
70parameter, it saves the hibernation image without compression.
70 71
71In the meantime while the system is suspended you should not add/remove any 72In the meantime while the system is suspended you should not add/remove any
72of the hardware, write to the filesystems, etc. 73of the hardware, write to the filesystems, etc.
diff --git a/Documentation/powerpc/dts-bindings/fsl/spi.txt b/Documentation/powerpc/dts-bindings/fsl/spi.txt
index 80510c018ee..777abd7399d 100644
--- a/Documentation/powerpc/dts-bindings/fsl/spi.txt
+++ b/Documentation/powerpc/dts-bindings/fsl/spi.txt
@@ -1,7 +1,9 @@
1* SPI (Serial Peripheral Interface) 1* SPI (Serial Peripheral Interface)
2 2
3Required properties: 3Required properties:
4- cell-index : SPI controller index. 4- cell-index : QE SPI subblock index.
5 0: QE subblock SPI1
6 1: QE subblock SPI2
5- compatible : should be "fsl,spi". 7- compatible : should be "fsl,spi".
6- mode : the SPI operation mode, it can be "cpu" or "cpu-qe". 8- mode : the SPI operation mode, it can be "cpu" or "cpu-qe".
7- reg : Offset and length of the register set for the device 9- reg : Offset and length of the register set for the device
@@ -29,3 +31,23 @@ Example:
29 gpios = <&gpio 18 1 // device reg=<0> 31 gpios = <&gpio 18 1 // device reg=<0>
30 &gpio 19 1>; // device reg=<1> 32 &gpio 19 1>; // device reg=<1>
31 }; 33 };
34
35
36* eSPI (Enhanced Serial Peripheral Interface)
37
38Required properties:
39- compatible : should be "fsl,mpc8536-espi".
40- reg : Offset and length of the register set for the device.
41- interrupts : should contain eSPI interrupt, the device has one interrupt.
42- fsl,espi-num-chipselects : the number of the chipselect signals.
43
44Example:
45 spi@110000 {
46 #address-cells = <1>;
47 #size-cells = <0>;
48 compatible = "fsl,mpc8536-espi";
49 reg = <0x110000 0x1000>;
50 interrupts = <53 0x2>;
51 interrupt-parent = <&mpic>;
52 fsl,espi-num-chipselects = <4>;
53 };
diff --git a/Documentation/sound/alsa/HD-Audio-Models.txt b/Documentation/sound/alsa/HD-Audio-Models.txt
index ce46fa1e643..37c6aad5e59 100644
--- a/Documentation/sound/alsa/HD-Audio-Models.txt
+++ b/Documentation/sound/alsa/HD-Audio-Models.txt
@@ -296,6 +296,7 @@ Conexant 5051
296Conexant 5066 296Conexant 5066
297============= 297=============
298 laptop Basic Laptop config (default) 298 laptop Basic Laptop config (default)
299 hp-laptop	HP laptops, e.g. G60
299 dell-laptop Dell laptops 300 dell-laptop Dell laptops
300 dell-vostro Dell Vostro 301 dell-vostro Dell Vostro
301 olpc-xo-1_5 OLPC XO 1.5 302 olpc-xo-1_5 OLPC XO 1.5
diff --git a/Documentation/vm/page-types.c b/Documentation/vm/page-types.c
index ccd951fa94e..cc96ee2666f 100644
--- a/Documentation/vm/page-types.c
+++ b/Documentation/vm/page-types.c
@@ -478,7 +478,7 @@ static void prepare_hwpoison_fd(void)
478 } 478 }
479 479
480 if (opt_unpoison && !hwpoison_forget_fd) { 480 if (opt_unpoison && !hwpoison_forget_fd) {
481 sprintf(buf, "%s/renew-pfn", hwpoison_debug_fs); 481 sprintf(buf, "%s/unpoison-pfn", hwpoison_debug_fs);
482 hwpoison_forget_fd = checked_open(buf, O_WRONLY); 482 hwpoison_forget_fd = checked_open(buf, O_WRONLY);
483 } 483 }
484} 484}
diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
new file mode 100644
index 00000000000..996a27d9b8d
--- /dev/null
+++ b/Documentation/workqueue.txt
@@ -0,0 +1,381 @@
1
2Concurrency Managed Workqueue (cmwq)
3
4September, 2010 Tejun Heo <tj@kernel.org>
5 Florian Mickler <florian@mickler.org>
6
7CONTENTS
8
91. Introduction
102. Why cmwq?
113. The Design
124. Application Programming Interface (API)
135. Example Execution Scenarios
146. Guidelines
15
16
171. Introduction
18
19There are many cases where an asynchronous process execution context
20is needed and the workqueue (wq) API is the most commonly used
21mechanism for such cases.
22
23When such an asynchronous execution context is needed, a work item
24describing which function to execute is put on a queue. An
25independent thread serves as the asynchronous execution context. The
26queue is called workqueue and the thread is called worker.
27
28While there are work items on the workqueue the worker executes the
29functions associated with the work items one after the other. When
30there is no work item left on the workqueue the worker becomes idle.
31When a new work item gets queued, the worker begins executing again.
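
For orientation, a minimal (hypothetical) user of the existing API might look as
follows; my_work and my_work_fn are made-up names, and schedule_work() places the
item on the kernel's default workqueue:

	#include <linux/workqueue.h>

	static void my_work_fn(struct work_struct *work)
	{
		/* runs asynchronously in a worker thread and may sleep */
	}

	static DECLARE_WORK(my_work, my_work_fn);

	/* from some code path that wants asynchronous execution: */
	schedule_work(&my_work);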
32
33
342. Why cmwq?
35
36In the original wq implementation, a multi threaded (MT) wq had one
37worker thread per CPU and a single threaded (ST) wq had one worker
38thread system-wide. A single MT wq needed to keep around the same
39number of workers as the number of CPUs. The kernel grew a lot of MT
40wq users over the years and with the number of CPU cores continuously
41rising, some systems saturated the default 32k PID space just booting
42up.
43
44Although MT wq wasted a lot of resource, the level of concurrency
45provided was unsatisfactory. The limitation was common to both ST and
46MT wq albeit less severe on MT. Each wq maintained its own separate
47worker pool. A MT wq could provide only one execution context per CPU
48while a ST wq one for the whole system. Work items had to compete for
49those very limited execution contexts leading to various problems
50including proneness to deadlocks around the single execution context.
51
52The tension between the provided level of concurrency and resource
53usage also forced its users to make unnecessary tradeoffs like libata
54choosing to use ST wq for polling PIOs and accepting an unnecessary
55limitation that no two polling PIOs can progress at the same time. As
56MT wq don't provide much better concurrency, users that required a
57higher level of concurrency, like async or fscache, had to implement
58their own thread pool.
59
60Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
61focus on the following goals.
62
63* Maintain compatibility with the original workqueue API.
64
65* Use per-CPU unified worker pools shared by all wq to provide
66 flexible level of concurrency on demand without wasting a lot of
67 resource.
68
69* Automatically regulate worker pool and level of concurrency so that
70 the API users don't need to worry about such details.
71
72
733. The Design
74
75In order to ease the asynchronous execution of functions a new
76abstraction, the work item, is introduced.
77
78A work item is a simple struct that holds a pointer to the function
79that is to be executed asynchronously. Whenever a driver or subsystem
80wants a function to be executed asynchronously it has to set up a work
81item pointing to that function and queue that work item on a
82workqueue.
83
84Special purpose threads, called worker threads, execute the functions
85off of the queue, one after the other. If no work is queued, the
86worker threads become idle. These worker threads are managed in so
87called thread-pools.
88
89The cmwq design differentiates between the user-facing workqueues that
90subsystems and drivers queue work items on and the backend mechanism
91which manages the thread-pools and processes the queued work items.
92
93The backend is called gcwq. There is one gcwq for each possible CPU
94and one gcwq to serve work items queued on unbound workqueues.
95
96Subsystems and drivers can create and queue work items through special
97workqueue API functions as they see fit. They can influence some
98aspects of the way the work items are executed by setting flags on the
99workqueue they are putting the work item on. These flags include
100things like CPU locality, reentrancy, concurrency limits and more. To
101get a detailed overview refer to the API description of
102alloc_workqueue() below.
103
104When a work item is queued to a workqueue, the target gcwq is
105determined according to the queue parameters and workqueue attributes
106and appended on the shared worklist of the gcwq. For example, unless
107specifically overridden, a work item of a bound workqueue will be
108queued on the worklist of exactly that gcwq that is associated to the
109CPU the issuer is running on.
110
111For any worker pool implementation, managing the concurrency level
112(how many execution contexts are active) is an important issue. cmwq
113tries to keep the concurrency at a minimal but sufficient level.
114Minimal to save resources and sufficient in that the system is used at
115its full capacity.
116
117Each gcwq bound to an actual CPU implements concurrency management by
118hooking into the scheduler. The gcwq is notified whenever an active
119worker wakes up or sleeps and keeps track of the number of the
120currently runnable workers. Generally, work items are not expected to
121hog a CPU and consume many cycles. That means maintaining just enough
122concurrency to prevent work processing from stalling should be
123optimal. As long as there are one or more runnable workers on the
124CPU, the gcwq doesn't start execution of a new work, but, when the
125last running worker goes to sleep, it immediately schedules a new
126worker so that the CPU doesn't sit idle while there are pending work
127items. This allows using a minimal number of workers without losing
128execution bandwidth.
129
130Keeping idle workers around costs nothing other than the memory space
131for the kthreads, so cmwq holds onto idle ones for a while before
132killing them.
133
134For an unbound wq, the above concurrency management doesn't apply and
135the gcwq for the pseudo unbound CPU tries to start executing all work
136items as soon as possible. The responsibility of regulating
137concurrency level is on the users. There is also a flag to mark a
138bound wq to ignore the concurrency management. Please refer to the
139API section for details.
140
141The forward progress guarantee relies on workers being created when
142more execution contexts are necessary, which in turn is guaranteed
143through the use of rescue workers. All work items which might be used
144on code paths that handle memory reclaim are required to be queued on
145wq's that have a rescue-worker reserved for execution under memory
146pressure. Otherwise it is possible that the thread-pool deadlocks waiting
147for execution contexts to free up.
148
149
1504. Application Programming Interface (API)
151
152alloc_workqueue() allocates a wq. The original create_*workqueue()
153functions are deprecated and scheduled for removal. alloc_workqueue()
154takes three arguments - @name, @flags and @max_active. @name is the
155name of the wq and also used as the name of the rescuer thread if
156there is one.
157
158A wq no longer manages execution resources but serves as a domain for
159forward progress guarantee, flush and work item attributes. @flags
160and @max_active control how work items are assigned execution
161resources, scheduled and executed.
162
163@flags:
164
165 WQ_NON_REENTRANT
166
167 By default, a wq guarantees non-reentrance only on the same
168 CPU. A work item may not be executed concurrently on the same
169 CPU by multiple workers but is allowed to be executed
170 concurrently on multiple CPUs. This flag makes sure
171 non-reentrance is enforced across all CPUs. Work items queued
172 to a non-reentrant wq are guaranteed to be executed by at most
173 one worker system-wide at any given time.
174
175 WQ_UNBOUND
176
177 Work items queued to an unbound wq are served by a special
178 gcwq which hosts workers which are not bound to any specific
179 CPU. This makes the wq behave as a simple execution context
180 provider without concurrency management. The unbound gcwq
181 tries to start execution of work items as soon as possible.
182 Unbound wq sacrifices locality but is useful for the following
183 cases.
184
185 * Wide fluctuation in the concurrency level requirement is
186     expected and using a bound wq may end up creating a large number
187 of mostly unused workers across different CPUs as the issuer
188 hops through different CPUs.
189
190 * Long running CPU intensive workloads which can be better
191 managed by the system scheduler.
192
193 WQ_FREEZEABLE
194
195 A freezeable wq participates in the freeze phase of the system
196 suspend operations. Work items on the wq are drained and no
197 new work item starts execution until thawed.
198
199 WQ_MEM_RECLAIM
200
201 All wq which might be used in the memory reclaim paths _MUST_
202 have this flag set. The wq is guaranteed to have at least one
203 execution context regardless of memory pressure.
204
205 WQ_HIGHPRI
206
207 Work items of a highpri wq are queued at the head of the
208 worklist of the target gcwq and start execution regardless of
209 the current concurrency level. In other words, highpri work
210 items will always start execution as soon as execution
211 resource is available.
212
213 Ordering among highpri work items is preserved - a highpri
214 work item queued after another highpri work item will start
215 execution after the earlier highpri work item starts.
216
217 Although highpri work items are not held back by other
218 runnable work items, they still contribute to the concurrency
219 level. Highpri work items in runnable state will prevent
220 non-highpri work items from starting execution.
221
222 This flag is meaningless for unbound wq.
223
224 WQ_CPU_INTENSIVE
225
226 Work items of a CPU intensive wq do not contribute to the
227 concurrency level. In other words, runnable CPU intensive
228 work items will not prevent other work items from starting
229 execution. This is useful for bound work items which are
230 expected to hog CPU cycles so that their execution is
231 regulated by the system scheduler.
232
233 Although CPU intensive work items don't contribute to the
234     concurrency level, the start of their execution is still
235 regulated by the concurrency management and runnable
236 non-CPU-intensive work items can delay execution of CPU
237 intensive work items.
238
239 This flag is meaningless for unbound wq.
240
241 WQ_HIGHPRI | WQ_CPU_INTENSIVE
242
243 This combination makes the wq avoid interaction with
244 concurrency management completely and behave as a simple
245 per-CPU execution context provider. Work items queued on a
246 highpri CPU-intensive wq start execution as soon as resources
247 are available and don't affect execution of other work items.
248
249@max_active:
250
251@max_active determines the maximum number of execution contexts per
252CPU which can be assigned to the work items of a wq. For example,
253with @max_active of 16, at most 16 work items of the wq can be
254executing at the same time per CPU.
255
256Currently, for a bound wq, the maximum limit for @max_active is 512
257and the default value used when 0 is specified is 256. For an unbound
258wq, the limit is the higher of 512 and 4 * num_possible_cpus(). These
259values are chosen sufficiently high such that they are not the
260limiting factor while providing protection in runaway cases.
261
262The number of active work items of a wq is usually regulated by the
263users of the wq, more specifically, by how many work items the users
264may queue at the same time. Unless there is a specific need for
265throttling the number of active work items, specifying '0' is
266recommended.
267
268Some users depend on the strict execution ordering of ST wq. The
269combination of @max_active of 1 and WQ_UNBOUND is used to achieve this
270behavior. Work items on such wq are always queued to the unbound gcwq
271and only one work item can be active at any given time thus achieving
272the same ordering property as ST wq.
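
For illustration only (the wq names, variables and @max_active values are made up),
allocations using the parameters described above might look like this:

	struct workqueue_struct *foo_wq, *foo_ordered_wq;

	/* reclaim-safe wq, at most 4 work items executing per CPU */
	foo_wq = alloc_workqueue("foo", WQ_MEM_RECLAIM, 4);

	/* WQ_UNBOUND plus @max_active of 1: the strictly ordered, ST-like case */
	foo_ordered_wq = alloc_workqueue("foo_ordered", WQ_UNBOUND, 1);

	queue_work(foo_wq, &my_work);	/* my_work as declared in the earlier sketch */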
273
274
2755. Example Execution Scenarios
276
277The following example execution scenarios try to illustrate how cmwq
278behaves under different configurations.
279
280 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
281 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
282 again before finishing. w1 and w2 burn CPU for 5ms then sleep for
283 10ms.
284
285Ignoring all other tasks, works and processing overhead, and assuming
286simple FIFO scheduling, the following is one highly simplified version
287of possible sequences of events with the original wq.
288
289 TIME IN MSECS EVENT
290 0 w0 starts and burns CPU
291 5 w0 sleeps
292 15 w0 wakes up and burns CPU
293 20 w0 finishes
294 20 w1 starts and burns CPU
295 25 w1 sleeps
296 35 w1 wakes up and finishes
297 35 w2 starts and burns CPU
298 40 w2 sleeps
299 50 w2 wakes up and finishes
300
301And with cmwq with @max_active >= 3,
302
303 TIME IN MSECS EVENT
304 0 w0 starts and burns CPU
305 5 w0 sleeps
306 5 w1 starts and burns CPU
307 10 w1 sleeps
308 10 w2 starts and burns CPU
309 15 w2 sleeps
310 15 w0 wakes up and burns CPU
311 20 w0 finishes
312 20 w1 wakes up and finishes
313 25 w2 wakes up and finishes
314
315If @max_active == 2,
316
317 TIME IN MSECS EVENT
318 0 w0 starts and burns CPU
319 5 w0 sleeps
320 5 w1 starts and burns CPU
321 10 w1 sleeps
322 15 w0 wakes up and burns CPU
323 20 w0 finishes
324 20 w1 wakes up and finishes
325 20 w2 starts and burns CPU
326 25 w2 sleeps
327 35 w2 wakes up and finishes
328
329Now, let's assume w1 and w2 are queued to a different wq q1 which has
330WQ_HIGHPRI set,
331
332 TIME IN MSECS EVENT
333 0 w1 and w2 start and burn CPU
334 5 w1 sleeps
335 10 w2 sleeps
336 10 w0 starts and burns CPU
337 15 w0 sleeps
338 15 w1 wakes up and finishes
339 20 w2 wakes up and finishes
340 25 w0 wakes up and burns CPU
341 30 w0 finishes
342
343If q1 has WQ_CPU_INTENSIVE set,
344
345 TIME IN MSECS EVENT
346 0 w0 starts and burns CPU
347 5 w0 sleeps
348 5 w1 and w2 start and burn CPU
349 10 w1 sleeps
350 15 w2 sleeps
351 15 w0 wakes up and burns CPU
352 20 w0 finishes
353 20 w1 wakes up and finishes
354 25 w2 wakes up and finishes
355
356
3576. Guidelines
358
359* Do not forget to use WQ_MEM_RECLAIM if a wq may process work items
360 which are used during memory reclaim. Each wq with WQ_MEM_RECLAIM
361  set has an execution context reserved for it. If there is a
362  dependency among multiple work items used during memory reclaim,
363  they should be queued to separate wq's, each with WQ_MEM_RECLAIM.
364
365* Unless strict ordering is required, there is no need to use ST wq.
366
367* Unless there is a specific need, using 0 for @max_active is
368 recommended. In most use cases, concurrency level usually stays
369 well under the default limit.
370
371* A wq serves as a domain for forward progress guarantee
372  (WQ_MEM_RECLAIM), flush and work item attributes. Work items which
373 are not involved in memory reclaim and don't need to be flushed as a
374 part of a group of work items, and don't require any special
375 attribute, can use one of the system wq. There is no difference in
376 execution characteristics between using a dedicated wq and a system
377  wq (see the sketch after this list).
378
379* Unless work items are expected to consume a huge amount of CPU
380 cycles, using a bound wq is usually beneficial due to the increased
381 level of locality in wq operations and work item execution.
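
As a sketch of the system-wq guideline referenced above (plain_work, reclaim_work,
reclaim_wq and the wq name are all made up; queue_work() and system_wq are the
real API):

	struct workqueue_struct *reclaim_wq;

	/* no special attributes needed: the shared system wq is fine */
	queue_work(system_wq, &plain_work);

	/* work used during memory reclaim gets its own rescuer-backed wq */
	reclaim_wq = alloc_workqueue("foo_reclaim", WQ_MEM_RECLAIM, 0);
	queue_work(reclaim_wq, &reclaim_work);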
diff --git a/Documentation/x86/x86_64/kernel-stacks b/Documentation/x86/x86_64/kernel-stacks
index 5ad65d51fb9..a01eec5d1d0 100644
--- a/Documentation/x86/x86_64/kernel-stacks
+++ b/Documentation/x86/x86_64/kernel-stacks
@@ -18,9 +18,9 @@ specialized stacks contain no useful data. The main CPU stacks are:
18 Used for external hardware interrupts. If this is the first external 18 Used for external hardware interrupts. If this is the first external
19 hardware interrupt (i.e. not a nested hardware interrupt) then the 19 hardware interrupt (i.e. not a nested hardware interrupt) then the
20 kernel switches from the current task to the interrupt stack. Like 20 kernel switches from the current task to the interrupt stack. Like
21 the split thread and interrupt stacks on i386 (with CONFIG_4KSTACKS), 21 the split thread and interrupt stacks on i386, this gives more room
22 this gives more room for kernel interrupt processing without having 22 for kernel interrupt processing without having to increase the size
23 to increase the size of every per thread stack. 23 of every per thread stack.
24 24
25 The interrupt stack is also used when processing a softirq. 25 The interrupt stack is also used when processing a softirq.
26 26