author     Paul E. McKenney <paulmck@linux.vnet.ibm.com>   2009-04-14 00:31:16 -0400
committer  Ingo Molnar <mingo@elte.hu>                     2009-04-14 05:31:50 -0400
commit     ef631b0ca01655d24e9ca7e199262c4a46416a26
tree       0ef86d5b97c6e2de500e639ebb92da8f2ec8c863 /include
parent     27b19565fe4ca5b0e9d2ae98ce4b81ca728bf445
rcu: Make hierarchical RCU less IPI-happy
This patch fixes a hierarchical-RCU performance bug located by Anton
Blanchard. The problem stems from a misguided attempt to provide a
work-around for jiffies-counter failure. This work-around uses a per-CPU
n_rcu_pending counter, which is incremented on each call to rcu_pending(),
which in turn is called from each scheduling-clock interrupt. Each CPU
then treats this counter as a surrogate for the jiffies counter, so
that if the jiffies counter fails to advance, the per-CPU n_rcu_pending
counter will cause RCU to invoke force_quiescent_state(), which in turn
will (among other things) send resched IPIs to CPUs that have thus far
failed to pass through an RCU quiescent state.
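For illustration only, the short user-space sketch below approximates the heuristic described above. The struct fields echo the n_rcu_pending and n_rcu_pending_force_qs fields visible in the diff at the bottom of this page, but rcu_pending_tick(), the stub force_quiescent_state(), and the three-tick trip point are simplified stand-ins, not the actual kernel code:

    #include <stdio.h>

    /* Hypothetical per-CPU state; names mirror the fields in the diff below. */
    struct cpu_rcu_state {
            long n_rcu_pending;             /* rcu_pending() calls since boot */
            long n_rcu_pending_force_qs;    /* when to force quiescent states */
    };

    /* Stub: in the kernel, forcing quiescent states may send resched IPIs. */
    static void force_quiescent_state(int cpu)
    {
            printf("CPU %d: force_quiescent_state() -> resched IPIs to holdouts\n", cpu);
    }

    /* Stand-in for the check made on each scheduling-clock interrupt. */
    static void rcu_pending_tick(struct cpu_rcu_state *rdp, int cpu)
    {
            rdp->n_rcu_pending++;
            if (rdp->n_rcu_pending >= rdp->n_rcu_pending_force_qs) {
                    /* Surrogate-jiffies trip point: reset only this CPU's threshold. */
                    rdp->n_rcu_pending_force_qs = rdp->n_rcu_pending + 3;
                    force_quiescent_state(cpu);
            }
    }

    int main(void)
    {
            struct cpu_rcu_state rdp = { .n_rcu_pending_force_qs = 3 };

            for (int tick = 0; tick < 9; tick++)
                    rcu_pending_tick(&rdp, 0);      /* fires every third tick */
            return 0;
    }

Compiled and run, this prints a force_quiescent_state() line every third tick, matching the cadence described below.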
Unfortunately, each CPU resets only its own counter after sending a
batch of IPIs. This means that the other CPUs will also (needlessly)
send -another- round of IPIs, for a full N-squared set of IPIs in the
worst case every three scheduler-clock ticks until the grace period
finally ends. It is not reasonable for a given CPU to reset each and
every n_rcu_pending for all the other CPUs, so this patch instead simply
disables the jiffies-counter "training wheels", thus eliminating the
excessive IPIs.
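To make the N-squared behavior concrete, here is a self-contained simulation under assumed numbers (8 CPUs, a three-tick trip point, and a grace period that never completes). Because each CPU resets only its own threshold, all N CPUs trip on the same tick and each sends IPIs to the other N-1 CPUs:

    #include <stdio.h>

    #define NCPUS 8                 /* assumed CPU count for the illustration */

    static long counter[NCPUS];     /* per-CPU n_rcu_pending analogue */
    static long threshold[NCPUS];   /* per-CPU force-QS trip point */

    int main(void)
    {
            long total_ipis = 0;

            for (int cpu = 0; cpu < NCPUS; cpu++)
                    threshold[cpu] = 3;

            /* Nine scheduler-clock ticks with every CPU still a holdout. */
            for (int tick = 1; tick <= 9; tick++) {
                    long ipis = 0;

                    for (int cpu = 0; cpu < NCPUS; cpu++) {
                            if (++counter[cpu] < threshold[cpu])
                                    continue;
                            /* Reset only this CPU's threshold ... */
                            threshold[cpu] = counter[cpu] + 3;
                            /* ... then IPI every other CPU still holding out. */
                            ipis += NCPUS - 1;
                    }
                    if (ipis)
                            printf("tick %d: %ld IPIs sent\n", tick, ipis);
                    total_ipis += ipis;
            }
            printf("total: %ld IPIs, ~N*(N-1) per burst for N=%d\n",
                   total_ipis, NCPUS);
            return 0;
    }

Run as-is, each three-tick burst costs 56 IPIs for N=8, even though a single CPU's batch would have sufficed.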
Note that the jiffies-counter IPIs do not have this problem due to
the fact that the jiffies counter is global, so that the CPU sending
the IPIs can easily reset things, thus preventing the other CPUs from
sending redundant IPIs.
Note also that the n_rcu_pending counter remains, as it will continue to
be used for tracing. It may also see use to update the jiffies counter,
should an appropriate kick-the-jiffies-counter API appear.
Located-by: Anton Blanchard <anton@au1.ibm.com>
Tested-by: Anton Blanchard <anton@au1.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: anton@samba.org
Cc: akpm@linux-foundation.org
Cc: dipankar@in.ibm.com
Cc: manfred@colorfullife.com
Cc: cl@linux-foundation.org
Cc: josht@linux.vnet.ibm.com
Cc: schamp@sgi.com
Cc: niv@us.ibm.com
Cc: dvhltc@us.ibm.com
Cc: ego@in.ibm.com
Cc: laijs@cn.fujitsu.com
Cc: rostedt@goodmis.org
Cc: peterz@infradead.org
Cc: penberg@cs.helsinki.fi
Cc: andi@firstfloor.org
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
LKML-Reference: <12396834793575-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include')
 include/linux/rcutree.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 0cdda00f2b2a..58b2aa5312b9 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -161,9 +161,8 @@ struct rcu_data {
         unsigned long offline_fqs;      /* Kicked due to being offline. */
         unsigned long resched_ipi;      /* Sent a resched IPI. */
 
-        /* 5) state to allow this CPU to force_quiescent_state on others */
+        /* 5) For future __rcu_pending statistics. */
         long n_rcu_pending;             /* rcu_pending() calls since boot. */
-        long n_rcu_pending_force_qs;    /* when to force quiescent states. */
 
         int cpu;
 };