| author | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2018-05-16 17:41:41 -0400 |
|---|---|---|
| committer | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2018-07-12 18:39:15 -0400 |
| commit | 15651201fa055ec81d3669b36ab7c2fb12c3ce36 (patch) | |
| tree | ef8bfa5efe7b09c8695423613d2bb368e8e1fa43 /kernel/rcu/tree.c | |
| parent | 6f56f714db067056c80f5d71510118f82872e34c (diff) | |
rcu: Mark task as .need_qs less aggressively
If any scheduling-clock interrupt interrupts an RCU-preempt read-side
critical section, the interrupted task's ->rcu_read_unlock_special.b.need_qs
field is set. This causes the outermost rcu_read_unlock() to incur the
extra overhead of calling into rcu_read_unlock_special(). This commit
reduces that overhead by setting ->rcu_read_unlock_special.b.need_qs only
if the grace period has been in effect for more than one second.
Why one second? Because this is comfortably smaller than the minimum
RCU CPU stall-warning timeout of three seconds, but long enough that the
.need_qs marking should happen quite rarely. And if your RCU read-side
critical section has run on-CPU for a full second, it is not unreasonable
to invest some CPU time in ending the grace period quickly.
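A minimal standalone sketch (not the kernel patch itself) of the one-second gating described above, assuming the scheduling-clock path compares jiffies against the grace period's recorded start time. The names gp_start, need_qs, and sched_clock_tick_in_rcu_read_side() are stand-ins introduced for illustration; only time_after() mirrors the real kernel macro:

```c
/*
 * Userspace model of the one-second gating: the simulated tick handler
 * marks need_qs only once the current grace period has been running for
 * more than HZ ticks.  jiffies, HZ, gp_start, and need_qs are stand-ins
 * for the kernel's counterparts.
 */
#include <stdbool.h>
#include <stdio.h>

#define HZ 1000UL	/* one second's worth of ticks in this model */

/* Wrap-safe "a is after b" comparison, as in include/linux/jiffies.h. */
#define time_after(a, b) ((long)((b) - (a)) < 0)

static unsigned long jiffies;	/* simulated tick counter */
static unsigned long gp_start;	/* tick at which the current GP began */
static bool need_qs;		/* models ->rcu_read_unlock_special.b.need_qs */

/*
 * Called from the simulated scheduling-clock interrupt while the task is
 * inside an RCU-preempt read-side critical section.
 */
static void sched_clock_tick_in_rcu_read_side(void)
{
	if (!need_qs && time_after(jiffies, gp_start + HZ))
		need_qs = true;	/* GP older than one second: ask for a QS */
}

int main(void)
{
	gp_start = 0;
	for (jiffies = 1; jiffies <= 2 * HZ; jiffies++)
		sched_clock_tick_in_rcu_read_side();
	/* need_qs stays clear through the first HZ ticks, then is set. */
	printf("need_qs = %d after %lu ticks\n", need_qs, jiffies - 1);
	return 0;
}
```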
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Diffstat (limited to 'kernel/rcu/tree.c')
0 files changed, 0 insertions, 0 deletions
