author    Peter Zijlstra <peterz@infradead.org>  2013-09-27 11:30:03 -0400
committer Ingo Molnar <mingo@kernel.org>         2013-09-28 04:04:47 -0400
commit    75f93fed50c2abadbab6ef546b265f51ca975b27
tree      ae531501cb671c948baedb8e07111f8dda2d5036 /include/linux/sched.h
parent    1a338ac32ca630f67df25b4a16436cccc314e997
sched: Revert need_resched() to look at TIF_NEED_RESCHED
Yuanhan reported a serious throughput regression in his pigz
benchmark. Using the ftrace patch I found that several idle
paths need more TLC before we can switch the generic
need_resched() over to preempt_need_resched.
The preemption paths benefit most from preempt_need_resched and
do indeed use it; all other need_resched() users don't really
care that much so reverting need_resched() back to
tif_need_resched() is the simple and safe solution.
Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: lkp@linux.intel.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20130927153003.GF15690@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/sched.h')
 include/linux/sched.h | 5 +++++
 1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index b09798b672f3..2ac5285db434 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2577,6 +2577,11 @@ static inline bool __must_check current_clr_polling_and_test(void)
 }
 #endif
 
+static __always_inline bool need_resched(void)
+{
+	return unlikely(tif_need_resched());
+}
+
 /*
  * Thread group CPU time accounting.
  */