author | Kirill Tkhai <tkhai@yandex.ru> | 2014-06-28 16:03:57 -0400 |
---|---|---|
committer | Ingo Molnar <mingo@kernel.org> | 2014-07-16 07:38:19 -0400 |
commit | 8875125efe8402c4d84b08291e68f1281baba8e2 (patch) | |
tree | 2957c181dd06189a1499e09836c5fe5c3932a0b3 /include/linux/sched.h | |
parent | 466af29bf4270e84261712428a1304c28e3743fa (diff) |
sched: Transform resched_task() into resched_curr()
We always call resched_task() with the rq->curr argument;
it is not possible to reschedule any task other than the
runqueue's current one. This patch introduces
resched_curr(struct rq *) to replace all of the repeated
call patterns. The main aim is cleanup, but there is a
small code-size reduction as well:
(before)
$ size kernel/sched/built-in.o
text data bss dec hex filename
155274 16445 7042 178761 2ba49 kernel/sched/built-in.o
$ size vmlinux
text data bss dec hex filename
7411490 1178376 991232 9581098 92322a vmlinux
(after)
$ size kernel/sched/built-in.o
text data bss dec hex filename
155130 16445 7042 178617 2b9b9 kernel/sched/built-in.o
$ size vmlinux
text data bss dec hex filename
7411362 1178376 991232 9580970 9231aa vmlinux
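The gain is small (144 bytes of text in built-in.o, 128 bytes in vmlinux), so the cleanup is the real point. For illustration only, here is a minimal, self-contained C sketch of the pattern the patch targets: callers that previously had to pass rq->curr explicitly now call a helper that takes the runqueue itself. The struct layout and function bodies below are simplified stand-ins, not the actual kernel implementation in kernel/sched/core.c.

```c
/*
 * Stand-alone sketch of the resched_task() -> resched_curr() change.
 * The types and bodies are simplified stand-ins, not kernel code.
 */
#include <stdio.h>

struct task_struct {
	int need_resched;		/* stand-in for TIF_NEED_RESCHED */
	int pid;
};

struct rq {
	struct task_struct *curr;	/* task currently running on this runqueue */
};

/* Old-style helper: the caller must pass the task, which is always rq->curr. */
static void resched_task(struct task_struct *p)
{
	p->need_resched = 1;
	printf("reschedule requested for pid %d\n", p->pid);
}

/* New-style helper: takes the runqueue and resolves rq->curr itself. */
static void resched_curr(struct rq *rq)
{
	resched_task(rq->curr);
}

int main(void)
{
	struct task_struct t = { .need_resched = 0, .pid = 42 };
	struct rq rq = { .curr = &t };

	/* before: resched_task(rq->curr); */
	resched_curr(&rq);
	return 0;
}
```

Every call site of the form resched_task(rq->curr) then collapses to resched_curr(rq).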
I was choosing between resched_curr() and resched_rq();
the first name reads better to me.
There is a small fib in Documentation/trace/ftrace.txt: I have not
actually re-collected the tracing output there, in the hope that the
patch does not make the execution times noticeably worse. :)
Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/20140628200219.1778.18735.stgit@localhost
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/sched.h')
-rw-r--r-- | include/linux/sched.h | 6 |
1 file changed, 3 insertions, 3 deletions
diff --git a/include/linux/sched.h b/include/linux/sched.h
index c9c9ff723525..41a195385081 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2786,7 +2786,7 @@ static inline bool __must_check current_set_polling_and_test(void)
 
 	/*
 	 * Polling state must be visible before we test NEED_RESCHED,
-	 * paired by resched_task()
+	 * paired by resched_curr()
 	 */
 	smp_mb__after_atomic();
 
@@ -2804,7 +2804,7 @@ static inline bool __must_check current_clr_polling_and_test(void)
 
 	/*
 	 * Polling state must be visible before we test NEED_RESCHED,
-	 * paired by resched_task()
+	 * paired by resched_curr()
 	 */
 	smp_mb__after_atomic();
 
@@ -2836,7 +2836,7 @@ static inline void current_clr_polling(void)
 	 * TIF_NEED_RESCHED and the IPI handler, scheduler_ipi(), will also
 	 * fold.
 	 */
-	smp_mb(); /* paired with resched_task() */
+	smp_mb(); /* paired with resched_curr() */
 
 	preempt_fold_need_resched();
 }
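The comments updated above describe a barrier pairing: the polling helpers make their polling flag visible before testing NEED_RESCHED, while the resched side sets NEED_RESCHED before checking whether the target CPU is polling (and therefore does not need an IPI). As a rough illustration of that pairing only, here is a sketch using C11 atomics; all names and the fence-based model are stand-ins, not the kernel's implementation.

```c
/*
 * Stand-alone sketch of the store/fence/load pairing the comments above
 * refer to.  All names here are illustrative stand-ins, not kernel APIs.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool polling;		/* "this CPU is polling for work" */
static atomic_bool need_resched;	/* "current task should reschedule" */

/* Polling side, analogous in spirit to current_set_polling_and_test(). */
static bool set_polling_and_test(void)
{
	atomic_store_explicit(&polling, true, memory_order_relaxed);
	/* Polling state must be visible before we test need_resched;
	 * this fence pairs with the one on the resched side. */
	atomic_thread_fence(memory_order_seq_cst);
	return atomic_load_explicit(&need_resched, memory_order_relaxed);
}

/* Resched side, analogous in spirit to resched_curr(). */
static bool request_resched(void)
{
	atomic_store_explicit(&need_resched, true, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	/* If the target is polling it will notice need_resched on its own;
	 * otherwise the kernel would have to send an IPI. */
	return !atomic_load_explicit(&polling, memory_order_relaxed);
}

int main(void)
{
	bool need_ipi = request_resched();
	bool pending  = set_polling_and_test();
	printf("need IPI: %d, resched pending: %d\n", need_ipi, pending);
	return 0;
}
```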