author     Nick Piggin <nickpiggin@yahoo.com.au>    2005-11-09 00:39:04 -0500
committer  Linus Torvalds <torvalds@g5.osdl.org>    2005-11-09 10:56:33 -0500
commit     64c7c8f88559624abdbe12b5da6502e8879f8d28 (patch)
tree       02f85a35ddd0f24dec70e5d6ecd61073578fd8d6 /kernel
parent     5bfb5d690f36d316a5f3b4f7775fda996faa6b12 (diff)
[PATCH] sched: resched and cpu_idle rework
Make some changes to NEED_RESCHED and POLLING_NRFLAG handling to reduce
confusion, and make their semantics rigid. This improves the efficiency of
resched_task and some cpu_idle routines.
* In resched_task:
- TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
and as we hold it during resched_task, there is no need for an
atomic test-and-set there. The only other time this flag is set is
when the task's quantum expires, in the timer interrupt - this is
protected against because the rq lock is irq-safe.
- If TIF_NEED_RESCHED is already set, then we don't need to do anything. It
won't get unset until the task gets schedule()d off.
- If we are running on the same CPU as the task we resched, then set
TIF_NEED_RESCHED and no further action is required.
- If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
after TIF_NEED_RESCHED has been set, then we need to send an IPI.
Using these rules, we are able to remove the test and set operation in
resched_task, and make clear the previously vague semantics of
POLLING_NRFLAG.
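In code form, these rules reduce to roughly the following sketch. This
condenses the kernel/sched.c hunk shown below, with each rule annotated;
task_t and runqueue_t are the 2.6-era typedefs:

	static void resched_task(task_t *p)
	{
		int cpu;

		/* Rule 1: the caller holds the runqueue lock, and the flag
		 * is only ever cleared under that lock, so no atomic
		 * test-and-set is needed here. */
		assert_spin_locked(&task_rq(p)->lock);

		/* Rule 2: already marked - nothing more to do. */
		if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED)))
			return;

		set_tsk_thread_flag(p, TIF_NEED_RESCHED);

		/* Rule 3: same CPU - the local interrupt-return path will
		 * notice the flag; no further action required. */
		cpu = task_cpu(p);
		if (cpu == smp_processor_id())
			return;

		/* Rule 4: remote CPU - send an IPI only if it is not
		 * polling the flag. NEED_RESCHED must be visible before
		 * we test POLLING_NRFLAG. */
		smp_mb();
		if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
			smp_send_reschedule(cpu);
	}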
* In idle routines:
- Enter cpu_idle with preempt disabled. When the need_resched() condition
becomes true, explicitly call schedule(). This makes things a bit clearer
(IMO), but I haven't updated all architectures yet (see the idle-loop
sketch below for the intended shape).
- Many do a test-and-clear of TIF_NEED_RESCHED for some reason. According
to the resched_task rules this isn't needed (and it actually breaks the
assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
held), so remove it. This generally saves one locked memory op when
switching to the idle thread.
- Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the
innermost polling idle loops. The resched_task semantics above allow it to
remain set until just before the final need_resched() check that precedes
a halt requiring an interrupt wakeup.
Many idle routines simply never enter such a halt, and so POLLING_NRFLAG
can always be left set, completely eliminating resched IPIs when rescheduling
the idle task.
The window in which POLLING_NRFLAG is left set can thus be widened,
reducing the chance of resched IPIs.
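To make the idle-side protocol concrete, here is a hedged sketch of the two
idle styles under the new rules. The loop structure follows the i386
cpu_idle/default_idle of this period; halt_in_idle is a hypothetical name
for the halting variant, and safe_halt() is the x86 sti;hlt primitive:

	/* Polling idle: TIF_POLLING_NRFLAG stays set the whole time, so
	 * resched_task() on another CPU never needs to send an IPI - the
	 * poll loop observes TIF_NEED_RESCHED by itself. */
	void cpu_idle(void)
	{
		set_thread_flag(TIF_POLLING_NRFLAG);
		while (1) {
			while (!need_resched())
				cpu_relax();
			/* entered with preempt disabled; schedule explicitly */
			preempt_enable_no_resched();
			schedule();
			preempt_disable();
		}
	}

	/* Halting idle: drop POLLING_NRFLAG only just before halting, and
	 * re-check need_resched() once the cleared flag is globally
	 * visible. Either resched_task() sees the flag clear and IPIs us,
	 * or we see NEED_RESCHED and skip the halt. */
	static void halt_in_idle(void)
	{
		clear_thread_flag(TIF_POLLING_NRFLAG);
		smp_mb__after_clear_bit();  /* pairs with smp_mb() in resched_task() */
		local_irq_disable();
		if (!need_resched())
			safe_halt();        /* sti; hlt - the resched IPI wakes us */
		else
			local_irq_enable();
		set_thread_flag(TIF_POLLING_NRFLAG);
	}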
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 0f2def822296..ac3f5cc3bb51 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -864,21 +864,28 @@ static void deactivate_task(struct task_struct *p, runqueue_t *rq)
 #ifdef CONFIG_SMP
 static void resched_task(task_t *p)
 {
-	int need_resched, nrpolling;
+	int cpu;
 
 	assert_spin_locked(&task_rq(p)->lock);
 
-	/* minimise the chance of sending an interrupt to poll_idle() */
-	nrpolling = test_tsk_thread_flag(p,TIF_POLLING_NRFLAG);
-	need_resched = test_and_set_tsk_thread_flag(p,TIF_NEED_RESCHED);
-	nrpolling |= test_tsk_thread_flag(p,TIF_POLLING_NRFLAG);
+	if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED)))
+		return;
+
+	set_tsk_thread_flag(p, TIF_NEED_RESCHED);
+
+	cpu = task_cpu(p);
+	if (cpu == smp_processor_id())
+		return;
 
-	if (!need_resched && !nrpolling && (task_cpu(p) != smp_processor_id()))
-		smp_send_reschedule(task_cpu(p));
+	/* NEED_RESCHED must be visible before we test POLLING_NRFLAG */
+	smp_mb();
+	if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
+		smp_send_reschedule(cpu);
 }
 #else
 static inline void resched_task(task_t *p)
 {
+	assert_spin_locked(&task_rq(p)->lock);
 	set_tsk_need_resched(p);
 }
 #endif
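A note on the barrier pairing the hunk relies on: the smp_mb() here pairs
with the barrier an idle CPU issues after clearing TIF_POLLING_NRFLAG
(smp_mb__after_clear_bit() in the idle sketch above). Laid out as an
interleaving, with CPU A in resched_task() and CPU B about to halt:

	CPU A (resched_task)             CPU B (idle, about to halt)
	----------------------------     -----------------------------
	set TIF_NEED_RESCHED             clear TIF_POLLING_NRFLAG
	smp_mb()                         smp_mb__after_clear_bit()
	test TIF_POLLING_NRFLAG          test need_resched()

With both store/load pairs fully ordered, at least one side must observe
the other's store: either CPU A sees POLLING_NRFLAG clear and sends the
IPI, or CPU B sees NEED_RESCHED set and skips the halt. The lost-wakeup
case - B halting with NEED_RESCHED set and no IPI sent - would require
both loads to miss both stores, which the paired barriers rule out.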