author		Linus Torvalds <torvalds@linux-foundation.org>	2014-10-13 10:23:15 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2014-10-13 10:23:15 -0400
commit		faafcba3b5e15999cf75d5c5a513ac8e47e2545f (patch)
tree		47d58d1c00e650e820506c91eb9a41268756bdda /drivers/cpuidle
parent		13ead805c5a14b0e7ecd34f61404a5bfba655895 (diff)
parent		f10e00f4bf360c36edbe6bf18a6c75b171cbe012 (diff)
Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
"The main changes in this cycle were:
- Optimized support for Intel "Cluster-on-Die" (CoD) topologies (Dave
Hansen)
- Various sched/idle refinements for better idle handling (Nicolas
Pitre, Daniel Lezcano, Chuansheng Liu, Vincent Guittot)
- sched/numa updates and optimizations (Rik van Riel)
- sysbench speedup (Vincent Guittot)
- capacity calculation cleanups/refactoring (Vincent Guittot)
- Various cleanups to thread group iteration (Oleg Nesterov)
- Double-rq-lock removal optimization and various refactorings
(Kirill Tkhai)
- various sched/deadline fixes
... and lots of other changes"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
sched/dl: Use dl_bw_of() under rcu_read_lock_sched()
sched/fair: Delete resched_cpu() from idle_balance()
sched, time: Fix build error with 64 bit cputime_t on 32 bit systems
sched: Improve sysbench performance by fixing spurious active migration
sched/x86: Fix up typo in topology detection
x86, sched: Add new topology for multi-NUMA-node CPUs
sched/rt: Use resched_curr() in task_tick_rt()
sched: Use rq->rd in sched_setaffinity() under RCU read lock
sched: cleanup: Rename 'out_unlock' to 'out_free_new_mask'
sched: Use dl_bw_of() under RCU read lock
sched/fair: Remove duplicate code from can_migrate_task()
sched, mips, ia64: Remove __ARCH_WANT_UNLOCKED_CTXSW
sched: print_rq(): Don't use tasklist_lock
sched: normalize_rt_tasks(): Don't use _irqsave for tasklist_lock, use task_rq_lock()
sched: Fix the task-group check in tg_has_rt_tasks()
sched/fair: Leverage the idle state info when choosing the "idlest" cpu
sched: Let the scheduler see CPU idle states
sched/deadline: Fix inter- exclusive cpusets migrations
sched/deadline: Clear dl_entity params when setscheduling to different class
sched/numa: Kill the wrong/dead TASK_DEAD check in task_numa_fault()
...
Diffstat (limited to 'drivers/cpuidle')
-rw-r--r--	drivers/cpuidle/cpuidle.c	15
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ee9df5e3f5eb..125150dc6e81 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -223,8 +223,14 @@ void cpuidle_uninstall_idle_handler(void)
 {
 	if (enabled_devices) {
 		initialized = 0;
-		kick_all_cpus_sync();
+		wake_up_all_idle_cpus();
 	}
+
+	/*
+	 * Make sure external observers (such as the scheduler)
+	 * are done looking at pointed idle states.
+	 */
+	synchronize_rcu();
 }
 
 /**
@@ -530,11 +536,6 @@ EXPORT_SYMBOL_GPL(cpuidle_register);
 
 #ifdef CONFIG_SMP
 
-static void smp_callback(void *v)
-{
-	/* we already woke the CPU up, nothing more to do */
-}
-
 /*
  * This function gets called when a part of the kernel has a new latency
  * requirement. This means we need to get all processors out of their C-state,
@@ -544,7 +545,7 @@ static void smp_callback(void *v)
 static int cpuidle_latency_notify(struct notifier_block *b,
 				  unsigned long l, void *v)
 {
-	smp_call_function(smp_callback, NULL, 1);
+	wake_up_all_idle_cpus();
 	return NOTIFY_OK;
 }
 
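
The synchronize_rcu() call added in the first hunk pairs with scheduler-side readers that may hold a pointer into the cpuidle driver's idle-state table (see "sched: Let the scheduler see CPU idle states" in the commit list above). Below is a minimal sketch of that publish/read/retire pattern; the helper names and the single global pointer are illustrative only, not the actual kernel symbols, and the real code keeps the published pointer per runqueue.

/*
 * Illustrative-only sketch of the RCU pattern the hunk above relies on.
 * The idle-entry side publishes a pointer to the state it is about to
 * enter, the scheduler side may peek at it (e.g. to prefer the CPU in
 * the shallowest idle state), and teardown must wait for all readers
 * before the states table can go away.
 */
#include <linux/rcupdate.h>
#include <linux/cpuidle.h>

static struct cpuidle_state __rcu *published_idle_state;	/* hypothetical */

/* Idle-entry side: publish the state this CPU is entering. */
static void publish_idle_state(struct cpuidle_state *state)
{
	rcu_assign_pointer(published_idle_state, state);
}

/* Scheduler side: read under rcu_read_lock(), tolerate NULL. */
static unsigned int peek_idle_exit_latency(void)
{
	struct cpuidle_state *state;
	unsigned int latency = 0;

	rcu_read_lock();
	state = rcu_dereference(published_idle_state);
	if (state)
		latency = state->exit_latency;
	rcu_read_unlock();

	return latency;
}

/* Teardown side: unpublish, then wait for every reader to finish. */
static void retire_idle_states(void)
{
	rcu_assign_pointer(published_idle_state, NULL);
	synchronize_rcu();	/* no reader can still see the old pointer */
	/* ...only now may the driver's states array be unregistered... */
}

In this picture cpuidle_uninstall_idle_handler() plays the retire role: wake_up_all_idle_cpus() gets the idle loops out of the states being torn down, and synchronize_rcu() then guarantees no external observer is still dereferencing a pointer into the driver's states array.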