author    Viresh Kumar <viresh.kumar@linaro.org>  2013-06-04 03:40:24 -0400
committer Ingo Molnar <mingo@kernel.org>          2013-06-19 06:58:42 -0400
commit    0a0fca9d832b704f116a25badd1ca8c16771dcac (patch)
tree      499c5502a79447c84ad1d70d1e976083f2f071dc /Documentation
parent    8404c90d050733b3404dc36c500f63ccb0c972ce (diff)
sched: Rename sched.c as sched/core.c in comments and Documentation
Most of the code from kernel/sched.c was moved to kernel/sched/core.c a long
time back, but the comments and Documentation were never updated. I noticed
this while going through sched-domains.txt and decided to fix it globally.

I haven't cross-checked whether the material these files reference in
sched/core.c is still present and unchanged, as that wasn't the motive
behind this patch.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/cdff76a265326ab8d71922a1db5be599f20aad45.1370329560.git.viresh.kumar@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'Documentation')
 -rw-r--r--  Documentation/cgroups/cpusets.txt                  | 2 +-
 -rw-r--r--  Documentation/rt-mutex-design.txt                  | 2 +-
 -rw-r--r--  Documentation/scheduler/sched-domains.txt          | 4 ++--
 -rw-r--r--  Documentation/spinlocks.txt                        | 2 +-
 -rw-r--r--  Documentation/virtual/uml/UserModeLinux-HOWTO.txt  | 4 ++--
 5 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/Documentation/cgroups/cpusets.txt b/Documentation/cgroups/cpusets.txt
index 12e01d432bfe..7740038d82bc 100644
--- a/Documentation/cgroups/cpusets.txt
+++ b/Documentation/cgroups/cpusets.txt
@@ -373,7 +373,7 @@ can become very uneven.
 1.7 What is sched_load_balance ?
 --------------------------------
 
-The kernel scheduler (kernel/sched.c) automatically load balances
+The kernel scheduler (kernel/sched/core.c) automatically load balances
 tasks. If one CPU is underutilized, kernel code running on that
 CPU will look for tasks on other more overloaded CPUs and move those
 tasks to itself, within the constraints of such placement mechanisms
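
[The placement constraint described in the hunk above can be made concrete
with a minimal, self-contained C sketch. This is illustrative only, not the
kernel's actual balancer code; the names task and can_migrate_to() are
hypothetical stand-ins. The check is the one the text implies: a task may
only be pulled onto a CPU set in its allowed-CPUs mask, which
sched_setaffinity() and cpusets both narrow.]

  #include <stdbool.h>

  struct task {
          unsigned long cpus_allowed;   /* one bit per CPU the task may use */
  };

  /* May task p be pulled onto dst_cpu?  The balancer skips tasks for
   * which this is false, in toy form. */
  static bool can_migrate_to(const struct task *p, int dst_cpu)
  {
          return (p->cpus_allowed >> dst_cpu) & 1UL;
  }
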
diff --git a/Documentation/rt-mutex-design.txt b/Documentation/rt-mutex-design.txt
index 33ed8007a845..a5bcd7f5c33f 100644
--- a/Documentation/rt-mutex-design.txt
+++ b/Documentation/rt-mutex-design.txt
@@ -384,7 +384,7 @@ priority back.
 __rt_mutex_adjust_prio examines the result of rt_mutex_getprio, and if the
 result does not equal the task's current priority, then rt_mutex_setprio
 is called to adjust the priority of the task to the new priority.
-Note that rt_mutex_setprio is defined in kernel/sched.c to implement the
+Note that rt_mutex_setprio is defined in kernel/sched/core.c to implement the
 actual change in priority.
 
 It is interesting to note that __rt_mutex_adjust_prio can either increase
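
[A hedged sketch of the adjustment step the hunk above describes. The toy
types and names below (toy_task, toy_getprio, toy_adjust_prio) are invented
for illustration; only the logic follows the text: rt_mutex_getprio yields
the better of the task's own priority and its top waiter's, and
rt_mutex_setprio (in kernel/sched/core.c) applies it only when it differs
from the current priority. Lower numeric value means higher priority, as in
the kernel.]

  struct toy_task {
          int prio;               /* effective (possibly boosted) priority */
          int normal_prio;        /* priority without any boosting */
          int top_waiter_prio;    /* best priority among rt-mutex waiters */
  };

  static int toy_getprio(const struct toy_task *p)
  {
          /* inherit the waiter's priority only if it is higher (smaller) */
          return p->top_waiter_prio < p->normal_prio ?
                 p->top_waiter_prio : p->normal_prio;
  }

  static void toy_adjust_prio(struct toy_task *p)
  {
          int prio = toy_getprio(p);

          if (p->prio != prio)
                  p->prio = prio;   /* kernel: rt_mutex_setprio(task, prio) */
  }
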
diff --git a/Documentation/scheduler/sched-domains.txt b/Documentation/scheduler/sched-domains.txt
index 443f0c76bab4..4af80b1c05aa 100644
--- a/Documentation/scheduler/sched-domains.txt
+++ b/Documentation/scheduler/sched-domains.txt
@@ -25,7 +25,7 @@ is treated as one entity. The load of a group is defined as the sum of the
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
 through scheduler_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
@@ -62,7 +62,7 @@ struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
 the specifics and what to tune.
 
 Architectures may retain the regular override the default SD_*_INIT flags
-while using the generic domain builder in kernel/sched.c if they wish to
+while using the generic domain builder in kernel/sched/core.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
 can be done by #define'ing ARCH_HASH_SCHED_TUNE.
 
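
[The tick-driven flow in the first hunk of this file can be sketched in a
few lines of plain C. This is a simplified model under stated assumptions
(a fixed 10-tick interval, and a direct call where the kernel instead
raises SCHED_SOFTIRQ so run_rebalance_domains() runs in softirq context);
it is not the scheduler's real code.]

  static unsigned long jiffies;        /* tick counter, as in the kernel */
  static unsigned long next_balance;   /* when the next rebalance is due */

  static void rebalance_domains(int cpu)
  {
          /* walk cpu's sched-domain hierarchy, pulling load where a
           * group is out of balance (elided in this sketch) */
          next_balance = jiffies + 10;   /* assumed interval */
  }

  /* kernel: raises SCHED_SOFTIRQ and lets the softirq handler do the
   * work; this sketch simply calls the rebalance step directly */
  static void trigger_load_balance(int cpu)
  {
          if (jiffies >= next_balance)
                  rebalance_domains(cpu);
  }

  static void scheduler_tick(int cpu)
  {
          jiffies++;
          trigger_load_balance(cpu);
  }
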
diff --git a/Documentation/spinlocks.txt b/Documentation/spinlocks.txt
index 9dbe885ecd8d..97eaf5727178 100644
--- a/Documentation/spinlocks.txt
+++ b/Documentation/spinlocks.txt
@@ -137,7 +137,7 @@ don't block on each other (and thus there is no dead-lock wrt interrupts.
 But when you do the write-lock, you have to use the irq-safe version.
 
 For an example of being clever with rw-locks, see the "waitqueue_lock"
-handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
+handling in kernel/sched/core.c - nothing ever _changes_ a wait-queue from
 within an interrupt, they only read the queue in order to know whom to
 wake up. So read-locks are safe (which is good: they are very common
 indeed), while write-locks need to protect themselves against interrupts.
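
[The pattern in the hunk above maps directly onto the kernel's rwlock API.
A hedged example follows: the function names wake_waiters() and
add_waiter() are invented for illustration, but the locking calls are the
real API. Readers that only scan the queue may take the plain read lock
even from interrupt context; the writer must use the irq-saving variant so
a reader cannot interrupt it on the same CPU and deadlock.]

  #include <linux/spinlock.h>

  static DEFINE_RWLOCK(waitqueue_lock);

  /* interrupt context: only reads the queue, so no need to disable irqs */
  static void wake_waiters(void)
  {
          read_lock(&waitqueue_lock);
          /* ... scan the wait-queue to decide whom to wake ... */
          read_unlock(&waitqueue_lock);
  }

  /* process context: modifies the queue, so the write side must also
   * keep interrupts out while the lock is held */
  static void add_waiter(void)
  {
          unsigned long flags;

          write_lock_irqsave(&waitqueue_lock, flags);
          /* ... link the new entry into the wait-queue ... */
          write_unlock_irqrestore(&waitqueue_lock, flags);
  }
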
diff --git a/Documentation/virtual/uml/UserModeLinux-HOWTO.txt b/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
index a5f8436753e7..f4099ca6b483 100644
--- a/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
+++ b/Documentation/virtual/uml/UserModeLinux-HOWTO.txt
@@ -3127,7 +3127,7 @@
        at process_kern.c:156
   #3  0x1006a052 in switch_to (prev=0x50072000, next=0x507e8000, last=0x50072000)
        at process_kern.c:161
-  #4  0x10001d12 in schedule () at sched.c:777
+  #4  0x10001d12 in schedule () at core.c:777
   #5  0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71
   #6  0x1006aa10 in __down_failed () at semaphore.c:157
   #7  0x1006c5d8 in segv_handler (sc=0x5006e940) at trap_user.c:174
@@ -3191,7 +3191,7 @@
        at process_kern.c:161
   161            _switch_to(prev, next);
   (gdb)
-  #4  0x10001d12 in schedule () at sched.c:777
+  #4  0x10001d12 in schedule () at core.c:777
   777            switch_to(prev, next, prev);
   (gdb)
   #5  0x1006a744 in __down (sem=0x507d241c) at semaphore.c:71