author | Linus Torvalds <torvalds@linux-foundation.org> | 2012-05-22 21:27:32 -0400
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-05-22 21:27:32 -0400
commit | d79ee93de909dfb252279b9a95978bbda9a814a9 (patch) |
tree | bfccca60fd36259ff4bcc5e78a2c272fbd680065 /arch/tile/include/asm |
parent | 2ff2b289a695807e291e1ed9f639d8a3ba5f4254 (diff) |
parent | 1c2927f18576d65631d8e0ddd19e1d023183222e (diff) |
Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler changes from Ingo Molnar:
"The biggest change is the cleanup/simplification of the load-balancer:
instead of the current practice of architectures twiddling scheduler
internal data structures and providing the scheduler domains in
colorfully inconsistent ways, we now have generic scheduler code in
kernel/sched/core.c:sched_init_numa() that looks at the architecture's
node_distance() parameters and (while not fully trusting it) deduces a
NUMA topology from it.
This inevitably changes balancing behavior - hopefully for the better.
There are various smaller optimizations, cleanups and fixlets as well"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched: Taint kernel with TAINT_WARN after sleep-in-atomic bug
sched: Remove stale power aware scheduling remnants and dysfunctional knobs
sched/debug: Fix printing large integers on 32-bit platforms
sched/fair: Improve the ->group_imb logic
sched/nohz: Fix rq->cpu_load[] calculations
sched/numa: Don't scale the imbalance
sched/fair: Revert sched-domain iteration breakage
sched/x86: Rewrite set_cpu_sibling_map()
sched/numa: Fix the new NUMA topology bits
sched/numa: Rewrite the CONFIG_NUMA sched domain support
sched/fair: Propagate 'struct lb_env' usage into find_busiest_group
sched/fair: Add some serialization to the sched_domain load-balance walk
sched/fair: Let minimally loaded cpu balance the group
sched: Change rq->nr_running to unsigned int
x86/numa: Check for nonsensical topologies on real hw as well
x86/numa: Hard partition cpu topology masks on node boundaries
x86/numa: Allow specifying node_distance() for numa=fake
x86/sched: Make mwait_usable() heed to "idle=" kernel parameters properly
sched: Update documentation and comments
sched_rt: Avoid unnecessary dequeue and enqueue of pushable tasks in set_cpus_allowed_rt()
Diffstat (limited to 'arch/tile/include/asm')
-rw-r--r-- | arch/tile/include/asm/topology.h | 26 |
1 file changed, 0 insertions(+), 26 deletions(-)
diff --git a/arch/tile/include/asm/topology.h b/arch/tile/include/asm/topology.h
index 6fdd0c860193..7a7ce390534f 100644
--- a/arch/tile/include/asm/topology.h
+++ b/arch/tile/include/asm/topology.h
@@ -78,32 +78,6 @@ static inline const struct cpumask *cpumask_of_node(int node)
 	.balance_interval = 32,			\
 }
 
-/* sched_domains SD_NODE_INIT for TILE architecture */
-#define SD_NODE_INIT (struct sched_domain) {	\
-	.min_interval = 16,			\
-	.max_interval = 512,			\
-	.busy_factor = 32,			\
-	.imbalance_pct = 125,			\
-	.cache_nice_tries = 1,			\
-	.busy_idx = 3,				\
-	.idle_idx = 1,				\
-	.newidle_idx = 2,			\
-	.wake_idx = 1,				\
-	.flags = 1*SD_LOAD_BALANCE		\
-		| 1*SD_BALANCE_NEWIDLE		\
-		| 1*SD_BALANCE_EXEC		\
-		| 1*SD_BALANCE_FORK		\
-		| 0*SD_BALANCE_WAKE		\
-		| 0*SD_WAKE_AFFINE		\
-		| 0*SD_PREFER_LOCAL		\
-		| 0*SD_SHARE_CPUPOWER		\
-		| 0*SD_SHARE_PKG_RESOURCES	\
-		| 1*SD_SERIALIZE		\
-		,				\
-	.last_balance = jiffies,		\
-	.balance_interval = 128,		\
-}
-
 /* By definition, we create nodes based on online memory. */
 #define node_has_online_mem(nid) 1