author		KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>	2008-12-09 16:14:16 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>		2008-12-10 11:01:53 -0500
commit		6841c8e26357904ef462650273f5d5015f7bb370 (patch)
tree		17621c8b63393b725f42dad26ac671dd1926c9c1 /mm/swap.c
parent		02d211688727ad02bb4555b1aa8ae2de16b21b39 (diff)
mm: remove UP version of lru_add_drain_all()
Currently, lru_add_drain_all() has two versions:
(1) one that uses schedule_on_each_cpu()
(2) one that does not use schedule_on_each_cpu()
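
Condensed from the pre-patch mm/swap.c (the full removal is visible in
the diff at the bottom of this page), the two versions read:

	#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
	/* (1) Drain the LRU pagevecs of every CPU via the workqueue. */
	int lru_add_drain_all(void)
	{
		return schedule_on_each_cpu(lru_add_drain_per_cpu);
	}
	#else
	/* (2) UP fallback: drain only the local CPU. */
	int lru_add_drain_all(void)
	{
		lru_add_drain();
		return 0;
	}
	#endif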
Gerald Schaefer reported that this doesn't work well on an SMP (non-NUMA)
s390 machine.
offline_pages() calls lru_add_drain_all() followed by drain_all_pages().
While drain_all_pages() works on each cpu, lru_add_drain_all() only runs
on the current cpu for architectures without CONFIG_NUMA. This led to the
BUG_ON(!PageBuddy(page)) in __offline_isolated_pages() during a memory
hotplug stress test on s390. The page in question was still on the pcp
list, because of a race between lru_add_drain_all() and drain_all_pages()
on different cpus.
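
To illustrate the race, here is a hedged sketch of the failing sequence;
only the lru_add_drain_all() -> drain_all_pages() ordering comes from the
report above, while the signature and surrounding structure are
assumptions, not the actual mm/memory_hotplug.c code of the time:

	/* Illustrative sketch only; signature and structure are assumed. */
	static int offline_pages_sketch(unsigned long start_pfn,
					unsigned long nr_pages)
	{
		/*
		 * Pre-patch, without CONFIG_NUMA (and without
		 * CONFIG_UNEVICTABLE_LRU), this drained only the current
		 * CPU's LRU pagevecs; pages cached on other CPUs stayed put.
		 */
		lru_add_drain_all();

		/*
		 * drain_all_pages() does run on every CPU, but a page still
		 * sitting in another CPU's LRU pagevec can reach that CPU's
		 * pcp list after its per-cpu drain has already run.
		 */
		drain_all_pages();

		/*
		 * __offline_isolated_pages() then finds the page on the pcp
		 * list rather than in the buddy allocator and hits
		 * BUG_ON(!PageBuddy(page)).
		 */
		return 0;
	}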
In practice, almost every machine has CONFIG_UNEVICTABLE_LRU=y, so almost
every machine uses version (1) of lru_add_drain_all() even when it is UP.
The ifdef is therefore not valuable; simply removing it is better. The
surviving code is sketched below.
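
After the removal, the schedule_on_each_cpu() version remains
unconditionally. Per the new side of the diff, the surviving code is:

	static void lru_add_drain_per_cpu(struct work_struct *dummy)
	{
		lru_add_drain();
	}

	int lru_add_drain_all(void)
	{
		return schedule_on_each_cpu(lru_add_drain_per_cpu);
	}

This still behaves correctly on UP: schedule_on_each_cpu() queues the
work on each online CPU, which on a UP machine is just the one.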
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/swap.c')
-rw-r--r--	mm/swap.c	13
1 file changed, 0 insertions, 13 deletions
@@ -299,7 +299,6 @@ void lru_add_drain(void)
 	put_cpu();
 }
 
-#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
@@ -313,18 +312,6 @@ int lru_add_drain_all(void)
 	return schedule_on_each_cpu(lru_add_drain_per_cpu);
 }
 
-#else
-
-/*
- * Returns 0 for success
- */
-int lru_add_drain_all(void)
-{
-	lru_add_drain();
-	return 0;
-}
-#endif
-
 /*
  * Batched page_cache_release(). Decrement the reference count on all the
  * passed pages. If it fell to zero then remove the page from the LRU and