path: root/mm
author	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>	2009-09-21 20:01:46 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-09-22 10:17:30 -0400
commit	a26f5320c4ee3d46a0da48fa0f3ac6a00b575793 (patch)
tree	76e4502dae5d278c9a3cebbeda53f01aa1cf3e67 /mm
parent	74a1c48fb4e9f10e3c83dcd39af73487968e35bf (diff)
vmscan: kill unnecessary prefetch
The pages in the list passed to move_active_pages_to_lru() have already been
touched by shrink_active_list(). In other words, the prefetch in
move_active_pages_to_lru() doesn't populate any cache; it's pointless. This
patch removes it.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
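For context, a minimal user-space C sketch of the point the changelog makes (page_stub, first_pass, and second_pass are hypothetical names, not kernel code): once a list has just been walked, a software prefetch during a second walk of the same list only requests cachelines that are already resident.

#include <stddef.h>

/* Illustration only: a stand-in for struct page on a private list. */
struct page_stub {
	struct page_stub *next;
	unsigned long flags;
};

/* First pass touches every node, so each cacheline is pulled in here. */
static void first_pass(struct page_stub *head)
{
	for (struct page_stub *p = head; p; p = p->next)
		p->flags |= 1UL;	/* read-modify-write: line is now hot */
}

/*
 * Second pass over the same list: this prefetch mirrors the kind of hint
 * the patch removes; the lines are already cached, so it buys nothing.
 */
static void second_pass(struct page_stub *head)
{
	for (struct page_stub *p = head; p; p = p->next) {
		if (p->next)
			__builtin_prefetch(p->next, 1);	/* redundant when hot */
		p->flags |= 2UL;
	}
}

int main(void)
{
	struct page_stub c = { NULL, 0 }, b = { &c, 0 }, a = { &b, 0 };

	first_pass(&a);		/* plays the role of shrink_active_list() */
	second_pass(&a);	/* plays the role of move_active_pages_to_lru() */
	return 0;
}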
Diffstat (limited to 'mm')
-rw-r--r--	mm/vmscan.c | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 20c5fa8eca46..e219b47fc50b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1273,7 +1273,6 @@ static void move_active_pages_to_lru(struct zone *zone,
 
 	while (!list_empty(list)) {
 		page = lru_to_page(list);
-		prefetchw_prev_lru_page(page, list, flags);
 
 		VM_BUG_ON(PageLRU(page));
 		SetPageLRU(page);