author		Andi Kleen <ak@linux.intel.com>			2011-05-24 20:12:29 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2011-05-25 11:39:26 -0400
commit		207d04baa3591a354711e863dd90087fc75873b3 (patch)
tree		17498d55af5b2a588e7e7111e927a099236ca770 /mm
parent		275b12bf5486f6f531111fd3d7dbbf01df427cfe (diff)
readahead: reduce unnecessary mmap_miss increases
The original INT_MAX cap is too large; reduce it to
- avoid unnecessarily dirtying/bouncing the cache line
- restore mmap read-around faster when the access pattern changes
Background: in the mosbench exim benchmark, which does multi-threaded page
faults on a shared struct file, the ra->mmap_miss updates were found to cause
excessive cache line bouncing on tmpfs. The ra state updates are needless
for tmpfs because it disables readahead entirely
(shmem_backing_dev_info.ra_pages == 0).
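What the new cap exploits is that mmap_miss only needs to behave as a saturating counter: once it already says "lots of misses", further increments carry no information, so skipping them leaves the shared cache line clean instead of letting it bounce between the CPUs faulting on the same struct file. Below is a minimal userspace sketch of that pattern, not kernel code: LOTSAMISS, note_miss_capped() and the thread setup are illustrative stand-ins, and relaxed atomics replace the kernel's deliberately approximate plain increment so the example stays well-defined C.

#include <limits.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define LOTSAMISS 100	/* illustrative stand-in for MMAP_LOTSAMISS */

/* Stand-in for the mmap_miss field that every thread faulting on the same
 * struct file shares via its file_ra_state. */
static atomic_int mmap_miss;

/* Old behaviour: almost every fault writes the shared line, so it keeps
 * being invalidated out of the other CPUs' caches. */
static void note_miss_uncapped(void)
{
	int miss = atomic_load_explicit(&mmap_miss, memory_order_relaxed);
	if (miss < INT_MAX)
		atomic_store_explicit(&mmap_miss, miss + 1, memory_order_relaxed);
}

/* New behaviour: once the counter saturates at LOTSAMISS * 10, the common
 * case is a read of a clean shared line and no write at all. */
static void note_miss_capped(void)
{
	int miss = atomic_load_explicit(&mmap_miss, memory_order_relaxed);
	if (miss < LOTSAMISS * 10)
		atomic_store_explicit(&mmap_miss, miss + 1, memory_order_relaxed);
}

static void *fault_loop(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++)
		note_miss_capped();	/* swap in note_miss_uncapped() to compare */
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, fault_loop, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);

	printf("mmap_miss saturated at %d\n",
	       atomic_load_explicit(&mmap_miss, memory_order_relaxed));
	return 0;
}

Once the counter saturates, note_miss_capped() degenerates into a read of a clean cache line that every CPU can keep in shared state; the uncapped variant keeps writing and therefore keeps invalidating the line everywhere else.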
Tested-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/filemap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index c974a2863897..e5131392d32e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1566,7 +1566,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 		return;
 	}
 
-	if (ra->mmap_miss < INT_MAX)
+	/* Avoid banging the cache line if not needed */
+	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
 		ra->mmap_miss++;
 
 	/*
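For context on why MMAP_LOTSAMISS * 10 is still a large enough cap: a little further down in do_sync_mmap_readahead() the kernel gives up on mmap read-around once mmap_miss exceeds MMAP_LOTSAMISS (100), and the counter is decremented again on faults that find the page already in the page cache (the async readahead path). The sketch below is a simplified, non-verbatim approximation of that logic, with a stand-in struct, used to show the recovery arithmetic: with the new cap of 1000 it takes roughly 900 cache-hit faults to re-enable read-around after the access pattern changes, where the old INT_MAX cap could require on the order of two billion.

#include <stdio.h>

#define MMAP_LOTSAMISS	100

/* Minimal stand-in for the one field of struct file_ra_state we care about. */
struct file_ra_state {
	unsigned int mmap_miss;
};

/* Major fault (page not cached): count the miss, capped, and report whether
 * mmap read-around would still be attempted.  Simplified, not verbatim. */
static int sync_fault(struct file_ra_state *ra)
{
	if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
		ra->mmap_miss++;
	return ra->mmap_miss <= MMAP_LOTSAMISS;	/* 1 = read-around enabled */
}

/* Minor fault (page already cached): walk the counter back down. */
static void cached_fault(struct file_ra_state *ra)
{
	if (ra->mmap_miss > 0)
		ra->mmap_miss--;
}

int main(void)
{
	struct file_ra_state ra = { 0 };
	int hits = 0;

	/* A long run of misses drives the counter up to the new cap of 1000. */
	for (int i = 0; i < 1000000; i++)
		sync_fault(&ra);

	/* Changed access pattern: count hits needed to re-enable read-around. */
	while (ra.mmap_miss > MMAP_LOTSAMISS) {
		cached_fault(&ra);
		hits++;
	}
	printf("re-enabled read-around after %d cache-hit faults\n", hits);
	return 0;
}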