authorDamien Ramonda <damien.ramonda@intel.com>2013-11-12 18:08:16 -0500
committerLinus Torvalds <torvalds@linux-foundation.org>2013-11-12 22:09:09 -0500
commitaf248a0c67457e5c6d2bcf288f07b4b2ed064f1f (patch)
tree35ca58b3831270a3015e64297eb2cbf3332439ca
parentc78e93630d15b5f5774213aad9bdc9f52473a89b (diff)
readahead: fix sequential read cache miss detection
The kernel's readahead algorithm sometimes interprets random read accesses as sequential and triggers unnecessary data prefetching from the storage device, impacting average random read latency.

In order to identify sequential cache read misses, the readahead algorithm intends to check whether offset - previous offset == 1 (trivial sequential reads) or offset - previous offset == 0 (sequential reads not aligned on page boundary):

	if (offset - (ra->prev_pos >> PAGE_CACHE_SHIFT) <= 1UL)

The current offset is stored in the "offset" variable of type "pgoff_t" (unsigned long), while the previous offset is stored in "ra->prev_pos" of type "loff_t" (long long). The operands of the if statement are therefore implicitly converted to type long long. Consequently, when previous offset > current offset (which happens on random patterns), the subtraction yields a negative value, the condition is true, and the access is wrongly interpreted as sequential. Unnecessary data prefetching is triggered, impacting the average random read latency.

Storing the previous offset value in a "pgoff_t" variable (unsigned long) fixes the sequential read detection logic.

Signed-off-by: Damien Ramonda <damien.ramonda@intel.com>
Reviewed-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Pierre Tardy <pierre.tardy@intel.com>
Acked-by: David Cohen <david.a.cohen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--mm/readahead.c6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index 50241836fe82..7cdbb44aa90b 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -401,6 +401,7 @@ ondemand_readahead(struct address_space *mapping,
 			   unsigned long req_size)
 {
 	unsigned long max = max_sane_readahead(ra->ra_pages);
+	pgoff_t prev_offset;
 
 	/*
 	 * start of file
@@ -452,8 +453,11 @@ ondemand_readahead(struct address_space *mapping,
 
 	/*
 	 * sequential cache miss
+	 * trivial case: (offset - prev_offset) == 1
+	 * unaligned reads: (offset - prev_offset) == 0
 	 */
-	if (offset - (ra->prev_pos >> PAGE_CACHE_SHIFT) <= 1UL)
+	prev_offset = (unsigned long long)ra->prev_pos >> PAGE_CACHE_SHIFT;
+	if (offset - prev_offset <= 1UL)
 		goto initial_readahead;
 
 	/*