author    Eric Sandeen <sandeen@redhat.com>  2010-05-16 04:00:00 -0400
committer Theodore Ts'o <tytso@mit.edu>      2010-05-16 04:00:00 -0400
commit    c445e3e0a5c2804524dec6e55f66d63f6bc5bc3e (patch)
tree      e4839825628b6d3aa3a9141e32370d5edebcc546 /fs/ext4
parent    a30eec2a8650a77f754e84b2e15f062fe652baa7 (diff)
ext4: don't scan/accumulate more pages than mballoc will allocate
There was a bug reported on RHEL5 that a 10G dd on a 12G box had a very, very slow sync afterwards.

At issue was the loop in write_cache_pages scanning all the way to the end of the 10G file, even though the subsequent call to mpage_da_submit_io would only actually write a smallish amount; then we went back to the write_cache_pages loop ... wasting tons of time in calling __mpage_da_writepage for thousands of pages we would just revisit (many times) later.

Upstream it's not such a big issue for sys_sync because we get to the loop with a much smaller nr_to_write, which limits the loop. However, talking with Aneesh he realized that fsync upstream still gets here with a very large nr_to_write and we face the same problem.

This patch makes mpage_add_bh_to_extent stop the loop after we've accumulated 2048 pages, by setting mpd->io_done = 1, which ultimately causes the write_cache_pages loop to break.

Repeating the test with a dirty_ratio of 80 (to leave something for fsync to do), I don't see huge IO performance gains, but the reduction in CPU usage is striking: 80% usage with stock, and 2% with the patch below. Instrumenting the loop in write_cache_pages clearly shows that we are wasting time here.

Eventually we need to change mpage_da_map_pages() to also submit its I/O to the block layer, subsuming mpage_da_submit_io(), and then change it to call ext4_get_blocks() multiple times.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
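To connect the 2048-page figure above with the 8 MB constant in the hunk below: the cap is expressed in blocks as 8*1024*1024 / s_blocksize, which comes to 2048 blocks for a 4 KiB block size, i.e. 2048 pages when page size equals block size. A minimal userspace sketch of that arithmetic (max_accumulated_blocks is a hypothetical helper for illustration, not kernel code):

#include <stdio.h>

/* Hypothetical userspace helper: the cap the patch applies, in blocks. */
static unsigned long max_accumulated_blocks(unsigned long blocksize)
{
	return 8UL * 1024 * 1024 / blocksize;
}

int main(void)
{
	unsigned long sizes[] = { 1024, 2048, 4096 };

	for (int i = 0; i < 3; i++)
		printf("blocksize %4lu -> cap %4lu blocks\n",
		       sizes[i], max_accumulated_blocks(sizes[i]));
	/* For 4096-byte blocks the cap is 2048 blocks, i.e. 2048 pages
	 * when page size equals block size, matching the commit message. */
	return 0;
}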
Diffstat (limited to 'fs/ext4')
-rw-r--r--  fs/ext4/inode.c | 9
1 file changed, 9 insertions, 0 deletions
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 55bfcd94d1a..89a31e8869c 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2350,6 +2350,15 @@ static void mpage_add_bh_to_extent(struct mpage_da_data *mpd,
 	sector_t next;
 	int nrblocks = mpd->b_size >> mpd->inode->i_blkbits;
 
+	/*
+	 * XXX Don't go larger than mballoc is willing to allocate
+	 * This is a stopgap solution.  We eventually need to fold
+	 * mpage_da_submit_io() into this function and then call
+	 * ext4_get_blocks() multiple times in a loop
+	 */
+	if (nrblocks >= 8*1024*1024/mpd->inode->i_sb->s_blocksize)
+		goto flush_it;
+
 	/* check if the reserved journal credits might overflow */
 	if (!(EXT4_I(mpd->inode)->i_flags & EXT4_EXTENTS_FL)) {
 		if (nrblocks >= EXT4_MAX_TRANS_DATA) {
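The flush_it label that the new goto targets is outside this hunk. Per the commit message, that path writes out whatever has been accumulated so far and sets mpd->io_done = 1, which is what ultimately makes the write_cache_pages loop in the caller stop scanning. A toy userspace model of that control flow (struct mpd_model, add_page, and the page counts are illustrative stand-ins under those assumptions, not kernel code):

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-in for struct mpage_da_data: just enough state for the model. */
struct mpd_model {
	unsigned long accumulated;	/* pages gathered so far */
	unsigned long cap;		/* e.g. 2048 pages for 4 KiB blocks */
	bool io_done;			/* set once the extent has been flushed */
};

/* Stand-in for __mpage_da_writepage/mpage_add_bh_to_extent: accumulate one
 * page and "flush" (set io_done) once the cap is reached. */
static void add_page(struct mpd_model *mpd)
{
	if (++mpd->accumulated >= mpd->cap)
		mpd->io_done = true;
}

int main(void)
{
	struct mpd_model mpd = { .cap = 2048 };
	unsigned long total = 10UL * 1024 * 1024 / 4;	/* dirty pages in a 10 GiB file, 4 KiB pages */
	unsigned long scanned = 0;

	/* Stand-in for the write_cache_pages() scan over dirty pages. */
	for (unsigned long i = 0; i < total; i++) {
		add_page(&mpd);
		scanned++;
		if (mpd.io_done)	/* with the patch, this breaks the scan early */
			break;
	}
	printf("scanned %lu pages before submitting I/O (out of %lu dirty)\n",
	       scanned, total);
	return 0;
}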