author	Chao Yu <chao2.yu@samsung.com>	2014-06-23 21:18:20 -0400
committer	Jaegeuk Kim <jaegeuk@kernel.org>	2014-07-09 17:04:25 -0400
commit	aec71382c68135261ef6efc3d8a96b7149939446 (patch)
tree	947ff0bb52e12693c0f551aaef1f70aacad735d2 /fs/f2fs/f2fs.h
parent	a014e037be26b5c9ee6fb4e49e7804141cf3bb89 (diff)
f2fs: refactor flush_nat_entries codes for reducing NAT writes
Although building the NAT journal in cursum reduces the read/write work for the
NAT block, the previous design gives us lower performance when checkpoints are
written frequently, in two cases:
1. If the journal in cursum is already full, it is wasteful to flush all nat
entries to pages for persistence while caching none of them.
2. If the journal in cursum is not full, we fill it with nat entries until it
is full, then flush the remaining dirty entries to disk without merging the
journaled entries; those journaled entries may be flushed to disk at the next
checkpoint, having lost the chance to be flushed this time.
In this patch we merge dirty entries located in the same NAT block into a nat
entry set, and link all sets into a list sorted in ascending order by each
set's entry count. We then flush the entries of the sparse sets into the
journal for as long as they fit, and flush the remaining merged entries to
disk. This way we not only gain performance but also extend the lifetime of
the flash device.
In my testing environment, this patch clearly reduces NAT block writes. In the
hard disk test case, the elapsed time of fsstress is stably reduced by about 5%.
1. virtual machine + hard disk:
fsstress -p 20 -n 200 -l 5
        node num   cp count   nodes/cp
based    4599.6     1803.0      2.551
patched  2714.6     1829.6      1.483
2. virtual machine + 32g micro SD card:
fsstress -p 20 -n 200 -l 1 -w -f chown=0 -f creat=4 -f dwrite=0
-f fdatasync=4 -f fsync=4 -f link=0 -f mkdir=4 -f mknod=4 -f rename=5
-f rmdir=5 -f symlink=0 -f truncate=4 -f unlink=5 -f write=0 -S
        node num   cp count   nodes/cp
based      84.5       43.7      1.933
patched    49.2       40.0      1.23
The latency of the merge operation is reasonable even when handling extreme
cases, such as merging a great number of dirty nats:

latency(ns)   dirty nat count
    3089219             24922
    5129423             27422
    4000250             24523
change log from v1:
 o fix wrong logic in add_nat_entry when grabbing a new nat entry set.
 o switch to creating the slab cache in create_node_manager_caches.
 o use GFP_ATOMIC instead of GFP_NOFS to avoid potential long latency.
change log from v2:
 o move the comment to a more appropriate position, as suggested by Jaegeuk Kim.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Diffstat (limited to 'fs/f2fs/f2fs.h')
 fs/f2fs/f2fs.h | 2 ++
 1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 3f0291b840ef..ec480b1a6e33 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -256,6 +256,8 @@ struct f2fs_nm_info {
 	unsigned int nat_cnt;		/* the # of cached nat entries */
 	struct list_head nat_entries;	/* cached nat entry list (clean) */
 	struct list_head dirty_nat_entries; /* cached nat entry list (dirty) */
+	struct list_head nat_entry_set;	/* nat entry set list */
+	unsigned int dirty_nat_cnt;	/* total num of nat entries in set */
 
 	/* free node ids management */
 	struct radix_tree_root free_nid_root;/* root of the free_nid cache */