author	David Woodhouse <dwmw2@infradead.org>	2006-05-13 23:06:24 -0400
committer	David Woodhouse <dwmw2@infradead.org>	2006-05-13 23:06:24 -0400
commit	cf5eba53346fbfdf1b80e05ca3fd7fe2ec841077 (patch)
tree	22e24ce02de0ddd1c7a1113a59f0cac157aa9dab /fs
parent	151e76590f66f5406eb2e1f4270c5323f385d2e8 (diff)
[JFFS2] Reduce excessive node count for syslog files.
We currently get fairly poor behaviour with files which get many short writes, such as system logs. This is because we end up with many tiny data nodes, and the rbtree gets massive. None of these nodes are actually obsolete, so they are counted as 'clean' space. Eraseblocks can be entirely full of these nodes (which are REF_NORMAL instead of REF_PRISTINE), and still they count entirely towards 'used_size', and the eraseblocks can sit on the clean_list for a long time without being picked for GC.

One way to alleviate this in the long term is to account REF_NORMAL space separately from REF_PRISTINE space, rather than counting them both towards used_size. Then these eraseblocks can be picked for GC and the offending nodes will be garbage collected.

The short-term fix, though -- which probably makes sense even if we do eventually implement the above -- is to merge these nodes as they're written. When we write the last byte in a page, write the _whole_ page. This obsoletes the earlier nodes in the page _immediately_, and we don't even need to wait for the garbage collection to do it.

Original implementation from Ferenc Havasi <havasi@inf.u-szeged.hu>

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Diffstat (limited to 'fs')
-rw-r--r--	fs/jffs2/file.c	20
1 file changed, 14 insertions, 6 deletions
diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index 9f4171213e58..3349db0a7863 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -215,12 +215,20 @@ static int jffs2_commit_write (struct file *filp, struct page *pg,
 	D1(printk(KERN_DEBUG "jffs2_commit_write(): ino #%lu, page at 0x%lx, range %d-%d, flags %lx\n",
 		  inode->i_ino, pg->index << PAGE_CACHE_SHIFT, start, end, pg->flags));
 
-	if (!start && end == PAGE_CACHE_SIZE) {
-		/* We need to avoid deadlock with page_cache_read() in
-		   jffs2_garbage_collect_pass(). So we have to mark the
-		   page up to date, to prevent page_cache_read() from
-		   trying to re-lock it. */
-		SetPageUptodate(pg);
+	if (end == PAGE_CACHE_SIZE) {
+		if (!start) {
+			/* We need to avoid deadlock with page_cache_read() in
+			   jffs2_garbage_collect_pass(). So we have to mark the
+			   page up to date, to prevent page_cache_read() from
+			   trying to re-lock it. */
+			SetPageUptodate(pg);
+		} else {
+			/* When writing out the end of a page, write out the
+			   _whole_ page. This helps to reduce the number of
+			   nodes in files which have many short writes, like
+			   syslog files. */
+			start = aligned_start = 0;
+		}
 	}
 
 	ri = jffs2_alloc_raw_inode();
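
For reference, the decision added by the hunk above can be read in isolation as the following stand-alone sketch. The function widen_write_range and its parameters are hypothetical illustrations, not the kernel code itself; only the idea of resetting start (and aligned_start) to zero when a write ends on the page boundary is taken from the patch.

#include <stddef.h>

/* Sketch only: if the write ends at the last byte of the page but did not
 * begin at byte 0, widen it to cover the whole page. The single node then
 * written out obsoletes the earlier small nodes for this page immediately,
 * instead of leaving them for garbage collection. */
static void widen_write_range(size_t page_size, size_t *start,
			      size_t *aligned_start, size_t end)
{
	if (end == page_size && *start != 0)
		*start = *aligned_start = 0;
}

For example, with 4096-byte pages, a short log append covering bytes 4000-4095 (end == 4096) would be widened so that the data node written out covers bytes 0-4095 of the page.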