author    Andres Salomon <dilinger@queued.net>  2009-02-11 16:27:02 -0500
committer David Woodhouse <David.Woodhouse@intel.com>  2009-02-14 03:59:04 -0500
commit    efab0b5d3eed6aa71f8e3233e4e11774eedc04dc (patch)
tree      79906754c394f0e2e081c150ea4494a3bde1b086 /fs/jffs2/background.c
parent    ab00d68276295a1b4da7ad924a35a3566e9c2698 (diff)
[JFFS2] force the jffs2 GC daemon to behave a bit better
I've noticed some pretty poor behavior on OLPC machines after bootup, when gdm/X are starting. The GCD monopolizes the scheduler (which in turn means it gets to do more NAND I/O), which results in processes taking much longer than they should to start.

As an example, on an OLPC machine going from OFW to a usable X (via auto-login gdm) takes 2m 30s; the majority of this time is consumed by the switch into graphical mode. With this patch, we cut a full 60s off of bootup time. After bootup, things are much snappier as well.

Note that we have seen a CRC node error with this patch that causes the machine to fail to boot, but we've also seen that problem without this patch.

Signed-off-by: Andres Salomon <dilinger@debian.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Diffstat (limited to 'fs/jffs2/background.c')
-rw-r--r--  fs/jffs2/background.c  |  18
1 file changed, 11 insertions, 7 deletions
diff --git a/fs/jffs2/background.c b/fs/jffs2/background.c
index 3cceef4ad2b7..e9580104b6ba 100644
--- a/fs/jffs2/background.c
+++ b/fs/jffs2/background.c
@@ -95,13 +95,17 @@ static int jffs2_garbage_collect_thread(void *_c)
 		spin_unlock(&c->erase_completion_lock);
 
 
-		/* This thread is purely an optimisation. But if it runs when
-		   other things could be running, it actually makes things a
-		   lot worse. Use yield() and put it at the back of the runqueue
-		   every time. Especially during boot, pulling an inode in
-		   with read_inode() is much preferable to having the GC thread
-		   get there first. */
-		yield();
+		/* Problem - immediately after bootup, the GCD spends a lot
+		 * of time in places like jffs2_kill_fragtree(); so much so
+		 * that userspace processes (like gdm and X) are starved
+		 * despite plenty of cond_resched()s and renicing.  Yield()
+		 * doesn't help, either (presumably because userspace and GCD
+		 * are generally competing for a higher latency resource -
+		 * disk).
+		 * This forces the GCD to slow the hell down.  Pulling an
+		 * inode in with read_inode() is much preferable to having
+		 * the GC thread get there first. */
+		schedule_timeout_interruptible(msecs_to_jiffies(50));
 
 		/* Put_super will send a SIGKILL and then wait on the sem.
 		 */
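
For readers unfamiliar with the pattern: yield() only moves the caller to the back of the runqueue, so a busy daemon is rescheduled almost immediately and keeps competing with userspace for the disk, whereas schedule_timeout_interruptible() takes the thread off the runqueue for a fixed interval. The following is a minimal, hypothetical kernel-thread sketch of that throttling pattern under those assumptions; my_gc_thread and my_gc_pass are illustrative names, not part of jffs2.

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/jiffies.h>

/* Placeholder for one unit of background work (e.g. a single GC pass);
 * illustrative only, not the jffs2 implementation. */
static void my_gc_pass(void *data)
{
}

/* Hypothetical daemon loop illustrating the throttling pattern above. */
static int my_gc_thread(void *data)
{
	set_user_nice(current, 10);	/* background work runs at low priority */

	while (!kthread_should_stop()) {
		/* Sleep for 50ms instead of calling yield(): this takes the
		 * thread off the runqueue entirely, so userspace gets the CPU
		 * and the disk while we wait. */
		schedule_timeout_interruptible(msecs_to_jiffies(50));

		if (kthread_should_stop())
			break;

		my_gc_pass(data);
	}
	return 0;
}

The real jffs2 garbage-collect thread additionally has to handle signals (as the context above notes, put_super sends a SIGKILL to stop the daemon), which this sketch omits.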