| author | Wu Fengguang <fengguang.wu@intel.com> | 2011-04-27 21:05:21 -0400 |
|---|---|---|
| committer | Wu Fengguang <fengguang.wu@intel.com> | 2011-06-07 20:25:20 -0400 |
| commit | 94c3dcbb0b0cdfd82cedd21705424d8044edc42c | |
| tree | f991941a1d5c5f23530e0cc4eb68bfe5d0684b9e /fs/fs-writeback.c | |
| parent | 6e6938b6d3130305a5960c86b1a9b21e58cf6144 | |
writeback: update dirtied_when for synced inode to prevent livelock
Explicitly update .dirtied_when on synced inodes, so that they are no
longer considered for writeback in the next round.
This prevents both of the following livelock schemes:
- while true; do echo data >> f; done
- while true; do touch f; done (in theory)
The exact livelock condition is, during sync(1):
(1) no new inodes are dirtied
(2) an existing inode is being actively dirtied
Under condition (2), the inode is tagged and synced with .nr_to_write=LONG_MAX.
When that finishes, the inode is redirty_tail()ed because it is still dirty
and (.nr_to_write > 0). Under condition (1), redirty_tail() will not update
its ->dirtied_when. The sync work then revisits the inode on the next
queue_io() and finds it eligible again, because its old ->dirtied_when
predates the sync work's start time.
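To make that loop concrete, here is a tiny standalone C model of it; struct fake_inode, eligible_for_sync() and run_sync() are invented for this sketch and none of it is kernel code. It only illustrates the queue_io()-style test "was this inode dirtied before the sync work started?", which keeps re-selecting a constantly redirtied inode until ->dirtied_when is refreshed after each sync:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the relevant inode fields (not the kernel's struct inode). */
struct fake_inode {
	unsigned long dirtied_when;	/* when the inode was dirtied */
	bool dirty;			/* still has dirty data */
};

/* queue_io()-style eligibility: dirtied at or before the sync start time? */
static bool eligible_for_sync(const struct fake_inode *inode,
			      unsigned long sync_start)
{
	return inode->dirtied_when <= sync_start;
}

static void run_sync(bool refresh_dirtied_when)
{
	unsigned long now = 100, sync_start = 100;
	struct fake_inode inode = { .dirtied_when = 50, .dirty = true };
	int passes = 0;

	while (inode.dirty && eligible_for_sync(&inode, sync_start)) {
		/* "sync" the inode; a busy writer redirties it right away */
		now++;
		inode.dirty = true;
		if (refresh_dirtied_when)
			inode.dirtied_when = now;	/* the patched behaviour */
		if (++passes >= 5) {			/* cap the demo loop */
			printf("still eligible after %d passes (livelock)\n",
			       passes);
			return;
		}
	}
	printf("sync finished after %d pass(es)\n", passes);
}

int main(void)
{
	run_sync(false);	/* old behaviour: the inode never drops out */
	run_sync(true);		/* patched behaviour: one pass is enough */
	return 0;
}
```

With refresh_dirtied_when set, the first pass pushes ->dirtied_when past the sync start time, so the inode drops out of this sync and is left to a later writeback round, which is what the hunk below does for WB_SYNC_ALL and tagged_writepages writeback.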
We'll add more aggressive "keep writeback as long as we wrote something"
logic to wb_writeback(). Once that lands, the "use LONG_MAX .nr_to_write"
trick in commit b9543dac5bbc ("writeback: avoid livelocking WB_SYNC_ALL
writeback") will no longer be enough to stop sync livelock.
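For illustration only, the shape of that retry policy is roughly the following standalone sketch; one_writeback_pass() is a placeholder invented here and this is not the actual wb_writeback() code. The point is simply that the loop only stops once a pass writes nothing, so a livelocking inode has to be made ineligible (as this patch does) rather than left to exhaust a large .nr_to_write budget:

```c
#include <stdio.h>

/* Placeholder for one pass over the dirty-inode list; pretends three
 * pages are dirty and writes one of them per call. */
static long one_writeback_pass(void)
{
	static long dirty_pages = 3;

	if (dirty_pages == 0)
		return 0;		/* nothing written this pass */
	dirty_pages--;
	return 1;			/* wrote one page */
}

int main(void)
{
	long total = 0, wrote;

	do {
		wrote = one_writeback_pass();
		total += wrote;
	} while (wrote);	/* keep going as long as something was written */

	printf("wrote %ld pages in total\n", total);
	return 0;
}
```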
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Diffstat (limited to 'fs/fs-writeback.c')
-rw-r--r--	fs/fs-writeback.c	9
1 files changed, 9 insertions, 0 deletions
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 5ed2ce9a28d0..fe190a8b0bc8 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -419,6 +419,15 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	spin_lock(&inode->i_lock);
 	inode->i_state &= ~I_SYNC;
 	if (!(inode->i_state & I_FREEING)) {
+		/*
+		 * Sync livelock prevention. Each inode is tagged and synced in
+		 * one shot. If still dirty, it will be redirty_tail()'ed below.
+		 * Update the dirty time to prevent enqueue and sync it again.
+		 */
+		if ((inode->i_state & I_DIRTY) &&
+		    (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages))
+			inode->dirtied_when = jiffies;
+
 		if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
 			/*
 			 * We didn't write back all the pages.  nfs_writepages()