author	Kentaro Makita <k-makita@np.css.fujitsu.com>	2008-07-24 00:27:13 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2008-07-24 13:47:15 -0400
commit	da3bbdd4632c0171406b2677e31494afa5bde2f8 (patch)
tree	6a1947b2a32041ef0d14044e659ba7d546d55483 /include
parent	3c82d0ce2c4f642b2f24ef98707a030543b06b90 (diff)
fix soft lock up at NFS mount via per-SB LRU-list of unused dentries
[Summary]
Split the LRU list of unused dentries into one list per superblock, to
avoid soft lockups during NFS mounts and when remounting any filesystem.
Previously posted here:
http://lkml.org/lkml/2008/3/5/590

[Description]
- Background
dentry_unused is a list of dentries which are not referenced.  It grows
as references on directories or files are released, and it can become
very long when there is a lot of free memory.

- The problem
When shrink_dcache_sb() is called, it scans the whole dentry_unused list
linearly under spin_lock(); whenever dentry->d_sb differs from the given
superblock, it simply moves on to the next dentry.  This scan is very
expensive when there are many entries, and very ineffective when there
are many superblocks.

In other words, when we need to shrink the unused dentries of one
superblock, we end up scanning the unused dentries of every superblock
in the system.  For example, to unmount a filesystem we may only need to
scan 500 of its dentries, but in doing so we scan 1,000,000 or more
unused dentries belonging to other superblocks.

In our case, when mounting NFS*, shrink_dcache_sb() is called to shrink
the unused dentries of NFS, but it scans 100,000,000 unused dentries on
other superblocks in the system, such as local ext3 filesystems.  I have
heard that NFS mounting took 1 minute on some systems in production.

* : NFS uses a virtual filesystem in the RPC layer, so NFS is affected
by this problem.

100,000,000 is a realistic number on large systems.

A per-superblock LRU of unused dentries can reduce this cost to a
reasonable level.

- How to fix
I found that this problem is solved by David Chinner's "Per-superblock
unused dentry LRU lists V3" (1), so I rebased it and added a fix to
reclaim with fairness, as requested in Andrew Morton's comments (2).

1) http://lkml.org/lkml/2006/5/25/318
2) http://lkml.org/lkml/2006/5/25/320

Split the LRU list of unused dentries so that each superblock has its
own.  NFS mounting then only has to check the dentries of its own
superblock instead of all of them.  However, this split breaks the
global LRU ordering of unused dentries, so I attempt to reclaim unused
dentries fairly by calculating the number of dentries to scan on each
superblock as follows (a rough sketch of this calculation is shown
below):

  number of dentries to scan on this sb =
    count * (number of unused dentries on this sb /
             number of unused dentries in the machine)

- ToDo
- Measure performance numbers and run stress tests.
- If an unmount occurs while prune_dcache() is scanning that same
  superblock, the next superblock cannot be reached because it has gone
  away.  We then restart scanning from the first superblock, which makes
  reclaim unfair towards the first superblock.  I expect this to happen
  very rarely.

- Test results
Results on a 6GB box with an excessive number of unused dentries.

Without the patch:

$ cat /proc/sys/fs/dentry-state
10181835 10180203 45 0 0 0
# mount -t nfs 10.124.60.70:/work/kernel-src nfs

real    0m1.830s
user    0m0.001s
sys     0m1.653s

With this patch:

$ cat /proc/sys/fs/dentry-state
10236610 10234751 45 0 0 0
# mount -t nfs 10.124.60.70:/work/kernel-src nfs

real    0m0.106s
user    0m0.002s
sys     0m0.032s

[akpm@linux-foundation.org: fix comments]
Signed-off-by: Kentaro Makita <k-makita@np.css.fujitsu.com>
Cc: Neil Brown <neilb@suse.de>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: David Chinner <dgc@sgi.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
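As a rough illustration of the fairness formula above (this is a sketch
only, not the code added by the patch; the function name sb_prune_target
and its parameters are assumptions for illustration):

#include <linux/fs.h>

/*
 * Illustrative sketch only -- not the code added by this patch.
 * Spread a request to prune "count" unused dentries across all
 * superblocks in proportion to how many unused dentries each holds.
 */
static int sb_prune_target(struct super_block *sb, int count, int total_unused)
{
	int prune_ratio;

	/* nothing to scan if this sb holds no unused dentries */
	if (total_unused == 0 || sb->s_nr_dentry_unused == 0)
		return 0;

	/*
	 * Use a ratio instead of multiplying first, so the arithmetic
	 * cannot overflow when total_unused is in the hundreds of millions.
	 */
	if (count >= total_unused)
		prune_ratio = 1;
	else
		prune_ratio = total_unused / count;

	/* round up so superblocks with few dentries still make progress */
	return sb->s_nr_dentry_unused / prune_ratio + 1;
}

With numbers from the description, a superblock holding 500 of the
machine's 100,000,000 unused dentries would be asked to give up only its
proportional share of any global prune request, instead of forcing a
scan of the entire global list.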
Diffstat (limited to 'include')
-rw-r--r--	include/linux/fs.h	4
1 files changed, 4 insertions, 0 deletions
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ff54ae4933f3..e5e6a244096c 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1025,6 +1025,7 @@ extern int send_sigurg(struct fown_struct *fown);
 extern struct list_head super_blocks;
 extern spinlock_t sb_lock;
 
+#define sb_entry(list)	list_entry((list), struct super_block, s_list)
 #define S_BIAS (1<<30)
 struct super_block {
 	struct list_head	s_list;		/* Keep this first */
@@ -1058,6 +1059,9 @@ struct super_block {
 	struct list_head	s_more_io;	/* parked for more writeback */
 	struct hlist_head	s_anon;		/* anonymous dentries for (nfs) exporting */
 	struct list_head	s_files;
+	/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
+	struct list_head	s_dentry_lru;	/* unused dentry lru */
+	int			s_nr_dentry_unused;	/* # of dentry on lru */
 
 	struct block_device	*s_bdev;
 	struct mtd_info		*s_mtd;
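The new s_dentry_lru list and s_nr_dentry_unused counter would be kept
up to date whenever a dentry enters or leaves the unused state.  A
minimal sketch of such helpers, assuming functions of roughly this shape
live in fs/dcache.c (they are not part of the include/linux/fs.h hunk
shown above):

#include <linux/dcache.h>
#include <linux/fs.h>
#include <linux/list.h>

/*
 * Minimal sketch, assuming helpers of roughly this shape; not part of
 * the header hunk above.  Callers hold dcache_lock, which is what
 * protects s_dentry_lru and s_nr_dentry_unused per the new comment in
 * struct super_block.
 */
static void dentry_lru_add(struct dentry *dentry)
{
	/* put the dentry on its own superblock's LRU, not a global list */
	list_add(&dentry->d_lru, &dentry->d_sb->s_dentry_lru);
	dentry->d_sb->s_nr_dentry_unused++;
	dentry_stat.nr_unused++;
}

static void dentry_lru_del(struct dentry *dentry)
{
	if (!list_empty(&dentry->d_lru)) {
		list_del_init(&dentry->d_lru);
		dentry->d_sb->s_nr_dentry_unused--;
		dentry_stat.nr_unused--;
	}
}

Keeping the per-superblock count alongside the list is what lets
shrink_dcache_sb() and prune_dcache() work on a single superblock's
dentries, and lets the reclaim fairness calculation weigh each
superblock without walking a global list.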