path: root/fs
* Merge tag 'tytso-for-linus-20111214' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4 (Linus Torvalds, 2011-12-14)

  * tag 'tytso-for-linus-20111214' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
    ext4: handle EOF correctly in ext4_bio_write_page()
    ext4: remove a wrong BUG_ON in ext4_ext_convert_to_initialized
    ext4: correctly handle pages w/o buffers in ext4_discard_partial_buffers()
    ext4: avoid potential hang in mpage_submit_io() when blocksize < pagesize
    ext4: avoid hangs in ext4_da_should_update_i_disksize()
    ext4: display the correct mount option in /proc/mounts for [no]init_itable
    ext4: Fix crash due to getting bogus eh_depth value on big-endian systems
    ext4: fix ext4_end_io_dio() racing against fsync()

  .. using the new signed tag merge of git that now verifies the gpg signature automatically. Yay. The branchname was just 'dev', which is prettier. I'll tell Ted to use nicer tag names for future cases.
| * ext4: handle EOF correctly in ext4_bio_write_page() (Yongqiang Yang, 2011-12-13)

  We need to zero out the part of a page that lies beyond EOF before setting it uptodate; otherwise a mapped read or write will see non-zero data beyond EOF.

  Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  Cc: stable@kernel.org
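For illustration, a userspace sketch of the idea (the kernel fix zeroes the page tail before marking the page uptodate; DEMO_PAGE_SIZE and the helper below are simplified stand-ins, not ext4 code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define DEMO_PAGE_SIZE 4096u

    /* Zero the part of a page cache page that lies beyond EOF so that a
     * later mmap read cannot observe stale data past the end of the file. */
    static void demo_zero_tail_beyond_eof(unsigned char *page, uint64_t i_size,
                                          uint64_t page_index)
    {
        uint64_t page_start = page_index * DEMO_PAGE_SIZE;

        if (i_size >= page_start + DEMO_PAGE_SIZE)
            return;                                /* page entirely before EOF */
        if (i_size <= page_start) {
            memset(page, 0, DEMO_PAGE_SIZE);       /* page entirely beyond EOF */
            return;
        }
        memset(page + (i_size - page_start), 0,
               (size_t)(page_start + DEMO_PAGE_SIZE - i_size));
    }

    int main(void)
    {
        unsigned char page[DEMO_PAGE_SIZE];
        memset(page, 0xaa, sizeof(page));          /* pretend stale data */

        /* second page of a file whose size is 4096 + 100 bytes */
        demo_zero_tail_beyond_eof(page, 4096 + 100, 1);
        printf("byte 99:  0x%02x (file data, untouched)\n", page[99]);
        printf("byte 100: 0x%02x (beyond EOF, zeroed)\n", page[100]);
        return 0;
    }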
| * ext4: remove a wrong BUG_ON in ext4_ext_convert_to_initialized (Yongqiang Yang, 2011-12-13)

  If a file is fallocated on a hole, map->m_lblk + map->m_len may be greater than ee_block + ee_len.

  Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  Cc: stable@kernel.org
| * ext4: correctly handle pages w/o buffers in ext4_discard_partial_buffers() (Yongqiang Yang, 2011-12-13)

  If a page has been read into memory and never written, it has no buffers, but we should still handle it in truncate or punch hole. The VFS write paths already handle holes correctly, so this patch removes the code that handled holes in the write paths.

  Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  Cc: stable@kernel.org
| * ext4: avoid potential hang in mpage_submit_io() when blocksize < pagesize (Yongqiang Yang, 2011-12-13)

  If there is an unwritten but clean buffer in a page, and a dirty buffer after it, then mpage_submit_io does not write the dirty buffer out. As a result, da_writepages loops forever. This patch fixes the problem by checking the dirty flag.

  Signed-off-by: Yongqiang Yang <xiaoqiangnk@gmail.com>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  Cc: stable@kernel.org
| * ext4: avoid hangs in ext4_da_should_update_i_disksize() (Andrea Arcangeli, 2011-12-13)

  If the pte mapping in generic_perform_write() is unmapped between iov_iter_fault_in_readable() and iov_iter_copy_from_user_atomic(), the "copied" parameter to ->write_end can be zero. ext4 couldn't cope with it with delayed allocations enabled. This skips the i_disksize enlargement logic if copied is zero and no new data was appended to the inode.

  gdb> bt
  #0  0xffffffff811afe80 in ext4_da_should_update_i_disksize (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2467
  #1  ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
  #2  0xffffffff810d97f1 in generic_perform_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value optimized out>, pos=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2440
  #3  generic_file_buffered_write (iocb=<value optimized out>, iov=<value optimized out>, nr_segs=<value optimized out>, pos=0x108000, ppos=0xffff88001e26be40, count=<value optimized out>, written=0x0) at mm/filemap.c:2482
  #4  0xffffffff810db5d1 in __generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, ppos=0xffff88001e26be40) at mm/filemap.c:2600
  #5  0xffffffff810db853 in generic_file_aio_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=<value optimized out>, pos=<value optimized out>) at mm/filemap.c:2632
  #6  0xffffffff811a71aa in ext4_file_write (iocb=0xffff88001e26bde8, iov=0xffff88001e26bec8, nr_segs=0x1, pos=0x108000) at fs/ext4/file.c:136
  #7  0xffffffff811375aa in do_sync_write (filp=0xffff88003f606a80, buf=<value optimized out>, len=<value optimized out>, ppos=0xffff88001e26bf48) at fs/read_write.c:406
  #8  0xffffffff81137e56 in vfs_write (file=0xffff88003f606a80, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x4000, pos=0xffff88001e26bf48) at fs/read_write.c:435
  #9  0xffffffff8113816c in sys_write (fd=<value optimized out>, buf=0x1ec2960 <Address 0x1ec2960 out of bounds>, count=0x4000) at fs/read_write.c:487
  #10 <signal handler called>
  #11 0x00007f120077a390 in __brk_reservation_fn_dmi_alloc__ ()
  #12 0x0000000000000000 in ?? ()
  gdb> print offset
  $22 = 0xffffffffffffffff
  gdb> print idx
  $23 = 0xffffffff
  gdb> print inode->i_blkbits
  $24 = 0xc
  gdb> up
  #1  ext4_da_write_end (file=0xffff88003f606a80, mapping=0xffff88001d3824e0, pos=0x108000, len=0x1000, copied=0x0, page=0xffffea0000d792e8, fsdata=0x0) at fs/ext4/inode.c:2512
  2512          if (ext4_da_should_update_i_disksize(page, end)) {
  gdb> print start
  $25 = 0x0
  gdb> print end
  $26 = 0xffffffffffffffff
  gdb> print pos
  $27 = 0x108000
  gdb> print new_i_size
  $28 = 0x108000
  gdb> print ((struct ext4_inode_info *)((char *)inode-((int)(&((struct ext4_inode_info *)0)->vfs_inode))))->i_disksize
  $29 = 0xd9000
  gdb> down
  2467          for (i = 0; i < idx; i++)
  gdb> print i
  $30 = 0xd44acbee

  This is 100% reproducible with some autonuma development code tuned in a very aggressive manner (not normal way even for knumad) which does "exotic" changes to the ptes. It wouldn't normally trigger, but I don't see why it can't happen normally if the page is added to swap cache in between the two faults, leading to "copied" being zero (which then hangs in ext4). So it should be fixed. It is especially possible with lumpy reclaim (albeit disabled if compaction is enabled), as that would ignore the young bits in the ptes.

  Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  Cc: stable@kernel.org
| * ext4: display the correct mount option in /proc/mounts for [no]init_itable (Theodore Ts'o, 2011-12-12)

  /proc/mounts was showing the mount option [no]init_inode_table when the correct mount option that will be accepted by parse_options() is [no]init_itable.

  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  Cc: stable@kernel.org
| * ext4: Fix crash due to getting bogus eh_depth value on big-endian systems (Paul Mackerras, 2011-12-12)

  Commit 1939dd84b3 ("ext4: cleanup ext4_ext_grow_indepth code") added a reference to ext4_extent_header.eh_depth, but forgot to pass the value read through le16_to_cpu. The result is a crash on big-endian machines, such as this crash on a POWER7 server:

  attempt to access beyond end of device
  sda8: rw=0, want=776392648163376, limit=168558560
  Unable to handle kernel paging request for data at address 0x6b6b6b6b6b6b6bcb
  Faulting instruction address: 0xc0000000001f5f38
  cpu 0x14: Vector: 300 (Data Access) at [c000001bd1aaecf0]
      pc: c0000000001f5f38: .__brelse+0x18/0x60
      lr: c0000000002e07a4: .ext4_ext_drop_refs+0x44/0x80
      sp: c000001bd1aaef70
     msr: 9000000000009032
     dar: 6b6b6b6b6b6b6bcb
   dsisr: 40000000
    current = 0xc000001bd15b8010
    paca    = 0xc00000000ffe4600
      pid   = 19911, comm = flush-8:0
  enter ? for help
  [c000001bd1aaeff0] c0000000002e07a4 .ext4_ext_drop_refs+0x44/0x80
  [c000001bd1aaf090] c0000000002e0c58 .ext4_ext_find_extent+0x408/0x4c0
  [c000001bd1aaf180] c0000000002e145c .ext4_ext_insert_extent+0x2bc/0x14c0
  [c000001bd1aaf2c0] c0000000002e3fb8 .ext4_ext_map_blocks+0x628/0x1710
  [c000001bd1aaf420] c0000000002b2974 .ext4_map_blocks+0x224/0x310
  [c000001bd1aaf4d0] c0000000002b7f2c .mpage_da_map_and_submit+0xbc/0x490
  [c000001bd1aaf5a0] c0000000002b8688 .write_cache_pages_da+0x2c8/0x430
  [c000001bd1aaf720] c0000000002b8b28 .ext4_da_writepages+0x338/0x670
  [c000001bd1aaf8d0] c000000000157280 .do_writepages+0x40/0x90
  [c000001bd1aaf940] c0000000001ea830 .writeback_single_inode+0xe0/0x530
  [c000001bd1aafa00] c0000000001eb680 .writeback_sb_inodes+0x210/0x300
  [c000001bd1aafb20] c0000000001ebc84 .__writeback_inodes_wb+0xd4/0x140
  [c000001bd1aafbe0] c0000000001ebfec .wb_writeback+0x2fc/0x3e0
  [c000001bd1aafce0] c0000000001ed770 .wb_do_writeback+0x2f0/0x300
  [c000001bd1aafdf0] c0000000001ed848 .bdi_writeback_thread+0xc8/0x340
  [c000001bd1aafed0] c0000000000c5494 .kthread+0xb4/0xc0
  [c000001bd1aaff90] c000000000021f48 .kernel_thread+0x54/0x70

  This is due to getting ext_depth(inode) == 0x101 and therefore running off the end of the path array in ext4_ext_drop_refs into the following unallocated structures. This fixes it by adding the necessary le16_to_cpu.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
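For illustration, a small userspace sketch of the byte-order problem; demo_le16_to_cpu() below is a simplified stand-in for the kernel's le16_to_cpu(), with the on-disk eh_depth field taken to be little-endian:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for le16_to_cpu(): assemble a little-endian
     * on-disk 16-bit field into a host value, regardless of host order. */
    static uint16_t demo_le16_to_cpu(const uint8_t b[2])
    {
        return (uint16_t)(b[0] | ((uint16_t)b[1] << 8));
    }

    int main(void)
    {
        /* eh_depth = 1 as stored on disk (little-endian byte order) */
        const uint8_t on_disk[2] = { 0x01, 0x00 };

        /* What a big-endian CPU computes if it uses the raw 16-bit field */
        uint16_t bogus_be_read = (uint16_t)(((uint16_t)on_disk[0] << 8) | on_disk[1]);

        printf("raw read on a big-endian host: 0x%04x (bogus depth)\n", bogus_be_read);
        printf("after the byte swap:           0x%04x (correct depth)\n",
               demo_le16_to_cpu(on_disk));
        return 0;
    }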
| * ext4: fix ext4_end_io_dio() racing against fsync() (Theodore Ts'o, 2011-12-12)

  We need to make sure iocb->private is cleared *before* we put the io_end structure on i_completed_io_list. Otherwise fsync() could potentially run on another CPU and free the iocb structure out from under us.

  Reported-by: Kent Overstreet <koverstreet@google.com>
  Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  Cc: stable@kernel.org
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse (Linus Torvalds, 2011-12-14)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
    fuse: llseek fix race
    fuse: fix llseek bug
    fuse: fix fuse_retrieve
| * | fuse: llseek fix race (Miklos Szeredi, 2011-12-13)

  Fix race between lseek(fd, 0, SEEK_CUR) and read/write. This was fixed in generic code by commit 5b6f1eb97d (vfs: lseek(fd, 0, SEEK_CUR) race condition).

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
| * | fuse: fix llseek bug (Roel Kluin, 2011-12-13)

  The test in fuse_file_llseek() "not SEEK_CUR or not SEEK_SET" always evaluates to true. This was introduced in 3.1 by commit 06222e49 (fs: handle SEEK_HOLE/SEEK_DATA properly in all fs's that define their own llseek) and changed the behavior of SEEK_CUR and SEEK_SET to always retrieve the file attributes. This is a performance regression. Fix the test so that it makes sense.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: stable@vger.kernel.org
  CC: Josef Bacik <josef@redhat.com>
  CC: Al Viro <viro@zeniv.linux.org.uk>
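The broken test is the classic "x != A || x != B" mistake, which is true for every x. A self-contained sketch of the buggy versus intended condition (the function names are illustrative, not the actual fuse code):

    #include <stdio.h>

    /* Buggy form: true for every origin, since no value can equal both
     * SEEK_CUR and SEEK_SET at once, so attributes were always fetched. */
    static int buggy_needs_attr_refresh(int origin)
    {
        return origin != SEEK_CUR || origin != SEEK_SET;
    }

    /* Intended form: only origins that actually need an up-to-date i_size
     * (e.g. SEEK_END) should force a getattr round trip to the server. */
    static int fixed_needs_attr_refresh(int origin)
    {
        return origin != SEEK_CUR && origin != SEEK_SET;
    }

    int main(void)
    {
        const int origins[] = { SEEK_SET, SEEK_CUR, SEEK_END };

        for (size_t i = 0; i < sizeof(origins) / sizeof(origins[0]); i++)
            printf("origin=%d buggy=%d fixed=%d\n", origins[i],
                   buggy_needs_attr_refresh(origins[i]),
                   fixed_needs_attr_refresh(origins[i]));
        return 0;
    }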
| * | fuse: fix fuse_retrieve (Miklos Szeredi, 2011-12-13)

  Fix two bugs in fuse_retrieve():

   - retrieving more than one page would yield repeated instances of the first page
   - if more than FUSE_MAX_PAGES_PER_REQ pages were requested then the request page array would overflow

  fuse_retrieve() was added in 2.6.36 and these bugs had been there since the beginning.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: stable@vger.kernel.org
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2011-12-14)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    fs/ncpfs: fix error paths and goto statements in ncp_fill_super()
    configfs: register_filesystem() called too early
    fuse: register_filesystem() called too early
    ubifs: too early register_filesystem()
    ... and the same kind of leak for mqueue
    procfs: fix a vfsmount longterm reference leak
| * | fs/ncpfs: fix error paths and goto statements in ncp_fill_super() (Djalal Harouni, 2011-12-14)

  The label 'out_bdi' should be followed by bdi_destroy() instead of fput(), which should come after the 'out_fput' label. If bdi_setup_and_register() fails, then jump to the 'out_fput' label instead of the 'out_bdi' one. If fget(data.info_fd) fails, then jump to the previously fixed 'out_bdi' label to call bdi_destroy(); otherwise the bdi object will not be destroyed.

  Compile tested only.

  Signed-off-by: Djalal Harouni <tixxdz@opendz.org>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
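A compilable sketch of the unwind pattern being restored; the helpers are hypothetical stand-ins for fget()/bdi_setup_and_register()/bdi_destroy()/fput(), not the real ncp_fill_super():

    #include <stdio.h>

    /* Hypothetical stand-ins (not the real ncpfs calls); the point is that
     * each error label undoes exactly what was acquired before the failing
     * step, unwinding in reverse order via fall-through. */
    static int  grab_ncp_file(void)   { puts("ncp file grabbed");   return 0; }
    static void put_ncp_file(void)    { puts("ncp file released");  }
    static int  setup_bdi(void)       { puts("bdi registered");     return 0; }
    static void destroy_bdi(void)     { puts("bdi destroyed");      }
    static int  grab_info_file(void)  { puts("info fd grab fails"); return -1; }

    static int demo_fill_super(void)
    {
        int err;

        err = grab_ncp_file();
        if (err)
            goto out;          /* nothing acquired yet */

        err = setup_bdi();
        if (err)
            goto out_fput;     /* only the first file to release */

        err = grab_info_file();
        if (err)
            goto out_bdi;      /* bdi exists and must be destroyed too */

        return 0;

    out_bdi:
        destroy_bdi();         /* falls through to release the file */
    out_fput:
        put_ncp_file();
    out:
        return err;
    }

    int main(void) { return demo_fill_super() ? 1 : 0; }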
| * | configfs: register_filesystem() called too early (Al Viro, 2011-12-13)

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | fuse: register_filesystem() called too early (Al Viro, 2011-12-13)

  same story as with ubifs

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | ubifs: too early register_filesystem() (Al Viro, 2011-12-13)

  doing that before you are ready to handle mount() is a Bad Idea(tm)...

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | procfs: fix a vfsmount longterm reference leak (Al Viro, 2011-12-09)

  kern_mount() doesn't pair with plain mntput()...

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client (Linus Torvalds, 2011-12-13)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
    ceph: add missing spin_unlock at ceph_mdsc_build_path()
    ceph: fix SEEK_CUR, SEEK_SET regression
    crush: fix mapping calculation when force argument doesn't exist
    ceph: use i_ceph_lock instead of i_lock
    rbd: remove buggy rollback functionality
    rbd: return an error when an invalid header is read
    ceph: fix rasize reporting by ceph_show_options
| * | | ceph: add missing spin_unlock at ceph_mdsc_build_path() (Yehuda Sadeh, 2011-12-13)

  one of the paths was missing spin_unlock

  Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
| * | | ceph: fix SEEK_CUR, SEEK_SET regression (Sage Weil, 2011-12-13)

  Commit 06222e491e663dac939f04b125c9dc52126a75c4 got the if wrong so that it always evaluates as true. This is semantically harmless, but makes SEEK_CUR and SEEK_SET needlessly query the server. Rewrite the if to explicitly enumerate the cases where we DO need a valid i_size, to make this code less fragile.

  Reported-by: Roel Kluin <roel.kluin@gmail.com>
  Signed-off-by: Sage Weil <sage@newdream.net>
| * | | ceph: use i_ceph_lock instead of i_lock (Sage Weil, 2011-12-07)

  We have been using i_lock to protect all kinds of data structures in the ceph_inode_info struct, including lists of inodes that we need to iterate over while avoiding races with inode destruction. That requires grabbing a reference to the inode with the list lock protected, but igrab() now takes i_lock to check the inode flags.

  Changing the list lock ordering would be a painful process.

  However, using a ceph-specific i_ceph_lock in the ceph inode instead of i_lock is a simple mechanical change and avoids the ordering constraints imposed by igrab().

  Reported-by: Amon Ott <a.ott@m-privacy.de>
  Signed-off-by: Sage Weil <sage@newdream.net>
| * | | ceph: fix rasize reporting by ceph_show_options (Sage Weil, 2011-12-02)

  Fix typo.

  Reported-by: mowang da <whooya.xxl@gmail.com>
  Signed-off-by: Sage Weil <sage@newdream.net>
* | | | Merge branch 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux (Linus Torvalds, 2011-12-13)

  * 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux:
    writeback: set max_pause to lowest value on zero bdi_dirty
    writeback: permit through good bdi even when global dirty exceeded
    writeback: comment on the bdi dirty threshold
    fs: Make write(2) interruptible by a fatal signal
    writeback: Fix issue on make htmldocs
| * | | writeback: Fix issue on make htmldocs (Marcos Paulo de Souza, 2011-11-29)

  Document the @reason parameter to make "make htmldocs" happy.

  Acked-by: Randy Dunlap <rdunlap@xenotime.net>
  Signed-off-by: Marcos Paulo de Souza <marcos.mage@gmail.com>
  Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
* | | | Merge git://git.samba.org/sfrench/cifs-2.6 (Linus Torvalds, 2011-12-09)

  * git://git.samba.org/sfrench/cifs-2.6:
    cifs: check for NULL last_entry before calling cifs_save_resume_key
    cifs: attempt to freeze while looping on a receive attempt
    cifs: Fix sparse warning when calling cifs_strtoUCS
    CIFS: Add descriptions to the brlock cache functions
| * | | | cifs: check for NULL last_entry before calling cifs_save_resume_key (Jeff Layton, 2011-12-08)

  Prior to commit eaf35b1, cifs_save_resume_key had some NULL pointer checks at the top. It turns out that at least one of those NULL pointer checks is needed after all.

  When the LastNameOffset in a FIND reply appears to be beyond the end of the buffer, CIFSFindFirst and CIFSFindNext will set srch_inf.last_entry to NULL. Since eaf35b1, the code will now oops in this situation.

  Fix this by having the callers check for a NULL last entry pointer before calling cifs_save_resume_key. No change is needed for the call site in cifs_readdir as it's not reachable with a NULL current_entry pointer.

  This should fix:

  https://bugzilla.redhat.com/show_bug.cgi?id=750247

  Cc: stable@vger.kernel.org
  Cc: Christoph Hellwig <hch@infradead.org>
  Reported-by: Adam G. Metzler <adamgmetzler@gmail.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <smfrench@gmail.com>
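A miniature sketch of the caller-side check being reinstated; the structure and function names are illustrative placeholders, not the actual cifs code:

    #include <stddef.h>
    #include <stdio.h>

    /* Miniature of the situation described above: the FIND reply parser
     * leaves last_entry NULL when the server-supplied LastNameOffset points
     * past the end of the buffer, so callers must not use it blindly. */
    struct demo_search_info {
        const char *last_entry;     /* NULL for a malformed FIND reply */
    };

    static void demo_save_resume_key(const char *entry)
    {
        printf("saving resume key from entry \"%s\"\n", entry);
    }

    static void demo_finish_find(const struct demo_search_info *srch_inf)
    {
        if (srch_inf->last_entry)               /* the reinstated NULL check */
            demo_save_resume_key(srch_inf->last_entry);
        else
            puts("malformed reply: skipping resume key");
    }

    int main(void)
    {
        struct demo_search_info good = { .last_entry = "entry" };
        struct demo_search_info bad  = { .last_entry = NULL };

        demo_finish_find(&good);
        demo_finish_find(&bad);
        return 0;
    }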
| * | | | cifs: attempt to freeze while looping on a receive attempt (Jeff Layton, 2011-12-08)

  In the recent overhaul of the demultiplex thread receive path, I neglected to ensure that we attempt to freeze on each pass through the receive loop.

  Reported-and-Tested-by: Woody Suwalski <terraluna977@gmail.com>
  Reported-and-Tested-by: Adam Williamson <awilliam@redhat.com>
  Signed-off-by: Jeff Layton <jlayton@redhat.com>
  Signed-off-by: Steve French <smfrench@gmail.com>
| * | | | cifs: Fix sparse warning when calling cifs_strtoUCS (Steve French, 2011-12-08)

  Fix sparse endian check warning while calling cifs_strtoUCS:

  CHECK   fs/cifs/smbencrypt.c
  fs/cifs/smbencrypt.c:216:37: warning: incorrect type in argument 1 (different base types)
  fs/cifs/smbencrypt.c:216:37:    expected restricted __le16 [usertype] *<noident>
  fs/cifs/smbencrypt.c:216:37:    got unsigned short *<noident>

  Signed-off-by: Steve French <smfrench@gmail.com>
  Acked-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
| * | | | CIFS: Add descriptions to the brlock cache functions (Pavel Shilovsky, 2011-12-08)

  Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
  Signed-off-by: Steve French <sfrench@us.ibm.com>
* | | | | procfs: do not overflow get_{idle,iowait}_time for nohz (Michal Hocko, 2011-12-09)

  Since commit a25cac5198d4 ("proc: Consider NO_HZ when printing idle and iowait times") we report idle/io_wait time also while a CPU is tickless. We rely on the get_{idle,iowait}_time functions to retrieve proper data.

  These functions, however, use usecs_to_cputime to translate microseconds to cputime64_t. This is just an alias to usecs_to_jiffies, which reduces the data type from u64 to unsigned int and also checks whether the given parameter overflows jiffies_to_usecs(MAX_JIFFY_OFFSET) and returns MAX_JIFFY_OFFSET in that case.

  When we overflow depends on CONFIG_HZ, but especially for CONFIG_HZ_300 it is quite low (1431649781), so we are getting MAX_JIFFY_OFFSET for >3000s! until we overflow unsigned int. Just for reference, CONFIG_HZ_100 has an overflow window around 20s, CONFIG_HZ_250 ~8s and CONFIG_HZ_1000 ~2s.

  The resulting bug was that people saw [h]top going mad, reporting 100% CPU usage even though there was basically no CPU load. The reason was simply that /proc/stat stopped reporting idle/io_wait changes (and reported MAX_JIFFY_OFFSET), and so the only change happening was for user/system time.

  Let's use nsecs_to_jiffies64 instead, which doesn't reduce the precision to a 32b type and is much more appropriate for cumulative time values (unlike usecs_to_jiffies, which is intended for timeout calculations).

  Signed-off-by: Michal Hocko <mhocko@suse.cz>
  Tested-by: Artem S. Tashkinov <t.artem@mailcity.com>
  Cc: Dave Jones <davej@redhat.com>
  Cc: Arnd Bergmann <arnd@arndb.de>
  Cc: Alexey Dobriyan <adobriyan@gmail.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
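A rough userspace model of the data-type problem; this is not the kernel helpers themselves (the real usecs_to_jiffies() saturates at MAX_JIFFY_OFFSET rather than wrapping), and HZ=300 is assumed:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DEMO_HZ 300ull

    /* Old path, modelled: the cumulative idle time (a u64 in microseconds)
     * gets squeezed through an "unsigned int" parameter before the jiffies
     * conversion, so it stops being representable after roughly 71 minutes. */
    static uint64_t demo_usecs32_to_jiffies(unsigned int usecs)
    {
        return (uint64_t)usecs * DEMO_HZ / 1000000ull;
    }

    /* New path, modelled on nsecs_to_jiffies64(): keep 64 bits throughout. */
    static uint64_t demo_nsecs64_to_jiffies64(uint64_t nsecs)
    {
        return nsecs * DEMO_HZ / 1000000000ull;
    }

    int main(void)
    {
        uint64_t idle_us = 6000ull * 1000000ull;   /* 6000 s of cumulative idle */

        printf("32-bit usecs path: %" PRIu64 " jiffies (input truncated)\n",
               demo_usecs32_to_jiffies((unsigned int)idle_us));
        printf("64-bit nsecs path: %" PRIu64 " jiffies (expected %llu)\n",
               demo_nsecs64_to_jiffies64(idle_us * 1000ull),
               6000ull * DEMO_HZ);
        return 0;
    }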
* | | | | fs/proc/meminfo.c: fix compilation error (Claudio Scordino, 2011-12-09)

  Fix the error message "directives may not be used inside a macro argument" which appears when the kernel is compiled for the cris architecture.

  Signed-off-by: Claudio Scordino <claudio@evidence.eu.com>
  Cc: Andrea Arcangeli <aarcange@redhat.com>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs (Linus Torvalds, 2011-12-08)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
    Btrfs: drop spin lock when memory alloc fails
    Btrfs: check if the to-be-added device is writable
    Btrfs: try cluster but don't advance in search list
    Btrfs: try to allocate from cluster even at LOOP_NO_EMPTY_SIZE
| * | | | Btrfs: drop spin lock when memory alloc fails (Liu Bo, 2011-12-08)

  Drop the spin lock in convert_extent_bit() when memory allocation fails; otherwise it will deadlock.

  Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: check if the to-be-added device is writable (Li Zefan, 2011-12-08)

  If we call ioctl(BTRFS_IOC_ADD_DEV) directly, we'll succeed in adding a readonly device to a btrfs filesystem, and btrfs will write to that device, emitting kernel errors:

  [ 3109.833692] lost page write due to I/O error on loop2
  [ 3109.833720] lost page write due to I/O error on loop2
  ...

  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: try cluster but don't advance in search list (Alexandre Oliva, 2011-12-08)

  When we find an existing cluster, we switch to its block group as the current block group, possibly skipping multiple blocks in the process. Furthermore, under heavy contention, multiple threads may fail to allocate from a cluster and then release just-created clusters just to proceed to create new ones in a different block group.

  This patch tries to allocate from an existing cluster regardless of its block group, and doesn't switch to that group, instead proceeding to try to allocate a cluster from the group it was iterating before the attempt.

  Signed-off-by: Alexandre Oliva <oliva@lsd.ic.unicamp.br>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: try to allocate from cluster even at LOOP_NO_EMPTY_SIZE (Alexandre Oliva, 2011-12-07)

  If we reach LOOP_NO_EMPTY_SIZE, we won't even try to use a cluster that others might have set up. Odds are that there won't be one, but if someone else succeeded in setting it up, we might as well use it, even if we don't try to set up a cluster again.

  Signed-off-by: Alexandre Oliva <oliva@lsd.ic.unicamp.br>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
* | | | | Merge branch 'for-linus' of git://oss.sgi.com/xfs/xfs (Linus Torvalds, 2011-12-07)

  * 'for-linus' of git://oss.sgi.com/xfs/xfs:
    xfs: fix the logspace waiting algorithm
    xfs: fix nfs export of 64-bit inodes numbers on 32-bit kernels
    xfs: fix allocation length overflow in xfs_bmapi_write()
| * | | | | xfs: fix the logspace waiting algorithm (Christoph Hellwig, 2011-12-06)

  Apply the scheme used in log_regrant_write_log_space to wake up any other threads waiting for log space before the newly added one to log_grant_log_space as well, and factor the code into readable helpers. For each of the queues we have, add two helpers:

   - one to try to wake up all waiting threads. This helper will also be usable by xfs_log_move_tail once we remove the current opportunistic wakeups in it.
   - one to sleep on t_wait until enough log space is available, loosely modelled after Linux waitqueues.

  And use them to reimplement the guts of log_grant_log_space and log_regrant_write_log_space. These two functions now use one and the same algorithm for waiting on log space instead of subtly different ones before, with an option to completely unify them in the near future.

  Also move the filesystem shutdown handling to the common caller given that we had to touch it anyway.

  Based on hard debugging and an earlier patch from Chandra Seetharaman <sekharan@us.ibm.com>.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Chandra Seetharaman <sekharan@us.ibm.com>
  Tested-by: Chandra Seetharaman <sekharan@us.ibm.com>
  Signed-off-by: Ben Myers <bpm@sgi.com>
| * | | | | xfs: fix nfs export of 64-bit inodes numbers on 32-bit kernels (Christoph Hellwig, 2011-12-06)

  The i_ino field in the VFS inode is of type unsigned long and thus can't hold the full 64-bit inode number on 32-bit kernels. We have the full inode number in the XFS inode, so use that one for nfs exports. Note that I've also switched the 32-bit file handle types to it, just to make the code more consistent and copy & paste errors less likely to happen.

  Reported-by: Guoquan Yang <ygq51@hotmail.com>
  Reported-by: Hank Peng <pengxihan@gmail.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Ben Myers <bpm@sgi.com>
| * | | | | xfs: fix allocation length overflow in xfs_bmapi_write() (Dave Chinner, 2011-12-02)

  When testing the new xfstests --large-fs option that does very large file preallocations, this assert was tripped deep in xfs_alloc_vextent():

  XFS: Assertion failed: args->minlen <= args->maxlen, file: fs/xfs/xfs_alloc.c, line: 2239

  The allocation was trying to allocate a zero length extent because the lower 32 bits of the allocation length was zero. The remaining length of the allocation to be done was an exact multiple of 2^32 - the first case I saw was at 496TB remaining to be allocated.

  This turns out to be an overflow when converting the allocation length (a 64 bit quantity) into the extent length to allocate (a 32 bit quantity), and it requires the length to be allocated to be an exact multiple of 2^32 blocks to trip the assert.

  Fix it by limiting the extent length to allocate to MAXEXTLEN.

  Signed-off-by: Dave Chinner <dchinner@redhat.com>
  Signed-off-by: Ben Myers <bpm@sgi.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
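A short arithmetic sketch of the truncation; MAXEXTLEN is modelled here as the 21-bit on-disk extent length limit, and the 4 KiB block size is an assumption of the demo:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DEMO_MAXEXTLEN ((1u << 21) - 1)        /* assumed extent length cap */

    int main(void)
    {
        /* 496 TB left to allocate in 4 KiB blocks: exactly 31 * 2^32 blocks */
        uint64_t remaining = 31ull << 32;

        uint32_t buggy = (uint32_t)remaining;      /* low 32 bits are zero */
        uint32_t fixed = remaining > DEMO_MAXEXTLEN
                             ? DEMO_MAXEXTLEN
                             : (uint32_t)remaining;

        printf("remaining blocks:     %" PRIu64 "\n", remaining);
        printf("truncated 32-bit len: %u (asks for a zero-length extent)\n", buggy);
        printf("clamped len:          %u\n", fixed);
        return 0;
    }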
* | | | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2011-12-07)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    fix apparmor dereferencing potentially freed dentry, sanitize __d_path() API
| * | | | | | fix apparmor dereferencing potentially freed dentry, sanitize __d_path() API (Al Viro, 2011-12-06)

  The __d_path() API is asking for trouble, and in the case of apparmor's d_namespace_path() it is getting just that. The root cause is that when __d_path() misses the root it had been told to look for, it stores the location of the most remote ancestor in *root. Without grabbing references. Sure, at the moment of call it had been pinned down by what we have in *path. And if we raced with umount -l, we could have very well stopped at a vfsmount/dentry that got freed as soon as prepend_path() dropped vfsmount_lock.

  It is safe to compare these pointers with pre-existing (and known to be still alive) vfsmount and dentry, as long as all we are asking is "is it the same address?". Dereferencing is not safe and apparmor ended up stepping into that. d_namespace_path() really wants to examine the place where we stopped, even if it's not connected to our namespace. As the result, it looked at ->d_sb->s_magic of a dentry that might've been already freed by that point. All other callers had been careful enough to avoid that, but it's really a bad interface - it invites that kind of trouble.

  The fix is fairly straightforward, even though it's bigger than I'd like:

   * prepend_path() root argument becomes const.
   * __d_path() is never called with NULL/NULL root. It was a kludge to start with. Instead, we have an explicit function - d_absolute_path(). Same as __d_path(), except that it doesn't get root passed and stops where it stops. apparmor and tomoyo are using it.
   * __d_path() returns NULL on path outside of root. The main caller is show_mountinfo() and that's precisely what we pass root for - to skip those outside the chroot jail. Those who don't want that can (and do) use d_path().
   * __d_path() root argument becomes const. Everyone agrees, I hope.
   * apparmor does *NOT* try to use __d_path() or any of its variants when it sees that path->mnt is an internal vfsmount. In that case it's definitely not mounted anywhere and dentry_path() is exactly what we want there. Handling of sysctl()-triggered weirdness is moved to that place.
   * if apparmor is asked to do pathname relative to chroot jail and __d_path() tells it we're not in that jail, the sucker just calls d_absolute_path() instead. That's the other remaining caller of __d_path(), BTW.
   * seq_path_root() does _NOT_ return -ENAMETOOLONG (it's stupid anyway - the normal seq_file logics will take care of growing the buffer and redoing the call of ->show() just fine). However, if it gets a path not reachable from root, it returns SEQ_SKIP. The only caller is adjusted (i.e. stopped ignoring the return value as it used to do).

  Reviewed-by: John Johansen <john.johansen@canonical.com>
  ACKed-by: John Johansen <john.johansen@canonical.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Cc: stable@vger.kernel.org
* | | | | | | Merge branch 'for-linus' of git://oss.sgi.com/xfs/xfs (Linus Torvalds, 2011-12-02)

  * 'for-linus' of git://oss.sgi.com/xfs/xfs:
    xfs: fix attr2 vs large data fork assert
    xfs: force buffer writeback before blocking on the ilock in inode reclaim
    xfs: validate acl count
| * | | | | xfs: fix attr2 vs large data fork assert (Christoph Hellwig, 2011-11-29)

  With Dmitry's fsstress updates I've seen very reproducible crashes in xfs_attr_shortform_remove because xfs_attr_shortform_bytesfit claims that the attributes would not fit inline into the inode after removing an attribute. It turns out that we were operating on an inode with lots of delalloc extents, and thus an if_bytes value for the data fork that is larger than the biggest possible on-disk storage for it, which utterly confuses the code near the end of xfs_attr_shortform_bytesfit.

  Fix this by always allowing the current attribute fork, like we already do for the attr1 format, given that delalloc conversion will take care of moving either the data or attribute area out of line if it doesn't fit at that point - or making the point moot by merging extents at this point.

  Also document the function better, and clean up some loose bits.

  Reviewed-by: Dave Chinner <dchinner@redhat.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Ben Myers <bpm@sgi.com>
| * | | | | xfs: force buffer writeback before blocking on the ilock in inode reclaim (Christoph Hellwig, 2011-11-29)

  If we are doing synchronous inode reclaim we block the VM from making progress in memory reclaim. So if we encounter a flush-locked inode, promote it in the delwri list and wake up xfsbufd to write it out now. Without this we can get hangs of up to 30 seconds during workloads hitting synchronous inode reclaim.

  The scheme is copied from what we do for dquot reclaims.

  Reported-by: Simon Kirby <sim@hostway.ca>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Tested-by: Simon Kirby <sim@hostway.ca>
  Signed-off-by: Ben Myers <bpm@sgi.com>
| * | | | | xfs: validate acl count (Christoph Hellwig, 2011-11-28)

  This prevents in-memory corruption and possible panics if the on-disk ACL is badly corrupted.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Ben Myers <bpm@sgi.com>
* | | | | | Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2 (Linus Torvalds, 2011-12-01)

  * 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2: (31 commits)
    ocfs2: avoid unaligned access to dqc_bitmap
    ocfs2: Use filemap_write_and_wait() instead of write_inode_now()
    ocfs2: honor O_(D)SYNC flag in fallocate
    ocfs2: Add a missing journal credit in ocfs2_link_credits() -v2
    ocfs2: send correct UUID to cleancache initialization
    ocfs2: Commit transactions in error cases -v2
    ocfs2: make direntry invalid when deleting it
    fs/ocfs2/dlm/dlmlock.c: free kmem_cache_zalloc'd data using kmem_cache_free
    ocfs2: Avoid livelock in ocfs2_readpage()
    ocfs2: serialize unaligned aio
    ocfs2: Implement llseek()
    ocfs2: Fix ocfs2_page_mkwrite()
    ocfs2: Add comment about orphan scanning
    ocfs2: Clean up messages in the fs
    ocfs2/cluster: Cluster up now includes network connections too
    ocfs2/cluster: Add new function o2net_fill_node_map()
    ocfs2/cluster: Fix output in file elapsed_time_in_ms
    ocfs2/dlm: dlmlock_remote() needs to account for remastery
    ocfs2/dlm: Take inflight reference count for remotely mastered resources too
    ocfs2/dlm: Cleanup dlm_wait_for_node_death() and dlm_wait_for_node_recovery()
    ...
| * | | | | | ocfs2: avoid unaligned access to dqc_bitmap (Akinobu Mita, 2011-12-01)

  The dqc_bitmap field of struct ocfs2_local_disk_chunk is 32-bit aligned, but not 64-bit aligned. The dqc_bitmap is accessed by ocfs2_set_bit(), ocfs2_clear_bit(), ocfs2_test_bit(), or ocfs2_find_next_zero_bit(). These are wrapper macros for ext2_*_bit(), which need to take an unsigned long aligned address (though some architectures are able to handle unaligned addresses correctly). So some 64-bit architectures may not be able to access the dqc_bitmap correctly.

  This avoids such unaligned access by using other wrapper functions for ext2_*_bit(). The code is taken from fs/ext4/mballoc.c, which also needs to handle unaligned bitmap access.

  Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
  Acked-by: Joel Becker <jlbec@evilplan.org>
  Cc: Mark Fasheh <mfasheh@suse.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Joel Becker <jlbec@evilplan.org>