path: root/fs
Commit message  (Author, Age)
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable  (Linus Torvalds, 2011-04-18)
      * git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable: (24 commits)
        Btrfs: fix free space cache leak
        Btrfs: avoid taking the chunk_mutex in do_chunk_alloc
        Btrfs end_bio_extent_readpage should look for locked bits
        Btrfs: don't force chunk allocation in find_free_extent
        Btrfs: Check validity before setting an acl
        Btrfs: Fix incorrect inode nlink in btrfs_link()
        Btrfs: Check if btrfs_next_leaf() returns error in btrfs_real_readdir()
        Btrfs: Check if btrfs_next_leaf() returns error in btrfs_listxattr()
        Btrfs: make uncache_state unconditional
        btrfs: using cached extent_state in set/unlock combinations
        Btrfs: avoid taking the trans_mutex in btrfs_end_transaction
        Btrfs: fix subvolume mount by name problem when default mount subvolume is set
        fix user annotation in ioctl.c
        Btrfs: check for duplicate iov_base's when doing dio reads
        btrfs: properly handle overlapping areas in memmove_extent_buffer
        Btrfs: fix memory leaks in btrfs_new_inode()
        Btrfs: check for duplicate iov_base's when doing dio reads
        Btrfs: reuse the extent_map we found when calling btrfs_get_extent
        Btrfs: do not use async submit for small DIO io's
        Btrfs: don't split dio bios if we don't have to
        ...
| * Btrfs: fix free space cache leak  (Chris Mason, 2011-04-18)
      The free space caching code was recently reworked to cache all the pages it needed instead of using find_get_page everywhere. One loop was missed though, so it ended up leaking pages. This fixes it to use our page array instead of find_get_page.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * Btrfs: avoid taking the chunk_mutex in do_chunk_alloc  (Josef Bacik, 2011-04-16)
      Every time we try to allocate disk space we check whether we can pre-emptively allocate a chunk, but in the common case we don't allocate anything, so there is no sense in taking the chunk_mutex at all. So instead, if we are allocating a chunk, mark it in the space_info so we don't get two people trying to allocate at the same time. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Reviewed-by: Liu Bo <liubo2009@cn.fujitsu.com>
| * Btrfs end_bio_extent_readpage should look for locked bits  (Chris Mason, 2011-04-16)
      A recent commit caches the extent state in end_bio_extent_readpage, but the search it does should look for locked extents. This fixes things to make it more effective.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * Btrfs: don't force chunk allocation in find_free_extent  (Chris Mason, 2011-04-15)
      find_free_extent likes to allocate in contiguous clusters, which makes writeback faster, especially on SSD storage. As the FS fragments, these clusters become harder to find and we have to decide between allocating a new chunk to make more clusters or giving up on the cluster to allocate from the free space we have.
      Right now it creates too many chunks, and you can end up with a whole FS that is mostly empty metadata chunks. This commit changes the allocation code to be more strict and only allocate new chunks when we've made good use of the chunks we already have.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * Btrfs: Check validity before setting an acl  (Miao Xie, 2011-04-13)
      Call posix_acl_valid() to check if an acl is valid or not.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
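      A minimal sketch of the check this adds, assuming the 2.6.39-era posix_acl_valid(acl) prototype (no user-namespace argument); the surrounding helper is illustrative, not the actual btrfs code:

          #include <linux/posix_acl.h>

          /* Validate a caller-supplied ACL before going on to store it. */
          static int example_set_acl(struct posix_acl *acl)
          {
                  int ret = 0;

                  if (acl) {
                          ret = posix_acl_valid(acl);     /* rejects malformed ACLs */
                          if (ret < 0)
                                  return ret;
                  }
                  /* ... proceed to write the ACL xattr ... */
                  return ret;
          }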
| * Btrfs: Fix incorrect inode nlink in btrfs_link()  (Miao Xie, 2011-04-13)
      The link count of the inode is not decreased if btrfs_set_inode_index() fails.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
| * Btrfs: Check if btrfs_next_leaf() returns error in btrfs_real_readdir()  (Li Zefan, 2011-04-13)
      btrfs_next_leaf() can return -errno, and we should propagate it to userspace. This also simplifies how we walk the btree path.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
| * Btrfs: Check if btrfs_next_leaf() returns error in btrfs_listxattr()  (Li Zefan, 2011-04-13)
      btrfs_next_leaf() can return -errno, and we should propagate it to userspace. This also simplifies how we walk the btree path.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
| * Btrfs: make uncache_state unconditional  (Chris Mason, 2011-04-12)
      The extent_io code can take cached pointers into the extent state trees, and these can make lookups much faster in common operations. The caching only happens when specific bits are set that prevent merging and splitting of the extent state.
      A helper function was added to uncache the state, and it was testing the same set of conditionals. This can leak in very strange corner cases where the lock bit goes away unexpectedly.
      The uncaching should be unconditional. Once we have a ref on the extent we should always give it up.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * Merge branch 'for-chris' of git://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-work into for-linus  (Chris Mason, 2011-04-11)
| | * Btrfs: avoid taking the trans_mutex in btrfs_end_transaction  (Josef Bacik, 2011-04-11)
      I've been working on making our O_DIRECT latency not suck and I noticed we were taking the trans_mutex in btrfs_end_transaction. So to do this we convert num_writers and use_count to atomic_t's and just decrement them in btrfs_end_transaction. Instead of deleting the transaction from the trans list in put_transaction we do that in btrfs_commit_transaction() since that's the only time it actually needs to be removed from the list. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
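      A hedged sketch of the counter pattern described above; the structure and function names are hypothetical, not btrfs's, and only show how atomic_t counters let the end-transaction path avoid a mutex:

          #include <linux/atomic.h>
          #include <linux/slab.h>

          struct example_trans {
                  atomic_t num_writers;   /* writers still inside the transaction */
                  atomic_t use_count;     /* references to this structure */
          };

          static void example_end_transaction(struct example_trans *t)
          {
                  atomic_dec(&t->num_writers);            /* no trans_mutex needed */
                  if (atomic_dec_and_test(&t->use_count))
                          kfree(t);                       /* last reference frees it */
          }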
| | * Btrfs: check for duplicate iov_base's when doing dio reads  (Josef Bacik, 2011-04-08)
      Apparently it is ok to submit a read to an IDE device with the same target page for different offsets. This is what Windows does under qemu. The problem is under DIO we expect them to be different buffers for checksumming reasons, and so this sort of thing will result in checksum errors, when in reality the file is fine. So when reading, check to make sure that all iov bases are different, and if they aren't fall back to buffered mode, since that will work out right. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
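      The shape of that check, written as a self-contained user-space helper (an illustration of the idea, not the btrfs code itself):

          #include <stdbool.h>
          #include <sys/uio.h>

          /* Return false if any two iovec entries point at the same buffer,
           * in which case the caller would fall back to buffered I/O. */
          static bool iov_bases_are_unique(const struct iovec *iov,
                                           unsigned long nr_segs)
          {
                  unsigned long i, j;

                  for (i = 0; i < nr_segs; i++)
                          for (j = i + 1; j < nr_segs; j++)
                                  if (iov[i].iov_base == iov[j].iov_base)
                                          return false;
                  return true;
          }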
| | * Btrfs: reuse the extent_map we found when calling btrfs_get_extent  (Josef Bacik, 2011-04-08)
      In btrfs_get_block_direct we call btrfs_get_extent to look up the extent for the range that we are looking for. If we don't find an extent, btrfs_get_extent will insert an extent_map for that area and mark it as a hole. So it does the job of allocating a new extent map and inserting it into the io tree. But if we're creating a new extent we free it up and redo all of that work. So instead pass the em to btrfs_new_extent_direct(), and if it will work just allocate the disk space and set it up properly and bypass the freeing/allocating of a new extent map and the expensive operation of inserting the thing into the io_tree. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
| | * Btrfs: do not use async submit for small DIO io's  (Josef Bacik, 2011-04-08)
      When looking at our DIO performance Chris said that for small IO's doing the async submit stuff tends to be more overhead than it's worth. With this on top of my other fixes I get about a 17-20% speedup doing a sequential dd with 4k IO's. Basically if we don't have to split the bio for the map length it's small enough to be directly submitted, otherwise go back to the async submit. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
| | * Btrfs: don't split dio bios if we don't have to  (Josef Bacik, 2011-04-08)
      We have been unconditionally allocating a new bio and re-adding all pages from our original bio to the new bio. This is needed if our original bio is larger than our stripe size, but if it is smaller than the stripe size then there is no need to do this. So check the map length and if we are under that then go ahead and submit the original bio. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
| | * Btrfs: do not call btrfs_update_inode in endio if nothing changed  (Josef Bacik, 2011-04-08)
      In the DIO code we often don't update the i_disk_size because the i_size isn't updated until after the DIO is completed, so basically we are allocating a path, doing a search, and updating the inode item for no reason since nothing changed. btrfs_ordered_update_i_size will return 1 if it didn't update i_disk_size, so only run btrfs_update_inode if btrfs_ordered_update_i_size returns 0. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
| | * Btrfs: map the inode item when doing fill_inode_item  (Josef Bacik, 2011-04-08)
      Instead of calling kmap_atomic for everything we set in the inode item, map the entire inode item at the start and unmap it at the end. This makes a sequential dd of 400mb O_DIRECT something like 1% faster. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
| | * Btrfs: only retry transaction reservation once  (Josef Bacik, 2011-04-08)
      I saw a lockup where we kept getting into this start transaction->commit transaction loop because of ENOSPC. The fact is if we fail to make our reservation, we've tried _everything_ several times, so we only need to try and commit the transaction once, and if that doesn't work then we really are out of space and need to just exit. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
| | * Btrfs: deal with the case that we run out of space in the cache  (Josef Bacik, 2011-04-08)
      Currently we don't handle running out of space in the cache, so to fix this we keep track of how far in the cache we are. Then we only dirty the pages if we successfully modify all of them, otherwise if we have an error or run out of space we can just drop them and not worry about the vm writing them out. Thanks,
      Tested-by: Johannes Hirte <johannes.hirte@fem.tu-ilmenau.de>
      Signed-off-by: Josef Bacik <josef@redhat.com>
| * | btrfs: using cached extent_state in set/unlock combinations  (Arne Jansen, 2011-04-11)
      In several places the sequence (set_extent_uptodate, unlock_extent) is used. This leads to a duplicate lookup of the extent state. This patch lets set_extent_uptodate return a cached extent_state which can be passed to unlock_extent_cached. The occurrences of the above sequence are updated to use the cache. Only end_bio_extent_readpage is updated so that it first gets a cached state to pass to the readpage_end_io_hook, as the prototype requires, and the state is later used for set/unlock.
      Signed-off-by: Arne Jansen <sensille@gmx.net>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | Btrfs: fix subvolume mount by name problem when default mount subvolume is set  (Xin Zhong, 2011-04-11)
      We create two subvolumes (meego_root and meego_home) in the btrfs root directory and set meego_root as the default mount subvolume. After we remount btrfs, meego_root is mounted to the top directory by default. Then when we try to mount meego_home (subvol=meego_home) to a subdirectory, it fails. The problem is that when the default mount subvolume is set to meego_root, we search for meego_home inside meego_root but cannot find it.
      The solution is to add a new mount option (subvolrootid) to specify the subvol id of the root and search for the subvol name in it. For our case, we can now use "-o subvolrootid=0,subvol=meego_home" to mount meego_home.
      Detailed information can be found in the MeeGo bugzilla: https://bugs.meego.com/show_bug.cgi?id=15055
      Signed-off-by: Zhong, Xin <xin.zhong@intel.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | fix user annotation in ioctl.c  (Daniel J Blueman, 2011-04-11)
      Fix the address space annotation in ioctl.c.
      Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>

          @@ -2387,7 +2387,7 @@ long btrfs_ioctl_space_info(struct btrfs_root *root, void __user *arg)
                          up_read(&info->groups_sem);
                  }
          -       user_dest = (struct btrfs_ioctl_space_info *)
          +       user_dest = (struct btrfs_ioctl_space_info __user *)
                          (arg + sizeof(struct btrfs_ioctl_space_args));
                  if (copy_to_user(user_dest, dest_orig, alloc_size))

      Reviewed-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | Btrfs: check for duplicate iov_base's when doing dio reads  (Josef Bacik, 2011-04-11)
      Apparently it is ok to submit a read to an IDE device with the same target page for different offsets. This is what Windows does under qemu. The problem is under DIO we expect them to be different buffers for checksumming reasons, and so this sort of thing will result in checksum errors, when in reality the file is fine. So when reading, check to make sure that all iov bases are different, and if they aren't fall back to buffered mode, since that will work out right. Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | btrfs: properly handle overlapping areas in memmove_extent_buffer  (Sergei Trofimovich, 2011-04-11)
      Fix data corruption caused by memcpy() usage on overlapping data. I first observed it as a user-mode linux crash on btrfs. The call trace is the following:

          ------------[ cut here ]------------
          WARNING: at /home/slyfox/linux-2.6/fs/btrfs/extent_io.c:3900 memcpy_extent_buffer+0x1a5/0x219()
          Call Trace:
          6fa39a58: [<601b495e>] _raw_spin_unlock_irqrestore+0x18/0x1c
          6fa39a68: [<60029ad9>] warn_slowpath_common+0x59/0x70
          6fa39aa8: [<60029b05>] warn_slowpath_null+0x15/0x17
          6fa39ab8: [<600efc97>] memcpy_extent_buffer+0x1a5/0x219
          6fa39b48: [<600efd9f>] memmove_extent_buffer+0x94/0x208
          6fa39bc8: [<600becbf>] btrfs_del_items+0x214/0x473
          6fa39c78: [<600ce1b0>] btrfs_delete_one_dir_name+0x7c/0xda
          6fa39cc8: [<600dad6b>] __btrfs_unlink_inode+0xad/0x25d
          6fa39d08: [<600d7864>] btrfs_start_transaction+0xe/0x10
          6fa39d48: [<600dc9ff>] btrfs_unlink_inode+0x1b/0x3b
          6fa39d78: [<600e04bc>] btrfs_unlink+0x70/0xef
          6fa39dc8: [<6007f0d0>] vfs_unlink+0x58/0xa3
          6fa39df8: [<60080278>] do_unlinkat+0xd4/0x162
          6fa39e48: [<600517db>] call_rcu_sched+0xe/0x10
          6fa39e58: [<600452a8>] __put_cred+0x58/0x5a
          6fa39e78: [<6007446c>] sys_faccessat+0x154/0x166
          6fa39ed8: [<60080317>] sys_unlink+0x11/0x13
          6fa39ee8: [<60016b80>] handle_syscall+0x58/0x70
          6fa39f08: [<60021377>] userspace+0x2d4/0x381
          6fa39fc8: [<60014507>] fork_handler+0x62/0x69
          ---[ end trace 70b0ca2ef0266b93 ]---

      http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg09302.html
      Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
      Reviewed-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
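      A self-contained user-space demonstration of the hazard fixed above: memcpy() has undefined behaviour when source and destination overlap, while memmove() copies correctly:

          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
                  char buf[] = "abcdef";

                  /* Overlapping copy: source buf[0..3], destination buf[2..5]. */
                  memmove(buf + 2, buf, 4);       /* safe, prints "ababcd" */
                  /* memcpy(buf + 2, buf, 4) would be undefined behaviour here. */
                  printf("%s\n", buf);
                  return 0;
          }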
| * | Btrfs: fix memory leaks in btrfs_new_inode()  (Yoshinori Sano, 2011-04-11)
      This patch fixes memory leaks in btrfs_new_inode().
      Signed-off-by: Yoshinori Sano <yoshinori.sano@gmail.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
* | proc: do proper range check on readdir offset  (Linus Torvalds, 2011-04-18)
      Rather than pass in some random truncated offset to the pid-related functions, check that the offset is in range up-front. This is just cleanup, the previous commit fixed the real problem.
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | fs: synchronize_rcu when unregister_filesystem success not failure  (Milton Miller, 2011-04-17)
      While checking unregister_filesystem for safety vs extra calls for "ext4: register ext2 and ext3 alias after ext4" I realized that the synchronize_rcu() was called on the error path but not on the success path.
      Cc: stable (2.6.38)
      Signed-off-by: Milton Miller <miltonm@bga.com>
      [ This probably won't really make a difference since commit d863b50ab013 ("vfs: call rcu_barrier after ->kill_sb()"), but it's the right thing to do. - Linus ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs  (Linus Torvalds, 2011-04-15)
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs:
        net/9p: nwname should be an unsigned int
        9p: Fix sparse error
        fs/9p: Fix error reported by coccicheck
        9p: revert tsyncfs related changes
        fs/9p: Use write_inode for data sync on server
        fs/9p: Fix revalidate to return correct value
| * | fs/9p: Fix error reported by coccicheck  (Aneesh Kumar K.V, 2011-04-15)
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com>
      Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
| * | 9p: revert tsyncfs related changes  (Aneesh Kumar K.V, 2011-04-15)
      Now that we use write_inode to flush the server cache related to a fid, we don't need tsyncfs for either the dotl or dotu protocols. For dotu this helps to do a more efficient server flush.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com>
      Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
| * | fs/9p: Use write_inode for data sync on server  (Aneesh Kumar K.V, 2011-04-15)
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com>
      Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
| * | fs/9p: Fix revalidate to return correct value  (Aneesh Kumar K.V, 2011-04-15)
      revalidate should return > 0 on success. Also return 0 on ENOENT to force do_revalidate to return a NULL dentry.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Venkateswararao Jujjuri <jvrao@linux.vnet.ibm.com>
      Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
* | | vfs: Fix absolute RCU path walk failures due to uninitialized seq number  (Tim Chen, 2011-04-15)
      During RCU walk in path_lookupat and path_openat, the rcu lookup frequently failed when looking up an absolute path, because when the root directory was looked up, the seq number was not properly set in nameidata. We dropped out of RCU walk in nameidata_drop_rcu due to a mismatch in the directory entry's seq number, and reverted to the slow path walk that needs to take references.
      With the following patch, I saw a 50% increase in an exim mail server benchmark throughput on a 4-socket Nehalem-EX system.
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: stable@kernel.org (v2.6.38)
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | Merge branch 'linux-next' of git://git.infradead.org/ubifs-2.6  (Linus Torvalds, 2011-04-15)
      * 'linux-next' of git://git.infradead.org/ubifs-2.6:
        UBIFS: fix compilation warnings when compiling with gcc 4.5
        UBIFS: fix oops when R/O file-system is fsync'ed
| * | UBIFS: fix compilation warnings when compiling with gcc 4.5  (Maksim Rayskiy, 2011-04-13)
      When compiling UBIFS with CONFIG_UBIFS_FS_DEBUG not set, gcc-4.5.2 generates a slew of "warning: statement with no effect" on references to non-void functions defined as 0. To avoid these warnings, replace #defines with dummy inline functions.
      Artem: massage the patch a bit, also remove the duplicate 'dbg_check_lprops()' prototype.
      Signed-off-by: Maksim Rayskiy <maksim.rayskiy@gmail.com>
      Acked-by: Mike Frysinger <vapier@gentoo.org>
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
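      A generic sketch of the conversion described above (the names are made up, not UBIFS's): a debug hook that expands to the constant 0 makes a bare call such as "dbg_check_foo(x);" trigger gcc 4.5's "statement with no effect" warning, while an empty static inline stub does not:

          #ifdef CONFIG_EXAMPLE_DEBUG
          int dbg_check_foo(int x);
          #else
          /* old style, warns when called as a bare statement:
           *   #define dbg_check_foo(x) 0
           * new style, same return type, no warning: */
          static inline int dbg_check_foo(int x) { return 0; }
          #endif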
| * | UBIFS: fix oops when R/O file-system is fsync'ed  (Artem Bityutskiy, 2011-04-13)
      This patch fixes a severe UBIFS bug: UBIFS oopses when we 'fsync()' a file on an R/O-mounted file-system. We (the UBIFS authors) incorrectly thought that VFS would not propagate 'fsync()' down to the file-system if it is read-only, but this is not the case.
      It is easy to exploit this bug using the following simple perl script:

          use strict;
          use File::Sync qw(fsync sync);

          die "File path is not specified" if not defined $ARGV[0];
          my $path = $ARGV[0];

          open FILE, "<", "$path" or die "Cannot open $path: $!";
          fsync(\*FILE) or die "cannot fsync $path: $!";
          close FILE or die "Cannot close $path: $!";

      Thanks to Reuben Dowle <Reuben.Dowle@navico.com> for reporting about this issue.
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
      Reported-by: Reuben Dowle <Reuben.Dowle@navico.com>
      Cc: stable@kernel.org
* | | vfs: fix incorrect dentry_update_name_case() BUG_ON() test  (Linus Torvalds, 2011-04-15)
      The case we should be verifying when updating the dentry name is that the _parent_ inode (the directory) semaphore is held, not the semaphore for the dentry itself. It's the directory locking that rename and readdir() etc all care about.
      The comment just above even says so - but then the BUG_ON() still checked the dentry inode itself.
      Very few people noticed, because this helper function really isn't used for very much, so you had to be using ncpfs to ever hit it. I think I should just remove the BUG_ON (the function really has just one user), but let's run with it fixed for a while before getting rid of it entirely.
      Reported-and-tested-by: Bongani Hlope <bonganih@bankservafrica.com>
      Reported-and-tested-by: Bernd Feige <bernd.feige@uniklinik-freiburg.de>
      Cc: Petr Vandrovec <petr@vandrovec.name>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | | ramfs: fix memleak on no-mmu arch  (Bob Liu, 2011-04-14)
      On no-mmu arch, there is a memleak during shmem test. The cause of this memleak is that ramfs_nommu_expand_for_mapping() raised the page refcount to 2, which prevents iput() from freeing those pages.
      The simple test file is like this:

          int main(void)
          {
                  int i;
                  key_t k = ftok("/etc", 42);

                  for (i = 0; i < 100; ++i) {
                          int id = shmget(k, 10000, 0644 | IPC_CREAT);
                          if (id == -1)
                                  printf("shmget error\n");
                          if (shmctl(id, IPC_RMID, NULL) == -1) {
                                  printf("shm rm error\n");
                                  return -1;
                          }
                  }
                  printf("run ok...\n");
                  return 0;
          }

      And the result:

          root:/> free
                        total       used       free     shared    buffers
          Mem:          60320      17912      42408          0          0
          -/+ buffers:             17912      42408
          root:/> shmem
          run ok...
          root:/> free
                        total       used       free     shared    buffers
          Mem:          60320      19096      41224          0          0
          -/+ buffers:             19096      41224
          root:/> shmem
          run ok...
          root:/> free
                        total       used       free     shared    buffers
          Mem:          60320      20296      40024          0          0
          -/+ buffers:             20296      40024
          ...

      After this patch the test result is (no memleak anymore):

          root:/> free
                        total       used       free     shared    buffers
          Mem:          60320      16668      43652          0          0
          -/+ buffers:             16668      43652
          root:/> shmem
          run ok...
          root:/> free
                        total       used       free     shared    buffers
          Mem:          60320      16668      43652          0          0
          -/+ buffers:             16668      43652

      Signed-off-by: Bob Liu <lliubbo@gmail.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | | fs/fhandle.c: add <linux/personality.h> for ia64  (Jeff Mahoney, 2011-04-14)
      force_o_largefile() on ia64 is defined in <asm/fcntl.h> and requires <linux/personality.h>.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | | brk: COMPAT_BRK: fix detection of randomized brk  (Jiri Kosina, 2011-04-14)
      5520e89 ("brk: fix min_brk lower bound computation for COMPAT_BRK") tried to get the whole logic of brk randomization for legacy (libc5-based) applications finally right.
      It turns out that the way to detect whether brk has actually been randomized in the end or not, introduced by that patch, still doesn't work for those binaries, as reported by Geert:

          : /sbin/init from my old m68k ramdisk exits prematurely.
          :
          : Before the patch:
          :
          : | brk(0x80005c8e) = 0x80006000
          :
          : After the patch:
          :
          : | brk(0x80005c8e) = 0x80005c8e
          :
          : Old libc5 considers brk() to have failed if the return value is not identical to the requested value.

      I don't like it, but currently see no better option than a bit flag in task_struct to catch the CONFIG_COMPAT_BRK && randomize_va_space == 2 case.
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | | fs/partitions/ldm.c: fix oops caused by corrupted partition table  (Timo Warns, 2011-04-14)
      The kernel automatically evaluates partition tables of storage devices. The code for evaluating LDM partitions (in fs/partitions/ldm.c) contains a bug that causes a kernel oops on certain corrupted LDM partitions. A kernel subsystem seems to crash, because, after the oops, the kernel no longer recognizes newly connected storage devices.
      The patch validates the value of vblk_size.
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Timo Warns <warns@pre-sense.de>
      Cc: Eugene Teo <eugeneteo@kernel.sg>
      Cc: Harvey Harrison <harvey.harrison@gmail.com>
      Cc: Richard Russon <rich@flatcap.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6  (Linus Torvalds, 2011-04-12)
      * git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
        cifs: don't allow mmap'ed pages to be dirtied while under writeback (try #3)
        [CIFS] Warn on requesting default security (ntlm) on mount
        [CIFS] cifs: clarify the meaning of tcpStatus == CifsGood
        cifs: wrap received signature check in srv_mutex
        cifs: clean up various nits in unicode routines (try #2)
        cifs: clean up length checks in check2ndT2
        cifs: set ra_pages in backing_dev_info
        cifs: fix broken BCC check in is_valid_oplock_break
        cifs: always do is_path_accessible check in cifs_mount
        various endian fixes to cifs
        Elminate sparse __CHECK_ENDIAN__ warnings on port conversion
        Max share size is too small
        Allow user names longer than 32 bytes
        cifs: replace /proc/fs/cifs/Experimental with a module parm
        cifs: check for private_data before trying to put it
| * | | cifs: don't allow mmap'ed pages to be dirtied while under writeback (try #3)  (Jeff Layton, 2011-04-12)
      This is more or less the same patch as before, but with some merge conflicts fixed up.
      If a process has a dirty page mapped into its page tables, then it has the ability to change it while the client is trying to write the data out to the server. If that happens after the signature has been calculated then that signature will then be wrong, and the server will likely reset the TCP connection.
      This patch adds a page_mkwrite handler for CIFS that simply takes the page lock. Because the page lock is held over the life of writepage and writepages, this prevents the page from becoming writeable until the write call has completed.
      With this, we can also remove the "sign_zero_copy" module option and always inline the pages when writing.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
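      A minimal sketch of such a handler, assuming the 2.6.38/39-era ->page_mkwrite(vma, vmf) prototype; the real cifs handler may do more than this:

          #include <linux/mm.h>

          static int example_page_mkwrite(struct vm_area_struct *vma,
                                          struct vm_fault *vmf)
          {
                  struct page *page = vmf->page;

                  lock_page(page);        /* held until writeback unlocks the page */
                  return VM_FAULT_LOCKED; /* tell the fault path we kept the lock */
          }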
| * | | [CIFS] Warn on requesting default security (ntlm) on mount  (Steve French, 2011-04-11)
      Warn once if default security (ntlm) requested. We will update the default to the stronger security mechanism (ntlmv2) in 2.6.41. Kerberos is also stronger than ntlm, but more servers support ntlmv2 and ntlmv2 does not require an upcall, so ntlmv2 is a better default.
      Reviewed-by: Jeff Layton <jlayton@redhat.com>
      CC: Suresh Jayaraman <sjayaraman@suse.de>
      Reviewed-by: Shirish Pargaonkar <shirishp@us.ibm.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
| * | | [CIFS] cifs: clarify the meaning of tcpStatus == CifsGood  (Steve French, 2011-04-11)
      When the TCP_Server_Info is first allocated and connected, tcpStatus == CifsGood means that the NEGOTIATE_PROTOCOL request has completed and the socket is ready for other calls. cifs_reconnect however sets tcpStatus to CifsGood as soon as the socket is reconnected and the optional RFC1001 session setup is done. We have no clear way to tell the difference between these two states, and we need to know this in order to know whether we can send an echo or not.
      Resolve this by adding a new statusEnum value -- CifsNeedNegotiate. When the socket has been connected but has not yet had a NEGOTIATE_PROTOCOL request done, set it to this value. Once the NEGOTIATE is done, cifs_negotiate_protocol will set tcpStatus to CifsGood.
      This also fixes and cleans the logic in cifs_reconnect and cifs_reconnect_tcon. The old code checked for specific states when what it really wants to know is whether the state has actually changed from CifsNeedReconnect.
      Reported-and-Tested-by: JG <jg@cms.ac>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
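      A sketch of where the new state fits; the existing enumerator names and their order are an assumption about that era's cifsglob.h, shown only to illustrate the distinction being introduced:

          enum statusEnum {
                  CifsNew = 0,
                  CifsGood,               /* NEGOTIATE_PROTOCOL has completed */
                  CifsExiting,
                  CifsNeedReconnect,
                  CifsNeedNegotiate,      /* socket up, NEGOTIATE not yet done */
          };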
| * | | cifs: wrap received signature check in srv_mutex  (Jeff Layton, 2011-04-11)
      While testing my patchset to fix asynchronous writes, I hit a bunch of signature problems when testing with signing on. The problem seems to be that signature checks on receive can be running at the same time as a process that is sending, or even that multiple receives can be checking signatures at the same time, clobbering the same data structures.
      While we're at it, clean up the comments over cifs_calculate_signature and add a note that the srv_mutex should be held when calling this function.
      This patch seems to fix the problems for me, but I'm not clear on whether it's the best approach. If it is, then this should probably go to stable too.
      Cc: stable@kernel.org
      Cc: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
| * | | cifs: clean up various nits in unicode routines (try #2)  (Jeff Layton, 2011-04-11)
      Minor revision to the original patch. Don't abuse the __le16 variable on the stack by casting it to wchar_t and handing it off to char2uni. Declare an actual wchar_t on the stack instead. This fixes a valid sparse warning.
      Fix the spelling of UNI_ASTERISK. Eliminate the unneeded len_remaining variable in cifsConvertToUCS.
      Also, as David Howells points out, we were better off making cifsConvertToUCS *not* use put_unaligned_le16 since it means that we can't optimize the mapped characters at compile time. Switch them instead to use cpu_to_le16, and simply use put_unaligned to set them in the string.
      Reported-and-acked-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
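      A hedged sketch of the change to the mapped characters (illustrative only, not the cifs code; the remapped value is an assumption based on cifs remapping reserved characters into a private-use range):

          #include <linux/types.h>
          #include <asm/byteorder.h>
          #include <asm/unaligned.h>

          /* Compile-time little-endian constant for a remapped '*'. */
          #define EXAMPLE_UNI_ASTERISK    cpu_to_le16('*' + 0xF000)

          static void store_asterisk(__le16 *dst)
          {
                  /* old: put_unaligned_le16('*' + 0xF000, dst); converts at run time */
                  put_unaligned(EXAMPLE_UNI_ASTERISK, dst);  /* constant is already LE */
          }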
| * | | cifs: clean up length checks in check2ndT2  (Jeff Layton, 2011-04-11)
      Thus spake David Howells:

          The code that follows this:

              remaining = total_data_size - data_in_this_rsp;
              if (remaining == 0)
                      return 0;
              else if (remaining < 0) {

          generates better code if you drop the 'remaining' variable and compare the values directly.

      Clean it up per his recommendation...
      Reported-and-acked-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
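      The compare-directly form he suggests, shown as a standalone helper with illustrative names (not the actual check2ndT2 code):

          #include <linux/errno.h>

          static int example_check_lengths(unsigned int total_data_size,
                                           unsigned int data_in_this_rsp)
          {
                  if (total_data_size == data_in_this_rsp)
                          return 0;               /* everything has arrived */
                  if (total_data_size < data_in_this_rsp)
                          return -EINVAL;         /* response claims too much data */
                  return 1;                       /* more responses are expected */
          }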
| * | | cifs: set ra_pages in backing_dev_info  (Jeff Layton, 2011-04-11)
      Commit 522440ed made cifs set backing_dev_info on the mapping attached to new inodes. This change caused a fairly significant read performance regression, as cifs started doing page-sized reads exclusively.
      By virtue of the fact that they're allocated as part of cifs_sb_info by kzalloc, the ra_pages on cifs BDIs get set to 0, which prevents any readahead. This forces the normal read codepaths to use readpage instead of readpages, causing a four-fold increase in the number of read calls with the default rsize.
      Fix it by setting ra_pages in the BDI to the same value as that in the default_backing_dev_info.
      Fixes https://bugzilla.kernel.org/show_bug.cgi?id=31662
      Cc: stable@kernel.org
      Reported-and-Tested-by: Till <till2.schaefer@uni-dortmund.de>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
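      A sketch of the fix as described, assuming the 2.6.38-era default_backing_dev_info symbol; kzalloc() had left ra_pages at 0, which disabled readahead entirely:

          #include <linux/backing-dev.h>

          static void example_setup_bdi(struct backing_dev_info *bdi)
          {
                  /* Copy the system default so normal readahead is possible again. */
                  bdi->ra_pages = default_backing_dev_info.ra_pages;
          }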