path: root/fs

* nilfs2: get rid of private conversion macros on bmap key and pointer (Ryusuke Konishi, 2010-07-22)

  This removes nilfs_bmap_key_to_dkey(), nilfs_bmap_dkey_to_key(), nilfs_bmap_ptr_to_dptr(), and nilfs_bmap_dptr_to_ptr() for simplicity.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: verify btree node after reading (Ryusuke Konishi, 2010-07-22)

  This inserts sanity checks right after a btree node is read from disk. This allows early detection of broken btree nodes and helps to narrow down problems due to file system corruption.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: add sanity check in nilfs_btree_add_dirty_buffer (Ryusuke Konishi, 2010-07-22)

  According to the report titled "problem with nilfs_cleanerd" from Łukasz Wójcicki, nilfs_btree_lookup_dirty_buffers or nilfs_btree_add_dirty_buffer hit a memory violation during garbage collection. This could happen if the level field of a given btree node buffer is incorrect, which would be a crucial internal bug. This inserts a sanity check to figure out the problem.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
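
  The fix boils down to rejecting a node whose level field falls outside the valid range before its buffers are gathered for writeback. A minimal sketch of such a check, with illustrative constants standing in for the real nilfs2 level limits (the actual names and values may differ):

      /* Illustrative sketch of a btree-node level sanity check; the limit
       * constants below are assumptions, not copied from the patch. */
      #define EXAMPLE_BTREE_LEVEL_NODE_MIN  1   /* lowest internal-node level */
      #define EXAMPLE_BTREE_LEVEL_MAX       14  /* one past the deepest level */

      static int example_btree_level_is_sane(int level)
      {
              return level >= EXAMPLE_BTREE_LEVEL_NODE_MIN &&
                     level <  EXAMPLE_BTREE_LEVEL_MAX;
      }

  A caller such as the dirty-buffer lookup would then warn and bail out instead of indexing per-level structures with a bogus value.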

* nilfs2: pass remount flag to parse_options (Ryusuke Konishi, 2010-07-22)

  This adds an is_remount argument to the parse_options() function, which obtains mount options from strings. Previously, parse_options did not distinguish whether it was called for a new mount or for a remount, so the caller needed additional verifications outside the function. This allows parse_options to verify options and print messages depending on the context.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: use seq_puts to print mount options without argument (Ryusuke Konishi, 2010-07-22)

  This replaces seq_printf() with seq_puts() in nilfs_show_options for mount options which have no argument.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: add nodiscard mount option (Ryusuke Konishi, 2010-07-22)

  Nilfs has a "discard" mount option which issues discard/TRIM commands to the underlying block device, but it lacks a complementary option and has no way to disable the feature through remount. This adds a "nodiscard" option to resolve this imbalance.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: add barrier mount option (Ryusuke Konishi, 2010-07-22)

  Nilfs enables write barriers by default and has a "nobarrier" mount option to disable this feature. But it lacks the complementary option and has no way to re-enable the feature on remount. This adds a "barrier" option to resolve this imbalance.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
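
  Complementary pairs like "barrier"/"nobarrier" and "discard"/"nodiscard" are normally wired up as two tokens in the same match table that set and clear one mount-option bit, so either state can be selected again at remount time. A rough sketch under assumed names (the token enum, the sbi type, and the nilfs_set_opt/nilfs_clear_opt helpers are used for illustration, not quoted from these patches):

      #include <linux/parser.h>

      enum { Opt_barrier, Opt_nobarrier, Opt_err };

      static const match_table_t example_tokens = {
              {Opt_barrier,   "barrier"},
              {Opt_nobarrier, "nobarrier"},
              {Opt_err,       NULL},
      };

      static int example_parse_one_option(struct nilfs_sb_info *sbi, char *p)
      {
              substring_t args[MAX_OPT_ARGS];

              switch (match_token(p, example_tokens, args)) {
              case Opt_barrier:
                      nilfs_set_opt(sbi, BARRIER);    /* re-enable on remount */
                      return 0;
              case Opt_nobarrier:
                      nilfs_clear_opt(sbi, BARRIER);  /* disable on remount */
                      return 0;
              default:
                      return -EINVAL;
              }
      }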

* nilfs2: do not update log cursor for small change (Ryusuke Konishi, 2010-07-22)

  Super blocks of nilfs are periodically overwritten in order to record the recent log position. This shortens recovery time after an unclean unmount, but the current implementation performs the update even when only a few blocks have changed. If the filesystem gets small changes slowly and continually, the super blocks may be updated excessively. This moderates the issue by skipping the update of the log cursor if it does not cross a segment boundary.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
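
  The underlying test is simple: only rewrite the super blocks when the latest log has moved into a different segment from the one the on-disk cursor already points at. A hypothetical sketch of that decision (the helper name and the use of ns_segnum as the comparison point are assumptions, not taken from the patch):

      #include <linux/types.h>

      /* Hypothetical: skip the super block update unless the newly written
       * log crossed into a different segment than the one currently
       * recorded (ns_segnum is assumed to be that recorded number). */
      static int example_log_cursor_needs_update(const struct the_nilfs *nilfs,
                                                 __u64 new_segnum)
      {
              return new_segnum != nilfs->ns_segnum;
      }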

* nilfs2: implement fallback for super root search (Ryusuke Konishi, 2010-07-22)

  Although nilfs redundantly uses two super blocks and each may point to a different position in the log, the current version of nilfs does not try falling back to the spare super block when it doesn't find any valid log at the position that the primary super block points to. This has been a cause of mount failures due to write order reversals on barrier-less block devices. This inserts fallback code in the error path of the nilfs_search_super_root routine to resolve the mount failure problem.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: add missing error code in comment of nilfs_search_super_root (Ryusuke Konishi, 2010-07-22)

  nilfs_search_super_root can return -ENOMEM, but this error code is not described in its kernel-doc comment. This fixes the discrepancy.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: separate setup of log cursor from init_nilfs (Ryusuke Konishi, 2010-07-22)

  This separates the setup routine for the log cursor from init_nilfs(). The routine, nilfs_store_log_cursor, reads the last position of the log containing a super root and initializes the relevant state on the nilfs object.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: sync super blocks in turns (Jiro SEKIBA, 2010-07-22)

  This will sync the super blocks in turns instead of syncing the duplicate super blocks at the same time. This helps in finding a valid super root when a super block is written to disk before the log is written, which happens when barrier-less block devices are unmounted uncleanly; in that situation, the older super block likely points to a valid log.

  This patch introduces the ns_sbwcount member to the nilfs object and adds the nilfs_sb_will_flip() function. ns_sbwcount counts how many times the super blocks have been written back to disk, and nilfs_sb_will_flip() decides whether a flip is required based on that count, so that the super blocks are synced asymmetrically.

  The following functions are also changed:

  - nilfs_prepare_super(): flips the super blocks according to its argument. The argument is calculated by the nilfs_sb_will_flip() function.
  - nilfs_cleanup_super(): sets the "clean" flag on both super blocks if they point to the same checkpoint.

  To update the information in both super blocks, a caller of nilfs_commit_super must set the information on both of them.

  Signed-off-by: Jiro SEKIBA <jir@unicus.jp>
  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
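
  A rough sketch of the flip decision this scheme implies; the policy shown is assumed for illustration (the patch's real nilfs_sb_will_flip() may use a different rule, and only the ns_sbwcount member is taken from the description above):

      /* Assumed policy: alternate the target copy on every other write-back,
       * so the two super blocks are never both updated before the log they
       * point at has reached the disk. */
      static int example_sb_will_flip(const struct the_nilfs *nilfs)
      {
              return !(nilfs->ns_sbwcount & 1);
      }

  nilfs_prepare_super() would then receive this result and flip the super blocks accordingly, as described above.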

* nilfs2: introduce nilfs_prepare_super (Jiro SEKIBA, 2010-07-22)

  This function checks the validity of the super block pointers. If the first super block is invalid, it will swap the super blocks. The function should be called before any super block information updates. The caller must hold nilfs->ns_sem.

  Signed-off-by: Jiro SEKIBA <jir@unicus.jp>
  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: separate function that updates log position (Ryusuke Konishi, 2010-07-22)

  This moves the section that updates the recent log position stored in the super blocks out of nilfs_commit_super and into a new routine named nilfs_set_log_cursor.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: add nilfs_set_error (Ryusuke Konishi, 2010-07-22)

  This function marks the error state and writes it to the super blocks. This is a preparation for making super block writeback alternate between the two copies.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: add nilfs_cleanup_super (Ryusuke Konishi, 2010-07-22)

  This function writes out the filesystem state to the super blocks so that the same cleanup work can be shared. This is a preparation for making super block writeback alternate between the two copies.

  Cc: Jiro SEKIBA <jir@unicus.jp>
  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: do not update mount time on rw->ro remount (Ryusuke Konishi, 2010-07-22)

  The mount time field in the super block is wrongly updated when nilfs remounts the partition from read-write to read-only. This fixes the issue.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: get rid of ns_free_segments_count (Ryusuke Konishi, 2010-07-22)

  This counter is unused.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: get rid of macros for segment summary information (Ryusuke Konishi, 2010-07-22)

  This removes macros to test segment summary flags and redefines a few relevant macros with inline functions.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: do not use nilfs_segsum_info structure in recovery code (Ryusuke Konishi, 2010-07-22)

  This removes the use of nilfs_segsum_info from the recovery functions for simplicity.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: divide load_segment_summary function (Ryusuke Konishi, 2010-07-22)

  The load_segment_summary function has two distinct roles: getting the summary header of a log, and verifying the consistency of the log. This divides it into two corresponding functions, nilfs_read_log_header and nilfs_validate_log, to clarify the meaning.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: rename nilfs_recover_logical_segments function (Ryusuke Konishi, 2010-07-22)

  The function name of nilfs_recover_logical_segments makes no sense. This changes the name to nilfs_salvage_orphan_logs to clarify the role of the function.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* nilfs2: refactor recovery logic routines (Ryusuke Konishi, 2010-07-22)

  Most functions in the recovery code take an argument of a super block instance or a nilfs_sb_info struct for convenience's sake. This replaces them aggressively with a nilfs object by switching routines that used sb_bread and sb_breadahead over to __bread and __breadahead.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
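
  The substitution works because __bread() and __breadahead() take a block device and block size directly, which the nilfs object carries (the blocksize member is added by the next patch), so recovery helpers no longer need a super_block or nilfs_sb_info argument. A small sketch, with the wrapper name chosen for illustration and the field names assumed from the descriptions here:

      #include <linux/buffer_head.h>

      /* Illustrative wrapper: read a block for recovery using only the
       * nilfs object, instead of sb_bread(sb, blocknr). */
      static struct buffer_head *example_recovery_bread(struct the_nilfs *nilfs,
                                                        sector_t blocknr)
      {
              return __bread(nilfs->ns_bdev, blocknr, nilfs->ns_blocksize);
      }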

* nilfs2: add blocksize member to nilfs object (Ryusuke Konishi, 2010-07-22)

  This stores the blocksize in the nilfs object for the subsequent refactoring of the recovery logic.

  Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>

* CIFS: Fix a malicious redirect problem in the DNS lookup code (David Howells, 2010-07-22)

  Fix the security problem in the CIFS filesystem DNS lookup code in which a malicious redirect could be installed by a random user by simply adding a result record into one of their keyrings with add_key() and then invoking a CIFS CFS lookup [CVE-2010-2524].

  This is done by creating an internal keyring specifically for the caching of DNS lookups. To enforce the use of this keyring, the module init routine creates a set of override credentials with the keyring installed as the thread keyring and instructs request_key() to only install lookup result keys in that keyring. The override is then applied around the call to request_key().

  This has some additional benefits when a kernel service uses this module to request a key:

  (1) The result keys are owned by root, not the user that caused the lookup.

  (2) The result keys don't pop up in the user's keyrings.

  (3) The result keys don't come out of the quota of the user that caused the lookup.

  The keyring can be viewed as root by doing cat /proc/keys:

      2a0ca6c3 I-----     1 perm 1f030000     0     0 keyring   .dns_resolver: 1/4

  It can then be listed with 'keyctl list' by root.

      # keyctl list 0x2a0ca6c3
      1 key in keyring:
      726766307: --alswrv     0     0 dns_resolver: foo.bar.com

  Signed-off-by: David Howells <dhowells@redhat.com>
  Reviewed-and-Tested-by: Jeff Layton <jlayton@redhat.com>
  Acked-by: Steve French <smfrench@gmail.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
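
  The heart of the technique is to run request_key() under override credentials whose thread keyring is a module-private keyring, so lookup results can only land there and a user's pre-seeded key is never consulted. A simplified sketch of that pattern, with error handling dropped and the keyring setup details (flags, the exact keyring_alloc arguments, and the key type symbol) treated as assumptions rather than quotes from the patch:

      #include <linux/cred.h>
      #include <linux/key.h>

      static struct key *example_dns_keyring;       /* private cache keyring */
      static const struct cred *example_dns_cred;   /* override credentials */

      /* Module init (sketch, no error handling): build override credentials
       * that carry the private keyring as the thread keyring. */
      static int __init example_dns_cache_init(void)
      {
              struct cred *cred = prepare_kernel_cred(NULL);

              example_dns_keyring = keyring_alloc(".dns_resolver", 0, 0, cred,
                                                  KEY_ALLOC_NOT_IN_QUOTA, NULL);
              cred->thread_keyring = key_get(example_dns_keyring);
              cred->jit_keyring = KEY_REQKEY_DEFL_THREAD_KEYRING;
              example_dns_cred = cred;
              return 0;
      }

      /* Lookup (sketch): the override is applied only around request_key(),
       * so the result key is linked into the private keyring. */
      static struct key *example_dns_lookup(const char *hostname)
      {
              const struct cred *saved = override_creds(example_dns_cred);
              struct key *rkey = request_key(&key_type_dns_resolver, hostname, NULL);

              revert_creds(saved);
              return rkey;
      }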

* Fix up trivial spelling errors ('taht' -> 'that') (Linus Torvalds, 2010-07-21)

  Pointed out by Lucas who found the new one in a comment in setup_percpu.c. And then I fixed the others that I grepped for.

  Reported-by: Lucas <canolucas@gmail.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client (Linus Torvalds, 2010-07-20)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
      ceph: do not include cap/dentry releases in replayed messages
      ceph: reuse request message when replaying against recovering mds
      ceph: fix creation of ipv6 sockets
      ceph: fix parsing of ipv6 addresses
      ceph: fix printing of ipv6 addrs
      ceph: add kfree() to error path
      ceph: fix leak of mon authorizer
      ceph: fix message revocation

| * ceph: do not include cap/dentry releases in replayed messages (Sage Weil, 2010-07-16)

  Strip the cap and dentry releases from replayed messages. They can cause the shared state to get out of sync because they were generated (with the request message) earlier, and no longer reflect the current client state.

  Signed-off-by: Sage Weil <sage@newdream.net>

| * ceph: reuse request message when replaying against recovering mds (Sage Weil, 2010-07-16)

  Replayed rename operations (after an mds failure/recovery) were broken because the request paths were regenerated from the dentry names, which get mangled when d_move() is called.

  Instead, resend the previous request message when replaying completed operations. Just make sure the REPLAY flag is set and the target ino is filled in.

  This fixes problems with workloads doing renames when the MDS restarts, where the rename operation appears to succeed, but on mds restart then fails (leading to client confusion, app breakage, etc.).

  Signed-off-by: Sage Weil <sage@newdream.net>

| * ceph: fix creation of ipv6 sockets (Sage Weil, 2010-07-09)

  Use the address family from the peer address instead of assuming IPv4.

  Signed-off-by: Sage Weil <sage@newdream.net>

| * ceph: fix parsing of ipv6 addresses (Sage Weil, 2010-07-09)

  Check for brackets around the ipv6 address to avoid ambiguity with the port number.

  Signed-off-by: Sage Weil <sage@newdream.net>

| * ceph: fix printing of ipv6 addrs (Sage Weil, 2010-07-08)

  The buffer was too small. Make it bigger, use snprintf(), put brackets around the ipv6 address to avoid mixing it up with the :port, and use the ever-so-handy %pI[46] formats.

  Signed-off-by: Sage Weil <sage@newdream.net>
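
  For reference, a minimal sketch of that formatting approach; the kernel's %pI6 printf extension is real, while the helper name and buffer size are illustrative:

      #include <linux/in6.h>
      #include <linux/kernel.h>

      #define EXAMPLE_ADDR_STR_LEN 64   /* roomy enough for "[ipv6]:port" */

      static const char *example_pr_addr(char *buf, size_t len,
                                         const struct sockaddr_in6 *in6)
      {
              /* Brackets keep the address visually separate from the port. */
              snprintf(buf, len, "[%pI6]:%u",
                       &in6->sin6_addr, (unsigned int)ntohs(in6->sin6_port));
              return buf;
      }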

| * ceph: add kfree() to error path (Dan Carpenter, 2010-07-08)

  We leak a "pi" on this error path.

  Signed-off-by: Dan Carpenter <error27@gmail.com>
  Signed-off-by: Sage Weil <sage@newdream.net>

| * ceph: fix leak of mon authorizer (Sage Weil, 2010-07-05)

  Fix leak of a struct ceph_buffer on umount.

  Signed-off-by: Sage Weil <sage@newdream.net>

| * ceph: fix message revocation (Sage Weil, 2010-07-05)

  A message can be on a queue (pending or sent), or out_msg (sending), or both. We were assuming that if it's not on a queue it couldn't be out_msg, but that was false in the case of lossy connections like the OSD. Fix ceph_con_revoke() to treat these cases independently.

  Also, fix the out_kvec_is_message check to only trigger if we are currently sending _this_ message.

  This fixes a GPF in tcp_sendpage, triggered by OSD restarts.

  Signed-off-by: Sage Weil <sage@newdream.net>

* | Merge branch 'shrinker' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/xfsdev (Linus Torvalds, 2010-07-19)

  * 'shrinker' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/xfsdev:
      xfs: track AGs with reclaimable inodes in per-ag radix tree
      xfs: convert inode shrinker to per-filesystem contexts
      mm: add context argument to shrinker callback

| * | xfs: track AGs with reclaimable inodes in per-ag radix tree (Dave Chinner, 2010-07-19)

  https://bugzilla.kernel.org/show_bug.cgi?id=16348

  When the filesystem grows to a large number of allocation groups, the summing of reclaimable inodes gets expensive. In many cases, most AGs won't have any reclaimable inodes and so we are wasting CPU time aggregating over these AGs. This is particularly important for the inode shrinker that gets called frequently under memory pressure.

  To avoid the overhead, track AGs with reclaimable inodes in the per-ag radix tree so that we can find all the AGs with reclaimable inodes via a simple gang tag lookup. This involves setting the tag when the first reclaimable inode is tracked in the AG, and removing the tag when the last reclaimable inode is removed from the tree. Then the summation process becomes a loop walking the radix tree summing AGs with the reclaim tag set.

  This significantly reduces the overhead of scanning - a 6400 AG filesystem now only uses about 25% of a cpu in kswapd while slab reclaim progresses, instead of being permanently stuck at 100% CPU and making little progress. Clean filesystems will see no overhead, and the overhead only increases linearly with the number of dirty AGs.

  Signed-off-by: Dave Chinner <dchinner@redhat.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
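
  A sketch of the tagging pattern this describes, using the kernel's radix-tree tag API (which is real) with a stand-in mount structure and an assumed tag index; locking and reference counting are omitted:

      #include <linux/radix-tree.h>

      /* Stand-in for the per-filesystem structure holding the per-AG tree. */
      struct example_mount {
              struct radix_tree_root perag_tree;   /* indexed by AG number */
      };

      #define EXAMPLE_RECLAIM_TAG 0   /* assumed tag index for illustration */

      /* Set when the first reclaimable inode in an AG is tracked. */
      static void example_set_reclaim_tag(struct example_mount *mp, unsigned long agno)
      {
              radix_tree_tag_set(&mp->perag_tree, agno, EXAMPLE_RECLAIM_TAG);
      }

      /* Cleared when the last reclaimable inode leaves the AG. */
      static void example_clear_reclaim_tag(struct example_mount *mp, unsigned long agno)
      {
              radix_tree_tag_clear(&mp->perag_tree, agno, EXAMPLE_RECLAIM_TAG);
      }

      /* The summation/scan loop then visits only tagged AGs via
       * radix_tree_gang_lookup_tag() instead of iterating every AG. */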

| * | xfs: convert inode shrinker to per-filesystem contexts (Dave Chinner, 2010-07-19)

  Now that the shrinker passes us a context, wire up a shrinker context per filesystem. This allows us to remove the global mount list and the locking problems that it introduced. It also means that a shrinker call does not need to traverse clean filesystems before finding a filesystem with reclaimable inodes. This significantly reduces scanning overhead when lots of filesystems are present.

  Signed-off-by: Dave Chinner <dchinner@redhat.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>

| * | mm: add context argument to shrinker callback (Dave Chinner, 2010-07-19)

  The current shrinker implementation requires the registered callback to have global state to work from. This makes it difficult to shrink caches that are not global (e.g. per-filesystem caches). Pass the shrinker structure to the callback so that users can embed the shrinker structure in the context the shrinker needs to operate on and get back to it in the callback via container_of().

  Signed-off-by: Dave Chinner <dchinner@redhat.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
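
  The container_of() pattern this enables looks roughly as follows; the callback signature (shrinker pointer, number to scan, gfp mask) matches the one this change introduces, while the per-filesystem structure and function names are illustrative:

      #include <linux/mm.h>   /* struct shrinker, register_shrinker(), DEFAULT_SEEKS */

      /* Stand-in for a per-filesystem context that owns a reclaimable cache. */
      struct example_fs_context {
              struct shrinker inode_shrinker;
              /* ... per-filesystem reclaim state ... */
      };

      static int example_shrink(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
      {
              struct example_fs_context *ctx =
                      container_of(shrink, struct example_fs_context, inode_shrinker);

              /* Scan/free up to nr_to_scan objects belonging to ctx here,
               * then report how many reclaimable objects remain. */
              return 0;
      }

      static void example_register(struct example_fs_context *ctx)
      {
              ctx->inode_shrinker.shrink = example_shrink;
              ctx->inode_shrinker.seeks = DEFAULT_SEEKS;
              register_shrinker(&ctx->inode_shrinker);
      }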

* | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable (Linus Torvalds, 2010-07-19)

  * git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
      Btrfs: fix checks in BTRFS_IOC_CLONE_RANGE
      Btrfs: fix CLONE ioctl destination file size expansion to block boundary
      Btrfs: fix split_leaf double split corner case

| * | Btrfs: fix checks in BTRFS_IOC_CLONE_RANGE (Dan Rosenberg, 2010-07-19)

  1. The BTRFS_IOC_CLONE and BTRFS_IOC_CLONE_RANGE ioctls should check whether the donor file is append-only before writing to it.

  2. The BTRFS_IOC_CLONE_RANGE ioctl appears to have an integer overflow that allows a user to specify an out-of-bounds range to copy from the source file (if off + len wraps around). I haven't been able to successfully exploit this, but I'd imagine that a clever attacker could use this to read things he shouldn't. Even if it's not exploitable, it couldn't hurt to be safe.

  Signed-off-by: Dan Rosenberg <dan.j.rosenberg@gmail.com>
  cc: stable@kernel.org
  Signed-off-by: Chris Mason <chris.mason@oracle.com>
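
  The two checks amount to a permission test on the file being written and a wrap-around test on the requested source range. A compact sketch (the function and parameter names are illustrative, not the ioctl's actual code):

      #include <linux/fs.h>

      /* Illustrative pre-checks for a clone-range style operation. */
      static int example_clone_range_checks(struct inode *dst_inode,
                                            u64 off, u64 len, u64 src_size)
      {
              /* 1. Refuse to write into an append-only file. */
              if (IS_APPEND(dst_inode))
                      return -EPERM;

              /* 2. Reject a source range whose end wraps around (off + len
               *    overflows u64) or runs past the source file's size. */
              if (off + len < off || off + len > src_size)
                      return -EINVAL;

              return 0;
      }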

| * | Btrfs: fix CLONE ioctl destination file size expansion to block boundary (Sage Weil, 2010-07-19)

  The CLONE and CLONE_RANGE ioctls round up the range of extents being cloned to the block size when the range to clone extends to the end of file (this is always the case with CLONE). It was then using that offset when extending the destination file's i_size. Fix this by not setting i_size beyond the originally requested ending offset.

  This bug was introduced by a22285a6 (2.6.35-rc1).

  Signed-off-by: Sage Weil <sage@newdream.net>
  Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | Btrfs: fix split_leaf double split corner case (Chris Mason, 2010-07-19)

  split_leaf was not properly balancing leaves when it was forced to split a leaf twice. This commit adds an extra push left and right before forcing the double split in hopes of getting the slot where we want to insert at either the start or end of the leaf.

  If the extra pushes do work, then we are able to avoid splitting twice and we keep the tree properly balanced.

  Signed-off-by: Chris Mason <chris.mason@oracle.com>

* | | [S390] dasd: use correct label location for diag fba disks (Peter Oberparleiter, 2010-07-19)

  Partition boundary calculation fails for DASD FBA disks under the following conditions:

  - disk is formatted with CMS FORMAT with a blocksize of more than 512 bytes
  - all of the disk is reserved to a single CMS file using CMS RESERVE
  - the disk is accessed using the DIAG mode of the DASD driver

  Under these circumstances, the partition detection code tries to read the CMS label block containing partition-relevant information from logical block offset 1, while it is in fact located at physical block offset 1.

  Fix this problem by using the correct CMS label block location depending on the device type as determined by the DASD SENSE ID information.

  Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

* | | Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2 (Linus Torvalds, 2010-07-18)

  * 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2:
      ocfs2: Silence gcc warning in ocfs2_write_zero_page().
      jbd2/ocfs2: Fix block checksumming when a buffer is used in several transactions
      ocfs2/dlm: Remove BUG_ON from migration in the rare case of a down node
      ocfs2: Don't duplicate pages past i_size during CoW.
      ocfs2: tighten up strlen() checking
      ocfs2: Make xattr reflink work with new local alloc reservation.
      ocfs2: make xattr extension work with new local alloc reservation.
      ocfs2: Remove the redundant cpu_to_le64.
      ocfs2/dlm: don't access beyond bitmap size
      ocfs2: No need to zero pages past i_size.
      ocfs2: Zero the tail cluster when extending past i_size.
      ocfs2: When zero extending, do it by page.
      ocfs2: Limit default local alloc size within bitmap range.
      ocfs2: Move orphan scan work to ocfs2_wq.
      fs/ocfs2/dlm: Add missing spin_unlock

| * | | ocfs2: Silence gcc warning in ocfs2_write_zero_page(). (Joel Becker, 2010-07-16)

  ocfs2_write_zero_page() has a loop that won't ever be skipped, but gcc doesn't know that. Set ret=0 just to make gcc happy.

  Signed-off-by: Joel Becker <joel.becker@oracle.com>

| * | | jbd2/ocfs2: Fix block checksumming when a buffer is used in several transactions (Jan Kara, 2010-07-15)

  OCFS2 uses the t_commit trigger to compute and store a checksum of the just-committed blocks. When a buffer has b_frozen_data, the checksum is computed for it instead of b_data, but this can result in an old checksum being written to the filesystem in the following scenario:

  1) transaction1 is opened
  2) handle1 is opened
  3) journal_access(handle1, bh) - This sets jh->b_transaction to transaction1
  4) modify(bh)
  5) journal_dirty(handle1, bh)
  6) handle1 is closed
  7) start committing transaction1, opening transaction2
  8) handle2 is opened
  9) journal_access(handle2, bh) - This copies off b_frozen_data to make it safe for transaction1 to commit. jh->b_next_transaction is set to transaction2.
  10) jbd2_journal_write_metadata() checksums b_frozen_data
  11) the journal correctly writes b_frozen_data to the disk journal
  12) handle2 is closed - There was no dirty call for the bh on handle2, so it is never queued for any more journal operation
  13) Checkpointing finally happens, and it just spools the bh via normal buffer writeback. This will write b_data, which was never triggered on and thus contains a wrong (old) checksum.

  This patch fixes the problem by calling the trigger at the moment data is frozen for journal commit - i.e., either when b_frozen_data is created by do_get_write_access or just before we write a buffer to the log if b_frozen_data does not exist. We also rename the trigger to t_frozen as that better describes when it is called.

  Signed-off-by: Jan Kara <jack@suse.cz>
  Signed-off-by: Mark Fasheh <mfasheh@suse.com>
  Signed-off-by: Joel Becker <joel.becker@oracle.com>

| * | | ocfs2/dlm: Remove BUG_ON from migration in the rare case of a down node (Wengang Wang, 2010-07-15)

  For migration, we are waiting for DLM_LOCK_RES_MIGRATING flag to be set before sending DLM_MIG_LOCKRES_MSG message to the target. We are using dlm_migration_can_proceed() for that purpose. However, if the node is down, dlm_migration_can_proceed() will also return "go ahead". In this rare case, the DLM_LOCK_RES_MIGRATING flag might not be set yet. Remove the BUG_ON() that trips over this condition.

  Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
  Signed-off-by: Joel Becker <joel.becker@oracle.com>

| * | | ocfs2: Don't duplicate pages past i_size during CoW. (Tao Ma, 2010-07-15)

  During CoW, the pages after i_size don't contain valid data, so there's no need to read and duplicate them.

  Signed-off-by: Tao Ma <tao.ma@oracle.com>
  Signed-off-by: Joel Becker <joel.becker@oracle.com>

| * | | ocfs2: tighten up strlen() checking (Dan Carpenter, 2010-07-12)

  This function is only called from one place and it's like this:

      dlm_register_domain(conn->cc_name, dlm_key, &fs_version);

  The "conn->cc_name" is 64 characters long. If strlen(conn->cc_name) were equal to O2NM_MAX_NAME_LEN (64), that would be a bug because strlen() doesn't count the NUL character.

  In fact, if you look at how O2NM_MAX_NAME_LEN is used, it mostly describes 64 character buffers. The only exception is nd_name from struct o2nm_node. Anyway, I looked into it and in this case the domain string comes from osb->uuid_str in ocfs2_setup_osb_uuid(). That's 32 characters and a NUL, which easily fits into O2NM_MAX_NAME_LEN. This patch doesn't change how the code works, but I think it makes the code a little cleaner.

  Signed-off-by: Dan Carpenter <error27@gmail.com>
  Signed-off-by: Joel Becker <joel.becker@oracle.com>
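
  The off-by-one reasoning is that a buffer of O2NM_MAX_NAME_LEN bytes holds at most O2NM_MAX_NAME_LEN - 1 name characters plus the terminating NUL, so a length equal to the limit must already be rejected. A tiny sketch of the tightened check (the constant value is the one quoted above; the helper name is illustrative):

      #include <linux/string.h>

      #define EXAMPLE_O2NM_MAX_NAME_LEN 64   /* value quoted in the message above */

      /* A name of exactly 64 characters would need a 65th byte for its NUL,
       * so only strictly shorter names fit in a 64-byte buffer. */
      static int example_domain_name_fits(const char *domain)
      {
              return strlen(domain) < EXAMPLE_O2NM_MAX_NAME_LEN;
      }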