* GFS2: Eliminate redundant buffer_head manipulation in gfs2_unlink_inode  (Bob Peterson, 2012-11-13)
  Since we now have a dirty_inode that takes care of manipulating the inode buffer and writing from the inode to the buffer, we can eliminate some unnecessary buffer manipulations in gfs2_unlink_inode that are now redundant.
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* GFS2: Use dirty_inode in gfs2_dir_add  (Bob Peterson, 2012-11-13)
  This patch changes the gfs2_dir_add function so that it uses the dirty_inode function (via mark_inode_dirty) rather than manually updating the dinode.
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* GFS2: Fix truncation of journaled data files  (Steven Whitehouse, 2012-11-13)
  This patch fixes an issue relating to not having enough revokes available when truncating journaled data files. In order to ensure that we do not run out, the truncation is broken into separate pieces if it is large enough. Tested using fsx on a journaled data file.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* GFS2: Add Orlov allocator  (Steven Whitehouse, 2012-11-07)
  Just like ext3, this works on the root directory and any directory with the +T flag set. Also, just like ext3, any subdirectory created in one of the just mentioned cases will be allocated to a random resource group (GFS2 equivalent of a block group).
  If you are creating a set of directories, each of which will contain a job running on a different node, then by setting +T on the parent directory before creating the subdirectories, each will land up in a different resource group, and thus resource group contention between nodes will be kept to a minimum.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
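  [Editorial note: as an illustration of the +T setup described above, the flag can be set with "chattr +T <dir>" or with the standard FS_IOC_SETFLAGS ioctl. The sketch below is user-space C; the mount point path is made up, and it assumes gfs2 maps the generic FS_TOPDIR_FL flag the same way ext3 does.]

    /* Minimal sketch: set the 'T' (top of directory hierarchy) flag on a
     * directory, equivalent to "chattr +T /mnt/gfs2/jobs".  The path is
     * purely illustrative. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fs.h>

    int main(void)
    {
        int fd = open("/mnt/gfs2/jobs", O_RDONLY | O_DIRECTORY);
        int flags;

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
            perror("FS_IOC_GETFLAGS");
            return 1;
        }
        flags |= FS_TOPDIR_FL;                  /* the +T flag */
        if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
            perror("FS_IOC_SETFLAGS");
            return 1;
        }
        close(fd);
        return 0;
    }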
* GFS2: Use proper allocation context for new inodes  (Steven Whitehouse, 2012-11-07)
  Rather than using the parent directory's allocation context, this patch allocates the new inode earlier in the process and then uses it to contain all the information required. As a result, we can now use the new inode's own allocation context to allocate it, rather than having to use the parent directory's context. This gives us a lot more flexibility in where the inode is placed on disk.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* GFS2: Add test for resource group congestion status  (Steven Whitehouse, 2012-11-07)
  This patch uses information gathered by the recent glock statistics patch in order to derive a boolean verdict on the congestion status of a resource group. This is then used when making decisions on which resource group to choose during block allocation.
  The aim is to avoid resource groups which are heavily contended by other nodes, while still ensuring locality of access wherever possible.
  Once a reservation has been made in a particular resource group we continue to use that resource group until a new reservation is required. This should help to ensure that we do not change resource groups too often.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
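  [Editorial note: the commit text above describes the decision, not the code. The sketch below is a hypothetical, self-contained C illustration of that decision only; the struct, the is_congested() threshold and the field names are all invented and are not the GFS2 implementation.]

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical per-resource-group view of the glock statistics. */
    struct rgrp_view {
        double lock_rtt_mean;    /* mean time to acquire this rgrp's glock */
        double global_rtt_mean;  /* mean across all rgrps on this node     */
        bool   has_reservation;  /* we already hold a reservation here     */
    };

    /* Illustrative congestion verdict: a glock that takes much longer than
     * average to acquire is probably being fought over by other nodes. */
    static bool is_congested(const struct rgrp_view *rg)
    {
        return rg->lock_rtt_mean > 2.0 * rg->global_rtt_mean;
    }

    /* Prefer (1) the rgrp we already reserved blocks in, (2) the first
     * nearby rgrp that is not congested, (3) any rgrp as a last resort. */
    static struct rgrp_view *pick_rgrp(struct rgrp_view *cand, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (cand[i].has_reservation)
                return &cand[i];
        for (size_t i = 0; i < n; i++)
            if (!is_congested(&cand[i]))
                return &cand[i];
        return n ? &cand[0] : NULL;
    }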
* GFS2: Rename glops go_xmote_th to go_sync  (Bob Peterson, 2012-11-07)
  [Editorial: This is a nit, but has been a minor irritation for a long time:] This patch renames the glops structure member go_xmote_th to go_sync. The functionality is unchanged; it's just for readability.
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* GFS2: Speed up gfs2_rbm_from_block  (Bob Peterson, 2012-11-07)
  This patch is a rewrite of function gfs2_rbm_from_block. Rather than looping to find the right bitmap, the code now does a few simple math calculations. I compared the performance of both algorithms side by side and the new algorithm is noticeably faster. Sample instrumentation output from a "fast" machine:
    5 million calls: millisec spent: Orig: 166 New: 113
    5 million calls: millisec spent: Orig: 189 New: 114
  In addition, I ran postmark (on a somewhat slower CPU) before and after the new algorithm was put in place, and postmark showed a decent improvement.
  Before the new algorithm:
  -------------------------
  Time:
    645 seconds total
    584 seconds of transactions (171 per second)
  Files:
    150087 created (232 per second)
      Creation alone: 100000 files (2083 per second)
      Mixed with transactions: 50087 files (85 per second)
    49995 read (85 per second)
    49991 appended (85 per second)
    150087 deleted (232 per second)
      Deletion alone: 100174 files (7705 per second)
      Mixed with transactions: 49913 files (85 per second)
  Data:
    273.42 megabytes read (434.08 kilobytes per second)
    852.13 megabytes written (1.32 megabytes per second)
  With the new algorithm:
  -----------------------
  Time:
    599 seconds total
    530 seconds of transactions (188 per second)
  Files:
    150087 created (250 per second)
      Creation alone: 100000 files (1886 per second)
      Mixed with transactions: 50087 files (94 per second)
    49995 read (94 per second)
    49991 appended (94 per second)
    150087 deleted (250 per second)
      Deletion alone: 100174 files (6260 per second)
      Mixed with transactions: 49913 files (94 per second)
  Data:
    273.42 megabytes read (467.42 kilobytes per second)
    852.13 megabytes written (1.42 megabytes per second)
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
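  [Editorial note: the message says "a few simple math calculations" without showing them, so here is a hedged user-space illustration of the idea: when the first bitmap in a resource group holds fewer blocks than the rest (it shares its disk block with a larger header), the bitmap index and offset can be computed directly instead of looping. The capacities below are made-up example numbers, not the real GFS2 on-disk sizes.]

    #include <stdio.h>

    /* Example capacities (illustrative only). */
    #define BLOCKS_IN_FIRST_BITMAP   15840u
    #define BLOCKS_PER_LATER_BITMAP  16288u

    struct rbm { unsigned bitmap_index; unsigned offset; };

    /* Old approach, conceptually: loop over the bitmaps subtracting their
     * sizes.  New approach: one compare, one subtract, one divide, one
     * modulo. */
    static struct rbm rbm_from_block(unsigned rel_block)
    {
        struct rbm rbm;

        if (rel_block < BLOCKS_IN_FIRST_BITMAP) {
            rbm.bitmap_index = 0;
            rbm.offset = rel_block;
        } else {
            unsigned rest = rel_block - BLOCKS_IN_FIRST_BITMAP;

            rbm.bitmap_index = 1 + rest / BLOCKS_PER_LATER_BITMAP;
            rbm.offset = rest % BLOCKS_PER_LATER_BITMAP;
        }
        return rbm;
    }

    int main(void)
    {
        struct rbm r = rbm_from_block(40000);

        printf("bitmap %u, offset %u\n", r.bitmap_index, r.offset);
        return 0;
    }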
* GFS2: Review bug traps in glops.c  (Steven Whitehouse, 2012-11-07)
  Two of the bug traps here could really be warnings. The others are converted from BUG() to GLOCK_BUG_ON() since we'll most likely need to know the glock state in order to debug any issues which arise. As a result of this, __dump_glock has to be renamed and is no longer static.
  Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes  (Linus Torvalds, 2012-11-07)
  Pull gfs2 fixes from Steven Whitehouse: "Here are a number of GFS2 bug fixes. There are three from Andy Price which fix various issues spotted by automated code analysis. There are two from Lukas Czerner fixing my mistaken assumptions as to how FITRIM should work. Finally Ben Marzinski has fixed a bug relating to mmap and atime and also a bug relating to a locking issue in the transaction code."
  * git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes:
    GFS2: Test bufdata with buffer locked and gfs2_log_lock held
    GFS2: Don't call file_accessed() with a shared glock
    GFS2: Fix FITRIM argument handling
    GFS2: Require user to provide argument for FITRIM
    GFS2: Clean up some unused assignments
    GFS2: Fix possible null pointer deref in gfs2_rs_alloc
    GFS2: Fix an unchecked error from gfs2_rs_alloc
| * GFS2: Test bufdata with buffer locked and gfs2_log_lock held  (Benjamin Marzinski, 2012-11-07)
    In gfs2_trans_add_bh(), gfs2 was testing if there was a bd attached to the buffer without having the gfs2_log_lock held. It was then assuming it would stay attached for the rest of the function. However, without either the log lock being held or the buffer locked, __gfs2_ail_flush() could detach the bd at any time.
    This patch moves the locking before the test. If there isn't a bd already attached, gfs2 can safely allocate one and attach it before locking. There is no way that the newly allocated bd could be on the ail list, and thus no way for __gfs2_ail_flush() to detach it.
    Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
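    [Editorial note: the bug is the classic "test a shared pointer, then take the lock" race. Below is a minimal user-space analogue with a pthread mutex showing why the test has to happen under the lock and why a freshly allocated, not-yet-published object is safe to prepare outside it. The struct and names are stand-ins, not the gfs2 ones, and the real fix differs in its details.]

    #include <pthread.h>
    #include <stdlib.h>

    struct buffer {
        pthread_mutex_t log_lock;   /* stands in for gfs2_log_lock      */
        void *bd;                   /* attached descriptor, may be NULL */
    };

    /* Racy version (the old shape):
     *
     *     if (buf->bd == NULL) { ... }          <- another thread can attach
     *     pthread_mutex_lock(&buf->log_lock);      or detach bd right here
     *
     * Safe version: prepare a private object, then test and publish it
     * while holding the lock. */
    static void attach_bd(struct buffer *buf)
    {
        void *newbd = malloc(64);   /* not visible to anyone else yet */

        pthread_mutex_lock(&buf->log_lock);
        if (buf->bd == NULL) {
            buf->bd = newbd;        /* publish under the lock */
            newbd = NULL;
        }
        pthread_mutex_unlock(&buf->log_lock);

        free(newbd);                /* lost the race: discard our copy */
    }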
| * GFS2: Don't call file_accessed() with a shared glock  (Benjamin Marzinski, 2012-11-07)
    file_accessed() was being called by gfs2_mmap() with a shared glock. If it needed to update the atime, it was crashing because it dirtied the inode in gfs2_dirty_inode() without holding an exclusive lock. gfs2_dirty_inode() checked if the caller was already holding a glock, but it didn't make sure that the glock was in the exclusive state.
    Now, instead of calling file_accessed() while holding the shared lock in gfs2_mmap(), file_accessed() is called after grabbing and releasing the glock to update the inode. If file_accessed() needs to update the atime, it will grab an exclusive lock in gfs2_dirty_inode(). gfs2_dirty_inode() now also checks to make sure that if the calling process has already locked the glock, it has an exclusive lock.
    Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
| * GFS2: Fix FITRIM argument handling  (Lukas Czerner, 2012-11-07)
    The current implementation in gfs2 treats the FITRIM arguments as if they were in file system block units, which is wrong. The FITRIM arguments (fstrim_range.start, fstrim_range.len and fstrim_range.minlen) are actually in bytes.
    Moreover, checks were missing for a start argument beyond the end of the file system, a len argument smaller than the file system block size, and a minlen argument bigger than the biggest resource group.
    This commit converts the FITRIM arguments to file system blocks and also adds the appropriate checks mentioned above. All the problems were recognised by xfstests 251 and 260.
    Signed-off-by: Lukas Czerner <lczerner@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
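    [Editorial note: a hedged sketch of the byte-to-block conversion and the sanity checks described above. The struct fstrim_range fields (start, len, minlen) are the real UAPI ones; the geometry constants and the -EINVAL choices are placeholders for illustration, not the gfs2 code.]

    #include <errno.h>
    #include <stdint.h>
    #include <linux/fs.h>               /* struct fstrim_range */

    /* Placeholder geometry: 4 KiB blocks, example fs and rgrp sizes. */
    #define BLOCK_SHIFT         12u
    #define BLOCK_SIZE          (1u << BLOCK_SHIFT)
    #define FS_TOTAL_BLOCKS     262144ull   /* example: 1 GiB filesystem */
    #define LARGEST_RGRP_BLKS   65536ull    /* example biggest rgrp      */

    static int fitrim_check_and_convert(const struct fstrim_range *r,
                                        uint64_t *start, uint64_t *len,
                                        uint64_t *minlen)
    {
        /* The ioctl arguments arrive in bytes; work in fs blocks. */
        *start  = r->start >> BLOCK_SHIFT;
        *len    = r->len >> BLOCK_SHIFT;
        *minlen = (r->minlen + BLOCK_SIZE - 1) >> BLOCK_SHIFT;

        if (*start >= FS_TOTAL_BLOCKS)      /* start beyond end of fs   */
            return -EINVAL;
        if (r->len < BLOCK_SIZE)            /* len smaller than a block */
            return -EINVAL;
        if (*minlen > LARGEST_RGRP_BLKS)    /* nothing could ever match */
            return -EINVAL;
        return 0;
    }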
| * GFS2: Require user to provide argument for FITRIM  (Lukas Czerner, 2012-11-07)
    When the fstrim_range argument is not provided by the user in the FITRIM ioctl, we should just return EFAULT and not promote bad behaviour by filling in the structure in the kernel. Let the user deal with it.
    Signed-off-by: Lukas Czerner <lczerner@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
| * GFS2: Clean up some unused assignments  (Andrew Price, 2012-11-07)
    Cleans up two cases where variables were assigned values but then never used again.
    Signed-off-by: Andrew Price <anprice@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
| * GFS2: Fix possible null pointer deref in gfs2_rs_alloc  (Andrew Price, 2012-11-07)
    Despite the return value from kmem_cache_zalloc() being checked, the error wasn't being returned until after a possible null pointer dereference. This patch returns the error immediately, allowing the removal of the error variable.
    Signed-off-by: Andrew Price <anprice@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
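    [Editorial note: the shape of this class of fix, as a small before/after sketch with generic names (not the gfs2 ones).]

    #include <errno.h>
    #include <stdlib.h>

    struct rs { int dummy; };

    /* Buggy pattern: the NULL check only records an error, then the code
     * keeps going and dereferences the pointer before returning it:
     *
     *     res = calloc(1, sizeof(*res));
     *     if (!res)
     *         error = -ENOMEM;
     *     res->dummy = 0;              <- possible NULL dereference
     *     ...
     *     return error;
     *
     * Fixed pattern: return the moment the allocation fails; the error
     * variable disappears entirely. */
    static int rs_alloc(struct rs **out)
    {
        struct rs *res = calloc(1, sizeof(*res));

        if (!res)
            return -ENOMEM;
        *out = res;
        return 0;
    }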
| * GFS2: Fix an unchecked error from gfs2_rs_alloc  (Andrew Price, 2012-11-07)
    Check the return value of gfs2_rs_alloc(ip) and avoid a possible null pointer dereference.
    Signed-off-by: Andrew Price <anprice@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
* | Merge branch 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jdelvare/staging  (Linus Torvalds, 2012-11-07)
    Pull hwmon fixes from Jean Delvare.
    * 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jdelvare/staging:
      hwmon: Fix chip feature table headers
      hwmon: (w83627ehf) Force initial bank selection
| * | hwmon: Fix chip feature table headers  (Jean Delvare, 2012-11-05)
      These got broken by recent patches fixing checkpatch warnings in these drivers. The trick is that the patches themselves looked good, but the source files after applying them do not. That's why I am not a big fan of using tabs inside comments.
      Signed-off-by: Jean Delvare <khali@linux-fr.org>
      Acked-by: Guenter Roeck <linux@roeck-us.net>
| * | hwmon: (w83627ehf) Force initial bank selection  (Jean Delvare, 2012-11-05)
      Don't assume bank 0 is selected at device probe time. This may not be the case. Force bank selection at first register access to guarantee that we read the right registers upon driver loading.
      Signed-off-by: Jean Delvare <khali@linux-fr.org>
      Reviewed-by: Guenter Roeck <linux@roeck-us.net>
      Cc: stable@vger.kernel.org
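      [Editorial note: a hedged sketch of the general technique: cache the selected bank, but initialise the cache to an impossible value so the first register access always programs the bank-select register instead of trusting whatever the firmware left selected. The register number and helper below are invented, not the w83627ehf ones.]

    #include <stdint.h>

    #define BANK_SELECT_REG 0x4e        /* illustrative register number */

    struct chip {
        int current_bank;               /* cached bank; -1 means unknown */
    };

    /* Stand-in for the driver's port I/O write helper. */
    static void reg_write(uint8_t reg, uint8_t val)
    {
        (void)reg;
        (void)val;                      /* real port I/O elided */
    }

    static void chip_init(struct chip *c)
    {
        c->current_bank = -1;           /* don't assume bank 0 is selected */
    }

    static void set_bank(struct chip *c, int bank)
    {
        /* Because current_bank starts at -1, the very first access always
         * writes the bank register explicitly. */
        if (c->current_bank != bank) {
            reg_write(BANK_SELECT_REG, (uint8_t)bank);
            c->current_bank = bank;
        }
    }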
* | | Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux  (Linus Torvalds, 2012-11-06)
    Pull drm fixes from Dave Airlie: "A single radeon typo fix for a regression and two fixes for a regression in the open helper address space stuff."
    * 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
      drm/radeon: fix typo in evergreen_mc_resume()
      drm: set dev_mapping before calling drm_open_helper
      drm: restore open_count if drm_setup fails
| * | | drm/radeon: fix typo in evergreen_mc_resume()  (Alex Deucher, 2012-11-06)
      Add missing index that may have led us to enabling more crtcs than necessary.
      May also fix: https://bugs.freedesktop.org/show_bug.cgi?id=56139
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
| * | | drm: set dev_mapping before calling drm_open_helper  (Ilija Hadzic, 2012-11-06)
      Some drivers (specifically vmwgfx) look at dev_mapping in their open hook, so we have to set dev->dev_mapping earlier in the process.
      Reference: http://lists.freedesktop.org/archives/dri-devel/2012-October/029420.html
      Signed-off-by: Ilija Hadzic <ihadzic@research.bell-labs.com>
      Reported-by: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
| * | | drm: restore open_count if drm_setup fails  (Ilija Hadzic, 2012-11-06)
      If drm_setup (called at first open) fails, the whole open call has failed, so we should not keep the open_count incremented.
      Signed-off-by: Ilija Hadzic <ihadzic@research.bell-labs.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
* | | Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm  (Linus Torvalds, 2012-11-06)
    Pull arm fixes from Russell King: "Not much here again. The two most notable things here are the sched_clock() fix, which was causing problems with the scheduling of threaded IRQs after a suspend event, and the vfp fix, which afaik has only been seen on some older OMAP boards. Nevertheless, both are fairly important fixes."
    * 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
      ARM: 7569/1: mm: uninitialized warning corrections
      ARM: 7567/1: io: avoid GCC's offsettable addressing modes for halfword accesses
      ARM: 7566/1: vfp: fix save and restore when running on pre-VFPv3 and CONFIG_VFPv3 set
      ARM: 7565/1: sched: stop sched_clock() during suspend
| * | ARM: 7569/1: mm: uninitialized warning corrections  (viresh kumar, 2012-11-04)
      The variables here are really not used uninitialized.
      arch/arm/mm/alignment.c: In function 'do_alignment':
      arch/arm/mm/alignment.c:327:15: warning: 'offset.un' may be used uninitialized in this function [-Wmaybe-uninitialized]
      arch/arm/mm/alignment.c:748:21: note: 'offset.un' was declared here
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | ARM: 7567/1: io: avoid GCC's offsettable addressing modes for halfword accesses  (Will Deacon, 2012-10-29)
      Using the 'o' memory constraint in inline assembly can result in GCC generating invalid immediate offsets for memory access instructions with reduced addressing capabilities (i.e. smaller than 12-bit immediate offsets): http://gcc.gnu.org/bugzilla/show_bug.cgi?id=54983
      As there is no constraint to specify the exact addressing mode we need, fall back to using 'Q' exclusively for halfword I/O accesses. This may emit an additional add instruction (using an extra register) in order to construct the address but it will always be accepted by GAS.
      Reported-by: Bastian Hecht <hechtb@googlemail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
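      [Editorial note: the idiom being described, as a simplified sketch (ARM targets only). With the 'Q' constraint GCC must hand the asm a plain register-addressed memory operand, so the emitted strh/ldrh never carries an offset the instruction cannot encode. This is an illustration of the constraint, not the kernel's exact readw/writew code.]

    #include <stdint.h>

    /* Halfword store through a register-only ("Q") memory operand. */
    static inline void my_writew(uint16_t value, volatile uint16_t *addr)
    {
        asm volatile("strh %1, %0"
                     : "+Q" (*addr)
                     : "r" (value));
    }

    /* Halfword load, same constraint on the input side. */
    static inline uint16_t my_readw(const volatile uint16_t *addr)
    {
        uint16_t value;

        asm volatile("ldrh %0, %1"
                     : "=r" (value)
                     : "Q" (*addr));
        return value;
    }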
| * | ARM: 7566/1: vfp: fix save and restore when running on pre-VFPv3 and CONFIG_VFPv3 set  (Paul Walmsley, 2012-10-29)
      After commit 846a136881b8f73c1f74250bf6acfaa309cab1f2 ("ARM: vfp: fix saving d16-d31 vfp registers on v6+ kernels"), the OMAP 2430SDP board started crashing during boot with omap2plus_defconfig:
      [    3.875122] mmcblk0: mmc0:e624 SD04G 3.69 GiB
      [    3.915954]  mmcblk0: p1
      [    4.086639] Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
      [    4.093719] Modules linked in:
      [    4.096954] CPU: 0    Not tainted  (3.6.0-02232-g759e00b #570)
      [    4.103149] PC is at vfp_reload_hw+0x1c/0x44
      [    4.107666] LR is at __und_usr_fault_32+0x0/0x8
      It turns out that the context save/restore fix unmasked a latent bug in commit 5aaf254409f8d58229107b59507a8235b715a960 ("ARM: 6203/1: Make VFPv3 usable on ARMv6"). When CONFIG_VFPv3 is set, but the kernel is booted on a pre-VFPv3 core, the code attempts to save and restore the d16-d31 VFP registers. These are only present on non-D16 VFPv3+, so this results in an undefined instruction exception. The code didn't crash before commit 846a136 because the save and restore code was only touching d0-d15, present on all VFP.
      Fix by implementing a request from Russell King to add a new HWCAP flag that affirmatively indicates the presence of the d16-d31 registers: http://marc.info/?l=linux-arm-kernel&m=135013547905283&w=2 and some feedback from Måns to clarify the name of the HWCAP flag.
      Signed-off-by: Paul Walmsley <paul@pwsan.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@linaro.org>
      Cc: Måns Rullgård <mans.rullgard@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | ARM: 7565/1: sched: stop sched_clock() during suspend  (Felipe Balbi, 2012-10-29)
      The scheduler imposes a requirement on sched_clock(): it must stop the clock during suspend. If we don't do that, any RT thread will be rescheduled in the future, which might cause all sorts of problems.
      This became an issue on OMAP when we converted omap-i2c.c to use threaded IRQs. It turned out that, depending on how much time we spent in suspend, the I2C IRQ thread would end up being rescheduled so far in the future that I2C transfers would time out and, because omap_hsmmc depends on an I2C-connected device to detect if an MMC card is inserted in the slot, our rootfs would just vanish.
      arch/arm/kernel/sched_clock.c already had an optional implementation (sched_clock_needs_suspend()) which would handle the scheduler's requirement properly; what this patch does is simply make that implementation non-optional.
      Note that this has the side-effect that printk timings won't reflect the actual time spent in suspend, so other methods to measure that will have to be used.
      This has been tested with beagleboard XM (OMAP3630) and pandaboard rev A3 (OMAP4430). Suspend to RAM is now working after this patch.
      Thanks to Kevin Hilman for helping out with debugging.
      Acked-by: Kevin Hilman <khilman@ti.com>
      Acked-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Felipe Balbi <balbi@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* | | Linux 3.7-rc4  (Linus Torvalds, 2012-11-04)
* | | Merge tag 'nfs-for-3.7-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs  (Linus Torvalds, 2012-11-03)
    Pull NFS client bugfixes from Trond Myklebust:
    - Fix a bunch of deadlock situations:
      * State recovery can deadlock if we fail to release sequence ids before scheduling the recovery thread.
      * Calling deactivate_super() from an RPC workqueue thread can deadlock because of the call to rpc_shutdown_client.
    - Display the device name correctly in /proc/*/mounts
    - Fix a number of incorrect error return values:
      * When NFSv3 mounts fail due to a timeout.
      * On NFSv4.1 backchannel setup failure
      * On NFSv4 open access checks
    - pnfs_find_alloc_layout() must check the layout pointer for NULL
    - Fix a regression in the legacy DNS resolver
    * tag 'nfs-for-3.7-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
      NFS4: nfs4_opendata_access should return errno
      NFSv4: Initialise the NFSv4.1 slot table highest_used_slotid correctly
      SUNRPC: return proper errno from backchannel_rqst
      NFS: add nfs_sb_deactive_async to avoid deadlock
      nfs: Show original device name verbatim in /proc/*/mount{s,info}
      nfsv3: Make v3 mounts fail with ETIMEDOUTs instead EIO on mountd timeouts
      nfs: Check whether a layout pointer is NULL before free it
      NFS: fix bug in legacy DNS resolver.
      NFSv4: nfs4_locku_done must release the sequence id
      NFSv4.1: We must release the sequence id when we fail to get a session slot
      NFS: Wait for session recovery to finish before returning
| * | | NFS4: nfs4_opendata_access should return errno  (Weston Andros Adamson, 2012-11-02)
      Return errno - not an NFS4ERR_. This worked because NFS4ERR_ACCESS == EACCES.
      Signed-off-by: Weston Andros Adamson <dros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * | | NFSv4: Initialise the NFSv4.1 slot table highest_used_slotid correctly  (Trond Myklebust, 2012-11-01)
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * | | SUNRPC: return proper errno from backchannel_rqst  (Weston Andros Adamson, 2012-11-01)
      The one and only caller (in fs/nfs/nfs4client.c) uses the result as an errno and would have interpreted an error as EPERM.
      Signed-off-by: Weston Andros Adamson <dros@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * | | NFS: add nfs_sb_deactive_async to avoid deadlock  (Weston Andros Adamson, 2012-10-31)
      Use nfs_sb_deactive_async instead of nfs_sb_deactive when in a workqueue context. This avoids a deadlock where rpc_shutdown_client loops forever in a workqueue kworker context, trying to kill all RPC tasks associated with the client, while one or more of these tasks have already been assigned to the same kworker (and will never run rpc_exit_task).
      This approach is needed because RPC tasks that have already been assigned to a kworker by queue_work cannot be canceled, as explained in the comment for workqueue.c:insert_wq_barrier.
      Signed-off-by: Weston Andros Adamson <dros@netapp.com>
      [Trond: add module_get/put.]
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * | | nfs: Show original device name verbatim in /proc/*/mount{s,info}  (Ben Hutchings, 2012-10-31)
      Since commit c7f404b ('vfs: new superblock methods to override /proc/*/mount{s,info}'), nfs_path() is used to generate the mounted device name reported back to userland.
      nfs_path() always generates a trailing slash when the given dentry is the root of an NFS mount, but userland may expect the original device name to be returned verbatim (as it used to be). Make this canonicalisation optional and change the callers accordingly.
      [jrnieder@gmail.com: use flag instead of bool argument]
      Reported-and-tested-by: Chris Hiestand <chiestand@salk.edu>
      Reference: http://bugs.debian.org/669314
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Cc: <stable@vger.kernel.org> # v2.6.39+
      Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * | | nfsv3: Make v3 mounts fail with ETIMEDOUTs instead EIO on mountd timeouts  (Scott Mayhew, 2012-10-31)
      In a very busy v3 environment, rpc.mountd can respond to the NULL procedure but not the MNT procedure in a timely manner, causing the MNT procedure to time out. The problem is that the mount system call returns EIO, which causes the mount to fail, instead of ETIMEDOUT, which would cause the mount to be retried.
      This patch sets the RPC_TASK_SOFT|RPC_TASK_TIMEOUT flags on the rpc_call_sync() call in nfs_mount(), which causes ETIMEDOUT to be returned on timed out connections.
      Signed-off-by: Steve Dickson <steved@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
| * | | nfs: Check whether a layout pointer is NULL before free it  (Yanchuan Nian, 2012-10-31)
      The new layout pointer in pnfs_find_alloc_layout() may be NULL because of an out-of-memory condition. We must check for this, otherwise pnfs_free_layout_hdr() will go wrong because it cannot deal with a NULL pointer.
      Signed-off-by: Yanchuan Nian <ycnian@gmail.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * | | NFS: fix bug in legacy DNS resolver.  (NeilBrown, 2012-10-31)
      The DNS resolver's use of the sunrpc cache involves a 'ttl' number (relative) rather than a timeout (absolute). This confused me when I wrote commit c5b29f885afe890f953f7f23424045cdad31d3e4 "sunrpc: use seconds since boot in expiry cache" and I managed to break it. The effect is that any TTL is interpreted as 0, and nothing useful gets into the cache.
      This patch removes the use of get_expiry() - which really expects an expiry time - and uses get_uint() instead, treating the int correctly as a ttl.
      This fixes a regression that has been present since 2.6.37, causing certain NFS accesses in certain environments to incorrectly fail.
      Reported-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Chuck Lever <chuck.lever@oracle.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
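      [Editorial note: the relative-vs-absolute distinction at the core of this bug, as a small user-space sketch. seconds_since_boot() here is a stand-in built on CLOCK_MONOTONIC, not the sunrpc cache API; the ttl value is an arbitrary example.]

    #include <stdio.h>
    #include <time.h>

    /* Stand-in for the kernel's seconds-since-boot clock. */
    static long seconds_since_boot(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec;
    }

    int main(void)
    {
        long ttl = 300;     /* parsed from the cache line: a relative value */

        /* Bug shape: handing this number to a helper that expects an
         * absolute expiry time, so the stored expiry ends up useless. */

        /* Fix shape: read it as a plain integer and convert it yourself. */
        long expiry = seconds_since_boot() + ttl;

        printf("now=%ld  expiry=%ld (now + ttl)\n",
               seconds_since_boot(), expiry);
        return 0;
    }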
| * | | NFSv4: nfs4_locku_done must release the sequence id  (Trond Myklebust, 2012-10-31)
      If the state recovery machinery is triggered by the call to nfs4_async_handle_error() then we can deadlock.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
| * | | NFSv4.1: We must release the sequence id when we fail to get a session slot  (Trond Myklebust, 2012-10-31)
      If we do not release the sequence id in cases where we fail to get a session slot, then we can deadlock if we hit a recovery scenario.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
| * | | NFS: Wait for session recovery to finish before returning  (Bryan Schumaker, 2012-10-31)
      Currently, we will schedule session recovery and then return to the caller of nfs4_handle_exception. This works for most cases, but causes a hang on the following test case:
        Client                        Server
        ------                        ------
        Open file over NFS v4.1
        Write to file
                                      Expire client
        Try to lock file
      The server will return NFS4ERR_BADSESSION, prompting the client to schedule recovery. However, the client will continue placing lock attempts and the open recovery never seems to be scheduled. The simplest solution is to wait for session recovery to run before retrying the lock.
      Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
* | | | Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux  (Linus Torvalds, 2012-11-03)
    Pull thermal management & ACPI update from Zhang Rui.
    Ho humm. Normally these things go through Len. But it's just three small fixes, I guess I can pull directly too.
    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux:
      exynos4_tmu_driver_ids should be exynos_tmu_driver_ids.
      ACPI video: Ignore errors after _DOD evaluation.
      thermal: solve compilation errors in rcar_thermal
| * | | | exynos4_tmu_driver_ids should be exynos_tmu_driver_ids.  (Jonghwan Choi, 2012-11-02)
      Signed-off-by: Jonghwan Choi <jhbird.choi@samsung.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@linaro.org>
      Signed-off-by: Zhang Rui <rui.zhang@intel.com>
| * | | | ACPI video: Ignore errors after _DOD evaluation.  (Igor Murzov, 2012-11-02)
      There are systems where the video module is known to work fine regardless of a broken _DOD, and ignoring the returned value here doesn't cause any issues later. This should fix brightness controls on some laptops.
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=47861
      Signed-off-by: Igor Murzov <e-mail@date.by>
      Reviewed-by: Sergey V <sftp.mtuci@gmail.com>
      Signed-off-by: Zhang Rui <rui.zhang@intel.com>
| * | | | thermal: solve compilation errors in rcar_thermal  (Devendra Naga, 2012-11-02)
      The following errors were reported:
      drivers/thermal/rcar_thermal.c: In function ‘rcar_thermal_probe’:
      drivers/thermal/rcar_thermal.c:214:10: warning: passing argument 3 of ‘thermal_zone_device_register’ makes integer from pointer without a cast [enabled by default]
      include/linux/thermal.h:166:29: note: expected ‘int’ but argument is of type ‘struct rcar_thermal_priv *’
      drivers/thermal/rcar_thermal.c:214:10: error: too few arguments to function ‘thermal_zone_device_register’
      include/linux/thermal.h:166:29: note: declared here
      make[1]: *** [drivers/thermal/rcar_thermal.o] Error 1
      make: *** [drivers/thermal/rcar_thermal.o] Error 2
      with gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
      Signed-off-by: Devendra Naga <develkernel412222@gmail.com>
      Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
      Signed-off-by: Zhang Rui <rui.zhang@intel.com>
* | | | | Merge branch 'i2c-embedded/for-current' of git://git.pengutronix.de/git/wsa/linux  (Linus Torvalds, 2012-11-03)
    Pull i2c embedded fixes from Wolfram Sang: "Two patches are usual stuff. The bigger patch is needed to correct a wrong decision made in this merge window. We hoped to get the PIOQUEUE mode in the mxs driver working with DMA, but it turned out to be too broken (leading to data loss), so we now think it is best to remove it entirely and work only with DMA now. The patch should be in 3.7, IMO, so users never get the chance to use both modes in parallel."
    * 'i2c-embedded/for-current' of git://git.pengutronix.de/git/wsa/linux:
      i2c: tegra: set irq name as device name
      i2c-nomadik: Fixup clock handling
      i2c: mxs: remove broken PIOQUEUE support
| * | | | | i2c: tegra: set irq name as device name  (Laxman Dewangan, 2012-11-02)
      When looking at the irq names for tegra i2c, every instance's irq shows up as tegra_i2c. Pass the device name properly so that the irq names include the instance, e.g. tegra-i2c.0, tegra-i2c.1, etc.
      Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
      Acked-by: Jean Delvare <khali@linux-fr.org>
      Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
| * | | | | i2c-nomadik: Fixup clock handling  (Philippe Begnic, 2012-11-02)
      Make sure to clk_prepare as well as clk_enable.
      Signed-off-by: Philippe Begnic <philippe.begnic@stericsson.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
| * | | | | i2c: mxs: remove broken PIOQUEUE support  (Wolfram Sang, 2012-11-02)
      This I2C master can do DMA and PIOQUEUE (PIO with FIFO). Originally, only PIOQUEUE was supported and it had issues; then DMA support was added this cycle. The original intention was to keep PIOQUEUE since it has less overhead, which is nice for small transfers. However, runtime switching between PIOQUEUE and DMA depending on the transfer size never worked despite a lot of trying.
      Since PIOQUEUE mode itself was flaky (polling at places where interrupts failed to work) and the implementation also imposed a size limit for transfers, it is best to remove the support, so users don't fall over its limitations. It also makes the driver a lot cleaner and more robust. If somebody really wants less overhead, plain PIO mode could still be implemented, with the additional advantage that this mode is also available on MX23, too.
      Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
      Reviewed-by: Marek Vasut <marex@denx.de>