This is the fourth respin of the patch to convert oplock breaks to
use the slow_work facility.
A customer of ours was testing a backport of one of the earlier
patchsets, and hit a "Busy inodes after umount..." problem. An oplock
break job had raced with a umount, and the superblock got torn down and
its memory reused. When the oplock break job tried to dereference the
inode->i_sb, the kernel oopsed.
This patchset has the oplock break job hold an inode and vfsmount
reference until the oplock break completes. With this, there should be
no need to take a tcon reference (the vfsmount implicitly holds one
already).
Currently, when an oplock break comes in, there's a chance that the
oplock break job won't occur if the allocation of the oplock_q_entry
fails. There are also some rather nasty races in the allocation and
handling of these structs.
Rather than allocating oplock queue entries when an oplock break comes
in, add a few extra fields to the cifsFileInfo struct. Get rid of the
dedicated cifs_oplock_thread as well and queue the oplock break job to
the slow_work thread pool.
This approach also has the advantage that the oplock break jobs can
potentially run in parallel rather than be serialized like they are
today.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
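A rough sketch of the approach, with illustrative names and a from-memory
rendering of that era's slow_work interface rather than the actual cifs code:
the work item is embedded in the per-file struct so nothing needs to be
allocated at oplock-break time, and the get_ref/put_ref hooks pin the
vfsmount and inode for the life of the job.
#include <linux/kernel.h>
#include <linux/slow-work.h>
#include <linux/mount.h>
#include <linux/fs.h>

struct example_file_info {
        struct inode *inode;
        struct vfsmount *mnt;
        struct slow_work oplock_break;  /* embedded work item, no allocation at break time */
};

static int example_oplock_break_get(struct slow_work *work)
{
        struct example_file_info *cfile =
                container_of(work, struct example_file_info, oplock_break);

        mntget(cfile->mnt);     /* the vfsmount implicitly pins the tcon */
        igrab(cfile->inode);    /* keep the inode (and superblock) alive */
        return 0;
}

static void example_oplock_break_put(struct slow_work *work)
{
        struct example_file_info *cfile =
                container_of(work, struct example_file_info, oplock_break);

        iput(cfile->inode);
        mntput(cfile->mnt);
}

static void example_oplock_break_execute(struct slow_work *work)
{
        /* flush dirty data and send the oplock break acknowledgement here */
}

static const struct slow_work_ops example_oplock_break_ops = {
        .get_ref = example_oplock_break_get,
        .put_ref = example_oplock_break_put,
        .execute = example_oplock_break_execute,
};

/* called from the demultiplex thread when an oplock break arrives */
static void example_queue_oplock_break(struct example_file_info *cfile)
{
        slow_work_init(&cfile->oplock_break, &example_oplock_break_ops);
        slow_work_enqueue(&cfile->oplock_break);
}
Because each job carries its own references, several breaks can run in
parallel in the slow_work pool without the teardown race described above.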
Right now, the GlobalOplock_Q is protected by the GlobalMid_Lock. That
lock is also used for completely unrelated purposes (mostly for managing
the global mid queue). Give the list its own dedicated spinlock
(cifs_oplock_lock) and rename the list to cifs_oplock_list to
eliminate the camel-case.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
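A minimal sketch of the new locking arrangement; the queueing helper is
illustrative, only the list and lock names come from the description above.
#include <linux/list.h>
#include <linux/spinlock.h>

static LIST_HEAD(cifs_oplock_list);
static DEFINE_SPINLOCK(cifs_oplock_lock);       /* protects cifs_oplock_list only */

static void example_queue_oplock_entry(struct list_head *entry)
{
        spin_lock(&cifs_oplock_lock);
        list_add_tail(entry, &cifs_oplock_list);
        spin_unlock(&cifs_oplock_lock);
}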
nonblocking cases
We have used approximately 15-second timeouts on nonblocking sends in the past,
and also a 15-second SMB timeout (waiting for server responses, for most request
types). Now that we can do blocking TCP sends, make the blocking send timeout
approximately the same (15 seconds).
Signed-off-by: Steve French <sfrench@us.ibm.com>
cifs: turn smb_send into a wrapper around smb_sendv
Rename smb_send2 to smb_sendv to make it consistent with kernel naming
conventions for functions that take a vector.
There's no need to have 2 functions to handle sending SMB calls. Turn
smb_send into a wrapper around smb_sendv. This also allows us to
properly mark the socket as needing to be reconnected when there's a
partial send from smb_send.
Also, in practice we always use the address and noblocksnd flag
that's attached to the TCP_Server_Info. There's no need to pass
them in as separate args to smb_sendv.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Acked-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
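A sketch of the wrapper pattern being described; the prototypes here are
simplified stand-ins, not the exact cifs signatures.
#include <linux/uio.h>

struct TCP_Server_Info;         /* opaque for this sketch */

static int smb_sendv(struct TCP_Server_Info *server, struct kvec *iov,
                     int n_vec)
{
        /* loop over kernel_sendmsg() until the whole vector is sent,
         * marking the session for reconnect on a partial send */
        return 0;
}

static int smb_send(struct TCP_Server_Info *server, void *buf,
                    unsigned int len)
{
        struct kvec iov;

        iov.iov_base = buf;
        iov.iov_len = len;
        return smb_sendv(server, &iov, 1);      /* single-element vector */
}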
Remove an already-checked error condition in SendReceiveBlockingLock
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Streamline SendReceiveBlockingLock: Use "goto out:" in an error condition
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Streamline SendReceiveBlockingLock: Use "goto out:" in an error condition
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Slightly streamline SendReceive[2]
Remove an else branch by naming the error condition what it is
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Check the return value of cifs_sign_smb[2]
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
"else" branch
This is no functional change, because in the "if" branch we do an early
"return 0;".
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Simplify allocate_mid() slightly: Remove some unnecessary "else" branches
Signed-off-by: Volker Lendecke <vl@samba.org>
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
inbuf->smb_buf_length does not change in wait_for_free_request() or in
allocate_mid(), so we can check it early.
Signed-off-by: Volker Lendecke <vl@samba.org>
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
cifs: fix wait_for_response to time out sleeping processes correctly
The current scheme that CIFS uses to sleep and wait for a response is
not quite what we want. After sending a request, wait_for_response puts
the task to sleep with wait_event(). One of the conditions for
wait_event is a timeout (using time_after()).
The problem with this is that there is no guarantee that the process
will ever be woken back up. If the server stops sending data, then
cifs_demultiplex_thread will leave the tasks on its response queue sleeping.
I think the only thing that saves us here is the fact that
cifs_dnotify_thread periodically (every 15s) wakes up sleeping processes
on all response_q's that have calls in flight. This makes for
unnecessary wakeups of some processes. It also means large variability
in the timeouts since they're all woken up at once.
Instead of this, put the tasks to sleep with wait_event_timeout. This
makes them wake up on their own if they time out. With this change,
cifs_dnotify_thread should no longer be needed.
I've been testing this in conjunction with some other patches that I'm
working on. It doesn't seem to affect performance at all with heavy
I/O. Identical iozone -ac runs complete in almost exactly the same time
(<1% difference in times).
Thanks to Wasrshi Nimara for initially pointing this out. Wasrshi, it
would be nice to know whether this patch also helps your testcase.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Cc: Wasrshi Nimara <warshinimara@gmail.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
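A condensed before/after sketch, with a generic completion flag standing in
for the actual mid-state check in cifs.
#include <linux/wait.h>
#include <linux/jiffies.h>

static long example_wait_for_response(wait_queue_head_t *response_q,
                                      int *answered)
{
        /*
         * Old scheme (problematic): the timeout is only part of the wait
         * condition, so the task sleeps until someone else happens to wake
         * the queue:
         *
         *      wait_event(*response_q,
         *                 *answered || time_after(jiffies, deadline));
         *
         * New scheme: the task wakes itself when the timeout expires.
         */
        return wait_event_timeout(*response_q, *answered, 15 * HZ);
}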
Mutexes are preferred for single-holder semaphores...
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
smb_send2 exit logic was strange, and with the previous change it could
cause us to fail large SMB writes when the whole SMB was not sent as one
chunk.
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
In some heavy stress conditions cifs could get EAGAIN
repeatedly in smb_send2, which led to repeated retries and eventually
failure of large writes, which could lead to data corruption.
There are three changes that were suggested by various network
developers:
1) convert cifs from non-blocking to blocking tcp sendmsg
(we left in the retry on failure)
2) change cifs to not set sendbuf and rcvbuf size for the socket
(let tcp autotune the buffer sizes since that works much better
in the TCP stack now)
3) if we have a partial frame sent in smb_send2, mark the tcp
session as invalid (close the socket and reconnect) so we do
not corrupt the remaining part of the SMB with the beginning
of the next SMB.
This does not appear to hurt performance measurably and has
been run in various scenarios, but it definitely removes
a corruption that we were seeing in some high stress
test cases.
Acked-by: Shirish Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
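A sketch of changes 1 and 2 above (illustrative, not the actual smb_send2):
drop MSG_DONTWAIT so the TCP send blocks instead of bouncing with EAGAIN,
and do not force SO_SNDBUF/SO_RCVBUF so TCP autotuning can size the socket
buffers.
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>

static int example_smb_send(struct socket *ssocket, struct kvec *iov,
                            int n_vec, size_t total_len)
{
        struct msghdr smb_msg = { };

        /* was: smb_msg.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL; */
        smb_msg.msg_flags = MSG_NOSIGNAL;       /* blocking send */

        /* note: no SO_SNDBUF/SO_RCVBUF setsockopt calls either;
         * let TCP autotune the buffer sizes */

        return kernel_sendmsg(ssocket, &smb_msg, iov, n_vec, total_len);
}
Change 3 (invalidating the session after a partial send) is then the
backstop for the rare case where a blocking send still returns short.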
| |
GFP_KERNEL and GFP_NOFS are mutually exclusive. If you combine them, you end up
with plain GFP_KERNEL which can deadlock in cases where you really want
GFP_NOFS.
Cc: Steve French <sfrench@samba.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Steve French <sfrench@us.ibm.com>
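To make the point concrete (hypothetical call site): GFP_NOFS is GFP_KERNEL
with __GFP_FS cleared, so OR'ing the two just restores __GFP_FS and you are
back to plain GFP_KERNEL.
#include <linux/slab.h>

static void *example_alloc(size_t len)
{
        /* wrong: GFP_KERNEL | GFP_NOFS == GFP_KERNEL, can recurse into the fs */
        /* return kmalloc(len, GFP_KERNEL | GFP_NOFS); */

        /* right: plain GFP_NOFS on paths that must not re-enter the filesystem */
        return kmalloc(len, GFP_NOFS);
}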
There are cases in which, on a full socket that requires the app (cifs
in this case) to retry sending data, we were not retrying because we
did not reinitialize a counter. This fixes the retry logic to retry
for up to 15 seconds on stuck sockets.
Signed-off-by: Shirish Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
If a tcon is being freed in tconInfoFree, clean up any entries that may
exist in the global oplock queue, as the tcon structure hanging off of those
entries will be invalid and can cause an oops while accessing any elements in
the tcon structure.
Signed-off-by: Shirish Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Christoph had noticed too many ifdefs in the CIFS code making it
hard to read. This patch removes about a quarter of them from
the C files in cifs by improving a few key ifdefs in the .h files.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
request
In the SendReceive() function in transport.c, the message payload is
memcpy'd into a buffer passed via the out_buf param. The function
assumes that all buffers are of size (CIFSMaxBufSize +
MAX_CIFS_HDR_SIZE); unfortunately it is also called with smaller
(MAX_CIFS_SMALL_BUFFER_SIZE) buffers. There are eight callers
(SMB worker functions) which are primarily affected by this change:
TreeDisconnect, uLogoff, Close, findClose, SetFileSize, SetFileTimes,
Lock and PosixLock
CC: Dave Kleikamp <shaggy@austin.ibm.com>
CC: Przemyslaw Wegrzyn <czajnik@czajsoft.pl>
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
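One defensive way to express the constraint above (a minimal sketch of the
general pattern only, not necessarily the actual fix): clamp the copy to the
size of the buffer the caller actually passed in.
#include <linux/string.h>
#include <linux/kernel.h>

static void example_copy_response(char *out_buf, size_t out_buf_size,
                                  const char *resp, size_t resp_len)
{
        memcpy(out_buf, resp, min(resp_len, out_buf_size));
}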
Also returns more accurate errors to mount for the cases of
account expired and password expired
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
This should be the last big batch of whitespace/formatting fixes.
checkpatch warnings for the cifs directory are down about 90%, and
many of the remaining ones are harder to remove or would make the code
harder to read.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Yehuda Sadeh Weinraub <Yehuda.Sadeh@expand.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
checkpatch.pl redux
Signed-off-by: Steve French <sfrench@us.ibm.com>
Noticed by Shaggy.
Signed-off-by: Shaggy <shaggy@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Replace all uses of kmem_cache_t with struct kmem_cache.
The patch was generated using the following script:
#!/bin/sh
#
# Replace one string by another in all the kernel sources.
#
set -e
for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
        quilt add $file
        sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
        mv /tmp/$$ $file
        quilt refresh
done
The script was run like this
sh replace kmem_cache_t "struct kmem_cache"
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
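The effect on a declaration, before and after (cifs_req_cachep used here
purely as an example):
#include <linux/slab.h>

/* before: kmem_cache_t *cifs_req_cachep; */
struct kmem_cache *cifs_req_cachep;     /* after */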
SLAB_KERNEL is an alias of GFP_KERNEL.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
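A representative call-site change (hypothetical wrapper; the SLAB_NOFS to
GFP_NOFS patch below is the analogous conversion):
#include <linux/slab.h>

static void *example_get_request_buf(struct kmem_cache *cachep)
{
        /* before: return kmem_cache_alloc(cachep, SLAB_KERNEL); */
        return kmem_cache_alloc(cachep, GFP_KERNEL);
}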
SLAB_NOFS is an alias of GFP_NOFS.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Allow Windows blocking locks to be cancelled via a
CANCEL_LOCK call. TODO - restrict this to servers
that support NT_STATUS codes (Win9x will probably
not support this call).
Signed-off-by: Jeremy Allison <jra@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
(cherry picked from 570d4d2d895569825d0d017d4e76b51138f68864 commit)
Although harmless, we were sometimes treating midState as though it contained
flags, but the values are exclusive states; this makes that more clear.
Signed-off-by: Jeremy Allison <jra@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
(cherry picked from 586c057c3a68dd6ae0f3ba94fbf76798b1558074 commit)
Signed-off-by: Jeremy Allison <jra@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
(cherry picked from b33a3f55e54fd210fc043eafcf83728b03bc9e02 commit)
request and do not time out slow requests to a server that is still responding
well to other threads
Suggested by jra of Samba team
Signed-off-by: Steve French <sfrench@us.ibm.com>
(cherry picked from 89b57148115479eef074b8d3f86c4c86c96ac969 commit)
small req buffer pool less often.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Could cause a memory leak in some read paths, depending on what junk followed it on the stack.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
memcpy. Part 1
Signed-off-by: Steve French <sfrench@us.ibm.com>
the request queue. Also periodically wake up response_q so threads can
check if stuck requests have timed out. Work around a Windows server
illegal SMB length on a transact2 findfirst response.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
The new cifs_writepages routine was not updating bytes written in the cifs stats.
Also added the ability to clear /proc/fs/cifs/Stats by writing (0 or 1) to it.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Continue implementation of cifs umount begin to allow forced unmounts of
cifs mounts.
Signed-off-by: Steve French <sfrench@us.ibm.com>
ever longer amounts (up to 15 seconds). This improves performance
especially when using large wsize.
Signed-off-by: Steve French <sfrench@us.ibm.com>
These changes to debug code and new stats are helpful in
debugging potential tcp performance/configuration problems under cifs.
Signed-off-by: Steve French <sfrench@us.ibm.com>
This allows cifs_writepages to send data in larger chunks from the page
cache, without requiring larger memory allocations in other cases.
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>