author     Michael Holzheu <holzheu@linux.vnet.ibm.com>    2009-10-14 06:43:45 -0400
committer  Martin Schwidefsky <schwidefsky@de.ibm.com>     2009-10-14 06:43:52 -0400
commit     03cadd36d51c737d7ad6aa21e2524296be6fe57f
tree       bbfe2c44a6eed8edf5b3f9a5064031efb00fc8cd /drivers/s390/char/tape_block.c
parent     7874b1b66a53c4d9c8dcb37884cbb758aa2d712c
[S390] tape390: Fix request queue handling in block driver
When setting a channel attached tape online under Linux 2.6.31, the
"vol_id" process from udev hangs in sync_page():
2 sync_page+144 [0x1dfaac]
3 __wait_on_bit_lock+194 [0x58c23e]
4 __lock_page+116 [0x1df9dc]
5 truncate_inode_pages_range+728 [0x1ed7cc]
6 __blkdev_put+244 [0x25f738]
7 __fput+300 [0x229c4c]
8 filp_close+122 [0x225a3a]
The reason for that is an error in the request queue handling. It can
happen that we fetch a request, but do not process it further because
the number of queued requests exceeds TAPEBLOCK_MIN_REQUEUE.
To fix this, we should call blk_peek_request() instead of
blk_fetch_request() in the while condition and fetch the request in
the loop body afterwards.
This bug was introduced by the patch "block: implement and enforce
request peek/start/fetch" (9934c8c04561413609d2bc38c6b9f268cba774a4).
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Diffstat (limited to 'drivers/s390/char/tape_block.c')
 drivers/s390/char/tape_block.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/s390/char/tape_block.c b/drivers/s390/char/tape_block.c
index 64f57ef2763c..0c0705b91c28 100644
--- a/drivers/s390/char/tape_block.c
+++ b/drivers/s390/char/tape_block.c
@@ -162,9 +162,10 @@ tapeblock_requeue(struct work_struct *work) {
 	spin_lock_irq(&device->blk_data.request_queue_lock);
 	while (
 		!blk_queue_plugged(queue) &&
-		(req = blk_fetch_request(queue)) &&
+		blk_peek_request(queue) &&
 		nr_queued < TAPEBLOCK_MIN_REQUEUE
 	) {
+		req = blk_fetch_request(queue);
 		if (rq_data_dir(req) == WRITE) {
 			DBF_EVENT(1, "TBLOCK: Rejecting write request\n");
 			spin_unlock_irq(&device->blk_data.request_queue_lock);