author     Jens Axboe <jens.axboe@oracle.com>    2008-05-07 03:48:17 -0400
committer  Jens Axboe <jens.axboe@oracle.com>    2008-05-07 03:48:17 -0400
commit     dbaf2c003e151ad9231778819b0977f95e20e06f (patch)
tree       2768a0cd046801d83faf04c408a7d53a2fdfabc5 /block
parent     2cdf79cafbd11580f5b63cd4993b45c1c4952415 (diff)
block: optimize generic_unplug_device()
Original patch from Mikulas Patocka <mpatocka@redhat.com>

Mike Anderson was running an OLTP benchmark on a computer with 48 physical disks mapped to one logical device via device mapper.

He found a slowdown on request_queue->lock in generic_unplug_device. The slowdown happens because when some code calls unplug on the device mapper device, device mapper calls unplug on all the physical disks. These unplug calls take the lock, find that the queue is already unplugged, release the lock and exit.

With the patch below, performance of the benchmark increased by 18% (for the whole OLTP application, not just block layer microbenchmarks).

So I'm submitting this patch for upstream. I think the patch is correct: when multiple threads call plug and unplug simultaneously, it is unspecified whether the queue ends up plugged or unplugged, so the patch can't make this worse. And the caller that plugged the queue should unplug it anyway (if it doesn't, there's a 3ms timeout).

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
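For illustration only, here is a minimal userspace sketch of the same "test the flag before taking the lock" idea, written in plain C with pthreads rather than kernel code. The struct, field names and unplug() helper are hypothetical stand-ins, not the block layer API; the point is that the unlocked check lets the common already-unplugged case skip the lock, while the state is still re-checked under the lock before doing real work.

/* Hypothetical sketch of the test-before-lock pattern (not kernel code). */
#include <pthread.h>
#include <stdbool.h>

struct queue {
	pthread_mutex_t lock;
	bool plugged;
};

static void unplug(struct queue *q)
{
	/* Cheap unlocked check: if the queue does not look plugged,
	 * skip the lock entirely.  A racing plug is harmless here,
	 * because whoever plugs the queue is expected to unplug it
	 * later (or an unplug timer eventually fires). */
	if (!q->plugged)
		return;

	pthread_mutex_lock(&q->lock);
	if (q->plugged) {		/* re-check under the lock */
		q->plugged = false;
		/* ... dispatch pending requests ... */
	}
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	struct queue q = { PTHREAD_MUTEX_INITIALIZER, true };

	unplug(&q);
	return 0;
}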
Diffstat (limited to 'block')
-rw-r--r--  block/blk-core.c  8
1 file changed, 5 insertions, 3 deletions
diff --git a/block/blk-core.c b/block/blk-core.c
index b754a4a2f9bd..1b7dddf94f4f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -253,9 +253,11 @@ EXPORT_SYMBOL(__generic_unplug_device);
  **/
 void generic_unplug_device(struct request_queue *q)
 {
-	spin_lock_irq(q->queue_lock);
-	__generic_unplug_device(q);
-	spin_unlock_irq(q->queue_lock);
+	if (blk_queue_plugged(q)) {
+		spin_lock_irq(q->queue_lock);
+		__generic_unplug_device(q);
+		spin_unlock_irq(q->queue_lock);
+	}
 }
 EXPORT_SYMBOL(generic_unplug_device);
 