author	Linus Torvalds <torvalds@linux-foundation.org>	2017-11-17 13:56:56 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-11-17 13:56:56 -0500
commit	06ede5f6086757f746b7be860ae76137f4e95032 (patch)
tree	6d591c11d3c2eea19ee99d9e80ddc018c41930ea
parent	a3841f94c7ecb3ede0f888d3fcfe8fb6368ddd7a (diff)
parent	62530ed8b1d07a45dec94d46e521c0c6c2d476e6 (diff)
Merge branch 'for-linus' of git://git.kernel.dk/linux-block
Pull more block layer updates from Jens Axboe:
 "A followup pull request, with some parts that either needed a bit
  more testing before going in, merge sync, or just later arriving
  fixes. This contains:

   - Timer related updates from Kees. These were purposefully delayed
     since I didn't want to pull in a later v4.14-rc tag to my block
     tree.

   - ide-cd prep sense buffer fix from Bart. Also delayed, as not to
     clash with the late fix we put into 4.14-rc.

   - Small BFQ updates series from Luca and Paolo.

   - Single nvmet fix from James, fixing a non-functional case there.

   - Bio fast clone fix from Michael, which made bcache return the
     wrong data for some cases.

   - Legacy IO path regression hang fix from Ming"

* 'for-linus' of git://git.kernel.dk/linux-block:
  bio: ensure __bio_clone_fast copies bi_partno
  nvmet_fc: fix better length checking
  block: wake up all tasks blocked in get_request()
  block, bfq: move debug blkio stats behind CONFIG_DEBUG_BLK_CGROUP
  block, bfq: update blkio stats outside the scheduler lock
  block, bfq: add missing invocations of bfqg_stats_update_io_add/remove
  doc, block, bfq: update max IOPS sustainable with BFQ
  ide: Make ide_cdrom_prep_fs() initialize the sense buffer pointer
  md: Convert timers to use timer_setup()
  block: swim3: Convert timers to use timer_setup()
  block/aoe: Convert timers to use timer_setup()
  amifloppy: Convert timers to use timer_setup()
  block/floppy: Convert callback to pass timer_list
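[Editor's note] The timer patches listed above all apply one mechanical conversion: instead of an "unsigned long data" cookie that the callback casts back to the owning object, the callback now receives a pointer to the struct timer_list itself and recovers its container. The following is a minimal userspace sketch of that idea, not kernel code; fake_timer, my_device, and my_callback are invented names for illustration. In the kernel the real helpers are timer_setup() and from_timer() (a container_of() wrapper), as the diffs below show.

/*
 * Userspace sketch of the pattern behind the timer_setup()/from_timer()
 * conversions in this pull. All names here are illustrative only.
 */
#include <stdio.h>
#include <stddef.h>

/* same idea as the kernel's container_of() */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_timer {
	void (*function)(struct fake_timer *t);
};

struct my_device {
	int id;
	struct fake_timer timer;	/* embedded, like struct timer_list */
};

/* new-style callback: takes the timer pointer, derives the owner */
static void my_callback(struct fake_timer *t)
{
	struct my_device *d = container_of(t, struct my_device, timer);

	printf("timer fired for device %d\n", d->id);
}

int main(void)
{
	struct my_device dev = { .id = 42 };

	dev.timer.function = my_callback;	/* timer_setup() analogue */
	dev.timer.function(&dev.timer);		/* simulate expiry */
	return 0;
}

The same recipe repeats in the md, bcache, dm-delay, dm-integrity, dm-raid1, aoe, amiflop, floppy, and swim3 patches below; where a driver assigns the callback through .function directly (aoe, swim3), the transitional (TIMER_FUNC_TYPE) cast appears instead.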
-rw-r--r--  Documentation/block/bfq-iosched.txt   43
-rw-r--r--  block/bfq-cgroup.c                   148
-rw-r--r--  block/bfq-iosched.c                  117
-rw-r--r--  block/bfq-iosched.h                    4
-rw-r--r--  block/bfq-wf2q.c                       1
-rw-r--r--  block/bio.c                            1
-rw-r--r--  block/blk-core.c                       4
-rw-r--r--  drivers/block/amiflop.c               57
-rw-r--r--  drivers/block/aoe/aoecmd.c             6
-rw-r--r--  drivers/block/aoe/aoedev.c             9
-rw-r--r--  drivers/block/floppy.c                10
-rw-r--r--  drivers/block/swim3.c                 31
-rw-r--r--  drivers/ide/ide-cd.c                   3
-rw-r--r--  drivers/md/bcache/stats.c              8
-rw-r--r--  drivers/md/dm-delay.c                  6
-rw-r--r--  drivers/md/dm-integrity.c              6
-rw-r--r--  drivers/md/dm-raid1.c                  8
-rw-r--r--  drivers/md/md.c                        9
-rw-r--r--  drivers/nvme/target/fc.c               6
19 files changed, 311 insertions(+), 166 deletions(-)
diff --git a/Documentation/block/bfq-iosched.txt b/Documentation/block/bfq-iosched.txt
index 3d6951d63489..8d8d8f06cab2 100644
--- a/Documentation/block/bfq-iosched.txt
+++ b/Documentation/block/bfq-iosched.txt
@@ -20,12 +20,27 @@ for that device, by setting low_latency to 0. See Section 3 for
 details on how to configure BFQ for the desired tradeoff between
 latency and throughput, or on how to maximize throughput.
 
-On average CPUs, the current version of BFQ can handle devices
-performing at most ~30K IOPS; at most ~50 KIOPS on faster CPUs. As a
-reference, 30-50 KIOPS correspond to very high bandwidths with
-sequential I/O (e.g., 8-12 GB/s if I/O requests are 256 KB large), and
-to 120-200 MB/s with 4KB random I/O. BFQ is currently being tested on
-multi-queue devices too.
+BFQ has a non-null overhead, which limits the maximum IOPS that a CPU
+can process for a device scheduled with BFQ. To give an idea of the
+limits on slow or average CPUs, here are, first, the limits of BFQ for
+three different CPUs, on, respectively, an average laptop, an old
+desktop, and a cheap embedded system, in case full hierarchical
+support is enabled (i.e., CONFIG_BFQ_GROUP_IOSCHED is set), but
+CONFIG_DEBUG_BLK_CGROUP is not set (Section 4-2):
+- Intel i7-4850HQ: 400 KIOPS
+- AMD A8-3850: 250 KIOPS
+- ARM CortexTM-A53 Octa-core: 80 KIOPS
+
+If CONFIG_DEBUG_BLK_CGROUP is set (and of course full hierarchical
+support is enabled), then the sustainable throughput with BFQ
+decreases, because all blkio.bfq* statistics are created and updated
+(Section 4-2). For BFQ, this leads to the following maximum
+sustainable throughputs, on the same systems as above:
+- Intel i7-4850HQ: 310 KIOPS
+- AMD A8-3850: 200 KIOPS
+- ARM CortexTM-A53 Octa-core: 56 KIOPS
+
+BFQ works for multi-queue devices too.
 
 The table of contents follow. Impatients can just jump to Section 3.
 
@@ -500,6 +515,22 @@ BFQ-specific files is "blkio.bfq." or "io.bfq." For example, the group
 parameter to set the weight of a group with BFQ is blkio.bfq.weight
 or io.bfq.weight.
 
+As for cgroups-v1 (blkio controller), the exact set of stat files
+created, and kept up-to-date by bfq, depends on whether
+CONFIG_DEBUG_BLK_CGROUP is set. If it is set, then bfq creates all
+the stat files documented in
+Documentation/cgroup-v1/blkio-controller.txt. If, instead,
+CONFIG_DEBUG_BLK_CGROUP is not set, then bfq creates only the files
+blkio.bfq.io_service_bytes
+blkio.bfq.io_service_bytes_recursive
+blkio.bfq.io_serviced
+blkio.bfq.io_serviced_recursive
+
+The value of CONFIG_DEBUG_BLK_CGROUP greatly influences the maximum
+throughput sustainable with bfq, because updating the blkio.bfq.*
+stats is rather costly, especially for some of the stats enabled by
+CONFIG_DEBUG_BLK_CGROUP.
+
 Parameters to set
 -----------------
 
diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index ceefb9a706d6..da1525ec4c87 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -24,7 +24,7 @@
 
 #include "bfq-iosched.h"
 
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
 
 /* bfqg stats flags */
 enum bfqg_stats_flags {
@@ -152,6 +152,57 @@ void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg)
 	bfqg_stats_update_group_wait_time(stats);
 }
 
+void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
+			      unsigned int op)
+{
+	blkg_rwstat_add(&bfqg->stats.queued, op, 1);
+	bfqg_stats_end_empty_time(&bfqg->stats);
+	if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
+		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
+}
+
+void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op)
+{
+	blkg_rwstat_add(&bfqg->stats.queued, op, -1);
+}
+
+void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op)
+{
+	blkg_rwstat_add(&bfqg->stats.merged, op, 1);
+}
+
+void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
+				  uint64_t io_start_time, unsigned int op)
+{
+	struct bfqg_stats *stats = &bfqg->stats;
+	unsigned long long now = sched_clock();
+
+	if (time_after64(now, io_start_time))
+		blkg_rwstat_add(&stats->service_time, op,
+				now - io_start_time);
+	if (time_after64(io_start_time, start_time))
+		blkg_rwstat_add(&stats->wait_time, op,
+				io_start_time - start_time);
+}
+
+#else /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
+
+void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
+			      unsigned int op) { }
+void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { }
+void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { }
+void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
+				  uint64_t io_start_time, unsigned int op) { }
+void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
+void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
+void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
+void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
+void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
+
+#endif /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
+
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+
 /*
  * blk-cgroup policy-related handlers
  * The following functions help in converting between blk-cgroup
@@ -229,42 +280,10 @@ void bfqg_and_blkg_put(struct bfq_group *bfqg)
 	blkg_put(bfqg_to_blkg(bfqg));
 }
 
-void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
-			      unsigned int op)
-{
-	blkg_rwstat_add(&bfqg->stats.queued, op, 1);
-	bfqg_stats_end_empty_time(&bfqg->stats);
-	if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
-		bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
-}
-
-void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op)
-{
-	blkg_rwstat_add(&bfqg->stats.queued, op, -1);
-}
-
-void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op)
-{
-	blkg_rwstat_add(&bfqg->stats.merged, op, 1);
-}
-
-void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
-				  uint64_t io_start_time, unsigned int op)
-{
-	struct bfqg_stats *stats = &bfqg->stats;
-	unsigned long long now = sched_clock();
-
-	if (time_after64(now, io_start_time))
-		blkg_rwstat_add(&stats->service_time, op,
-				now - io_start_time);
-	if (time_after64(io_start_time, start_time))
-		blkg_rwstat_add(&stats->wait_time, op,
-				io_start_time - start_time);
-}
-
 /* @stats = 0 */
 static void bfqg_stats_reset(struct bfqg_stats *stats)
 {
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	/* queued stats shouldn't be cleared */
 	blkg_rwstat_reset(&stats->merged);
 	blkg_rwstat_reset(&stats->service_time);
@@ -276,6 +295,7 @@ static void bfqg_stats_reset(struct bfqg_stats *stats)
 	blkg_stat_reset(&stats->group_wait_time);
 	blkg_stat_reset(&stats->idle_time);
 	blkg_stat_reset(&stats->empty_time);
+#endif
 }
 
 /* @to += @from */
@@ -284,6 +304,7 @@ static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
 	if (!to || !from)
 		return;
 
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	/* queued stats shouldn't be cleared */
 	blkg_rwstat_add_aux(&to->merged, &from->merged);
 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
@@ -296,6 +317,7 @@ static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
 	blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
 	blkg_stat_add_aux(&to->idle_time, &from->idle_time);
 	blkg_stat_add_aux(&to->empty_time, &from->empty_time);
+#endif
 }
 
 /*
@@ -342,6 +364,7 @@ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg)
 
 static void bfqg_stats_exit(struct bfqg_stats *stats)
 {
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	blkg_rwstat_exit(&stats->merged);
 	blkg_rwstat_exit(&stats->service_time);
 	blkg_rwstat_exit(&stats->wait_time);
@@ -353,10 +376,12 @@ static void bfqg_stats_exit(struct bfqg_stats *stats)
 	blkg_stat_exit(&stats->group_wait_time);
 	blkg_stat_exit(&stats->idle_time);
 	blkg_stat_exit(&stats->empty_time);
+#endif
 }
 
 static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
 {
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 	if (blkg_rwstat_init(&stats->merged, gfp) ||
 	    blkg_rwstat_init(&stats->service_time, gfp) ||
 	    blkg_rwstat_init(&stats->wait_time, gfp) ||
@@ -371,6 +396,7 @@ static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
 		bfqg_stats_exit(stats);
 		return -ENOMEM;
 	}
+#endif
 
 	return 0;
 }
@@ -887,6 +913,7 @@ static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
 	return bfq_io_set_weight_legacy(of_css(of), NULL, weight);
 }
 
+#ifdef CONFIG_DEBUG_BLK_CGROUP
 static int bfqg_print_stat(struct seq_file *sf, void *v)
 {
 	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat,
@@ -991,6 +1018,7 @@ static int bfqg_print_avg_queue_size(struct seq_file *sf, void *v)
 			  0, false);
 	return 0;
 }
+#endif /* CONFIG_DEBUG_BLK_CGROUP */
 
 struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
 {
@@ -1029,15 +1057,6 @@ struct cftype bfq_blkcg_legacy_files[] = {
 
 	/* statistics, covers only the tasks in the bfqg */
 	{
-		.name = "bfq.time",
-		.private = offsetof(struct bfq_group, stats.time),
-		.seq_show = bfqg_print_stat,
-	},
-	{
-		.name = "bfq.sectors",
-		.seq_show = bfqg_print_stat_sectors,
-	},
-	{
 		.name = "bfq.io_service_bytes",
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_bytes,
@@ -1047,6 +1066,16 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_ios,
 	},
+#ifdef CONFIG_DEBUG_BLK_CGROUP
+	{
+		.name = "bfq.time",
+		.private = offsetof(struct bfq_group, stats.time),
+		.seq_show = bfqg_print_stat,
+	},
+	{
+		.name = "bfq.sectors",
+		.seq_show = bfqg_print_stat_sectors,
+	},
 	{
 		.name = "bfq.io_service_time",
 		.private = offsetof(struct bfq_group, stats.service_time),
@@ -1067,18 +1096,10 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = offsetof(struct bfq_group, stats.queued),
 		.seq_show = bfqg_print_rwstat,
 	},
+#endif /* CONFIG_DEBUG_BLK_CGROUP */
 
 	/* the same statictics which cover the bfqg and its descendants */
 	{
-		.name = "bfq.time_recursive",
-		.private = offsetof(struct bfq_group, stats.time),
-		.seq_show = bfqg_print_stat_recursive,
-	},
-	{
-		.name = "bfq.sectors_recursive",
-		.seq_show = bfqg_print_stat_sectors_recursive,
-	},
-	{
 		.name = "bfq.io_service_bytes_recursive",
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_bytes_recursive,
@@ -1088,6 +1109,16 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_ios_recursive,
 	},
+#ifdef CONFIG_DEBUG_BLK_CGROUP
+	{
+		.name = "bfq.time_recursive",
+		.private = offsetof(struct bfq_group, stats.time),
+		.seq_show = bfqg_print_stat_recursive,
+	},
+	{
+		.name = "bfq.sectors_recursive",
+		.seq_show = bfqg_print_stat_sectors_recursive,
+	},
 	{
 		.name = "bfq.io_service_time_recursive",
 		.private = offsetof(struct bfq_group, stats.service_time),
@@ -1132,6 +1163,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = offsetof(struct bfq_group, stats.dequeue),
 		.seq_show = bfqg_print_stat,
 	},
+#endif /* CONFIG_DEBUG_BLK_CGROUP */
 	{ } /* terminate */
 };
 
@@ -1147,18 +1179,6 @@ struct cftype bfq_blkg_files[] = {
 
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
 
-void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
-			      unsigned int op) { }
-void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { }
-void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { }
-void bfqg_stats_update_completion(struct bfq_group *bfqg, uint64_t start_time,
-				  uint64_t io_start_time, unsigned int op) { }
-void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
-void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
-void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
-void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
-void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
-
 void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		   struct bfq_group *bfqg) {}
 
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 889a8549d97f..bcb6d21baf12 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1359,7 +1359,6 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
 		bfqq->ttime.last_end_request +
 		bfqd->bfq_slice_idle * 3;
 
-	bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq, rq->cmd_flags);
 
 	/*
 	 * bfqq deserves to be weight-raised if:
@@ -1633,7 +1632,6 @@ static void bfq_remove_request(struct request_queue *q,
 	if (rq->cmd_flags & REQ_META)
 		bfqq->meta_pending--;
 
-	bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags);
 }
 
 static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
@@ -1746,6 +1744,7 @@ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
 		bfqq->next_rq = rq;
 
 	bfq_remove_request(q, next);
+	bfqg_stats_update_io_remove(bfqq_group(bfqq), next->cmd_flags);
 
 	spin_unlock_irq(&bfqq->bfqd->lock);
 end:
@@ -2229,7 +2228,6 @@ static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
 				       struct bfq_queue *bfqq)
 {
 	if (bfqq) {
-		bfqg_stats_update_avg_queue_size(bfqq_group(bfqq));
 		bfq_clear_bfqq_fifo_expire(bfqq);
 
 		bfqd->budgets_assigned = (bfqd->budgets_assigned * 7 + 256) / 8;
@@ -3470,7 +3468,6 @@ check_queue:
 			 */
 			bfq_clear_bfqq_wait_request(bfqq);
 			hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
-			bfqg_stats_update_idle_time(bfqq_group(bfqq));
 		}
 		goto keep_queue;
 	}
@@ -3696,12 +3693,67 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 {
 	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
 	struct request *rq;
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	struct bfq_queue *in_serv_queue, *bfqq;
+	bool waiting_rq, idle_timer_disabled;
+#endif
 
 	spin_lock_irq(&bfqd->lock);
 
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	in_serv_queue = bfqd->in_service_queue;
+	waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
+
+	rq = __bfq_dispatch_request(hctx);
+
+	idle_timer_disabled =
+		waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
+
+#else
 	rq = __bfq_dispatch_request(hctx);
+#endif
 	spin_unlock_irq(&bfqd->lock);
 
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	bfqq = rq ? RQ_BFQQ(rq) : NULL;
+	if (!idle_timer_disabled && !bfqq)
+		return rq;
+
+	/*
+	 * rq and bfqq are guaranteed to exist until this function
+	 * ends, for the following reasons. First, rq can be
+	 * dispatched to the device, and then can be completed and
+	 * freed, only after this function ends. Second, rq cannot be
+	 * merged (and thus freed because of a merge) any longer,
+	 * because it has already started. Thus rq cannot be freed
+	 * before this function ends, and, since rq has a reference to
+	 * bfqq, the same guarantee holds for bfqq too.
+	 *
+	 * In addition, the following queue lock guarantees that
+	 * bfqq_group(bfqq) exists as well.
+	 */
+	spin_lock_irq(hctx->queue->queue_lock);
+	if (idle_timer_disabled)
+		/*
+		 * Since the idle timer has been disabled,
+		 * in_serv_queue contained some request when
+		 * __bfq_dispatch_request was invoked above, which
+		 * implies that rq was picked exactly from
+		 * in_serv_queue. Thus in_serv_queue == bfqq, and is
+		 * therefore guaranteed to exist because of the above
+		 * arguments.
+		 */
+		bfqg_stats_update_idle_time(bfqq_group(in_serv_queue));
+	if (bfqq) {
+		struct bfq_group *bfqg = bfqq_group(bfqq);
+
+		bfqg_stats_update_avg_queue_size(bfqg);
+		bfqg_stats_set_start_empty_time(bfqg);
+		bfqg_stats_update_io_remove(bfqg, rq->cmd_flags);
+	}
+	spin_unlock_irq(hctx->queue->queue_lock);
+#endif
+
 	return rq;
 }
 
@@ -4159,7 +4211,6 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 			 */
 			bfq_clear_bfqq_wait_request(bfqq);
 			hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
-			bfqg_stats_update_idle_time(bfqq_group(bfqq));
 
 			/*
 			 * The queue is not empty, because a new request just
@@ -4174,10 +4225,12 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	}
 }
 
-static void __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
+/* returns true if it causes the idle timer to be disabled */
+static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
 {
 	struct bfq_queue *bfqq = RQ_BFQQ(rq),
 		*new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true);
+	bool waiting, idle_timer_disabled = false;
 
 	if (new_bfqq) {
 		if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
@@ -4211,12 +4264,16 @@ static void __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
 		bfqq = new_bfqq;
 	}
 
+	waiting = bfqq && bfq_bfqq_wait_request(bfqq);
 	bfq_add_request(rq);
+	idle_timer_disabled = waiting && !bfq_bfqq_wait_request(bfqq);
 
 	rq->fifo_time = ktime_get_ns() + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
 	list_add_tail(&rq->queuelist, &bfqq->fifo);
 
 	bfq_rq_enqueued(bfqd, bfqq, rq);
+
+	return idle_timer_disabled;
 }
 
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
@@ -4224,6 +4281,11 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 {
 	struct request_queue *q = hctx->queue;
 	struct bfq_data *bfqd = q->elevator->elevator_data;
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	struct bfq_queue *bfqq = RQ_BFQQ(rq);
+	bool idle_timer_disabled = false;
+	unsigned int cmd_flags;
+#endif
 
 	spin_lock_irq(&bfqd->lock);
 	if (blk_mq_sched_try_insert_merge(q, rq)) {
@@ -4242,7 +4304,17 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 		else
 			list_add_tail(&rq->queuelist, &bfqd->dispatch);
 	} else {
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+		idle_timer_disabled = __bfq_insert_request(bfqd, rq);
+		/*
+		 * Update bfqq, because, if a queue merge has occurred
+		 * in __bfq_insert_request, then rq has been
+		 * redirected into a new queue.
+		 */
+		bfqq = RQ_BFQQ(rq);
+#else
 		__bfq_insert_request(bfqd, rq);
+#endif
 
 		if (rq_mergeable(rq)) {
 			elv_rqhash_add(q, rq);
@@ -4251,7 +4323,35 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 		}
 	}
 
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	/*
+	 * Cache cmd_flags before releasing scheduler lock, because rq
+	 * may disappear afterwards (for example, because of a request
+	 * merge).
+	 */
+	cmd_flags = rq->cmd_flags;
+#endif
 	spin_unlock_irq(&bfqd->lock);
+
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+	if (!bfqq)
+		return;
+	/*
+	 * bfqq still exists, because it can disappear only after
+	 * either it is merged with another queue, or the process it
+	 * is associated with exits. But both actions must be taken by
+	 * the same process currently executing this flow of
+	 * instruction.
+	 *
+	 * In addition, the following queue lock guarantees that
+	 * bfqq_group(bfqq) exists as well.
+	 */
+	spin_lock_irq(q->queue_lock);
+	bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags);
+	if (idle_timer_disabled)
+		bfqg_stats_update_idle_time(bfqq_group(bfqq));
+	spin_unlock_irq(q->queue_lock);
+#endif
 }
 
 static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
@@ -4428,8 +4528,11 @@ static void bfq_finish_request(struct request *rq)
 		 * lock is held.
 		 */
 
-		if (!RB_EMPTY_NODE(&rq->rb_node))
+		if (!RB_EMPTY_NODE(&rq->rb_node)) {
 			bfq_remove_request(rq->q, rq);
+			bfqg_stats_update_io_remove(bfqq_group(bfqq),
+						    rq->cmd_flags);
+		}
 		bfq_put_rq_priv_body(bfqq);
 	}
 
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index ac0809c72c98..91c4390903a1 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -689,7 +689,7 @@ enum bfqq_expiration {
 };
 
 struct bfqg_stats {
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
 	/* number of ios merged */
 	struct blkg_rwstat merged;
 	/* total time spent on device in ns, may not be accurate w/ queueing */
@@ -717,7 +717,7 @@ struct bfqg_stats {
 	uint64_t start_idle_time;
 	uint64_t start_empty_time;
 	uint16_t flags;
-#endif /* CONFIG_BFQ_GROUP_IOSCHED */
+#endif /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
 };
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
index 414ba686a847..e495d3f9b4b0 100644
--- a/block/bfq-wf2q.c
+++ b/block/bfq-wf2q.c
@@ -843,7 +843,6 @@ void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
 		st->vtime += bfq_delta(served, st->wsum);
 		bfq_forget_idle(st);
 	}
-	bfqg_stats_set_start_empty_time(bfqq_group(bfqq));
 	bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs", served);
 }
 
diff --git a/block/bio.c b/block/bio.c
index b94a802f8ba3..459cc857f3d9 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -597,6 +597,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
 	 * so we don't set nor calculate new physical/hw segment counts here
 	 */
 	bio->bi_disk = bio_src->bi_disk;
+	bio->bi_partno = bio_src->bi_partno;
 	bio_set_flag(bio, BIO_CLONED);
 	bio->bi_opf = bio_src->bi_opf;
 	bio->bi_write_hint = bio_src->bi_write_hint;
diff --git a/block/blk-core.c b/block/blk-core.c
index 7c54c195e79e..1038706edd87 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -637,8 +637,8 @@ void blk_set_queue_dying(struct request_queue *q)
 	spin_lock_irq(q->queue_lock);
 	blk_queue_for_each_rl(rl, q) {
 		if (rl->rq_pool) {
-			wake_up(&rl->wait[BLK_RW_SYNC]);
-			wake_up(&rl->wait[BLK_RW_ASYNC]);
+			wake_up_all(&rl->wait[BLK_RW_SYNC]);
+			wake_up_all(&rl->wait[BLK_RW_ASYNC]);
 		}
 	}
 	spin_unlock_irq(q->queue_lock);
diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index 4e3fb9f104af..e5aa62fcf5a8 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -146,6 +146,7 @@ static struct amiga_floppy_struct unit[FD_MAX_UNITS];
 
 static struct timer_list flush_track_timer[FD_MAX_UNITS];
 static struct timer_list post_write_timer;
+static unsigned long post_write_timer_drive;
 static struct timer_list motor_on_timer;
 static struct timer_list motor_off_timer[FD_MAX_UNITS];
 static int on_attempts;
@@ -323,7 +324,7 @@ static void fd_deselect (int drive)
 
 }
 
-static void motor_on_callback(unsigned long ignored)
+static void motor_on_callback(struct timer_list *unused)
 {
 	if (!(ciaa.pra & DSKRDY) || --on_attempts == 0) {
 		complete_all(&motor_on_completion);
@@ -355,7 +356,7 @@ static int fd_motor_on(int nr)
 		on_attempts = -1;
 #if 0
 		printk (KERN_ERR "motor_on failed, turning motor off\n");
-		fd_motor_off (nr);
+		fd_motor_off (motor_off_timer + nr);
 		return 0;
 #else
 		printk (KERN_WARNING "DSKRDY not set after 1.5 seconds - assuming drive is spinning notwithstanding\n");
@@ -365,20 +366,17 @@ static int fd_motor_on(int nr)
 	return 1;
 }
 
-static void fd_motor_off(unsigned long drive)
+static void fd_motor_off(struct timer_list *timer)
 {
-	long calledfromint;
-#ifdef MODULE
-	long decusecount;
+	unsigned long drive = ((unsigned long)timer -
+			       (unsigned long)&motor_off_timer[0]) /
+			       sizeof(motor_off_timer[0]);
 
-	decusecount = drive & 0x40000000;
-#endif
-	calledfromint = drive & 0x80000000;
 	drive&=3;
-	if (calledfromint && !try_fdc(drive)) {
+	if (!try_fdc(drive)) {
 		/* We would be blocked in an interrupt, so try again later */
-		motor_off_timer[drive].expires = jiffies + 1;
-		add_timer(motor_off_timer + drive);
+		timer->expires = jiffies + 1;
+		add_timer(timer);
 		return;
 	}
 	unit[drive].motor = 0;
@@ -392,8 +390,6 @@ static void floppy_off (unsigned int nr)
 	int drive;
 
 	drive = nr & 3;
-	/* called this way it is always from interrupt */
-	motor_off_timer[drive].data = nr | 0x80000000;
 	mod_timer(motor_off_timer + drive, jiffies + 3*HZ);
 }
 
@@ -435,7 +431,7 @@ static int fd_calibrate(int drive)
 			break;
 		if (--n == 0) {
 			printk (KERN_ERR "fd%d: calibrate failed, turning motor off\n", drive);
-			fd_motor_off (drive);
+			fd_motor_off (motor_off_timer + drive);
 			unit[drive].track = -1;
 			rel_fdc();
 			return 0;
@@ -564,7 +560,7 @@ static irqreturn_t fd_block_done(int irq, void *dummy)
 	if (block_flag == 2) { /* writing */
 		writepending = 2;
 		post_write_timer.expires = jiffies + 1; /* at least 2 ms */
-		post_write_timer.data = selected;
+		post_write_timer_drive = selected;
 		add_timer(&post_write_timer);
 	}
 	else {                /* reading */
@@ -651,6 +647,10 @@ static void post_write (unsigned long drive)
 	rel_fdc(); /* corresponds to get_fdc() in raw_write */
 }
 
+static void post_write_callback(struct timer_list *timer)
+{
+	post_write(post_write_timer_drive);
+}
 
 /*
  * The following functions are to convert the block contents into raw data
@@ -1244,8 +1244,12 @@ static void dos_write(int disk)
 /* FIXME: this assumes the drive is still spinning -
  * which is only true if we complete writing a track within three seconds
  */
-static void flush_track_callback(unsigned long nr)
+static void flush_track_callback(struct timer_list *timer)
 {
+	unsigned long nr = ((unsigned long)timer -
+			    (unsigned long)&flush_track_timer[0]) /
+			    sizeof(flush_track_timer[0]);
+
 	nr&=3;
 	writefromint = 1;
 	if (!try_fdc(nr)) {
@@ -1649,8 +1653,7 @@ static void floppy_release(struct gendisk *disk, fmode_t mode)
 		fd_ref[drive] = 0;
 	}
 #ifdef MODULE
-/* the mod_use counter is handled this way */
-	floppy_off (drive | 0x40000000);
+	floppy_off (drive);
 #endif
 	mutex_unlock(&amiflop_mutex);
 }
@@ -1791,27 +1794,19 @@ static int __init amiga_floppy_probe(struct platform_device *pdev)
 				floppy_find, NULL, NULL);
 
 	/* initialize variables */
-	init_timer(&motor_on_timer);
+	timer_setup(&motor_on_timer, motor_on_callback, 0);
 	motor_on_timer.expires = 0;
-	motor_on_timer.data = 0;
-	motor_on_timer.function = motor_on_callback;
 	for (i = 0; i < FD_MAX_UNITS; i++) {
-		init_timer(&motor_off_timer[i]);
+		timer_setup(&motor_off_timer[i], fd_motor_off, 0);
 		motor_off_timer[i].expires = 0;
-		motor_off_timer[i].data = i|0x80000000;
-		motor_off_timer[i].function = fd_motor_off;
-		init_timer(&flush_track_timer[i]);
+		timer_setup(&flush_track_timer[i], flush_track_callback, 0);
 		flush_track_timer[i].expires = 0;
-		flush_track_timer[i].data = i;
-		flush_track_timer[i].function = flush_track_callback;
 
 		unit[i].track = -1;
 	}
 
-	init_timer(&post_write_timer);
+	timer_setup(&post_write_timer, post_write_callback, 0);
 	post_write_timer.expires = 0;
-	post_write_timer.data = 0;
-	post_write_timer.function = post_write;
 
 	for (i = 0; i < 128; i++)
 		mfmdecode[i]=255;
diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
index dc43254e05a4..55ab25f79a08 100644
--- a/drivers/block/aoe/aoecmd.c
+++ b/drivers/block/aoe/aoecmd.c
@@ -744,7 +744,7 @@ count_targets(struct aoedev *d, int *untainted)
 }
 
 static void
-rexmit_timer(ulong vp)
+rexmit_timer(struct timer_list *timer)
 {
 	struct aoedev *d;
 	struct aoetgt *t;
@@ -758,7 +758,7 @@ rexmit_timer(ulong vp)
 	int utgts;	/* number of aoetgt descriptors (not slots) */
 	int since;
 
-	d = (struct aoedev *) vp;
+	d = from_timer(d, timer, timer);
 
 	spin_lock_irqsave(&d->lock, flags);
 
@@ -1429,7 +1429,7 @@ aoecmd_ata_id(struct aoedev *d)
 
 	d->rttavg = RTTAVG_INIT;
 	d->rttdev = RTTDEV_INIT;
-	d->timer.function = rexmit_timer;
+	d->timer.function = (TIMER_FUNC_TYPE)rexmit_timer;
 
 	skb = skb_clone(skb, GFP_ATOMIC);
 	if (skb) {
diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
index b28fefb90391..697f735b07a4 100644
--- a/drivers/block/aoe/aoedev.c
+++ b/drivers/block/aoe/aoedev.c
@@ -15,7 +15,6 @@
 #include <linux/string.h>
 #include "aoe.h"
 
-static void dummy_timer(ulong);
 static void freetgt(struct aoedev *d, struct aoetgt *t);
 static void skbpoolfree(struct aoedev *d);
 
@@ -146,11 +145,11 @@ aoedev_put(struct aoedev *d)
 }
 
 static void
-dummy_timer(ulong vp)
+dummy_timer(struct timer_list *t)
 {
 	struct aoedev *d;
 
-	d = (struct aoedev *)vp;
+	d = from_timer(d, t, timer);
 	if (d->flags & DEVFL_TKILL)
 		return;
 	d->timer.expires = jiffies + HZ;
@@ -466,9 +465,7 @@ aoedev_by_aoeaddr(ulong maj, int min, int do_alloc)
 	INIT_WORK(&d->work, aoecmd_sleepwork);
 	spin_lock_init(&d->lock);
 	skb_queue_head_init(&d->skbpool);
-	init_timer(&d->timer);
-	d->timer.data = (ulong) d;
-	d->timer.function = dummy_timer;
+	timer_setup(&d->timer, dummy_timer, 0);
 	d->timer.expires = jiffies + HZ;
 	add_timer(&d->timer);
 	d->bufpool = NULL;	/* defer to aoeblk_gdalloc */
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index a54183935aa1..eae484acfbbc 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -903,10 +903,14 @@ static void unlock_fdc(void)
 }
 
 /* switches the motor off after a given timeout */
-static void motor_off_callback(unsigned long nr)
+static void motor_off_callback(struct timer_list *t)
 {
+	unsigned long nr = t - motor_off_timer;
 	unsigned char mask = ~(0x10 << UNIT(nr));
 
+	if (WARN_ON_ONCE(nr >= N_DRIVE))
+		return;
+
 	set_dor(FDC(nr), mask, 0);
 }
 
@@ -3047,7 +3051,7 @@ static void raw_cmd_done(int flag)
 	else
 		raw_cmd->flags &= ~FD_RAW_DISK_CHANGE;
 	if (raw_cmd->flags & FD_RAW_NO_MOTOR_AFTER)
-		motor_off_callback(current_drive);
+		motor_off_callback(&motor_off_timer[current_drive]);
 
 	if (raw_cmd->next &&
 	    (!(raw_cmd->flags & FD_RAW_FAILURE) ||
@@ -4542,7 +4546,7 @@ static int __init do_floppy_init(void)
 		disks[drive]->fops = &floppy_fops;
 		sprintf(disks[drive]->disk_name, "fd%d", drive);
 
-		setup_timer(&motor_off_timer[drive], motor_off_callback, drive);
+		timer_setup(&motor_off_timer[drive], motor_off_callback, 0);
 	}
 
 	err = register_blkdev(FLOPPY_MAJOR, "fd");
diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
index 9f931f8f6b4c..e620e423102b 100644
--- a/drivers/block/swim3.c
+++ b/drivers/block/swim3.c
@@ -239,10 +239,10 @@ static unsigned short write_postamble[] = {
 static void seek_track(struct floppy_state *fs, int n);
 static void init_dma(struct dbdma_cmd *cp, int cmd, void *buf, int count);
 static void act(struct floppy_state *fs);
-static void scan_timeout(unsigned long data);
-static void seek_timeout(unsigned long data);
-static void settle_timeout(unsigned long data);
-static void xfer_timeout(unsigned long data);
+static void scan_timeout(struct timer_list *t);
+static void seek_timeout(struct timer_list *t);
+static void settle_timeout(struct timer_list *t);
+static void xfer_timeout(struct timer_list *t);
 static irqreturn_t swim3_interrupt(int irq, void *dev_id);
 /*static void fd_dma_interrupt(int irq, void *dev_id);*/
 static int grab_drive(struct floppy_state *fs, enum swim_state state,
@@ -392,13 +392,12 @@ static void do_fd_request(struct request_queue * q)
 }
 
 static void set_timeout(struct floppy_state *fs, int nticks,
-			void (*proc)(unsigned long))
+			void (*proc)(struct timer_list *t))
 {
 	if (fs->timeout_pending)
 		del_timer(&fs->timeout);
 	fs->timeout.expires = jiffies + nticks;
-	fs->timeout.function = proc;
-	fs->timeout.data = (unsigned long) fs;
+	fs->timeout.function = (TIMER_FUNC_TYPE)proc;
 	add_timer(&fs->timeout);
 	fs->timeout_pending = 1;
 }
@@ -569,9 +568,9 @@ static void act(struct floppy_state *fs)
 	}
 }
 
-static void scan_timeout(unsigned long data)
+static void scan_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	unsigned long flags;
 
@@ -594,9 +593,9 @@ static void scan_timeout(unsigned long data)
 	spin_unlock_irqrestore(&swim3_lock, flags);
 }
 
-static void seek_timeout(unsigned long data)
+static void seek_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	unsigned long flags;
 
@@ -614,9 +613,9 @@ static void seek_timeout(unsigned long data)
 	spin_unlock_irqrestore(&swim3_lock, flags);
 }
 
-static void settle_timeout(unsigned long data)
+static void settle_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	unsigned long flags;
 
@@ -644,9 +643,9 @@ static void settle_timeout(unsigned long data)
 	spin_unlock_irqrestore(&swim3_lock, flags);
 }
 
-static void xfer_timeout(unsigned long data)
+static void xfer_timeout(struct timer_list *t)
 {
-	struct floppy_state *fs = (struct floppy_state *) data;
+	struct floppy_state *fs = from_timer(fs, t, timeout);
 	struct swim3 __iomem *sw = fs->swim3;
 	struct dbdma_regs __iomem *dr = fs->dma;
 	unsigned long flags;
@@ -1182,7 +1181,7 @@ static int swim3_add_device(struct macio_dev *mdev, int index)
 		return -EBUSY;
 	}
 
-	init_timer(&fs->timeout);
+	timer_setup(&fs->timeout, NULL, 0);
 
 	swim3_info("SWIM3 floppy controller %s\n",
 		   mdev->media_bay ? "in media bay" : "");
diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index 6ff0be8cbdc9..7c3ed7c9af77 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -1333,8 +1333,7 @@ static int ide_cdrom_prep_fs(struct request_queue *q, struct request *rq)
 	unsigned long blocks = blk_rq_sectors(rq) / (hard_sect >> 9);
 	struct scsi_request *req = scsi_req(rq);
 
-	scsi_req_init(req);
-	memset(req->cmd, 0, BLK_MAX_CDB);
+	q->initialize_rq_fn(rq);
 
 	if (rq_data_dir(rq) == READ)
 		req->cmd[0] = GPCMD_READ_10;
diff --git a/drivers/md/bcache/stats.c b/drivers/md/bcache/stats.c
index d0831d5bcc87..be119326297b 100644
--- a/drivers/md/bcache/stats.c
+++ b/drivers/md/bcache/stats.c
@@ -147,9 +147,9 @@ static void scale_stats(struct cache_stats *stats, unsigned long rescale_at)
 	}
 }
 
-static void scale_accounting(unsigned long data)
+static void scale_accounting(struct timer_list *t)
 {
-	struct cache_accounting *acc = (struct cache_accounting *) data;
+	struct cache_accounting *acc = from_timer(acc, t, timer);
 
 #define move_stat(name) do {						\
 	unsigned t = atomic_xchg(&acc->collector.name, 0);		\
@@ -234,9 +234,7 @@ void bch_cache_accounting_init(struct cache_accounting *acc,
 	kobject_init(&acc->day.kobj, &bch_stats_ktype);
 
 	closure_init(&acc->cl, parent);
-	init_timer(&acc->timer);
+	timer_setup(&acc->timer, scale_accounting, 0);
 	acc->timer.expires = jiffies + accounting_delay;
-	acc->timer.data = (unsigned long) acc;
-	acc->timer.function = scale_accounting;
 	add_timer(&acc->timer);
 }
diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
index 2209a9700acd..288386bfbfb5 100644
--- a/drivers/md/dm-delay.c
+++ b/drivers/md/dm-delay.c
@@ -44,9 +44,9 @@ struct dm_delay_info {
 
 static DEFINE_MUTEX(delayed_bios_lock);
 
-static void handle_delayed_timer(unsigned long data)
+static void handle_delayed_timer(struct timer_list *t)
 {
-	struct delay_c *dc = (struct delay_c *)data;
+	struct delay_c *dc = from_timer(dc, t, delay_timer);
 
 	queue_work(dc->kdelayd_wq, &dc->flush_expired_bios);
 }
@@ -195,7 +195,7 @@ out:
 		goto bad_queue;
 	}
 
-	setup_timer(&dc->delay_timer, handle_delayed_timer, (unsigned long)dc);
+	timer_setup(&dc->delay_timer, handle_delayed_timer, 0);
 
 	INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
 	INIT_LIST_HEAD(&dc->delayed_bios);
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 61180783ef42..05c7bfd0c9d9 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -1094,9 +1094,9 @@ static void sleep_on_endio_wait(struct dm_integrity_c *ic)
 		__remove_wait_queue(&ic->endio_wait, &wait);
 }
 
-static void autocommit_fn(unsigned long data)
+static void autocommit_fn(struct timer_list *t)
 {
-	struct dm_integrity_c *ic = (struct dm_integrity_c *)data;
+	struct dm_integrity_c *ic = from_timer(ic, t, autocommit_timer);
 
 	if (likely(!dm_integrity_failed(ic)))
 		queue_work(ic->commit_wq, &ic->commit_work);
@@ -2942,7 +2942,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
 
 	ic->autocommit_jiffies = msecs_to_jiffies(sync_msec);
 	ic->autocommit_msec = sync_msec;
-	setup_timer(&ic->autocommit_timer, autocommit_fn, (unsigned long)ic);
+	timer_setup(&ic->autocommit_timer, autocommit_fn, 0);
 
 	ic->io = dm_io_client_create();
 	if (IS_ERR(ic->io)) {
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index c0b82136b2d1..580c49cc8079 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -94,9 +94,9 @@ static void wakeup_mirrord(void *context)
 	queue_work(ms->kmirrord_wq, &ms->kmirrord_work);
 }
 
-static void delayed_wake_fn(unsigned long data)
+static void delayed_wake_fn(struct timer_list *t)
 {
-	struct mirror_set *ms = (struct mirror_set *) data;
+	struct mirror_set *ms = from_timer(ms, t, timer);
 
 	clear_bit(0, &ms->timer_pending);
 	wakeup_mirrord(ms);
@@ -108,8 +108,6 @@ static void delayed_wake(struct mirror_set *ms)
 		return;
 
 	ms->timer.expires = jiffies + HZ / 5;
-	ms->timer.data = (unsigned long) ms;
-	ms->timer.function = delayed_wake_fn;
 	add_timer(&ms->timer);
 }
 
@@ -1133,7 +1131,7 @@ static int mirror_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		goto err_free_context;
 	}
 	INIT_WORK(&ms->kmirrord_work, do_mirror);
-	init_timer(&ms->timer);
+	timer_setup(&ms->timer, delayed_wake_fn, 0);
 	ms->timer_pending = 0;
 	INIT_WORK(&ms->trigger_event, trigger_event);
 
diff --git a/drivers/md/md.c b/drivers/md/md.c
index c3dc134b9fb5..41c050b59ec4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -541,7 +541,7 @@ static void mddev_put(struct mddev *mddev)
 		bioset_free(sync_bs);
 }
 
-static void md_safemode_timeout(unsigned long data);
+static void md_safemode_timeout(struct timer_list *t);
 
 void mddev_init(struct mddev *mddev)
 {
@@ -550,8 +550,7 @@ void mddev_init(struct mddev *mddev)
 	mutex_init(&mddev->bitmap_info.mutex);
 	INIT_LIST_HEAD(&mddev->disks);
 	INIT_LIST_HEAD(&mddev->all_mddevs);
-	setup_timer(&mddev->safemode_timer, md_safemode_timeout,
-		    (unsigned long) mddev);
+	timer_setup(&mddev->safemode_timer, md_safemode_timeout, 0);
 	atomic_set(&mddev->active, 1);
 	atomic_set(&mddev->openers, 0);
 	atomic_set(&mddev->active_io, 0);
@@ -5404,9 +5403,9 @@ static int add_named_array(const char *val, const struct kernel_param *kp)
 	return -EINVAL;
 }
 
-static void md_safemode_timeout(unsigned long data)
+static void md_safemode_timeout(struct timer_list *t)
 {
-	struct mddev *mddev = (struct mddev *) data;
+	struct mddev *mddev = from_timer(mddev, t, safemode_timer);
 
 	mddev->safemode = 1;
 	if (mddev->external)
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 739b8feadc7d..664d3013f68f 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -2144,6 +2144,7 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 			struct nvmet_fc_fcp_iod *fod)
 {
 	struct nvme_fc_cmd_iu *cmdiu = &fod->cmdiubuf;
+	u32 xfrlen = be32_to_cpu(cmdiu->data_len);
 	int ret;
 
 	/*
@@ -2157,7 +2158,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 
 	fod->fcpreq->done = nvmet_fc_xmt_fcp_op_done;
 
-	fod->req.transfer_len = be32_to_cpu(cmdiu->data_len);
 	if (cmdiu->flags & FCNVME_CMD_FLAGS_WRITE) {
 		fod->io_dir = NVMET_FCP_WRITE;
 		if (!nvme_is_write(&cmdiu->sqe))
@@ -2168,7 +2168,7 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 			goto transport_error;
 	} else {
 		fod->io_dir = NVMET_FCP_NODATA;
-		if (fod->req.transfer_len)
+		if (xfrlen)
 			goto transport_error;
 	}
 
@@ -2192,6 +2192,8 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 		return;
 	}
 
+	fod->req.transfer_len = xfrlen;
+
 	/* keep a running counter of tail position */
 	atomic_inc(&fod->queue->sqtail);
 