author     Krzysztof Wojcik <krzysztof.wojcik@intel.com>  2011-02-04 08:18:26 -0500
committer  NeilBrown <neilb@suse.de>  2011-02-07 19:49:02 -0500
commit     02214dc5461c36da26a34014cab4e1bb484edba2 (patch)
tree       32137bdf12144af5ff6c946838e1bfbf3b2bc0f2 /drivers/md/raid10.c
parent     e91ece5590b3c728624ab57043fc7a05069c604a (diff)
FIX: md: process hangs at wait_barrier after 0->10 takeover
The following symptoms were observed:

1. After a raid0->raid10 takeover operation we have an array with 2
   missing disks. When we add a disk for rebuild, the recovery process
   starts as expected but it does not finish; it stops at about 90% and
   the md126_resync process hangs in "D" state.

2. Similar behaviour occurs when we have a mounted raid0 array and
   execute a takeover to raid10. When we then try to unmount the array,
   the umount process hangs in "D" state.

In both scenarios the processes hang at the same function, wait_barrier
in raid10.c. The process waits in the "wait_event_lock_irq" macro until
the "!conf->barrier" condition becomes true, which in these scenarios
never happens.

The reason is that at the end of level_store, after calling pers->run,
we call mddev_resume. With RAID10 this calls pers->quiesce(mddev, 0),
which calls lower_barrier. However, raise_barrier had not been called on
that 'conf' yet, so conf->barrier becomes negative, which is bad.

This patch sets conf->barrier=1 after the takeover operation. It
prevents the barrier from becoming negative after lower_barrier() is
called.

Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
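For context, a simplified sketch of the barrier accounting involved
(paraphrased from the raid10.c of this era; the real functions hold
conf->resync_lock and sleep via the wait_event_lock_irq() macro, both
omitted here, so treat this as illustrative rather than verbatim source):

	/* Illustrative sketch of the raid10 barrier accounting. */
	static void raise_barrier(conf_t *conf)
	{
		conf->barrier++;              /* resync/quiesce entry: block new IO */
		/* then wait until conf->nr_pending drains to 0 */
	}

	static void lower_barrier(conf_t *conf)
	{
		conf->barrier--;              /* resync/quiesce exit */
		wake_up(&conf->wait_barrier); /* release wait_barrier() sleepers */
	}

	static void wait_barrier(conf_t *conf)
	{
		/* normal IO path: sleep on conf->wait_barrier until !conf->barrier,
		 * then count this request as pending */
		conf->nr_pending++;
	}

After a 0->10 takeover conf->barrier starts at 0, so the lower_barrier()
reached via mddev_resume() -> pers->quiesce(mddev, 0) has no matching
raise_barrier() and drives the count to -1; wait_barrier() then never
sees "!conf->barrier" and every IO submitter sleeps forever.
Initialising conf->barrier to 1 in raid10_takeover_raid0(), as the diff
below does, lets that unmatched lower_barrier() bring the count back to 0.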
Diffstat (limited to 'drivers/md/raid10.c')
-rw-r--r--  drivers/md/raid10.c | 6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 69b659544390..3b607b28741b 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2463,11 +2463,13 @@ static void *raid10_takeover_raid0(mddev_t *mddev)
 	mddev->recovery_cp = MaxSector;
 
 	conf = setup_conf(mddev);
-	if (!IS_ERR(conf))
+	if (!IS_ERR(conf)) {
 		list_for_each_entry(rdev, &mddev->disks, same_set)
 			if (rdev->raid_disk >= 0)
 				rdev->new_raid_disk = rdev->raid_disk * 2;
-
+		conf->barrier = 1;
+	}
+
 	return conf;
 }
 