| author    | NeilBrown <neilb@suse.de>                        | 2008-05-23 16:04:39 -0400 |
| committer | Linus Torvalds <torvalds@linux-foundation.org>   | 2008-05-24 12:56:10 -0400 |
| commit    | dfc7064500061677720fa26352963c772d3ebe6b (patch) | |
| tree      | a8ca495bccf98837c6762ffba54a8009c9772259 /drivers/md/raid1.c | |
| parent    | 90b08710e41a07d4ff0fb8940dcce3a552991a56 (diff)  | |
md: restart recovery cleanly after device failure.
When we get any IO error during a recovery (rebuilding a spare), we abort
the recovery and restart it.
For RAID6 (and multi-drive RAID1) it may not be best to restart at the
beginning: when multiple failures can be tolerated, the recovery may be
able to continue and re-doing all that has already been done doesn't make
sense.
We already have the infrastructure to record where a recovery is up to
and restart from there, but it is not being used properly.
This is because:
- We sometimes abort with MD_RECOVERY_ERR rather than just MD_RECOVERY_INTR,
which causes the recovery not to be checkpointed.
- We remove spares and then re-add them, which loses important state
information.
The distinction between MD_RECOVERY_ERR and MD_RECOVERY_INTR really isn't
needed. If there is an error, the relevant drive will be marked as
Faulty, and that is enough to ensure correct handling of the error. So we
first remove MD_RECOVERY_ERR, changing some of the uses of it to
MD_RECOVERY_INTR.
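To illustrate what checkpointing on MD_RECOVERY_INTR buys us, here is a minimal sketch (not the actual md.c code; mddev->recovery_cp, rdev->recovery_offset and mddev->curr_resync are the existing progress fields, while the helper function itself is invented for illustration):

/* Illustrative sketch only, not the real md.c code: once an abort is
 * signalled purely with MD_RECOVERY_INTR (and never with the now-removed
 * MD_RECOVERY_ERR), the sync thread can simply record how far it got,
 * and a later run can resume from that point.
 */
static void checkpoint_on_interrupt(mddev_t *mddev, mdk_rdev_t *rdev)
{
	if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery))
		return;		/* ran to completion, nothing to checkpoint */

	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
		/* resync: one checkpoint for the whole array */
		mddev->recovery_cp = mddev->curr_resync;
	else if (!test_bit(Faulty, &rdev->flags) &&
		 !test_bit(In_sync, &rdev->flags))
		/* recovery: remember how far this rebuilding spare got */
		rdev->recovery_offset = mddev->curr_resync;
}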
Next we make the attempt to remove a non-faulty device from an array fail
(unless recovery is impossible because the array is too degraded). When
remove_and_add_spares then tries to remove the devices on which recovery can
continue, the removal fails, the devices stay in place, and recovery
continues on them as desired.
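For context, the interaction with the md core's spare handling is roughly as follows (a simplified sketch of remove_and_add_spares() in md.c, not the verbatim code; hot_remove_disk is the personality method that raid1_remove_disk() implements):

/* Simplified sketch, not verbatim md.c: remove_and_add_spares() tries to
 * hot-remove every device that is not fully in sync.  With this change
 * raid1_remove_disk() refuses with -EBUSY for a non-faulty,
 * partially-recovered device, so the device keeps its slot and its
 * checkpoint, and recovery can continue where it left off.
 */
mdk_rdev_t *rdev;
struct list_head *rtmp;

rdev_for_each(rdev, rtmp, mddev)
	if (rdev->raid_disk >= 0 &&
	    (test_bit(Faulty, &rdev->flags) ||
	     !test_bit(In_sync, &rdev->flags)) &&
	    atomic_read(&rdev->nr_pending) == 0) {
		if (mddev->pers->hot_remove_disk(mddev, rdev->raid_disk) == 0)
			/* really removed: forget its slot */
			rdev->raid_disk = -1;
		/* -EBUSY: leave the recovering device in place */
	}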
Issue: If we are halfway through rebuilding a spare and another drive
fails, and a new spare is immediately available, do we want to:
1/ complete the current rebuild, then go back and rebuild the new spare, or
2/ restart the rebuild from the start and rebuild both devices in
parallel.
Both options can be argued for. The code currently takes option 2 because:
a/ it requires the least code change, and
b/ it results in a minimally-degraded array in minimal time.
Cc: "Eivind Sarto" <ivan@kasenna.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'drivers/md/raid1.c')
-rw-r--r-- | drivers/md/raid1.c | 10 |
1 file changed, 9 insertions, 1 deletion
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index d0f4021bbc2e..c610b947218a 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1027,7 +1027,7 @@ static void error(mddev_t *mddev, mdk_rdev_t *rdev)
 		/*
 		 * if recovery is running, make sure it aborts.
 		 */
-		set_bit(MD_RECOVERY_ERR, &mddev->recovery);
+		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
 	} else
 		set_bit(Faulty, &rdev->flags);
 	set_bit(MD_CHANGE_DEVS, &mddev->flags);
@@ -1148,6 +1148,14 @@ static int raid1_remove_disk(mddev_t *mddev, int number)
 			err = -EBUSY;
 			goto abort;
 		}
+		/* Only remove non-faulty devices if recovery
+		 * is not possible.
+		 */
+		if (!test_bit(Faulty, &rdev->flags) &&
+		    mddev->degraded < conf->raid_disks) {
+			err = -EBUSY;
+			goto abort;
+		}
 		p->rdev = NULL;
 		synchronize_rcu();
 		if (atomic_read(&rdev->nr_pending)) {