Describe the feature you would like to see added to OpenZFS
It would be nice to be able to re-attach a split mirror member to the same pool it was originally split from.
Currently, trying to re-attach a split vdev results in an error message saying the device is part of the (new) pool, and requires `attach -f` to re-attach it. Using `attach -f` triggers a full resilver of the device, which seems wasteful and unnecessary.
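A minimal sketch of the current behavior, assuming a two-way mirror pool named `tank` with hypothetical devices `sda` and `sdb`:

```sh
# Split the mirror: sdb becomes a new single-disk pool "tankbackup"
zpool split tank tankbackup sdb

# Later, trying to rejoin sdb to the original mirror is refused,
# because the label on sdb now says it belongs to "tankbackup"
zpool attach tank sda sdb

# Forcing the attach works, but starts a full resilver of sdb
zpool attach -f tank sda sdb
```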
How will this feature improve OpenZFS?
Dramatically reduce resilvering time in most cases, lessening the window during which other failures could cause catastrophic data loss, and avoid wasting writes on write-limited devices such as SSDs when all or most of the data already exists on the device.
Additional context
It seems likely that enough data is already stored on the respective devices to do this without too much work, by simply replaying the events that have occurred since the split. However, if it isn't as easy as it might seem, here is an alternative method:
Since ZFS already knows enough to apply incremental sends to replicated pools (and in this case, perhaps even when the split was made from a replicated copy of the source pool rather than the source pool itself), it seems ZFS should be able to loop over the datasets in the source pool, doing the equivalent of a send/recv to bring the split drive back into sync, and delete any datasets on the split pool that do not exist on the source pool. A rough sketch of such a loop is shown below.
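This is roughly what can already be scripted by hand today. A hedged sketch, assuming the split pool was imported as `tankbackup`, that both pools still share a common snapshot named `@split`, and that a fresh snapshot `@sync` is taken on the source pool `tank` (all names hypothetical):

```sh
# Take a new snapshot on the source pool to sync up to
zfs snapshot -r tank@sync

# For every dataset on the source pool, send the changes since the
# common snapshot into the corresponding dataset on the split pool
zfs list -H -o name -r tank | while read ds; do
    target="tankbackup${ds#tank}"
    zfs send -I "${ds}@split" "${ds}@sync" | zfs receive -F "$target"
done

# Datasets that exist only on the split pool would still have to be
# destroyed separately (not shown here)
```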
My own use case for this is to be able to split off a vdev for an offsite backup or for testing, but still be able to quickly rejoin it to the original pool for updates or additional redundancy, without incurring unnecessary wear, extended recovery time, or additional system load.
Pool splitting effectively makes it two different pools with different names and GUIDs. I wonder if it is even possible to track their common ancestry after that. And even if so, both pools could have changed individually since being split, so no information on the disks can be trusted. We can't do an incremental resilver of only the changes, like we do when a disconnected disk is later reconnected. And as part of a scrub we don't really compare the data, we only verify checksums, and the default checksums we use are not cryptographically strong, so it is possible to miss that some block has (maliciously?) changed. I don't think this rare case is worth the special handling and possible problems.
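To illustrate the first point: after a split, the two pools report distinct GUIDs, and nothing on disk records that they were once halves of the same mirror. A quick check (pool names hypothetical):

```sh
# Each pool has its own GUID after the split; the two values differ,
# and ZFS keeps no record tying them together.
zpool get -H -o value guid tank
zpool get -H -o value guid tankbackup
```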