zpool import fails to properly show available pools with the same name
Describe how to reproduce the problem
# truncate -s 10G zdev1
# truncate -s 10G zdev2
# truncate -s 10G zdev3
# zpool create test mirror /raid/temp/zdev1 /raid/temp/zdev2 /raid/temp/zdev3
# zpool offline test /raid/temp/zdev2
# zpool detach test /raid/temp/zdev2
# zdb -l zdev1 |grep pool_guid
pool_guid: 9117558645899222510
# zpool reguid test
# zdb -l zdev1 |grep pool_guid
pool_guid: 16189010088442266690
# zpool export test
# zpool import -d /raid/temp
   pool: test
     id: 16189010088442266690
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        test                  ONLINE
          mirror-0            ONLINE
            /raid/temp/zdev1  ONLINE
            /raid/temp/zdev3  ONLINE

   pool: test
     id: 16189010088442266690
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        test                  ONLINE
          mirror-0            ONLINE
            /raid/temp/zdev1  ONLINE
            /raid/temp/zdev3  ONLINE
# zdb -l zdev2 |grep pool_guid
pool_guid: 9117558645899222510
As you can see, the test pool is listed twice with identical information, even though the copy from zdev2 should have a different guid (9117558645899222510) and a different set of member devices.
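For the record, the stale copy on zdev2 can be imported directly by its on-disk numeric identifier and given a new name on import; a minimal sketch of the step that later produces the test2 pool (assuming the detached label is importable by its original guid, and with test2 as an arbitrary name choice):

# zpool import -d /raid/temp 9117558645899222510 test2
# zpool export test2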
# mv zdev1 zdev3 ..
# zpool import -d /raid/temp
   pool: test
     id: 9117558645899222510
  state: DEGRADED
 status: One or more devices are missing from the system.
 action: The pool can be imported despite missing or damaged devices.  The
         fault tolerance of the pool may be compromised if imported.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
 config:

        test                  DEGRADED
          mirror-0            DEGRADED
            /raid/temp/zdev1  UNAVAIL  cannot open
            /raid/temp/zdev2  ONLINE
            /raid/temp/zdev3  UNAVAIL  cannot open
# mv ../zdev[13] .
# zpool import -d /raid/temp
   pool: test2
     id: 16189010088442266690
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        test2                 ONLINE
          mirror-0            ONLINE
            /raid/temp/zdev1  ONLINE
            /raid/temp/zdev3  ONLINE

   pool: test
     id: 16189010088442266690
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        test                  ONLINE
          mirror-0            ONLINE
            /raid/temp/zdev1  ONLINE
            /raid/temp/zdev3  ONLINE
Note: test2 appears in the second listing because I had also imported the copy from zdev2 under that name and exported it again (roughly the sequence sketched above) before generating the rest of this output. This confirms that the first entry actually comes from zdev2, yet it is still reported with the wrong guid and the wrong member devices.
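When two sets of labels claim the same pool name like this, reading each device's label directly shows which pool_guid each one actually carries; a minimal check run from /raid/temp (the loop is illustrative, not part of the original report):

# for dev in zdev1 zdev2 zdev3; do echo "== $dev =="; zdb -l $dev | grep pool_guid; done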
Include any warning/errors/backtraces from the system logs
none
I believe I have seen this as well, after moving NVMe devices from same-named test zpools on three different systems into a single system that already had an exported test zpool. While I didn't have time to investigate fully, I can report that a simple zpool import listed only two of what I believe are four possible test pools on this system (a disambiguation sketch follows the transcript below). Note: this is a legacy Scientific Linux 7 system running ZFS 2.1.15.
[root@vsmarchive2 ~]# zpool import
   pool: test
     id: 4059267647759945907
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        test          ONLINE
          mirror-0    ONLINE
            pmem0     ONLINE
            pmem1     ONLINE

   pool: test
     id: 3211568940523427607
  state: UNAVAIL
 status: The pool uses the following feature(s) not supported on this system:
                 com.klarasystems:vdev_zaps_v2
 action: The pool cannot be imported. Access the pool on a system that supports
         the required feature(s), or recreate the pool from backup.
 config:

        test                      UNAVAIL  unsupported feature(s)
          mirror-0                ONLINE
            zfs-facc40dddca0c58d  ONLINE
            zfs-562ba4a29f1e50fc  ONLINE
            zfs-9bf1917b8171caae  ONLINE
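Since every entry here reports the same pool name, importing by the numeric identifier, optionally renaming on import, is the unambiguous way to select one, as the action lines themselves note; a sketch using the first id above (the new name test_a is a hypothetical choice):

# zpool import 4059267647759945907 test_a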