From nobody@FreeBSD.org  Tue Apr  6 09:36:41 2010
Return-Path: <nobody@FreeBSD.org>
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 6CB42106564A
	for <freebsd-gnats-submit@FreeBSD.org>; Tue,  6 Apr 2010 09:36:41 +0000 (UTC)
	(envelope-from nobody@FreeBSD.org)
Received: from www.freebsd.org (www.freebsd.org [IPv6:2001:4f8:fff6::21])
	by mx1.freebsd.org (Postfix) with ESMTP id 424A58FC27
	for <freebsd-gnats-submit@FreeBSD.org>; Tue,  6 Apr 2010 09:36:41 +0000 (UTC)
Received: from www.freebsd.org (localhost [127.0.0.1])
	by www.freebsd.org (8.14.3/8.14.3) with ESMTP id o369aeKc080365
	for <freebsd-gnats-submit@FreeBSD.org>; Tue, 6 Apr 2010 09:36:40 GMT
	(envelope-from nobody@www.freebsd.org)
Received: (from nobody@localhost)
	by www.freebsd.org (8.14.3/8.14.3/Submit) id o369aeit080364;
	Tue, 6 Apr 2010 09:36:40 GMT
	(envelope-from nobody)
Message-Id: <201004060936.o369aeit080364@www.freebsd.org>
Date: Tue, 6 Apr 2010 09:36:40 GMT
From: vermaden <vermaden@interia.pl>
To: freebsd-gnats-submit@FreeBSD.org
Subject: ZFS/zpool status shows deleted/not present pools after scrub
X-Send-Pr-Version: www-3.1
X-GNATS-Notify:

>Number:         145423
>Category:       kern
>Synopsis:       [zfs] ZFS/zpool status shows deleted/not present pools after scrub
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    pjd
>State:          closed
>Quarter:        
>Keywords:       
>Date-Required:  
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Tue Apr 06 09:40:06 UTC 2010
>Closed-Date:    Fri May 14 06:09:53 UTC 2010
>Last-Modified:  Fri May 14 18:55:42 UTC 2010
>Originator:     vermaden
>Release:        8.0-RELEASE-p2
>Organization:
>Environment:
Stock 8.0-RELEASE-p2 KERNEL/BASE SYSTEM from freebsd-update(8).

FreeBSD savio 8.0-RELEASE-p2 FreeBSD 8.0-RELEASE-p2 #0: Tue Jan  5 21:11:58 UTC 2010     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
>Description:
Hi,

I am seeing strange behaviour with ZFS zpool status/scrub, I think.

Almost every time I launch zpool scrub, an old pool that has not
existed on this system for quite a long time (about 2 months)
reappears in the zpool status output. What is more, the 'oldfs'
pool was never on these disks; it was created on other disks that
have since been removed from this system. Of course zpool destroy
helps, but only until the next zpool scrub.

# zpool status
  pool: basefs
 state: ONLINE
 scrub: scrub in progress for 0h0m, 0.00% done, 1572h56m to go
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

# zpool scrub basefs
# zpool status
  pool: basefs
 state: ONLINE
 scrub: scrub in progress for 0h0m, 0.00% done, 1572h56m to go
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors

  pool: oldfs
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        oldfs       UNAVAIL      0     0     0  insufficient replicas
          ada3s3    UNAVAIL      0     0     0  cannot open

# zpool destroy oldfs
# zpool status            
  pool: basefs
 state: ONLINE
 scrub: scrub in progress for 0h6m, 2.61% done, 4h9m to go
config:

        NAME        STATE     READ WRITE CKSUM
        basefs      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0s3  ONLINE       0     0     0
            ada1s3  ONLINE       0     0     0
            ada2s3  ONLINE       0     0     0

errors: No known data errors

Regards,
vermaden

>How-To-Repeat:
# zpool status
# zpool scrub ${EXISTING_POOL}
# zpool status
>Fix:


>Release-Note:
>Audit-Trail:
Responsible-Changed-From-To: freebsd-bugs->freebsd-fs 
Responsible-Changed-By: linimon 
Responsible-Changed-When: Tue Apr 6 14:59:02 UTC 2010 
Responsible-Changed-Why:  
Over to maintainer(s). 

http://www.freebsd.org/cgi/query-pr.cgi?pr=145423 
State-Changed-From-To: open->feedback 
State-Changed-By: pjd 
State-Changed-When: Thu, 13 May 2010 22:31:19 UTC 
State-Changed-Why:  
Could you provide the output of: 

# zdb -l /dev/ada3s3 


Responsible-Changed-From-To: freebsd-fs->pjd 
Responsible-Changed-By: pjd 
Responsible-Changed-When: Thu, 13 May 2010 22:31:19 UTC 
Responsible-Changed-Why:  
I'll take this one. 

Date: 14 May 2010 06:38:45 +0200
From: vermaden <vermaden@interia.pl>
Sender: vermaden@interia.pl
To: pjd@FreeBSD.org
Subject: Re: kern/145423: [zfs] ZFS/zpool status shows deleted/not present pools after scrub

 > Could you provide the output of:
 >
 > 	# zdb -l /dev/ada3s3
 >
 
 Of course. I have disks ada0, ada1 and ada2, so I assume
 you wanted the output of # zdb -l /dev/ada2s3; here it comes:
 
 --------------------------------------------
 LABEL 0
 --------------------------------------------
     version=13
     name='basefs'
     state=0
     txg=1033488
     pool_guid=12448141572999625538
     hostid=3527456500
     hostname='savio'
     top_guid=17006193158460166059
     guid=5300817946821238285
     vdev_tree
         type='raidz'
         id=0
         guid=17006193158460166059
         nparity=1
         metaslab_array=23
         metaslab_shift=31
         ashift=9
         asize=2995769573376
         is_log=0
         children[0]
                 type='disk'
                 id=0
                 guid=10040725571806476419
                 path='/dev/ada0s3'
                 whole_disk=0
                 DTL=48
         children[1]
                 type='disk'
                 id=1
                 guid=7986326854703496087
                 path='/dev/ada1s3'
                 whole_disk=0
                 DTL=47
         children[2]
                 type='disk'
                 id=2
                 guid=5300817946821238285
                 path='/dev/ada2s3'
                 whole_disk=0
                 DTL=46
 --------------------------------------------
 LABEL 1
 --------------------------------------------
     version=13
     name='basefs'
     state=0
     txg=1033488
     pool_guid=12448141572999625538
     hostid=3527456500
     hostname='savio'
     top_guid=17006193158460166059
     guid=5300817946821238285
     vdev_tree
         type='raidz'
         id=0
         guid=17006193158460166059
         nparity=1
         metaslab_array=23
         metaslab_shift=31
         ashift=9
         asize=2995769573376
         is_log=0
         children[0]
                 type='disk'
                 id=0
                 guid=10040725571806476419
                 path='/dev/ada0s3'
                 whole_disk=0
                 DTL=48
         children[1]
                 type='disk'
                 id=1
                 guid=7986326854703496087
                 path='/dev/ada1s3'
                 whole_disk=0
                 DTL=47
         children[2]
                 type='disk'
                 id=2
                 guid=5300817946821238285
                 path='/dev/ada2s3'
                 whole_disk=0
                 DTL=46
 --------------------------------------------
 LABEL 2
 --------------------------------------------
     version=13
     name='basefs'
     state=0
     txg=1033488
     pool_guid=12448141572999625538
     hostid=3527456500
     hostname='savio'
     top_guid=17006193158460166059
     guid=5300817946821238285
     vdev_tree
         type='raidz'
         id=0
         guid=17006193158460166059
         nparity=1
         metaslab_array=23
         metaslab_shift=31
         ashift=9
         asize=2995769573376
         is_log=0
         children[0]
                 type='disk'
                 id=0
                 guid=10040725571806476419
                 path='/dev/ada0s3'
                 whole_disk=0
                 DTL=48
         children[1]
                 type='disk'
                 id=1
                 guid=7986326854703496087
                 path='/dev/ada1s3'
                 whole_disk=0
                 DTL=47
         children[2]
                 type='disk'
                 id=2
                 guid=5300817946821238285
                 path='/dev/ada2s3'
                 whole_disk=0
                 DTL=46
 --------------------------------------------
 LABEL 3
 --------------------------------------------
     version=13
     name='basefs'
     state=0
     txg=1033488
     pool_guid=12448141572999625538
     hostid=3527456500
     hostname='savio'
     top_guid=17006193158460166059
     guid=5300817946821238285
     vdev_tree
         type='raidz'
         id=0
         guid=17006193158460166059
         nparity=1
         metaslab_array=23
         metaslab_shift=31
         ashift=9
         asize=2995769573376
         is_log=0
         children[0]
                 type='disk'
                 id=0
                 guid=10040725571806476419
                 path='/dev/ada0s3'
                 whole_disk=0
                 DTL=48
         children[1]
                 type='disk'
                 id=1
                 guid=7986326854703496087
                 path='/dev/ada1s3'
                 whole_disk=0
                 DTL=47
         children[2]
                 type='disk'
                 id=2
                 guid=5300817946821238285
                 path='/dev/ada2s3'
                 whole_disk=0
                 DTL=46
 
 ----
 
 Also, after the last # zpool scrub basefs, the 'oldfs' pool (or any
 other 'old' pool) did not show up in the # zpool status output.
 
 Regards,
 vermaden
 

http://www.freebsd.org/cgi/query-pr.cgi?pr=145423 

State-Changed-From-To: feedback->closed 
State-Changed-By: pjd 
State-Changed-When: Fri, 14 May 2010 06:09:02 UTC 
State-Changed-Why:  
In your report oldfs was reported with ada3s3: 

NAME        STATE     READ WRITE CKSUM 
oldfs       UNAVAIL      0     0     0  insufficient replicas 
  ada3s3    UNAVAIL      0     0     0  cannot open 

If there was no ada3 in your system that could contain incomplete ZFS 
metadata, the only other possibility is that some stale information about 
oldfs remained in your /boot/zfs/zpool.cache file. Running 'zpool export oldfs' 
instead of 'zpool destroy oldfs' should be enough to fix it in the future. 
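
As a sketch of how one might verify this diagnosis (assuming a FreeBSD
system with the ZFS tools; zdb -C and zpool export are standard commands,
but this sequence was not part of the original exchange):

```shell
# Dump the pool configurations cached in /boot/zfs/zpool.cache;
# a leftover 'oldfs' entry here would explain why the pool reappears.
zdb -C

# Remove the stale pool from the cache without touching any on-disk
# labels (zpool destroy would also mark the labels as destroyed).
zpool export oldfs
```

Unlike destroy, export only drops the pool from the cache file, so the
stale entry should not come back after the next scrub.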

http://www.freebsd.org/cgi/query-pr.cgi?pr=145423 
>Unformatted:
