From root@datastore01.seemoo.tu-darmstadt.de  Thu Jan 12 14:54:15 2012
Return-Path: <root@datastore01.seemoo.tu-darmstadt.de>
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 5D455106564A
	for <FreeBSD-gnats-submit@freebsd.org>; Thu, 12 Jan 2012 14:54:15 +0000 (UTC)
	(envelope-from root@datastore01.seemoo.tu-darmstadt.de)
Received: from lnx141.hrz.tu-darmstadt.de (lnx141.hrz.tu-darmstadt.de [130.83.156.236])
	by mx1.freebsd.org (Postfix) with ESMTP id 3D8038FC0A
	for <FreeBSD-gnats-submit@freebsd.org>; Thu, 12 Jan 2012 14:54:13 +0000 (UTC)
Received: from lnx503.hrz.tu-darmstadt.de (lnx503.hrz.tu-darmstadt.de [130.83.156.232])
	by lnx141.hrz.tu-darmstadt.de (8.14.4/8.13.8) with ESMTP id q0CEMsxA022172
	for <FreeBSD-gnats-submit@freebsd.org>; Thu, 12 Jan 2012 15:23:00 +0100
	(envelope-from root@datastore01.seemoo.tu-darmstadt.de)
Received: from datastore01.seemoo.tu-darmstadt.de (datastore01.seemoo.tu-darmstadt.de [130.83.33.77])
	by lnx503.hrz.tu-darmstadt.de (8.14.4/8.14.4/HRZ/PMX) with ESMTP id q0CEJWI5008025
	for <FreeBSD-gnats-submit@freebsd.org>; Thu, 12 Jan 2012 15:19:33 +0100
	(envelope-from root@datastore01.seemoo.tu-darmstadt.de)
Received: from datastore01.seemoo.tu-darmstadt.de (localhost [127.0.0.1])
	by datastore01.seemoo.tu-darmstadt.de (8.14.4/8.14.4) with ESMTP id q0C9km0e079756;
	Thu, 12 Jan 2012 10:46:48 +0100 (CET)
	(envelope-from root@datastore01.seemoo.tu-darmstadt.de)
Received: (from root@localhost)
	by datastore01.seemoo.tu-darmstadt.de (8.14.4/8.14.4/Submit) id q0C9kmTo079755;
	Thu, 12 Jan 2012 10:46:48 +0100 (CET)
	(envelope-from root)
Message-Id: <201201120946.q0C9kmTo079755@datastore01.seemoo.tu-darmstadt.de>
Date: Thu, 12 Jan 2012 10:46:48 +0100 (CET)
From: marc.werner@seemoo.tu-darmstadt.de
Reply-To: marc.werner@seemoo.tu-darmstadt.de
To: FreeBSD-gnats-submit@freebsd.org
Cc: marc.werner@seemoo.tu-darmstadt.de
Subject: sysutils/zfs-periodic: Test if scrubbing is in progress fails when using volume descriptors instead of pools
X-Send-Pr-Version: 3.113
X-GNATS-Notify:

>Number:         164055
>Category:       ports
>Synopsis:       sysutils/zfs-periodic: Test if scrubbing is in progress fails when using volume descriptors instead of pools
>Confidential:   no
>Severity:       non-critical
>Priority:       medium
>Responsible:    freebsd-ports-bugs
>State:          closed
>Quarter:        
>Keywords:       
>Date-Required:  
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Thu Jan 12 15:00:26 UTC 2012
>Closed-Date:    Thu Feb 28 17:12:43 UTC 2013
>Last-Modified:  Thu Feb 28 17:12:43 UTC 2013
>Originator:     Marc Werner
>Release:        FreeBSD 8.2-RELEASE-p3 amd64
>Organization:
Technische Universität Darmstadt
>Environment:
System: FreeBSD datastore01.seemoo.tu-darmstadt.de 8.2-RELEASE-p3 FreeBSD 8.2-RELEASE-p3 #0: Tue Sep 27 18:45:57 UTC 2011 root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64

One ZFS pool with several volumes, each of which needs to be snapshotted on its own schedule: some only on a daily basis, others hourly.

>Description:
	
We have one zpool containing several volumes created with 'zfs create', e.g. storage_pool01/home, storage_pool01/www, etc. Some of these volumes need to be snapshotted every hour, others only once a day.
Using sysutils/zfs-periodic we set up a configuration in /etc/periodic.conf that uses the volume descriptors (with the slash) in the *_zfs_snapshot_pools= option.

With this configuration the check whether a scrub is running on the pool fails, because zpool in the scrub_in_progress() function only accepts pool names (without a slash) but is supplied volume names.
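To illustrate the mismatch (a sketch; the dataset name is one of the examples from our configuration):

```shell
# 'zpool status' accepts pool names only, so it fails when zfs-periodic
# hands it a dataset name containing a slash, e.g.:
#
#   zpool status storage_pool01             # ok: a pool name
#   zpool status storage_pool01/home/staff  # error: not a pool name
#
# The pool is the first slash-separated component of the dataset name:
dataset="storage_pool01/home/staff"
pool=${dataset%%/*}   # POSIX expansion: drop everything from the first '/'
echo "$pool"          # prints: storage_pool01
```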

>How-To-Repeat:

Set up sysutils/zfs-periodic and /etc/periodic.conf to snapshot volumes (pool/volume) instead of entire pools. Our periodic.conf looks like this:

hourly_output="root"
hourly_show_success="NO"
hourly_show_info="YES"
hourly_show_badconfig="YES"

hourly_zfs_snapshot_enable="YES"
hourly_zfs_snapshot_pools="storage_pool01/home/staff storage_pool01/shares"
hourly_zfs_snapshot_keep=720

daily_zfs_snapshot_enable="YES"
daily_zfs_snapshot_pools="storage_pool01/home/students storage_pool01/home/hiwi"
daily_zfs_snapshot_keep=30

monthly_zfs_scrub_enable="YES"
monthly_zfs_scrub_pools="storage_pool01"


>Fix:

Strip the pool name from the volume descriptor before running zpool status in scrub_in_progress():

scrub_in_progress()
{
  # This code was added to allow snapshotting of single volumes without getting an error here
  IFS='/' read -ra arr <<< "$1"
  pool=${arr[0]}

  if zpool status $pool | grep "scrub in progress" > /dev/null; then
    return 0
  else
    return 1
  fi
}
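Note that 'read -ra ... <<<' is a bash construct; FreeBSD's periodic scripts run under /bin/sh, where it fails. A POSIX-sh variant using parameter expansion would avoid that (a sketch, not a committed fix):

```shell
# Sketch of a POSIX-sh alternative: ${1%%/*} strips everything from the
# first '/', mapping a dataset name to its pool; plain pool names pass
# through unchanged. No bash-only 'read -ra ... <<<' needed.
scrub_in_progress()
{
  pool=${1%%/*}

  if zpool status "$pool" | grep "scrub in progress" > /dev/null; then
    return 0
  else
    return 1
  fi
}
```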


>Release-Note:
>Audit-Trail:

From: Marc Werner <marc.werner@seemoo.tu-darmstadt.de>
To: bug-followup@FreeBSD.org,
 marc.werner@seemoo.tu-darmstadt.de
Cc:  
Subject: Re: ports/164055: sysutils/zfs-periodic: Test if scrubbing is in progress fails when using volume descriptors instead of pools
Date: Fri, 13 Jan 2012 09:18:25 +0100

 Use this fix instead of the proposed one, as the first suggestion throws
 errors in some circumstances.
 
 # checks to see if there's a scrub in progress
 scrub_in_progress()
 {
   # This code was added to allow snapshotting of single volumes
   # without getting an error here
   scrub_pool=`echo $1 | awk '{split($0,a,"/"); print a[1]}'`
 
   if zpool status $scrub_pool | grep "scrub in progress" > /dev/null; then
     return 0
   else
     return 1
   fi
 }
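 For reference, the awk split in this revised fix maps both plain pool
 names and dataset names to the pool name (example names only):

```shell
# Both forms reduce to the pool name, which 'zpool status' accepts:
echo "storage_pool01"            | awk '{split($0,a,"/"); print a[1]}'
echo "storage_pool01/home/staff" | awk '{split($0,a,"/"); print a[1]}'
# both print: storage_pool01
```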
 

From: Peter Ankerstål <peter@pean.org>
To: bug-followup@FreeBSD.org, marc.werner@seemoo.tu-darmstadt.de
Cc:  
Subject: Re: ports/164055: sysutils/zfs-periodic: Test if scrubbing is in
 progress fails when using volume descriptors instead of pools
Date: Mon, 27 Feb 2012 14:55:53 +0100

 Since scrubs are no longer aborted when making a snapshot, I think 
 someone should remove the part that checks for scrubbing.
State-Changed-From-To: open->closed 
State-Changed-By: miwi 
State-Changed-When: Thu Feb 28 17:12:41 UTC 2013 
State-Changed-Why:  
Looks like this has been fixed. 

http://www.freebsd.org/cgi/query-pr.cgi?pr=164055 
>Unformatted:
