From ilepore@damnhippie.dyndns.org  Sat Sep  3 17:19:20 2011
Return-Path: <ilepore@damnhippie.dyndns.org>
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 0CED6106564A
	for <freebsd-gnats-submit@freebsd.org>; Sat,  3 Sep 2011 17:19:20 +0000 (UTC)
	(envelope-from ilepore@damnhippie.dyndns.org)
Received: from qmta06.emeryville.ca.mail.comcast.net (qmta06.emeryville.ca.mail.comcast.net [76.96.30.56])
	by mx1.freebsd.org (Postfix) with ESMTP id E61638FC08
	for <freebsd-gnats-submit@freebsd.org>; Sat,  3 Sep 2011 17:19:19 +0000 (UTC)
Received: from omta24.emeryville.ca.mail.comcast.net ([76.96.30.92])
	by qmta06.emeryville.ca.mail.comcast.net with comcast
	id UG9d1h0031zF43QA6HKEf0; Sat, 03 Sep 2011 17:19:14 +0000
Received: from damnhippie.dyndns.org ([24.8.232.202])
	by omta24.emeryville.ca.mail.comcast.net with comcast
	id UHM31h01y4NgCEG8kHM4ak; Sat, 03 Sep 2011 17:21:04 +0000
Received: from revolution.hippie.lan (revolution.hippie.lan [172.22.42.240])
	by damnhippie.dyndns.org (8.14.3/8.14.3) with ESMTP id p83HJHTt033984
	for <FreeBSD-gnats-submit@freebsd.org>; Sat, 3 Sep 2011 11:19:17 -0600 (MDT)
	(envelope-from ilepore@damnhippie.dyndns.org)
Received: (from ilepore@localhost)
	by revolution.hippie.lan (8.14.4/8.14.4/Submit) id p83HJHHO098751;
	Sat, 3 Sep 2011 11:19:17 -0600 (MDT)
	(envelope-from ilepore)
Message-Id: <201109031719.p83HJHHO098751@revolution.hippie.lan>
Date: Sat, 3 Sep 2011 11:19:17 -0600 (MDT)
From: Ian Lepore <freebsd@damnhippie.dyndns.org>
Reply-To: Ian Lepore <freebsd@damnhippie.dyndns.org>
To: FreeBSD-gnats-submit@freebsd.org
Cc:
Subject: [patch] Disable interrupts during busdma cache sync operations.
X-Send-Pr-Version: 3.113
X-GNATS-Notify:

>Number:         160431
>Category:       arm
>Synopsis:       [busdma] [patch] Disable interrupts during busdma cache sync operations.
>Confidential:   no
>Severity:       critical
>Priority:       high
>Responsible:    freebsd-arm
>State:          closed
>Quarter:        
>Keywords:       
>Date-Required:  
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Sat Sep 03 17:20:12 UTC 2011
>Closed-Date:    Sat May 26 09:23:03 UTC 2012
>Last-Modified:  Mon Dec 31 21:10:00 UTC 2012
>Originator:     Ian Lepore <freebsd@damnhippie.dyndns.org>
>Release:        FreeBSD 8.2-RC3 arm
>Organization:
none
>Environment:
FreeBSD dvb 8.2-RC3 FreeBSD 8.2-RC3 #49: Tue Feb 15 22:52:14 UTC 2011     root@revolution.hippie.lan:/usr/obj/arm/usr/src/sys/DVB  arm

>Description:
Data can be corrupted when an interrupt occurs while bus_dmamap_sync_buf() is 
handling a buffer that partially overlaps a cache line.  One scenario, seen
in the real world, was a small I/O buffer allocated in the same cache line
as an instance of a struct intr_handler.  The DMA sync code copied the non-DMA
data (the intr_handler struct) to a temporary buffer prior to the cache sync,
then an interrupt occurred that set the ih_need flag in the struct.  When 
control returned to the DMA sync code, it finished by copying the saved 
partial cache line from the temporary buffer back over the intr_handler 
struct, restoring the ih_need flag to zero and resulting in a threaded 
interrupt handler not running when needed.

Similar sequences can be imagined that would corrupt either the DMA'd data
or the non-DMA data sharing the same cache line, depending on the timing of
the interrupt, and I can't quite convince myself that the problem occurs
only in this partial-cacheline-overlap scenario.  For example, what happens
if execution is in the middle of a cpu_dcache_wbinv_range() operation and an
interrupt leads to a context switch wherein the pmap code decides to call
cpu_dcache_inv_range()?  So to be conservatively safe, this patch disables
interrupts for the entire duration of bus_dmamap_sync_buf(), not just while
partial cache lines are being handled.

>How-To-Repeat:
It would be very difficult to set up a repeatable test of this condition.  We
were lucky (!) enough to have it happen often enough to diagnose it.

>Fix:
The problem was discovered in an 8.2 environment, but this diff is against -current.

--- diff.tmp begins here ---
--- busdma_machdep.c.orig	2010-03-11 14:16:54.000000000 -0700
+++ busdma_machdep.c	2011-09-03 10:15:16.000000000 -0600
@@ -1091,6 +1091,14 @@ static void
 bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
 {
 	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
+	uint32_t intr;
+
+	/* Interrupts MUST be disabled when handling partial cacheline flush
+	 * and most likely should be disabled for all flushes.  (I know for
+	 * certain interrupts can cause failures on partial flushes, and suspect
+	 * problems could also happen in other scenarios.)
+	 */
+	intr = intr_disable();
 
 	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
 		cpu_dcache_wb_range((vm_offset_t)buf, len);
@@ -1129,6 +1137,8 @@ bus_dmamap_sync_buf(void *buf, int len, 
 			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 			   arm_dcache_align_mask));
 	}
+
+	intr_restore(intr);
 }
 
 static void
--- diff.tmp ends here ---

>Release-Note:
>Audit-Trail:

From: Mark Tinguely <marktinguely@gmail.com>
To: Ian Lepore <freebsd@damnhippie.dyndns.org>
Cc: FreeBSD-gnats-submit@FreeBSD.org
Subject: Re: arm/160431: [patch] Disable interrupts during busdma cache sync
 operations.
Date: Sat, 03 Sep 2011 14:05:32 -0500

 On 9/3/2011 12:19 PM, Ian Lepore wrote:
 > [bug report]
 
 Which processor are you using (for my curiosity)?
 
 If this is easily reproducible, would you please put the interrupt 
 disable/restore just around the BUS_DMASYNC_POSTREAD case? (again, for my 
 curiosity).
 
 Thank-you
 
 --Mark.

From: Ian Lepore <freebsd@damnhippie.dyndns.org>
To: Mark Tinguely <marktinguely@gmail.com>
Cc: FreeBSD-gnats-submit@FreeBSD.org
Subject: Re: arm/160431: [patch] Disable interrupts during busdma cache
 sync operations.
Date: Sat, 03 Sep 2011 13:43:38 -0600

 On Sat, 2011-09-03 at 14:05 -0500, Mark Tinguely wrote:
 > On 9/3/2011 12:19 PM, Ian Lepore wrote:
 > > [bug report]
 > 
 > Which processor are you using (for my curiosity)?
 > 
 > If this is easily reproducible, would you please put the interrupt 
 > disable/restore just around the  BUS_DMASYNC_POSTREAD option? (for my 
 > curiosity again).
 > 
 > Thank-you
 > 
 > --Mark.
 
 It's an Atmel at91rm9200.  It's been weeks since we were actively
 working this problem (I'm just way behind on submitting fixes back to
 the community), so it would be pretty hard at this point to get back to
 where the problem is easily reproducible.  But when we were still
 investigating it, I remember instrumenting the code to conclusively
 prove it was the handling of partially-overlapping cache lines within
 the POSTREAD block that was leading to the ih_need flag of the nearby
 intr_handler struct getting reset to zero.  
 
 We actually discovered all this on code that was mostly 6.2 with some
 6.4 stuff merged in.  We're now almost up and running on 8.2 on our
 embedded arm products, so the whole 6.2 thing is feeling like a bad
 nightmare that won't quite fade from memory. :)
 
 My putting the intr_disable() at the next-outer layer of code is just an
 abundance of caution.  But given that the pmap code and the busdma code
 can both lead to writeback and/or invalidate activity, and that those
 two pieces of code don't know what the other is doing, I've had a
 growing queasy feeling about this stuff for a while.  For example, the
 pmap code can decide to do a writeback that could overwrite
 DMA-in-progress data, especially in the cases of partial overlap of a
 DMA operation with a cache line.  But I didn't want to start raising any
 alarms until I've learned more about the code in its current form, and
 I'm still working on learning that.
 
 -- Ian
 
 

From: Mark Tinguely <marktinguely@gmail.com>
To: bug-followup@FreeBSD.org, freebsd@damnhippie.dyndns.org
Cc:  
Subject: Re: arm/160431: [patch] Disable interrupts during busdma cache sync
 operations.
Date: Sun, 16 Oct 2011 14:57:50 -0500

 This is a multi-part message in MIME format.
 --------------090706000403080305010106
 Content-Type: text/plain; charset=ISO-8859-1; format=flowed
 Content-Transfer-Encoding: 7bit
 
 Ian and I have been sending emails this week about this problem. He and 
 I have examples where turning off interrupts will not be enough, but I 
 think this is a good start.
 
 If we cache-align the allocation sizes that are less than PAGE_SIZE in 
 bus_dmamem_alloc(), it may help avoid the partial-cacheline handling in 
 this routine.  Attached is crude concept code for the alignment.
 
 --Mark.
 
 --------------090706000403080305010106
 Content-Type: text/plain;
  name="arm_busdma_machdep_c.diff"
 Content-Transfer-Encoding: 7bit
 Content-Disposition: attachment;
  filename="arm_busdma_machdep_c.diff"
 
 --- sys/arm/arm/busdma_machdep.c.orig	2011-10-16 13:09:49.000000000 -0500
 +++ sys/arm/arm/busdma_machdep.c	2011-10-16 14:20:27.000000000 -0500
 @@ -579,6 +579,7 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, voi
  {
  	bus_dmamap_t newmap = NULL;
  
 +	bus_size_t len;
  	int mflags;
  
  	if (flags & BUS_DMA_NOWAIT)
 @@ -598,17 +599,23 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, voi
  	*mapp = newmap;
  	newmap->dmat = dmat;
  	
 -        if (dmat->maxsize <= PAGE_SIZE &&
 -	   (dmat->alignment < dmat->maxsize) &&
 +	if (dmat->maxsize < PAGE_SIZE)
 +		/* round up to nearest cache line size */
 +		len = (dmat->maxsize + arm_dcache_align_mask) &
 +			 ~arm_dcache_align_mask;
 +	else
 +		len = dmat->maxsize;
 +        if (len <= PAGE_SIZE &&
 +	   (dmat->alignment < len) &&
  	   !_bus_dma_can_bounce(dmat->lowaddr, dmat->highaddr)) {
 -                *vaddr = malloc(dmat->maxsize, M_DEVBUF, mflags);
 +                *vaddr = malloc(len, M_DEVBUF, mflags);
          } else {
                  /*
                   * XXX Use Contigmalloc until it is merged into this facility
                   *     and handles multi-seg allocations.  Nobody is doing
                   *     multi-seg allocations yet though.
                   */
 -                *vaddr = contigmalloc(dmat->maxsize, M_DEVBUF, mflags,
 +                *vaddr = contigmalloc(len, M_DEVBUF, mflags,
                      0ul, dmat->lowaddr, dmat->alignment? dmat->alignment : 1ul,
                      dmat->boundary);
          }
 @@ -623,7 +630,7 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, voi
  	if (flags & BUS_DMA_COHERENT) {
  		void *tmpaddr = arm_remap_nocache(
  		    (void *)((vm_offset_t)*vaddr &~ PAGE_MASK),
 -		    dmat->maxsize + ((vm_offset_t)*vaddr & PAGE_MASK));
 +		    len + ((vm_offset_t)*vaddr & PAGE_MASK));
  
  		if (tmpaddr) {
  			tmpaddr = (void *)((vm_offset_t)(tmpaddr) +
 @@ -645,19 +652,28 @@ bus_dmamem_alloc(bus_dma_tag_t dmat, voi
  void
  bus_dmamem_free(bus_dma_tag_t dmat, void *vaddr, bus_dmamap_t map)
  {
 +	bus_size_t len;
 +
 +	if (dmat->maxsize < PAGE_SIZE)
 +		/* round up to nearest cache line size */
 +		len = (dmat->maxsize + arm_dcache_align_mask) &
 +			 ~arm_dcache_align_mask;
 +	else
 +		len = dmat->maxsize;
 +
  	if (map->allocbuffer) {
  		KASSERT(map->allocbuffer == vaddr,
  		    ("Trying to freeing the wrong DMA buffer"));
  		vaddr = map->origbuffer;
  		arm_unmap_nocache(map->allocbuffer,
 -		    dmat->maxsize + ((vm_offset_t)vaddr & PAGE_MASK));
 +		    len + ((vm_offset_t)vaddr & PAGE_MASK));
  	}
 -        if (dmat->maxsize <= PAGE_SIZE &&
 -	   dmat->alignment < dmat->maxsize &&
 +        if (len <= PAGE_SIZE &&
 +	   dmat->alignment < len &&
  	    !_bus_dma_can_bounce(dmat->lowaddr, dmat->highaddr))
  		free(vaddr, M_DEVBUF);
          else {
 -		contigfree(vaddr, dmat->maxsize, M_DEVBUF);
 +		contigfree(vaddr, len, M_DEVBUF);
  	}
  	dmat->map_count--;
  	_busdma_free_dmamap(map);
 
 --------------090706000403080305010106--

From: Ian Lepore <freebsd@damnhippie.dyndns.org>
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: arm/160431: [busdma] [patch] Disable interrupts during busdma
 cache sync operations.
Date: Sat, 21 Apr 2012 10:01:06 -0600

 --=-I62BfnVhv+9haFCqsTdw
 Content-Type: text/plain; charset="us-ascii"
 Content-Transfer-Encoding: 7bit
 
 Here is the latest and most-tested patch for this problem.  
 
 Mark Tinguely had suggested that the scope of having interrupts disabled
 be narrowed to just the time spent doing a partial cacheline flush, as
 opposed to disabling them for the entire bus_dmamap_sync_buf() function
 as my original patch did.  I made that change and we've been shipping
 products using this new patch since September 2011, but I apparently
 neglected to submit the updated patch (which also fixes some line
 wrapping and other style(9) issues).
 
 -- Ian
 
 
 --=-I62BfnVhv+9haFCqsTdw
 Content-Description: 
 Content-Disposition: inline; filename="arm_busdma.diff"
 Content-Type: text/x-patch; name="arm_busdma.diff"; charset="us-ascii"
 Content-Transfer-Encoding: 7bit
 
 Index: sys/arm/arm/busdma_machdep.c
 ===================================================================
 --- sys/arm/arm/busdma_machdep.c	(revision 234543)
 +++ sys/arm/arm/busdma_machdep.c	(working copy)
 @@ -1106,28 +1106,45 @@
  	    		cpu_l2cache_wbinv_range((vm_offset_t)buf, len);
  		}
  	}
 +	/*
 +	 * Interrupts must be disabled while handling a partial cacheline flush,
 +	 * otherwise the interrupt handling code could modify data in the
 +	 * non-DMA part of a cacheline while we have it stashed away in the
 +	 * temporary stack buffer, then we end up restoring the stale value.
 +	 * As unlikely as this seems, it has been observed in the real world.
 +	 */
  	if (op & BUS_DMASYNC_POSTREAD) {
 -		if ((vm_offset_t)buf & arm_dcache_align_mask) {
 -			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
 -			    arm_dcache_align_mask),
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 +		partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
 +		if (partial) {
 +			intr = intr_disable();
 +			if ((vm_offset_t)buf & arm_dcache_align_mask) {
 +				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask),
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			}
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
 +				memcpy(_tmp_clend, 
 +				    (void *)((vm_offset_t)buf + len),
 +				    arm_dcache_align - 
 +				    (((vm_offset_t)(buf) + len) &
 +				    arm_dcache_align_mask));
 +			}
  		}
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
 -			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 -		}
  		cpu_dcache_inv_range((vm_offset_t)buf, len);
  		cpu_l2cache_inv_range((vm_offset_t)buf, len);
 -
 -		if ((vm_offset_t)buf & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf &
 -			    ~arm_dcache_align_mask), _tmp_cl, 
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask), _tmp_cl, 
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf + len), 
 +				    _tmp_clend,
 +				    arm_dcache_align - 
 +				    (((vm_offset_t)(buf) + len) &
 +				    arm_dcache_align_mask));
 +			intr_restore(intr);
 +		}
  	}
  }
  
 
 --=-I62BfnVhv+9haFCqsTdw--
 

From: Ian Lepore <freebsd@damnhippie.dyndns.org>
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: arm/160431: [busdma] [patch] Disable interrupts during busdma
 cache sync operations.
Date: Sat, 21 Apr 2012 10:22:47 -0600

 --=-qM25b7nGVVUxHWsdaUrf
 Content-Type: text/plain; charset="us-ascii"
 Content-Transfer-Encoding: 7bit
 
 Arrgh!  One more time, with the actual correct patch attached (and this
 time validated correctly by building kernel rather than world, doh!).
 
 
 
 --=-qM25b7nGVVUxHWsdaUrf
 Content-Description: 
 Content-Disposition: inline; filename="arm_busdma.diff"
 Content-Type: text/x-patch; name="arm_busdma.diff"; charset="us-ascii"
 Content-Transfer-Encoding: 7bit
 
 Index: sys/arm/arm/busdma_machdep.c
 ===================================================================
 --- sys/arm/arm/busdma_machdep.c	(revision 234543)
 +++ sys/arm/arm/busdma_machdep.c	(working copy)
 @@ -1090,6 +1090,8 @@
  static void
  bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
  {
 +	uint32_t intr;
 +	int partial; 
  	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
  
  	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
 @@ -1106,28 +1108,45 @@
  	    		cpu_l2cache_wbinv_range((vm_offset_t)buf, len);
  		}
  	}
 +	/*
 +	 * Interrupts must be disabled while handling a partial cacheline flush,
 +	 * otherwise the interrupt handling code could modify data in the
 +	 * non-DMA part of a cacheline while we have it stashed away in the
 +	 * temporary stack buffer, then we end up restoring the stale value.
 +	 * As unlikely as this seems, it has been observed in the real world.
 +	 */
  	if (op & BUS_DMASYNC_POSTREAD) {
 -		if ((vm_offset_t)buf & arm_dcache_align_mask) {
 -			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
 -			    arm_dcache_align_mask),
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 +		partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
 +		if (partial) {
 +			intr = intr_disable();
 +			if ((vm_offset_t)buf & arm_dcache_align_mask) {
 +				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask),
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			}
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
 +				memcpy(_tmp_clend, 
 +				    (void *)((vm_offset_t)buf + len),
 +				    arm_dcache_align - 
 +				    (((vm_offset_t)(buf) + len) &
 +				    arm_dcache_align_mask));
 +			}
  		}
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
 -			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 -		}
  		cpu_dcache_inv_range((vm_offset_t)buf, len);
  		cpu_l2cache_inv_range((vm_offset_t)buf, len);
 -
 -		if ((vm_offset_t)buf & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf &
 -			    ~arm_dcache_align_mask), _tmp_cl, 
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask), _tmp_cl, 
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf + len), 
 +				    _tmp_clend,
 +				    arm_dcache_align - 
 +				    (((vm_offset_t)(buf) + len) &
 +				    arm_dcache_align_mask));
 +			intr_restore(intr);
 +		}
  	}
  }
  
 
 --=-qM25b7nGVVUxHWsdaUrf--
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: arm/160431: commit references a PR
Date: Sun, 22 Apr 2012 00:58:20 +0000 (UTC)

 Author: marius
 Date: Sun Apr 22 00:58:04 2012
 New Revision: 234561
 URL: http://svn.freebsd.org/changeset/base/234561
 
 Log:
   Interrupts must be disabled while handling a partial cache line flush,
   as otherwise the interrupt handling code may modify data in the non-DMA
   part of the cache line while we have it stashed away in the temporary
   stack buffer, then we end up restoring a stale value.
   
   PR:		160431
   Submitted by:	Ian Lepore
   MFC after:	1 week
 
 Modified:
   head/sys/arm/arm/busdma_machdep.c
 
 Modified: head/sys/arm/arm/busdma_machdep.c
 ==============================================================================
 --- head/sys/arm/arm/busdma_machdep.c	Sun Apr 22 00:43:32 2012	(r234560)
 +++ head/sys/arm/arm/busdma_machdep.c	Sun Apr 22 00:58:04 2012	(r234561)
 @@ -1091,14 +1091,16 @@ static void
  bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
  {
  	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
 +	register_t s;
 +	int partial; 
  
  	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
  		cpu_dcache_wb_range((vm_offset_t)buf, len);
  		cpu_l2cache_wb_range((vm_offset_t)buf, len);
  	}
 +	partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
  	if (op & BUS_DMASYNC_PREREAD) {
 -		if (!(op & BUS_DMASYNC_PREWRITE) &&
 -		    ((((vm_offset_t)(buf) | len) & arm_dcache_align_mask) == 0)) {
 +		if (!(op & BUS_DMASYNC_PREWRITE) && !partial) {
  			cpu_dcache_inv_range((vm_offset_t)buf, len);
  			cpu_l2cache_inv_range((vm_offset_t)buf, len);
  		} else {
 @@ -1107,27 +1109,32 @@ bus_dmamap_sync_buf(void *buf, int len, 
  		}
  	}
  	if (op & BUS_DMASYNC_POSTREAD) {
 -		if ((vm_offset_t)buf & arm_dcache_align_mask) {
 -			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
 -			    arm_dcache_align_mask),
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		}
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
 -			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			s = intr_disable();
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask),
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy(_tmp_clend, 
 +				    (void *)((vm_offset_t)buf + len),
 +				    arm_dcache_align - (((vm_offset_t)(buf) +
 +				    len) & arm_dcache_align_mask));
  		}
  		cpu_dcache_inv_range((vm_offset_t)buf, len);
  		cpu_l2cache_inv_range((vm_offset_t)buf, len);
 -
 -		if ((vm_offset_t)buf & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf &
 -			    ~arm_dcache_align_mask), _tmp_cl, 
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask), _tmp_cl, 
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf + len), 
 +				    _tmp_clend, arm_dcache_align - 
 +				    (((vm_offset_t)(buf) + len) &
 +				    arm_dcache_align_mask));
 +			intr_restore(s);
 +		}
  	}
  }
  
 _______________________________________________
 svn-src-all@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/svn-src-all
 To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: arm/160431: commit references a PR
Date: Sat, 26 May 2012 09:14:33 +0000 (UTC)

 Author: marius
 Date: Sat May 26 09:13:24 2012
 New Revision: 236085
 URL: http://svn.freebsd.org/changeset/base/236085
 
 Log:
   MFC: r234561
   
   Interrupts must be disabled while handling a partial cache line flush,
   as otherwise the interrupt handling code may modify data in the non-DMA
   part of the cache line while we have it stashed away in the temporary
   stack buffer, then we end up restoring a stale value.
   
   PR:		160431
   Submitted by:	Ian Lepore
 
 Modified:
   stable/9/sys/arm/arm/busdma_machdep.c
 Directory Properties:
   stable/9/sys/   (props changed)
   stable/9/sys/amd64/include/xen/   (props changed)
   stable/9/sys/boot/   (props changed)
   stable/9/sys/boot/i386/efi/   (props changed)
   stable/9/sys/boot/ia64/efi/   (props changed)
   stable/9/sys/boot/ia64/ski/   (props changed)
   stable/9/sys/boot/powerpc/boot1.chrp/   (props changed)
   stable/9/sys/boot/powerpc/ofw/   (props changed)
   stable/9/sys/cddl/contrib/opensolaris/   (props changed)
   stable/9/sys/conf/   (props changed)
   stable/9/sys/contrib/dev/acpica/   (props changed)
   stable/9/sys/contrib/octeon-sdk/   (props changed)
   stable/9/sys/contrib/pf/   (props changed)
   stable/9/sys/contrib/x86emu/   (props changed)
   stable/9/sys/dev/   (props changed)
   stable/9/sys/dev/e1000/   (props changed)
   stable/9/sys/dev/ixgbe/   (props changed)
   stable/9/sys/fs/   (props changed)
   stable/9/sys/fs/ntfs/   (props changed)
   stable/9/sys/modules/   (props changed)
 
 Modified: stable/9/sys/arm/arm/busdma_machdep.c
 ==============================================================================
 --- stable/9/sys/arm/arm/busdma_machdep.c	Sat May 26 09:11:45 2012	(r236084)
 +++ stable/9/sys/arm/arm/busdma_machdep.c	Sat May 26 09:13:24 2012	(r236085)
 @@ -1091,14 +1091,16 @@ static void
  bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
  {
  	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
 +	register_t s;
 +	int partial; 
  
  	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
  		cpu_dcache_wb_range((vm_offset_t)buf, len);
  		cpu_l2cache_wb_range((vm_offset_t)buf, len);
  	}
 +	partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
  	if (op & BUS_DMASYNC_PREREAD) {
 -		if (!(op & BUS_DMASYNC_PREWRITE) &&
 -		    ((((vm_offset_t)(buf) | len) & arm_dcache_align_mask) == 0)) {
 +		if (!(op & BUS_DMASYNC_PREWRITE) && !partial) {
  			cpu_dcache_inv_range((vm_offset_t)buf, len);
  			cpu_l2cache_inv_range((vm_offset_t)buf, len);
  		} else {
 @@ -1107,27 +1109,32 @@ bus_dmamap_sync_buf(void *buf, int len, 
  		}
  	}
  	if (op & BUS_DMASYNC_POSTREAD) {
 -		if ((vm_offset_t)buf & arm_dcache_align_mask) {
 -			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
 -			    arm_dcache_align_mask),
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		}
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
 -			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			s = intr_disable();
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask),
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy(_tmp_clend, 
 +				    (void *)((vm_offset_t)buf + len),
 +				    arm_dcache_align - (((vm_offset_t)(buf) +
 +				    len) & arm_dcache_align_mask));
  		}
  		cpu_dcache_inv_range((vm_offset_t)buf, len);
  		cpu_l2cache_inv_range((vm_offset_t)buf, len);
 -
 -		if ((vm_offset_t)buf & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf &
 -			    ~arm_dcache_align_mask), _tmp_cl, 
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask), _tmp_cl, 
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf + len), 
 +				    _tmp_clend, arm_dcache_align - 
 +				    (((vm_offset_t)(buf) + len) &
 +				    arm_dcache_align_mask));
 +			intr_restore(s);
 +		}
  	}
  }
  
 _______________________________________________
 svn-src-all@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/svn-src-all
 To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: arm/160431: commit references a PR
Date: Sat, 26 May 2012 09:14:38 +0000 (UTC)

 Author: marius
 Date: Sat May 26 09:13:38 2012
 New Revision: 236086
 URL: http://svn.freebsd.org/changeset/base/236086
 
 Log:
   MFC: r234561
   
   Interrupts must be disabled while handling a partial cache line flush,
   as otherwise the interrupt handling code may modify data in the non-DMA
   part of the cache line while we have it stashed away in the temporary
   stack buffer, then we end up restoring a stale value.
   
   PR:		160431
   Submitted by:	Ian Lepore
 
 Modified:
   stable/8/sys/arm/arm/busdma_machdep.c
 Directory Properties:
   stable/8/sys/   (props changed)
   stable/8/sys/amd64/include/xen/   (props changed)
   stable/8/sys/boot/   (props changed)
   stable/8/sys/cddl/contrib/opensolaris/   (props changed)
   stable/8/sys/contrib/dev/acpica/   (props changed)
   stable/8/sys/contrib/pf/   (props changed)
   stable/8/sys/dev/e1000/   (props changed)
 
 Modified: stable/8/sys/arm/arm/busdma_machdep.c
 ==============================================================================
 --- stable/8/sys/arm/arm/busdma_machdep.c	Sat May 26 09:13:24 2012	(r236085)
 +++ stable/8/sys/arm/arm/busdma_machdep.c	Sat May 26 09:13:38 2012	(r236086)
 @@ -1091,14 +1091,16 @@ static void
  bus_dmamap_sync_buf(void *buf, int len, bus_dmasync_op_t op)
  {
  	char _tmp_cl[arm_dcache_align], _tmp_clend[arm_dcache_align];
 +	register_t s;
 +	int partial; 
  
  	if ((op & BUS_DMASYNC_PREWRITE) && !(op & BUS_DMASYNC_PREREAD)) {
  		cpu_dcache_wb_range((vm_offset_t)buf, len);
  		cpu_l2cache_wb_range((vm_offset_t)buf, len);
  	}
 +	partial = (((vm_offset_t)buf) | len) & arm_dcache_align_mask;
  	if (op & BUS_DMASYNC_PREREAD) {
 -		if (!(op & BUS_DMASYNC_PREWRITE) &&
 -		    ((((vm_offset_t)(buf) | len) & arm_dcache_align_mask) == 0)) {
 +		if (!(op & BUS_DMASYNC_PREWRITE) && !partial) {
  			cpu_dcache_inv_range((vm_offset_t)buf, len);
  			cpu_l2cache_inv_range((vm_offset_t)buf, len);
  		} else {
 @@ -1107,27 +1109,32 @@ bus_dmamap_sync_buf(void *buf, int len, 
  		}
  	}
  	if (op & BUS_DMASYNC_POSTREAD) {
 -		if ((vm_offset_t)buf & arm_dcache_align_mask) {
 -			memcpy(_tmp_cl, (void *)((vm_offset_t)buf & ~
 -			    arm_dcache_align_mask),
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		}
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask) {
 -			memcpy(_tmp_clend, (void *)((vm_offset_t)buf + len),
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			s = intr_disable();
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy(_tmp_cl, (void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask),
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy(_tmp_clend, 
 +				    (void *)((vm_offset_t)buf + len),
 +				    arm_dcache_align - (((vm_offset_t)(buf) +
 +				    len) & arm_dcache_align_mask));
  		}
  		cpu_dcache_inv_range((vm_offset_t)buf, len);
  		cpu_l2cache_inv_range((vm_offset_t)buf, len);
 -
 -		if ((vm_offset_t)buf & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf &
 -			    ~arm_dcache_align_mask), _tmp_cl, 
 -			    (vm_offset_t)buf & arm_dcache_align_mask);
 -		if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 -			memcpy((void *)((vm_offset_t)buf + len), _tmp_clend,
 -			    arm_dcache_align - (((vm_offset_t)(buf) + len) &
 -			   arm_dcache_align_mask));
 +		if (partial) {
 +			if ((vm_offset_t)buf & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf &
 +				    ~arm_dcache_align_mask), _tmp_cl, 
 +				    (vm_offset_t)buf & arm_dcache_align_mask);
 +			if (((vm_offset_t)buf + len) & arm_dcache_align_mask)
 +				memcpy((void *)((vm_offset_t)buf + len), 
 +				    _tmp_clend, arm_dcache_align - 
 +				    (((vm_offset_t)(buf) + len) &
 +				    arm_dcache_align_mask));
 +			intr_restore(s);
 +		}
  	}
  }
  
 
State-Changed-From-To: open->closed 
State-Changed-By: marius 
State-Changed-When: Sat May 26 09:22:51 UTC 2012 
State-Changed-Why:  
Close 

http://www.freebsd.org/cgi/query-pr.cgi?pr=160431 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: arm/160431: commit references a PR
Date: Mon, 31 Dec 2012 21:00:46 +0000 (UTC)

 Author: gonzo
 Date: Mon Dec 31 21:00:38 2012
 New Revision: 244912
 URL: http://svnweb.freebsd.org/changeset/base/244912
 
 Log:
   Merge r234561 from busdma_machdep.c to ARMv6 version of busdma:
   
   Interrupts must be disabled while handling a partial cache line flush,
   as otherwise the interrupt handling code may modify data in the non-DMA
   part of the cache line while we have it stashed away in the temporary
   stack buffer, then we end up restoring a stale value.
   
   PR:             160431
   Submitted by:   Ian Lepore
 
 Modified:
   head/sys/arm/arm/busdma_machdep-v6.c
 
 Modified: head/sys/arm/arm/busdma_machdep-v6.c
 ==============================================================================
 --- head/sys/arm/arm/busdma_machdep-v6.c	Mon Dec 31 16:52:52 2012	(r244911)
 +++ head/sys/arm/arm/busdma_machdep-v6.c	Mon Dec 31 21:00:38 2012	(r244912)
 @@ -1347,35 +1347,49 @@ _bus_dmamap_sync(bus_dma_tag_t dmat, bus
  			while (sl != NULL) {
  					/* write back the unaligned portions */
  				vm_paddr_t physaddr;
 +				register_t s = 0;
 +
  				buf = sl->vaddr;
  				len = sl->datacount;
  				physaddr = sl->busaddr;
  				bbuf = buf & ~arm_dcache_align_mask;
  				ebuf = buf + len;
  				physaddr = physaddr & ~arm_dcache_align_mask;
 -				unalign = buf & arm_dcache_align_mask;
 -				if (unalign) {
 -					memcpy(_tmp_cl, (void *)bbuf, unalign);
 -					len += unalign; /* inv entire cache line */
 -				}
 -				unalign = ebuf & arm_dcache_align_mask;
 -				if (unalign) {
 -					unalign = arm_dcache_align - unalign;
 -					memcpy(_tmp_clend, (void *)ebuf, unalign);
 -					len += unalign; /* inv entire cache line */
 +
 +
 +				if ((buf & arm_dcache_align_mask) ||
 +				    (ebuf & arm_dcache_align_mask)) {
 +					s = intr_disable();
 +					unalign = buf & arm_dcache_align_mask;
 +					if (unalign) {
 +						memcpy(_tmp_cl, (void *)bbuf, unalign);
 +						len += unalign; /* inv entire cache line */
 +					}
 +
 +					unalign = ebuf & arm_dcache_align_mask;
 +					if (unalign) {
 +						unalign = arm_dcache_align - unalign;
 +						memcpy(_tmp_clend, (void *)ebuf, unalign);
 +						len += unalign; /* inv entire cache line */
 +					}
  				}
 -					/* inv are cache length aligned */
 +
 +				/* inv are cache length aligned */
  				cpu_dcache_inv_range(bbuf, len);
  				l2cache_inv_range(bbuf, physaddr, len);
  
 -				unalign = (vm_offset_t)buf & arm_dcache_align_mask;
 -				if (unalign) {
 -					memcpy((void *)bbuf, _tmp_cl, unalign);
 -				}
 -				unalign = ebuf & arm_dcache_align_mask;
 -				if (unalign) {
 -					unalign = arm_dcache_align - unalign;
 -					memcpy((void *)ebuf, _tmp_clend, unalign);
 +				if ((buf & arm_dcache_align_mask) ||
 +				    (ebuf & arm_dcache_align_mask)) {
 +					unalign = (vm_offset_t)buf & arm_dcache_align_mask;
 +					if (unalign)
 +						memcpy((void *)bbuf, _tmp_cl, unalign);
 +
 +					unalign = ebuf & arm_dcache_align_mask;
 +					if (unalign)
 +						memcpy((void *)ebuf, _tmp_clend,
 +						    arm_dcache_align - unalign);
 +
 +					intr_restore(s);
  				}
  				sl = STAILQ_NEXT(sl, slinks);
  			}
 
>Unformatted:
