From nobody@FreeBSD.org  Fri Mar 16 23:18:40 2012
Return-Path: <nobody@FreeBSD.org>
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id E024C106564A
	for <freebsd-gnats-submit@FreeBSD.org>; Fri, 16 Mar 2012 23:18:40 +0000 (UTC)
	(envelope-from nobody@FreeBSD.org)
Received: from red.freebsd.org (red.freebsd.org [IPv6:2001:4f8:fff6::22])
	by mx1.freebsd.org (Postfix) with ESMTP id D04518FC08
	for <freebsd-gnats-submit@FreeBSD.org>; Fri, 16 Mar 2012 23:18:40 +0000 (UTC)
Received: from red.freebsd.org (localhost [127.0.0.1])
	by red.freebsd.org (8.14.4/8.14.4) with ESMTP id q2GNIedY087697
	for <freebsd-gnats-submit@FreeBSD.org>; Fri, 16 Mar 2012 23:18:40 GMT
	(envelope-from nobody@red.freebsd.org)
Received: (from nobody@localhost)
	by red.freebsd.org (8.14.4/8.14.4/Submit) id q2GNIeh5087696;
	Fri, 16 Mar 2012 23:18:40 GMT
	(envelope-from nobody)
Message-Id: <201203162318.q2GNIeh5087696@red.freebsd.org>
Date: Fri, 16 Mar 2012 23:18:40 GMT
From: Adrian Chadd <adrian@FreeBSD.org>
To: freebsd-gnats-submit@FreeBSD.org
Subject: [ath] TX hangs and frames stuck in TX queue
X-Send-Pr-Version: www-3.1
X-GNATS-Notify:

>Number:         166190
>Category:       kern
>Synopsis:       [ath] TX hangs and frames stuck in TX queue
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    freebsd-wireless
>State:          patched
>Quarter:        
>Keywords:       
>Date-Required:  
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Fri Mar 16 23:20:08 UTC 2012
>Closed-Date:    
>Last-Modified:  Mon Jun 11 07:50:06 UTC 2012
>Originator:     Adrian Chadd
>Release:        9.0-RELEASE with -HEAD net80211/ath
>Organization:
>Environment:
>Description:
I've noticed that some frames get "stuck" in the software TX queue and don't get flushed until:

* the seqnos wrap around so that the frame again falls within the BAW, at which point it's TXed, or
* a scan or reset is done, flushing the queue.
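
The window check involved here can be modeled with modulo-4096 sequence arithmetic. Below is a minimal sketch; the constants mirror the driver's BAW_WITHIN() macro, but this is an illustration, not the driver code:

```c
#include <assert.h>

/* 802.11 sequence numbers are 12 bits, so window arithmetic is mod 4096. */
#define SEQ_RANGE 4096

/* Is seqno inside the block-ack window [start, start + wnd)?
 * A simplified model of the driver's BAW_WITHIN() check. */
static int
baw_within(int start, int wnd, int seqno)
{
	return ((seqno - start) & (SEQ_RANGE - 1)) < wnd;
}
```

With a left edge of 6, seqno 5 sits outside the window; only once the counter wraps far enough (e.g. a left edge near 4090) does seqno 5 fall inside again, which is why a stuck frame eventually drains on wraparound.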

Further debugging will be provided once the PR has been created.
>How-To-Repeat:
This seems to occur only when:

* multiple threads of traffic are doing TX;
* it reliably triggers with multiple threads/processes transmitting - e.g. Chrome reloading all of its tabs after a crash;
* running on an SMP machine.

A non-SMP machine doesn't seem to have this issue. I've not seen this in AP mode, but then I've not really tested AP mode on SMP hardware with multiple interfaces (i.e., multiple sources of traffic).

>Fix:


>Release-Note:
>Audit-Trail:

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Fri, 16 Mar 2012 23:24:36 +0000 (UTC)

 Author: adrian
 Date: Fri Mar 16 23:24:27 2012
 New Revision: 233053
 URL: http://svn.freebsd.org/changeset/base/233053
 
 Log:
   Fix a couple of debugging outputs.
   
   * printf -> device_printf
   * print the buffer pointer and sequence number for any buffer that wasn't
     correctly tidied up before it was freed.  This is to aid in some
     current SMP TX debugging stalls.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath.c
 
 Modified: head/sys/dev/ath/if_ath.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath.c	Fri Mar 16 23:19:45 2012	(r233052)
 +++ head/sys/dev/ath/if_ath.c	Fri Mar 16 23:24:27 2012	(r233053)
 @@ -4795,10 +4795,16 @@ ath_tx_default_comp(struct ath_softc *sc
  
  	if (bf->bf_state.bfs_dobaw)
  		device_printf(sc->sc_dev,
 -		    "%s: dobaw should've been cleared!\n", __func__);
 +		    "%s: bf %p: seqno %d: dobaw should've been cleared!\n",
 +		    __func__,
 +		    bf,
 +		    SEQNO(bf->bf_state.bfs_seqno));
  	if (bf->bf_next != NULL)
  		device_printf(sc->sc_dev,
 -		    "%s: bf_next not NULL!\n", __func__);
 +		    "%s: bf %p: seqno %d: bf_next not NULL!\n",
 +		    __func__,
 +		    bf,
 +		    SEQNO(bf->bf_state.bfs_seqno));
  
  	/*
  	 * Do any tx complete callback.  Note this must
 @@ -5352,8 +5358,11 @@ ath_stoprecv(struct ath_softc *sc, int d
  		struct ath_buf *bf;
  		u_int ix;
  
 -		printf("%s: rx queue %p, link %p\n", __func__,
 -			(caddr_t)(uintptr_t) ath_hal_getrxbuf(ah), sc->sc_rxlink);
 +		device_printf(sc->sc_dev,
 +		    "%s: rx queue %p, link %p\n",
 +		    __func__,
 +		    (caddr_t)(uintptr_t) ath_hal_getrxbuf(ah),
 +		    sc->sc_rxlink);
  		ix = 0;
  		TAILQ_FOREACH(bf, &sc->sc_rxbuf, bf_list) {
  			struct ath_desc *ds = bf->bf_desc;
 _______________________________________________
 svn-src-all@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/svn-src-all
 To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"
 
Responsible-Changed-From-To: freebsd-bugs->freebsd-wireless 
Responsible-Changed-By: linimon 
Responsible-Changed-When: Sat Mar 17 04:40:45 UTC 2012 
Responsible-Changed-Why:  
Over to maintainer(s). 

http://www.freebsd.org/cgi/query-pr.cgi?pr=166190 

From: Adrian Chadd <adrian@freebsd.org>
To: bug-followup@freebsd.org, Vincent Hoffman <vince@unsane.co.uk>
Cc: freebsd-wireless@freebsd.org
Subject: Re: kern/166190: [ath] TX hangs and frames stuck in TX queue
Date: Sun, 18 Mar 2012 17:10:04 -0700

 I think I understand what's going on here.
 
 It turns out that multiple instances of the TX code (via if_start())
 were running at the same time. These were processing frames from the
 input queue and assigning them sequence numbers.
 
 This is what seems to be occurring:
 
 * thread A would allocate sequence number 5;
 * thread B would concurrently allocate sequence number 6;
 * thread B would then "win" the race to add its frame to the BAW - the
 sequence numbers were allocated early, but the frames weren't added to
 the queue until much later;
 * thread A would then try adding its frame to the BAW, but since the
 BAW left edge is now 6, seqno 5 is "out of window".
 
 I have a local patch here which I'm going to test tonight/tomorrow. It
 delays the sequence number allocation until _right before_ the frame
 may be added to the BAW. This is done inside the same lock, so there's
 no chance that it'll race with another concurrent thread.
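 
 The approach can be sketched as allocating the seqno and inserting it
 into the BAW inside a single critical section. This is a hypothetical
 pthread model, not the driver code - the real patch uses the per-TXQ
 lock and ath_tx_tid_seqno_assign():

```c
#include <assert.h>
#include <pthread.h>

#define SEQ_MASK 4095
#define NFRAMES  400

static pthread_mutex_t txq_lock = PTHREAD_MUTEX_INITIALIZER;
static int next_seqno;             /* models ni->ni_txseqs[tid] */
static int baw_order[NFRAMES];     /* seqnos in BAW-insertion order */
static int baw_count;

/* Allocate the seqno and add the frame to the BAW under one lock,
 * so a concurrent TX thread cannot slip a later seqno in first. */
static void
tx_one_frame(void)
{
	pthread_mutex_lock(&txq_lock);
	int seqno = next_seqno;
	next_seqno = (next_seqno + 1) & SEQ_MASK;
	baw_order[baw_count++] = seqno;  /* stands in for ath_tx_addto_baw() */
	pthread_mutex_unlock(&txq_lock);
}

static void *
tx_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < NFRAMES / 4; i++)
		tx_one_frame();
	return NULL;
}

/* The BAW must never see a seqno smaller than one already added. */
static int
baw_in_order(void)
{
	for (int i = 1; i < baw_count; i++)
		if (baw_order[i] < baw_order[i - 1])
			return 0;
	return 1;
}
```

 Splitting the allocation and the BAW insertion into two separately
 locked steps would reintroduce the race described above.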
 
 I won't commit it until I have committed some verification code to
 -HEAD that complains loudly when a frame _before_ the BAW is about to
 be queued. Since that shouldn't happen in reality, I'm going to guess
 that it'll pop up in my testing and in Vincent's use.
 
 Once I've verified that (a) my sanity checking code is firing as I
 expect it to, (b) Vincent also sees the same, and (c) this is fixed by
 my patch, I'll look at committing it.
 
 Vincent - thanks so very much for persisting with this bug! I'd never
 have found it at all if you hadn't pointed the odd behaviour out
 to me.
 
 Now - yes, an alternative solution would be to "serialise the whole TX
 queue, damnit." That would solve it, but with 802.11ac around the
 corner, I'd like to actually debug, diagnose and document how a
 multi-threaded TX/RX path can work. Serialising the driver TX path
 isn't going to help me do that. :-)
 
 
 Adrian

From: Adrian Chadd <adrian@freebsd.org>
To: bug-followup@freebsd.org, Vincent Hoffman <vince@unsane.co.uk>
Cc: freebsd-wireless@freebsd.org
Subject: Re: kern/166190: [ath] TX hangs and frames stuck in TX queue
Date: Sun, 18 Mar 2012 22:27:43 -0700

 --047d7b33c9fe4e3ed604bb91d101
 Content-Type: text/plain; charset=ISO-8859-1
 
 Hi Vincent,
 
 Please try this patch and let me know how it behaves.
 
 Thanks,
 
 
 
 Adrian
 
 --047d7b33c9fe4e3ed604bb91d101
 Content-Type: text/x-patch; charset=US-ASCII; name="kern-166190-baw.diff"
 Content-Disposition: attachment; filename="kern-166190-baw.diff"
 Content-Transfer-Encoding: base64
 X-Attachment-Id: f_gzz2qutw0
 
 SW5kZXg6IHN5cy9kZXYvYXRoL2lmX2F0aF9kZWJ1Zy5jCj09PT09PT09PT09PT09PT09PT09PT09
 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIHN5cy9kZXYv
 YXRoL2lmX2F0aF9kZWJ1Zy5jCShyZXZpc2lvbiAyMzMwODkpCisrKyBzeXMvZGV2L2F0aC9pZl9h
 dGhfZGVidWcuYwkod29ya2luZyBjb3B5KQpAQCAtMTM1LDE5ICsxMzUsMjMgQEAKIAlwcmludGYo
 IlEldVslM3VdIiwgcW51bSwgaXgpOwogCXdoaWxlIChiZiAhPSBOVUxMKSB7CiAJCWZvciAoaSA9
 IDAsIGRzID0gYmYtPmJmX2Rlc2M7IGkgPCBiZi0+YmZfbnNlZzsgaSsrLCBkcysrKSB7Ci0JCQlw
 cmludGYoIiAoRFMuVjolcCBEUy5QOiVwKSBMOiUwOHggRDolMDh4IEY6JTA0eCVzXG4iCi0JCQkg
 ICAgICAgIiAgICAgICAgVFhGOiAlMDR4IFNlcTogJWQgc3d0cnk6ICVkIEFEREJBVz86ICVkIERP
 QkFXPzogJWRcbiIKLQkJCSAgICAgICAiICAgICAgICAlMDh4ICUwOHggJTA4eCAlMDh4ICUwOHgg
 JTA4eFxuIiwKKwkJCXByaW50ZigiIChEUy5WOiVwIERTLlA6JXApIEw6JTA4eCBEOiUwOHggRjol
 MDR4JXNcbiIsCiAJCQkgICAgZHMsIChjb25zdCBzdHJ1Y3QgYXRoX2Rlc2MgKiliZi0+YmZfZGFk
 ZHIgKyBpLAogCQkJICAgIGRzLT5kc19saW5rLCBkcy0+ZHNfZGF0YSwgYmYtPmJmX3R4ZmxhZ3Ms
 Ci0JCQkgICAgIWRvbmUgPyAiIiA6ICh0cy0+dHNfc3RhdHVzID09IDApID8gIiAqIiA6ICIgISIs
 CisJCQkgICAgIWRvbmUgPyAiIiA6ICh0cy0+dHNfc3RhdHVzID09IDApID8gIiAqIiA6ICIgISIp
 OworCQkJcHJpbnRmKCIgICAgICAgIFRYRjogJTA0eCBTZXE6ICVkIHN3dHJ5OiAlZCBBRERCQVc/
 OiAlZCBET0JBVz86ICVkXG4iLAogCQkJICAgIGJmLT5iZl9zdGF0ZS5iZnNfZmxhZ3MsCiAJCQkg
 ICAgYmYtPmJmX3N0YXRlLmJmc19zZXFubywKIAkJCSAgICBiZi0+YmZfc3RhdGUuYmZzX3JldHJp
 ZXMsCiAJCQkgICAgYmYtPmJmX3N0YXRlLmJmc19hZGRlZGJhdywKLQkJCSAgICBiZi0+YmZfc3Rh
 dGUuYmZzX2RvYmF3LAorCQkJICAgIGJmLT5iZl9zdGF0ZS5iZnNfZG9iYXcpOworCQkJcHJpbnRm
 KCIgICAgICAgIFNFUU5PX0FTU0lHTkVEOiAlZCwgTkVFRF9TRVFOTzogJWRcbiIsCisJCQkgICAg
 YmYtPmJmX3N0YXRlLmJmc19zZXFub19hc3NpZ25lZCwKKwkJCSAgICBiZi0+YmZfc3RhdGUuYmZz
 X25lZWRfc2Vxbm8pOworCQkJcHJpbnRmKCIgICAgICAgICUwOHggJTA4eCAlMDh4ICUwOHggJTA4
 eCAlMDh4XG4iLAogCQkJICAgIGRzLT5kc19jdGwwLCBkcy0+ZHNfY3RsMSwKLQkJCSAgICBkcy0+
 ZHNfaHdbMF0sIGRzLT5kc19od1sxXSwgZHMtPmRzX2h3WzJdLCBkcy0+ZHNfaHdbM10pOworCQkJ
 ICAgIGRzLT5kc19od1swXSwgZHMtPmRzX2h3WzFdLAorCQkJICAgIGRzLT5kc19od1syXSwgZHMt
 PmRzX2h3WzNdKTsKIAkJCWlmIChhaC0+YWhfbWFnaWMgPT0gMHgyMDA2NTQxNikgewogCQkJCXBy
 aW50ZigiICAgICAgICAlMDh4ICUwOHggJTA4eCAlMDh4ICUwOHggJTA4eCAlMDh4ICUwOHhcbiIs
 CiAJCQkJICAgIGRzLT5kc19od1s0XSwgZHMtPmRzX2h3WzVdLCBkcy0+ZHNfaHdbNl0sCkluZGV4
 OiBzeXMvZGV2L2F0aC9pZl9hdGh2YXIuaAo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSBzeXMvZGV2L2F0aC9pZl9h
 dGh2YXIuaAkocmV2aXNpb24gMjMzMDg5KQorKysgc3lzL2Rldi9hdGgvaWZfYXRodmFyLmgJKHdv
 cmtpbmcgY29weSkKQEAgLTIxNSw2ICsyMTUsOCBAQAogCQlpbnQgYmZzX2lzbXJyOjE7CS8qIGRv
 IG11bHRpLXJhdGUgVFggcmV0cnkgKi8KIAkJaW50IGJmc19kb3Byb3Q6MTsJLyogZG8gUlRTL0NU
 UyBiYXNlZCBwcm90ZWN0aW9uICovCiAJCWludCBiZnNfZG9yYXRlbG9va3VwOjE7CS8qIGRvIHJh
 dGUgbG9va3VwIGJlZm9yZSBlYWNoIFRYICovCisJCWludCBiZnNfbmVlZF9zZXFubzoxOwkvKiBu
 ZWVkIHRvIGFzc2lnbiBhIHNlcW5vIGZvciBhZ2dyZWdhdGlvbiAqLworCQlpbnQgYmZzX3NlcW5v
 X2Fzc2lnbmVkOjE7CS8qIHNlcW5vIGhhcyBiZWVuIGFzc2lnbmVkICovCiAJCWludCBiZnNfbmZs
 OwkJLyogbmV4dCBmcmFnbWVudCBsZW5ndGggKi8KIAogCQkvKgpJbmRleDogc3lzL2Rldi9hdGgv
 aWZfYXRoX3R4X2h0LmMKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
 PT09PT09PT09PT09PT09PT09PT09PT09PQotLS0gc3lzL2Rldi9hdGgvaWZfYXRoX3R4X2h0LmMJ
 KHJldmlzaW9uIDIzMzA4OSkKKysrIHN5cy9kZXYvYXRoL2lmX2F0aF90eF9odC5jCSh3b3JraW5n
 IGNvcHkpCkBAIC02NDQsNyArNjQ0LDcgQEAKIGF0aF90eF9mb3JtX2FnZ3Ioc3RydWN0IGF0aF9z
 b2Z0YyAqc2MsIHN0cnVjdCBhdGhfbm9kZSAqYW4sIHN0cnVjdCBhdGhfdGlkICp0aWQsCiAgICAg
 YXRoX2J1ZmhlYWQgKmJmX3EpCiB7Ci0JLy9zdHJ1Y3QgaWVlZTgwMjExX25vZGUgKm5pID0gJmFu
 LT5hbl9ub2RlOworCXN0cnVjdCBpZWVlODAyMTFfbm9kZSAqbmkgPSAmYW4tPmFuX25vZGU7CiAJ
 c3RydWN0IGF0aF9idWYgKmJmLCAqYmZfZmlyc3QgPSBOVUxMLCAqYmZfcHJldiA9IE5VTEw7CiAJ
 aW50IG5mcmFtZXMgPSAwOwogCXVpbnQxNl90IGFnZ3JfbGltaXQgPSAwLCBhbCA9IDAsIGJwYWQg
 PSAwLCBhbF9kZWx0YSwgaF9iYXc7CkBAIC02NTIsNiArNjUyLDcgQEAKIAlpbnQgc3RhdHVzID0g
 QVRIX0FHR1JfRE9ORTsKIAlpbnQgcHJldl9mcmFtZXMgPSAwOwkvKiBYWFggZm9yIEFSNTQxNiBi
 dXJzdCwgbm90IGRvbmUgaGVyZSAqLwogCWludCBwcmV2X2FsID0gMDsJLyogWFhYIGFsc28gZm9y
 IEFSNTQxNiBidXJzdCAqLworCWludCBzZXFubzsKIAogCUFUSF9UWFFfTE9DS19BU1NFUlQoc2Mt
 PnNjX2FjMnFbdGlkLT5hY10pOwogCkBAIC03MDcsMTYgKzcwOCw2IEBACiAJCSAqLwogCiAJCS8q
 Ci0JCSAqIElmIHRoZSBwYWNrZXQgaGFzIGEgc2VxdWVuY2UgbnVtYmVyLCBkbyBub3QKLQkJICog
 c3RlcCBvdXRzaWRlIG9mIHRoZSBibG9jay1hY2sgd2luZG93LgotCQkgKi8KLQkJaWYgKCEgQkFX
 X1dJVEhJTih0YXAtPnR4YV9zdGFydCwgdGFwLT50eGFfd25kLAotCQkgICAgU0VRTk8oYmYtPmJm
 X3N0YXRlLmJmc19zZXFubykpKSB7Ci0JCSAgICBzdGF0dXMgPSBBVEhfQUdHUl9CQVdfQ0xPU0VE
 OwotCQkgICAgYnJlYWs7Ci0JCX0KLQotCQkvKgogCQkgKiBYWFggVE9ETzogQVI1NDE2IGhhcyBh
 biA4SyBhZ2dyZWdhdGlvbiBzaXplIGxpbWl0CiAJCSAqIHdoZW4gUlRTIGlzIGVuYWJsZWQsIGFu
 ZCBSVFMgaXMgcmVxdWlyZWQgZm9yIGR1YWwtc3RyZWFtCiAJCSAqIHJhdGVzLgpAQCAtNzQ0LDYg
 KzczNSw1OCBAQAogCQl9CiAKIAkJLyoKKwkJICogVE9ETzogSWYgaXQncyBfYmVmb3JlXyB0aGUg
 QkFXIGxlZnQgZWRnZSwgY29tcGxhaW4gdmVyeSBsb3VkbHkuCisJCSAqIFRoaXMgbWVhbnMgc29t
 ZXRoaW5nIChlbHNlKSBoYXMgc2xpZCB0aGUgbGVmdCBlZGdlIGFsb25nCisJCSAqIGJlZm9yZSB3
 ZSBnb3QgYSBjaGFuY2UgdG8gYmUgVFhlZC4KKwkJICovCisKKwkJLyoKKwkJICogQ2hlY2sgaWYg
 d2UgaGF2ZSBzcGFjZSBpbiB0aGUgQkFXIGZvciB0aGlzIGZyYW1lIGJlZm9yZQorCQkgKiB3ZSBh
 ZGQgaXQuCisJCSAqCisJCSAqIHNlZSBhdGhfdHhfeG1pdF9hZ2dyKCkgZm9yIG1vcmUgaW5mby4K
 KwkJICovCisJCWlmIChiZi0+YmZfc3RhdGUuYmZzX2RvYmF3KSB7CisJCQlpZiAoISBCQVdfV0lU
 SElOKHRhcC0+dHhhX3N0YXJ0LCB0YXAtPnR4YV93bmQsCisJCQkgICAgbmktPm5pX3R4c2Vxc1ti
 Zi0+YmZfc3RhdGUuYmZzX3RpZF0pKSB7CisJCQkJc3RhdHVzID0gQVRIX0FHR1JfQkFXX0NMT1NF
 RDsKKwkJCQlicmVhazsKKwkJCX0KKwkJCS8qIFhYWCBjaGVjayBmb3IgYmZzX25lZWRfc2Vxbm8/
 ICovCisJCQlpZiAoISBiZi0+YmZfc3RhdGUuYmZzX3NlcW5vX2Fzc2lnbmVkKSB7CisJCQkJc2Vx
 bm8gPSBhdGhfdHhfdGlkX3NlcW5vX2Fzc2lnbihzYywgbmksIGJmLCBiZi0+YmZfbSk7CisJCQkJ
 aWYgKHNlcW5vIDwgMCkgeworCQkJCQlkZXZpY2VfcHJpbnRmKHNjLT5zY19kZXYsCisJCQkJCSAg
 ICAiJXM6IGJmPSVwLCBodWgsIHNlcW5vPS0xP1xuIiwKKwkJCQkJICAgIF9fZnVuY19fLAorCQkJ
 CQkgICAgYmYpOworCQkJCQkvKiBYWFggd2hhdCBjYW4gd2UgZXZlbiBkbyBoZXJlPyAqLworCQkJ
 CX0KKwkJCQkvKiBGbHVzaCBzZXFubyB1cGRhdGUgdG8gUkFNICovCisJCQkJLyoKKwkJCQkgKiBY
 WFggVGhpcyBpcyByZXF1aXJlZCBiZWNhdXNlIHRoZSBkbWFzZXR1cAorCQkJCSAqIFhYWCBpcyBk
 b25lIGVhcmx5IHJhdGhlciB0aGFuIGF0IGRpc3BhdGNoCisJCQkJICogWFhYIHRpbWUuIEV3LCB3
 ZSBzaG91bGQgZml4IHRoaXMhCisJCQkJICovCisJCQkJYnVzX2RtYW1hcF9zeW5jKHNjLT5zY19k
 bWF0LCBiZi0+YmZfZG1hbWFwLAorCQkJCSAgICBCVVNfRE1BU1lOQ19QUkVXUklURSk7CisJCQl9
 CisJCX0KKworCQkvKgorCQkgKiBJZiB0aGUgcGFja2V0IGhhcyBhIHNlcXVlbmNlIG51bWJlciwg
 ZG8gbm90CisJCSAqIHN0ZXAgb3V0c2lkZSBvZiB0aGUgYmxvY2stYWNrIHdpbmRvdy4KKwkJICov
 CisJCWlmICghIEJBV19XSVRISU4odGFwLT50eGFfc3RhcnQsIHRhcC0+dHhhX3duZCwKKwkJICAg
 IFNFUU5PKGJmLT5iZl9zdGF0ZS5iZnNfc2Vxbm8pKSkgeworCQkJZGV2aWNlX3ByaW50ZihzYy0+
 c2NfZGV2LAorCQkJICAgICIlczogYmY9JXAsIHNlcW5vPSVkLCBvdXRzaWRlPyFcbiIsCisJCQkg
 ICAgX19mdW5jX18sIGJmLCBTRVFOTyhiZi0+YmZfc3RhdGUuYmZzX3NlcW5vKSk7CisJCQlzdGF0
 dXMgPSBBVEhfQUdHUl9CQVdfQ0xPU0VEOworCQkJYnJlYWs7CisJCX0KKworCQkvKgogCQkgKiB0
 aGlzIHBhY2tldCBpcyBwYXJ0IG9mIGFuIGFnZ3JlZ2F0ZS4KIAkJICovCiAJCUFUSF9UWFFfUkVN
 T1ZFKHRpZCwgYmYsIGJmX2xpc3QpOwpJbmRleDogc3lzL2Rldi9hdGgvaWZfYXRoX3R4LmMKPT09
 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
 PT09PT09PQotLS0gc3lzL2Rldi9hdGgvaWZfYXRoX3R4LmMJKHJldmlzaW9uIDIzMzA4OSkKKysr
 IHN5cy9kZXYvYXRoL2lmX2F0aF90eC5jCSh3b3JraW5nIGNvcHkpCkBAIC0xMDksOCArMTA5LDYg
 QEAKICAgICBpbnQgdGlkKTsKIHN0YXRpYyBpbnQgYXRoX3R4X2FtcGR1X3J1bm5pbmcoc3RydWN0
 IGF0aF9zb2Z0YyAqc2MsIHN0cnVjdCBhdGhfbm9kZSAqYW4sCiAgICAgaW50IHRpZCk7Ci1zdGF0
 aWMgaWVlZTgwMjExX3NlcSBhdGhfdHhfdGlkX3NlcW5vX2Fzc2lnbihzdHJ1Y3QgYXRoX3NvZnRj
 ICpzYywKLSAgICBzdHJ1Y3QgaWVlZTgwMjExX25vZGUgKm5pLCBzdHJ1Y3QgYXRoX2J1ZiAqYmYs
 IHN0cnVjdCBtYnVmICptMCk7CiBzdGF0aWMgaW50IGF0aF90eF9hY3Rpb25fZnJhbWVfb3ZlcnJp
 ZGVfcXVldWUoc3RydWN0IGF0aF9zb2Z0YyAqc2MsCiAgICAgc3RydWN0IGllZWU4MDIxMV9ub2Rl
 ICpuaSwgc3RydWN0IG1idWYgKm0wLCBpbnQgKnRpZCk7CiAKQEAgLTEzNzYsNyArMTM3NCw3IEBA
 CiAJaW50IGlzbWNhc3Q7CiAJY29uc3Qgc3RydWN0IGllZWU4MDIxMV9mcmFtZSAqd2g7CiAJaW50
 IGlzX2FtcGR1LCBpc19hbXBkdV90eCwgaXNfYW1wZHVfcGVuZGluZzsKLQlpZWVlODAyMTFfc2Vx
 IHNlcW5vOworCS8vaWVlZTgwMjExX3NlcSBzZXFubzsKIAl1aW50OF90IHR5cGUsIHN1YnR5cGU7
 CiAKIAkvKgpAQCAtMTQyOCw4ICsxNDI2LDkgQEAKIAlpc19hbXBkdV9wZW5kaW5nID0gYXRoX3R4
 X2FtcGR1X3BlbmRpbmcoc2MsIEFUSF9OT0RFKG5pKSwgdGlkKTsKIAlpc19hbXBkdSA9IGlzX2Ft
 cGR1X3R4IHwgaXNfYW1wZHVfcGVuZGluZzsKIAotCURQUklOVEYoc2MsIEFUSF9ERUJVR19TV19U
 WCwgIiVzOiB0aWQ9JWQsIGFjPSVkLCBpc19hbXBkdT0lZFxuIiwKLQkgICAgX19mdW5jX18sIHRp
 ZCwgcHJpLCBpc19hbXBkdSk7CisJRFBSSU5URihzYywgQVRIX0RFQlVHX1NXX1RYLAorCSAgICAi
 JXM6IGJmPSVwLCB0aWQ9JWQsIGFjPSVkLCBpc19hbXBkdT0lZFxuIiwKKwkgICAgX19mdW5jX18s
 IGJmLCB0aWQsIHByaSwgaXNfYW1wZHUpOwogCiAJLyogTXVsdGljYXN0IGZyYW1lcyBnbyBvbnRv
 IHRoZSBzb2Z0d2FyZSBtdWx0aWNhc3QgcXVldWUgKi8KIAlpZiAoaXNtY2FzdCkKQEAgLTE0NDcs
 NiArMTQ0Niw5IEBACiAJLyogRG8gdGhlIGdlbmVyaWMgZnJhbWUgc2V0dXAgKi8KIAkvKiBYWFgg
 c2hvdWxkIGp1c3QgYnplcm8gdGhlIGJmX3N0YXRlPyAqLwogCWJmLT5iZl9zdGF0ZS5iZnNfZG9i
 YXcgPSAwOworCWJmLT5iZl9zdGF0ZS5iZnNfc2Vxbm9fYXNzaWduZWQgPSAwOworCWJmLT5iZl9z
 dGF0ZS5iZnNfbmVlZF9zZXFubyA9IDA7CisJYmYtPmJmX3N0YXRlLmJmc19zZXFubyA9IC0xOwkv
 KiBYWFggZGVidWdnaW5nICovCiAKIAkvKiBBLU1QRFUgVFg/IE1hbnVhbGx5IHNldCBzZXF1ZW5j
 ZSBudW1iZXIgKi8KIAkvKiBEb24ndCBkbyBpdCB3aGlsc3QgcGVuZGluZzsgdGhlIG5ldDgwMjEx
 IGxheWVyIHN0aWxsIGFzc2lnbnMgdGhlbSAqLwpAQCAtMTQ1OSwxOSArMTQ2MSwyNyBAQAogCQkg
 KiBkb24ndCBnZXQgYSBzZXF1ZW5jZSBudW1iZXIgZnJvbSB0aGUgY3VycmVudAogCQkgKiBUSUQg
 YW5kIHRodXMgbWVzcyB3aXRoIHRoZSBCQVcuCiAJCSAqLwotCQlzZXFubyA9IGF0aF90eF90aWRf
 c2Vxbm9fYXNzaWduKHNjLCBuaSwgYmYsIG0wKTsKKwkJLy9zZXFubyA9IGF0aF90eF90aWRfc2Vx
 bm9fYXNzaWduKHNjLCBuaSwgYmYsIG0wKTsKIAkJaWYgKElFRUU4MDIxMV9RT1NfSEFTX1NFUSh3
 aCkgJiYKIAkJICAgIHN1YnR5cGUgIT0gSUVFRTgwMjExX0ZDMF9TVUJUWVBFX1FPU19OVUxMKSB7
 CiAJCQliZi0+YmZfc3RhdGUuYmZzX2RvYmF3ID0gMTsKKwkJCWJmLT5iZl9zdGF0ZS5iZnNfbmVl
 ZF9zZXFubyA9IDE7CiAJCX0KIAkJQVRIX1RYUV9VTkxPQ0sodHhxKTsKKwl9IGVsc2UgeworCQkv
 KiBObyBBTVBEVSBUWCwgd2UndmUgYmVlbiBhc3NpZ25lZCBhIHNlcXVlbmNlIG51bWJlci4gKi8K
 KwkJaWYgKElFRUU4MDIxMV9RT1NfSEFTX1NFUSh3aCkpIHsKKwkJCWJmLT5iZl9zdGF0ZS5iZnNf
 c2Vxbm9fYXNzaWduZWQgPSAxOworCQkJYmYtPmJmX3N0YXRlLmJmc19zZXFubyA9CisJCQkgICAg
 TV9TRVFOT19HRVQobTApIDw8IElFRUU4MDIxMV9TRVFfU0VRX1NISUZUOworCQl9CiAJfQogCiAJ
 LyoKIAkgKiBJZiBuZWVkZWQsIHRoZSBzZXF1ZW5jZSBudW1iZXIgaGFzIGJlZW4gYXNzaWduZWQu
 CiAJICogU3F1aXJyZWwgaXQgYXdheSBzb21ld2hlcmUgZWFzeSB0byBnZXQgdG8uCiAJICovCi0J
 YmYtPmJmX3N0YXRlLmJmc19zZXFubyA9IE1fU0VRTk9fR0VUKG0wKSA8PCBJRUVFODAyMTFfU0VR
 X1NFUV9TSElGVDsKKwkvL2JmLT5iZl9zdGF0ZS5iZnNfc2Vxbm8gPSBNX1NFUU5PX0dFVChtMCkg
 PDwgSUVFRTgwMjExX1NFUV9TRVFfU0hJRlQ7CiAKIAkvKiBJcyBhbXBkdSBwZW5kaW5nPyBmZXRj
 aCB0aGUgc2Vxbm8gYW5kIHByaW50IGl0IG91dCAqLwogCWlmIChpc19hbXBkdV9wZW5kaW5nKQpA
 QCAtMTQ4OCw2ICsxNDk4LDEwIEBACiAJLyogQXQgdGhpcyBwb2ludCBtMCBjb3VsZCBoYXZlIGNo
 YW5nZWQhICovCiAJbTAgPSBiZi0+YmZfbTsKIAorCURQUklOVEYoc2MsIEFUSF9ERUJVR19TV19U
 WCwKKwkgICAgIiVzOiBET05FOiBiZj0lcCwgdGlkPSVkLCBhYz0lZCwgaXNfYW1wZHU9JWQsIGRv
 YmF3PSVkLCBzZXFubz0lZFxuIiwKKwkgICAgX19mdW5jX18sIGJmLCB0aWQsIHByaSwgaXNfYW1w
 ZHUsIGJmLT5iZl9zdGF0ZS5iZnNfZG9iYXcsIE1fU0VRTk9fR0VUKG0wKSk7CisKICNpZiAxCiAJ
 LyoKIAkgKiBJZiBpdCdzIGEgbXVsdGljYXN0IGZyYW1lLCBkbyBhIGRpcmVjdC1kaXNwYXRjaCB0
 byB0aGUKQEAgLTE1MDYsNiArMTUyMCw4IEBACiAJICogcmVhY2hlZC4pCiAJICovCiAJaWYgKHR4
 cSA9PSAmYXZwLT5hdl9tY2FzdHEpIHsKKwkJRFBSSU5URihzYywgQVRIX0RFQlVHX1NXX1RYX0NU
 UkwsCisJCSAgICAiJXM6IGJmPSVwOiBtY2FzdHE6IFRYJ2luZ1xuIiwgX19mdW5jX18sIGJmKTsK
 IAkJQVRIX1RYUV9MT0NLKHR4cSk7CiAJCWF0aF90eF94bWl0X25vcm1hbChzYywgdHhxLCBiZik7
 CiAJCUFUSF9UWFFfVU5MT0NLKHR4cSk7CkBAIC0xNTE4LDYgKzE1MzQsOCBAQAogCQlBVEhfVFhR
 X1VOTE9DSyh0eHEpOwogCX0gZWxzZSB7CiAJCS8qIGFkZCB0byBzb2Z0d2FyZSBxdWV1ZSAqLwor
 CQlEUFJJTlRGKHNjLCBBVEhfREVCVUdfU1dfVFhfQ1RSTCwKKwkJICAgICIlczogYmY9JXA6IHN3
 cTogVFgnaW5nXG4iLCBfX2Z1bmNfXywgYmYpOwogCQlhdGhfdHhfc3dxKHNjLCBuaSwgdHhxLCBi
 Zik7CiAJfQogI2Vsc2UKQEAgLTE5NjYsMjYgKzE5ODQsNTEgQEAKIAlpZiAoYmYtPmJmX3N0YXRl
 LmJmc19pc3JldHJpZWQpCiAJCXJldHVybjsKIAorCS8qCisJICogSWYgdGhpcyBvY2N1cnMgd2Un
 cmUgaW4gYSBsb3Qgb2YgdHJvdWJsZS4gIFdlIHNob3VsZCB0cnkgdG8KKwkgKiByZWNvdmVyIGZy
 b20gdGhpcyB3aXRob3V0IHRoZSBzZXNzaW9uIGhhbmdpbmc/CisJICovCisJaWYgKCEgYmYtPmJm
 X3N0YXRlLmJmc19zZXFub19hc3NpZ25lZCkgeworCQlkZXZpY2VfcHJpbnRmKHNjLT5zY19kZXYs
 CisJCSAgICAiJXM6IGJmPSVwLCBzZXFub19hc3NpZ25lZCBpcyAwPyFcbiIsIF9fZnVuY19fLCBi
 Zik7CisJCXJldHVybjsKKwl9CisKIAl0YXAgPSBhdGhfdHhfZ2V0X3R4X3RpZChhbiwgdGlkLT50
 aWQpOwogCiAJaWYgKGJmLT5iZl9zdGF0ZS5iZnNfYWRkZWRiYXcpCiAJCWRldmljZV9wcmludGYo
 c2MtPnNjX2RldiwKLQkJICAgICIlczogcmUtYWRkZWQ/IHRpZD0lZCwgc2Vxbm8gJWQ7IHdpbmRv
 dyAlZDolZDsgIgorCQkgICAgIiVzOiByZS1hZGRlZD8gYmY9JXAsIHRpZD0lZCwgc2Vxbm8gJWQ7
 IHdpbmRvdyAlZDolZDsgIgogCQkgICAgImJhdyBoZWFkPSVkIHRhaWw9JWRcbiIsCi0JCSAgICBf
 X2Z1bmNfXywgdGlkLT50aWQsIFNFUU5PKGJmLT5iZl9zdGF0ZS5iZnNfc2Vxbm8pLAorCQkgICAg
 X19mdW5jX18sIGJmLCB0aWQtPnRpZCwgU0VRTk8oYmYtPmJmX3N0YXRlLmJmc19zZXFubyksCiAJ
 CSAgICB0YXAtPnR4YV9zdGFydCwgdGFwLT50eGFfd25kLCB0aWQtPmJhd19oZWFkLAogCQkgICAg
 dGlkLT5iYXdfdGFpbCk7CiAKIAkvKgorCSAqIFZlcmlmeSB0aGF0IHRoZSBnaXZlbiBzZXF1ZW5j
 ZSBudW1iZXIgaXMgbm90IG91dHNpZGUgb2YgdGhlCisJICogQkFXLiAgQ29tcGxhaW4gbG91ZGx5
 IGlmIHRoYXQncyB0aGUgY2FzZS4KKwkgKi8KKwlpZiAoISBCQVdfV0lUSElOKHRhcC0+dHhhX3N0
 YXJ0LCB0YXAtPnR4YV93bmQsCisJICAgIFNFUU5PKGJmLT5iZl9zdGF0ZS5iZnNfc2Vxbm8pKSkg
 eworCQlkZXZpY2VfcHJpbnRmKHNjLT5zY19kZXYsCisJCSAgICAiJXM6IGJmPSVwOiBvdXRzaWRl
 IG9mIEJBVz8/IHRpZD0lZCwgc2Vxbm8gJWQ7IHdpbmRvdyAlZDolZDsgIgorCQkgICAgImJhdyBo
 ZWFkPSVkIHRhaWw9JWRcbiIsCisJCSAgICBfX2Z1bmNfXywgYmYsIHRpZC0+dGlkLCBTRVFOTyhi
 Zi0+YmZfc3RhdGUuYmZzX3NlcW5vKSwKKwkJICAgIHRhcC0+dHhhX3N0YXJ0LCB0YXAtPnR4YV93
 bmQsIHRpZC0+YmF3X2hlYWQsCisJCSAgICB0aWQtPmJhd190YWlsKTsKKworCX0KKworCS8qCiAJ
 ICogbmktPm5pX3R4c2Vxc1tdIGlzIHRoZSBjdXJyZW50bHkgYWxsb2NhdGVkIHNlcW5vLgogCSAq
 IHRoZSB0eGEgc3RhdGUgY29udGFpbnMgdGhlIGN1cnJlbnQgYmF3IHN0YXJ0LgogCSAqLwogCWlu
 ZGV4ICA9IEFUSF9CQV9JTkRFWCh0YXAtPnR4YV9zdGFydCwgU0VRTk8oYmYtPmJmX3N0YXRlLmJm
 c19zZXFubykpOwogCWNpbmRleCA9ICh0aWQtPmJhd19oZWFkICsgaW5kZXgpICYgKEFUSF9USURf
 TUFYX0JVRlMgLSAxKTsKIAlEUFJJTlRGKHNjLCBBVEhfREVCVUdfU1dfVFhfQkFXLAotCSAgICAi
 JXM6IHRpZD0lZCwgc2Vxbm8gJWQ7IHdpbmRvdyAlZDolZDsgaW5kZXg9JWQgY2luZGV4PSVkICIK
 KwkgICAgIiVzOiBiZj0lcCwgdGlkPSVkLCBzZXFubyAlZDsgd2luZG93ICVkOiVkOyBpbmRleD0l
 ZCBjaW5kZXg9JWQgIgogCSAgICAiYmF3IGhlYWQ9JWQgdGFpbD0lZFxuIiwKLQkgICAgX19mdW5j
 X18sIHRpZC0+dGlkLCBTRVFOTyhiZi0+YmZfc3RhdGUuYmZzX3NlcW5vKSwKKwkgICAgX19mdW5j
 X18sIGJmLCB0aWQtPnRpZCwgU0VRTk8oYmYtPmJmX3N0YXRlLmJmc19zZXFubyksCiAJICAgIHRh
 cC0+dHhhX3N0YXJ0LCB0YXAtPnR4YV93bmQsIGluZGV4LCBjaW5kZXgsIHRpZC0+YmF3X2hlYWQs
 CiAJICAgIHRpZC0+YmF3X3RhaWwpOwogCkBAIC0yMDg4LDkgKzIxMzEsOSBAQAogCWNpbmRleCA9
 ICh0aWQtPmJhd19oZWFkICsgaW5kZXgpICYgKEFUSF9USURfTUFYX0JVRlMgLSAxKTsKIAogCURQ
 UklOVEYoc2MsIEFUSF9ERUJVR19TV19UWF9CQVcsCi0JICAgICIlczogdGlkPSVkLCBiYXc9JWQ6
 JWQsIHNlcW5vPSVkLCBpbmRleD0lZCwgY2luZGV4PSVkLCAiCisJICAgICIlczogYmY9JXA6IHRp
 ZD0lZCwgYmF3PSVkOiVkLCBzZXFubz0lZCwgaW5kZXg9JWQsIGNpbmRleD0lZCwgIgogCSAgICAi
 YmF3IGhlYWQ9JWQsIHRhaWw9JWRcbiIsCi0JICAgIF9fZnVuY19fLCB0aWQtPnRpZCwgdGFwLT50
 eGFfc3RhcnQsIHRhcC0+dHhhX3duZCwgc2Vxbm8sIGluZGV4LAorCSAgICBfX2Z1bmNfXywgYmYs
 IHRpZC0+dGlkLCB0YXAtPnR4YV9zdGFydCwgdGFwLT50eGFfd25kLCBzZXFubywgaW5kZXgsCiAJ
 ICAgIGNpbmRleCwgdGlkLT5iYXdfaGVhZCwgdGlkLT5iYXdfdGFpbCk7CiAKIAkvKgpAQCAtMjE3
 MSwxMSArMjIxNCw0MiBAQAogfQogCiAvKgorICogUmV0dXJuIHdoZXRoZXIgYSBzZXF1ZW5jZSBu
 dW1iZXIgaXMgYWN0dWFsbHkgcmVxdWlyZWQuCisgKgorICogQSBzZXF1ZW5jZSBudW1iZXIgbXVz
 dCBvbmx5IGJlIGFsbG9jYXRlZCBhdCB0aGUgdGltZSB0aGF0IGEgZnJhbWUKKyAqIGlzIGNvbnNp
 ZGVyZWQgZm9yIGFkZGl0aW9uIHRvIHRoZSBCQVcvYWdncmVnYXRlIGFuZCBiZWluZyBUWGVkLgor
 ICogVGhlIHNlcXVlbmNlIG51bWJlciBtdXN0IG5vdCBiZSBhbGxvY2F0ZWQgYmVmb3JlIHRoZSBm
 cmFtZQorICogaXMgYWRkZWQgdG8gdGhlIEJBVyAocHJvdGVjdGVkIGJ5IHRoZSBzYW1lIGxvY2sg
 aW5zdGFuY2UpCisgKiBvdGhlcndpc2UgYSB0aGUgbXVsdGktZW50cmFudCBUWCBwYXRoIG1heSBy
 ZXN1bHQgaW4gYSBsYXRlciBzZXFubworICogYmVpbmcgYWRkZWQgdG8gdGhlIEJBVyBmaXJzdC4g
 IFRoZSBzdWJzZXF1ZW50IGFkZGl0aW9uIG9mIHRoZQorICogZWFybGllciBzZXFubyB3b3VsZCB0
 aGVuIG5vdCBnbyBpbnRvIHRoZSBCQVcgYXMgaXQncyBub3cgb3V0c2lkZQorICogb2Ygc2FpZCBC
 QVcuCisgKgorICogVGhpcyByb3V0aW5lIGlzIHVzZWQgYnkgYXRoX3R4X3N0YXJ0KCkgdG8gbWFy
 ayB3aGV0aGVyIHRoZSBmcmFtZQorICogc2hvdWxkIGdldCBhIHNlcXVlbmNlIG51bWJlciBiZWZv
 cmUgYWRkaW5nIGl0IHRvIHRoZSBCQVcuCisgKgorICogVGhlbiB0aGUgYWN0dWFsIGFnZ3JlZ2F0
 ZSBUWCByb3V0aW5lcyB3aWxsIGNoZWNrIHdoZXRoZXIgdGhpcworICogZmxhZyBpcyBzZXQgYW5k
 IGlmIHRoZSBmcmFtZSBuZWVkcyB0byBnbyBpbnRvIHRoZSBCQVcsIGl0J2xsCisgKiBoYXZlIGEg
 c2VxdWVuY2UgbnVtYmVyIGFsbG9jYXRlZCBmb3IgaXQuCisgKi8KKyNpZiAwCitzdGF0aWMgaW50
 CithdGhfdHhfc2Vxbm9fcmVxdWlyZWQoc3RydWN0IGF0aF9zb2Z0YyAqc2MsIHN0cnVjdCBpZWVl
 ODAyMTFfbm9kZSAqbmksCisgICAgc3RydWN0IGF0aF9idWYgKmJmLCBzdHJ1Y3QgbWJ1ZiAqbTAp
 Cit7Cit9CisjZW5kaWYKKworLyoKICAqIEFzc2lnbiBhIHNlcXVlbmNlIG51bWJlciBtYW51YWxs
 eSB0byB0aGUgZ2l2ZW4gZnJhbWUuCiAgKgogICogVGhpcyBzaG91bGQgb25seSBiZSBjYWxsZWQg
 Zm9yIEEtTVBEVSBUWCBmcmFtZXMuCisgKgorICogSWYgdGhpcyBpcyBjYWxsZWQgYWZ0ZXIgdGhl
 IGluaXRpYWwgZnJhbWUgc2V0dXAsIG1ha2Ugc3VyZSB5b3UndmUgZmx1c2hlZAorICogdGhlIERN
 QSBtYXAgb3IgeW91J2xsIHJpc2sgc2VuZGluZyBzdGFsZSBkYXRhIHRvIHRoZSBOSUMuICBUaGlz
 IHJvdXRpbmUKKyAqIHVwZGF0ZXMgdGhlIGFjdHVhbCBmcmFtZSBjb250ZW50cyB3aXRoIHRoZSBy
 ZWxldmFudCBzZXFuby4KICAqLwotc3RhdGljIGllZWU4MDIxMV9zZXEKK2ludAogYXRoX3R4X3Rp
 ZF9zZXFub19hc3NpZ24oc3RydWN0IGF0aF9zb2Z0YyAqc2MsIHN0cnVjdCBpZWVlODAyMTFfbm9k
 ZSAqbmksCiAgICAgc3RydWN0IGF0aF9idWYgKmJmLCBzdHJ1Y3QgbWJ1ZiAqbTApCiB7CkBAIC0y
 MTg4LDkgKzIyNjIsMjMgQEAKIAl3aCA9IG10b2QobTAsIHN0cnVjdCBpZWVlODAyMTFfZnJhbWUg
 Kik7CiAJcHJpID0gTV9XTUVfR0VUQUMobTApOwkJCS8qIGhvbm9yIGNsYXNzaWZpY2F0aW9uICov
 CiAJdGlkID0gV01FX0FDX1RPX1RJRChwcmkpOwotCURQUklOVEYoc2MsIEFUSF9ERUJVR19TV19U
 WCwgIiVzOiBwcmk9JWQsIHRpZD0lZCwgcW9zIGhhcyBzZXE9JWRcbiIsCi0JICAgIF9fZnVuY19f
 LCBwcmksIHRpZCwgSUVFRTgwMjExX1FPU19IQVNfU0VRKHdoKSk7CisJRFBSSU5URihzYywgQVRI
 X0RFQlVHX1NXX1RYLAorCSAgICAiJXM6IGJmPSVwLCBwcmk9JWQsIHRpZD0lZCwgcW9zIGhhcyBz
 ZXE9JWRcbiIsCisJICAgIF9fZnVuY19fLCBiZiwgcHJpLCB0aWQsIElFRUU4MDIxMV9RT1NfSEFT
 X1NFUSh3aCkpOwogCisJaWYgKCEgYmYtPmJmX3N0YXRlLmJmc19uZWVkX3NlcW5vKSB7CisJCWRl
 dmljZV9wcmludGYoc2MtPnNjX2RldiwgIiVzOiBiZj0lcDogbmVlZF9zZXFubyBub3Qgc2V0PyFc
 biIsCisJCSAgICBfX2Z1bmNfXywgYmYpOworCQlyZXR1cm4gLTE7CisJfQorCS8qIFhYWCBjaGVj
 ayBmb3IgYmZzX25lZWRfc2Vxbm8/ICovCisJaWYgKGJmLT5iZl9zdGF0ZS5iZnNfc2Vxbm9fYXNz
 aWduZWQpIHsKKwkJZGV2aWNlX3ByaW50ZihzYy0+c2NfZGV2LAorCQkgICAgIiVzOiBiZj0lcDog
 c2Vxbm8gYWxyZWFkeSBhc3NpZ25lZCAoJWQpPyFcbiIsCisJCSAgICBfX2Z1bmNfXywgYmYsIFNF
 UU5PKGJmLT5iZl9zdGF0ZS5iZnNfc2Vxbm8pKTsKKwkJcmV0dXJuIGJmLT5iZl9zdGF0ZS5iZnNf
 c2Vxbm8gPj4gSUVFRTgwMjExX1NFUV9TRVFfU0hJRlQ7CisJfQorCiAJLyogWFhYIElzIGl0IGEg
 Y29udHJvbCBmcmFtZT8gSWdub3JlICovCiAKIAkvKiBEb2VzIHRoZSBwYWNrZXQgcmVxdWlyZSBh
 IHNlcXVlbmNlIG51bWJlcj8gKi8KQEAgLTIyMTcsOSArMjMwNSwxNCBAQAogCX0KIAkqKHVpbnQx
 Nl90ICopJndoLT5pX3NlcVswXSA9IGh0b2xlMTYoc2Vxbm8gPDwgSUVFRTgwMjExX1NFUV9TRVFf
 U0hJRlQpOwogCU1fU0VRTk9fU0VUKG0wLCBzZXFubyk7CisJYmYtPmJmX3N0YXRlLmJmc19zZXFu
 byA9IHNlcW5vIDw8IElFRUU4MDIxMV9TRVFfU0VRX1NISUZUOworCWJmLT5iZl9zdGF0ZS5iZnNf
 c2Vxbm9fYXNzaWduZWQgPSAxOwogCiAJLyogUmV0dXJuIHNvIGNhbGxlciBjYW4gZG8gc29tZXRo
 aW5nIHdpdGggaXQgaWYgbmVlZGVkICovCi0JRFBSSU5URihzYywgQVRIX0RFQlVHX1NXX1RYLCAi
 JXM6ICAtPiBzZXFubz0lZFxuIiwgX19mdW5jX18sIHNlcW5vKTsKKwlEUFJJTlRGKHNjLCBBVEhf
 REVCVUdfU1dfVFgsICIlczogYmY9JXA6ICAtPiBzZXFubz0lZFxuIiwKKwkgICAgX19mdW5jX18s
 CisJICAgIGJmLAorCSAgICBzZXFubyk7CiAJcmV0dXJuIHNlcW5vOwogfQogCkBAIC0yMjMxLDkg
 KzIzMjQsMTEgQEAKIHN0YXRpYyB2b2lkCiBhdGhfdHhfeG1pdF9hZ2dyKHN0cnVjdCBhdGhfc29m
 dGMgKnNjLCBzdHJ1Y3QgYXRoX25vZGUgKmFuLCBzdHJ1Y3QgYXRoX2J1ZiAqYmYpCiB7CisJc3Ry
 dWN0IGllZWU4MDIxMV9ub2RlICpuaSA9ICZhbi0+YW5fbm9kZTsKIAlzdHJ1Y3QgYXRoX3RpZCAq
 dGlkID0gJmFuLT5hbl90aWRbYmYtPmJmX3N0YXRlLmJmc190aWRdOwogCXN0cnVjdCBhdGhfdHhx
 ICp0eHEgPSBiZi0+YmZfc3RhdGUuYmZzX3R4cTsKIAlzdHJ1Y3QgaWVlZTgwMjExX3R4X2FtcGR1
 ICp0YXA7CisJaW50IHNlcW5vOwogCiAJQVRIX1RYUV9MT0NLX0FTU0VSVCh0eHEpOwogCkBAIC0y
 MjQ1LDEwICsyMzQwLDYzIEBACiAJCXJldHVybjsKIAl9CiAKKwkvKgorCSAqIFRPRE86IElmIGl0
 J3MgX2JlZm9yZV8gdGhlIEJBVyBsZWZ0IGVkZ2UsIGNvbXBsYWluIHZlcnkgbG91ZGx5LgorCSAq
 IFRoaXMgbWVhbnMgc29tZXRoaW5nIChlbHNlKSBoYXMgc2xpZCB0aGUgbGVmdCBlZGdlIGFsb25n
 CisJICogYmVmb3JlIHdlIGdvdCBhIGNoYW5jZSB0byBiZSBUWGVkLgorCSAqLworCisJLyoKKwkg
 KiBJcyB0aGVyZSBzcGFjZSBpbiB0aGlzIEJBVyBmb3IgYW5vdGhlciBmcmFtZT8KKwkgKiBJZiBu
 b3QsIGRvbid0IGJvdGhlciB0cnlpbmcgdG8gc2NoZWR1bGUgaXQ7IGp1c3QKKwkgKiB0aHJvdyBp
 dCBiYWNrIG9uIHRoZSBxdWV1ZS4KKwkgKgorCSAqIElmIHdlIGFsbG9jYXRlIHRoZSBzZXF1ZW5j
 ZSBudW1iZXIgYmVmb3JlIHdlIGFkZAorCSAqIGl0IHRvIHRoZSBCQVcsIHdlIHJpc2sgcmFjaW5n
 IHdpdGggYW5vdGhlciBUWAorCSAqIHRocmVhZCB0aGF0IGdldHMgaW4gYSBmcmFtZSBpbnRvIHRo
 ZSBCQVcgd2l0aAorCSAqIHNlcW5vIGdyZWF0ZXIgdGhhbiBvdXJzLiAgV2UnZCB0aGVuIGZhaWwg
 dGhlCisJICogYmVsb3cgY2hlY2sgYW5kIHRocm93IHRoZSBmcmFtZSBvbiB0aGUgdGFpbCBvZgor
 CSAqIHRoZSBxdWV1ZS4gIFRoZSBzZW5kZXIgd291bGQgdGhlbiBoYXZlIGEgaG9sZS4KKwkgKgor
 CSAqIFhYWCBhZ2Fpbiwgd2UncmUgcHJvdGVjdGluZyBuaS0+bmlfdHhzZXFzW3RpZF0KKwkgKiBi
 ZWhpbmQgdGhpcyBoYXJkd2FyZSBUWFEgbG9jaywgbGlrZSB0aGUgcmVzdCBvZgorCSAqIHRoZSBU
 SURzIHRoYXQgbWFwIHRvIGl0LiAgVWdoLgorCSAqLworCWlmIChiZi0+YmZfc3RhdGUuYmZzX2Rv
 YmF3KSB7CisJCWlmICghIEJBV19XSVRISU4odGFwLT50eGFfc3RhcnQsIHRhcC0+dHhhX3duZCwK
 KwkJICAgIG5pLT5uaV90eHNlcXNbYmYtPmJmX3N0YXRlLmJmc190aWRdKSkgeworCQkJQVRIX1RY
 UV9JTlNFUlRfVEFJTCh0aWQsIGJmLCBiZl9saXN0KTsKKwkJCWF0aF90eF90aWRfc2NoZWQoc2Ms
 IHRpZCk7CisJCQlyZXR1cm47CisJCX0KKwkJaWYgKCEgYmYtPmJmX3N0YXRlLmJmc19zZXFub19h
 c3NpZ25lZCkgeworCQkJc2Vxbm8gPSBhdGhfdHhfdGlkX3NlcW5vX2Fzc2lnbihzYywgbmksIGJm
 LCBiZi0+YmZfbSk7CisJCQlpZiAoc2Vxbm8gPCAwKSB7CisJCQkJZGV2aWNlX3ByaW50ZihzYy0+
 c2NfZGV2LAorCQkJCSAgICAiJXM6IGJmPSVwLCBodWgsIHNlcW5vPS0xP1xuIiwKKwkJCQkgICAg
 X19mdW5jX18sCisJCQkJICAgIGJmKTsKKwkJCQkvKiBYWFggd2hhdCBjYW4gd2UgZXZlbiBkbyBo
 ZXJlPyAqLworCQkJfQorCQkJLyogRmx1c2ggc2Vxbm8gdXBkYXRlIHRvIFJBTSAqLworCQkJLyoK
 KwkJCSAqIFhYWCBUaGlzIGlzIHJlcXVpcmVkIGJlY2F1c2UgdGhlIGRtYXNldHVwCisJCQkgKiBY
 WFggaXMgZG9uZSBlYXJseSByYXRoZXIgdGhhbiBhdCBkaXNwYXRjaAorCQkJICogWFhYIHRpbWUu
 IEV3LCB3ZSBzaG91bGQgZml4IHRoaXMhCisJCQkgKi8KKwkJCWJ1c19kbWFtYXBfc3luYyhzYy0+
 c2NfZG1hdCwgYmYtPmJmX2RtYW1hcCwKKwkJCSAgICBCVVNfRE1BU1lOQ19QUkVXUklURSk7CisJ
 CX0KKwl9CisKIAkvKiBvdXRzaWRlIGJhdz8gcXVldWUgKi8KIAlpZiAoYmYtPmJmX3N0YXRlLmJm
 c19kb2JhdyAmJgogCSAgICAoISBCQVdfV0lUSElOKHRhcC0+dHhhX3N0YXJ0LCB0YXAtPnR4YV93
 bmQsCiAJICAgIFNFUU5PKGJmLT5iZl9zdGF0ZS5iZnNfc2Vxbm8pKSkpIHsKKwkJZGV2aWNlX3By
 aW50ZihzYy0+c2NfZGV2LAorCQkgICAgIiVzOiBiZj0lcCwgc2hvdWxkbid0IGJlIG91dHNpZGUg
 QkFXIG5vdz8hXG4iLAorCQkgICAgX19mdW5jX18sCisJCSAgICBiZik7CiAJCUFUSF9UWFFfSU5T
 RVJUX1RBSUwodGlkLCBiZiwgYmZfbGlzdCk7CiAJCWF0aF90eF90aWRfc2NoZWQoc2MsIHRpZCk7
 CiAJCXJldHVybjsKQEAgLTIzMDMsOCArMjQ1MSw4IEBACiAJdGlkID0gYXRoX3R4X2dldHRpZChz
 YywgbTApOwogCWF0aWQgPSAmYW4tPmFuX3RpZFt0aWRdOwogCi0JRFBSSU5URihzYywgQVRIX0RF
 QlVHX1NXX1RYLCAiJXM6IGJmPSVwLCBwcmk9JWQsIHRpZD0lZCwgcW9zPSVkXG4iLAotCSAgICBf
 X2Z1bmNfXywgYmYsIHByaSwgdGlkLCBJRUVFODAyMTFfUU9TX0hBU19TRVEod2gpKTsKKwlEUFJJ
 TlRGKHNjLCBBVEhfREVCVUdfU1dfVFgsICIlczogYmY9JXAsIHByaT0lZCwgdGlkPSVkLCBxb3M9
 JWQsIHNlcW5vPSVkXG4iLAorCSAgICBfX2Z1bmNfXywgYmYsIHByaSwgdGlkLCBJRUVFODAyMTFf
 UU9TX0hBU19TRVEod2gpLCBTRVFOTyhiZi0+YmZfc3RhdGUuYmZzX3NlcW5vKSk7CiAKIAkvKiBT
 ZXQgbG9jYWwgcGFja2V0IHN0YXRlLCB1c2VkIHRvIHF1ZXVlIHBhY2tldHMgdG8gaGFyZHdhcmUg
 Ki8KIAliZi0+YmZfc3RhdGUuYmZzX3RpZCA9IHRpZDsKQEAgLTIzMjAsMzQgKzI0NjgsMzQgQEAK
 IAlBVEhfVFhRX0xPQ0sodHhxKTsKIAlpZiAoYXRpZC0+cGF1c2VkKSB7CiAJCS8qIFRJRCBpcyBw
 YXVzZWQsIHF1ZXVlICovCi0JCURQUklOVEYoc2MsIEFUSF9ERUJVR19TV19UWCwgIiVzOiBwYXVz
 ZWRcbiIsIF9fZnVuY19fKTsKKwkJRFBSSU5URihzYywgQVRIX0RFQlVHX1NXX1RYLCAiJXM6IGJm
 PSVwOiBwYXVzZWRcbiIsIF9fZnVuY19fLCBiZik7CiAJCUFUSF9UWFFfSU5TRVJUX1RBSUwoYXRp
 ZCwgYmYsIGJmX2xpc3QpOwogCX0gZWxzZSBpZiAoYXRoX3R4X2FtcGR1X3BlbmRpbmcoc2MsIGFu
 LCB0aWQpKSB7CiAJCS8qIEFNUERVIHBlbmRpbmc7IHF1ZXVlICovCi0JCURQUklOVEYoc2MsIEFU
 SF9ERUJVR19TV19UWCwgIiVzOiBwZW5kaW5nXG4iLCBfX2Z1bmNfXyk7CisJCURQUklOVEYoc2Ms
 IEFUSF9ERUJVR19TV19UWCwgIiVzOiBiZj0lcDogcGVuZGluZ1xuIiwgX19mdW5jX18sIGJmKTsK
 IAkJQVRIX1RYUV9JTlNFUlRfVEFJTChhdGlkLCBiZiwgYmZfbGlzdCk7CiAJCS8qIFhYWCBzY2hl
 ZD8gKi8KIAl9IGVsc2UgaWYgKGF0aF90eF9hbXBkdV9ydW5uaW5nKHNjLCBhbiwgdGlkKSkgewog
 CQkvKiBBTVBEVSBydW5uaW5nLCBhdHRlbXB0IGRpcmVjdCBkaXNwYXRjaCBpZiBwb3NzaWJsZSAq
 LwogCQlpZiAodHhxLT5heHFfZGVwdGggPCBzYy0+c2NfaHdxX2xpbWl0KSB7CisJCQlEUFJJTlRG
 KHNjLCBBVEhfREVCVUdfU1dfVFgsCisJCQkgICAgIiVzOiBiZj0lcDogeG1pdF9hZ2dyXG4iLAor
 CQkJICAgIF9fZnVuY19fLCBiZik7CiAJCQlhdGhfdHhfeG1pdF9hZ2dyKHNjLCBhbiwgYmYpOwot
 CQkJRFBSSU5URihzYywgQVRIX0RFQlVHX1NXX1RYLAotCQkJICAgICIlczogeG1pdF9hZ2dyXG4i
 LAotCQkJICAgIF9fZnVuY19fKTsKIAkJfSBlbHNlIHsKIAkJCURQUklOVEYoc2MsIEFUSF9ERUJV
 R19TV19UWCwKLQkJCSAgICAiJXM6IGFtcGR1OyBzd3EnaW5nXG4iLAotCQkJICAgIF9fZnVuY19f
 KTsKKwkJCSAgICAiJXM6IGJmPSVwOiBhbXBkdTsgc3dxJ2luZ1xuIiwKKwkJCSAgICBfX2Z1bmNf
 XywgYmYpOwogCQkJQVRIX1RYUV9JTlNFUlRfVEFJTChhdGlkLCBiZiwgYmZfbGlzdCk7CiAJCQlh
 dGhfdHhfdGlkX3NjaGVkKHNjLCBhdGlkKTsKIAkJfQogCX0gZWxzZSBpZiAodHhxLT5heHFfZGVw
 dGggPCBzYy0+c2NfaHdxX2xpbWl0KSB7CiAJCS8qIEFNUERVIG5vdCBydW5uaW5nLCBhdHRlbXB0
 IGRpcmVjdCBkaXNwYXRjaCAqLwotCQlEUFJJTlRGKHNjLCBBVEhfREVCVUdfU1dfVFgsICIlczog
 eG1pdF9ub3JtYWxcbiIsIF9fZnVuY19fKTsKKwkJRFBSSU5URihzYywgQVRIX0RFQlVHX1NXX1RY
 LCAiJXM6IGJmPSVwOiB4bWl0X25vcm1hbFxuIiwgX19mdW5jX18sIGJmKTsKIAkJYXRoX3R4X3ht
 aXRfbm9ybWFsKHNjLCB0eHEsIGJmKTsKIAl9IGVsc2UgewogCQkvKiBCdXN5OyBxdWV1ZSAqLwot
 CQlEUFJJTlRGKHNjLCBBVEhfREVCVUdfU1dfVFgsICIlczogc3dxJ2luZ1xuIiwgX19mdW5jX18p
 OworCQlEUFJJTlRGKHNjLCBBVEhfREVCVUdfU1dfVFgsICIlczogYmY9JXA6IHN3cSdpbmdcbiIs
 IF9fZnVuY19fLCBiZik7CiAJCUFUSF9UWFFfSU5TRVJUX1RBSUwoYXRpZCwgYmYsIGJmX2xpc3Qp
 OwogCQlhdGhfdHhfdGlkX3NjaGVkKHNjLCBhdGlkKTsKIAl9CkBAIC0yNDc4LDExICsyNjI2LDEx
 IEBACiAKIAkJaWYgKHQgPT0gMCkgewogCQkJZGV2aWNlX3ByaW50ZihzYy0+c2NfZGV2LAotCQkJ
 ICAgICIlczogbm9kZSAlcDogdGlkICVkOiB0eHFfZGVwdGg9JWQsICIKKwkJCSAgICAiJXM6IG5v
 ZGUgJXA6IGJmPSVwOiB0aWQgJWQ6IHR4cV9kZXB0aD0lZCwgIgogCQkJICAgICJ0eHFfYWdncl9k
 ZXB0aD0lZCwgc2NoZWQ9JWQsIHBhdXNlZD0lZCwgIgogCQkJICAgICJod3FfZGVwdGg9JWQsIGlu
 Y29tcD0lZCwgYmF3X2hlYWQ9JWQsICIKIAkJCSAgICAiYmF3X3RhaWw9JWQgdHhhX3N0YXJ0PSVk
 LCBuaV90eHNlcXM9JWRcbiIsCi0JCQkgICAgIF9fZnVuY19fLCBuaSwgdGlkLT50aWQsIHR4cS0+
 YXhxX2RlcHRoLAorCQkJICAgICBfX2Z1bmNfXywgbmksIGJmLCB0aWQtPnRpZCwgdHhxLT5heHFf
 ZGVwdGgsCiAJCQkgICAgIHR4cS0+YXhxX2FnZ3JfZGVwdGgsIHRpZC0+c2NoZWQsIHRpZC0+cGF1
 c2VkLAogCQkJICAgICB0aWQtPmh3cV9kZXB0aCwgdGlkLT5pbmNvbXAsIHRpZC0+YmF3X2hlYWQs
 CiAJCQkgICAgIHRpZC0+YmF3X3RhaWwsIHRhcCA9PSBOVUxMID8gLTEgOiB0YXAtPnR4YV9zdGFy
 dCwKQEAgLTI0OTMsNyArMjY0MSw3IEBACiAJCQkgICAgbXRvZChiZi0+YmZfbSwgY29uc3QgdWlu
 dDhfdCAqKSwKIAkJCSAgICBiZi0+YmZfbS0+bV9sZW4sIDAsIC0xKTsKIAotCQkJdCA9IDE7CisJ
 CQkvL3QgPSAxOwogCQl9CiAKIApJbmRleDogc3lzL2Rldi9hdGgvaWZfYXRoX3R4LmgKPT09PT09
 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
 PT09PQotLS0gc3lzL2Rldi9hdGgvaWZfYXRoX3R4LmgJKHJldmlzaW9uIDIzMzA4OSkKKysrIHN5
 cy9kZXYvYXRoL2lmX2F0aF90eC5oCSh3b3JraW5nIGNvcHkpCkBAIC0xMDksNiArMTA5LDggQEAK
 ICAgICBzdHJ1Y3QgYXRoX3RpZCAqdGlkLCBzdHJ1Y3QgYXRoX2J1ZiAqYmYpOwogZXh0ZXJuIHN0
 cnVjdCBpZWVlODAyMTFfdHhfYW1wZHUgKiBhdGhfdHhfZ2V0X3R4X3RpZChzdHJ1Y3QgYXRoX25v
 ZGUgKmFuLAogICAgIGludCB0aWQpOworZXh0ZXJuIGludCBhdGhfdHhfdGlkX3NlcW5vX2Fzc2ln
 bihzdHJ1Y3QgYXRoX3NvZnRjICpzYywKKyAgICBzdHJ1Y3QgaWVlZTgwMjExX25vZGUgKm5pLCBz
 dHJ1Y3QgYXRoX2J1ZiAqYmYsIHN0cnVjdCBtYnVmICptMCk7CiAKIC8qIFRYIGFkZGJhIGhhbmRs
 aW5nICovCiBleHRlcm4JaW50IGF0aF9hZGRiYV9yZXF1ZXN0KHN0cnVjdCBpZWVlODAyMTFfbm9k
 ZSAqbmksCg==
 --047d7b33c9fe4e3ed604bb91d101--

From: Vincent Hoffman <vince@unsane.co.uk>
To: bug-followup@FreeBSD.org, adrian@FreeBSD.org
Cc: freebsd-wireless@FreeBSD.org
Subject: Re: kern/166190: [ath] TX hangs and frames stuck in TX queue
Date: Mon, 19 Mar 2012 21:37:56 +0000

 Hi Adrian,
 
 
         This patch is looking good so far: I've repeated tests that were
 previously causing timeouts and have not been able to cause a timeout
 after applying this patch.
 
     It's not definitive, but so far it appears to have resolved this issue
 for me.
 
 
 Regards,
 Vince Hoffman

From: "Adrian Chadd" <adrian.chadd@gmail.com>
To: "Vincent Hoffman" <vince@unsane.co.uk>, "bug-followup@freebsd.org"
  <bug-followup@freebsd.org>
Cc:  
Subject: Re: kern/166190: [ath] TX hangs and frames stuck in TX queue
Date: Mon, 19 Mar 2012 14:49:28 -0700

 
 Just check the txagg sysctl and make sure your buffer count stays up
 around 512.
 
 I want to make sure that buffers aren't being leaked.
 
 Thanks again!
 
 
 
 Sent from my Palm Pre on AT&T
 On Mar 19, 2012 2:38 PM, Vincent Hoffman <vince@unsane.co.uk> wrote:
 
 Hi Adrian,
 
         This patch is looking good so far: I've repeated tests that were
 previously causing timeouts and have not been able to cause a timeout
 after applying this patch.
 
     It's not definitive, but so far it appears to have resolved this issue
 for me.
 
 Regards,
 Vince Hoffman
 
 
 
 

From: Vincent Hoffman <vince@unsane.co.uk>
To: Adrian Chadd <adrian.chadd@gmail.com>
Cc: "bug-followup@freebsd.org" <bug-followup@freebsd.org>
Subject: Re: kern/166190: [ath] TX hangs and frames stuck in TX queue
Date: Mon, 19 Mar 2012 22:10:57 +0000

 
 During an iperf test, total TX buffers went from 512 -> 356.
 
 iperf output (TCP, sending from the FreeBSD machine to the OS X laptop):
 [  4]  0.0-60.2 sec   154 MBytes  21.4 Mbits/sec
 
 dmesg output:
 
 no tx bufs (empty list): 0
 no tx bufs (was busy): 0
 aggr single packet: 14372
 aggr single packet w/ BAW closed: 0
 aggr non-baw packet: 1
 aggr aggregate packet: 119987
 aggr single packet low hwq: 641424
 aggr sched, no work: 15333
  0:          0  1:          0  2:       7811  3:       5690
  4:       5077  5:       4509  6:       4675  7:       4546
  8:       5255  9:       5061 10:       4796 11:       9393
 12:       3094 13:       2604 14:       2647 15:       2301
 16:       4372 17:       2440 18:       4558 19:       8300
 20:       6962 21:       4679 22:       2404 23:       1270
 24:       1076 25:        929 26:        866 27:        856
 28:        835 29:        895 30:       1033 31:       1016
 32:      10037 33:          0 34:          0 35:          0
 36:          0 37:          0 38:          0 39:          0
 40:          0 41:          0 42:          0 43:          0
 44:          0 45:          0 46:          0 47:          0
 48:          0 49:          0 50:          0 51:          0
 52:          0 53:          0 54:          0 55:          0
 56:          0 57:          0 58:          0 59:          0
 60:          0 61:          0 62:          0 63:          0
 
 HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 HW TXQ 1: axq_depth=0, axq_aggr_depth=0
 HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 Total TX buffers: 512; Total TX buffers busy: 0
 no tx bufs (empty list): 0
 no tx bufs (was busy): 0
 aggr single packet: 14553
 aggr single packet w/ BAW closed: 0
 aggr non-baw packet: 1
 aggr aggregate packet: 121203
 aggr single packet low hwq: 643315
 aggr sched, no work: 15414
  0:          0  1:          0  2:       7931  3:       5744
  4:       5116  5:       4554  6:       4716  7:       4577
  8:       5284  9:       5097 10:       4822 11:       9425
 12:       3123 13:       2628 14:       2671 15:       2322
 16:       5036 17:       2442 18:       4558 19:       8300
 20:       6962 21:       4679 22:       2404 23:       1270
 24:       1076 25:        929 26:        866 27:        856
 28:        835 29:        895 30:       1033 31:       1016
 32:      10037 33:          0 34:          0 35:          0
 36:          0 37:          0 38:          0 39:          0
 40:          0 41:          0 42:          0 43:          0
 44:          0 45:          0 46:          0 47:          0
 48:          0 49:          0 50:          0 51:          0
 52:          0 53:          0 54:          0 55:          0
 56:          0 57:          0 58:          0 59:          0
 60:          0 61:          0 62:          0 63:          0
 
 HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 HW TXQ 1: axq_depth=2, axq_aggr_depth=2
 HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 Total TX buffers: 481; Total TX buffers busy: 0
 no tx bufs (empty list): 0
 no tx bufs (was busy): 0
 aggr single packet: 14928
 aggr single packet w/ BAW closed: 0
 aggr non-baw packet: 1
 aggr aggregate packet: 125149
 aggr single packet low hwq: 645085
 aggr sched, no work: 15673
  0:          0  1:          0  2:       8187  3:       5884
  4:       5230  5:       4653  6:       4801  7:       4649
  8:       5347  9:       5168 10:       4891 11:       9496
 12:       3305 13:       2715 14:       2753 15:       2399
 16:       7473 17:       2464 18:       4565 19:       8304
 20:       6966 21:       4681 22:       2405 23:       1270
 24:       1077 25:        929 26:        866 27:        856
 28:        835 29:        895 30:       1033 31:       1016
 32:      10037 33:          0 34:          0 35:          0
 36:          0 37:          0 38:          0 39:          0
 40:          0 41:          0 42:          0 43:          0
 44:          0 45:          0 46:          0 47:          0
 48:          0 49:          0 50:          0 51:          0
 52:          0 53:          0 54:          0 55:          0
 56:          0 57:          0 58:          0 59:          0
 60:          0 61:          0 62:          0 63:          0
 
 HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 HW TXQ 1: axq_depth=2, axq_aggr_depth=1
 HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 Total TX buffers: 502; Total TX buffers busy: 0
 no tx bufs (empty list): 0
 no tx bufs (was busy): 0
 aggr single packet: 15237
 aggr single packet w/ BAW closed: 0
 aggr non-baw packet: 1
 aggr aggregate packet: 127403
 aggr single packet low hwq: 646324
 aggr sched, no work: 15851
  0:          0  1:          0  2:       8360  3:       5998
  4:       5304  5:       4703  6:       4846  7:       4701
  8:       5377  9:       5216 10:       4935 11:       9544
 12:       3383 13:       2753 14:       2811 15:       2441
 16:       8822 17:       2474 18:       4566 19:       8304
 20:       6966 21:       4681 22:       2406 23:       1270
 24:       1077 25:        929 26:        866 27:        856
 28:        835 29:        895 30:       1033 31:       1016
 32:      10037 33:          0 34:          0 35:          0
 36:          0 37:          0 38:          0 39:          0
 40:          0 41:          0 42:          0 43:          0
 44:          0 45:          0 46:          0 47:          0
 48:          0 49:          0 50:          0 51:          0
 52:          0 53:          0 54:          0 55:          0
 56:          0 57:          0 58:          0 59:          0
 60:          0 61:          0 62:          0 63:          0
 
 HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 HW TXQ 1: axq_depth=2, axq_aggr_depth=2
 HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 Total TX buffers: 356; Total TX buffers busy: 0
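 The "Total TX buffers" lines in the snapshots above are what Adrian asked
 to watch. As a rough illustration (not part of the driver; the helper
 names and the 512 baseline are assumptions taken from this thread), the
 pasted dmesg text can be scanned for the counts like this:

```python
import re

def tx_buffer_counts(dmesg_text):
    """Extract (total, busy) TX buffer counts from an ath(4) stats dump."""
    pattern = re.compile(
        r"Total TX buffers: (\d+); Total TX buffers busy: (\d+)")
    return [(int(m.group(1)), int(m.group(2)))
            for m in pattern.finditer(dmesg_text)]

def looks_leaky(counts, expected_total=512):
    """A leak would show as the free count never returning to the
    expected total once traffic stops; mid-test dips are normal."""
    return counts[-1][0] != expected_total

# The four snapshots pasted above, reduced to the summary lines:
dump = """
Total TX buffers: 512; Total TX buffers busy: 0
Total TX buffers: 481; Total TX buffers busy: 0
Total TX buffers: 502; Total TX buffers busy: 0
Total TX buffers: 356; Total TX buffers busy: 0
"""
counts = tx_buffer_counts(dump)
print(counts)               # [(512, 0), (481, 0), (502, 0), (356, 0)]
print(looks_leaky(counts))  # True here, because iperf was still running
```

 The 356 in the last snapshot is not conclusive on its own; the
 interesting check is whether the count climbs back to 512 after the
 traffic ends, which is what the follow-up message asks.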
 
 
 On 19/03/2012 21:49, Adrian Chadd wrote:
 > Just check the txagg sysctl and make sure your buffer count stays up
 > around 512.
 >
 > I want to make sure that buffers aren't being leaked.
 >
 > Thanks again!
 >
 >
 >
 > Sent from my Palm Pre on AT&T
 >
 > ------------------------------------------------------------------------
 > On Mar 19, 2012 2:38 PM, Vincent Hoffman <vince@unsane.co.uk> wrote:
 >
 > Hi Adrian,
 >
 >
 > This patch is looking good so far: I've repeated tests that were
 > previously causing timeouts and have not been able to cause a timeout
 > after applying this patch.
 >
 > It's not definitive, but so far it appears to have resolved this issue
 > for me.
 >
 >
 > Regards,
 > Vince Hoffman
 
 

From: "Adrian Chadd" <adrian.chadd@gmail.com>
To: "Vincent Hoffman" <vince@unsane.co.uk>
Cc: "bug-followup@freebsd.org" <bug-followup@freebsd.org>
Subject: Re: kern/166190: [ath] TX hangs and frames stuck in TX queue
Date: Mon, 19 Mar 2012 16:09:05 -0700

 
 Then they returned to 512, right?
 
 
 Adrian
 
 
 
 Sent from my Palm Pre on AT&T
 On Mar 19, 2012 3:11 PM, Vincent Hoffman <vince@unsane.co.uk> wrote:
 
 [quoted statistics snipped; identical to Vincent's message above]
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 51:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     52:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 53:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 54:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 55:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     56:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 57:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 58:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 59:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     60:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 61:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 62:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 63:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
    =20
 
     HW TXQ 0: axq_depth=3D0, axq_aggr_depth=3D0
 
     HW TXQ 1: axq_depth=3D2, axq_aggr_depth=3D1
 
     HW TXQ 2: axq_depth=3D0, axq_aggr_depth=3D0
 
     HW TXQ 3: axq_depth=3D0, axq_aggr_depth=3D0
 
     HW TXQ 8: axq_depth=3D0, axq_aggr_depth=3D0
 
     Total TX buffers: 502; Total TX buffers busy: 0
 
     no tx bufs (empty list): 0
 
     no tx bufs (was busy): 0
 
     aggr single packet: 15237
 
     aggr single packet w/ BAW closed: 0
 
     aggr non-baw packet: 1
 
     aggr aggregate packet: 127403
 
     aggr single packet low hwq: 646324
 
     aggr sched, no work: 15851
 
     &nbsp;0:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0&nbsp;=
  1:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0&nbsp; 2:&nbsp;&=
 nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 8360&nbsp; 3:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp; 5998=20
 
     &nbsp;4:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5304&nbsp; 5:&nbsp;&nbsp;&=
 nbsp;&nbsp;&nbsp;&nbsp; 4703&nbsp; 6:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;=
  4846&nbsp; 7:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4701=20
 
     &nbsp;8:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 5377&nbsp; 9:&nbsp;&nbsp;&=
 nbsp;&nbsp;&nbsp;&nbsp; 5216 10:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4935=
  11:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 9544=20
 
     12:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3383 13:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp; 2753 14:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2811 15:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp; 2441=20
 
     16:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 8822 17:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp; 2474 18:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4566 19:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp; 8304=20
 
     20:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 6966 21:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp; 4681 22:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2406 23:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp; 1270=20
 
     24:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1077 25:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp; 929 26:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 866=
  27:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 856=20
 
     28:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 835 29:&nbsp;&nbsp;&nbsp;=
 &nbsp;&nbsp;&nbsp;&nbsp; 895 30:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1033=
  31:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1016=20
 
     32:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 10037 33:&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
 p;&nbsp;&nbsp;&nbsp;&nbsp; 0 34:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
 nbsp;&nbsp; 0 35:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0=20
 
     36:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 37:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 38:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 39:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     40:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 41:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 42:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 43:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     44:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 45:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 46:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 47:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     48:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 49:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 50:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 51:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     52:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 53:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 54:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 55:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     56:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 57:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 58:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 59:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
     60:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 61:&nbsp;&n=
 bsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 62:&nbsp;&nbsp;&nbsp;&nbsp=
 ;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 63:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
 bsp;&nbsp;&nbsp; 0=20
 
    =20
 
     HW TXQ 0: axq_depth=3D0, axq_aggr_depth=3D0
 
     HW TXQ 1: axq_depth=3D2, axq_aggr_depth=3D2
 
     HW TXQ 2: axq_depth=3D0, axq_aggr_depth=3D0
 
     HW TXQ 3: axq_depth=3D0, axq_aggr_depth=3D0
 
     HW TXQ 8: axq_depth=3D0, axq_aggr_depth=3D0
 
     Total TX buffers: 356; Total TX buffers busy: 0
 
    =20
 
    =20
 
     On 19/03/2012 21:49, Adrian Chadd wrote:
     Just check the txagg sysctl and mae sure your buffer
       count stays up around 512.
 
      =20
 
       I want to make sure that buffers aren't being leaked.
 
      =20
 
       Thanks again!
 
      =20
 
      =20
 
        =20
 
      =20
         Sent from my Palm Pre on AT&amp;T
        =20
 
      =20
         On Mar 19, 2012 2:38 PM,
         Vincent Hoffman &lt;vince@unsane.co.uk&gt; wrote:=20
 
        =20
 
         Hi Adrian,
        =20
 
        =20
 
        =20
 
         This patch is looking good as yet, I've repeated tests that were
        =20
 
         previously causing timeouts and as yet not been able cause a
         timeout
        =20
 
         after applying this patch.
        =20
 
        =20
 
         Its not definitive but so far it appears to have resolved this
         issue
        =20
 
         for me.
        =20
 
        =20
 
        =20
 
         Regards,
        =20
 
         Vince Hoffman
        =20
 
      =20
    =20
    =20
 
  =20
 
 
 
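 [Editor's note: the leak check described above (confirming that "Total TX
 buffers" recovers to 512 once traffic stops) can be automated. A minimal
 sketch, assuming dmesg lines formatted exactly like the dumps in this
 thread; the function name and threshold handling are illustrative, not
 part of the driver:]

```python
import re

def check_tx_buffers(dmesg_text, expected=512):
    """Pull every 'Total TX buffers: N' sample out of a dmesg-style dump
    and report whether the most recent sample has recovered to the full
    pool size (buffers dip while traffic is in flight; a value that never
    returns to `expected` suggests a leak)."""
    samples = [int(m.group(1)) for m in
               re.finditer(r"Total TX buffers: (\d+);", dmesg_text)]
    recovered = bool(samples) and samples[-1] == expected
    return samples, recovered

# Example against the pattern of dumps seen in this PR:
dump = (
    "Total TX buffers: 512; Total TX buffers busy: 0\n"
    "Total TX buffers: 481; Total TX buffers busy: 0\n"
    "Total TX buffers: 512; Total TX buffers busy: 0\n"
)
print(check_tx_buffers(dump))  # ([512, 481, 512], True)
```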
 

From: Vincent Hoffman <vince@unsane.co.uk>
To: Adrian Chadd <adrian.chadd@gmail.com>
Cc: "bug-followup@freebsd.org" <bug-followup@freebsd.org>
Subject: Re: kern/166190: [ath] TX hangs and frames stuck in TX queue
Date: Mon, 19 Mar 2012 23:12:42 +0000

 This is a multi-part message in MIME format.
 --------------060900020802060605060409
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 7bit
 
 Oops, sorry, I missed the important bit. Yes, they then returned to 512
 and have continued returning to 512.
 
 Vince
 
 On 19/03/2012 23:09, Adrian Chadd wrote:
 > Then they returned to 512, right?
 >
 >
 > Adrian
 >
 >
 >
 > Sent from my Palm Pre on AT&T
 >
 > ------------------------------------------------------------------------
 > On Mar 19, 2012 3:11 PM, Vincent Hoffman <vince@unsane.co.uk> wrote:
 >
 > During an iperf test
 > Total TX buffers went from 512 -> 356.
 >
 > iperf output (tcp, sending from the FreeBSD machine to an OS X laptop: [  4]
 > 0.0-60.2 sec   154 MBytes  21.4 Mbits/sec)
 >
 > dmesg output:
 >
 > no tx bufs (empty list): 0
 > no tx bufs (was busy): 0
 > aggr single packet: 14372
 > aggr single packet w/ BAW closed: 0
 > aggr non-baw packet: 1
 > aggr aggregate packet: 119987
 > aggr single packet low hwq: 641424
 > aggr sched, no work: 15333
 >  0:          0  1:          0  2:       7811  3:       5690
 >  4:       5077  5:       4509  6:       4675  7:       4546
 >  8:       5255  9:       5061 10:       4796 11:       9393
 > 12:       3094 13:       2604 14:       2647 15:       2301
 > 16:       4372 17:       2440 18:       4558 19:       8300
 > 20:       6962 21:       4679 22:       2404 23:       1270
 > 24:       1076 25:        929 26:        866 27:        856
 > 28:        835 29:        895 30:       1033 31:       1016
 > 32:      10037 33:          0 34:          0 35:          0
 > 36:          0 37:          0 38:          0 39:          0
 > 40:          0 41:          0 42:          0 43:          0
 > 44:          0 45:          0 46:          0 47:          0
 > 48:          0 49:          0 50:          0 51:          0
 > 52:          0 53:          0 54:          0 55:          0
 > 56:          0 57:          0 58:          0 59:          0
 > 60:          0 61:          0 62:          0 63:          0
 >
 > HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 1: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 > Total TX buffers: 512; Total TX buffers busy: 0
 > no tx bufs (empty list): 0
 > no tx bufs (was busy): 0
 > aggr single packet: 14553
 > aggr single packet w/ BAW closed: 0
 > aggr non-baw packet: 1
 > aggr aggregate packet: 121203
 > aggr single packet low hwq: 643315
 > aggr sched, no work: 15414
 >  0:          0  1:          0  2:       7931  3:       5744
 >  4:       5116  5:       4554  6:       4716  7:       4577
 >  8:       5284  9:       5097 10:       4822 11:       9425
 > 12:       3123 13:       2628 14:       2671 15:       2322
 > 16:       5036 17:       2442 18:       4558 19:       8300
 > 20:       6962 21:       4679 22:       2404 23:       1270
 > 24:       1076 25:        929 26:        866 27:        856
 > 28:        835 29:        895 30:       1033 31:       1016
 > 32:      10037 33:          0 34:          0 35:          0
 > 36:          0 37:          0 38:          0 39:          0
 > 40:          0 41:          0 42:          0 43:          0
 > 44:          0 45:          0 46:          0 47:          0
 > 48:          0 49:          0 50:          0 51:          0
 > 52:          0 53:          0 54:          0 55:          0
 > 56:          0 57:          0 58:          0 59:          0
 > 60:          0 61:          0 62:          0 63:          0
 >
 > HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 1: axq_depth=2, axq_aggr_depth=2
 > HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 > Total TX buffers: 481; Total TX buffers busy: 0
 > no tx bufs (empty list): 0
 > no tx bufs (was busy): 0
 > aggr single packet: 14928
 > aggr single packet w/ BAW closed: 0
 > aggr non-baw packet: 1
 > aggr aggregate packet: 125149
 > aggr single packet low hwq: 645085
 > aggr sched, no work: 15673
 >  0:          0  1:          0  2:       8187  3:       5884
 >  4:       5230  5:       4653  6:       4801  7:       4649
 >  8:       5347  9:       5168 10:       4891 11:       9496
 > 12:       3305 13:       2715 14:       2753 15:       2399
 > 16:       7473 17:       2464 18:       4565 19:       8304
 > 20:       6966 21:       4681 22:       2405 23:       1270
 > 24:       1077 25:        929 26:        866 27:        856
 > 28:        835 29:        895 30:       1033 31:       1016
 > 32:      10037 33:          0 34:          0 35:          0
 > 36:          0 37:          0 38:          0 39:          0
 > 40:          0 41:          0 42:          0 43:          0
 > 44:          0 45:          0 46:          0 47:          0
 > 48:          0 49:          0 50:          0 51:          0
 > 52:          0 53:          0 54:          0 55:          0
 > 56:          0 57:          0 58:          0 59:          0
 > 60:          0 61:          0 62:          0 63:          0
 >
 > HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 1: axq_depth=2, axq_aggr_depth=1
 > HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 > Total TX buffers: 502; Total TX buffers busy: 0
 > no tx bufs (empty list): 0
 > no tx bufs (was busy): 0
 > aggr single packet: 15237
 > aggr single packet w/ BAW closed: 0
 > aggr non-baw packet: 1
 > aggr aggregate packet: 127403
 > aggr single packet low hwq: 646324
 > aggr sched, no work: 15851
 >  0:          0  1:          0  2:       8360  3:       5998
 >  4:       5304  5:       4703  6:       4846  7:       4701
 >  8:       5377  9:       5216 10:       4935 11:       9544
 > 12:       3383 13:       2753 14:       2811 15:       2441
 > 16:       8822 17:       2474 18:       4566 19:       8304
 > 20:       6966 21:       4681 22:       2406 23:       1270
 > 24:       1077 25:        929 26:        866 27:        856
 > 28:        835 29:        895 30:       1033 31:       1016
 > 32:      10037 33:          0 34:          0 35:          0
 > 36:          0 37:          0 38:          0 39:          0
 > 40:          0 41:          0 42:          0 43:          0
 > 44:          0 45:          0 46:          0 47:          0
 > 48:          0 49:          0 50:          0 51:          0
 > 52:          0 53:          0 54:          0 55:          0
 > 56:          0 57:          0 58:          0 59:          0
 > 60:          0 61:          0 62:          0 63:          0
 >
 > HW TXQ 0: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 1: axq_depth=2, axq_aggr_depth=2
 > HW TXQ 2: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 3: axq_depth=0, axq_aggr_depth=0
 > HW TXQ 8: axq_depth=0, axq_aggr_depth=0
 > Total TX buffers: 356; Total TX buffers busy: 0
 >
 >
 > On 19/03/2012 21:49, Adrian Chadd wrote:
 >> Just check the txagg sysctl and make sure your buffer count stays up
 >> around 512.
 >>
 >> I want to make sure that buffers aren't being leaked.
 >>
 >> Thanks again!
 >>
 >>
 >>
 >> Sent from my Palm Pre on AT&T
 >>
 >> ------------------------------------------------------------------------
 >> On Mar 19, 2012 2:38 PM, Vincent Hoffman <vince@unsane.co.uk> wrote:
 >>
 >> Hi Adrian,
 >>
 >>
 >> This patch is looking good so far; I've repeated tests that were
 >> previously causing timeouts and have not yet been able to cause a
 >> timeout after applying this patch.
 >>
 >> It's not definitive, but so far it appears to have resolved this issue
 >> for me.
 >>
 >>
 >> Regards,
 >> Vince Hoffman
 >
 
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Tue, 20 Mar 2012 04:50:36 +0000 (UTC)

 Author: adrian
 Date: Tue Mar 20 04:50:25 2012
 New Revision: 233227
 URL: http://svn.freebsd.org/changeset/base/233227
 
 Log:
   Delay sequence number allocation for A-MPDU until just before the frame
   is queued to the hardware.
   
   Because multiple concurrent paths can execute ath_start(), multiple
   concurrent paths can push frames into the software/hardware TX queue.
   Since preemption/interruption can occur, a gap in time may open
   between allocating the sequence number and queuing the frame to the
   hardware.
   
   Because of this, it's possible that a thread will have allocated a
   sequence number and then be preempted by another thread doing the same.
   If the second thread sneaks its frame into the BAW, the (earlier)
   sequence number of the first frame will now be outside the BAW and the
   frame will be constantly re-added to the tail of the queue.  There it
   will live until the sequence numbers cycle around again.
   
   This also creates a hole in the RX BAW tracking which can also cause
   issues.
   
   This patch delays the sequence number allocation so that it occurs only
   just before the frame is added to the BAW.  I've been wanting to do this
   anyway as part of a general code tidy-up but hadn't gotten around to it.
   This fixes the PR.
   
   However, it is still quite difficult to ensure in-order queuing and
   dequeuing of frames.  Since multiple copies of ath_start() can run at
   the same time (e.g. one TXing process thread, one TX completion
   task/one RX task), the driver may end up having frames dequeued and
   pushed into the hardware slightly/occasionally out of order.
   
   And, to make matters more annoying, net80211 may have the same behaviour -
   in the non-aggregation case, the TX code allocates sequence numbers
   before the frame is handed to the driver.  I'll open another PR to
   investigate this and potentially introduce some kind of final-pass TX
   serialisation before frames are thrown to the hardware.  It's also very
   likely worthwhile adding some debugging code into ath(4) and net80211
   to catch when/if this does occur.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath_debug.c
   head/sys/dev/ath/if_ath_tx.c
   head/sys/dev/ath/if_ath_tx.h
   head/sys/dev/ath/if_ath_tx_ht.c
   head/sys/dev/ath/if_athvar.h
 
 Modified: head/sys/dev/ath/if_ath_debug.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_debug.c	Mon Mar 19 23:28:13 2012	(r233226)
 +++ head/sys/dev/ath/if_ath_debug.c	Tue Mar 20 04:50:25 2012	(r233227)
 @@ -135,19 +135,23 @@ ath_printtxbuf(struct ath_softc *sc, con
  	printf("Q%u[%3u]", qnum, ix);
  	while (bf != NULL) {
  		for (i = 0, ds = bf->bf_desc; i < bf->bf_nseg; i++, ds++) {
 -			printf(" (DS.V:%p DS.P:%p) L:%08x D:%08x F:%04x%s\n"
 -			       "        TXF: %04x Seq: %d swtry: %d ADDBAW?: %d DOBAW?: %d\n"
 -			       "        %08x %08x %08x %08x %08x %08x\n",
 +			printf(" (DS.V:%p DS.P:%p) L:%08x D:%08x F:%04x%s\n",
  			    ds, (const struct ath_desc *)bf->bf_daddr + i,
  			    ds->ds_link, ds->ds_data, bf->bf_txflags,
 -			    !done ? "" : (ts->ts_status == 0) ? " *" : " !",
 +			    !done ? "" : (ts->ts_status == 0) ? " *" : " !");
 +			printf("        TXF: %04x Seq: %d swtry: %d ADDBAW?: %d DOBAW?: %d\n",
  			    bf->bf_state.bfs_flags,
  			    bf->bf_state.bfs_seqno,
  			    bf->bf_state.bfs_retries,
  			    bf->bf_state.bfs_addedbaw,
 -			    bf->bf_state.bfs_dobaw,
 +			    bf->bf_state.bfs_dobaw);
 +			printf("        SEQNO_ASSIGNED: %d, NEED_SEQNO: %d\n",
 +			    bf->bf_state.bfs_seqno_assigned,
 +			    bf->bf_state.bfs_need_seqno);
 +			printf("        %08x %08x %08x %08x %08x %08x\n",
  			    ds->ds_ctl0, ds->ds_ctl1,
 -			    ds->ds_hw[0], ds->ds_hw[1], ds->ds_hw[2], ds->ds_hw[3]);
 +			    ds->ds_hw[0], ds->ds_hw[1],
 +			    ds->ds_hw[2], ds->ds_hw[3]);
  			if (ah->ah_magic == 0x20065416) {
  				printf("        %08x %08x %08x %08x %08x %08x %08x %08x\n",
  				    ds->ds_hw[4], ds->ds_hw[5], ds->ds_hw[6],
 
 Modified: head/sys/dev/ath/if_ath_tx.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.c	Mon Mar 19 23:28:13 2012	(r233226)
 +++ head/sys/dev/ath/if_ath_tx.c	Tue Mar 20 04:50:25 2012	(r233227)
 @@ -109,10 +109,10 @@ static int ath_tx_ampdu_pending(struct a
      int tid);
  static int ath_tx_ampdu_running(struct ath_softc *sc, struct ath_node *an,
      int tid);
 -static ieee80211_seq ath_tx_tid_seqno_assign(struct ath_softc *sc,
 -    struct ieee80211_node *ni, struct ath_buf *bf, struct mbuf *m0);
  static int ath_tx_action_frame_override_queue(struct ath_softc *sc,
      struct ieee80211_node *ni, struct mbuf *m0, int *tid);
 +static int ath_tx_seqno_required(struct ath_softc *sc,
 +    struct ieee80211_node *ni, struct ath_buf *bf, struct mbuf *m0);
  
  /*
   * Whether to use the 11n rate scenario functions or not
 @@ -1376,7 +1376,7 @@ ath_tx_start(struct ath_softc *sc, struc
  	int ismcast;
  	const struct ieee80211_frame *wh;
  	int is_ampdu, is_ampdu_tx, is_ampdu_pending;
 -	ieee80211_seq seqno;
 +	//ieee80211_seq seqno;
  	uint8_t type, subtype;
  
  	/*
 @@ -1428,8 +1428,9 @@ ath_tx_start(struct ath_softc *sc, struc
  	is_ampdu_pending = ath_tx_ampdu_pending(sc, ATH_NODE(ni), tid);
  	is_ampdu = is_ampdu_tx | is_ampdu_pending;
  
 -	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: tid=%d, ac=%d, is_ampdu=%d\n",
 -	    __func__, tid, pri, is_ampdu);
 +	DPRINTF(sc, ATH_DEBUG_SW_TX,
 +	    "%s: bf=%p, tid=%d, ac=%d, is_ampdu=%d\n",
 +	    __func__, bf, tid, pri, is_ampdu);
  
  	/* Multicast frames go onto the software multicast queue */
  	if (ismcast)
 @@ -1447,6 +1448,9 @@ ath_tx_start(struct ath_softc *sc, struc
  	/* Do the generic frame setup */
  	/* XXX should just bzero the bf_state? */
  	bf->bf_state.bfs_dobaw = 0;
 +	bf->bf_state.bfs_seqno_assigned = 0;
 +	bf->bf_state.bfs_need_seqno = 0;
 +	bf->bf_state.bfs_seqno = -1;	/* XXX debugging */
  
  	/* A-MPDU TX? Manually set sequence number */
  	/* Don't do it whilst pending; the net80211 layer still assigns them */
 @@ -1459,19 +1463,26 @@ ath_tx_start(struct ath_softc *sc, struc
  		 * don't get a sequence number from the current
  		 * TID and thus mess with the BAW.
  		 */
 -		seqno = ath_tx_tid_seqno_assign(sc, ni, bf, m0);
 -		if (IEEE80211_QOS_HAS_SEQ(wh) &&
 -		    subtype != IEEE80211_FC0_SUBTYPE_QOS_NULL) {
 +		//seqno = ath_tx_tid_seqno_assign(sc, ni, bf, m0);
 +		if (ath_tx_seqno_required(sc, ni, bf, m0)) {
  			bf->bf_state.bfs_dobaw = 1;
 +			bf->bf_state.bfs_need_seqno = 1;
  		}
  		ATH_TXQ_UNLOCK(txq);
 +	} else {
 +		/* No AMPDU TX, we've been assigned a sequence number. */
 +		if (IEEE80211_QOS_HAS_SEQ(wh)) {
 +			bf->bf_state.bfs_seqno_assigned = 1;
 +			bf->bf_state.bfs_seqno =
 +			    M_SEQNO_GET(m0) << IEEE80211_SEQ_SEQ_SHIFT;
 +		}
  	}
  
  	/*
  	 * If needed, the sequence number has been assigned.
  	 * Squirrel it away somewhere easy to get to.
  	 */
 -	bf->bf_state.bfs_seqno = M_SEQNO_GET(m0) << IEEE80211_SEQ_SEQ_SHIFT;
 +	//bf->bf_state.bfs_seqno = M_SEQNO_GET(m0) << IEEE80211_SEQ_SEQ_SHIFT;
  
  	/* Is ampdu pending? fetch the seqno and print it out */
  	if (is_ampdu_pending)
 @@ -1488,6 +1499,10 @@ ath_tx_start(struct ath_softc *sc, struc
  	/* At this point m0 could have changed! */
  	m0 = bf->bf_m;
  
 +	DPRINTF(sc, ATH_DEBUG_SW_TX,
 +	    "%s: DONE: bf=%p, tid=%d, ac=%d, is_ampdu=%d, dobaw=%d, seqno=%d\n",
 +	    __func__, bf, tid, pri, is_ampdu, bf->bf_state.bfs_dobaw, M_SEQNO_GET(m0));
 +
  #if 1
  	/*
  	 * If it's a multicast frame, do a direct-dispatch to the
 @@ -1506,6 +1521,8 @@ ath_tx_start(struct ath_softc *sc, struc
  	 * reached.)
  	 */
  	if (txq == &avp->av_mcastq) {
 +		DPRINTF(sc, ATH_DEBUG_SW_TX_CTRL,
 +		    "%s: bf=%p: mcastq: TX'ing\n", __func__, bf);
  		ATH_TXQ_LOCK(txq);
  		ath_tx_xmit_normal(sc, txq, bf);
  		ATH_TXQ_UNLOCK(txq);
 @@ -1518,6 +1535,8 @@ ath_tx_start(struct ath_softc *sc, struc
  		ATH_TXQ_UNLOCK(txq);
  	} else {
  		/* add to software queue */
 +		DPRINTF(sc, ATH_DEBUG_SW_TX_CTRL,
 +		    "%s: bf=%p: swq: TX'ing\n", __func__, bf);
  		ath_tx_swq(sc, ni, txq, bf);
  	}
  #else
 @@ -1966,16 +1985,41 @@ ath_tx_addto_baw(struct ath_softc *sc, s
  	if (bf->bf_state.bfs_isretried)
  		return;
  
 +	/*
 +	 * If this occurs we're in a lot of trouble.  We should try to
 +	 * recover from this without the session hanging?
 +	 */
 +	if (! bf->bf_state.bfs_seqno_assigned) {
 +		device_printf(sc->sc_dev,
 +		    "%s: bf=%p, seqno_assigned is 0?!\n", __func__, bf);
 +		return;
 +	}
 +
  	tap = ath_tx_get_tx_tid(an, tid->tid);
  
  	if (bf->bf_state.bfs_addedbaw)
  		device_printf(sc->sc_dev,
 -		    "%s: re-added? tid=%d, seqno %d; window %d:%d; "
 +		    "%s: re-added? bf=%p, tid=%d, seqno %d; window %d:%d; "
 +		    "baw head=%d tail=%d\n",
 +		    __func__, bf, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
 +		    tap->txa_start, tap->txa_wnd, tid->baw_head,
 +		    tid->baw_tail);
 +
 +	/*
 +	 * Verify that the given sequence number is not outside of the
 +	 * BAW.  Complain loudly if that's the case.
 +	 */
 +	if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 +	    SEQNO(bf->bf_state.bfs_seqno))) {
 +		device_printf(sc->sc_dev,
 +		    "%s: bf=%p: outside of BAW?? tid=%d, seqno %d; window %d:%d; "
  		    "baw head=%d tail=%d\n",
 -		    __func__, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
 +		    __func__, bf, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
  		    tap->txa_start, tap->txa_wnd, tid->baw_head,
  		    tid->baw_tail);
  
 +	}
 +
  	/*
  	 * ni->ni_txseqs[] is the currently allocated seqno.
  	 * the txa state contains the current baw start.
 @@ -1983,9 +2027,9 @@ ath_tx_addto_baw(struct ath_softc *sc, s
  	index  = ATH_BA_INDEX(tap->txa_start, SEQNO(bf->bf_state.bfs_seqno));
  	cindex = (tid->baw_head + index) & (ATH_TID_MAX_BUFS - 1);
  	DPRINTF(sc, ATH_DEBUG_SW_TX_BAW,
 -	    "%s: tid=%d, seqno %d; window %d:%d; index=%d cindex=%d "
 +	    "%s: bf=%p, tid=%d, seqno %d; window %d:%d; index=%d cindex=%d "
  	    "baw head=%d tail=%d\n",
 -	    __func__, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
 +	    __func__, bf, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
  	    tap->txa_start, tap->txa_wnd, index, cindex, tid->baw_head,
  	    tid->baw_tail);
  
 @@ -2088,9 +2132,9 @@ ath_tx_update_baw(struct ath_softc *sc, 
  	cindex = (tid->baw_head + index) & (ATH_TID_MAX_BUFS - 1);
  
  	DPRINTF(sc, ATH_DEBUG_SW_TX_BAW,
 -	    "%s: tid=%d, baw=%d:%d, seqno=%d, index=%d, cindex=%d, "
 +	    "%s: bf=%p: tid=%d, baw=%d:%d, seqno=%d, index=%d, cindex=%d, "
  	    "baw head=%d, tail=%d\n",
 -	    __func__, tid->tid, tap->txa_start, tap->txa_wnd, seqno, index,
 +	    __func__, bf, tid->tid, tap->txa_start, tap->txa_wnd, seqno, index,
  	    cindex, tid->baw_head, tid->baw_tail);
  
  	/*
 @@ -2171,11 +2215,51 @@ ath_tx_tid_unsched(struct ath_softc *sc,
  }
  
  /*
 + * Return whether a sequence number is actually required.
 + *
 + * A sequence number must only be allocated at the time that a frame
 + * is considered for addition to the BAW/aggregate and being TXed.
 + * The sequence number must not be allocated before the frame
 + * is added to the BAW (protected by the same lock instance)
 + * otherwise the multi-entrant TX path may result in a later seqno
 + * being added to the BAW first.  The subsequent addition of the
 + * earlier seqno would then not go into the BAW as it's now outside
 + * of said BAW.
 + *
 + * This routine is used by ath_tx_start() to mark whether the frame
 + * should get a sequence number before adding it to the BAW.
 + *
 + * Then the actual aggregate TX routines will check whether this
 + * flag is set and if the frame needs to go into the BAW, it'll
 + * have a sequence number allocated for it.
 + */
 +static int
 +ath_tx_seqno_required(struct ath_softc *sc, struct ieee80211_node *ni,
 +    struct ath_buf *bf, struct mbuf *m0)
 +{
 +	const struct ieee80211_frame *wh;
 +	uint8_t subtype;
 +
 +	wh = mtod(m0, const struct ieee80211_frame *);
 +	subtype = wh->i_fc[0] & IEEE80211_FC0_SUBTYPE_MASK;
 +
 +	/* XXX assert txq lock */
 +	/* XXX assert ampdu is set */
 +
 +	return ((IEEE80211_QOS_HAS_SEQ(wh) &&
 +	    subtype != IEEE80211_FC0_SUBTYPE_QOS_NULL));
 +}
 +
 +/*
   * Assign a sequence number manually to the given frame.
   *
   * This should only be called for A-MPDU TX frames.
 + *
 + * If this is called after the initial frame setup, make sure you've flushed
 + * the DMA map or you'll risk sending stale data to the NIC.  This routine
 + * updates the actual frame contents with the relevant seqno.
   */
 -static ieee80211_seq
 +int
  ath_tx_tid_seqno_assign(struct ath_softc *sc, struct ieee80211_node *ni,
      struct ath_buf *bf, struct mbuf *m0)
  {
 @@ -2188,8 +2272,22 @@ ath_tx_tid_seqno_assign(struct ath_softc
  	wh = mtod(m0, struct ieee80211_frame *);
  	pri = M_WME_GETAC(m0);			/* honor classification */
  	tid = WME_AC_TO_TID(pri);
 -	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: pri=%d, tid=%d, qos has seq=%d\n",
 -	    __func__, pri, tid, IEEE80211_QOS_HAS_SEQ(wh));
 +	DPRINTF(sc, ATH_DEBUG_SW_TX,
 +	    "%s: bf=%p, pri=%d, tid=%d, qos has seq=%d\n",
 +	    __func__, bf, pri, tid, IEEE80211_QOS_HAS_SEQ(wh));
 +
 +	if (! bf->bf_state.bfs_need_seqno) {
 +		device_printf(sc->sc_dev, "%s: bf=%p: need_seqno not set?!\n",
 +		    __func__, bf);
 +		return -1;
 +	}
 +	/* XXX check for bfs_need_seqno? */
 +	if (bf->bf_state.bfs_seqno_assigned) {
 +		device_printf(sc->sc_dev,
 +		    "%s: bf=%p: seqno already assigned (%d)?!\n",
 +		    __func__, bf, SEQNO(bf->bf_state.bfs_seqno));
 +		return bf->bf_state.bfs_seqno >> IEEE80211_SEQ_SEQ_SHIFT;
 +	}
  
  	/* XXX Is it a control frame? Ignore */
  
 @@ -2217,9 +2315,14 @@ ath_tx_tid_seqno_assign(struct ath_softc
  	}
  	*(uint16_t *)&wh->i_seq[0] = htole16(seqno << IEEE80211_SEQ_SEQ_SHIFT);
  	M_SEQNO_SET(m0, seqno);
 +	bf->bf_state.bfs_seqno = seqno << IEEE80211_SEQ_SEQ_SHIFT;
 +	bf->bf_state.bfs_seqno_assigned = 1;
  
  	/* Return so caller can do something with it if needed */
 -	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s:  -> seqno=%d\n", __func__, seqno);
 +	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p:  -> seqno=%d\n",
 +	    __func__,
 +	    bf,
 +	    seqno);
  	return seqno;
  }
  
 @@ -2231,9 +2334,11 @@ ath_tx_tid_seqno_assign(struct ath_softc
  static void
  ath_tx_xmit_aggr(struct ath_softc *sc, struct ath_node *an, struct ath_buf *bf)
  {
 +	struct ieee80211_node *ni = &an->an_node;
  	struct ath_tid *tid = &an->an_tid[bf->bf_state.bfs_tid];
  	struct ath_txq *txq = bf->bf_state.bfs_txq;
  	struct ieee80211_tx_ampdu *tap;
 +	int seqno;
  
  	ATH_TXQ_LOCK_ASSERT(txq);
  
 @@ -2245,10 +2350,63 @@ ath_tx_xmit_aggr(struct ath_softc *sc, s
  		return;
  	}
  
 +	/*
 +	 * TODO: If it's _before_ the BAW left edge, complain very loudly.
 +	 * This means something (else) has slid the left edge along
 +	 * before we got a chance to be TXed.
 +	 */
 +
 +	/*
 +	 * Is there space in this BAW for another frame?
 +	 * If not, don't bother trying to schedule it; just
 +	 * throw it back on the queue.
 +	 *
 +	 * If we allocate the sequence number before we add
 +	 * it to the BAW, we risk racing with another TX
 +	 * thread that gets a frame into the BAW with
 +	 * seqno greater than ours.  We'd then fail the
 +	 * below check and throw the frame on the tail of
 +	 * the queue.  The sender would then have a hole.
 +	 *
 +	 * XXX again, we're protecting ni->ni_txseqs[tid]
 +	 * behind this hardware TXQ lock, like the rest of
 +	 * the TIDs that map to it.  Ugh.
 +	 */
 +	if (bf->bf_state.bfs_dobaw) {
 +		if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 +		    ni->ni_txseqs[bf->bf_state.bfs_tid])) {
 +			ATH_TXQ_INSERT_TAIL(tid, bf, bf_list);
 +			ath_tx_tid_sched(sc, tid);
 +			return;
 +		}
 +		if (! bf->bf_state.bfs_seqno_assigned) {
 +			seqno = ath_tx_tid_seqno_assign(sc, ni, bf, bf->bf_m);
 +			if (seqno < 0) {
 +				device_printf(sc->sc_dev,
 +				    "%s: bf=%p, huh, seqno=-1?\n",
 +				    __func__,
 +				    bf);
 +				/* XXX what can we even do here? */
 +			}
 +			/* Flush seqno update to RAM */
 +			/*
 +			 * XXX This is required because the dmasetup
 +			 * XXX is done early rather than at dispatch
 +			 * XXX time. Ew, we should fix this!
 +			 */
 +			bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap,
 +			    BUS_DMASYNC_PREWRITE);
 +		}
 +	}
 +
  	/* outside baw? queue */
  	if (bf->bf_state.bfs_dobaw &&
  	    (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
  	    SEQNO(bf->bf_state.bfs_seqno)))) {
 +		device_printf(sc->sc_dev,
 +		    "%s: bf=%p, shouldn't be outside BAW now?!\n",
 +		    __func__,
 +		    bf);
  		ATH_TXQ_INSERT_TAIL(tid, bf, bf_list);
  		ath_tx_tid_sched(sc, tid);
  		return;
 @@ -2303,8 +2461,8 @@ ath_tx_swq(struct ath_softc *sc, struct 
  	tid = ath_tx_gettid(sc, m0);
  	atid = &an->an_tid[tid];
  
 -	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p, pri=%d, tid=%d, qos=%d\n",
 -	    __func__, bf, pri, tid, IEEE80211_QOS_HAS_SEQ(wh));
 +	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p, pri=%d, tid=%d, qos=%d, seqno=%d\n",
 +	    __func__, bf, pri, tid, IEEE80211_QOS_HAS_SEQ(wh), SEQNO(bf->bf_state.bfs_seqno));
  
  	/* Set local packet state, used to queue packets to hardware */
  	bf->bf_state.bfs_tid = tid;
 @@ -2320,34 +2478,34 @@ ath_tx_swq(struct ath_softc *sc, struct 
  	ATH_TXQ_LOCK(txq);
  	if (atid->paused) {
  		/* TID is paused, queue */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: paused\n", __func__);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: paused\n", __func__, bf);
  		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  	} else if (ath_tx_ampdu_pending(sc, an, tid)) {
  		/* AMPDU pending; queue */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: pending\n", __func__);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: pending\n", __func__, bf);
  		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  		/* XXX sched? */
  	} else if (ath_tx_ampdu_running(sc, an, tid)) {
  		/* AMPDU running, attempt direct dispatch if possible */
  		if (txq->axq_depth < sc->sc_hwq_limit) {
 -			ath_tx_xmit_aggr(sc, an, bf);
  			DPRINTF(sc, ATH_DEBUG_SW_TX,
 -			    "%s: xmit_aggr\n",
 -			    __func__);
 +			    "%s: bf=%p: xmit_aggr\n",
 +			    __func__, bf);
 +			ath_tx_xmit_aggr(sc, an, bf);
  		} else {
  			DPRINTF(sc, ATH_DEBUG_SW_TX,
 -			    "%s: ampdu; swq'ing\n",
 -			    __func__);
 +			    "%s: bf=%p: ampdu; swq'ing\n",
 +			    __func__, bf);
  			ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  			ath_tx_tid_sched(sc, atid);
  		}
  	} else if (txq->axq_depth < sc->sc_hwq_limit) {
  		/* AMPDU not running, attempt direct dispatch */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: xmit_normal\n", __func__);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: xmit_normal\n", __func__, bf);
  		ath_tx_xmit_normal(sc, txq, bf);
  	} else {
  		/* Busy; queue */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: swq'ing\n", __func__);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: swq'ing\n", __func__, bf);
  		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  		ath_tx_tid_sched(sc, atid);
  	}
 @@ -2478,11 +2636,11 @@ ath_tx_tid_drain(struct ath_softc *sc, s
  
  		if (t == 0) {
  			device_printf(sc->sc_dev,
 -			    "%s: node %p: tid %d: txq_depth=%d, "
 +			    "%s: node %p: bf=%p: tid %d: txq_depth=%d, "
  			    "txq_aggr_depth=%d, sched=%d, paused=%d, "
  			    "hwq_depth=%d, incomp=%d, baw_head=%d, "
  			    "baw_tail=%d txa_start=%d, ni_txseqs=%d\n",
 -			     __func__, ni, tid->tid, txq->axq_depth,
 +			     __func__, ni, bf, tid->tid, txq->axq_depth,
  			     txq->axq_aggr_depth, tid->sched, tid->paused,
  			     tid->hwq_depth, tid->incomp, tid->baw_head,
  			     tid->baw_tail, tap == NULL ? -1 : tap->txa_start,
 @@ -2493,7 +2651,7 @@ ath_tx_tid_drain(struct ath_softc *sc, s
  			    mtod(bf->bf_m, const uint8_t *),
  			    bf->bf_m->m_len, 0, -1);
  
 -			t = 1;
 +			//t = 1;
  		}
  
  
 
 Modified: head/sys/dev/ath/if_ath_tx.h
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.h	Mon Mar 19 23:28:13 2012	(r233226)
 +++ head/sys/dev/ath/if_ath_tx.h	Tue Mar 20 04:50:25 2012	(r233227)
 @@ -109,6 +109,8 @@ extern void ath_tx_addto_baw(struct ath_
      struct ath_tid *tid, struct ath_buf *bf);
  extern struct ieee80211_tx_ampdu * ath_tx_get_tx_tid(struct ath_node *an,
      int tid);
 +extern int ath_tx_tid_seqno_assign(struct ath_softc *sc,
 +    struct ieee80211_node *ni, struct ath_buf *bf, struct mbuf *m0);
  
  /* TX addba handling */
  extern	int ath_addba_request(struct ieee80211_node *ni,
 
 Modified: head/sys/dev/ath/if_ath_tx_ht.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx_ht.c	Mon Mar 19 23:28:13 2012	(r233226)
 +++ head/sys/dev/ath/if_ath_tx_ht.c	Tue Mar 20 04:50:25 2012	(r233227)
 @@ -644,7 +644,7 @@ ATH_AGGR_STATUS
  ath_tx_form_aggr(struct ath_softc *sc, struct ath_node *an, struct ath_tid *tid,
      ath_bufhead *bf_q)
  {
 -	//struct ieee80211_node *ni = &an->an_node;
 +	struct ieee80211_node *ni = &an->an_node;
  	struct ath_buf *bf, *bf_first = NULL, *bf_prev = NULL;
  	int nframes = 0;
  	uint16_t aggr_limit = 0, al = 0, bpad = 0, al_delta, h_baw;
 @@ -652,6 +652,7 @@ ath_tx_form_aggr(struct ath_softc *sc, s
  	int status = ATH_AGGR_DONE;
  	int prev_frames = 0;	/* XXX for AR5416 burst, not done here */
  	int prev_al = 0;	/* XXX also for AR5416 burst */
 +	int seqno;
  
  	ATH_TXQ_LOCK_ASSERT(sc->sc_ac2q[tid->ac]);
  
 @@ -707,16 +708,6 @@ ath_tx_form_aggr(struct ath_softc *sc, s
  		 */
  
  		/*
 -		 * If the packet has a sequence number, do not
 -		 * step outside of the block-ack window.
 -		 */
 -		if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 -		    SEQNO(bf->bf_state.bfs_seqno))) {
 -		    status = ATH_AGGR_BAW_CLOSED;
 -		    break;
 -		}
 -
 -		/*
  		 * XXX TODO: AR5416 has an 8K aggregation size limit
  		 * when RTS is enabled, and RTS is required for dual-stream
  		 * rates.
 @@ -744,6 +735,58 @@ ath_tx_form_aggr(struct ath_softc *sc, s
  		}
  
  		/*
 +		 * TODO: If it's _before_ the BAW left edge, complain very loudly.
 +		 * This means something (else) has slid the left edge along
 +		 * before we got a chance to be TXed.
 +		 */
 +
 +		/*
 +		 * Check if we have space in the BAW for this frame before
 +		 * we add it.
 +		 *
 +		 * see ath_tx_xmit_aggr() for more info.
 +		 */
 +		if (bf->bf_state.bfs_dobaw) {
 +			if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 +			    ni->ni_txseqs[bf->bf_state.bfs_tid])) {
 +				status = ATH_AGGR_BAW_CLOSED;
 +				break;
 +			}
 +			/* XXX check for bfs_need_seqno? */
 +			if (! bf->bf_state.bfs_seqno_assigned) {
 +				seqno = ath_tx_tid_seqno_assign(sc, ni, bf, bf->bf_m);
 +				if (seqno < 0) {
 +					device_printf(sc->sc_dev,
 +					    "%s: bf=%p, huh, seqno=-1?\n",
 +					    __func__,
 +					    bf);
 +					/* XXX what can we even do here? */
 +				}
 +				/* Flush seqno update to RAM */
 +				/*
 +				 * XXX This is required because the dmasetup
 +				 * XXX is done early rather than at dispatch
 +				 * XXX time. Ew, we should fix this!
 +				 */
 +				bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap,
 +				    BUS_DMASYNC_PREWRITE);
 +			}
 +		}
 +
 +		/*
 +		 * If the packet has a sequence number, do not
 +		 * step outside of the block-ack window.
 +		 */
 +		if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 +		    SEQNO(bf->bf_state.bfs_seqno))) {
 +			device_printf(sc->sc_dev,
 +			    "%s: bf=%p, seqno=%d, outside?!\n",
 +			    __func__, bf, SEQNO(bf->bf_state.bfs_seqno));
 +			status = ATH_AGGR_BAW_CLOSED;
 +			break;
 +		}
 +
 +		/*
  		 * this packet is part of an aggregate.
  		 */
  		ATH_TXQ_REMOVE(tid, bf, bf_list);
 
 Modified: head/sys/dev/ath/if_athvar.h
 ==============================================================================
 --- head/sys/dev/ath/if_athvar.h	Mon Mar 19 23:28:13 2012	(r233226)
 +++ head/sys/dev/ath/if_athvar.h	Tue Mar 20 04:50:25 2012	(r233227)
 @@ -215,6 +215,8 @@ struct ath_buf {
  		int bfs_ismrr:1;	/* do multi-rate TX retry */
  		int bfs_doprot:1;	/* do RTS/CTS based protection */
  		int bfs_doratelookup:1;	/* do rate lookup before each TX */
 +		int bfs_need_seqno:1;	/* need to assign a seqno for aggregation */
 +		int bfs_seqno_assigned:1;	/* seqno has been assigned */
  		int bfs_nfl;		/* next fragment length */
  
  		/*
 _______________________________________________
 svn-src-all@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/svn-src-all
 To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"
 
State-Changed-From-To: open->patched 
State-Changed-By: adrian 
State-Changed-When: Mon Apr 2 16:39:24 UTC 2012 
State-Changed-Why:  
Fix: http://www.freebsd.org/cgi/query-pr.cgi?pr=166190

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Mon, 11 Jun 2012 06:59:39 +0000 (UTC)

 Author: adrian
 Date: Mon Jun 11 06:59:28 2012
 New Revision: 236872
 URL: http://svn.freebsd.org/changeset/base/236872
 
 Log:
   Revert r233227 and follow-up commits, as they break CCMP PN replay detection.
   
   This showed up when doing heavy UDP throughput on SMP machines.
   
   The problem arises because the 802.11 sequence number is allocated
   separately from the CCMP PN replay counter (which is assigned
   during ieee80211_crypto_encap()).
   
   Under significant throughput (200+ MBps) the TX path would be stressed
   enough that frame TX/retry would force sequence number and PN allocation
   to be out of order.  So once the frames were reordered via 802.11 seqnos,
   the CCMP PN would be far out of order, causing most frames to be discarded
   by the receiver.
   
   I've fixed this in some local work, which forced me to:
   
     (a) deal with the issues that lead to the parallel TX causing out of
         order sequence numbers in the first place;
     (b) fix all the packet queuing issues which lead to strange (but mostly
         valid) TX.
   
   I'll begin fixing these in a subsequent commit or five.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath_debug.c
   head/sys/dev/ath/if_ath_tx.c
   head/sys/dev/ath/if_ath_tx.h
   head/sys/dev/ath/if_ath_tx_ht.c
   head/sys/dev/ath/if_athvar.h
 
 Modified: head/sys/dev/ath/if_ath_debug.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_debug.c	Mon Jun 11 05:25:26 2012	(r236871)
 +++ head/sys/dev/ath/if_ath_debug.c	Mon Jun 11 06:59:28 2012	(r236872)
 @@ -144,9 +144,6 @@ ath_printtxbuf(struct ath_softc *sc, con
  			    bf->bf_state.bfs_retries,
  			    bf->bf_state.bfs_addedbaw,
  			    bf->bf_state.bfs_dobaw);
 -			printf("        SEQNO_ASSIGNED: %d, NEED_SEQNO: %d\n",
 -			    bf->bf_state.bfs_seqno_assigned,
 -			    bf->bf_state.bfs_need_seqno);
  			printf("        %08x %08x %08x %08x %08x %08x\n",
  			    ds->ds_ctl0, ds->ds_ctl1,
  			    ds->ds_hw[0], ds->ds_hw[1],
 
 Modified: head/sys/dev/ath/if_ath_tx.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 05:25:26 2012	(r236871)
 +++ head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 06:59:28 2012	(r236872)
 @@ -109,10 +109,10 @@ static int ath_tx_ampdu_pending(struct a
      int tid);
  static int ath_tx_ampdu_running(struct ath_softc *sc, struct ath_node *an,
      int tid);
 +static ieee80211_seq ath_tx_tid_seqno_assign(struct ath_softc *sc,
 +    struct ieee80211_node *ni, struct ath_buf *bf, struct mbuf *m0);
  static int ath_tx_action_frame_override_queue(struct ath_softc *sc,
      struct ieee80211_node *ni, struct mbuf *m0, int *tid);
 -static int ath_tx_seqno_required(struct ath_softc *sc,
 -    struct ieee80211_node *ni, struct ath_buf *bf, struct mbuf *m0);
  
  /*
   * Whether to use the 11n rate scenario functions or not
 @@ -1436,7 +1436,7 @@ ath_tx_start(struct ath_softc *sc, struc
  	int ismcast;
  	const struct ieee80211_frame *wh;
  	int is_ampdu, is_ampdu_tx, is_ampdu_pending;
 -	//ieee80211_seq seqno;
 +	ieee80211_seq seqno;
  	uint8_t type, subtype;
  
  	/*
 @@ -1488,9 +1488,8 @@ ath_tx_start(struct ath_softc *sc, struc
  	is_ampdu_pending = ath_tx_ampdu_pending(sc, ATH_NODE(ni), tid);
  	is_ampdu = is_ampdu_tx | is_ampdu_pending;
  
 -	DPRINTF(sc, ATH_DEBUG_SW_TX,
 -	    "%s: bf=%p, tid=%d, ac=%d, is_ampdu=%d\n",
 -	    __func__, bf, tid, pri, is_ampdu);
 +	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: tid=%d, ac=%d, is_ampdu=%d\n",
 +	    __func__, tid, pri, is_ampdu);
  
  	/*
  	 * When servicing one or more stations in power-save mode
 @@ -1506,9 +1505,6 @@ ath_tx_start(struct ath_softc *sc, struc
  	/* Do the generic frame setup */
  	/* XXX should just bzero the bf_state? */
  	bf->bf_state.bfs_dobaw = 0;
 -	bf->bf_state.bfs_seqno_assigned = 0;
 -	bf->bf_state.bfs_need_seqno = 0;
 -	bf->bf_state.bfs_seqno = -1;	/* XXX debugging */
  
  	/* A-MPDU TX? Manually set sequence number */
  	/* Don't do it whilst pending; the net80211 layer still assigns them */
 @@ -1521,16 +1517,15 @@ ath_tx_start(struct ath_softc *sc, struc
  		 * don't get a sequence number from the current
  		 * TID and thus mess with the BAW.
  		 */
 -		//seqno = ath_tx_tid_seqno_assign(sc, ni, bf, m0);
 -		if (ath_tx_seqno_required(sc, ni, bf, m0)) {
 +		seqno = ath_tx_tid_seqno_assign(sc, ni, bf, m0);
 +		if (IEEE80211_QOS_HAS_SEQ(wh) &&
 +		    subtype != IEEE80211_FC0_SUBTYPE_QOS_NULL) {
  			bf->bf_state.bfs_dobaw = 1;
 -			bf->bf_state.bfs_need_seqno = 1;
  		}
  		ATH_TXQ_UNLOCK(txq);
  	} else {
  		/* No AMPDU TX, we've been assigned a sequence number. */
  		if (IEEE80211_QOS_HAS_SEQ(wh)) {
 -			bf->bf_state.bfs_seqno_assigned = 1;
  			/* XXX we should store the frag+seqno in bfs_seqno */
  			bf->bf_state.bfs_seqno =
  			    M_SEQNO_GET(m0) << IEEE80211_SEQ_SEQ_SHIFT;
 @@ -1541,7 +1536,7 @@ ath_tx_start(struct ath_softc *sc, struc
  	 * If needed, the sequence number has been assigned.
  	 * Squirrel it away somewhere easy to get to.
  	 */
 -	//bf->bf_state.bfs_seqno = M_SEQNO_GET(m0) << IEEE80211_SEQ_SEQ_SHIFT;
 +	bf->bf_state.bfs_seqno = M_SEQNO_GET(m0) << IEEE80211_SEQ_SEQ_SHIFT;
  
  	/* Is ampdu pending? fetch the seqno and print it out */
  	if (is_ampdu_pending)
 @@ -1558,10 +1553,6 @@ ath_tx_start(struct ath_softc *sc, struc
  	/* At this point m0 could have changed! */
  	m0 = bf->bf_m;
  
 -	DPRINTF(sc, ATH_DEBUG_SW_TX,
 -	    "%s: DONE: bf=%p, tid=%d, ac=%d, is_ampdu=%d, dobaw=%d, seqno=%d\n",
 -	    __func__, bf, tid, pri, is_ampdu, bf->bf_state.bfs_dobaw, M_SEQNO_GET(m0));
 -
  #if 1
  	/*
  	 * If it's a multicast frame, do a direct-dispatch to the
 @@ -2043,41 +2034,16 @@ ath_tx_addto_baw(struct ath_softc *sc, s
  	if (bf->bf_state.bfs_isretried)
  		return;
  
 -	/*
 -	 * If this occurs we're in a lot of trouble.  We should try to
 -	 * recover from this without the session hanging?
 -	 */
 -	if (! bf->bf_state.bfs_seqno_assigned) {
 -		device_printf(sc->sc_dev,
 -		    "%s: bf=%p, seqno_assigned is 0?!\n", __func__, bf);
 -		return;
 -	}
 -
  	tap = ath_tx_get_tx_tid(an, tid->tid);
  
  	if (bf->bf_state.bfs_addedbaw)
  		device_printf(sc->sc_dev,
 -		    "%s: re-added? bf=%p, tid=%d, seqno %d; window %d:%d; "
 -		    "baw head=%d tail=%d\n",
 -		    __func__, bf, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
 -		    tap->txa_start, tap->txa_wnd, tid->baw_head,
 -		    tid->baw_tail);
 -
 -	/*
 -	 * Verify that the given sequence number is not outside of the
 -	 * BAW.  Complain loudly if that's the case.
 -	 */
 -	if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 -	    SEQNO(bf->bf_state.bfs_seqno))) {
 -		device_printf(sc->sc_dev,
 -		    "%s: bf=%p: outside of BAW?? tid=%d, seqno %d; window %d:%d; "
 +		    "%s: re-added? tid=%d, seqno %d; window %d:%d; "
  		    "baw head=%d tail=%d\n",
 -		    __func__, bf, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
 +		    __func__, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
  		    tap->txa_start, tap->txa_wnd, tid->baw_head,
  		    tid->baw_tail);
  
 -	}
 -
  	/*
  	 * ni->ni_txseqs[] is the currently allocated seqno.
  	 * the txa state contains the current baw start.
 @@ -2085,9 +2051,9 @@ ath_tx_addto_baw(struct ath_softc *sc, s
  	index  = ATH_BA_INDEX(tap->txa_start, SEQNO(bf->bf_state.bfs_seqno));
  	cindex = (tid->baw_head + index) & (ATH_TID_MAX_BUFS - 1);
  	DPRINTF(sc, ATH_DEBUG_SW_TX_BAW,
 -	    "%s: bf=%p, tid=%d, seqno %d; window %d:%d; index=%d cindex=%d "
 +	    "%s: tid=%d, seqno %d; window %d:%d; index=%d cindex=%d "
  	    "baw head=%d tail=%d\n",
 -	    __func__, bf, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
 +	    __func__, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
  	    tap->txa_start, tap->txa_wnd, index, cindex, tid->baw_head,
  	    tid->baw_tail);
  
 @@ -2190,9 +2156,9 @@ ath_tx_update_baw(struct ath_softc *sc, 
  	cindex = (tid->baw_head + index) & (ATH_TID_MAX_BUFS - 1);
  
  	DPRINTF(sc, ATH_DEBUG_SW_TX_BAW,
 -	    "%s: bf=%p: tid=%d, baw=%d:%d, seqno=%d, index=%d, cindex=%d, "
 +	    "%s: tid=%d, baw=%d:%d, seqno=%d, index=%d, cindex=%d, "
  	    "baw head=%d, tail=%d\n",
 -	    __func__, bf, tid->tid, tap->txa_start, tap->txa_wnd, seqno, index,
 +	    __func__, tid->tid, tap->txa_start, tap->txa_wnd, seqno, index,
  	    cindex, tid->baw_head, tid->baw_tail);
  
  	/*
 @@ -2273,51 +2239,11 @@ ath_tx_tid_unsched(struct ath_softc *sc,
  }
  
  /*
 - * Return whether a sequence number is actually required.
 - *
 - * A sequence number must only be allocated at the time that a frame
 - * is considered for addition to the BAW/aggregate and being TXed.
 - * The sequence number must not be allocated before the frame
 - * is added to the BAW (protected by the same lock instance)
 - * otherwise a the multi-entrant TX path may result in a later seqno
 - * being added to the BAW first.  The subsequent addition of the
 - * earlier seqno would then not go into the BAW as it's now outside
 - * of said BAW.
 - *
 - * This routine is used by ath_tx_start() to mark whether the frame
 - * should get a sequence number before adding it to the BAW.
 - *
 - * Then the actual aggregate TX routines will check whether this
 - * flag is set and if the frame needs to go into the BAW, it'll
 - * have a sequence number allocated for it.
 - */
 -static int
 -ath_tx_seqno_required(struct ath_softc *sc, struct ieee80211_node *ni,
 -    struct ath_buf *bf, struct mbuf *m0)
 -{
 -	const struct ieee80211_frame *wh;
 -	uint8_t subtype;
 -
 -	wh = mtod(m0, const struct ieee80211_frame *);
 -	subtype = wh->i_fc[0] & IEEE80211_FC0_SUBTYPE_MASK;
 -
 -	/* XXX assert txq lock */
 -	/* XXX assert ampdu is set */
 -
 -	return ((IEEE80211_QOS_HAS_SEQ(wh) &&
 -	    subtype != IEEE80211_FC0_SUBTYPE_QOS_NULL));
 -}
 -
 -/*
   * Assign a sequence number manually to the given frame.
   *
   * This should only be called for A-MPDU TX frames.
 - *
 - * If this is called after the initial frame setup, make sure you've flushed
 - * the DMA map or you'll risk sending stale data to the NIC.  This routine
 - * updates the actual frame contents with the relevant seqno.
   */
 -int
 +static ieee80211_seq
  ath_tx_tid_seqno_assign(struct ath_softc *sc, struct ieee80211_node *ni,
      struct ath_buf *bf, struct mbuf *m0)
  {
 @@ -2330,22 +2256,8 @@ ath_tx_tid_seqno_assign(struct ath_softc
  	wh = mtod(m0, struct ieee80211_frame *);
  	pri = M_WME_GETAC(m0);			/* honor classification */
  	tid = WME_AC_TO_TID(pri);
 -	DPRINTF(sc, ATH_DEBUG_SW_TX,
 -	    "%s: bf=%p, pri=%d, tid=%d, qos has seq=%d\n",
 -	    __func__, bf, pri, tid, IEEE80211_QOS_HAS_SEQ(wh));
 -
 -	if (! bf->bf_state.bfs_need_seqno) {
 -		device_printf(sc->sc_dev, "%s: bf=%p: need_seqno not set?!\n",
 -		    __func__, bf);
 -		return -1;
 -	}
 -	/* XXX check for bfs_need_seqno? */
 -	if (bf->bf_state.bfs_seqno_assigned) {
 -		device_printf(sc->sc_dev,
 -		    "%s: bf=%p: seqno already assigned (%d)?!\n",
 -		    __func__, bf, SEQNO(bf->bf_state.bfs_seqno));
 -		return bf->bf_state.bfs_seqno >> IEEE80211_SEQ_SEQ_SHIFT;
 -	}
 +	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: pri=%d, tid=%d, qos has seq=%d\n",
 +	    __func__, pri, tid, IEEE80211_QOS_HAS_SEQ(wh));
  
  	/* XXX Is it a control frame? Ignore */
  
 @@ -2373,14 +2285,9 @@ ath_tx_tid_seqno_assign(struct ath_softc
  	}
  	*(uint16_t *)&wh->i_seq[0] = htole16(seqno << IEEE80211_SEQ_SEQ_SHIFT);
  	M_SEQNO_SET(m0, seqno);
 -	bf->bf_state.bfs_seqno = seqno << IEEE80211_SEQ_SEQ_SHIFT;
 -	bf->bf_state.bfs_seqno_assigned = 1;
  
  	/* Return so caller can do something with it if needed */
 -	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p:  -> seqno=%d\n",
 -	    __func__,
 -	    bf,
 -	    seqno);
 +	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s:  -> seqno=%d\n", __func__, seqno);
  	return seqno;
  }
  
 @@ -2392,7 +2299,6 @@ ath_tx_tid_seqno_assign(struct ath_softc
  static void
  ath_tx_xmit_aggr(struct ath_softc *sc, struct ath_node *an, struct ath_buf *bf)
  {
 -	struct ieee80211_node *ni = &an->an_node;
  	struct ath_tid *tid = &an->an_tid[bf->bf_state.bfs_tid];
  	struct ath_txq *txq = bf->bf_state.bfs_txq;
  	struct ieee80211_tx_ampdu *tap;
 @@ -2408,81 +2314,10 @@ ath_tx_xmit_aggr(struct ath_softc *sc, s
  		return;
  	}
  
 -	/*
 -	 * TODO: If it's _before_ the BAW left edge, complain very loudly.
 -	 * This means something (else) has slid the left edge along
 -	 * before we got a chance to be TXed.
 -	 */
 -
 -	/*
 -	 * Is there space in this BAW for another frame?
 -	 * If not, don't bother trying to schedule it; just
 -	 * throw it back on the queue.
 -	 *
 -	 * If we allocate the sequence number before we add
 -	 * it to the BAW, we risk racing with another TX
 -	 * thread that gets in a frame into the BAW with
 -	 * seqno greater than ours.  We'd then fail the
 -	 * below check and throw the frame on the tail of
 -	 * the queue.  The sender would then have a hole.
 -	 *
 -	 * XXX again, we're protecting ni->ni_txseqs[tid]
 -	 * behind this hardware TXQ lock, like the rest of
 -	 * the TIDs that map to it.  Ugh.
 -	 */
 -	if (bf->bf_state.bfs_dobaw) {
 -		ieee80211_seq seqno;
 -
 -		/*
 -		 * If the sequence number is allocated, use it.
 -		 * Otherwise, use the sequence number we WOULD
 -		 * allocate.
 -		 */
 -		if (bf->bf_state.bfs_seqno_assigned)
 -			seqno = SEQNO(bf->bf_state.bfs_seqno);
 -		else
 -			seqno = ni->ni_txseqs[bf->bf_state.bfs_tid];
 -
 -		/*
 -		 * Check whether either the currently allocated
 -		 * sequence number _OR_ the to-be allocated
 -		 * sequence number is inside the BAW.
 -		 */
 -		if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd, seqno)) {
 -			ATH_TXQ_INSERT_TAIL(tid, bf, bf_list);
 -			ath_tx_tid_sched(sc, tid);
 -			return;
 -		}
 -		if (! bf->bf_state.bfs_seqno_assigned) {
 -			int seqno;
 -
 -			seqno = ath_tx_tid_seqno_assign(sc, ni, bf, bf->bf_m);
 -			if (seqno < 0) {
 -				device_printf(sc->sc_dev,
 -				    "%s: bf=%p, huh, seqno=-1?\n",
 -				    __func__,
 -				    bf);
 -				/* XXX what can we even do here? */
 -			}
 -			/* Flush seqno update to RAM */
 -			/*
 -			 * XXX This is required because the dmasetup
 -			 * XXX is done early rather than at dispatch
 -			 * XXX time. Ew, we should fix this!
 -			 */
 -			bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap,
 -			    BUS_DMASYNC_PREWRITE);
 -		}
 -	}
 -
  	/* outside baw? queue */
  	if (bf->bf_state.bfs_dobaw &&
  	    (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
  	    SEQNO(bf->bf_state.bfs_seqno)))) {
 -		device_printf(sc->sc_dev,
 -		    "%s: bf=%p, shouldn't be outside BAW now?!\n",
 -		    __func__,
 -		    bf);
  		ATH_TXQ_INSERT_TAIL(tid, bf, bf_list);
  		ath_tx_tid_sched(sc, tid);
  		return;
 @@ -2539,8 +2374,8 @@ ath_tx_swq(struct ath_softc *sc, struct 
  	tid = ath_tx_gettid(sc, m0);
  	atid = &an->an_tid[tid];
  
 -	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p, pri=%d, tid=%d, qos=%d, seqno=%d\n",
 -	    __func__, bf, pri, tid, IEEE80211_QOS_HAS_SEQ(wh), SEQNO(bf->bf_state.bfs_seqno));
 +	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p, pri=%d, tid=%d, qos=%d\n",
 +	    __func__, bf, pri, tid, IEEE80211_QOS_HAS_SEQ(wh));
  
  	/* Set local packet state, used to queue packets to hardware */
  	bf->bf_state.bfs_tid = tid;
 @@ -2556,34 +2391,34 @@ ath_tx_swq(struct ath_softc *sc, struct 
  	ATH_TXQ_LOCK(txq);
  	if (atid->paused) {
  		/* TID is paused, queue */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: paused\n", __func__, bf);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: paused\n", __func__);
  		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  	} else if (ath_tx_ampdu_pending(sc, an, tid)) {
  		/* AMPDU pending; queue */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: pending\n", __func__, bf);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: pending\n", __func__);
  		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  		/* XXX sched? */
  	} else if (ath_tx_ampdu_running(sc, an, tid)) {
  		/* AMPDU running, attempt direct dispatch if possible */
  		if (txq->axq_depth < sc->sc_hwq_limit) {
 -			DPRINTF(sc, ATH_DEBUG_SW_TX,
 -			    "%s: bf=%p: xmit_aggr\n",
 -			    __func__, bf);
  			ath_tx_xmit_aggr(sc, an, bf);
 +			DPRINTF(sc, ATH_DEBUG_SW_TX,
 +			    "%s: xmit_aggr\n",
 +			    __func__);
  		} else {
  			DPRINTF(sc, ATH_DEBUG_SW_TX,
 -			    "%s: bf=%p: ampdu; swq'ing\n",
 -			    __func__, bf);
 +			    "%s: ampdu; swq'ing\n",
 +			    __func__);
  			ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  			ath_tx_tid_sched(sc, atid);
  		}
  	} else if (txq->axq_depth < sc->sc_hwq_limit) {
  		/* AMPDU not running, attempt direct dispatch */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: xmit_normal\n", __func__, bf);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: xmit_normal\n", __func__);
  		ath_tx_xmit_normal(sc, txq, bf);
  	} else {
  		/* Busy; queue */
 -		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: bf=%p: swq'ing\n", __func__, bf);
 +		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: swq'ing\n", __func__);
  		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  		ath_tx_tid_sched(sc, atid);
  	}
 @@ -2873,12 +2708,10 @@ ath_tx_tid_drain(struct ath_softc *sc, s
  		if (t == 0) {
  			device_printf(sc->sc_dev,
  			    "%s: node %p: bf=%p: addbaw=%d, dobaw=%d, "
 -			    "seqno_assign=%d, seqno_required=%d, seqno=%d, retry=%d\n",
 +			    "seqno=%d, retry=%d\n",
  			    __func__, ni, bf,
  			    bf->bf_state.bfs_addedbaw,
  			    bf->bf_state.bfs_dobaw,
 -			    bf->bf_state.bfs_need_seqno,
 -			    bf->bf_state.bfs_seqno_assigned,
  			    SEQNO(bf->bf_state.bfs_seqno),
  			    bf->bf_state.bfs_retries);
  			device_printf(sc->sc_dev,
 @@ -2888,11 +2721,11 @@ ath_tx_tid_drain(struct ath_softc *sc, s
  			    tid->hwq_depth,
  			    tid->bar_wait);
  			device_printf(sc->sc_dev,
 -			    "%s: node %p: bf=%p: tid %d: txq_depth=%d, "
 +			    "%s: node %p: tid %d: txq_depth=%d, "
  			    "txq_aggr_depth=%d, sched=%d, paused=%d, "
  			    "hwq_depth=%d, incomp=%d, baw_head=%d, "
  			    "baw_tail=%d txa_start=%d, ni_txseqs=%d\n",
 -			     __func__, ni, bf, tid->tid, txq->axq_depth,
 +			     __func__, ni, tid->tid, txq->axq_depth,
  			     txq->axq_aggr_depth, tid->sched, tid->paused,
  			     tid->hwq_depth, tid->incomp, tid->baw_head,
  			     tid->baw_tail, tap == NULL ? -1 : tap->txa_start,
 
 Modified: head/sys/dev/ath/if_ath_tx.h
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.h	Mon Jun 11 05:25:26 2012	(r236871)
 +++ head/sys/dev/ath/if_ath_tx.h	Mon Jun 11 06:59:28 2012	(r236872)
 @@ -109,8 +109,6 @@ extern void ath_tx_addto_baw(struct ath_
      struct ath_tid *tid, struct ath_buf *bf);
  extern struct ieee80211_tx_ampdu * ath_tx_get_tx_tid(struct ath_node *an,
      int tid);
 -extern int ath_tx_tid_seqno_assign(struct ath_softc *sc,
 -    struct ieee80211_node *ni, struct ath_buf *bf, struct mbuf *m0);
  
  /* TX addba handling */
  extern	int ath_addba_request(struct ieee80211_node *ni,
 
 Modified: head/sys/dev/ath/if_ath_tx_ht.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx_ht.c	Mon Jun 11 05:25:26 2012	(r236871)
 +++ head/sys/dev/ath/if_ath_tx_ht.c	Mon Jun 11 06:59:28 2012	(r236872)
 @@ -644,7 +644,7 @@ ATH_AGGR_STATUS
  ath_tx_form_aggr(struct ath_softc *sc, struct ath_node *an, struct ath_tid *tid,
      ath_bufhead *bf_q)
  {
 -	struct ieee80211_node *ni = &an->an_node;
 +	//struct ieee80211_node *ni = &an->an_node;
  	struct ath_buf *bf, *bf_first = NULL, *bf_prev = NULL;
  	int nframes = 0;
  	uint16_t aggr_limit = 0, al = 0, bpad = 0, al_delta, h_baw;
 @@ -751,74 +751,11 @@ ath_tx_form_aggr(struct ath_softc *sc, s
  		    (HAL_TXDESC_RTSENA | HAL_TXDESC_CTSENA);
  
  		/*
 -		 * TODO: If it's _before_ the BAW left edge, complain very
 -		 * loudly.
 -		 *
 -		 * This means something (else) has slid the left edge along
 -		 * before we got a chance to be TXed.
 -		 */
 -
 -		/*
 -		 * Check if we have space in the BAW for this frame before
 -		 * we add it.
 -		 *
 -		 * see ath_tx_xmit_aggr() for more info.
 -		 */
 -		if (bf->bf_state.bfs_dobaw) {
 -			ieee80211_seq seqno;
 -
 -			/*
 -			 * If the sequence number is allocated, use it.
 -			 * Otherwise, use the sequence number we WOULD
 -			 * allocate.
 -			 */
 -			if (bf->bf_state.bfs_seqno_assigned)
 -				seqno = SEQNO(bf->bf_state.bfs_seqno);
 -			else
 -				seqno = ni->ni_txseqs[bf->bf_state.bfs_tid];
 -
 -			/*
 -			 * Check whether either the currently allocated
 -			 * sequence number _OR_ the to-be allocated
 -			 * sequence number is inside the BAW.
 -			 */
 -			if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 -			    seqno)) {
 -				status = ATH_AGGR_BAW_CLOSED;
 -				break;
 -			}
 -
 -			/* XXX check for bfs_need_seqno? */
 -			if (! bf->bf_state.bfs_seqno_assigned) {
 -				int seqno;
 -				seqno = ath_tx_tid_seqno_assign(sc, ni, bf, bf->bf_m);
 -				if (seqno < 0) {
 -					device_printf(sc->sc_dev,
 -					    "%s: bf=%p, huh, seqno=-1?\n",
 -					    __func__,
 -					    bf);
 -					/* XXX what can we even do here? */
 -				}
 -				/* Flush seqno update to RAM */
 -				/*
 -				 * XXX This is required because the dmasetup
 -				 * XXX is done early rather than at dispatch
 -				 * XXX time. Ew, we should fix this!
 -				 */
 -				bus_dmamap_sync(sc->sc_dmat, bf->bf_dmamap,
 -				    BUS_DMASYNC_PREWRITE);
 -			}
 -		}
 -
 -		/*
  		 * If the packet has a sequence number, do not
  		 * step outside of the block-ack window.
  		 */
  		if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
  		    SEQNO(bf->bf_state.bfs_seqno))) {
 -			device_printf(sc->sc_dev,
 -			    "%s: bf=%p, seqno=%d, outside?!\n",
 -			    __func__, bf, SEQNO(bf->bf_state.bfs_seqno));
  			status = ATH_AGGR_BAW_CLOSED;
  			break;
  		}
 
 Modified: head/sys/dev/ath/if_athvar.h
 ==============================================================================
 --- head/sys/dev/ath/if_athvar.h	Mon Jun 11 05:25:26 2012	(r236871)
 +++ head/sys/dev/ath/if_athvar.h	Mon Jun 11 06:59:28 2012	(r236872)
 @@ -216,9 +216,7 @@ struct ath_buf {
  		    bfs_istxfrag:1,	/* is fragmented */
  		    bfs_ismrr:1,	/* do multi-rate TX retry */
  		    bfs_doprot:1,	/* do RTS/CTS based protection */
 -		    bfs_doratelookup:1,	/* do rate lookup before each TX */
 -		    bfs_need_seqno:1,	/* need to assign a seqno for aggr */
 -		    bfs_seqno_assigned:1;	/* seqno has been assigned */
 +		    bfs_doratelookup:1;	/* do rate lookup before each TX */
  
  		int bfs_nfl;		/* next fragment length */
  
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Mon, 11 Jun 2012 07:08:55 +0000 (UTC)

 Author: adrian
 Date: Mon Jun 11 07:08:40 2012
 New Revision: 236874
 URL: http://svn.freebsd.org/changeset/base/236874
 
 Log:
   Finish undoing the previous commit - this part of the code is no longer
   required.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath_tx.c
 
 Modified: head/sys/dev/ath/if_ath_tx.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:06:49 2012	(r236873)
 +++ head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:08:40 2012	(r236874)
 @@ -1518,18 +1518,15 @@ ath_tx_start(struct ath_softc *sc, struc
  		 * TID and thus mess with the BAW.
  		 */
  		seqno = ath_tx_tid_seqno_assign(sc, ni, bf, m0);
 +
 +		/*
 +		 * Don't add QoS NULL frames to the BAW.
 +		 */
  		if (IEEE80211_QOS_HAS_SEQ(wh) &&
  		    subtype != IEEE80211_FC0_SUBTYPE_QOS_NULL) {
  			bf->bf_state.bfs_dobaw = 1;
  		}
  		ATH_TXQ_UNLOCK(txq);
 -	} else {
 -		/* No AMPDU TX, we've been assigned a sequence number. */
 -		if (IEEE80211_QOS_HAS_SEQ(wh)) {
 -			/* XXX we should store the frag+seqno in bfs_seqno */
 -			bf->bf_state.bfs_seqno =
 -			    M_SEQNO_GET(m0) << IEEE80211_SEQ_SEQ_SHIFT;
 -		}
  	}
  
  	/*
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Mon, 11 Jun 2012 07:16:04 +0000 (UTC)

 Author: adrian
 Date: Mon Jun 11 07:15:48 2012
 New Revision: 236876
 URL: http://svn.freebsd.org/changeset/base/236876
 
 Log:
   Retried frames need to be inserted at the head of the list, not the tail.
   
   This is an unfortunate byproduct of how the routine is used - it's called
   with the head frame on the queue, but if the frame fails, it's inserted
   into the tail of the queue.
   
   Because of this, the sequence numbers would get shuffled around and
   the BAW would be bumped past this sequence number, which is now at the
   end of the software queue.  Then, whenever it's time for that frame
   to be transmitted, it'll be immediately outside of the BAW and TX will
   stall until the BAW catches up.
   
   It can also result in all kinds of weird duplicate BAW frames, leading
   to hilarious panics.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath_tx.c
 
 Modified: head/sys/dev/ath/if_ath_tx.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:11:34 2012	(r236875)
 +++ head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:15:48 2012	(r236876)
 @@ -2309,7 +2309,7 @@ ath_tx_xmit_aggr(struct ath_softc *sc, s
  
  	/* paused? queue */
  	if (tid->paused) {
 -		ATH_TXQ_INSERT_TAIL(tid, bf, bf_list);
 +		ATH_TXQ_INSERT_HEAD(tid, bf, bf_list);
  		/* XXX don't sched - we're paused! */
  		return;
  	}
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Mon, 11 Jun 2012 07:29:38 +0000 (UTC)

 Author: adrian
 Date: Mon Jun 11 07:29:25 2012
 New Revision: 236877
 URL: http://svn.freebsd.org/changeset/base/236877
 
 Log:
   When scheduling frames in an aggregate session, the frames should be
   scheduled from the head of the software queue rather than trying to
   queue the newly given frame.
   
   This led to some rather unfortunate out-of-order (but still valid,
   as it's inside the BAW) frame TX.
   
   This now:
   
   * Always queues the frame at the end of the software queue;
   * Tries to direct dispatch the frame at the head of the software queue,
     to try and fill up the hardware queue.
   
   TODO:
   
   * I should likely try to queue as many frames to the hardware as I can
     at this point, rather than doing one at a time;
   * ath_tx_xmit_aggr() may fail and this code assumes that it'll schedule
     the TID.  Otherwise TX may stall.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath_tx.c
 
 Modified: head/sys/dev/ath/if_ath_tx.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:15:48 2012	(r236876)
 +++ head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:29:25 2012	(r236877)
 @@ -2402,7 +2402,22 @@ ath_tx_swq(struct ath_softc *sc, struct 
  		/* XXX sched? */
  	} else if (ath_tx_ampdu_running(sc, an, tid)) {
  		/* AMPDU running, attempt direct dispatch if possible */
 +
 +		/*
 +		 * Always queue the frame to the tail of the list.
 +		 */
 +		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
 +
 +		/*
 +		 * If the hardware queue isn't busy, direct dispatch
 +		 * the head frame in the list.  Don't schedule the
 +		 * TID - let it build some more frames first?
 +		 *
 +		 * Otherwise, schedule the TID.
 +		 */
  		if (txq->axq_depth < sc->sc_hwq_limit) {
 +			bf = TAILQ_FIRST(&atid->axq_q);
 +			ATH_TXQ_REMOVE(atid, bf, bf_list);
  			ath_tx_xmit_aggr(sc, an, bf);
  			DPRINTF(sc, ATH_DEBUG_SW_TX,
  			    "%s: xmit_aggr\n",
 @@ -2411,7 +2426,6 @@ ath_tx_swq(struct ath_softc *sc, struct 
  			DPRINTF(sc, ATH_DEBUG_SW_TX,
  			    "%s: ampdu; swq'ing\n",
  			    __func__);
 -			ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  			ath_tx_tid_sched(sc, atid);
  		}
  	} else if (txq->axq_depth < sc->sc_hwq_limit) {
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Mon, 11 Jun 2012 07:32:08 +0000 (UTC)

 Author: adrian
 Date: Mon Jun 11 07:31:50 2012
 New Revision: 236878
 URL: http://svn.freebsd.org/changeset/base/236878
 
 Log:
   Make sure the frames are queued to the head of the list, not the tail.
   See previous commit.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath_tx.c
 
 Modified: head/sys/dev/ath/if_ath_tx.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:29:25 2012	(r236877)
 +++ head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:31:50 2012	(r236878)
 @@ -2318,7 +2318,7 @@ ath_tx_xmit_aggr(struct ath_softc *sc, s
  	if (bf->bf_state.bfs_dobaw &&
  	    (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
  	    SEQNO(bf->bf_state.bfs_seqno)))) {
 -		ATH_TXQ_INSERT_TAIL(tid, bf, bf_list);
 +		ATH_TXQ_INSERT_HEAD(tid, bf, bf_list);
  		ath_tx_tid_sched(sc, tid);
  		return;
  	}
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/166190: commit references a PR
Date: Mon, 11 Jun 2012 07:44:32 +0000 (UTC)

 Author: adrian
 Date: Mon Jun 11 07:44:16 2012
 New Revision: 236880
 URL: http://svn.freebsd.org/changeset/base/236880
 
 Log:
   Wrap the whole (software) TX path from ifnet dequeue to software queue
   (or direct dispatch) behind the TXQ lock (which, remember, is doubling
   as the TID lock too for now.)
   
   This ensures that:
   
    (a) the sequence number and the CCMP PN allocation is done together;
    (b) overlapping transmit paths don't interleave frames, so we don't
        end up with the original issue that triggered kern/166190.
   
        I.e., we don't end up with seqnos A, B in thread 1 and C, D in
        thread 2 being queued to the software queue as "A C D B" or
        similar, leading to BAW stalls.
   
   This has been tested:
   
   * both STA and AP modes with INVARIANTS and WITNESS;
   * TCP and UDP TX;
   * both STA->AP and AP->STA.
   
   STA is a Routerstation Pro (single CPU MIPS) and the AP is a dual-core
   Centrino.
   
   PR:		kern/166190
 
 Modified:
   head/sys/dev/ath/if_ath_tx.c
 
 Modified: head/sys/dev/ath/if_ath_tx.c
 ==============================================================================
 --- head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:35:24 2012	(r236879)
 +++ head/sys/dev/ath/if_ath_tx.c	Mon Jun 11 07:44:16 2012	(r236880)
 @@ -1171,6 +1171,15 @@ ath_tx_normal_setup(struct ath_softc *sc
  	struct ath_node *an;
  	u_int pri;
  
 +	/*
 +	 * To ensure that both sequence numbers and the CCMP PN handling
 +	 * is "correct", make sure that the relevant TID queue is locked.
 +	 * Otherwise the CCMP PN and seqno may appear out of order, causing
 +	 * re-ordered frames to have out of order CCMP PN's, resulting
 +	 * in many, many frame drops.
 +	 */
 +	ATH_TXQ_LOCK_ASSERT(txq);
 +
  	wh = mtod(m0, struct ieee80211_frame *);
  	iswep = wh->i_fc[1] & IEEE80211_FC1_WEP;
  	ismcast = IEEE80211_IS_MULTICAST(wh->i_addr1);
 @@ -1506,11 +1515,18 @@ ath_tx_start(struct ath_softc *sc, struc
  	/* XXX should just bzero the bf_state? */
  	bf->bf_state.bfs_dobaw = 0;
  
 +	/*
 +	 * Acquire the TXQ lock early, so both the encap and seqno
 +	 * are allocated together.
 +	 */
 +	ATH_TXQ_LOCK(txq);
 +
  	/* A-MPDU TX? Manually set sequence number */
 -	/* Don't do it whilst pending; the net80211 layer still assigns them */
 -	/* XXX do we need locking here? */
 +	/*
 +	 * Don't do it whilst pending; the net80211 layer still
 +	 * assigns them.
 +	 */
  	if (is_ampdu_tx) {
 -		ATH_TXQ_LOCK(txq);
  		/*
  		 * Always call; this function will
  		 * handle making sure that null data frames
 @@ -1526,7 +1542,6 @@ ath_tx_start(struct ath_softc *sc, struc
  		    subtype != IEEE80211_FC0_SUBTYPE_QOS_NULL) {
  			bf->bf_state.bfs_dobaw = 1;
  		}
 -		ATH_TXQ_UNLOCK(txq);
  	}
  
  	/*
 @@ -1545,7 +1560,7 @@ ath_tx_start(struct ath_softc *sc, struc
  	r = ath_tx_normal_setup(sc, ni, bf, m0, txq);
  
  	if (r != 0)
 -		return r;
 +		goto done;
  
  	/* At this point m0 could have changed! */
  	m0 = bf->bf_m;
 @@ -1570,16 +1585,12 @@ ath_tx_start(struct ath_softc *sc, struc
  	if (txq == &avp->av_mcastq) {
  		DPRINTF(sc, ATH_DEBUG_SW_TX,
  		    "%s: bf=%p: mcastq: TX'ing\n", __func__, bf);
 -		ATH_TXQ_LOCK(txq);
  		ath_tx_xmit_normal(sc, txq, bf);
 -		ATH_TXQ_UNLOCK(txq);
  	} else if (type == IEEE80211_FC0_TYPE_CTL &&
  		    subtype == IEEE80211_FC0_SUBTYPE_BAR) {
  		DPRINTF(sc, ATH_DEBUG_SW_TX,
  		    "%s: BAR: TX'ing direct\n", __func__);
 -		ATH_TXQ_LOCK(txq);
  		ath_tx_xmit_normal(sc, txq, bf);
 -		ATH_TXQ_UNLOCK(txq);
  	} else {
  		/* add to software queue */
  		DPRINTF(sc, ATH_DEBUG_SW_TX,
 @@ -1591,10 +1602,10 @@ ath_tx_start(struct ath_softc *sc, struc
  	 * For now, since there's no software queue,
  	 * direct-dispatch to the hardware.
  	 */
 -	ATH_TXQ_LOCK(txq);
  	ath_tx_xmit_normal(sc, txq, bf);
 -	ATH_TXQ_UNLOCK(txq);
  #endif
 +done:
 +	ATH_TXQ_UNLOCK(txq);
  
  	return 0;
  }
 @@ -1630,10 +1641,29 @@ ath_tx_raw_start(struct ath_softc *sc, s
  	/* XXX honor IEEE80211_BPF_DATAPAD */
  	pktlen = m0->m_pkthdr.len - (hdrlen & 3) + IEEE80211_CRC_LEN;
  
 -
  	DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: ismcast=%d\n",
  	    __func__, ismcast);
  
 +	pri = params->ibp_pri & 3;
 +	/* Override pri if the frame isn't a QoS one */
 +	if (! IEEE80211_QOS_HAS_SEQ(wh))
 +		pri = ath_tx_getac(sc, m0);
 +
 +	/* XXX If it's an ADDBA, override the correct queue */
 +	do_override = ath_tx_action_frame_override_queue(sc, ni, m0, &o_tid);
 +
 +	/* Map ADDBA to the correct priority */
 +	if (do_override) {
 +#if 0
 +		device_printf(sc->sc_dev,
 +		    "%s: overriding tid %d pri %d -> %d\n",
 +		    __func__, o_tid, pri, TID_TO_WME_AC(o_tid));
 +#endif
 +		pri = TID_TO_WME_AC(o_tid);
 +	}
 +
 +	ATH_TXQ_LOCK(sc->sc_ac2q[pri]);
 +
  	/* Handle encryption twiddling if needed */
  	if (! ath_tx_tag_crypto(sc, ni,
  	    m0, params->ibp_flags & IEEE80211_BPF_CRYPTO, 0,
 @@ -1688,11 +1718,6 @@ ath_tx_raw_start(struct ath_softc *sc, s
  	if (flags & (HAL_TXDESC_RTSENA|HAL_TXDESC_CTSENA))
  		bf->bf_state.bfs_ctsrate0 = params->ibp_ctsrate;
  
 -	pri = params->ibp_pri & 3;
 -	/* Override pri if the frame isn't a QoS one */
 -	if (! IEEE80211_QOS_HAS_SEQ(wh))
 -		pri = ath_tx_getac(sc, m0);
 -
  	/*
  	 * NB: we mark all packets as type PSPOLL so the h/w won't
  	 * set the sequence number, duration, etc.
 @@ -1774,19 +1799,6 @@ ath_tx_raw_start(struct ath_softc *sc, s
  
  	/* NB: no buffered multicast in power save support */
  
 -	/* XXX If it's an ADDBA, override the correct queue */
 -	do_override = ath_tx_action_frame_override_queue(sc, ni, m0, &o_tid);
 -
 -	/* Map ADDBA to the correct priority */
 -	if (do_override) {
 -#if 0
 -		device_printf(sc->sc_dev,
 -		    "%s: overriding tid %d pri %d -> %d\n",
 -		    __func__, o_tid, pri, TID_TO_WME_AC(o_tid));
 -#endif
 -		pri = TID_TO_WME_AC(o_tid);
 -	}
 -
  	/*
  	 * If we're overiding the ADDBA destination, dump directly
  	 * into the hardware queue, right after any pending
 @@ -1796,13 +1808,12 @@ ath_tx_raw_start(struct ath_softc *sc, s
  	    __func__, do_override);
  
  	if (do_override) {
 -		ATH_TXQ_LOCK(sc->sc_ac2q[pri]);
  		ath_tx_xmit_normal(sc, sc->sc_ac2q[pri], bf);
 -		ATH_TXQ_UNLOCK(sc->sc_ac2q[pri]);
  	} else {
  		/* Queue to software queue */
  		ath_tx_swq(sc, ni, sc->sc_ac2q[pri], bf);
  	}
 +	ATH_TXQ_UNLOCK(sc->sc_ac2q[pri]);
  
  	return 0;
  }
 @@ -2032,6 +2043,15 @@ ath_tx_addto_baw(struct ath_softc *sc, s
  	if (bf->bf_state.bfs_isretried)
  		return;
  
 +	if (! bf->bf_state.bfs_dobaw) {
 +		device_printf(sc->sc_dev,
 +		    "%s: dobaw=0, seqno=%d, window %d:%d\n",
 +		    __func__,
 +		    SEQNO(bf->bf_state.bfs_seqno),
 +		    tap->txa_start,
 +		    tap->txa_wnd);
 +	}
 +
  	tap = ath_tx_get_tx_tid(an, tid->tid);
  
  	if (bf->bf_state.bfs_addedbaw)
 @@ -2043,6 +2063,20 @@ ath_tx_addto_baw(struct ath_softc *sc, s
  		    tid->baw_tail);
  
  	/*
 +	 * Verify that the given sequence number is not outside of the
 +	 * BAW.  Complain loudly if that's the case.
 +	 */
 +	if (! BAW_WITHIN(tap->txa_start, tap->txa_wnd,
 +	    SEQNO(bf->bf_state.bfs_seqno))) {
 +		device_printf(sc->sc_dev,
 +		    "%s: bf=%p: outside of BAW?? tid=%d, seqno %d; window %d:%d; "
 +		    "baw head=%d tail=%d\n",
 +		    __func__, bf, tid->tid, SEQNO(bf->bf_state.bfs_seqno),
 +		    tap->txa_start, tap->txa_wnd, tid->baw_head,
 +		    tid->baw_tail);
 +	}
 +
 +	/*
  	 * ni->ni_txseqs[] is the currently allocated seqno.
  	 * the txa state contains the current baw start.
  	 */
 @@ -2265,6 +2299,8 @@ ath_tx_tid_seqno_assign(struct ath_softc
  	if (! IEEE80211_QOS_HAS_SEQ(wh))
  		return -1;
  
 +	ATH_TID_LOCK_ASSERT(sc, &(ATH_NODE(ni)->an_tid[tid]));
 +
  	/*
  	 * Is it a QOS NULL Data frame? Give it a sequence number from
  	 * the default TID (IEEE80211_NONQOS_TID.)
 @@ -2276,6 +2312,7 @@ ath_tx_tid_seqno_assign(struct ath_softc
  	 */
  	subtype = wh->i_fc[0] & IEEE80211_FC0_SUBTYPE_MASK;
  	if (subtype == IEEE80211_FC0_SUBTYPE_QOS_NULL) {
 +		/* XXX no locking for this TID? This is a bit of a problem. */
  		seqno = ni->ni_txseqs[IEEE80211_NONQOS_TID];
  		INCR(ni->ni_txseqs[IEEE80211_NONQOS_TID], IEEE80211_SEQ_RANGE);
  	} else {
 @@ -2369,6 +2406,8 @@ ath_tx_swq(struct ath_softc *sc, struct 
  	int pri, tid;
  	struct mbuf *m0 = bf->bf_m;
  
 +	ATH_TXQ_LOCK_ASSERT(txq);
 +
  	/* Fetch the TID - non-QoS frames get assigned to TID 16 */
  	wh = mtod(m0, struct ieee80211_frame *);
  	pri = ath_tx_getac(sc, m0);
 @@ -2391,7 +2430,6 @@ ath_tx_swq(struct ath_softc *sc, struct 
  	 * If the TID is paused or the traffic it outside BAW, software
  	 * queue it.
  	 */
 -	ATH_TXQ_LOCK(txq);
  	if (atid->paused) {
  		/* TID is paused, queue */
  		DPRINTF(sc, ATH_DEBUG_SW_TX, "%s: paused\n", __func__);
 @@ -2439,7 +2477,6 @@ ath_tx_swq(struct ath_softc *sc, struct 
  		ATH_TXQ_INSERT_TAIL(atid, bf, bf_list);
  		ath_tx_tid_sched(sc, atid);
  	}
 -	ATH_TXQ_UNLOCK(txq);
  }
  
  /*
 
>Unformatted:
