Re: [aqm] Is bufferbloat a real problem?

David Collier-Brown <davec-b@rogers.com> Sat, 28 February 2015 02:00 UTC

Message-ID: <54F1212D.3040703@rogers.com>
Date: Fri, 27 Feb 2015 21:00:13 -0500
From: David Collier-Brown <davec-b@rogers.com>
To: aqm@ietf.org
References: <201502271852.t1RIqekh018253@maildrop31.somerville.occnc.com>
In-Reply-To: <201502271852.t1RIqekh018253@maildrop31.somerville.occnc.com>
Archived-At: <http://mailarchive.ietf.org/arch/msg/aqm/rsfuXJfD-c78N8excSayh2NO6wk>
Subject: Re: [aqm] Is bufferbloat a real problem?
Reply-To: davecb@spamcop.net

IMHO, removing latency is the aim of FQ. Once done, buffer sizes can be 
unbounded (save by price (;-))

--dave

On 02/27/2015 01:52 PM, Curtis Villamizar wrote:
> In message <2134947047.1078309.1424979858723.JavaMail.yahoo@mail.yahoo.com>
> Daniel Havey writes:
>   
>>   
>> I know that this question is a bit ridiculous in this community.  Of
>> course bufferbloat is a real problem.  However, it would be nice to
>> formally address the question and I think this community is the right
>> place to do so.
>>   
>> Does anybody have a measurement study?  I have some stuff from the FCC
>> Measuring Broadband in America studies, but, that doesn't address
>> bufferbloat directly.
>>   
>> Let me repeat for clarity.  I don't need to be convinced.  I need
>> evidence that I can use to convince my committee.
>>   
>> ...Daniel
>>   
>
> Daniel,
>
> You are convinced.  So am I, but we need to temper the message to
> avoid going too far in the opposite direction - too little buffer.
>
> If you are not interested in the details, just skip to the last
> paragraph.
>
> Bufferbloat should in principle only be a problem for interactive
> realtime traffic.  Interactive means two-way or multiway: SIP, Skype,
> audio and video conferencing, etc.  In practice it is also bad for
> TCP flows with a short RTT and a small max window.
>
> One-way realtime (such as streaming audio and video) should be
> unaffected by all but huge bufferbloat.  That is, it *should* be.
> For example, youtube video is carried over TCP and is typically
> either way ahead of the playback or choppy (inadequate aggregate
> bandwidth, or marginal bandwidth and/or a big drop with a TCP stall).
> It would be nice if marginal aggregate bandwidth were dealt with by
> switching to a lower bandwidth encoding, but too often this is not
> the case.  And some streaming formats do manage to get this wrong and
> end up delay sensitive.
>
> TCP needs buffering to function correctly.  Huge bufferbloat is bad
> for TCP, particularly for small transfers that never get out of TCP
> slow start and for short RTT flows.  For long RTT flows, too little
> buffer causes problems.
>
> [ aside: For example, if TCP starts with 4 segments at a 1 KB segment
> size, it will take 4 RTTs to hit a 64 KB window, the typical max
> window without the TCP large window (TCPLW) option.  During that
> time, 60 KB will be sent.  After that, 64 KB is sent each RTT.  With
> a geographic RTT of 70 msec (approximate US continental RTT due to
> the finite speed of light in fiber and fiber distance), 60 KB is sent
> in the first 280 msec and 64 KB every 70 msec thereafter, yielding
> about 7 Mb/s.  OTOH, if there is a server 2 msec RTT away (1 msec one
> way is 125 mi = 200 km), then 60 KB is sent in the first 8 msec and
> 256 Mb/s after that.  If there is a 100 msec buffer at a bottleneck,
> then this low RTT TCP flow will be slowed by a factor of about 50.
> OTOH, if bottlenecks have a lot less than 1 RTT of buffer, then the
> long RTT TCP flows will get even further slowed. ]
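>
> [ A quick sketch in Python of the arithmetic above; the 4-segment
> initial window, 1 KB segments, and 64 KB no-TCPLW cap are the
> assumptions stated in the aside, not general truths:
>
>     # Back-of-the-envelope TCP throughput, per the aside above.
>     # Assumes 1 KB segments, a 4-segment initial window, a 64 KB
>     # max window, and a window that doubles each RTT (slow start).
>
>     def ramp_up(init_kb=4, max_kb=64):
>         """KB sent and RTTs spent while the window doubles to max_kb."""
>         sent, rtts, win = 0, 0, init_kb
>         while win < max_kb:
>             sent += win        # one full window sent this RTT
>             win *= 2           # slow start: double per RTT
>             rtts += 1
>         return sent, rtts      # (60 KB, 4 RTTs) with the defaults
>
>     def steady_mbps(window_kb, rtt_ms):
>         """Steady state: one max window per RTT, in Mb/s."""
>         return window_kb * 8 / rtt_ms    # KB*8 = kb; kb/ms == Mb/s
>
>     print(ramp_up())                       # (60, 4)
>     print(round(steady_mbps(64, 70), 1))   # ~7.3 Mb/s at 70 msec RTT
>     print(round(steady_mbps(64, 2)))       # 256 Mb/s at 2 msec RTT
>     print(round(steady_mbps(64, 102)))     # ~5 Mb/s behind 100 msec of buffer
>
> The factor-of-50 slowdown is just 102 msec / 2 msec. ]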
>
> One effect of some buffer, but not an excessive amount, is that short
> RTT flows, given the no-TCPLW max window, get slowed down while
> longer RTT flows are less affected.  This makes transfer rates more
> fair among TCP flows.  The same holds true if TCPLW gets turned on in
> commodity gadgets and the commonly deployed max window increases, but
> the numbers change.
>
> If the buffer grows a little and the deployed window sizes become the
> limiting factor, then this is very light congestion with delay but
> absolutely zero loss due to queue drops (not considering AQM for the
> moment).
>
> Some uses of TCP increase the window to work better over a long RTT.
> It takes a bit longer to hit the max window, but the rate once it has
> been hit is greater.  Setting the TCP window large on short RTT flows
> is counterproductive, since one or a small number of flows can cause
> a bottleneck on a slow provider link (i.e., the 10-100 Mb/s range
> typical of home use).  On a LAN, RTT can be well under 1 msec on
> Ethernet and highly variable on WiFi, and on WiFi a larger window can
> contribute to some real trouble.  So it is best that the window be
> changed from the default only where the path warrants it.
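>
> [ A minimal Python sketch of the sizing rule behind this, the
> bandwidth-delay product; the link rates here are illustrative
> assumptions, not numbers from this thread:
>
>     # Bandwidth-delay product: bytes in flight needed to keep a
>     # path full. A window far above this on a short RTT path buys
>     # nothing and lets one flow fill a slow link's buffer.
>
>     def bdp_bytes(link_mbps, rtt_ms):
>         return int(link_mbps * 1e6 / 8 * rtt_ms / 1e3)
>
>     print(bdp_bytes(50, 70))   # ~437 KB: long path needs TCPLW
>     print(bdp_bytes(50, 2))    # ~12 KB: short path fits well under 64 KB
> ]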
>
> [ Note that the work on automatic sizing of tcp_sndbuf and
> tcp_recvbuf may create a tendency to saturate links, as the window
> can go up to 2 MB with default parameters.  Has this hit consumer
> devices yet?  This could be bad if it rolls out before widespread use
> of AQM. ]
>
> When a small amount of loss occurs, such as a single packet or far
> fewer packets than the current window size, TCP cuts the current
> window size in half and retransmits the lost packets from the window
> in flight (ignoring the selective acknowledgment extension, aka SACK,
> for the moment).
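>
> [ A toy sketch of that response, Reno-style and ignoring SACK as
> above; illustrative only:
>
>     # Reno-style reaction to an isolated loss: halve the congestion
>     # window (multiplicative decrease), then grow it by one segment
>     # per RTT (additive increase) during congestion avoidance.
>
>     def on_isolated_loss(cwnd_segments):
>         return max(cwnd_segments // 2, 2)    # never below 2 segments
>
>     def on_loss_free_rtt(cwnd_segments, max_window=64):
>         return min(cwnd_segments + 1, max_window)
>
>     print(on_isolated_loss(64))   # 32: one drop costs half the rate,
>                                   # and ~32 RTTs to earn it back
> ]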
>
> If the buffer is way too small, then a large amount of premature drop
> occurs when the buffer limit is hit.  Lots of TCP flows slow down.
> The long RTT flows slow down the most.  Some retransmission occurs
> (which doesn't help congestion).  If there is a long period of drop
> relative to a short RTT, then an entire window can be dropped, and
> this is terrible for TCP (slow start is initiated after a delay based
> on an estimate of RTT and RTT stdev, or 3 sec if the RTT estimate is
> stale - this is a TCP stall).  So with too little buffer some TCP
> flows get hammered and stall.  TCP flows with a long RTT tend to
> stall less but are more sensitive to the frequency of drop events and
> can get extremely slow due to successively cutting the window in half
> and then growing it linearly rather than exponentially.
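>
> [ A sketch of the timeout estimate behind those stall times, using
> the RFC 6298 smoothing constants; the sample values are made up:
>
>     # Retransmission timeout (RTO) from smoothed RTT and RTT
>     # variance, RFC 6298 style: RTO = SRTT + 4 * RTTVAR, floor 1 s.
>     # A stall lasts roughly this long with nothing on the wire; the
>     # 3 sec figure above is the classic value for a stale estimate.
>
>     ALPHA, BETA = 1/8, 1/4      # standard smoothing gains
>
>     def update_rto(srtt, rttvar, sample):
>         rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
>         srtt = (1 - ALPHA) * srtt + ALPHA * sample
>         return srtt, rttvar, max(srtt + 4 * rttvar, 1.0)
>
>     srtt, rttvar, rto = update_rto(0.070, 0.035, 0.072)  # 70 msec path
>     print(round(rto, 3))        # ~1.0 s: the floor dominates here
> ]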
>
> With tiny buffers really bad things tend to happen.  The rate of
> retransmission can rise, and goodput (the amount of non-retransmit
> traffic per unit time) can drop substantially.  Long RTT flows can
> become hopelessly slow.  Stalls become more common.  In the worst
> case (which was observed in an ISP network during a tiny buffer
> experiment about a decade ago, details in private email) TCP
> synchronization can occur, and utilization and goodput drop
> dramatically.
>
> A moderate amount of buffer is good for all TCP.  A large buffer is
> good for long RTT TCP flows, particularly those that have increased
> max window.  As mentioned before, any but a very small buffer is bad
> for interactive real time applications.
>
> Enter AQM.  A large buffer can be used, but with a lower target delay
> and some form of AQM to introduce a low rate of isolated drops as
> needed to slow the senders.  Avoiding queue tail drop events, where a
> lot of drops occur over an interval, lowers the amount of
> retransmission and avoids stalls.  Even so, long RTT flows tend to be
> penalized the most.
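>
> [ A toy illustration of the idea; this is a RED-like ramp keyed to
> queueing delay, not any particular WG algorithm, and the 5/100 msec
> thresholds and 10% ceiling are invented for the example:
>
>     # Toy AQM: early drop probability that rises with queueing
>     # delay, so senders slow down before tail drop ever happens.
>
>     import random
>
>     MIN_DELAY, MAX_DELAY = 0.005, 0.100   # ramp start/end, seconds
>     MAX_PROB = 0.10                       # early-drop ceiling
>
>     def should_drop(queue_delay_s):
>         if queue_delay_s <= MIN_DELAY:
>             return False                  # low delay: never drop
>         ramp = (queue_delay_s - MIN_DELAY) / (MAX_DELAY - MIN_DELAY)
>         return random.random() < min(ramp, 1.0) * MAX_PROB
>
>     drops = sum(should_drop(0.050) for _ in range(100000))
>     print(drops / 100000)                 # ~0.047: isolated drops
> ]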
>
> Fairness is not great with a single queue and AQM, but this is much
> better than a single queue with either a small or a large buffer and
> tail drop.  Fairness is greatly improved with some form of FQ or SFQ.
>
> Ideally with FQ each flow would get its own queue.  In practice this
> is not the case, but the situation is greatly improved.  A real time
> flow, which is inherently rate limited, would see minimal delay and
> no loss.  A short RTT flow would see a moderate increase in delay and
> a low level of loss (i.e., typically much less than 1%), enough to
> slow it down enough to avoid congestion.  A long RTT flow would see a
> moderate increase in delay and no loss if still running slower than
> the short RTT flows.  This does wonders for fairness and provides the
> best possible service for each service type.
>
> In practice, some FQ or SFQ queues have a mix of real time, low RTT
> TCP, and high RTT TCP.  If any such queue is taking a smaller share
> than other queues, delay is low and loss is low or zero.  If such a
> queue is taking more than its share, then the situation is similar to
> the single queue case.  Fewer flows end up in such a queue.  Cascaded
> queues have been proposed and in some cases (no longer existing) have
> been implemented.  In a cascaded SFQ scheme, the queues taking more
> than their share are further subdivided.  Repeat the subdivision a
> few times and you can end up with the large bandwidth contributors in
> their own queues, each getting a fair share of capacity.
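>
> [ A minimal Python sketch of the flow-to-queue hashing this
> describes; the bucket count and the salt-based rehash for the
> cascading step are illustrative choices:
>
>     # SFQ-style hashing: the 5-tuple picks one of N queues, so a
>     # heavy flow mostly affects only flows sharing its bucket. To
>     # cascade, re-hash an over-share queue's flows with a new salt
>     # into child queues, isolating the big contributors.
>
>     import zlib
>
>     N_QUEUES = 64
>
>     def queue_for(flow_5tuple, salt=0):
>         key = repr((salt,) + flow_5tuple).encode()
>         return zlib.crc32(key) % N_QUEUES
>
>     flow = ("10.0.0.2", 44321, "93.184.216.34", 443, "tcp")
>     print(queue_for(flow), queue_for(flow, salt=1))
> ]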
>
> So excuse the length of this, but solving bufferbloat is *not* a
> silver bullet.  Not understanding that point and just making buffers
> really small could result in an even worse situation than we have
> now.
>
> Curtis
>
> ps - Some aspects of this may not reflect WG direction.  IMHO, the
> downsides of just making buffers smaller and/or setting low delay
> targets may not be getting enough (or any) attention in the WG.
> Maybe discussion wouldn't hurt.
>
>


-- 
David Collier-Brown,         | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb@spamcop.net           |                      -- Mark Twain