Re: [aqm] Last Call: <draft-ietf-aqm-fq-codel-05.txt> (FlowQueue-Codel) to Experimental RFC

Bob Briscoe <> Fri, 25 March 2016 19:05 UTC

To: Jonathan Morton <>
Cc: Toke Høiland-Jørgensen <>


It does make sense.

On 24/03/16 20:08, Jonathan Morton wrote:
>> On 21 Mar, 2016, at 20:04, Bob Briscoe <> wrote:
>> The experience that led me to understand this problem was when a bunch of colleagues tried to set up a start-up (a few years ago now) to sell a range of "equitable quality" video codecs (i.e. constant quality, variable bit-rate, instead of constant bit-rate, variable quality). Then, the first ISP they tried to sell to had WFQ in its broadband remote access servers. Even though this was between users, not flows, when video was the dominant traffic it overrode the benefits of their cool codecs (which would have delivered twice as many videos with the same quality over the same capacity).
> This result makes no sense.
> You state that the new codecs “would have delivered twice as many videos with the same quality over the same capacity”, and video “was the dominant traffic”, *and* the network was the bottleneck while running the new codecs.
> The logical conclusion must be either that the network was severely under-capacity
Nope. The SVLAN buffer (Service VLAN shared by all users on the same 
DSLAM) at the Broadband Network Gateway (BNG) became the bottleneck 
during peak hour, while at other times each user's CVLAN (Customer VLAN) 
at the BNG was the bottleneck. The proposition was to halve the SVLAN 
capacity serving the same CVLANs by exploiting the multiplexing gain of 
equitable quality video... explained below.
> and was *also* the bottleneck, only twice as hard, under the old codecs; or that there was insufficient buffering at the video clients to cope with temporary shortfalls in link bandwidth;
I think you are imagining that the bit-rate of a constant quality video 
varies around a constant mean over the timescale that a client buffer 
can absorb. It doesn't. The guys who developed constant quality video 
analysed a wide range of commercial videos including feature films, 
cartoons, documentaries etc, and found that, at whatever timescale you 
average over, you get a significantly different mean. This is because, 
to get the same quality, complex passages like a scene in a forest in 
the wind or splashing water require much higher bit-rate than simpler 
passages, e.g. a talking head with a fixed background. A passage of 
roughly the same visual complexity can last for many minutes within one 
video before moving on to a passage of completely different complexity.
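To see that concretely, here is a toy sketch (my own invention, not the BT 
researchers' data: the passage lengths and bit-rates below are made up) of a 
piecewise-complexity trace whose window means still vary widely even at long 
averaging windows:

```python
import random

random.seed(1)
FPS = 25                                    # frames per second
TOTAL = 30 * 60 * FPS                       # 30 minutes of frames

# synthetic "constant quality" trace: passages of 1-5 minutes,
# each with its own mean bit-rate (0.5 - 8 Mb/s, invented numbers)
trace = []
while len(trace) < TOTAL:
    passage = random.randint(1, 5) * 60 * FPS
    mean = random.uniform(0.5, 8.0)
    trace += [max(0.1, random.gauss(mean, 0.2 * mean))
              for _ in range(passage)]
trace = trace[:TOTAL]

def spread_of_window_means(x, win):
    """Standard deviation of the per-window mean bit-rate."""
    means = [sum(x[i:i + win]) / win for i in range(0, len(x) - win, win)]
    mu = sum(means) / len(means)
    return (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5

for seconds in (0.12, 7.68, 60.0):          # cf. the 3- and 192-frame plots
    win = int(seconds * FPS)
    print(f"{seconds:6.2f} s windows: spread of means = "
          f"{spread_of_window_means(trace, win):.2f} Mb/s")
```

Because whole passages sit at different rates, the spread of window means in 
this toy trace barely shrinks as the window grows, which is exactly the 
property that defeats client buffering.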

Also, I hope you are aware of earlier research from around 2003 that 
found that humans judge the quality of a video by the worst quality 
passages, so there's no point increasing the quality if you can't 
maintain it and have to degrade it again. That's where the idea of 
constant quality encoding came from.

The point these researchers made is that the variable bit-rate model of 
video we have all been taught was derived from the media industry's need 
to package videos in constant size media (whether DVDs or TV channels). 
The information rate that the human brain prefers is very different.

A typical (not contrived) example bit-rate trace of constant quality 
video is on slide 20 of a talk I gave for the ICCRG in May 2009, when I 
first found out about this research:
As it says, the blue plot is averaged over 3 frames (0.12s) and red over 
192 frames (7.68s). If FQ gave everyone roughly constant bit-rate, you 
can see that even 7s of client buffer would not be able to absorb the 
difference between what they wanted and what they were given.
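A back-of-envelope check, with invented numbers rather than anything from the 
slide, makes the buffer argument concrete:

```python
# all numbers are illustrative assumptions, not taken from the slide
want, given = 6.0, 3.0        # Mb/s: complex-passage demand vs FQ fair share
passage = 5 * 60              # seconds: one long complex passage
deficit = (want - given) * passage   # Mb the client would need pre-buffered
buffer_s = 7.68               # seconds of client buffer (192 frames at 25 fps)
buffered = want * buffer_s    # Mb such a buffer actually holds
print(f"shortfall over the passage: {deficit:.0f} Mb")
print(f"a {buffer_s} s buffer holds: {buffered:.0f} Mb")
```

The buffer covers seconds of playout; the shortfall keeps accumulating for 
minutes.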

Constant quality videos multiplex together nicely in a FIFO. The rest of 
slide 20 quantifies the multiplexing gain:
* If you keep it strictly constant quality, you get 25% multiplexing 
gain compared to CBR.
* If all the videos respond to congestion a little (i.e. when many peaks 
coincide, causing loss or ECN marking), so they all sacrifice the same 
proportion of quality (called equitable quality video), you get over 200% 
multiplexing gain relative to CBR. That's the x2 gain I quoted originally.

Anyway, even if client buffering did absorb the variations, you wouldn't 
want to rely on it. Constant quality video ought to be applicable to 
conversational and interactive video, not just streamed. Then you would 
want to keep client buffers below a few milliseconds.

> or that demand for videos doubled due to the new codecs providing a step-change in the user experience (which feeds back into the network capacity conclusion).
Nope, this was a controlled experiment (see below).
> In short, it was not WFQ that caused the problem.
Once they worked out that the problem might be the WFQ in the Broadband 
Network Gateway, they simulated the network with and without WFQ and 
proved that WFQ was the problem.
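For readers who want the mechanism rather than the anecdote: a work-conserving 
fair scheduler gives each flow the smaller of its demand and a max-min fair 
share, whereas equitable quality would cut every flow's rate by the same 
fraction. A minimal sketch (my own toy model with invented numbers, not their 
simulation):

```python
def max_min_fair(demands, capacity):
    """Water-filling model of a work-conserving WFQ scheduler."""
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    cap = capacity
    while active:
        share = cap / len(active)
        sated = [i for i in active if demands[i] <= share]
        if not sated:              # everyone still backlogged: split equally
            for i in active:
                alloc[i] = share
            break
        for i in sated:            # satisfy small demands, recycle the rest
            alloc[i] = demands[i]
            cap -= demands[i]
        active = [i for i in active if i not in sated]
    return alloc

demands = [8.0, 1.0, 1.0, 1.0]     # one complex scene, three talking heads
C = 8.0                            # shared link capacity, Mb/s
wfq = max_min_fair(demands, C)
equitable = [d * C / sum(demands) for d in demands]  # same fractional cut
print("WFQ (max-min):", wfq)       # [5.0, 1.0, 1.0, 1.0]
print("equitable cut:", [round(r, 2) for r in equitable])
```

Under max-min fairness the complex passage is cut from 8 to 5 Mb/s while the 
simple flows lose nothing; under the equitable cut every flow loses the same 
~27% of its rate, so every viewer sacrifices the same quality.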


The papers below describe Equitable Quality Video, but I'm afraid there 
is no published write-up of the problems they encountered with FQ - an 
unfortunate side-effect of the research community's bias against 
publishing negative results.

Mulroy09 is a more practical way of implementing equitable quality 
video, while Crabtree09 is the near-perfect strategy precomputed for 
each video:
[Mulroy09] Mulroy, P., Appleby, S., Nilsson, M. & Crabtree, B., "The Use 
of MulTCP for the Delivery of Equitable Quality Video," In: Proc. Int'l 
Packet Video Wkshp (PV'09) IEEE (May 2009)
[Crabtree09] Crabtree, B., Nilsson, M., Mulroy, P. & Appleby, S., 
"Equitable quality video streaming over DSL," In: Proc. Int'l Packet 
Video Wkshp (PV'09) IEEE (May 2009)

Either can be accessed from:

>   - Jonathan Morton

Bob Briscoe