Re: [tsvwg] Traffic protection as a hard requirement for NQB

Bob Briscoe <> Tue, 10 September 2019 16:55 UTC

To: Sebastian Moeller <>
Cc: Greg White <>, Steven Blake <>, "" <>
From: Bob Briscoe <>
Date: Tue, 10 Sep 2019 17:55:29 +0100


On 07/09/2019 20:14, Sebastian Moeller wrote:
>> On Sep 6, 2019, at 21:53, Greg White <> wrote:
>> [GW] comments below
>> From: tsvwg <> on behalf of Steven Blake <>
>> The objective as I understand it is to offer low queueing latency to non-admissioned-controlled/non-capacity-seeking traffic (NQB). First, this is only feasible if NQB traffic is naturally self-limiting to a small fraction of total traffic (e.g., < 5%). Second, existence of this NQB service should not noticeably impact the QoS of capacity-seeking (QB) traffic. Third, implementations of this NQB service should try to ensure that capacity-seeking traffic trying to utilize the NQB service gets worse service with very high probability (so that it doesn't even bother to probe).
>> [GW] If the WRR weight is 0.5 for each queue, then it is feasible for the NQB PHB to provide low queueing latency to NQB traffic up to 50% of the total link capacity (and more than 50% when QB traffic is not filling up the other 50%).
> 	[SM] The NQB PHB is not going to provide low queueing latency to NQB traffic by itself. The whole "trick" behind the low queueing latency, IMHO, is effectively to mandate pacing and, ideally, to synchronize all active senders such that each of their packets arrives only shortly before its egress timeslot at the bottleneck becomes available.
The whole trick is the first part (pacing); the second part (ideally 
synchronize) is not part of the trick and is not necessary.
> For Bob's examples like VoIP, that trick will only work partially, as the VoIP senders will pick their sending slots based on acquisition start time and sampling rate, and will not adapt to what would be ideal for the bottleneck's scheduler (VoIP being UDP, there is not even a viable path from the bottleneck AQM back to the sender to synchronize them). Using NTP as a proxy for assessing synchronisation quality and jitter, it seems to me that <1 ms synchronization over the open internet is going to be hard given the transient nature of most flows, but hard does not equal impossible.
Randomness is our friend. For a given NQB utilization r: if there are 
two equal but uncoordinated NQB flows, the probability of a queue of 2 
packets is r/2. But if there are n equal but uncoordinated flows, the 
probability of a queue of n packets is vanishingly small ((r/n)^(n-1), I 
think).
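The collapse of that probability as n grows can be checked with a rough Monte Carlo sketch. The model below (one full queue period per trial, one packet per flow at a uniformly random phase, fixed per-packet service time, no wraparound across periods) is my own simplification, so it will not reproduce the quoted r/2 figure exactly, but it does show the full-queue probability falling away rapidly with n:

```python
import random

def max_queue_depth(phases, service):
    """Max number of packets simultaneously queued, given one packet
    per flow arriving at the given phases and a fixed per-packet
    service time (work-conserving FIFO, period normalized to 1)."""
    arrivals = sorted(phases)
    depth = best = 1
    finish = arrivals[0] + service      # when the current backlog drains
    for t in arrivals[1:]:
        if t < finish:                  # arrives while backlog remains
            depth += 1
            finish += service
        else:                           # queue emptied before this arrival
            depth = 1
            finish = t + service
        best = max(best, depth)
    return best

def p_full_queue(n, r, trials=200_000):
    """Estimate P(queue reaches n packets) for n equal, uncoordinated
    CBR flows at total utilization r.  Single-period approximation."""
    service = r / n     # each flow's packet occupies r/n of the period
    hits = sum(
        max_queue_depth([random.random() for _ in range(n)], service) == n
        for _ in range(trials)
    )
    return hits / trials
```

For a fixed r, comparing `p_full_queue(2, 0.5)` against `p_full_queue(4, 0.5)` and `p_full_queue(8, 0.5)` shows the same qualitative behaviour as the (r/n)^(n-1) heuristic: more uncoordinated flows make a full queue dramatically less likely.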

That's why no-one attempts to get VoIP flows or flows of game-sync 
datagrams to shift their phase to reduce overall delay. Randomness is 
sufficient on its own.
The calculations I showed with 5 flows gave roughly 600us worst-case 
queuing delay without any attempt at desynchronizing each of the flows. 
And the probability of even that much would be tiny.
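The order of magnitude of that worst case is easy to reproduce. The parameters below (full-size 1500-byte packets, a 120 Mbit/s bottleneck, the rate mentioned later in this thread) are my assumptions for illustration, not the exact inputs to the original calculation:

```python
# Back-of-envelope worst case: all n paced flows' packets happen to
# arrive at the same instant, so the last one waits for the other n-1.
# Packet size and link rate are illustrative assumptions.
PACKET_BITS = 1500 * 8          # one full-size packet
LINK_BPS = 120e6                # 120 Mbit/s bottleneck
n = 5

serialization = PACKET_BITS / LINK_BPS      # 100 us per packet
worst_case_wait = (n - 1) * serialization   # last packet waits for 4 others

print(f"{worst_case_wait * 1e6:.0f} us")    # prints "400 us"
```

That is the same order as the ~600 us quoted, and, as noted, it only occurs if every flow's packet lands in the same instant, which randomness makes vanishingly unlikely.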

>> [GW] On the second point, you are correct, the existence of the NQB service will not noticeably impact the QoS of capacity-seeking QB traffic.
>> For example, if the NQB traffic did happen to be smooth (CBR) traffic at exactly 50% of link capacity, then with NQB, the QB traffic flows have 50% of link capacity to compete with each other for. Let’s say that competition results in (e.g.) a 0.1% packet drop/mark rate.  On the other hand, without the NQB service, and in the presence of the same CBR flow, the capacity-seeking traffic flows similarly compete for the remaining 50% of link capacity and settle into the same 0.1% packet drop/mark rate (yet the CBR flow is unfortunately also subject to high queuing delay, as well as 0.1% drop if the link doesn’t support ECN).  Well, being picky, if the link didn’t support ECN then I guess the capacity-seeking traffic would compete for 50.1% of link capacity without the NQB PHB, as opposed to 50% with the NQB PHB, but I would argue that isn’t a “noticeable impact”.
> 	[SM] And now entertain the idea of a flow-fair queueing system on the bottleneck, which would nicely isolate the effect of CBR traffic exceeding its permitted rate from the rest of the flows...
[BB] I believe Greg's point was that the CBR flow at 50% of the capacity 
might have been desired by the user.

>> [GW] On your third point, I agree in concept, and it is an aspect that we’ve discussed privately several times.  At the end of the day, we could not convince ourselves of an appropriate “punishment” for mismarked QB traffic, other than high packet loss in the case where QP is not implemented, or the potential for out-of-order delivery when it is.
> 	[SM] With the actual numbers from the DualQ RFC, it seems that a QB flow will be able to deposit up to 125 ms worth of packets before hitting the overload limit; at Bob's 120 Mbps that is roughly 1.8 MB of data. Quite a number of TCP flows will be done with their business before hitting that, reaping a potential benefit from mis-marking themselves without encountering negative effects from doing so.
[BB] Already addressed in response to your other email.
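For what it's worth, the backlog figure quoted above is straightforward to check. The 125 ms and 120 Mbit/s numbers come from the thread itself; the rest is arithmetic:

```python
# Sanity check on the quoted backlog figure: how many bytes a QB flow
# could deposit in 125 ms at a 120 Mbit/s bottleneck rate.
BURST_SECONDS = 0.125           # time to hit the claimed overload limit
LINK_BPS = 120e6                # bottleneck rate

burst_bytes = BURST_SECONDS * LINK_BPS / 8
print(f"{burst_bytes / 1e6:.2f} MB")    # prints "1.88 MB"
```

1.875 MB, consistent with the "roughly 1.8 MB" in the quoted paragraph.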


Bob Briscoe