Re: [tsvwg] Traffic protection as a hard requirement for NQB

Bob Briscoe <ietf@bobbriscoe.net> Tue, 10 September 2019 16:55 UTC

To: Sebastian Moeller <moeller0@gmx.de>
Cc: Greg White <g.white@CableLabs.com>, Steven Blake <slblake@petri-meat.com>, "tsvwg@ietf.org" <tsvwg@ietf.org>

Sebastian,

On 07/09/2019 20:14, Sebastian Moeller wrote:
>
>> On Sep 6, 2019, at 21:53, Greg White <g.white@CableLabs.com> wrote:
>>
>> [GW] comments below
>>   
>> From: tsvwg <tsvwg-bounces@ietf.org> on behalf of Steven Blake <slblake@petri-meat.com>
>>   
>> The objective as I understand it is to offer low queueing latency to non-admission-controlled/non-capacity-seeking traffic (NQB). First, this is only feasible if NQB traffic is naturally self-limiting to a small fraction of total traffic (e.g., < 5%). Second, the existence of this NQB service should not noticeably impact the QoS of capacity-seeking (QB) traffic. Third, implementations of this NQB service should try to ensure that capacity-seeking traffic trying to utilize the NQB service gets worse service with very high probability (so that it doesn't even bother to probe).
>>   
>> [GW] If the WRR weight is 0.5 for each queue, then it is feasible for the NQB PHB to provide low queueing latency to NQB traffic up to 50% of the total link capacity (and more than 50% when QB traffic is not filling up the other 50%).
> 	[SM] The NQB PHB is not going to provide low queueing latency to NQB traffic; the whole "trick" behind the low queueing latency is, IMHO, effectively to mandate pacing and ideally to synchronize all active senders, such that each of their packets arrives only shortly before its egress timeslot at the bottleneck becomes available.
The whole trick is the first part (pacing); the second part ("ideally 
synchronize") is not part of the trick, and it is not necessary.
> For Bob's examples, like VoIP, that trick will only work partially, as the VoIP senders will pick their sending slots based on acquisition start time and sampling rate and will not adapt to what would be ideal for the bottleneck's scheduler (VoIP being UDP, there is not even a viable path from the bottleneck AQM back to the sender to synchronize them). Using NTP as a proxy for assessing synchronization quality and jitter, it seems to me that <1 ms synchronization over the open Internet is going to be hard given the transient nature of most flows, but hard does not equal impossible.
Randomness is our friend. For a given NQB utilization, r, if there are 
two equal but uncoordinated NQB flows, the probability of a queue of 2 
packets is r/2. But if there are n equal but uncoordinated flows, the 
probability of a queue of n packets is vanishingly small ((r/n)^(n-1), 
I think).

That's why no-one attempts to get VoIP flows or flows of game-sync 
datagrams to shift their phase to reduce overall delay. Randomness is 
sufficient.
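
To make the effect of randomness concrete, here is a rough Monte Carlo 
sketch (illustrative only; the flow counts, 50% utilization and trial 
count are assumptions of mine, not numbers from any draft). It models n 
equal-rate CBR flows, each sending one packet per period at a uniformly 
random phase into a FIFO link, and estimates how often a tagged arrival 
finds all n-1 other packets still in the queue, alongside the 
(r/n)^(n-1) back-of-envelope figure:

import random

def tagged_sees_full_queue(n, serv):
    # One trial: n packets (one per flow) arrive at uniformly random
    # phases within a period of 1.0 and are served FIFO, one packet
    # per 'serv' seconds.  True if flow 0's packet finds all n-1
    # other packets still in the system when it arrives.
    arrivals = [random.random() for _ in range(n)]
    tagged = arrivals[0]
    dep = 0.0
    departures = {}
    for i in sorted(range(n), key=lambda j: arrivals[j]):
        dep = max(arrivals[i], dep) + serv      # FIFO departure times
        departures[i] = dep
    in_system = sum(1 for i in range(1, n)
                    if arrivals[i] <= tagged < departures[i])
    return in_system == n - 1

def estimate(n, r, trials=200_000):
    serv = r / n        # per-packet serialization time at utilization r
    hits = sum(tagged_sees_full_queue(n, serv) for _ in range(trials))
    return hits / trials

for n in (2, 3, 5):
    r = 0.5
    print(f"n={n}: simulated {estimate(n, r):.5f}, "
          f"(r/n)^(n-1) = {(r / n) ** (n - 1):.5f}")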

The calculations I showed with 5 flows gave roughly 600 µs of worst-case 
queuing delay, without any attempt at desynchronizing the flows. And the 
probability of even that much delay would be tiny.
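
For a rough sense of the order of magnitude (my arithmetic, assuming 
1500-byte packets and the 120 Mb/s bottleneck rate Sebastian quotes 
below; the original calculation may well have used different parameters):

# Back-of-envelope only, with assumed parameters: worst case is all
# 5 NQB packets arriving at once, so the last one waits for the
# other 4 to serialize.
link_bps = 120e6
pkt_bits = 1500 * 8
serialization = pkt_bits / link_bps       # 100 us per packet
worst_wait = 4 * serialization            # ~400 us for the 5th packet
print(f"{serialization * 1e6:.0f} us/pkt, "
      f"worst-case wait {worst_wait * 1e6:.0f} us")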

>> [GW] On the second point, you are correct, the existence of the NQB service will not noticeably impact the QoS of capacity-seeking QB traffic.
>> For example, if the NQB traffic did happen to be smooth traffic (CBR) at exactly 50% of link capacity, then with NQB, the QB traffic flows have 50% of link capacity to compete with each other for. Let’s say that competition results in (e.g.) 0.1% packet drop/mark.  On the other hand, without the NQB service, and in the presence of the same CBR flow, the capacity-seeking traffic flows similarly compete for the remaining 50% of link capacity, and settle into the same 0.1% packet drop/mark rate (yet the CBR flow is unfortunately also subject to high queuing delay as well as 0.1% drop if the link doesn’t support ECN).  Well, being picky, if the link didn’t support ECN then I guess the capacity-seeking traffic would compete for 50.1% of link capacity without the NQB PHB, as opposed to 50% with the NQB PHB, but I would argue that isn’t a “noticeable impact”.
> 	[SM] And now entertain the idea of a flow-fair queueing system on the bottleneck, which would nicely isolate the effect of CBR traffic exceeding its permitted rates from the rest of the flows...
[BB] I believe Greg's point was that the CBR flow at 50% of the capacity 
might have been desired by the user.

>
>>   
>> [GW] On your third point, I agree in concept, and it is an aspect that we’ve discussed privately several times.  At the end of the day, we could not convince ourselves of an appropriate “punishment” for mismarked QB traffic, other than high packet loss in the case where QP is not implemented, or the potential for out-of-order delivery when QP is implemented.
> 	[SM] With the actual numbers from the DualQ RFC it seems that a QB flow will be able to deposit at least 125 ms worth of packets before hitting the overload limit; at Bob's 120 Mbps that is roughly 1.8 MB of data. Quite a number of TCP flows will be done with their business before hitting that, reaping a potential benefit from mis-marking themselves without encountering negative effects from doing so.
[BB] Already addressed in response to your other email.
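
As a quick cross-check of the 1.8 MB figure quoted above (simply taking 
the 125 ms threshold and the 120 Mb/s rate at face value):

# Arithmetic check only: 125 ms of packets at 120 Mb/s, in megabytes.
rate_bps = 120e6
threshold_s = 0.125
buffered_bytes = rate_bps * threshold_s / 8   # 1.875e6 bytes
print(f"{buffered_bytes / 1e6:.2f} MB")       # roughly 1.8 MB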


Bob

-- 
________________________________________________________________
Bob Briscoe                               http://bobbriscoe.net/