Re: [tsvwg] Traffic protection as a hard requirement for NQB

Steven Blake <> Fri, 06 September 2019 18:36 UTC

From: Steven Blake <>
To: Bob Briscoe <>
Cc: "" <>
Date: Fri, 06 Sep 2019 14:36:01 -0400
List-Id: Transport Area Working Group <>

On Thu, 2019-09-05 at 19:23 +0100, Bob Briscoe wrote:
> To judge whether there's any tragedy of the commons (incentive
> alignment), I'll put up a straw-man NQB configuration that complies
> with the following requirement in the draft:
>    a useful property of nodes that support separate queues for NQB
> and QB
>    flows would be that for NQB flows, the NQB queue provides better
>    performance (considering latency, loss and throughput) than the QB
>    queue; and for QB flows, the QB queue provides better performance
>    (considering latency, loss and throughput) than the NQB queue.
> Background: NQB has become possible largely because Internet access
> link rates have typically become fast enough that the serialization
> delay of a packet can be sub-millisecond, and therefore a queue of a
> few packets introduces delay that is small relative to other non-
> optional sources of delay like propagation. In these cases we no
> longer need priority scheduling for low delay.
> Config:
>   Scheduler: WRR with weight 0.5 for NQB on a 120Mb/s link. That gives
>   at least 60Mb/s for NQB flows. Scheduler quantum: 1500B.
>   Buffering: The NQB buffer is fairly shallow (30 packets or 3ms at
>   120Mb/s). The QB buffer is deeper (say 200ms) with an AQM target
>   delay of say 10ms.
> Traffic: Let's introduce some example NQB traffic:
>   - 2 paced flows of VoIP datagrams, each avg 50B payload plus 58B of
>     Eth/IP/UDP/RTP headers, at 33pkt/s:
>       bit-rate: 29kb/s /flow
>       serialization delay: 7.2us /pkt
>   - 2 streams of 1000B game sync datagrams at 30pkt/s:
>       bit-rate: 240kb/s /flow
>       serialization delay: 67us /pkt
>   - plus occasional DNSSec datagrams, worst case 1500B:
>       serialization delay: 100us /pkt
> Perhaps 540kb/s in all, which is about 0.9% of the min NQB capacity.
> Worst-case NQB queuing delay calculation for the above traffic
> model: 
> Each NQB flow paces out its own packets, but one might happen to
> arrive while a packet from any of the other NQB flows is already
> queued. Worst case n-1 = 4 other NQB packets queued, where n is the
> number of application flows. And if there's traffic in the QB queue,
> each NQB packet will have to wait for one quantum (100us) while the
> other queue is served. Worst-case NQB delay is then:
> (67us * 2 + 7.2us + 100us) + (100us * 4) = 641us, 
> It's v unlikely that this worst-case would arise, but it gives an
> indication of where the tail of the delay distribution will lie.
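For what it's worth, the arithmetic above checks out; a quick back-of-the-envelope sketch (all figures are taken from the straw-man config above, the script just redoes the sums):

```python
# Sanity check of the worst-case NQB delay in the straw-man config above.
LINK_RATE = 120e6  # b/s

def ser_delay(nbytes):
    """Serialization delay in seconds for a packet of nbytes bytes."""
    return nbytes * 8 / LINK_RATE

quantum = ser_delay(1500)      # 100 us: one 1500B scheduler quantum
voip    = ser_delay(50 + 58)   # 7.2 us: 50B payload + 58B Eth/IP/UDP/RTP
game    = ser_delay(1000)      # ~67 us
dnssec  = ser_delay(1500)      # 100 us

# Worst case: an arriving NQB packet finds packets from all n-1 = 4 other
# flows queued (1 VoIP, 2 game, 1 DNSSec), and each NQB service opportunity
# alternates with one QB quantum.
others = 2 * game + voip + dnssec
worst  = others + 4 * quantum
print(f"worst-case NQB delay: {worst * 1e6:.0f} us")  # ~641 us
```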

The objective as I understand it is to offer low queueing latency to non-admission-controlled, non-capacity-seeking traffic (NQB). First, this is only feasible if NQB traffic is naturally self-limiting to a small fraction of total traffic (e.g., < 5%). Second, the existence of this NQB service should not noticeably impact the QoS of capacity-seeking (QB) traffic. Third, implementations of this NQB service should try to ensure that capacity-seeking traffic trying to use the NQB service gets worse service with very high probability (so that it doesn't even bother to probe).

Given this, using a WFQ-ish scheduler with a high NQB weight to achieve low latency is the wrong hammer for this nail, IMHO. A small queue by itself is insufficient to deter capacity-seeking traffic from trying to grab that large and isolated share of capacity. If I were implementing this (and only this), I would consider putting NQB traffic in a bounded strict-priority queue, feeding it priority credits at, say, 2-4% of link capacity. I would also put a policer in front of that queue, set at a rate of 4-6% of link capacity, with a large enough token bucket to let small NQB bursts through. That is a very simple implementation, available in lots of existing hardware, that achieves the stated objectives.

DoS-ing NQB traffic in this implementation is "easier" than in your proposed scheduling approach, but a targeted NQB DoS attack will have less impact on capacity-seeking traffic, and DoS-ing residential access links using only best-effort service is already a thing, so I'm not sure it really matters.


// Steve