Re: [tsvwg] Traffic protection as a hard requirement for NQB

Jonathan Morton <> Thu, 05 September 2019 19:52 UTC

From: Jonathan Morton <>
Date: Thu, 5 Sep 2019 22:52:41 +0300
Cc: "Black, David" <>, Sebastian Moeller <>, "" <>
To: Bob Briscoe <>

> On 5 Sep, 2019, at 9:23 pm, Bob Briscoe <> wrote:
> Config: 
> 	• Scheduler: 
> 		• WRR with weight 0.5 for NQB on a 120Mb/s link. That gives at least 60Mb/s for NQB flows. 
> 		• Scheduler quantum: 1500B. 
> 	• Buffering:
> 		• The NQB buffer is fairly shallow (30 packets or 3ms at 120Mb/s).
> 		• The QB buffer is deeper (say 200ms) with an AQM target delay of say 10ms.
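For concreteness, the scheduler half of that config can be sketched as a two-queue deficit round robin in Python.  This is a toy model of the WRR behaviour described above (equal quantum per queue gives the 0.5 weight), not any real implementation, and all names are illustrative:

```python
from collections import deque

QUANTUM = 1500  # bytes of credit per queue per round; equal credit = weight 0.5 each

class WrrScheduler:
    """Toy two-queue weighted scheduler: NQB and QB share the link 50/50 under load."""

    def __init__(self):
        self.queues = {"NQB": deque(), "QB": deque()}  # packets represented by size in bytes
        self.deficit = {"NQB": 0, "QB": 0}

    def enqueue(self, cls, size):
        self.queues[cls].append(size)

    def dequeue(self):
        """Yield (class, size) in transmission order until both queues drain."""
        while any(self.queues.values()):
            for cls in ("NQB", "QB"):
                q = self.queues[cls]
                if not q:
                    continue
                self.deficit[cls] += QUANTUM
                while q and q[0] <= self.deficit[cls]:
                    size = q.popleft()
                    self.deficit[cls] -= size
                    yield cls, size
            # an empty queue forfeits its credit, so it can't hoard bandwidth
            for cls in ("NQB", "QB"):
                if not self.queues[cls]:
                    self.deficit[cls] = 0
```

With both queues backlogged and 1500B packets, service strictly alternates between NQB and QB, which is where the "at least 60Mb/s for NQB" on a 120Mb/s link comes from.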

This seems like a reasonable implementation, on the face of it.

> Now, why do you think "the resulting latency is (at least initially) lower" for a QB flow that marks itself NQB? Why do you think incentives are misaligned (i.e. a tragedy of the commons)?

Okay, let's extend your traffic scenario a bit by assuming there's a significant amount of QB traffic, perhaps from a number of 50GB game updates in progress to various PCs and consoles in the household.  This could be as many as a couple of dozen bulk, saturating flows at a time, with the load being more or less continuous for an hour or so.  This traffic originates from gaming-related companies, so they know better than to risk interfering with actual gaming traffic carrying the NQB marking.

The above PHB template does a good job of isolating the NQB traffic from the BE traffic in this case, in which everyone behaves nicely.

But now let's introduce an adversary; let's call them Netflix's Unscrupulous Rival (NUR).  They are not in the gaming (interactive entertainment) business, but in Video On Demand (passive entertainment).  Their USP is that they get the complete video file onto the subscriber's PC in the minimum possible time; this incidentally also removes the load from their servers as early as possible.  In short, they have chugged the entire cask of Flow Completion Time koolaid, and their flows are multiple gigabytes each.

In service to this goal, NUR have selected BBRv2 as their CC algorithm because, lacking the traditional TCP sawtooth, it achieves higher throughput than most, and without needing to open multiple connections (which means reduced server load and client software complexity).  And, because they're unscrupulous and literally don't care about competing traffic - how important can the user's other traffic be if they're busy watching our video? - they've increased the packet loss threshold to 10% from the default of 1%; they don't mind retransmitting a few packets if it gets the job done faster.  Since we may assume that the AQM in this PHB implements RFC 3168 ECN, this tweak doesn't actually have much effect on this particular case, but it does elsewhere, on the many networks still using dumb FIFOs.

What NUR now notices is that, at some times and with some particular subscribers, their throughput is maybe a tenth of what they expect.  This is unacceptable, so they investigate.  And then they switch on NQB marking to see what happens.  After all, BBRv2 is advertised as not building a queue, so it's allowed, right?

This gives them almost uncontended access to the 60Mbps reserved for NQB traffic, instead of having to share the 120Mbps total pipe with a couple of dozen other bulk flows.  NUR is ecstatic and rolls it out across their entire system.
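The arithmetic behind that incentive, under the scenario's assumptions above (roughly two dozen competing bulk QB flows on the 120Mb/s link, 60Mb/s reserved for NQB), works out as follows:

```python
# Back-of-envelope check of the mis-marking incentive.  The flow counts and
# rates are the scenario's assumptions, not measurements.
link_mbps = 120.0        # total pipe
nqb_share_mbps = 60.0    # WRR weight 0.5 reserved for NQB
competing_flows = 24     # the "couple of dozen" bulk game-update flows

# NUR's honest share: one flow among 25 on the shared 120Mb/s pipe.
fair_share = link_mbps / (competing_flows + 1)
print(f"honest BE share:  {fair_share:.1f} Mb/s")
print(f"mis-marked NQB:   {nqb_share_mbps:.0f} Mb/s (near-uncontended)")
# roughly a 12x speed-up - consistent with "a tenth of what they expect"
```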

And there's your incentive to mis-mark as NQB.

Now, what effect does this have on the actual NQB traffic?  Well, BBRv2 spends most of its time pacing at just below the detected path capacity, but periodically probes for additional bandwidth by pacing at a higher rate for an RTT or so.  Usually this will exceed the capacity allocated to NQB and start queuing, which imposes some delay both on itself and on other NQB traffic sharing the same queue.  This will cap out at 6ms (30 packets at 60Mbps) - ten times the worst-case delay you calculated for NQB traffic alone - at which point packet loss will begin to occur.  Because the NUR flow is very long and not application-limited, tail loss is not a factor for most of its lifetime, and they have configured BBRv2 to be exceptionally tolerant of loss before initiating congestive backoff.
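A quick check of that 6ms figure, using the buffer and scheduler numbers from the config quoted above:

```python
# Worst-case NQB queue delay: a full 30-packet NQB buffer drained at the
# 60Mb/s the scheduler guarantees NQB when the QB side is also saturated.
packets, pkt_bytes = 30, 1500
drain_rate_bps = 60e6

delay_ms = packets * pkt_bytes * 8 * 1000 / drain_rate_bps
print(delay_ms)  # 6.0 ms; the same buffer drains in 3 ms at the full 120Mb/s
```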

In short, NQB traffic may periodically experience an additional 6ms delay and 10% packet loss due to this mis-marked NUR flow, depending on some other factors.

As for the "tragedy of the commons", NUR then proudly publishes an investor report and a white paper explaining how they quintupled their throughput with this one weird trick.  Other unscrupulous Internet companies take note of this and try it themselves, usually with less competence and without measuring the actual extent to which they can expect improvements in practice.  After all, there's no law against it, and it worked for NUR, didn't it?  They're a big famous Internet company, so they must be right!

Consequently the NQB queue becomes increasingly full of QB traffic.  The latency advantage of the NQB marking is thus eroded, with 6ms delays and significant packet loss becoming the rule rather than the exception, not all that much better than the 10ms target delays on the other side of the scheduler.

Do you now see?

 - Jonathan Morton