Re: [Plus] Blog post on quic and tou

Jana Iyengar <> Thu, 08 December 2016 17:39 UTC

From: Jana Iyengar <>
Date: Thu, 8 Dec 2016 09:39:13 -0800
To: Brian Trammell <>
Cc:, Mirja Kühlewind <>
List-Id: "Discussion of a Path Layer UDP Substrate \(PLUS\) protocol for in-band management of in-network state for UDP-encapsulated transport protocols." <>

Hi Brian,

I'm largely in agreement; a couple of thoughts inline.

On Thu, Dec 8, 2016 at 1:35 AM, Brian Trammell <> wrote:

> hi Jana,
> > On 07 Dec 2016, at 22:35, Jana Iyengar <> wrote:
> >
> > Thanks for forwarding the article. I'll offer some thoughts (and some
> > corrections.)
> >
> > There's surely a solid argument to be made about network monitoring, as
> > this blog post makes. Operators' needs are real, and we need to ensure that
> > they are able to reasonably do the things that they need to do. At least
> > for QUIC, exposing a small additional bit of information addresses ~80% of
> > the use cases mentioned in the document: a "largest acked" ack number on
> > all packets (or at least packets that contain acks). I won't design this
> > mechanism on this list,
> (...noting that this list exists *precisely* for designing this mechanism
> ;) but yes, details should happen on the quic@ list...)
> > but I'll note that it's a conversation that's happening in several
> > corners in the QUIC wg. It needs to be aired and discussed, and I expect it
> > to happen relatively soon.
> For those not familiar with the details of IETF-quic (which AFAIK from
> others' implementation reports diverges somewhat from the version of QUIC
> deployed by Google right now, and will diverge more during the WG's work):
> the packet number is already exposed. Together with highest-ack, this
> allows one-observation-point split-RTT measurement with an unknown
> responder delay term, equivalent to TCP; two-observation-point approaches
> for loss measurement; and one-observation-point approaches for loss
> estimation to work with more information about the dynamics of the
> particular version of QUIC running, also similar to TCP.
> I personally think we can do a good deal better than this with epsilon
> more complexity and overhead, without either constraining QUIC's transport
> dynamics or requiring measurement devices to know about the details of
> those dynamics. Need to do a bit more work before I can say how small
> epsilon is, though.
> Missing is more detailed information about TCP dynamics mentioned in the
> post. Many of these are TCP (and CC algorithm) specific, so it doesn't make
> much sense to expose the same information, though each of the requirements
> implicit in the list is worth evaluating separately for its
> ossification/security/utility tradeoff.
> One requirement from that list that seems quite useful, though I don't
> know how to solve in the general case or in QUIC specifically: "determine
> [if] the software on the client or the server is the bottleneck". This is a
> very common triage task in network operations: does this problem indicate a
> misconfiguration of my network, or (more cynically) can I demonstrate that
> it's not my fault and therefore not my problem? Requiring access to one or
> both endpoint machines to answer that... seems like a question for a future
> Internet architecture research project.

This can be done by measuring at ingress and egress. A simplistic design,
at a high level:
(i) at the ingress, use packet number / ack number to measure queue buildup
and loss downstream of the ingress point.
(ii) at the egress, use packet number / ack number to measure queue buildup
and loss downstream of the egress point.
(iii) subtract (ii) from (i) to find queue buildup and loss within the
network segment between the two points.

That at least gets you as far as identifying whether the problem is in the
network or outside it.
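As a rough illustration, here is a minimal sketch of that subtraction in Python. It is hypothetical throughout (the `Obs` record, the `downstream_rtt` helper, and the simplifying assumption that a "largest acked" value covers every lower packet number are mine, not anything specified for QUIC), but it shows how two passive observation points could split delay into an in-network component and a downstream one:

```python
# Hypothetical sketch, not a QUIC implementation: estimate downstream RTT at
# an observation point from exposed packet numbers and a "largest acked"
# field, then subtract egress from ingress to locate in-network delay.
from dataclasses import dataclass

@dataclass
class Obs:
    t: float   # capture timestamp at the observation point (seconds)
    kind: str  # "data" (carries a packet number) or "ack" (carries largest_acked)
    num: int   # packet number, or largest-acked packet number

def downstream_rtt(trace):
    """Mean time from seeing data packet N to seeing an ack covering N.

    Simplification: assumes an ack with largest_acked = K covers every
    pending packet number <= K (i.e., ignores reordering and loss).
    """
    pending = {}   # packet number -> time first seen
    samples = []
    for obs in trace:
        if obs.kind == "data":
            pending.setdefault(obs.num, obs.t)
        else:
            for n in [n for n in pending if n <= obs.num]:
                samples.append(obs.t - pending.pop(n))
    return sum(samples) / len(samples) if samples else None

# (i) downstream RTT measured at the ingress, (ii) at the egress,
# (iii) the difference is delay accrued between the two points.
ingress = [Obs(0.00, "data", 1), Obs(0.30, "ack", 1)]
egress  = [Obs(0.10, "data", 1), Obs(0.15, "ack", 1)]
in_net = downstream_rtt(ingress) - downstream_rtt(egress)
```

Loss could in principle be split the same way, e.g. by counting packet numbers that are never covered by a later largest-acked within some timeout, though that needs more care than this sketch takes.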

> > The article though looks at real needs and current tools that operators
> > have, and over-generalizes to saying that the "entire header" should be
> > visible. My argument remains that only what is absolutely required should
> > be exposed, and that every bit exposed should be debated.
> I would go further (again, this is the philosophical underpinning of PLUS,
> to the extent that it has one): the design of the header that is exposed
> unencrypted to the network (which constitutes its "wire image") should be
> treated as an entirely separate endeavour from the design of the transport
> protocol machinery.
> > This is not a security argument, it's an ossification one. The whole
> > point of ossification is that there are third parties that are unresponsive
> > to changes in allegedly e2e protocols. Middleboxes are reactive. If they
> > see traffic shifting a particular way, they'll go build something in
> > response -- I've seen this happen several times. But they are not
> > proactive. This creates a serious "deployment impossibility cycle" where
> > deploying a protocol change widely requires it to work through a huge range
> > of middleboxes, but even high-end middleboxes will not change behavior in
> > response until the protocol change is widely deployed.
> My (possibly starry-eyed optimistic) hope is that a deliberately designed
> wire image will create a path of least resistance for middlebox designs to
> (reactively) follow. A well-designed wire image should be so obvious that
> the in-network reaction will be the one desired by the designers even for
> middleboxes built by people who didn't read the spec.

The problem that remains is that once middleboxes react to certain bits,
deploying changes to those bits requires herculean effort, as we see with
TFO. But this is exactly the tradeoff -- every bit that is exposed may
help the operator, but costs agility.