RE: Core drafts -02 out

Lucas Pardue <Lucas.Pardue@bbc.co.uk> Tue, 21 March 2017 22:14 UTC

From: Lucas Pardue <Lucas.Pardue@bbc.co.uk>
To: Charles 'Buck' Krasic <ckrasic@google.com>
CC: IETF QUIC WG <quic@ietf.org>
Subject: RE: Core drafts -02 out
Date: Tue, 21 Mar 2017 22:14:49 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/sCQ_rcxjNDN_7-5eJ_RPVWgSjkc>

Hi Buck,

Thanks for your proposal, I'll have a proper read on the plane over to Chicago!

The HTML version link was broken for me, but this one works - https://krasic.github.io/draft-krasic-quic-hpack/draft-krasic-quic-hpack-00.html.

Regards
Lucas
________________________________
From: QUIC [quic-bounces@ietf.org] on behalf of Charles 'Buck' Krasic [ckrasic@google.com]
Sent: 21 March 2017 22:04
To: Mike Bishop
Cc: Stefan Eissing; IETF QUIC WG; Martin Thomson
Subject: Re: Core drafts -02 out

Hi Folks.

I've put together a first attempt at my QPACK proposal:

draft-krasic-quic-hpack-00.html<http://draft-krasic-quic-hpack-00.html>
draft-krasic-quic-hpack-00.txt<https://krasic.github.io/draft-krasic-quic-hpack/draft-krasic-quic-hpack-00.txt>

Apologies in advance, this is my first attempt at an IETF draft.


On Wed, Mar 15, 2017 at 10:37 AM, Mike Bishop <Michael.Bishop@microsoft.com<mailto:Michael.Bishop@microsoft.com>> wrote:
Thanks for the feedback!

Yes, you've run straight into the big quandary with HPACK.  I don't think anyone expects that we will ship this way; I have a proposal in https://tools.ietf.org/html/draft-bishop-quic-http-and-qpack for an HPACK-replacement that would solve many of these problems.  Buck Krasic has a proposal which he has sketched in e-mail but not submitted as a draft.  The main point of the sequence number was to get us off the "everything on stream 3" model and let us sort out the problems of HPACK later.  #228 tracks fixing HPACK, in whatever form that takes.

We could solve it in the same way that we did PRIORITY, by adding an "affected stream number" field and moving the HEADERS/PUSH_PROMISE frames to Stream 3 as well.  However, that is just as blocking as the current approach, so not really an improvement.  Worse, the reason we can tolerate large header frames is because they occur on their own streams and don't block arrival of data from other streams.  If all headers occur on a single stream, that's not true -- you *are* blocking muxing of other streams again.

I like the comparison to concurrent edits.  We've discussed having rolling deltas in various mechanisms; the problem is that it requires reaching into the transport for ACK state to figure out when you can discard old deltas for good on the receiver side.  Buck's proposal is similar, essentially requiring the receiver to echo back the point up to which it has received all frames, and the sender shouldn't reference state that the receiver hasn't fully assimilated yet.  It feels like adding application-level ACKs to me, which I'd like to avoid.
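To make the echo-back idea concrete, here is a rough sketch of the scheme described above: the receiver periodically reports the highest header-block sequence number up to which it has received everything, and the encoder only references dynamic-table entries created at or before that point. All names here (HeaderEncoder, acked_seq, etc.) are invented for illustration and do not come from any draft.

```python
# Illustrative-only sketch of the "receiver echoes its fully-received
# point" scheme; not the actual proposal text.

class HeaderEncoder:
    def __init__(self):
        self.next_seq = 0    # sequence number of the next header block
        self.acked_seq = -1  # highest seq the peer has fully received
        self.entry_seq = {}  # dynamic-table entry -> seq when added

    def on_ack(self, seq):
        """Peer echoes: 'I have received all header blocks through seq'."""
        self.acked_seq = max(self.acked_seq, seq)

    def add_entry(self, name, value):
        """Record when a dynamic-table entry was created."""
        self.entry_seq[(name, value)] = self.next_seq

    def can_reference(self, name, value):
        """Only reference entries the peer has definitely assimilated."""
        seq = self.entry_seq.get((name, value))
        return seq is not None and seq <= self.acked_seq

    def encode_block(self, headers):
        """Mark each header as table-referencable or literal-only."""
        self.next_seq += 1
        return [(h, self.can_reference(*h)) for h in headers]
```

The "application-level ACK" flavor Mike objects to is visible in `on_ack`: it duplicates information the transport's ACK state already has.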

HPACK is also one of the reasons for the two-stream-per-request approach.  There are two reasons for this -- one is that HPACK frames (in the current design) can't be lost when a stream is RST, so the draft forbids resetting control streams.  Since we still need to be able to RST requests, we add the semantic that killing the data stream implies the same thing about the control stream.  It's messy, and hopefully we can remove that once HPACK is fixed.  The second, as you guessed, is not dealing with DATA frames.  #245 notes that, if we fix HPACK, we could go back to a single stream per request; proponents of both "keep framing out of the way" and "fewer streams is easier to manage" have weighed in there.  Please add your voice.

As to buffering, it's actually easy enough -- just don't read from data streams until you've seen the headers.  Sure, the sender will fill up their flow control window on that stream -- and then they'll stop and send you QUIC BLOCKED frames, until you start reading the body and generating WINDOW_UPDATE frames.  One of the biggest arguments for two streams in my mind is making sure that a request blocked in this way doesn't impede the flow of control frames on that stream -- for example, should we successfully get certificate auth frames added, a certificate request.  I've had two customers this week telling me it's a bug in our server code that their TLS reneg gets stuck in TCP buffers behind a giant request body, because the server won't read an unauthenticated body and the client won't hold off sending the body until authentication succeeds.  I'd like to see blocks like that become less possible in QUIC, at least.
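The "just don't read from data streams until you've seen the headers" strategy can be sketched roughly as follows; body bytes stay unread (so the sender's flow-control window fills and it blocks) until the matching HEADERS have been decoded. All names are illustrative, not from any implementation.

```python
# Rough, assumption-laden sketch of deferred body reading: data is only
# drained once routing information from the headers is available.

class RequestState:
    def __init__(self):
        self.headers = None
        self.buffered_body = bytearray()  # bytes left "in the transport"
        self.delivered = bytearray()      # bytes handed to the application

    def on_headers(self, headers):
        self.headers = headers
        self._drain()

    def on_body_bytes(self, data):
        # Do not deliver yet; the sender eventually blocks on flow control.
        self.buffered_body += data
        self._drain()

    def _drain(self):
        # Consuming body bytes (and thus opening the flow-control window)
        # happens only once the headers have arrived.
        if self.headers is not None and self.buffered_body:
            self.delivered += self.buffered_body
            self.buffered_body.clear()
```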

Yes, making SETTINGS immutable reduces the flexibility.  The opinion at the interim was that we could live without it, as we know of exactly one implementation of one extension that does mid-stream setting changes, and I own it.  😊  If there's a compelling use case for changing settings mid-stream we weren't aware of, that's new information that could justify the complexity.  (But note that with 0-RTT connection setup, it would be nearly as cheap to open a new connection with the new setting and transition your traffic to it.)

Negotiation still works, since each side would simply advertise that they support an extension or not, and consider the extension to be active once you've seen the other side's SETTINGS frame.  See https://tools.ietf.org/html/draft-ietf-quic-http-01#section-5.2.5.3 for the complexity this allowed us to remove.  An extension could add to the list easily -- if you support extension X, you should also remember the value for setting X_VAL.  Clients that don't use the extension don't care.  An extension that needed some more complex negotiation would have to use an extension-specific frame, it's true, but we haven't so far seen an extension do that.
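A minimal sketch of that negotiation model, assuming settings are simple identifier sets (the representation is invented for illustration): an extension is active only once each side has seen the other's SETTINGS and both advertised support.

```python
# Illustrative sketch of presence-based extension negotiation via the
# one-shot SETTINGS frame; setting identifiers here are made up.

def extension_active(local_settings, peer_settings, ext_id):
    """An extension is usable only if both sides advertised it."""
    if peer_settings is None:  # peer's SETTINGS frame not yet received
        return False
    return ext_id in local_settings and ext_id in peer_settings
```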

-----Original Message-----
From: QUIC [mailto:quic-bounces@ietf.org<mailto:quic-bounces@ietf.org>] On Behalf Of Stefan Eissing
Sent: Wednesday, March 15, 2017 6:22 AM
To: Martin Thomson <martin.thomson@gmail.com<mailto:martin.thomson@gmail.com>>; IETF QUIC WG <quic@ietf.org<mailto:quic@ietf.org>>
Subject: Re: Core drafts -02 out


> Am 14.03.2017 um 00:57 schrieb Martin Thomson <martin.thomson@gmail.com<mailto:martin.thomson@gmail.com>>:
>
> The editors have submitted -02 versions of the base set of QUIC drafts.
[...]
> https://tools.ietf.org/html/draft-ietf-quic-http-02

Thanks, Martin! I assume that was announced to get feedback from the lurkers here. ;-)

I'll give this a try. All mistakes and misunderstandings are mine alone.

-----------------------------------------------------------------------------------------

Very well-written spec. Easy to understand for someone who has read RFC 7540 a bit.

I have some comments on the proposed HTTP mapping approach. Where the WG has already discussed and exhausted alternatives, please excuse my ignorance and ignore my comments. I have not had time to follow all the ongoing discussions on this topic. Feel free to cherry-pick what seems helpful.


> 4.  Stream Mapping and Usage

+1 to directly using QUIC stream IDs instead of virtual h2 stream identifiers

However, using two QUIC streams for a single request/response does not sit well with me. I assume that stems from the wish to get rid of DATA frames, which sounds nice, but is it worth it? Doubling the number of streams per client: how much overhead does that introduce (I am speaking of a server holding >10k QUIC "connections")?

Also, the server needs to buffer data on QUIC streams 7, 11, 15, etc., because HEADERS might (or might not) arrive some time in the future on streams 5, 9, 13, etc. There is no way to route this data anywhere, because the meta information is still missing.

And there is still head-of-line blocking (HoLB) on:

> 4.2.1.  Header Compression
> ...
> DISCUSS:  Keep HPACK with HOLB?  Redesign HPACK to be order-
>       invariant?  How much do we need to retain compatibility with
>       HTTP/2's HPACK?

Using a counter in HEADERS is a crutch:
- it is a highly specific solution to a common problem in http/quic: synchronicity in connection-level state changes. SETTINGS (see below) has the same problem, as does PRIORITY in HEADERS. It seems that, performance-wise, all HEADERS could just as well be sent on stream 3.

Now, solving head-of-line blocking for HEADERS would be a fine achievement.

> 5.  HTTP Framing Layer
> Frames are used only on the connection (stream 3) and message
>    (streams 5, 9, etc.) control streams.

And streams 4, 8, 12, etc. I assume.
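To make the numbering concrete, here is one possible mapping consistent with the numbers cited in this thread (control on 5, 9, 13, … with data on 7, 11, 15, … for client requests; 4, 8, 12, … for server-initiated streams). These formulas are my inference from the numbers quoted above, not normative draft text.

```python
# Purely illustrative helpers; the stream-pairing formulas are inferred
# from the stream numbers mentioned in this discussion.

def client_request_streams(n):
    """Streams for the n-th client request (n >= 1): (control, data)."""
    return 4 * n + 1, 4 * n + 3

def server_push_streams(n):
    """Streams for the n-th server-initiated exchange: (control, data)."""
    return 4 * n, 4 * n + 2
```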

> 5.2.3.  SETTINGS
> ...
> SETTINGS frames always apply to a connection, never a single stream.
>  A SETTINGS frame MUST be sent as the first frame of the connection
> control stream (see Section 4) by each peer, and MUST NOT be sent
> subsequently or on any other stream.  If an endpoint receives an
> SETTINGS frame on a different stream, the endpoint MUST respond with
> a connection error of type HTTP_SETTINGS_ON_WRONG_STREAM.  If an
> endpoint receives a second SETTINGS frame, the endpoint MUST respond
> with a connection error of type HTTP_MULTIPLE_SETTINGS.

What about HPACK state? The connection state change problem is visible again here.

This is a severe restriction on extension mechanisms that want to affect a connection. If http/quic does not solve this problem, how are extensions expected to do it? They either announce themselves in the first SETTINGS or remain silent forever, it seems. How would an extension handshake work then? With their own stream 3 handshake frames?

> 5.2.3.3.  Usage in 0-RTT

What about HPACK state? Does it need to be kept or is it reset?

> HTTP_PUSH_ALREADY_IN_CACHE (0x03):  The server has attempted to push
>      content which the client has cached.
> ...
> HTTP_REQUEST_CANCELLED (0x04):  The client no longer needs the
>     requested data.

Nitpick: these seem redundant. The first could be replaced by the second; the stream number will suffice.

----------------------------------------------------------

Taking two steps back:

I think the main difficulty comes from the lack of a "hq connection state" concept and how changes to that state can be managed. Evidence:
- The 0-RTT mentions certain SETTINGS that need to be remembered by a client. How would an extension add to this? Will every extension have to come up with its own solution?
- The HEADERS sequence number is a highly specific fix for the missing state-change mechanism
- The SETTINGS-ONCE restriction simply avoids the problem by killing an h2 mechanism

If hq defines stream 3 as the place where connection state changes happen *AND* synchronizes OPEN/CLOSE/RST of other streams on it, client and server can have a shared concept of the connection state.


I am no HPACK expert. The basic problem looks like concurrent editing against a repository.
Both client and server start with connection state zero (CS-0) and the predefined HPACK dictionary (HP-0). After the SETTINGS exchange, the client is in CS-1 and the server is in CS-2 for its side. Let's call the CS-1 HPACK state HP-1.

Client sends new HEADERS on stream 5 and keeps the HPACK delta around (HP-1.5). The HEADERS frame carries the connection state number it is based on (CS-1). Client sends new HEADERS on stream 9, also based on CS-1. Client keeps that delta around (HP-1.9).

Client then decides to announce a new connection state (CS-3) by sending how it applied the deltas HP-1.5 and HP-1.9 to come up with HP-3. The description allows the server to update its HP-1 to HP-3 as well.

Response HEADERS are also based on a server connection state, explicitly, so the client knows which HP-X to use when decoding them. At some time, the server sends an announcement of CS-4 with HPACK data on stream 3 back to the client.

For a 0-RTT, client and server could exchange the connection state id in the initial SETTINGS, to make sure they have at least the same name remembered as the other side. Client: "I was in CS-19 and you were in CS-8." Server: "Yep."

By implicitly adding all SETTINGS changes to connection states, the problem is also solved for extensions.

If one side receives HEADERS with an unknown connection state:
- if the state id is greater than any known one: set a stream timeout and wait for changes on stream 3 to arrive
- if the state id is less than max(known conn state): STREAM_RST_UNKNOWN_CONN_STATE

New SETTINGS value: MAX_CONN_STATE, the maximum number of connection states the client/server is willing to keep, exchanged initially. Announcing a new connection state allows the other side to drop the lowest one if MAX_CONN_STATE states are already in use.

For optimal HPACK compression, every HEADERS frame would also announce a new connection state. For less potential head-of-line blocking, connection states would not change during request bursts.
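A very rough model of this delta/announcement scheme, with all class and field names invented for illustration: header blocks name the connection state (CS) they were encoded against, per-stream deltas are kept pending, and a stream-3 announcement folds them into a new named state, evicting the oldest state once MAX_CONN_STATE is exceeded.

```python
# Illustrative-only model of the connection-state idea sketched above;
# a real design would carry actual HPACK table entries, not dicts.

class ConnStateTable:
    def __init__(self, max_states=4):  # cf. the MAX_CONN_STATE setting
        self.max_states = max_states
        self.states = {0: {}}          # CS id -> header-table snapshot
        self.pending = {}              # stream id -> (base CS, delta)

    def record_delta(self, stream_id, base_cs, delta):
        """HEADERS on a stream carries a delta based on a known CS."""
        if base_cs not in self.states:
            # cf. the timeout / STREAM_RST_UNKNOWN_CONN_STATE rules
            raise KeyError("unknown connection state")
        self.pending[stream_id] = (base_cs, delta)

    def announce(self, new_cs, stream_ids):
        """Stream-3 announcement: fold the named deltas into a new CS."""
        table = {}
        for sid in stream_ids:
            base_cs, delta = self.pending.pop(sid)
            table.update(self.states[base_cs])
            table.update(delta)
        self.states[new_cs] = table
        if len(self.states) > self.max_states:
            del self.states[min(self.states)]  # drop the lowest state
```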

Something like that.

-Stefan





--
Charles 'Buck' Krasic | Software Engineer | ckrasic@google.com<mailto:ckrasic@google.com> | +1 (408) 412-1141