Re: Deadlocking in the transport
Roberto Peon <fenix@fb.com> Wed, 10 January 2018 20:27 UTC
From: Roberto Peon <fenix@fb.com>
To: Martin Thomson <martin.thomson@gmail.com>, QUIC WG <quic@ietf.org>
Subject: Re: Deadlocking in the transport
Date: Wed, 10 Jan 2018 20:26:57 +0000
Message-ID: <E55BA3F8-39ED-404D-9165-C5E68362206E@fb.com>
References: <CABkgnnUSMYRvYNUwzuJk4TQ28qb-sEHmgXhxpjKOBON43_rWCg@mail.gmail.com>
In-Reply-To: <CABkgnnUSMYRvYNUwzuJk4TQ28qb-sEHmgXhxpjKOBON43_rWCg@mail.gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/59BFUkUzvt3gwB4RgCa5jnooHFg>
List-Id: Main mailing list of the IETF QUIC working group <quic.ietf.org>
Another option: allow a flow-control "override", which lets a receiver state that it really wants data on a particular stream, and that the global flow control should be ignored for it.

How you'd do it: A receiver can send a flow-control override for a stream. This includes the stream ID to which the global window temporarily does not apply, the receiver's current stream flow-control offset, and the offset the receiver wishes to be able to receive. A receiver must continue to (re)send the override (i.e., retransmit it) until it is ack'd, and cannot send other flow control for that stream until the override is ack'd. Thus:

  global-flow-control-override: <stream-id>, <current-flow-control-offset>, <override-flow-control-offset>

Upon receipt of the override, the sender credits the global flow control with the amount of data it has sent beyond the receiver's currently-known flow-control offset. This synchronizes the global state between the receiver and the sender. The sender can then send the data on that stream (without touching any other flow-control state).

Why: This allows a receiver to resolve priority inversions which would otherwise lead to deadlock, even when the data dependency leading to the inversion was not known to the transport. This applies beyond just header compression. Since the global flow control exists to protect the app from resource exhaustion, this poses no additional risk to the application.

Simply increasing the global flow control provides weaker guarantees: any stream might consume the new credit, which doesn't resolve the dependency inversion. Rejiggering priorities can help to resolve this, but would require the sender to send priorities to the client, which is problematic w.r.t. races and just a web of ick. A custom frame type is also a weaker guarantee, as it requires the knowledge that the dependency exists to be present at the time of sending, which is often impossible.
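The sender-side accounting above can be sketched as follows. This is purely illustrative: the frame, field names, and `Sender` class are hypothetical (no such frame exists in the QUIC drafts), and real accounting would also handle retransmission and acks of the override itself.

```python
# Hypothetical sketch of the proposed global-flow-control-override frame,
# sender side. All names are illustrative, not from any QUIC draft.
from dataclasses import dataclass


@dataclass
class FlowControlOverride:
    stream_id: int
    current_offset: int    # receiver's current stream flow-control offset
    override_offset: int   # offset the receiver wishes to be able to receive


class Sender:
    def __init__(self, global_credit: int):
        self.global_credit = global_credit
        self.sent_offsets = {}  # stream_id -> highest offset sent so far

    def on_override(self, ov: FlowControlOverride) -> None:
        # Credit the global window with the data already sent beyond the
        # receiver's currently-known offset; this synchronizes global
        # flow-control state between receiver and sender. Bytes on this
        # stream up to override_offset are then exempt from the global
        # window (stream-level limits are untouched).
        sent = self.sent_offsets.get(ov.stream_id, 0)
        already_in_flight = max(0, sent - ov.current_offset)
        self.global_credit += already_in_flight
```

The key property is that the receiver, not the sender, decides which stream gets the exemption, so the dependency does not need to be visible to the sender's transport.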
-=R

On 1/9/18, 10:17 PM, "QUIC on behalf of Martin Thomson" <quic-bounces@ietf.org on behalf of martin.thomson@gmail.com> wrote:

Building a complex application protocol on top of QUIC continues to produce surprises. Today in the header compression design team meeting we discussed a deadlocking issue that I think warrants sharing with the larger group. This has implications for how people build a QUIC transport layer. It might need changes to the API that is exposed by that layer. This isn't really that new, but I don't think we've properly addressed the problem.

## The Basic Problem

If a protocol creates a dependency between streams, there is a potential for flow control to deadlock. Say that I send X on stream 3 and Y on stream 7. Processing Y requires that X is processed first. X cannot be sent due to flow control, but Y is sent. This is always possible, even if X is appropriately prioritized.

The receiver then leaves Y in its receive buffer until X is received. The receiver cannot give flow-control credit for consuming Y because it can't consume Y until X is sent. But the sender needs flow-control credit to send X. We are deadlocked. It doesn't matter whether stream or connection flow control is causing the problem; either produces the same result.

(To give some background on this, we were considering a preface to header blocks that identified the header-table state that was necessary to process the header block. This would allow for concurrent population of the header table and sending of messages that depend on the header-table state that is under construction. A receiver would read the identifier and then leave the remainder of the header block in the receive buffer until the header table was ready.)

## Options

It seems like there are a few decent options for managing this. These are what occurred to me (there are almost certainly more options):

1. Don't do that.
We might concede in this case that seeking the incremental improvement to compression efficiency isn't worth the risk. That is, we might make a general statement that this sort of inter-stream blocking is a bad idea.

2. Force receivers to consume data or reset streams in the case of unfulfilled dependencies.

The former seems like it might be too much like magical thinking, in the sense that it requires that receivers conjure up more memory, but if the receiver were required to read Y and release the flow-control credit, then all would be fine. Alternatively, we could require that the receiver reset a stream if it couldn't read and handle data. It seems like a bad arrangement either way: you either have to allocate more memory than you would like, or suffer the time and opportunity cost of having to do Y over.

3. Create an exception for flow control.

This is what Google QUIC does for its headers stream. Roberto observed that we could alternatively create a frame type that is excluded from flow control. If this were used for data that has dependencies, then it would be impossible to deadlock. It would be similarly difficult to account for memory allocation, though if the data could be processed on receipt, then this *might* work. We'd have to do something to address out-of-order delivery, though. It's possible that the stream abstraction is not appropriate in this case.

4. Block the problem at the source.

It was suggested that in cases where there is a potential dependency, then it can't be a problem if the transport refuses to accept data that it doesn't have flow-control credit for. Writes to the transport would consume flow-control credit immediately. That way, applications would only be able to write X if there was a chance that it would be delivered. Applications that have ordering requirements can ensure that Y is written only after X is accepted by the transport, and thereby avoid the deadlock.
Writes might block rather than fail, if the API wasn't into the whole non-blocking I/O thing. The transport might still have to buffer X for other reasons, like congestion control, but it can guarantee that flow control isn't going to block delivery.

## My Preference

Right now, I'm inclined toward option 4. Option 1 seems a little too much of a constraint. Protocols create this sort of inter-dependency naturally. There's a certain purity in having the flow control exert back pressure all the way to the next layer up.

Not being able to build a transport with unconstrained writes is potentially creating undesirable externalities on transport users: now they have to worry about flow control as well. Personally, I'm inclined to say that this is something that application protocols and their users should be exposed to. We've seen with the JS streams API that it's valuable to have back pressure available at the application layer, and also how it is possible to do that relatively elegantly.

I'm almost certain that I haven't thought about all the potential alternatives. I wonder if there isn't some experience with this problem in SCTP that might lend some insights.
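Option 4 can be sketched as a write API that consumes both stream and connection flow-control credit up front, refusing the write rather than buffering it. The class and method names here are illustrative assumptions, not any real QUIC implementation's API.

```python
# Minimal sketch of option 4: writes consume flow-control credit
# immediately, so a write that succeeds can never later be blocked by
# flow control. Names are hypothetical, for illustration only.

class Connection:
    """Holds the shared connection-level flow-control window."""
    def __init__(self, credit: int):
        self.credit = credit


class CreditGatedStream:
    def __init__(self, stream_credit: int, conn: Connection):
        self.stream_credit = stream_credit
        self.conn = conn
        self.buffer = bytearray()  # still buffered for congestion control etc.

    def try_write(self, data: bytes) -> bool:
        n = len(data)
        # Reject the write if either window is exhausted, instead of
        # accepting data the transport might never be allowed to send.
        if n > self.stream_credit or n > self.conn.credit:
            return False
        self.stream_credit -= n
        self.conn.credit -= n
        self.buffer += data
        return True
```

An application with an ordering dependency would write Y only after `try_write(X)` returns True, which is exactly the back pressure the message argues should reach the application layer.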