Re: Multi-path QUIC Extension Experiments

Spencer Dawkins at IETF <> Tue, 21 September 2021 16:38 UTC

To: Yunfei Ma <>
Cc: Robin MARX <>, Roberto Peon <>, Yunfei Ma <>, =?UTF-8?Q?Mirja_K=C3=BChlewind?= <>, "matt.joras" <>, =?UTF-8?B?5p2O5oyv5a6H?= <>, Christian Huitema <>, "Charles 'Buck' Krasic" <>, "lucaspardue.24.7" <>, Lars Eggert <>, quic <>, Qing An <>, Yanmei Liu <>
List-Id: Main mailing list of the IETF QUIC working group <>

Hi, Yunfei,

On Fri, Sep 17, 2021 at 1:39 AM Yunfei Ma <> wrote:

> Hi Robin,
> Thanks for the question, and sorry for the late reply. With regard to the
> experiment, the goal was to achieve bandwidth, latency, and reliability for
> a video stream at the same time. Intuitively, adding more paths should give
> you an immediate improvement. But, as you pointed out, HoL blocking might
> undo the benefit. What we found was a way to overcome this: using QoE
> feedback in conjunction with scheduling techniques. What the results tell
> us is that if you have a stream that is both time-critical and
> bandwidth-intensive, then with the user-space nature of QUIC, you now have
> a way forward using multipath.
> But please keep in mind that you can choose whatever scheduling
> algorithm you like with the draft. In the experiment, we were trying to
> push the limit, but that is just one of many possibilities. If you want to
> send streams that do not require very high reliability or high bandwidth,
> you can definitely use the fixed-path-per-stream strategy. Recently, we
> have also done testing with fixed-path-per-stream using
> draft-liu-multipath-quic for certain scenarios. Here are some observations:
> if you have a large number of streams, having an assigned path for each
> can actually give you better combined bandwidth utilization, but not every
> stream benefits, and some could suffer because of a bad path assignment.
> If you really care about long-tail performance, then I would recommend
> sending a stream on multiple paths.
> How to use multipath, and which scheduling mode, depends on what you are
> trying to optimize. I hope the draft can provide a starting point, and
> please feel free to go beyond it.

This is an important point that we shouldn't lose sight of. I think we're
tripping over an assumption about how smart "QUIC with multiple paths"
should be about scheduling packets on each path, versus how much
applications should be involved in that scheduling.

Applications using MP-TCP, especially for use cases like bandwidth
bonding, could reasonably let MP-TCP "do the right thing" - if you've got
paths, use them - and MP-TCP can manage multiple paths with different path
characteristics fairly well if the goal is to use all the available
capacity.
Applications using QUIC with multiple paths have a lot more possibilities.
For instance, using reliable streams and unreliable datagrams
simultaneously across multiple paths.
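
One illustration of that extra freedom (a hypothetical policy, not anything
specified anywhere): reliable stream data and loss-tolerant datagrams can
want different paths at the same moment.

```python
# Illustrative sketch: route loss-tolerant datagrams to the currently
# lowest-latency path (stale real-time data is useless), and reliable
# stream data to the path with the most congestion-window headroom.
# The tuple layout is an assumption made for this example.
from typing import List, Tuple

# Each path: (path_id, srtt_ms, spare_cwnd_bytes)
Path = Tuple[int, float, int]

def schedule_frame(frame_kind: str, paths: List[Path]) -> int:
    if frame_kind == "datagram":
        # Unreliable data: latency matters most.
        return min(paths, key=lambda p: p[1])[0]
    # Stream data: throughput matters; fill the path with the most headroom.
    return max(paths, key=lambda p: p[2])[0]
```

With TCP-style byte streams there is no per-frame decision to make; with
QUIC, every frame type can, in principle, express a different preference.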

ISTM that we should be fairly clear about what QUIC implementations will
be responsible for, and what decisions applications will be responsible
for.
I have been maintaining a collection of various goals for path selection
(most recently, in ). It's not a short list.

If we're also including user preferences, that would mean knowing the
difference between cellular connections that are metered, cellular
connections that are unmetered but being throttled, and cellular
connections that are unmetered and unthrottled, and then cross-matching
them with wifi connections that are likely to work better than cellular
vs. wifi connections that are only intermittently performing well. That's
not a small amount of complexity.
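
To make the scale of that cross-matching concrete, here is a deliberately
naive sketch (hypothetical fields and arbitrary weights, purely for
illustration) of ranking candidate paths by combining a user-cost
preference with a recent-performance signal:

```python
# Hypothetical sketch: rank candidate paths by combining user preference
# (metered vs. unmetered, throttled vs. not) with recent performance.
# The weights are arbitrary illustrations, not recommendations.
from typing import Dict, List

def rank_paths(paths: List[Dict]) -> List[Dict]:
    # Each path: {"name", "metered": bool, "throttled": bool,
    #             "recent_goodput_mbps": float}
    def score(p: Dict) -> float:
        s = p["recent_goodput_mbps"]
        if p["metered"]:
            s -= 100.0  # user cost dominates raw performance
        if p["throttled"]:
            s -= 10.0   # unmetered but throttled: usable, deprioritized
        return s
    return sorted(paths, key=score, reverse=True)
```

Even this toy version has to invent a common currency for cost, policy, and
performance, which is exactly the complexity in question.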

Maybe that complexity should be managed by applications until we have a
better understanding of what applications are actually doing?



> Cheers,
> Yunfei
>> Thanks for your extended explanations on Multipath HoL-blocking and
>> especially:
>> > I think the stream dependencies you mentioned here is a great point. In
>> our implementation, we introduced a stream-priority based reinjection which
>> tries to address such dependency (There is a figure in the material that
>> Yanmei sent). But we haven't tried when each stream is limited to a single
>> path. In our case, streams are distributed on multiple paths. I would
>> definitely want to hear more about the application you are dealing with,
>> and maybe for wired transport, such a design is needed.
>> This is exactly what I was trying to explore in my previous mail. You're
>> basically intentionally causing (or perhaps risking?) HOL blocking because
>> you split a single stream over multiple paths.
>> As noted by Christian with the 'equal cost multipath', this can have
>> bandwidth usage benefits, but only if paths are usable/similar. If not, HOL
>> blocking might undo all the benefits you get from this setup (and using a
>> single path per stream would be better).
>> So my question was: where is the inflection point where you might decide
>> to switch modes? At which parameters is one better than the other?
>> I'd hoped you would have experimented with the fixed-path-per-stream
>> setup to get some insight into this.
>> In my mind, the idea of a purely transport-level multipath
>> scheduler (i.e., one that does not take into account application-layer
>> streams / data dependencies / etc.)
>> has historically made some sense for TCP / for completely separated
>> stacks, as the transport didn't have that type of information available.
>> It is however utterly strange to me that this approach would continue for
>> QUIC (at least in endpoint multipath, not things like in-network
>> aggregators that have been discussed),
>> where we have clear splits between streams and (hopefully) already some
>> type of prioritization information for each stream.
>> For QUIC, I'd expect one-path-per-stream to be the default, with
>> multiple-paths-per-stream to be an edge case if you have a single,
>> high-traffic stream (which I do assume is your situation with a video
>> stream).
>> With best regards,
>> Robin
>> On Tue, 20 Jul 2021 at 09:15, Lars Eggert <> wrote:
>>> On 2021-7-20, at 1:19, Roberto Peon <>
>>> wrote:
>>> >
>>> > If we have to send data along a path in order to discover properties
>>> about that path, then sending less data on the path means discovering less
>>> about that path.
>>> >
>>> > The ideal would be to send *enough* data on any one path to maintain
>>> an understanding of its characteristics (including variance), and no more
>>> than that, and then to schedule the rest of the data to whichever path(s)
>>> are best at the moment.
>>> ^^^ This.
>>> Because the Internet has no explicit network-to-endpoint signaling, an
>>> endpoint must build its understanding of the properties of a path by
>>> exercising it, and specifically exercising it to a degree that causes
>>> queues to form (to obtain "under load" RTTs; see bufferbloat) and
>>> congestion loss to happen (to obtain an understanding of available path
>>> capacity). Some people have called this "putting pressure on a path".
>>> There has been a long-standing assumption that if you exercised a path
>>> in the (recent) past you can probably assume that the properties haven't
>>> changed much if you want to start exercising it again. This is why
>>> heuristics like caching path properties (RTTs, etc.) are often of benefit -
>>> often, but not always, and maybe never in some scenarios (e.g.,
>>> overcommitted CGNs).
>>> There has been some work on this in the past for MPTCP. For example, on
>>> mobile devices - which most often have multiple possible paths to a
>>> destination via WiFi and cellular - exercising multiple paths comes at a
>>> distinct increase in energy usage. So you need a heuristic to determine if
>>> the potential benefit of going multipath is worth the energy cost of
>>> probing multiple paths before you do so.
>>> Thanks,
>>> Lars
>> --
>> dr. Robin Marx
>> Postdoc researcher - Web protocols
>> Expertise centre for Digital Media
>> Cellphone +32(0)497 72 86 94
>> Universiteit Hasselt - Campus Diepenbeek
>> Agoralaan Gebouw D - B-3590 Diepenbeek
>> Kantoor EDM-2.05