Re: Multi-path QUIC Extension Experiments

Yunfei Ma <> Sat, 18 September 2021 01:40 UTC

From: Yunfei Ma <>
Date: Fri, 17 Sep 2021 18:39:42 -0700
Subject: Re: Multi-path QUIC Extension Experiments
To: Ian Swett <>
Cc: Robin MARX <>, Roberto Peon <>, Yunfei Ma <>, Mirja Kühlewind <>, "matt.joras" <>, 李振宇 <>, Christian Huitema <>, Charles 'Buck' Krasic <>, "lucaspardue.24.7" <>, Lars Eggert <>, quic <>, Qing An <>, Yanmei Liu <>
List-Id: Main mailing list of the IETF QUIC working group <>

Hi Ian,

Thanks for the comments. Please see my reply below.

> In particular, if I have a free Wifi connection that may be slow or only
> somewhat connected, and a cell connection that's very solid, I'd like to
> send a new HTTP/3 request on the cell link or maybe even both to minimize
> tail latency, but maybe only smaller responses can come back via cell?  I'm
> not sure how to encode this mix of metered and unmetered connections in
> PATH_STATUS today?  The best I can think of is something along the lines of
> Christian's approach of 'Always use Wifi and switch to cell if it fails',
> but that seems like a suboptimal user experience.

The problem you point out is a very important one, especially for
end-to-end multi-path use cases. On the end-host side, Wi-Fi is usually
free while cellular costs money, although in certain cases cellular can be
free depending on the user's data plan. With the priority field in
PATH_STATUS, here is what one can do: send data only on paths that are
available and share the highest priority level. For example, when the user
has an unlimited cellular data plan, or actually finds the cellular cost
acceptable, we can give the Wi-Fi and cellular paths the same priority and
let the scheduler choose only from the set of highest-priority paths. In
another example, if the user prefers Wi-Fi, he/she can lower the priority
of the cellular path, so data is sent only on Wi-Fi while it is available
and switches to cellular when Wi-Fi becomes unavailable, as you mentioned
above.
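To make the idea concrete, here is a minimal sketch of that selection rule. All names here are hypothetical; the draft only defines the priority signal, not a scheduler, so this is an illustration of one possible policy, not the draft's mechanism:

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    priority: int   # higher value = preferred (illustrative convention)
    available: bool

def eligible_paths(paths):
    """Return the available paths at the highest priority level present.

    The scheduler then stripes data only across this set.
    """
    up = [p for p in paths if p.available]
    if not up:
        return []
    top = max(p.priority for p in up)
    return [p for p in up if p.priority == top]

# User accepts cellular cost: both paths share the top priority,
# so the scheduler may use both simultaneously.
both = [Path("wifi", 1, True), Path("cell", 1, True)]
assert {p.name for p in eligible_paths(both)} == {"wifi", "cell"}

# User prefers Wi-Fi: cellular is demoted and acts only as a fallback.
prefer_wifi = [Path("wifi", 1, True), Path("cell", 0, True)]
assert [p.name for p in eligible_paths(prefer_wifi)] == ["wifi"]

# Wi-Fi goes down: traffic falls back to the lower-priority cellular path.
prefer_wifi[0].available = False
assert [p.name for p in eligible_paths(prefer_wifi)] == ["cell"]
```

The "switch to cellular on Wi-Fi failure" behavior falls out naturally: demoted paths become eligible only once every higher-priority path is unavailable.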

In connection migration, you have the mechanism of migrating when a path
has degraded and then switching back to the "default" network, so the cost
issue is well addressed there. In multi-path, I think it is desirable to
have a switch that the end user can turn on/off to control the path
priority. In our previous experiment we used zero-rated cellular data, so
that the results would not be biased by cost. We also want to learn better
approaches, as this is directly related to the product design.

> As an editorial nit, I'd suggest you remove the word 'Path' from the
> beginning of the field names in the PATH_STATUS frame.  Given they're all
> in the PATH_STATUS frame, I don't think it's necessary.

Thanks for pointing this out. I think you are right, and we should adjust
the wording there.
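For illustration, here is what a PATH_STATUS body with the "Path" prefix dropped from the field names might look like on the wire. The exact field set, ordering, and status values are assumptions for this sketch, not the draft's normative layout; only the varint encoding is standard (RFC 9000, Section 16):

```python
def encode_varint(v: int) -> bytes:
    """QUIC variable-length integer encoding (RFC 9000, Section 16)."""
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | (1 << 14)).to_bytes(2, "big")
    if v < 2**30:
        return (v | (2 << 30)).to_bytes(4, "big")
    return (v | (3 << 62)).to_bytes(8, "big")

def encode_path_status(identifier: int, sequence_number: int,
                       status: int, priority: int) -> bytes:
    """Hypothetical PATH_STATUS frame body with the shortened field names:
    Identifier, Status Sequence Number, Status, Priority."""
    fields = (identifier, sequence_number, status, priority)
    return b"".join(encode_varint(f) for f in fields)

# Small values occupy a single varint byte each.
assert encode_path_status(1, 2, 0, 1) == b"\x01\x02\x00\x01"
```

Since every field already lives inside the PATH_STATUS frame, the frame name itself carries the "Path" context, which is the point of the editorial suggestion.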


On Fri, Sep 17, 2021 at 2:39 AM Yunfei Ma <> wrote:
>> Hi Robin,
>> Thanks for the question and sorry for the late reply. With regard to the
>> experiment, the goal was to achieve high bandwidth, low latency, and
>> reliability for a video stream at the same time. Intuitively, adding more
>> paths should give you an immediate improvement. But, as you pointed out,
>> HoL blocking might undo the benefit. What we found was a way to overcome
>> this: using QoE feedback in conjunction with scheduling techniques. What
>> the results tell us is that if you have a stream that is time critical
>> and bandwidth intensive, then with the user-space nature of QUIC, you now
>> have a way forward using multipath.
>> But please keep in mind that you can choose whatever scheduling
>> algorithm you like with the draft. In the experiment we were trying to
>> push the limit, but that is indeed only one of many possibilities. If you
>> want to send streams that do not require very high reliability or high
>> bandwidth, you can definitely use the fixed-path-per-stream strategy.
>> Recently, we have also done testing with fixed-path-per-stream using
>> draft-liu-multipath-quic for certain scenarios. Here are some
>> observations: if you have a large number of streams, assigning a path to
>> each can actually give you higher combined bandwidth utilization, but not
>> every stream benefits, and some could suffer because of a bad path
>> assignment. If you really care about long-tail performance, then I would
>> recommend sending a stream on multiple paths.
>> How to use multi-path, and which scheduling mode to use, depends on what
>> you are trying to optimize. I hope the draft can provide a starting
>> point, and please feel free to go beyond it.
>> Cheers,
>> Yunfei
>>> Thanks for your extended explanations on Multipath HoL-blocking and
>>> especially:
>>> > I think the stream dependencies you mentioned here is a great point.
>>> In our implementation, we introduced a stream-priority based reinjection
>>> which tries to address such dependency (There is a figure in the material
>>> that Yanmei sent). But we haven't tried when each stream is limited to a
>>> single path. In our case, streams are distributed on multiple paths. I
>>> would definitely want to hear more about the application you are dealing
>>> with, and maybe for wired transport, such a design is needed.
>>> This is exactly what I was trying to explore in my previous mail. You're
>>> basically intentionally causing (or perhaps risking?) HOL blocking because
>>> you split a single stream over multiple paths.
>>> As noted by Christian with the 'equal cost multipath', this can have
>>> bandwidth usage benefits, but only if paths are usable/similar. If not, HOL
>>> blocking might undo all the benefits you get from this setup (and using a
>>> single path per stream would be better).
>>> So my question was: where is the inflection point where you might decide
>>> to switch modes? At which parameters is one better than the other?
>>> I'd hoped you would have experimented with the fixed-path-per-stream
>>> setup to get some insight into this.
>>> In my mind, the idea of doing a purely transport-level multipath
>>> scheduler (i.e., without taking into account application layer streams /
>>> data dependencies / etc.)
>>> has historically made some sense for TCP / for completely separated
>>> stacks, as the transport didn't have that type of information available.
>>> It is however utterly strange to me that this approach would continue
>>> for QUIC (at least in endpoint multipath, not things like in-network
>>> aggregators that have been discussed),
>>> where we have clear splits between streams and (hopefully) already some
>>> type of prioritization information for each stream.
>>> For QUIC, I'd expect one-path-per-stream to be the default, with
>>> multiple-paths-per-stream to be an edge case if you have a single,
>>> high-traffic stream (which I do assume is your situation with a video
>>> stream).
>>> With best regards,
>>> Robin
>>> On Tue, 20 Jul 2021 at 09:15, Lars Eggert <> wrote:
>>>> On 2021-7-20, at 1:19, Roberto Peon <>
>>>> wrote:
>>>> >
>>>> > If we have to send data along a path in order to discover properties
>>>> about that path, then sending less data on the path means discovering less
>>>> about that path.
>>>> >
>>>> > The ideal would be to send *enough* data on any one path to maintain
>>>> an understanding of its characteristics (including variance), and no more
>>>> than that, and then to schedule the rest of the data to whichever path(s)
>>>> are best at the moment.
>>>> ^^^ This.
>>>> Because the Internet has no explicit network-to-endpoint signaling, an
>>>> endpoint must build its understanding of the properties of a path by
>>>> exercising it, and specifically exercising it to a degree that causes
>>>> queues to form (to obtain "under load" RTTs, see bufferbloat) and
>>>> congestion loss to happen (to obtain an understanding of available path
>>>> capacity.) Some people have called this "putting pressure on a path".
>>>> There has been a long-standing assumption that if you exercised a path
>>>> in the (recent) past you can probably assume that the properties haven't
>>>> changed much if you want to start exercising it again. This is why
>>>> heuristics like caching path properties (RTTs, etc.) are often of benefit -
>>>> often, but not always, and maybe never in some scenarios (e.g.,
>>>> overcommitted CGNs.)
>>>> There has been some work on this in the past for MPTCP. For example, on
>>>> mobile devices - which most often have multiple possible paths to a
>>>> destination via WiFi and cellular - exercising multiple paths comes at a
>>>> distinct increase in energy usage. So you need a heuristic to determine if
>>>> the potential benefit of going multipath is worth the energy cost of
>>>> probing multiple paths before you do so.
>>>> Thanks,
>>>> Lars
>>> --
>>> dr. Robin Marx
>>> Postdoc researcher - Web protocols
>>> Expertise centre for Digital Media
>>> *Cellphone *+32(0)497 72 86 94
>>> Universiteit Hasselt - Campus Diepenbeek
>>> Agoralaan Gebouw D - B-3590 Diepenbeek
>>> Kantoor EDM-2.05