Re: [tsvwg] [Ecn-sane] per-flow scheduling

Kyle Rose <krose@krose.org> Tue, 23 July 2019 15:13 UTC

To: Bob Briscoe <ietf@bobbriscoe.net>
Cc: Jonathan Morton <chromatix99@gmail.com>, "David P. Reed" <dpreed@deepplum.com>, "ecn-sane@lists.bufferbloat.net" <ecn-sane@lists.bufferbloat.net>, tsvwg IETF list <tsvwg@ietf.org>

On Mon, Jul 22, 2019 at 9:44 AM Bob Briscoe <ietf@bobbriscoe.net> wrote:

> Folks,
>
> As promised, I've pulled together and uploaded the main architectural
> arguments about per-flow scheduling that cause concern:
>
> Per-Flow Scheduling and the End-to-End Argument
> <http://bobbriscoe.net/projects/latency/per-flow_tr.pdf>
>
> It runs to 6 pages of reading. But I tried to make the time readers will
> have to spend worth it.
>

Before reading the other responses (and poisoning my own thinking), I
wanted to offer my own reaction. In the discussion of figure 1, you seem
to imply that there's some obvious choice of bin packing for the flows
involved, but that can't be right. What if the dark green flow has
deadlines? Why should it be the one that gets only leftover bandwidth?
I'll return to this point in a bit.

The tl;dr of the paper seems to be that the L4S approach leaves the
allocation of limited bandwidth up to the endpoints, while FQ arbitrarily
enforces equality when bandwidth is limited; but in reality the
bottleneck device has to make *some* choice when there's a shortage and
flows don't respond, and that choice is itself a policy.

In FQ, the chosen policy is to make sure every flow has the ability to
get low latency for itself, and, in the absence of some other kind of
trusted signaling, to allocate an equal share of the available bandwidth
to each flow. ISTM this is the best you can do in an adversarial
environment, because anything else can be gamed to get a more-than-equal
share (and, depending on how "flow" is defined, even this can be gamed by
opening more flows; but that problem is not unique to FQ).
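
To make the equal-share policy concrete, here's a rough sketch (mine, not
from the paper) of deficit round robin, the kind of mechanism behind FQ
schedulers like fq_codel; the quantum, flow key, and data structures here
are illustrative assumptions:

    from collections import deque

    QUANTUM = 1514  # bytes of service credit per round (assumed ~MTU)

    class Flow:
        def __init__(self):
            self.queue = deque()  # packet sizes in bytes, FIFO per flow
            self.deficit = 0      # unspent credit carried between rounds

    def flow_key(pkt):
        # Classic 5-tuple; note that equal shares can be gamed simply by
        # opening more flows, since each new 5-tuple earns its own quantum.
        return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"],
                pkt["proto"])

    def drr_round(active, flows):
        """One scan: each backlogged flow may send up to its deficit."""
        sent = []
        for key in list(active):
            f = flows[key]
            f.deficit += QUANTUM
            while f.queue and f.queue[0] <= f.deficit:
                size = f.queue.popleft()
                f.deficit -= size
                sent.append((key, size))
            if not f.queue:
                f.deficit = 0  # idle flows don't bank credit
                active.remove(key)
        return sent

Each backlogged flow earns one quantum of credit per round, so over time
every flow gets an equal share of the link no matter how aggressively it
sends; that's both the arbitration and the flow-count loophole described
above.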

In L4S, the policy is to assume the flows in one queue are well-behaved
and the flows in the other are not, and to use the ECT(1) codepoint as a
classifier to sort flows into one or the other. But the policy choice
doesn't end there: in an uncooperative or adversarial environment, you
can easily get into a situation in which the bottleneck has to apply
policy to several unresponsive flows in the supposedly well-behaved
queue. Note that this doesn't even have to involve bad actors
misclassifying on purpose: it could be two uncooperative 200 Mb/s VR
flows competing for 300 Mb/s of bandwidth. In that case, L4S falls back
to classic behavior, which with DualQ means every flow, not just the
uncooperative ones, suffers. As a user, I don't want my small, responsive
flows to suffer when uncooperative actors decide to exceed the bottleneck
bandwidth (BBW).
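
To see the shared fate concretely, here's a toy fluid model (my
construction; real DualQ uses a coupled AQM, which this ignores) of two
unresponsive 200 Mb/s flows plus one small responsive flow, all marked
ECT(1), sharing a 300 Mb/s link:

    LINK_MBPS = 300.0
    TICK = 0.01  # seconds per step

    # All three flows carry ECT(1), so all land in the same queue.
    flows_mbps = {"vr1": 200.0, "vr2": 200.0, "small": 5.0}

    backlog_mbits = 0.0
    for _ in range(200):  # 2 simulated seconds
        arrivals = sum(flows_mbps.values()) * TICK  # Mbits in
        departures = LINK_MBPS * TICK               # Mbits out
        backlog_mbits = max(0.0, backlog_mbits + arrivals - departures)

    delay_ms = backlog_mbits / LINK_MBPS * 1000.0
    print(f"backlog={backlog_mbits:.0f} Mbit, "
          f"shared delay={delay_ms:.0f} ms")

After two simulated seconds the shared queue holds about 210 Mbit,
roughly 700 ms of delay, and the small flow's packets wait behind all of
it: the fallback punishes the responsive flow along with the unresponsive
ones.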

Getting back to figure 1, how do you choose the right allocation? With
the proposed use of ECT(1) as classifier, you have exactly one bit
available to decide which queue, and therefore which policy, applies to a
flow. Should all the classic flows get assigned whatever is left after
the L4S flows are allocated bandwidth? That hardly seems fair to classic
flows. But let's say this policy is implemented. It then escapes me how
this is any different from the trust problems facing end-to-end DSCP/QoS:
why wouldn't everyone just classify their classic flows as L4S, forcing
everything to be treated as classic (once the unresponsive traffic
triggers the fallback) while getting access to a greater share of the
overall BBW? Then we're left both with a spent ECT(1) codepoint and a
need for FQ or some other queuing policy to arbitrate between flows,
without any bits left with which to implement the high-fidelity
congestion signal required to achieve low latency without getting
squeezed out.
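
The one-bit nature of the problem is easy to see in code. A minimal
sketch (names mine) of the DualQ classification step; everything rests on
a field the sender writes into its own packets:

    ECT1 = 0b01  # ECN field: Not-ECT=00, ECT(1)=01, ECT(0)=10, CE=11

    def classify(ecn_field):
        # The bottleneck has nothing else to go on: whoever sets ECT(1)
        # gets the favored queue, just as with end-to-end DSCP markings.
        return "l4s_queue" if ecn_field == ECT1 else "classic_queue"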

The bottom line is that I see no way to escape the necessity of something
FQ-like at bottlenecks outside of the sender's trust domain. If FQ can't be
done in backbone-grade hardware, then the only real answer is pipes in the
core big enough to force the bottleneck to live somewhere closer to the
edge, where FQ does scale.

Note that, in a perfect world, FQ wouldn't trigger at all because there
would always be enough bandwidth for everything users wanted to do, but
in the real world it seems like the best you can possibly do in the
absence of trusted information about how to prioritize traffic. IMO, it's
better to think of FQ as a last-ditch measure that tells the operator
they're gonna need a bigger pipe than as a steady-state bandwidth
allocator.

Kyle