[tsvwg] https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-03#page-10

Sebastian Moeller <moeller0@gmx.de> Thu, 28 March 2019 08:25 UTC

So the L4S architecture description gives the following rationale for why L4S opted for the dual-queue approach instead of mandating flow queuing (a rough sketch of the classification difference in points A and B follows the quoted text):

https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-03#page-10:

 Per-flow queuing:  Similarly per-flow queuing is not incompatible
      with the L4S approach.  However, one queue for every flow can be
      thought of as overkill compared to the minimum of two queues for
      all traffic needed for the L4S approach.  The overkill of per-flow
      queuing has side-effects:

      A.  fq makes high performance networking equipment costly
          (processing and memory) - in contrast dual queue code can be
          very simple;

      B.  fq requires packet inspection into the end-to-end transport
          layer, which doesn't sit well alongside encryption for privacy
          - in contrast the use of ECN as the classifier for L4S
          requires no deeper inspection than the IP layer;

      C.  fq isolates the queuing of each flow from the others but not
          from itself so, unlike L4S, it does not support applications
          that need both capacity-seeking behaviour and very low
          latency.

          It might seem that self-inflicted queuing delay should not
          count, because if the delay wasn't in the network it would
          just shift to the sender.  However, modern adaptive
          applications, e.g.  HTTP/2 [RFC7540] or the interactive media
          applications described in Section 6, can keep low latency
          objects at the front of their local send queue by shuffling
          priorities of other objects dependent on the progress of other
          transfers.  They cannot shuffle packets once they have
          released them into the network.

      D.  fq prevents any one flow from consuming more than 1/N of the
          capacity at any instant, where N is the number of flows.  This
          is fine if all flows are elastic, but it does not sit well
          with a variable bit rate real-time multimedia flow, which
          requires wriggle room to sometimes take more and other times
          less than a 1/N share.

          It might seem that an fq scheduler offers the benefit that it
          prevents individual flows from hogging all the bandwidth.
          However, L4S has been deliberately designed so that policing
          of individual flows can be added as a policy choice, rather
          than requiring one specific policy choice as the mechanism
          itself.  A scheduler (like fq) has to decide packet-by-packet
          which flow to schedule without knowing application intent.
          Whereas a separate policing function can be configured less
          strictly, so that senders can still control the instantaneous
          rate of each flow dependent on the needs of each application
          (e.g. variable rate video), giving more wriggle-room before a
          flow is deemed non-compliant.  Also policing of queuing and of
          flow-rates can be applied independently.
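
To make the cost difference in points A and B above concrete, here is a rough, hypothetical Python sketch of the state each approach needs at enqueue time. The Packet fields and class names are mine for illustration only, not taken from any implementation:

# Rough sketch only; the Packet fields and class names are hypothetical.
from dataclasses import dataclass
from collections import deque

@dataclass
class Packet:
    src: str
    dst: str
    proto: int
    sport: int
    dport: int
    ecn: int            # 2-bit IP ECN field; 0b01 = ECT(1), the L4S identifier

class DualQueue:
    """L4S-style dual queue: classification reads only the IP ECN field."""
    def __init__(self):
        self.l4s = deque()
        self.classic = deque()

    def enqueue(self, pkt: Packet):
        # No per-flow state and no transport-layer inspection: one branch
        # on the IP header decides the queue (points A and B above).
        (self.l4s if pkt.ecn == 0b01 else self.classic).append(pkt)

class FlowQueues:
    """fq-style scheduler: needs the transport 5-tuple and one queue per flow."""
    def __init__(self):
        self.flows = {}                      # per-flow memory (point A)

    def enqueue(self, pkt: Packet):
        # Classification must look past the IP header into the ports (point B).
        key = (pkt.src, pkt.dst, pkt.proto, pkt.sport, pkt.dport)
        self.flows.setdefault(key, deque()).append(pkt)

The fq variant would additionally need a scheduler over self.flows; the sketch only shows the extra per-flow state and the deeper header access, not a complete design.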



And then CableLabs, the organization closest to deploying L4S, in the Data-Over-Cable Service Interface Specifications, DOCSIS® 3.1, MAC and Upper Layer Protocols Interface Specification CM-SP-MULPIv3.1-I17-190121, Annex P "Queue Protection Algorithm (Normative)", says:

"This annex defines the Queue Protection algorithm that is required to be supported by the CM in the upstream (see Section 7.7.6.1). It is also the Queue Protection algorithm that CMTS Queue Protection algorithms are required to support (see Section 7.7.6.2). In either direction, this algorithm is intended to be applied solely to a Low Latency Service Flow. It detects queue- building Microflows and redirects some or all of their packets to the Classic Service Flow in order to protect the Latency Service Flow from excessive queuing. A Microflow is defined in Section P.3, but typically it is an end-to- end transport layer data flow."

To me this looks like the introduction of flow queuing through a backdoor: instead of doing it up front, this tries to measure whether flows behave well enough to keep enjoying the L4S special treatment, and demotes non-compliant flows to the TCP-friendly queue. I would be interested to learn how this mandatory requirement can be implemented without incurring at least a cost similar to flow queuing (I fear it will drag the identified issues A and B from above into the system running the L4S-compliant AQM). I would love to learn where my interpretation is wrong.
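
For concreteness, here is the kind of per-microflow bookkeeping I have in mind. This is a deliberately simplified, hypothetical sketch of a queue-protection-style classifier, NOT the normative Annex P pseudocode; the scoring is made up and only serves to show where the per-flow costs come in:

# Deliberately simplified, hypothetical sketch -- NOT the normative
# DOCSIS Annex P pseudocode. It only illustrates that queue protection,
# like fq, must (a) extract a transport-layer microflow key and
# (b) keep per-microflow state.
import time

class QueueProtectionSketch:
    def __init__(self, score_limit=2.0, decay_per_second=1.0):
        self.scores = {}                 # per-microflow "blame" state (issue A)
        self.last_seen = {}
        self.score_limit = score_limit
        self.decay_per_second = decay_per_second

    def classify(self, src, dst, proto, sport, dport, queue_delay_s):
        # Microflow identification needs the transport 5-tuple again (issue B),
        # and the dictionaries above grow with the number of flows (issue A).
        key = (src, dst, proto, sport, dport)
        now = time.monotonic()
        score = self.scores.get(key, 0.0)
        elapsed = now - self.last_seen.get(key, now)
        score = max(0.0, score - elapsed * self.decay_per_second)
        score += queue_delay_s           # queue-building flows accumulate blame
        self.scores[key] = score
        self.last_seen[key] = now
        # Misbehaving microflows get redirected to the Classic Service Flow.
        return "classic" if score > self.score_limit else "low-latency"

Whatever the exact scoring ends up being, some table keyed on the microflow has to be maintained and looked up per packet, which is exactly the kind of cost points A and B attribute to fq.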

Many Thanks in advance

Regards
	Sebastian Moeller