Re: [tsvwg] https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-03#page-10

G Fairhurst <gorry@erg.abdn.ac.uk> Thu, 28 March 2019 08:53 UTC

Return-Path: <gorry@erg.abdn.ac.uk>
X-Original-To: tsvwg@ietfa.amsl.com
Delivered-To: tsvwg@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id F3A821200FB for <tsvwg@ietfa.amsl.com>; Thu, 28 Mar 2019 01:53:00 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.899
X-Spam-Level:
X-Spam-Status: No, score=-1.899 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, URIBL_BLOCKED=0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ksyiULFO8QAE for <tsvwg@ietfa.amsl.com>; Thu, 28 Mar 2019 01:52:56 -0700 (PDT)
Received: from pegasus.erg.abdn.ac.uk (pegasus.erg.abdn.ac.uk [137.50.19.135]) by ietfa.amsl.com (Postfix) with ESMTP id 3C426120074 for <tsvwg@ietf.org>; Thu, 28 Mar 2019 01:52:55 -0700 (PDT)
Received: from dhcp-8118.meeting.ietf.org (unknown [IPv6:2001:67c:370:128:b8eb:9889:b68d:7680]) by pegasus.erg.abdn.ac.uk (Postfix) with ESMTPSA id BD7F41B00193; Thu, 28 Mar 2019 08:52:51 +0000 (GMT)
Message-ID: <5C9C8B64.4060101@erg.abdn.ac.uk>
Date: Thu, 28 Mar 2019 09:52:52 +0100
From: G Fairhurst <gorry@erg.abdn.ac.uk>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:12.0) Gecko/20120428 Thunderbird/12.0.1
MIME-Version: 1.0
To: Sebastian Moeller <moeller0@gmx.de>
CC: tsvwg IETF list <tsvwg@ietf.org>
References: <618DEC9B-5BA1-4D20-B68D-970EBD42CBCE@gmx.de>
In-Reply-To: <618DEC9B-5BA1-4D20-B68D-970EBD42CBCE@gmx.de>
Content-Type: text/plain; charset="UTF-8"; format="flowed"
Content-Transfer-Encoding: 8bit
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/guf_mQvyau8GeoUtzuZqg_CFjQc>
Subject: Re: [tsvwg] https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-03#page-10
X-BeenThere: tsvwg@ietf.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Transport Area Working Group <tsvwg.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/tsvwg>, <mailto:tsvwg-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/tsvwg/>
List-Post: <mailto:tsvwg@ietf.org>
List-Help: <mailto:tsvwg-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/tsvwg>, <mailto:tsvwg-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 28 Mar 2019 08:53:01 -0000

On 28/03/2019, 09:25, Sebastian Moeller wrote:
> So the L4S architecture description has the following rationale, why L4S opted for the dual-queue approach instead of mandatinf flow queueing:
>
> https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-03#page-10:
>
>   Per-flow queuing:  Similarly per-flow queuing is not incompatible
>        with the L4S approach.  However, one queue for every flow can be
>        thought of as overkill compared to the minimum of two queues for
>        all traffic needed for the L4S approach.  The overkill of per-flow
>        queuing has side-effects:
>
>       A.  fq makes high performance networking equipment costly
>            (processing and memory) - in contrast dual queue code can be
>            very simple;
>
>        B.  fq requires packet inspection into the end-to-end transport
>            layer, which doesn't sit well alongside encryption for privacy
>            - in contrast the use of ECN as the classifier for L4S
>            requires no deeper inspection than the IP layer;
>
>        C.  fq isolates the queuing of each flow from the others but not
>            from itself so, unlike L4S, it does not support applications
>            that need both capacity-seeking behaviour and very low
>            latency.
>
>            It might seem that self-inflicted queuing delay should not
>            count, because if the delay wasn't in the network it would
>            just shift to the sender.  However, modern adaptive
>            applications, e.g.  HTTP/2 [RFC7540] or the interactive media
>            applications described in Section 6, can keep low latency
>            objects at the front of their local send queue by shuffling
>            priorities of other objects dependent on the progress of other
>            transfers.  They cannot shuffle packets once they have
>            released them into the network.
>
>        D.  fq prevents any one flow from consuming more than 1/N of the
>            capacity at any instant, where N is the number of flows.  This
>            is fine if all flows are elastic, but it does not sit well
>            with a variable bit rate real-time multimedia flow, which
>            requires wriggle room to sometimes take more and other times
>            less than a 1/N share.
>
>            It might seem that an fq scheduler offers the benefit that it
>            prevents individual flows from hogging all the bandwidth.
>            However, L4S has been deliberately designed so that policing
>            of individual flows can be added as a policy choice, rather
>            than requiring one specific policy choice as the mechanism
>            itself.  A scheduler (like fq) has to decide packet-by-packet
>            which flow to schedule without knowing application intent.
>            Whereas a separate policing function can be configured less
>            strictly, so that senders can still control the instantaneous
>            rate of each flow dependent on the needs of each application
>            (e.g. variable rate video), giving more wriggle-room before a
>            flow is deemed non-compliant.  Also policing of queuing and of
>            flow-rates can be applied independently.
>
>
>
> And then cablelabs, the organization closest to deploying L4S, in Data-Over-Cable Service Interface Specifications DOCSIS® 3.1, MAC and Upper Layer Protocols Interface SpecificationCM-SP-MULPIv3.1-I17-190121, Annex P Queue Protection Algorithm (Normative) says:
>
> "This annex defines the Queue Protection algorithm that is required to be supported by the CM in the upstream (see Section 7.7.6.1). It is also the Queue Protection algorithm that CMTS Queue Protection algorithms are required to support (see Section 7.7.6.2). In either direction, this algorithm is intended to be applied solely to a Low Latency Service Flow. It detects queue- building Microflows and redirects some or all of their packets to the Classic Service Flow in order to protect the Latency Service Flow from excessive queuing. A Microflow is defined in Section P.3, but typically it is an end-to- end transport layer data flow."
>
> To me this looks like the introduction of flow-queuing through a backdoor, instead of doing it upfront this tries to measure whether flows behave well enough to keep enjoying the L4S special treatment and demotes non-compliant flows to the TCP-friendly queue. I would be intersted to learn how this mandatory requirement can be implemented without at least having similar cost like flow queueung (at least I fear it will drag the identifies issues A. and B. from above into the system running the L4S-compliant AQM). I would love to learn where my interpretation is wrong.
>
> Many Thanks in advance
>
> Regards
> 	Sebastian Moeller
I'm not sure I agree this is necessarily a change of direction - I 
personally see queue protection as about a control plane optimisation to 
identify and mitigate traffic from non-conformant flows. Per-flow 
accounting is not the same as per-flow queueing. (From a different 
perspective: a circuit-breaker that monitors an envelope is not the same 
as a transport congestion controller or the same as a network traffic 
shaper/scheduler).

My expectation would be that someone implementing L4S doesn't need to do 
this? ... unless someone wants to offer this sort of protection. In the 
same way that not all current Ethernet switches protect from overload, 
but it is important that those people who care about these things can 
acquire this when they needs it.

Gorry