Re: [tsvwg] Fwd: New Version Notification for draft-white-tsvwg-lld-00.txt

Bob Briscoe <in@bobbriscoe.net> Thu, 04 April 2019 11:09 UTC

To: Luca Muscariello <luca.muscariello@gmail.com>, Greg White <g.white@cablelabs.com>, tsvwg IETF list <tsvwg@ietf.org>
From: Bob Briscoe <in@bobbriscoe.net>
Date: Thu, 04 Apr 2019 12:09:13 +0100
Subject: Re: [tsvwg] Fwd: New Version Notification for draft-white-tsvwg-lld-00.txt

Luca,

In reviewing this thread I think you haven't received a full answer to a 
question you raised in your email early in the thread (below).

On 17/03/2019 08:02, Luca Muscariello wrote:
> One point that is not specified anywhere is when an mflow entry is 
> removed from the flow table.
>
> As the table is used to blacklist mflows, my intuition was that it has 
> to be based on some sort of timer, which is another parameter to set 
> in the mechanism.
>
> The table is then tracking flows that may or may not be in progress. 
> An entry is certainly maintained even for flows w/o packets in the 
> queues. This has poor scalability properties compared to fq, where an 
> entry is maintained only for flows with actual data in the queues.
[BB] The queue protection algorithm ages out the queuing score held for 
each microflow (mflow) at a constant rate. In fact, the queuing score is 
normalized to time, so that the only value held in the bucket for each 
flow is the bucket's expiry time. For well-behaved flows, this expiry 
time will typically be exceeded before the next packet of the same flow 
arrives. If it has not expired when the next packet of the mflow 
arrives, the algo adds an additional amount of time to the mflow's 
bucket for the new packet.

Thus, only ill-behaved flows persistently consume flow-state memory, 
while well-behaved flows recycle flow-state memory between themselves as 
their packets interleave with each other.
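
To make this recycling concrete, here is a minimal sketch in Python of a 
flow table that holds nothing but an expiry time per bucket. It is only 
an illustration of the idea described above, not the normative DOCSIS 
Annex P pseudocode; the table size, the names and the simple 
flow-to-bucket mapping are my own assumptions (the real algorithm's 
bucket management is more elaborate).

    import time

    NUM_BUCKETS = 32                     # assumed table size (illustrative only)
    expiry = [0.0] * NUM_BUCKETS         # one expiry time per bucket

    def add_to_bucket(flow_hash: int, qscore_duration: float) -> float:
        """Accumulate an mflow's normalized queuing score as an expiry time.

        Ageing is implicit: a bucket whose expiry time lies in the past
        holds no score, so it can be reused by whichever mflow next
        hashes to it."""
        b = flow_hash % NUM_BUCKETS
        now = time.monotonic()
        base = max(expiry[b], now)       # an expired bucket is effectively empty
        expiry[b] = base + qscore_duration
        return expiry[b] - now           # the mflow's current score, as a duration

A well-behaved mflow's bucket will usually have expired by the time its 
next packet arrives, so over time the same bucket serves many different 
flows, which is the memory-recycling property described above.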

The amount of time that the algo adds to an mflow's bucket is the 
queuing score of each of the mflow's packets, normalized to a duration, 
using the following per-packet calculation:

     qscore = pkt_size * p / V

where:
     pkt_size is the size of the packet [bit]
     p is the current AQM marking probability of the low latency queue 
(dimensionless, in [0,1])
     V is the (constant) congestion-bit-rate deemed acceptable for a 
single mflow (see below for rationale) [b/s]

The division by V normalizes qscore to a duration, because the units are 
[congestion-bit / (congestion-bit / s) = s]. Of course, because V is a 
constant, the division can be implemented as a per-packet integer 
multiplication by 1/V. We also choose V to be a power of 2, so a 
bit-shift can be used where that is more efficient.
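
As a concrete illustration of that per-packet arithmetic (a sketch only; 
the variable names and the value of V are my own assumptions, not 
DOCSIS defaults), with V chosen as a power of 2 the normalization 
reduces to a shift:

    LG_V = 23     # assumed: V = 2**23 b/s, about 8.4 Mb/s (hypothetical, not the spec's value)

    def qscore_duration(pkt_size_bytes: int, p: float) -> float:
        """Per-packet queuing score, normalized to a duration in seconds."""
        congestion_bits = pkt_size_bytes * 8 * p   # expected congestion-bits for this packet
        return congestion_bits / (1 << LG_V)       # divide by V; a fixed-point build could right-shift instead

    # e.g. a 1500 B packet arriving while the AQM marking probability is 5%:
    # qscore_duration(1500, 0.05) = 1500*8*0.05 / 2**23 s, roughly 72 microseconds

Feeding this duration into a bucket update like add_to_bucket() above is 
what accumulates (or fails to accumulate) an mflow's score over time.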

The intuition is that an mflow's queuing score is higher the more 
queuing delay there is when its packets arrive. And the faster the 
mflow's packets arrive, the more likely it is that its qscore 
accumulates faster than the passage of time ages it out. But the 
formula for qscore is not just based on intuition - it comes from 
shadow pricing theory.

Congestion-bit-rate can be conceptualized as the bit-rate of those 
packets of a flow that are ECN-marked. A scalable congestion control 
averages a constant congestion-bit-rate for any flow-rate (i.e. any 
link-rate and any number of flows). So we set V a few times greater than 
that (currently 4x), to allow some variability in a flow without risking 
sanction. This does not mean that flows have to use a scalable 
congestion control to get through Q Protection into the low latency 
queue; it just sets an upper bound that all scalable congestion 
controls will be able to keep under. It also sets the bar for 
unresponsive flows, to prevent them from causing significantly more 
queuing than responsive scalable flows.
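
For a rough sense of scale (my own illustrative numbers, not values 
taken from the DOCSIS spec): a scalable congestion control that averages 
roughly 2 ECN marks per RTT has a congestion-bit-rate of about 
2 * MTU * 8 / RTT regardless of its flow rate, so with an assumed 
10 ms round trip:

    # Illustrative sizing of V (assumed numbers, not the DOCSIS defaults)
    marks_per_rtt = 2                 # approximate average for a scalable CC (e.g. DCTCP/Prague)
    mtu_bits = 1500 * 8
    rtt_s = 0.010                     # assumed round-trip time

    congestion_bit_rate = marks_per_rtt * mtu_bits / rtt_s   # = 2.4 Mb/s, independent of flow rate
    V = 4 * congestion_bit_rate                              # the ~4x headroom mentioned above -> 9.6 Mb/s

In practice one would then round V to a nearby power of 2 (e.g. 2**23 
b/s, about 8.4 Mb/s) so that the division in the qscore calculation can 
be a shift, as noted earlier.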

I hope the rest of your questions have been answered satisfactorily 
already by others.

Cheers


Bob

>
> The protection mechanism assumes that one queue has soft priority over 
> the other. Strict priority would be stupid, so there must be a wfq 
> weight to schedule the classic queue less frequently. I did not find 
> the magic number that is set in the DOCSIS specs, but whatever number 
> is chosen I wonder which optimization objective would be the 
> foundation of that.
> 10%, 20%, 30% or any number would imply that if the priority queue is 
> used at 100% utilization, the other apps would get a small fraction of 
> the link capacity.
>
> What number is chosen and based on which calculations?
>
> Thanks
> Luca
>
> On Fri 15 Mar 2019 at 00:42, Greg White <g.white@cablelabs.com 
> <mailto:g.white@cablelabs.com>> wrote:
>
>     Yes, it is easy for applications to access the low latency queue,
>     and the job of QP is to redirect those that shouldn’t have been
>     there.  We could instead run all traffic through QP and let it be
>     automatically sorted, but if there is some traffic that can be
>     excluded a priori, then doing so reduces the queue delay (recall
>     that the QP algorithm only sanctions packets if the queue delay is
>     above the 1ms threshold), reduces the potential that a
>     well-behaving application is sanctioned (perhaps due to hash
>     collisions), etc.   Also, since sanctioning could result in
>     out-of-order delivery for a microflow, presuming that this is ok
>     for all applications might not be advisable.  We have some ideas
>     in the works for modifications that might be useful to reduce the
>     need for packet marking.
>
>     I don’t believe that this mechanism increases DoSability (is that
>     a word?) compared with single queue or fq systems, but would be
>     interested in some validation of that. Yes it is possible for a
>     malicious sender to mark its packets as NQB, and send at a high
>     rate with various port numbers (to get unique flow ids), and thus
>     have some percentage of packets avoid sanctioning, and in the
>     worst case drive queue delay up.  The default threshold values are
>     being adjusted from what is shown in the spec currently, so when
>     using the new default values, the worst case with carefully
>     crafted attack streams is ~32 ms of queuing delay (which, using
>     brute force, would require the malicious sender to launch ~400
>     Mbps of traffic at the QP function).  Similar traffic patterns in
>     a single queue system (or fq) could, depending on egress rate,
>     result in much more queue delay than this.   Also, if the attacker
>     does not mark their packets NQB, then the system protects the NQB
>     flows from the attack, so in this sense it is also slightly less
>     susceptible than single queue (and maybe fq).
>
>     *From: *Luca Muscariello <luca.muscariello@gmail.com
>     <mailto:luca.muscariello@gmail.com>>
>     *Date: *Wednesday, March 13, 2019 at 4:07 PM
>     *To: *Greg White <g.white@CableLabs.com>
>     *Cc: *"tsvwg@ietf.org <mailto:tsvwg@ietf.org>" <tsvwg@ietf.org
>     <mailto:tsvwg@ietf.org>>
>     *Subject: *Re: [tsvwg] New Version Notification for
>     draft-white-tsvwg-lld-00.txt
>
>     Greg,
>
>     Is there any intention to also grant access to the low latency
>     queue to applications that do not mark?
>
>     Also, if I got this whole mechanism right (big IF), it seems like
>     entering the low latency queue is only based on marking.
>
>     This way it looks like the queue is easy to access and the
>     mechanism reacts to bad behaviour afterwards.
>
>     Doesn't this risk creating spurious packet enqueues into the low
>     latency queue? Or, worse, allowing malicious sources to make the
>     low latency queue DoSable?
>
>     Luca
>
>     On Wed, Mar 13, 2019 at 2:03 PM Greg White <g.white@cablelabs.com
>     <mailto:g.white@cablelabs.com>> wrote:
>
>         Luca,
>
>         We may work on improving the informative text in that section
>         in order to make the behavior more clear without puzzling
>         through the pseudocode.
>
>         1.A packet enters the QP algorithm if it is classified to the
>         low latency queue.  By default, the intention is to use
>         ((DSCP==NQB) OR (ECN==ECT(1)) OR (ECN==CE)) (see the NQB draft
>         and ECN-L4S-ID draft) to identify likely NQB flows (but this
>         can be changed if needed).  The algorithm of course doesn’t
>         care how the packets get marked, but our view is that the
>         original application (or OS) should be the one that does the
>         marking. For either marking, there is no incentive to lie.
>
>         2.The algorithm sanctions (redirects to the classic queue) on
>         a per-packet basis.  So, a microflow that only briefly causes
>         queuing will have some of its packets sanctioned during that
>         episode only, and even during that episode, all of its packets
>         are scored. Note, in some cases this could result in
>         out-of-order delivery, which might be a minor disincentive for
>         persistently QB flows mismarking themselves as NQB.
>
>         3.See above.
>
>         Greg
>
>         *From: *Luca Muscariello <luca.muscariello@gmail.com
>         <mailto:luca.muscariello@gmail.com>>
>         *Date: *Wednesday, March 13, 2019 at 2:37 PM
>         *To: *Greg White <g.white@CableLabs.com>
>         *Cc: *"tsvwg@ietf.org <mailto:tsvwg@ietf.org>" <tsvwg@ietf.org
>         <mailto:tsvwg@ietf.org>>
>         *Subject: *Re: [tsvwg] New Version Notification for
>         draft-white-tsvwg-lld-00.txt
>
>         Greg,
>
>         Thanks for the reference.
>
>         I have a few questions. I read the document just once so,
>         don't be surprised if I got something wrong.
>
>         1) A packet enters the low latency queue if it is marked. Who
>         is responsible for the marking?
>
>         Is the original application to do that?
>
>         2) Once a packet enters the low latency queue, the flow, or
>         microflow,
>
>         is tracked in a (micro)flow table by reporting a function of
>         the queue occupancy.  How do updates of the table
>
>         take place? Insert is clear unless the flow is banned; update
>         is clear. Remove is only clear to me when the flow is moved
>         out of the low latency queue once it has started to get too many
>         resources. Remove in the general case is not clear to me. Timers?
>
>         3)  It seems like the mechanism trusts the packet marker until
>         the queue protection mechanism takes
>
>         a different decision. How can a marker be redeemed and related
>          packets be readmitted to the low latency queue?
>
>         Thanks
>
>         Luca
>
>         On Mon, Mar 11, 2019 at 7:18 PM Greg White
>         <g.white@cablelabs.com <mailto:g.white@cablelabs.com>> wrote:
>
>             Hi Luca,
>
>             You can find the details in Annex P of the DOCSIS
>             MULPIv3.1 spec.
>             https://specification-search.cablelabs.com/CM-SP-MULPIv3.1
>
>             -Greg
>
>
>             From: Luca Muscariello <luca.muscariello@gmail.com
>             <mailto:luca.muscariello@gmail.com>>
>             Date: Monday, March 11, 2019 at 6:26 PM
>             To: Greg White <g.white@CableLabs.com>
>             Cc: Dave Taht <dave@taht.net <mailto:dave@taht.net>>,
>             Jonathan Morton <chromatix99@gmail.com
>             <mailto:chromatix99@gmail.com>>, "tsvwg@ietf.org
>             <mailto:tsvwg@ietf.org>" <tsvwg@ietf.org
>             <mailto:tsvwg@ietf.org>>
>             Subject: Re: [tsvwg] New Version Notification for
>             draft-morton-taht-tsvwg-sce-00.txt
>
>             Hi Greg,
>
>             I'm curious about the queue protection function in the LLD
>             document.
>             It seems to assume that a flow table is maintained to
>             determine if a flow
>             has the right to enter the low latency queue.
>
>             Can you give more details about that component? Or point
>             me to a reference?
>
>             Thanks
>             Luca
>
>
>             On 3/11/19, 5:29 PM, "tsvwg on behalf of Greg White"
>             <tsvwg-bounces@ietf.org <mailto:tsvwg-bounces@ietf.org> on
>             behalf of g.white@CableLabs.com> wrote:
>
>                 TSVWG,
>
>                 I've posted a new informative draft that gives an
>             overview of the new Low Latency DOCSIS specification (see
>             links below).  This overview may be interesting to TSVWG
>             participants because it includes support for L4S
>             (https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-aqm-dualq-coupled)
>             and for the NQB PHB
>             (https://datatracker.ietf.org/doc/html/draft-white-tsvwg-nqb).
>
>                 Best Regards,
>                 Greg
>
>
>
>                 On 3/11/19, 5:05 PM, "internet-drafts@ietf.org
>             <mailto:internet-drafts@ietf.org>"
>             <internet-drafts@ietf.org
>             <mailto:internet-drafts@ietf.org>> wrote:
>
>
>                     A new version of I-D, draft-white-tsvwg-lld-00.txt
>                     has been successfully submitted by Greg White and
>             posted to the
>                     IETF repository.
>
>                     Name:  draft-white-tsvwg-lld
>                     Revision:       00
>                     Title:          Low Latency DOCSIS - Technology
>             Overview
>                     Document date:  2019-03-11
>                     Group:          Individual Submission
>                     Pages:          25
>                     URL:
>             https://www.ietf.org/internet-drafts/draft-white-tsvwg-lld-00.txt
>
>                     Status:
>             https://datatracker.ietf.org/doc/draft-white-tsvwg-lld/
>                     Htmlized:
>             https://tools.ietf.org/html/draft-white-tsvwg-lld-00
>                     Htmlized:
>             https://datatracker.ietf.org/doc/html/draft-white-tsvwg-lld
>
>
>                     Abstract:
>                        NOTE: This document is a reformatted version of
>             [LLD-white-paper].
>
>                        The evolution of the bandwidth capabilities -
>             from kilobits per
>                        second to gigabits - across generations of
>             DOCSIS cable broadband
>                        technology has paved the way for the
>             applications that today form our
>                        digital lives.  Along with increased bandwidth,
>             or "speed", the
>                        latency performance of DOCSIS technology has
>             also improved in recent
>                        years.  Although it often gets less attention,
>             latency performance
>                        contributes as much or more to the broadband
>             experience and the
>                        feasibility of future applications as does speed.
>
>                        Low Latency DOCSIS technology (LLD) is a
>             specification developed by
>                        CableLabs in collaboration with DOCSIS vendors
>             and cable operators
>                        that tackles the two main causes of latency in
>             the network: queuing
>                        delay and media acquisition delay.  LLD
>             introduces an approach
>                        wherein data traffic from applications that
>             aren't causing latency
>                        can take a different logical path through the
>             DOCSIS network without
>                        getting hung up behind data from applications
>             that are causing
>                        latency, as is the case in today's Internet
>             architectures. This
>                        mechanism doesn't interfere with the way
>             applications share the total
>                        bandwidth of the connection, and it doesn't
>             reduce one application's
>                        latency at the expense of others.  In addition,
>             LLD improves the
>                        DOCSIS upstream media acquisition delay with a
>             faster request-grant
>                        loop and a new proactive scheduling mechanism. 
>             LLD makes the
>                        internet experience better for latency
>             sensitive applications without
>                        any negative impact on other applications.
>
>                        The latest generation of DOCSIS equipment that
>             has been deployed in
>                        the field - DOCSIS 3.1 - experiences typical
>             latency performance of
>                        around 10 milliseconds (ms) on the Access
>             Network link. However,
>                        under heavy load, the link can experience delay
>             spikes of 100 ms or
>                        more.  LLD systems can deliver a consistent 1
>             ms delay on the DOCSIS
>                        network for traffic that isn't causing latency,
>             imperceptible for
>                        nearly all applications. The experience will be
>             more consistent with
>                        much smaller delay variation.
>
>                        LLD can be deployed by field-upgrading DOCSIS
>             3.1 cable modem and
>                        cable modem termination system devices with new
>             software. The
>                        technology includes tools that enable automatic
>             provisioning of these
>                        new services, and it also introduces new tools
>             to report statistics
>                        of latency performance to the operator.
>
>                        Cable operators, DOCSIS equipment
>             manufacturers, and application
>                        providers will all have to act in order to take
>             advantage of LLD.
>                        This white paper explains the technology and
>             describes the role that
>                        each of these parties plays in making LLD a
>             reality.
>
>
>
>
>                     Please note that it may take a couple of minutes
>             from the time of submission
>                     until the htmlized version and diff are available
>             at tools.ietf.org <http://tools.ietf.org>.
>
>                     The IETF Secretariat
>
>
>

-- 
________________________________________________________________
Bob Briscoe                               http://bobbriscoe.net/