Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt

Jinoo Joung <jjoung@smu.ac.kr> Tue, 03 October 2023 11:13 UTC

From: Jinoo Joung <jjoung@smu.ac.kr>
Date: Tue, 03 Oct 2023 20:11:00 +0900
Message-ID: <CA+8ZkcQ4mdjft7E6EpuXfLb5pSTz8aWSXZaH2ty_DUu7eXH72A@mail.gmail.com>
To: Toerless Eckert <tte@cs.fau.de>
Cc: peng.shaofu@zte.com.cn, detnet@ietf.org, draft-eckert-detnet-glbf@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/detnet/39WwFlPGK4b9hEgVpU_SkM9P-Cs>
Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt

Toerless, please see in-line.

On Tue, Oct 3, 2023 at 8:19 AM Toerless Eckert <tte@cs.fau.de> wrote:

> Thanks, Jinoo, inline
>
> On Sat, Sep 23, 2023 at 08:19:12AM +0900, Jinoo Joung wrote:
> > Toerless,
> >
> > Your argument to me and Shaofu is summarized in the following statement.
> >
> > "The latency calculusfor C-SCORE is based on the assumption of a TSpec
> limit
> > for each flow. To me it seems clear that this calcullus does not hold
> > true when the TSpec increases across hops."
> >
> > Another way to put it would be:
> > The increased burst size of a flow will harm the latency bound calculated
> > based on the initial burst size of the flow.
>
> Yes.
>
> > My answer is:
> > The increased burst size of a flow does NOT harm the latency bound
> > calculated based on the initial burst size of the flow.
> >
> > Let's say a flow has TSpec rt+b.
> > The worst case is that the actual arrival follows rt+b;
> > that is, at t=0 a burst of size b arrives, and then packets arrive one
> > after another at rate r.
> >
> > CLAIM: The largest E2E latency is that of the LAST packet of the initial
> > burst b. Let's call this packet p(b).
> > Proof:
> > 1) For any packet that arrives before p(b), it departs the network before
> > p(b), so CLAIM holds.
>
> Are you assuming that all packets of the burst are inserted into the
> first hop of the network with infinite speed, so that their arrival times
> into the network are about the same? Otherwise there is no proof that
> an increase in departure time from the network is equivalent to an
> increase of latency through the network for these packets, culminating
> in the last packet having the highest end-to-end latency.
>

JJ: Infinite link capacity is assumed here.
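For concreteness, a minimal simulation sketch of this worst case (illustrative only, not from any draft; unit-length packets, infinite ingress speed as assumed above, a work-conserving FIFO server of rate r standing in for the network):

    # TSpec r*t + b arrivals into a rate-r server (all values illustrative).
    r = 1.0       # reserved rate, packets per unit time
    b = 5         # initial burst, in packets
    extra = 3     # packets arriving after the burst, one every 1/r

    arrivals = [0.0] * b + [(i + 1) / r for i in range(extra)]

    def serve(arrivals, rate):
        """Departure times from a work-conserving FIFO server of the given rate."""
        deps, t = [], 0.0
        for a in arrivals:
            t = max(t, a) + 1.0 / rate    # one unit-length packet takes 1/rate
            deps.append(t)
        return deps

    latencies = [d - a for d, a in zip(serve(arrivals, r), arrivals)]
    print(latencies)   # [1, 2, 3, 4, 5, 5, 5, 5]: no packet exceeds the latency
                       # of p(b), the last packet of the initial burst (b/r)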


>
> > 2) Consider a packet that arrives later than p(b). Let's call it p(b)+n.
>
> So its from a second or further burst of the flow ?
>

JJ: Packet p(b)+n is a subsequent packet from the flow.
It can be either from another burst, or a stand-alone packet.


> > For simplicity of discussion, let's assume that every packet's length
> > is 1. Assume that p(b)+n joins the burst b at a point of the network.
> > Then by definition of the burst, at that point the arrival time
> > difference between p(b) and p(b)+n is larger than or equal to n/r.
>
> Not if there was rate distortion introduced.
>

JJ: The rate does not get distorted; only the burst size increases.
The flow is guaranteed to be served at its initial rate.
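For concreteness, a worked instance of the join/resolve step quoted below (illustrative numbers; unit-length packets, r = 1, n = 2):

    # Illustrative check of the join/resolve argument (numbers are made up).
    r, n = 1.0, 2
    arr_gap = n / r + 0.5     # arrival gap at the join point: >= n/r by definition
    dep_gap = n / r           # n packets drain in at most n/r at service rate >= r
    print(dep_gap - arr_gap)  # -0.5 <= 0: the latency difference is never
                              # positive, so p(b)+n cannot fare worse than p(b)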


>
> > Assume that the burst to which these two packets belong is resolved at
> > another point of the network.
> > Between the join point and the resolve point, they go together.
> > Then the departure time difference is less than or equal to n/r, since
> > the flow is served at a rate larger than or equal to r.
> > Therefore the latency difference, which is the departure time difference
> > minus the arrival time difference, is less than or equal to zero.
>
> I can't paint a picture of this part.
>
> Cheers
>     Toerless
>
> > CLAIM holds
> >
> > So, for both cases CLAIM holds.
> >
> > I hope this helps.
> > The burst accumulation effect has been well known for decades.
> > Do you really believe that network calculus theory has not been aware
> > of it?
> >
> > Best,
> > Jinoo
> >
> >
> > On Sat, Sep 23, 2023 at 12:39 AM Toerless Eckert <tte@cs.fau.de> wrote:
> >
> > > (eliminating many lines of empty space - your mailer is curious that
> > > way...)
> > >
> > > On Fri, Sep 22, 2023 at 04:36:16PM +0800, peng.shaofu@zte.com.cn
> wrote:
> > > > Hi Toerless,
> > > >
> > > >
> > > > You provided a good example in a previous mail to show how burst
> > > > accumulation of a single flow arises from burst aggregation from
> > > > other flows. Here, I apologize for inventing a new term, "burst
> > > > aggregation", to distinguish it from "burst accumulation". The former
> > > > is about multiple flows, while the latter is about multiple
> > > > service-burst-intervals of a single flow.
> > > >
> > > > Yes, burst accumulation may cause TSpec distortion of the observed
> > > > flow. I believe we have agreement on this.
> > > >
> > > > However, our views differ on how effectively different scheduling
> > > > schemes resolve burst accumulation.
> > > >
> > > > If I get your point, you think that re-shaping (including IR), or a
> > > > Damper, must be introduced to eliminate burst accumulation.
> > > >
> > > >
> > > > IMO, IR/Damper is just one option (note that I take the Damper as a
> > > > design similar to IR, e.g., per "traffic class + incoming port",
> > > > except with a different method of calculating packets' eligibility
> > > > times); other options may be time-rank based, such as the latency
> > > > compensation in Deadline or the virtual finish time in C-SCORE.
> > >
> > > Let me repeat what i just replied to Jinoo:
> > >
> > > None of the scheduling mechanisms considered for rfc2212 treats a
> > > flow's adjacent packets differently just because they are clustered
> > > closer together than they should be (because of prior hops' burst
> > > aggregation/accumulation).
> > >
> > > Shaper/IR and Damper rectify this problem directly, albeit
> > > differently - resulting in an in-time/work-conserving vs.
> > > on-time/zero-jitter service experience, and a difference in whether we
> > > have per-hop state/processing scaling issues.
> > >
> > > > We can simply understand the difference between the IR/Damper option
> > > > and the time-rank option as below:
> > > >
> > > > In the IR/Damper option, an early-arrived packet (note: compared to
> > > > the previously arrived packet) is delayed in the shaping buffer until
> > > > its eligibility time; before
> > >
> > > compared to the previously arrived packet ... from the same flow ...
> > > only in shaper/IR, not the damper; the damper delays all packets up to
> > > a quasi-synchronous latency.
> > >
> > > > that, it is REFUSED entry into the queueing sub-system.
> > > >
> > > >
> > > > In the time-rank option, an early-arrived packet is TOLERATED into
> > > > the queueing sub-system, but with a large rank (or low priority), so
> > > > as not to affect the urgency of scheduling eligible arrivals (which
> > > > thus get bounded latency).
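A minimal sketch contrasting the two enqueue disciplines just described (the function names and data structures are illustrative, not from any draft):

    import heapq, itertools

    def ir_damper_enqueue(pkt, eligibility_time, now, shaping_buffer, fifo):
        # IR/Damper option: an early packet is held in the shaping buffer
        # until its eligibility time; only then may it enter the queue.
        if now < eligibility_time:
            shaping_buffer.append((eligibility_time, pkt))  # REFUSED for now
        else:
            fifo.append(pkt)

    _seq = itertools.count()  # tie-breaker for equal ranks

    def time_rank_enqueue(pkt, rank, pifo):
        # Time-rank option: an early packet is admitted immediately, but
        # with a large rank (low priority), so eligible arrivals keep
        # their urgency in the PIFO.
        heapq.heappush(pifo, (rank, next(_seq), pkt))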
> > >
> > > I remember discussions with Yaakov Stein re. draft-stein-srtsn, which
> > > is AFAIK also an instance of such time-rank forwarding with deadlines,
> > > and he pointed out in later parts of the discussion that his
> > > (students') research also showed that the mechanism is a heuristic
> > > bounded latency solution, but not a deterministic one.
> > >
> > > > That is, even in the case of burst accumulation, a well-designed
> > > > scheduler can still achieve a bounded latency.
> > >
> > > And i fundamentally disagree, because i can not find a principle in
> > > these mechanisms that restores a per-flow TSpec increase correctly. To
> > > elaborate in more detail on my 4 * 100 Mbps flows via Gbps link: WFQ
> > > would not restore a 1 Mbps time difference between distorted packets
> > > p-1 and p of my example flow; it would only limit the burst to be equal
> > > to or less than that of a 2.5 Mbps flow in the best case, because this
> > > is effectively what packet bursts of 400 * 1 Mbps across a 1 Gbps link
> > > do get with WFQ. But if the distorted p-1 and p packets happen to get
> > > into a burst of fewer than 400 flows' packets, then each of p-1 and p
> > > would get an even higher share of the available 1 Gbps bandwidth and
> > > thus their distortion would stay higher. When there is no contention on
> > > some following hop, their distortion would not be eliminated at all.
> > >
> > > > However, I agree with you that the amount of burst accumulation
> > > > still differs between in-time and on-time modes, with different
> > > > buffer costs.
> > >
> > > My unfortunate feeling is that there is no way to build an
> > > in-time/work-conserving bounded latency solution that is per-hop
> > > stateless, because i need a shaper/IR for in-time/work-conserving
> > > behavior, and i can not build an IR/shaper per-hop stateless.
> > >
> > > Note that our LBF work (https://ieeexplore.ieee.org/document/9110431)
> > > equally attempted to provide such deadline- and
> > > departure-priority-calculation-based latency management - in-time. But
> > > we also failed to figure out how to provide a bounded latency calculus
> > > because of this TSpec distortion issue, which is why we then went to
> > > the gLBF approach.
> > >
> > > And of course it is extremely annoying, because as soon as you do such
> > > an advanced scheduler (PIFO, per-packet calculation of departure
> > > priority/time), you do minimize the effect (like WFQ, C-SCORE and LBF
> > > do), but you don't eliminate it.
> > >
> > > Aka: close, but i fear no deterministic cigar.
> > >
> > > And btw: I'd be happy to be proven wrong, but the per-hop
> > > math/calculus that is used in rfc2212 and elsewhere is all based on
> > > the premise of a per-hop known and same TSpec for the flows.
> > >
> > > Cheers
> > >     Toerless
> > > >
> > > > Regards,
> > > > PSF
> > > >
> > > > Original
> > > >
> > > > From: Toerless Eckert <tte@cs.fau.de>
> > > > To: Peng Shaofu (10053815)
> > > > Cc: jjoung@smu.ac.kr <jjoung@smu.ac.kr>; detnet@ietf.org <detnet@ietf.org>; draft-eckert-detnet-glbf@ietf.org <draft-eckert-detnet-glbf@ietf.org>
> > > > Date: 2023-09-22 01:15
> > > > Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
> > > >
> > > >
> > > > Peng,
> > > >
> > > > I am always trying to escape having to get into the depths of the
> > > > math, and instead trying to find as simple an example as possible.
> > > > And the text from rfc2212 may be somewhat misleading when it talks
> > > > about classes.
> > > >
> > > > The simple reasons why i think that WFQ does not help the math over
> > > > FIFO are these:
> > > >
> > > > - I can have all flows with the same r_i and b_i, and then all flows
> > > >   are treated equally.
> > > > - I can have all bursts be single packets, in which case the queue i
> > > >   build up is just one packet/flow.
> > > >
> > > > In these cases, i think WFQ will not treat packets differently from
> > > > FIFO. Correct me if i am wrong. And the problem of TSpec distortion
> > > > still exists.
> > > >
> > > > Aka: WFQ (and by extrapolation C-SCORE) would likely have benefits
> > > > reducing the TSpec distortion to some degree if b_i is multiple
> > > > packets, because then a back-to-back burst of packets from one flow
> > > > would be broken up; but if the interface serving rate is still
> > > > significantly higher than sum(r_i), then i will still continue to
> > > > pass bursts hop-by-hop that lead to TSpec distortion (IMHO).
> > > >
> > > > Cheers
> > > >     Toerless
> > > >
> > > >
> > > > On Thu, Sep 21, 2023 at 12:02:01PM +0800, peng.shaofu@zte.com.cn wrote:
> > > > > Hi Jinoo,
> > > > >
> > > > > Thanks for your explanation.
> > > > >
> > > > > I agree with you that for a single observed flow, especially with
> > > > > an ideal fluid model, each scheduler in the network can provide a
> > > > > guaranteed service for it, even in the case of flow aggregation.
> > > > >
> > > > > For example, multiple flows, each with arrival curve
> > > > > A_i(t) = b_i + r_i * t, may belong to the same traffic class and
> > > > > consume the resources (burst and bandwidth) of the same
> > > > > out-interface on the intermediate node. Suppose that the scheduler
> > > > > provides a rate-latency service curve R*(t - T) for that traffic
> > > > > class, where R >= sum(r_i). Then, if each flow arrives ideally,
> > > > > i.e., complies with its own arrival curve, the observed flow will
> > > > > be ensured a guaranteed service rate R' out of the total service
> > > > > rate R.
> > > > >
> > > > > Suppose that on each node the guaranteed service curve for the
> > > > > observed flow (e.g., flow 1) is R'*(t - T'). Then, according to the
> > > > > "Pay Bursts Only Once" rule, the E2E latency may be:
> > > > > T' * hops + b_1/R'. It seems that the E2E latency considers only
> > > > > the burst of flow 1 (i.e., b_1), and only once, and never considers
> > > > > other flows.
> > > > >
> > > > > However, the truth is hidden in T'.
> > > > >
> > > > > According to the section 6 "Aggregate Scheduling" discussion in
> > > > > the netcal book, the guaranteed service curve for flow 1, i.e.,
> > > > > R'*(t - T'), may be deduced by:
> > > > >
> > > > >     R*(t - T) - sum(b_2 + r_2*t, b_3 + r_3*t, ..., b_n + r_n*t)
> > > > >   = (R - r_2 - r_3 - ... - r_n)
> > > > >     * (t - ((b_2 + b_3 + ... + b_n + R*T) / (R - r_2 - r_3 - ... - r_n)))
> > > > >
> > > > > Thus, R' equals (R - r_2 - r_3 - ... - r_n), and T' equals
> > > > > ((b_2 + b_3 + ... + b_n + R*T) / (R - r_2 - r_3 - ... - r_n)).
> > > > > It can be seen that all bursts of other flows contribute to the
> > > > > result of T', on each node, again and again.
> > > > >
> > > > > If we take each b_i as 1/n of B, r_i as 1/n of R, and T as 0, to
> > > > > simply compare the above PBOO-based latency estimation and the
> > > > > traffic-class-based latency estimation, we may find that the former
> > > > > is b_1/r_1 + hops*(n-1)*b_1/r_1, while the latter is b_1/r_1. It is
> > > > > amazing that the former is about n times the latter.
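For concreteness, a minimal Python sketch evaluating the quoted formulas (all values illustrative):

    # n flows, each with b_i = B/n and r_i = R/n; node latency term T = 0.
    n, hops = 4, 5
    B, R = 4000.0, 1e6            # total burst (bytes) and service rate (bytes/s)
    b1, r1 = B / n, R / n

    R_prime = R - (n - 1) * r1                      # R' = R - r_2 - ... - r_n
    T_prime = ((n - 1) * b1 + R * 0.0) / R_prime    # T' = (b_2+...+b_n + R*T) / R'

    pboo_bound  = T_prime * hops + b1 / R_prime     # PBOO-based E2E estimate
    class_bound = b1 / r1                           # traffic-class based estimate
    print(pboo_bound / class_bound)                 # = 1 + hops*(n-1) = 16 here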
> > > > >
> > > > > Please correct me if I misunderstand.
> > > > >
> > > > > Regards,
> > > > > PSF
> > > > >
> > > > > Original
> > > > >
> > > > > From: Jinoo Joung <jjoung@smu.ac.kr>
> > > > > To: Peng Shaofu (10053815)
> > > > > Cc: Toerless Eckert <tte@cs.fau.de>; DetNet WG <detnet@ietf.org>; draft-eckert-detnet-glbf@ietf.org <draft-eckert-detnet-glbf@ietf.org>
> > > > > Date: 2023-09-19 16:13
> > > > > Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
> > > > >
> > > > > Hello Shaofu.
> > > > >
> > > > > Thanks for the most relevant question, which, as you have said, is
> > > > > related to Toerless' last question regarding "reshaping on merge
> > > > > points".
> > > > >
> > > > > If I may repeat Toerless' question below:
> > > > >
> > > > > "If you are relying on the same math as what rfc2212 claims, just
> > > > > replacing stateful WFQ with stateless, then it seems to me that you
> > > > > would equally need the shapers demanded by rfc2212 on merge-points.
> > > > > I do not see them in C-SCORE."
> > > > >
> > > > > And your question is:
> > > > >
> > > > > ""Pay Bursts Only Once" may be only applied in the case that the
> > > > > network provides a dedicated service rate to a flow. Our network
> > > > > naturally aggregates flows at every node, therefore does not
> > > > > dedicate a service rate to a flow, and PBOO does not apply."
> > > > >
> > > > > My answer is:
> > > > >
> > > > > "Pay Bursts Only Once" is applied in the case that the network
> > > > > provides a GUARANTEED service rate to a flow.
> > > > >
> > > > > Fair queuing, C-SCORE, or even a FIFO scheduler can guarantee a
> > > > > service rate to a flow.
> > > > >
> > > > > As long as a flow, as a single entity, is guaranteed a service
> > > > > rate, it is not considered aggregated or merged.
> > > > > Therefore reshaping is not necessary, and PBOO holds.
> > > > >
> > > > > Below is my long answer. Take a look if you'd like.
> > > > >
> > > > > Fair queuing, Deficit round robin, or even a FIFO scheduler
> > > > > guarantees a service rate to a flow, if the total arrival rate is
> > > > > less than the link capacity.
> > > > > The only caveat is that FIFO can only guarantee a service rate
> > > > > equal to the arrival rate, while FQ and DRR can adjust the service
> > > > > rate to be larger than the arrival rate.
> > > > > If such rate-guaranteeing schedulers are placed in a network, then
> > > > > a flow is guaranteed to be served with a certain service rate, and
> > > > > is not considered to be "aggregated" in the intermediate nodes.
> > > > >
> > > > > RFC2212, pages 9-10, in the subsection "Policing", states the
> > > > > following:
> > > > >
> > > > > "Reshaping is done at all heterogeneous source branch points and at
> > > > > all source merge points."
> > > > >
> > > > > "Reshaping need only be done if ..."
> > > > >
> > > > > "A heterogeneous source branch point is a spot where the multicast
> > > > > distribution tree from a source branches to multiple distinct
> > > > > paths, and the TSpec's of the reservations on the various outgoing
> > > > > links are not all the same."
> > > > >
> > > > > "A source merge point is where the distribution paths or trees from
> > > > > two different sources (sharing the same reservation) merge."
> > > > >
> > > > > In short, RFC2212 states that reshaping CAN be necessary at flow
> > > > > aggregation and deaggregation points.
> > > > >
> > > > > Flow aggregation and deaggregation usually happen at the network
> > > > > boundary, between networks, etc., with careful planning. Flow
> > > > > multiplexing into a FIFO is not considered aggregation.
> > > > >
> > > > > Best,
> > > > >
> > > > > Jinoo
> > > > >
> > > > > On Tue, Sep 19, 2023 at 10:57 AM <peng.shaofu@zte.com.cn> wrote:
> > > > >
> > > > > Hi Jinoo, Toerless,
> > > > >
> > > > > Sorry to interrupt your discussion.
> > > > >
> > > > > According to the NetCal book
> > > > > (https://leboudec.github.io/netcal/latex/netCalBook.pdf),
> > > > > "Pay Bursts Only Once" may only be applied in the case that the
> > > > > network provides a dedicated service rate (perhaps protected by a
> > > > > dedicated queue, or even a dedicated sub-link) for the observed
> > > > > flow, such as the guaranteed service defined in RFC2212.
> > > > >
> > > > > In brief, there are no other flows sharing the service rate with
> > > > > the observed flow. That is, there is no fan-in, no so-called
> > > > > "competition flows belonging to the same traffic class at the
> > > > > intermediate node".
> > > > >
> > > > > Traffic class is often used to identify flow aggregation, which is
> > > > > not the case where "Pay Bursts Only Once" may be applied. It seems
> > > > > that in an IP/MPLS network, flow aggregation is natural. The
> > > > > picture Toerless showed in the previous mail is exactly related to
> > > > > flow aggregation, i.e., the observed flow may be interfered with by
> > > > > some competing flows belonging to the same traffic class at node A,
> > > > > and again interfered with by other competing flows belonging to the
> > > > > same traffic class at node B, separately.
> > > > >
> > > > > Please correct me if I misunderstood.
> > > > >
> > > > > Regards,
> > > > > PSF
> > > > >
> > > > > Toerless,
> > > > > It seems that you argue two things in the last email.
> > > > >
> > > > > 1) Your first argument: The E2E latency is the sum of per-hop
> > > > > latencies.
> > > > >
> > > > > You are half-right.
> > > > > According to RFC2212, page 3, the generic expression for the E2E
> > > > > latency bound of a flow is:
> > > > >
> > > > > [(b-M)/R * (p-R)/(p-r)] + (M+Ctot)/R + Dtot.    (1)
> > > > >
> > > > > Let's call this expression (1).
> > > > >
> > > > > Here, b is the max burst, M the max packet length, R the service
> > > > > rate, p the peak rate, r the arrival rate; and Ctot and Dtot are
> > > > > the sums of C and D, the so-called "error terms", over the hops.
> > > > > Thus, the E2E latency bound can be a linear function of the hop
> > > > > count, since Ctot and Dtot are functions of the hop count.
> > > > > However, the first term, [(b-M)/R * (p-R)/(p-r)], which includes b,
> > > > > is not.
> > > > > So you can see the E2E latency is NOT just the sum of per-hop
> > > > > latencies.
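A small numeric evaluation of expression (1) may help (all values illustrative; the first term applies when p > R > r):

    b, M, R = 6000.0, 1500.0, 1.0e6    # max burst, max packet, service rate
    p, r    = 2.0e6, 0.8e6             # peak rate, arrival rate
    Ctot, Dtot = 3 * 1500.0, 3 * 20e-6 # error terms summed over 3 hops (assumed)

    bound = ((b - M) / R) * (p - R) / (p - r) + (M + Ctot) / R + Dtot
    print(bound)   # seconds; only Ctot and Dtot grow with the hop count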
> > > > >
> > > > > 2) Your second argument: C-SCORE cannot be free from burst
> > > > > accumulation and other flows' bursts.
> > > > > My short answer: It is free from burst accumulation and other
> > > > > flows' bursts.
> > > > >
> > > > > Imagine an ideal system, in which your flow is completely isolated.
> > > > > It is ALONE in the network, whose link has the rate R at every hop.
> > > > > No other flows at all.
> > > > >
> > > > > Assume the flow's arrival process is indeed rt+b. At time 0, b
> > > > > arrives instantly.
> > > > > (This is the worst arrival, in that it produces the worst latency.)
> > > > > Then your flow experiences the worst latency b/R at the first node,
> > > > > and M/R (the transmission delay of a packet) at the subsequent
> > > > > nodes.
> > > > >
> > > > > I.e., (b-M)/R + H*M/R, where H is the hop count.
> > > > >
> > > > > This is a special case of (1), where R is the same as r, C is M,
> > > > > and D is zero.
> > > > >
> > > > > Fair queuing and C-SCORE are the best approximations of such an
> > > > > ideal system, with D equal to Lmax/LC, where Lmax and LC are the
> > > > > max packet length on the link and the capacity of the link,
> > > > > respectively.
> > > > > Therefore the E2E latency bound of C-SCORE is
> > > > >
> > > > > (b-M)/R + H*(M/R + Lmax/LC),
> > > > >
> > > > > which is again another special case of (1).
> > > > > Note that b is divided by R, not LC.
> > > > >
> > > > > It is well known that a FIFO scheduler's D (error term) is a
> > > > > function of the sum of other flows' bursts.
> > > > > Their E2E latency bounds are approximately
> > > > > (b-M)/R + H*[M/R + (Sum of Bursts)/LC].
> > > > >
> > > > > ATS or UBS also relies on FIFO, but the bursts are suppressed to
> > > > > their initial values, and it therefore enjoys a much better E2E
> > > > > latency expression.
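For a side-by-side feel of the three bounds just quoted, a minimal sketch (all values illustrative):

    b, M, R = 6000.0, 1500.0, 1.0e6    # flow burst, max packet, service rate
    H, Lmax, LC = 5, 1500.0, 1.0e9     # hops, max packet on link, link capacity
    sum_bursts = 50 * 6000.0           # total burst of competing flows (assumed)

    ideal   = (b - M) / R + H * (M / R)                    # isolated flow
    c_score = (b - M) / R + H * (M / R + Lmax / LC)        # C-SCORE / FQ
    fifo    = (b - M) / R + H * (M / R + sum_bursts / LC)  # FIFO-style
    print(ideal, c_score, fifo)   # only the FIFO bound grows with others' bursts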
> > > > >
> > > > > Hope this helps.
> > > > >
> > > > > Best,
> > > > > Jinoo
> > > > >
> > > > > On Sun, Sep 17, 2023 at 1:42 AM Toerless Eckert <tte@cs.fau.de> wrote:
> > > > >
> > > > > On Thu, Sep 14, 2023 at 08:08:24PM +0900, Jinoo Joung wrote:
> > > > >  > Toerless, thanks for the reply.
> > > > >  > In short:
> > > > >  >
> > > > >  > C-SCORE's E2E latency bound is NOT affected by the sum of
> bursts,
> > > but only
> > > > >  > by the flow's own burst.
> > > > >
> > > > >                      +----+
> > > > >            If1   --->|    |
> > > > >            ...       |  R |x--->
> > > > >            if100 --->|    |
> > > > >                      +----+
> > > > >
> > > > >  If you have a router with for example 100 input interfaces, all
> > > > >  sending packets to the same output interface, they all will have
> > > > >  uncorrelated flows arriving from the different interfaces, and at
> > > > >  least each interface can have a packet arriving at the same time
> > > > >  at R.
> > > > >
> > > > >  The basic calculus of UBS is simply the most simple and hence
> > > > >  conservative, assuming all flows' packets can arrive from
> > > > >  different interfaces without rate limits. But of course you can do
> > > > >  the latency calculations in a tighter fashion for UBS. It would be
> > > > >  interesting to learn whether IEEE for TSN-ATS (Qcr) was looking
> > > > >  into any of this, e.g., applying line shaping to the aggregate of
> > > > >  flows from the same input interface and incorporating the service
> > > > >  curve of the node.
> > > > >
> > > > >  In any case, whatever tighter latency calculus you do for Qcr is
> > > > >  equally applicable to gLBF.
> > > > >
> > > > >  > It is bounded by (B-L)/r + H(L/r + Lmax/R), where B is the
> flow's
> > > max burst
> > > > >  > size.
> > > > >
> > > > >  See the above picture. How could the packet in question not also
> > > > >  suffer the latency introduced by the other 99 packets randomly
> > > > >  arriving at almost the same time, just a tad earlier?
> > > > >
> > > > >  > You can also see that B appears only once, and not multiplied by
> > > the hop
> > > > >  > count H.  So the burst is paid only once.
> > > > >
> > > > >  C-SCORE is a stateless fair queuing (FQ) mechanism, but making FQ
> > > > >  stateless does not change the fact that FQ itself does not
> > > > >  eliminate the burst accumulation at merge points such as shown in
> > > > >  the picture. rfc2212, which for example also recommends FQ,
> > > > >  independently mandates the use of reshaping.
> > > > >
> > > > >  > Please see inline marked with JJ.
> > > > >
> > > > >  yes, more inline.
> > > > >
> > > > >  > On Thu, Sep 14, 2023 at 3:34 AM Toerless Eckert <tte@cs.fau.de> wrote:
> > > > >  [cutting off dead leaves]
> > > > >
> > > > >  > > I was only talking about the E2E latency bound guaranteeable
> > > > >  > > by DetNet Admission Control (AC).
> > > > >  >
> > > > >  > JJ: Admission control has two aspects.
> > > > >  > First, guaranteeing that the total arrival rate (or allocated
> > > > >  > service rate) does not exceed the link capacity.
> > > > >  > Second, assuring a service level agreement (SLA), or a requested
> > > > >  > latency bound (RSpec), to a flow.
> > > > >  > By "E2E latency bound guaranteeable by DetNet Admission Control
> > > (AC)",
> > > > >  > I think you mean that negotiating the SLA first, and then,
> based on
> > > the
> > > > >  > negotiated latency, allocating per-hop latency to a flow.
> > > > >  > I suggest this per-hop latency allocation, and enforcing that
> value
> > > in a
> > > > >  > node, is not a good idea,
> > > > >  > since it can harm the advantage of "pay burst only once (PBOO)"
> > > property.
> > > > >
> > > > >  It's the only way, AFAIK, to achieve scalability in the number of
> > > > >  hops and number of flows: being able to calculate latency as a
> > > > >  linear composition of per-hop latencies. This is what rfc2212
> > > > >  does, this is what Qcr does, *CQF, gLBF and so on.
> > > > >
> > > > >  > PBOO can be interpreted roughly as: If your burst is resolved
> at a
> > > point in
> > > > >  > a network, then it is resolved and does not bother you anymore.
> > > > >  > However, in the process of resolving, the delay is inevitable.
> > > > >
> > > > >  Remember that the problematic burstiness is an increase in the
> > > > >  intra-flow burstiness resulting from merge-point-based unexpected
> > > > >  latency to p-1 of the flow, followed by no such delay for packet p
> > > > >  of the same flow, hence clustering p-1 and p closer together and
> > > > >  making the flow exceed its reserved burst size.
> > > > >
> > > > >  This can not be compensated for in a work-conserving way by simply
> > > > >  capturing the per-hop latency of each packet p-1 and p alone; it
> > > > >  requires a shaper (or IR) to get rid of.
> > > > >
> > > > >  > JJ: Because you don't know exactly where the burst is resolved,
> > > > >  > when you calculate the per-node latency bound, the latency due
> > > > >  > to the burst has to be added as a portion of the latency bound.
> > > > >  > Thus the sum of per-node bounds is much larger than the E2E
> > > > >  > latency bound calculated by seeing the network as a whole.
> > > > >
> > > > >  Consider a flow passing through 10 hops. On every hop, you
> > > > >  potentially have a merge point with new traffic flows and new
> > > > >  burst collisions. All that we do in the simple UBS/Qcr calculus is
> > > > >  to take the worst case into account, where the worst case may not
> > > > >  even be admitted now, but could be admitted in the future, and at
> > > > >  that point in time you do not want to go back and change the
> > > > >  latency guarantee for your already admitted flow.
> > > > >
> > > > >          Src2  Src3  Src4  Src5  Src6  Src7  Src8  Src9  Src10
> > > > >           |     |     |     |     |     |     |     |     |
> > > > >           v     v     v     v     v     v     v     v     v
> > > > >   Src1 -> R1 -> R2 -> R3 -> R4 -> R5 -> R6 -> R7 -> R8 -> R9 -> R10 -> Rcv1
> > > > >                 |     |     |     |     |     |     |     |     |
> > > > >                 v     v     v     v     v     v     v     v     v
> > > > >                Rcv2  Rcv3  Rcv4  Rcv5  Rcv6  Rcv7  Rcv8  Rcv9  Rcv10
> > > > >
> > > > >  Above is an example where Flow 1 from Src1 to Rcv1 will
> > > > >  experience such a merge-point burst accumulation issue on every
> > > > >  hop - worst case. And as mentioned before, yes, when you use the
> > > > >  simple calculus, you're also overcalculating the per-hop latency
> > > > >  for flows that e.g. all run in parallel to Src1, but that is just
> > > > >  a matter of using stricter network calculus. And because Network
> > > > >  Calculus is complex, and i didn't want to start becoming an expert
> > > > >  on it, i simply built the stateless solution in a way where i can
> > > > >  reuse a pre-existing, proven and used-in-standards (Qcr)
> > > > >  queuing-model calculus.
> > > > >
> > > > >
> > > > >  > JJ: If you enforce resolving it at the network entrance by a
> > > > >  > strict regulation, then you may end up resolving when it is not
> > > > >  > actually needed.
> > > > >  > However, this approach is feasible. I will think more about it.
> > > > >
> > > > >  I think the flow interleaving is a much more fundamental issue of
> > > > >  higher utilization with a large number of flows, all with low
> > > > >  bitrates. Look at the examples of
> > > > >  draft-eckert-detnet-flow-interleaving, and tell me how else but by
> > > > >  time-division multiplexing one would be able to solve this. Forget
> > > > >  the complex option where flows from different ingress routers to
> > > > >  different egress routers are involved. Just take the most simple
> > > > >  problem of one ingress router PE1, maybe 10 hops through the
> > > > >  network to a PE2, and 10,000 acyclic flows going to the same
> > > > >  egress router PE2.
> > > > >
> > > > >  Seems to me quite obvious that you can as well resolve the burst
> > > > >  on the ingress PE1 instead of hoping for, and getting complex math
> > > > >  by trying to do, this on further hops along the path.
> > > > >
> > > > >  Cheers
> > > > >      Toerless
> > > > >
> > > > >  > >
> > > > >  > > > It can be a function of many factors, such as the number of
> > > > >  > > > flows, their service rates, their max bursts, etc.
> > > > >  > > > The sum of service rates is a deciding factor of
> > > > >  > > > utilization.
> > > > >  > >
> > > > >  > > Given how queuing latency always occurs from collisions of
> > > > >  > > packets in buffers, the sum of burst sizes is the much bigger
> > > > >  > > problem for DetNet than service rates. But this is a side
> > > > >  > > discussion.
> > > > >  > >
> > > > >  >
> > > > >  > JJ: That is a good point. So we should avoid queuing schemes
> > > > >  > whose latency bounds are affected by the sum of bursts.
> > > > >  > Fortunately, C-SCORE's E2E latency bound is NOT affected by the
> > > > >  > sum of bursts, but only by the flow's own burst.
> > > > >  > It is bounded by (B-L)/r + H(L/r + Lmax/R), where B is the
> > > > >  > flow's max burst size.
> > > > >  > You can also see that B appears only once, and is not multiplied
> > > > >  > by the hop count H.
> > > > >  > So the burst is paid only once.
> > > > >  >
> > > > >  >
> > > > >  > > > So, based on an E2E latency bound expression, you can guess
> > > > >  > > > the bound at 100% utilization.
> > > > >  > > > But you can always fill the link with flows of any burst
> > > > >  > > > sizes, therefore the guess can be wrong.
> > > > >  > >
> > > > >  > > "guess" is not a good work for DetNet.
> > > > >  > >
> > > > >  > > A DetNet bounded latency mechanism needs a latency bound
> > > > >  > > expression (calculus) to be a guaranteeable (over)estimate of
> > > > >  > > the bounded latency, independent of what other competing
> > > > >  > > traffic there may be in the future. Not a "guess".
> > > > >  > >
> > > > >  >
> > > > >  > JJ: Right. We should not guess. We should be able to provide an
> > > > >  > exact mathematical expression for the latency bound.
> > > > >  > Because you argued in the previous mail that the latency bound
> > > > >  > should be obtained based on 100% utilization, I was trying to
> > > > >  > explain why that should not be done.
> > > > >  >
> > > > >  >
> > > > >  > > > Admission control, on the other hand, can be based on an
> > > > >  > > > assumption of a high utilization level, but not necessarily
> > > > >  > > > 100%.
> > > > >  > > >
> > > > >  > > > You do not assume 100% utilization when you slot-schedule,
> > > > >  > > > do you?
> > > > >  > >
> > > > >  > > I don't understand what you mean by slot-schedule; can you
> > > > >  > > please explain?
> > > > >  > >
> > > > >  >
> > > > >  > JJ: Slot-scheduling, which is a common term in the research
> > > > >  > community, is the mapping of a flow into a slot (or cycle) in
> > > > >  > slot- (or cycle-) based queuing methods, such as CQF.
> > > > >  > When we speak of schedulability, it usually means whether we can
> > > > >  > allocate requesting flows into slots (cycles) with a
> > > > >  > preconfigured cycle length and number.
> > > > >  >
> > > > >  >
> > > > >  > > > So "incremental scheduling" is now popular.
> > > > >  > >
> > > > >  > > Not sure what you mean by that term.
> > > > >  >
> > > > >  >
> > > > >  > JJ: Incremental scheduling means that when a new flow wants to
> > > > >  > join the network, the network examines the schedulability of the
> > > > >  > flow without altering the existing flows' schedule.
> > > > >  >
> > > > >  >
> > > > >  > >
> > > > >  > >
> > > > >  > > I am only thinking about "admitting" when it comes to bounded
> > > > >  > > end-to-end latency, aka: action by the AC of the DetNet
> > > > >  > > controller-plane, and yes, that needs to support "on-demand"
> > > > >  > > (incremental?), aka: whenever a new flow wants to be admitted.
> > > > >  > >
> > > > >  > > > 2) In the example I gave, the two flows travel the same
> > > > >  > > > path, thus the second link's occupancy is identical to the
> > > > >  > > > first one.
> > > > >  > > > Thus the competition levels on the two links are the same,
> > > > >  > > > contrary to your argument.
> > > > >  > >
> > > > >  > > I guess we started from different assumptions about details
> > > > >  > > not explicitly mentioned. For example, i guess we both assume
> > > > >  > > that the sources connect to the first router with arbitrarily
> > > > >  > > fast interfaces, so that we could ever get close to 2B/R on
> > > > >  > > the first interface.
> > > > >  > >
> > > > >  > > But then we differed in a core detail. Here is my assumption
> > > > >  > > for the topology / admission control:
> > > > >  > >
> > > > >  > >                      Src3          to R4
> > > > >  > >               +----+   \  +----+  /  +----+
> > > > >  > >       Src1 ---|    |    \-|    |-/   |    |
> > > > >  > >       Src2 ---| R1 |x-----| R2 |x----| R3 |
> > > > >  > >               |    |.     |    |.    |    |
> > > > >  > >               +----+.     +----+.    +----+
> > > > >  > >                     |           |
> > > > >  > >                 2B buffer    2B buffer
> > > > >  > >                 R srv rate   R srv rate
> > > > >  > >
> > > > >  > > Aka: in my case, i was assuming that there could be a case
> > > > >  > > where the interface from R2 to R3 could have a 2B/R queue (and
> > > > >  > > not assuming further optimizations in calculus). E.g.: in some
> > > > >  > > other possible scenario, Src2 sends to R2, and Src3 and Src1
> > > > >  > > to R3, for example.
> > > > >  > >
> > > > >  >
> > > > >  > JJ: You can come up with an example in which your scheme works well.
> > > > >  > But that does not negate the counterexample I gave.
> > > > >  >
> > > > >  > JJ: Again, there are only two flows.
> > > > >  > And, B is not the buffer size; B is the max burst size of a
> > > > >  > flow. R is the link capacity.
> > > > >  > Please review carefully the example I gave.
> > > > >  >
> > > > >  >
> > > > >  > >
> > > > >  > > You must have assumed that the totality of the DetNet
> admission
> > > control
> > > > >  > > relevant topology is this:
> > > > >  > >
> > > > >  > >               +----+      +----+     +----+
> > > > >  > >       Src1 ---|    |      |    |     |    |
> > > > >  > >       Src2 ---| R1 |x-----| R2 |x----| R3 |
> > > > >  > >               |    |.     |    |.    |    |
> > > > >  > >               +----+.     +----+.    +----+
> > > > >  > >                     |           |
> > > > >  > >                 2B buffer    2B buffer
> > > > >  > >                 R srv rate   R srv rate
> > > > >  > >
> > > > >  > > Aka: DetNet admission control would have to be able to
> > > > >  > > predict that under no permitted admission scenario would R2
> > > > >  > > build a DetNet queue, so even when Src1 shows up as the first
> > > > >  > > and only flow, the admission control could permit a latency to
> > > > >  > > R3 of 2B/R - only the maximum delay through the R1 queue and 0
> > > > >  > > for the R2 queue.
> > > > >  > >
> > > > >  > > But if this is the whole network and the admission control
> > > > >  > > logic can come to this conclusion, then of course it could
> > > > >  > > equally do the optimization and not enable gLBF Dampening on
> > > > >  > > the R2 output interface, or e.g. set MAX=0 or the like. And
> > > > >  > > voila, gLBF would also give 2B/R - but as said, i think it's
> > > > >  > > not a deployment-relevant example.
> > > > >  >
> > > > >  >
> > > > >  > JJ: If you can revise and advance the gLBF, that would be great.
> > > > >  > I am willing to join that effort, if you would like to.
> > > > >  >
> > > > >  >
> > > > >  > >
> > > > >  > >
> > > > >  > > Cheers
> > > > >  > >     Toerless
> > > > >  > >
> > > > >  > > > Please see inline with JJ.
> > > > >  > > >
> > > > >  > > > Best,
> > > > >  > > > Jinoo
> > > > >  > > >
> > > > >  > > > On Wed, Sep 13, 2023 at 9:06 AM Toerless Eckert <
> tte@cs.fau.de>
> > > wrote:
> > > > >  > > >
> > > > >  > > > > On Fri, Jul 21, 2023 at 08:47:18PM +0900, Jinoo Joung
> wrote:
> > > > >  > > > > > Shaofu, thanks for the reply.
> > > > >  > > > > > It is my pleasure to discuss issues like this with you.
> > > > >  > > > > >
> > > > >  > > > > > The example network I gave is a simple one, but the
> > > scenario is the
> > > > >  > > worst
> > > > >  > > > > > that can happen.
> > > > >  > > > > > The E2E latency bounds are thus,
> > > > >  > > > > >
> > > > >  > > > > > for Case 1: ~ 2B/R
> > > > >  > > > > > for Case 2: ~ 2 * (2B/R)
> > > > >  > > > >
> > > > >  > > > > This is a bit terse, let me try to expand:
> > > > >  > > > >
> > > > >  > > > > Case 1 is FIFO or UBS/ATS, right / Case 2 is gLBF ?
> > > > >  > > > >
> > > > >  > > >
> > > > >  > > > JJ: Correct.
> > > > >  > > >
> > > > >  > > >
> > > > >  > > > >
> > > > >  > > > > Assuming i am interpreting it right, then this is
> > > > >  > > > > inconsistent with your setup: You said all links are the
> > > > >  > > > > same, so both hops have the same buffer and rates, so the
> > > > >  > > > > admission controller also expects to have to put as many
> > > > >  > > > > flows on the second link/queue that it fills up 2B.
> > > > >  > > > >
> > > > >  > > >
> > > > >  > > > JJ: Yes, two links are identical. However, as I have
> mentioned,
> > > > >  > > > an E2E latency bound is calculated based on a given network
> > > environment.
> > > > >  > > > We don't always consider a filled up link capacity.
> > > > >  > > > BTW, B is the max burst size of the flow.
> > > > >  > > >
> > > > >  > > >
> > > > >  > > >
> > > > >  > > > >
> > > > >  > > > > You just then made an example where there was never such
> > > > >  > > > > an amount of competing traffic on the second hop. But that
> > > > >  > > > > does not mean that the admission controller in UBS/ATS
> > > > >  > > > > could guarantee less per-hop latency than 2B/R.
> > > > >  > > >
> > > > >  > > >
> > > > >  > > > JJ: Again, two links are identical and two flows travel
> > > > >  > > > both links.
> > > > >  > > > The difference between Case 1 and Case 2 is not because of
> > > > >  > > > different competition levels (they are identical), but
> > > > >  > > > because of the non-work-conserving behaviour of the second
> > > > >  > > > link in Case 2.
> > > > >  > > >
> > > > >  > > >
> > > > >  > > > >
> > > > >  > > > > If the admission controller knew there would never be a
> queue
> > > on the
> > > > >  > > > > second hop, then gLBF likewise would not need to do a
> Damper
> > > on the
> > > > >  > > > > second hop. Hence as i said previously, the per-hop and
> > > end-to-end
> > > > >  > > > > bounded latency guarantee is the same between UBS and
> gLBF.
> > > > >  > > > >
> > > > >  > > > > > And again, these are the WORST E2E latencies that a
> packet
> > > can
> > > > >  > > experience
> > > > >  > > > > > in the two-hop network in the scenario.
> > > > >  > > > >
> > > > >  > > > > It's not the worst-case latency for the UBS case. You
> > > > >  > > > > just did not have an example that creates the worst-case
> > > > >  > > > > amount of competing traffic. Or you overestimated the
> > > > >  > > > > amount of buffering, and hence the per-hop latency, for
> > > > >  > > > > the UBS/ATS case.
> > > > >  > > > >
> > > > >  > > > > > In any network that is more complex, the E2E latency
> bounds
> > > of two
> > > > >  > > > > schemes
> > > > >  > > > > > are very different.
> > > > >  > > > >
> > > > >  > > > > Counterexample:
> > > > >  > > > >
> > > > >  > > > > You have a network with TSN-ATS. You have an admission
> > > controller.
> > > > >  > > > > You only have one priority for simplicity of example.
> > > > >  > > > >
> > > > >  > > > > You do not want to dynamically signal changed end-to-end
> > > > >  > > > > latencies to applications... because it's difficult. So
> > > > >  > > > > you need to plan for worst-case bounded latencies under
> > > > >  > > > > the maximum amount of traffic load. In a simple case this
> > > > >  > > > > means you give each interface a queue size of
> > > > >  > > > > B(i)/r = 10usec. Whenever a new flow needs to be added to
> > > > >  > > > > the network, you find a path where all the buffers have
> > > > >  > > > > enough space for your new flow's burst, and you signal to
> > > > >  > > > > the application that the end-to-end guaranteed latency is
> > > > >  > > > > P(path)+N*10usec, where P is the physical propagation
> > > > >  > > > > latency of the path and N is the number of hops it has.
> > > > >  > > > >
> > > > >  > > > > And as a result, all packets from the flow will arrive
> > > > >  > > > > with a latency between P(path)...P(path)+N*10usec -
> > > > >  > > > > depending on network load/weather.
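A minimal sketch of the planning rule just described (the 10usec per-hop budget comes from the text; the propagation value and hop count are illustrative):

    def e2e_latency_bound_us(prop_latency_us: float, hops: int,
                             per_hop_budget_us: float = 10.0) -> float:
        # Guaranteed bound signalled to the application: P(path) + N * 10usec.
        return prop_latency_us + hops * per_hop_budget_us

    print(e2e_latency_bound_us(200.0, 10))   # 300.0 usec for a 10-hop path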
> > > > >  > > > >
> > > > >  > > > > Now we replace UBS in the routers with gLBF. What
> > > > >  > > > > changes?
> > > > >  > > > >
> > > > >  > > > > 1) With UBS the controller still had to signal every new
> > > > >  > > > > and to-be-deleted flow to every router along its path to
> > > > >  > > > > set up the IR for the flow. This goes away (big win).
> > > > >  > > > >
> > > > >  > > > > 2) The forwarding is in our opinion cheaper/faster to
> > > > >  > > > > implement (because of the lack of the memory read/write
> > > > >  > > > > cycle of the IR).
> > > > >  > > > >
> > > > >  > > > > 3) The application now sees all packets arrive at a fixed
> > > > >  > > > > latency of P(path)+N*10usec, which, arguably, for an
> > > > >  > > > > application that MUST have bounded latency is, from all
> > > > >  > > > > examples i know, seen rather as a benefit than as a
> > > > >  > > > > downside.
> > > > >  > > > >
> > > > >  > > > > Cheers
> > > > >  > > > >     Toerless
> > > > >  > > > >
> > > > >  > > > >
> > > > >  > > > > >
> > > > >  > > > > > Best,
> > > > >  > > > > > Jinoo
> > > > >  > > > > >
> > > > >  > > > > > On Fri, Jul 21, 2023 at 8:31 PM <peng.shaofu@zte.com.
> .cn>
> > > wrote:
> > > > >  > > > > >
> > > > >  > > > > > >
> > > > >  > > > > > > Hi Jinoo,
> > > > >  > > > > > >
> > > > >  > > > > > >
> > > > >  > > > > > > I tried to reply briefly. If Toerless has free time,
> > > > >  > > > > > > he can confirm it.
> > > > >  > > > > > >
> > > > >  > > > > > > Here, when we said "latency bound formula", it refers
> > > > >  > > > > > > to the worst-case latency.
> > > > >  > > > > > >
> > > > >  > > > > > >
> > > > >  > > > > > > Intuitively, the worst case for gLBF (damper +
> > > > >  > > > > > > shaper + scheduler) is that:
> > > > >  > > > > > >
> > > > >  > > > > > >     the damping delay per hop is always 0 (because
> > > > >  > > > > > >     the scheduling delay = MAX);
> > > > >  > > > > > >
> > > > >  > > > > > >     the shaping delay is always 0 (because all are
> > > > >  > > > > > >     eligible arrivals);
> > > > >  > > > > > >
> > > > >  > > > > > >     the scheduling delay is always MAX (i.e., a
> > > > >  > > > > > >     concurrent full burst from all eligible arrivals
> > > > >  > > > > > >     on each hop).
> > > > >  > > > > > >
> > > > >  > > > > > > Similarly, the worst case for UBS (shaper + scheduler)
> > > > >  > > > > > > is that:
> > > > >  > > > > > >
> > > > >  > > > > > >     the shaping delay is always 0 (because all are
> > > > >  > > > > > >     eligible arrivals);
> > > > >  > > > > > >
> > > > >  > > > > > >     the scheduling delay is always MAX (i.e., a
> > > > >  > > > > > >     concurrent full burst from all eligible arrivals
> > > > >  > > > > > >     on each hop).
> > > > >  > > > > > >
> > > > >  > > > > > > Thus, the worst-case latency of gLBF and UBS is the
> > > > >  > > > > > > same.
> > > > >  > > > > > >
> > > > >  > > > > > > Your example gives a minimal latency that may be
> > > > >  > > > > > > experienced by UBS, but it is not the worst-case
> > > > >  > > > > > > latency. In fact, your example is a simple topology
> > > > >  > > > > > > that only contains a line without fan-in, which causes
> > > > >  > > > > > > the scheduling delay to be almost a minimal value due
> > > > >  > > > > > > to no interfering flows.
> > > > >  > > > > > > value due to no interfering flows.
> > > > >  > > > > > >
> > > > >  > > > > > >
> > > > >  > > > > > > Regards,
> > > > >  > > > > > >
> > > > >  > > > > > > PSF
> > > > >  > > > > > >
> > > > >  > > > > > >
> > > > >  > > > > > >
> > > > >  > > > > > > Original
> > > > >  > > > > > > *From: *Jinoo Joung <jjoung@smu.ac.kr>
> > > > >  > > > > > > *To: *Peng Shaofu (10053815)
> > > > >  > > > > > > *Cc: *tte@cs.fau.de <tte@cs.fau.de>; detnet@ietf.org <detnet@ietf.org>; draft-eckert-detnet-glbf@ietf.org <draft-eckert-detnet-glbf@ietf.org>
> > > > >  > > > > > > *Date: *2023-07-21 14:10
> > > > >  > > > > > > *Subject: *Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
> > > > >  > > > > > >
> > > > >  > > > > > > Hello Toerless,
> > > > >  > > > > > > I have a comment on your argument.
> > > > >  > > > > > > This is not a question, so you don't have to answer.
> > > > >  > > > > > >
> > > > >  > > > > > > You argued that gLBF + SP has the same latency bound
> > > > >  > > > > > > formula as UBS (equivalently, ATS IR + SP).
> > > > >  > > > > > > The IR is not a generalized gLBF, so they do not have
> > > > >  > > > > > > the same bound.
> > > > >  > > > > > >
> > > > >  > > > > > > In short, ATS IR is a rate-based shaper, so it enjoys
> > > > >  > > > > > > the "Pay burst only once" property.
> > > > >  > > > > > > gLBF is not. So it pays the burst at every node.
> > > > >  > > > > > >
> > > > >  > > > > > > Consider the simplest example, where there are only
> > > > >  > > > > > > two identical flows travelling the same path.
> > > > >  > > > > > > Every node and link in the path is identical.
> > > > >  > > > > > >
> > > > >  > > > > > > Case 1: Just FIFO
> > > > >  > > > > > > Case 2: gLBF + FIFO
> > > > >  > > > > > >
> > > > >  > > > > > > In the first node, the two flows' max bursts arrive
> > > > >  > > > > > > almost at the same time, but your flow is just a
> > > > >  > > > > > > little late.
> > > > >  > > > > > > Then your last packet in the burst (packet of
> > > > >  > > > > > > interest, POI) suffers a latency around 2B/R, where B
> > > > >  > > > > > > is the burst size and R is the link capacity.
> > > > >  > > > > > > This is true for both cases.
> > > > >  > > > > > >
> > > > >  > > > > > > In the next node:
> > > > >  > > > > > > In Case 1, the POI does not see any packet queued, so
> > > > >  > > > > > > it is delayed only by its own transmission delay.
> > > > >  > > > > > > In Case 2, the burst from the other flow, as well as
> > > > >  > > > > > > your own burst, awaits the POI. So the POI is again
> > > > >  > > > > > > delayed around 2B/R.
> > > > >  > > > > > >
> > > > >  > > > > > > In the case of UBS, the max bursts are legitimate, so
> > > > >  > > > > > > the regulator does not do anything, and the forwarding
> > > > >  > > > > > > behavior is identical to Case 1.
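The arithmetic of the two cases, as a minimal sketch (B and R values are illustrative):

    B, R = 10_000.0, 1.0e9     # max burst (bits) and link capacity (bits/s)
    case1 = 2 * B / R          # FIFO (and UBS): the burst collision is paid once
    case2 = 2 * (2 * B / R)    # gLBF + FIFO: the POI waits ~2B/R again at hop 2
    print(case1, case2)        # 2e-05 vs 4e-05 seconds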
> > > > >  > > > > > >
> > > > >  > > > > > > Best,
> > > > >  > > > > > > Jinoo
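
To make the arithmetic of the two cases concrete, here is a minimal
back-of-the-envelope sketch in Python; the burst size B, link rate R, packet
size L and hop count are assumed illustration values, not numbers from the
thread:

    # Two identical flows share a path; the POI is the last packet of
    # "your" burst, which loses the tie-break at the first node.
    B = 15_000           # burst size per flow, bytes (assumed)
    R = 1_250_000_000    # link capacity, bytes/s = 10 Gbps (assumed)
    L = 1_500            # packet size, bytes (assumed)
    hops = 5

    # Case 1 (plain FIFO): the 2B/R collision is paid only at the first
    # node; afterwards the POI finds an empty queue and pays only its own
    # transmission time ("pay bursts only once").
    case1 = 2 * B / R + (hops - 1) * (L / R)

    # Case 2 (gLBF + FIFO): dampening re-creates both max bursts in front
    # of the POI at every node, so it pays roughly 2B/R per hop.
    case2 = hops * (2 * B / R)

    print(f"Case 1 (FIFO):        {case1 * 1e6:7.1f} us")   # ~28.8 us
    print(f"Case 2 (gLBF + FIFO): {case2 * 1e6:7.1f} us")   # ~120.0 us
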
> > > > >  > > > > > >
> > > > >  > > > > > > On Fri, Jul 21, 2023 at 10:58 AM <peng.shaofu@zte.com.cn> wrote:
> > > > >  > > > > > >
> > > > >  > > > > > >>
> > > > >  > > > > > >> Hi Toerless,
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >> Thanks for your response, and I understand your busy situation.
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >> A quick reply is that gLBF is a really interesting
> > > > >  > > > > > >> proposal, which is very similar to the function of
> > > > >  > > > > > >> Deadline on-time per hop. Our views are consistent on
> > > > >  > > > > > >> this point. The key benefit is to avoid burst
> > > > >  > > > > > >> accumulation.
> > > > >  > > > > > >>
> > > > >  > > > > > >> The following example originated from the analysis of
> > > > >  > > > > > >> deadline on-time mode. I believe it also makes sense
> > > > >  > > > > > >> for gLBF. When you have free time, you may verify it.
> > > > >  > > > > > >> The result may be helpful both for gLBF and deadline
> > > > >  > > > > > >> on-time mode. Note that I didn't question the
> > > > >  > > > > > >> mathematical proof about UBS, which gets the worst-case
> > > > >  > > > > > >> latency based on the combination of
> > > > >  > > > > > >> "IR shaper + SP scheduler".
> > > > >  > > > > > >>
> > > > >  > > > > > >> Regards,
> > > > >  > > > > > >> PSF
> > > > >  > > > > > >>
> > > > >  > > > > > >> Original
> > > > >  > > > > > >> *From: *ToerlessEckert <tte@cs.fau.de>
> > > > >  > > > > > >> *To: *彭少富10053815;
> > > > >  > > > > > >> *Cc: *jjoung@smu.ac.kr <jjoung@smu.ac.kr>; detnet@ietf.org <detnet@ietf.org>; draft-eckert-detnet-glbf@ietf.org <draft-eckert-detnet-glbf@ietf.org>;
> > > > >  > > > > > >> *Date: *2023-07-21 06:07
> > > > >  > > > > > >> *Subject: **Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt*
> > > > >  > > > > > >>
> > > > >  > > > > > >> Thanks folks for the question and discussion. I have
> > > > >  > > > > > >> some WG chair vultures hovering over me making sure I
> > > > >  > > > > > >> prioritize building slides now (the worst one is myself
> > > > >  > > > > > >> ;-), so I will only give a brief answer and will get
> > > > >  > > > > > >> back to it later when I have more time.
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >> The calculus that I used is from the [UBS] research
> > > > >  > > > > > >> paper by Johannes Specht, aka: it has the mathematical
> > > > >  > > > > > >> proof; the full reference is in the gLBF draft. There
> > > > >  > > > > > >> is another, later proof of the calculus from Jean-Yves
> > > > >  > > > > > >> Le Boudec in another research paper which I'd have to
> > > > >  > > > > > >> dig up, and depending on whom you ask, one or the other
> > > > >  > > > > > >> is easier to read. I am on the UBS research paper side
> > > > >  > > > > > >> because I have not studied Jean-Yves' calculus book.
> > > > >  > > > > > >> But it's really beautifully simple: as soon as you
> > > > >  > > > > > >> think of flows with only a burst size and a rate (or
> > > > >  > > > > > >> period) of those bursts, then your delay through the
> > > > >  > > > > > >> queue is really just the sum of bursts. And I just find
> > > > >  > > > > > >> beauty in simplicity. That cannot be the full answer to
> > > > >  > > > > > >> Jinoo, but I first need to read up more on his WRR
> > > > >  > > > > > >> options.
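
The "sum of bursts" shape of that calculus fits in a few lines. A rough
Python sketch, simplified to a single priority level and ignoring the
higher/lower-priority terms of the exact theorem in [UBS] (numbers are
assumed for illustration):

    # Worst-case delay through one UBS-shaped queue: roughly, every
    # interfering flow may have a full burst queued ahead of the packet.
    def queue_delay_bound(bursts_bytes, link_rate_Bps, max_packet_bytes):
        return (sum(bursts_bytes) + max_packet_bytes) / link_rate_Bps

    # e.g. 9 flows with 15 kB bursts each on a 10 Gbps link:
    d = queue_delay_bound([15_000] * 9, 1_250_000_000, 1_500)
    print(f"{d * 1e6:.1f} us")   # ~109.2 us
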
> > > > >  > > > > > >>
> > > > >  > > > > > >> The need for doing per-hop dampening really comes, as
> > > > >  > > > > > >> I said, from two points:
> > > > >  > > > > > >>
> > > > >  > > > > > >> 1. Unless we do per-hop dampening, we will not get such
> > > > >  > > > > > >> a simple calculus and equally low latency.
> > > > >  > > > > > >>
> > > > >  > > > > > >> The two validation slides of the gLBF presentation
> > > > >  > > > > > >> show that one can exceed the simple calculated bounded
> > > > >  > > > > > >> latency already with as few as 9 flows across a single
> > > > >  > > > > > >> hop and arriving into one single queue - unless there
> > > > >  > > > > > >> is per-hop dampening (or a per-flow shaper).
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >> 2. I cannot imagine how to safely sell router
> > > > >  > > > > > >> equipment and build out all desirable topologies
> > > > >  > > > > > >> unless every node is able to do the dampening. And I
> > > > >  > > > > > >> also see it as the right next-generation challenge and
> > > > >  > > > > > >> option to make that happen in high-speed hardware.
> > > > >  > > > > > >> Specifically in metro rings, every big aggregation
> > > > >  > > > > > >> ring node has potentially 100 incoming interfaces and
> > > > >  > > > > > >> hence can create a lot of bursts onto ring interfaces.
> > > > >  > > > > > >>
> > > > >  > > > > > >> Cheers
> > > > >  > > > > > >>    Toerless
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >> P.S.: The validation picture in our slides was from
> > > > >  > > > > > >> our Springer Journal article, so I cannot simply put a
> > > > >  > > > > > >> copy on the Internet now, but ping me in a PM if you
> > > > >  > > > > > >> want an author's copy.
> > > > >  > > > > > >>
> > > > >  > > > > > >> On Wed, Jul 12, 2023 at 11:48:36AM +0800, peng.shaofu@zte.com.cn wrote:
> > > > >  > > > > > >> > Hi Jinoo, Toerless
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Also thanks to Toerless for bringing us this interesting draft.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > For the question Jinoo pointed out, I guess, based
> > > > >  > > > > > >> > on the similar analysis of deadline on-time per hop,
> > > > >  > > > > > >> > that even if all flows departed from the damper and
> > > > >  > > > > > >> > arrived at the queueing subsystem at the same time,
> > > > >  > > > > > >> > each flow can still have its worst-case latency, but
> > > > >  > > > > > >> > just consume the next round of budget (i.e., the MAX
> > > > >  > > > > > >> > value mentioned in the document).
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > However, consuming the next round of budget means
> > > > >  > > > > > >> > that it relies on the downstream node to compensate
> > > > >  > > > > > >> > the latency, and may result in a jitter of MAX
> > > > >  > > > > > >> > (i.e., worst-case latency). For this reason, deadline
> > > > >  > > > > > >> > on-time per hop is temporarily removed in version-6,
> > > > >  > > > > > >> > waiting for a more strict proof and optimization.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Anyway, gLBF can do the same things that deadline
> > > > >  > > > > > >> > on-time per hop does. The following intuitive example
> > > > >  > > > > > >> > is common to these two solutions.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Assume that at the last node, all received flows have
> > > > >  > > > > > >> > experienced almost 0 queueing delay on the upstream
> > > > >  > > > > > >> > nodes. Traffic class-8 has a per-hop worst-case
> > > > >  > > > > > >> > latency of 80 us (just an example, similar to the
> > > > >  > > > > > >> > delay levels of deadline), traffic class-7 has 70 us,
> > > > >  > > > > > >> > ..., traffic class-1 has 10 us.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Then, at time T0, traffic class-8 arrives at the last
> > > > >  > > > > > >> > node; it will dampen 80 us. At time T0+10us, traffic
> > > > >  > > > > > >> > class-7 arrives; it will dampen 70 us, and so on. At
> > > > >  > > > > > >> > T0+80us, all traffic class flows will depart from the
> > > > >  > > > > > >> > damper and be sent to the same outgoing port. So, an
> > > > >  > > > > > >> > observed packet may experience another round of
> > > > >  > > > > > >> > worst-case latency if other higher-priority flows
> > > > >  > > > > > >> > exist, or experience the best-case latency (almost 0)
> > > > >  > > > > > >> > if no other higher-priority flows exist. That is, a
> > > > >  > > > > > >> > jitter with the value of the worst-case latency still
> > > > >  > > > > > >> > exists.
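
The timing in this example is easy to replay. A tiny Python sketch, using
only the example numbers above (arrival offsets and damper values per
traffic class):

    # Class k arrives at the last node at T0 + (8-k)*10us and dampens
    # k*10us, so every class leaves the damper at the same instant.
    us = 1e-6
    T0 = 0.0
    for k in range(8, 0, -1):              # traffic class 8 .. 1
        arrival = T0 + (8 - k) * 10 * us
        damper = k * 10 * us
        print(k, (arrival + damper) / us)  # prints 80.0 for every class

All eight classes are released at T0+80us and re-collide at the outgoing
port, which is exactly the jitter-of-MAX effect described above.
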
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Regards,
> > > > >  > > > > > >> > PSF
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Original
> > > > >  > > > > > >> > From: JinooJoung <jjoung@smu.ac.kr>
> > > > >  > > > > > >> > To: Toerless Eckert <tte@cs.fau.de>;
> > > > >  > > > > > >> > Cc: detnet@ietf.org <detnet@ietf.org>; draft-eckert-detnet-glbf@ietf.org <draft-eckert-detnet-glbf@ietf.org>;
> > > > >  > > > > > >> > Date: 2023-07-09 09:39
> > > > >  > > > > > >> > Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Dear Toerless, thanks for the draft.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > gLBF is an interesting approach, similar in concept
> > > > >  > > > > > >> > to the Buffered Network (BN) I have introduced in the
> > > > >  > > > > > >> > ADN Framework document. The difference seems to be
> > > > >  > > > > > >> > that the BN buffers only once, at the network
> > > > >  > > > > > >> > boundary, while gLBF buffers at every node. Therefore
> > > > >  > > > > > >> > in the BN a buffer handles only a few flows, while in
> > > > >  > > > > > >> > gLBF a buffer needs to face millions of flows.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > The implementation complexity should be addressed in
> > > > >  > > > > > >> > a future draft, I think.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > I have a quick question below.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >    +------------------------+        +------------------------+
> > > > >  > > > > > >> >    | Node A                 |        | Node B                 |
> > > > >  > > > > > >> >    |   +-+   +-+   +-+      |        |   +-+   +-+   +-+      |
> > > > >  > > > > > >> >    |-x-|D|-y-|F|---|Q|----z-|--------|-x-|D|-y-|F|---|Q|----z-|
> > > > >  > > > > > >> >    |   +-+   +-+   +-+      |  Link  |   +-+   +-+   +-+      |
> > > > >  > > > > > >> >    +------------------------+        +------------------------+
> > > > >  > > > > > >> >            |<- A/B in-time latency ->|
> > > > >  > > > > > >> >            |<--A/B on-time latency ------->|
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >        Figure 3: Forwarding with Damper and measuring
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > In Figure 3, how can F and Q guarantee a nodal
> > > > >  > > > > > >> > latency below MAX? Does gLBF provide the same latency
> > > > >  > > > > > >> > bound as that of UBS, as is argued?
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > In UBS, an interleaved regulator (IR) works as the
> > > > >  > > > > > >> > damper D does in gLBF. The IR is essentially a FIFO,
> > > > >  > > > > > >> > whose HoQ packet is examined and leaves if eligible.
> > > > >  > > > > > >> > A packet's eligible time can be earlier than the time
> > > > >  > > > > > >> > at which it became the HoQ. However, in gLBF, a
> > > > >  > > > > > >> > packet has a precise moment at which it needs to be
> > > > >  > > > > > >> > forwarded from D. (Therefore, UBS is not a
> > > > >  > > > > > >> > generalized gLBF.)
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > In the worst case, all the flows may want to send
> > > > >  > > > > > >> > their packets from D to F at the same time. If it can
> > > > >  > > > > > >> > be implemented as such, bursts may accumulate, and
> > > > >  > > > > > >> > the latency cannot be guaranteed. If it cannot be
> > > > >  > > > > > >> > implemented that way, you may introduce another type
> > > > >  > > > > > >> > of delay. Don't you need an additional mechanism for
> > > > >  > > > > > >> > a latency guarantee?
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Thanks a lot in advance, I support this draft.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Best,
> > > > >  > > > > > >> > Jinoo
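
The IR-vs-damper distinction in this question can be made concrete. A
minimal Python sketch; the class and field names are illustrative
assumptions, not definitions from [UBS] or the gLBF draft:

    import heapq
    import itertools
    from dataclasses import dataclass

    @dataclass
    class Pkt:
        flow: str
        eligible_time: float = 0.0  # IR: earliest permitted departure
        damper_delay: float = 0.0   # gLBF: dampening value from the header

    class InterleavedRegulator:
        """FIFO: only the head-of-queue packet is tested for eligibility,
        so a packet can become eligible before reaching the head and still
        be held back by the packet in front of it."""
        def __init__(self):
            self.fifo = []
        def push(self, pkt):
            self.fifo.append(pkt)
        def pop_ready(self, now):
            if self.fifo and self.fifo[0].eligible_time <= now:
                return self.fifo.pop(0)
            return None

    class GlbfDamper:
        """Each packet gets one precise release instant (arrival time plus
        its damper value); releases happen in the order of those instants."""
        def __init__(self):
            self.heap = []
            self.seq = itertools.count()  # tie-breaker for equal instants
        def push(self, pkt, arrival):
            heapq.heappush(self.heap, (arrival + pkt.damper_delay,
                                       next(self.seq), pkt))
        def pop_ready(self, now):
            if self.heap and self.heap[0][0] <= now:
                return heapq.heappop(self.heap)[2]
            return None

The worst case raised above corresponds to many heap entries sharing the
same release instant: at that moment the damper hands them all to F at
once, which is exactly where the burst-accumulation concern comes from.
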
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > On Sat, Jul 8, 2023 at 12:05 AM Toerless Eckert <tte@cs.fau.de> wrote:
> > > > >  > > > > > >> >
> > > > >  > > > > > >> > Dear DetNet WG,
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  FYI on a newly posted bounded latency
> > > > >  > > > > > >> >  method/proposal draft that we call gLBF
> > > > >  > > > > > >> >  (guaranteed Latency Based Forwarding).
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  gLBF, as compared to TCQF and CSQF, is proposed from
> > > > >  > > > > > >> >  our side to be a more long-term solution, because it
> > > > >  > > > > > >> >  has not been validated with high-speed forwarding
> > > > >  > > > > > >> >  hardware and requires new network header information
> > > > >  > > > > > >> >  for the damper value, whereas TCQF/CSQF of course
> > > > >  > > > > > >> >  can operate without new headers, have proven
> > > > >  > > > > > >> >  high-speed PoC implementations, and are therefore
> > > > >  > > > > > >> >  really ready for adoption now.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  gLBF is a specific variant of the damper idea that
> > > > >  > > > > > >> >  is meant to be compatible with the TSN-ATS latency
> > > > >  > > > > > >> >  calculus, so that it can use the same
> > > > >  > > > > > >> >  controller-plane/path-computation algorithms and
> > > > >  > > > > > >> >  implementations one would use for TSN-ATS. It also
> > > > >  > > > > > >> >  allows eliminating the need for hop-by-hop clock
> > > > >  > > > > > >> >  synchronization and (we hope) should be well
> > > > >  > > > > > >> >  implementable in high-speed hardware.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  Any feedback welcome.
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  Cheers
> > > > >  > > > > > >> >      Toerless
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  In-Reply-To: <168874067601.53296.4506535864118204933@ietfa.amsl.com>
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  On Fri, Jul 07, 2023 at 07:37:56AM -0700, internet-drafts@ietf.org wrote:
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  > A new version of I-D, draft-eckert-detnet-glbf-01.txt
> > > > >  > > > > > >> >  > has been successfully submitted by Toerless Eckert and posted to the
> > > > >  > > > > > >> >  > IETF repository.
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  > Name:          draft-eckert-detnet-glbf
> > > > >  > > > > > >> >  > Revision:      01
> > > > >  > > > > > >> >  > Title:         Deterministic Networking (DetNet) Data Plane - guaranteed Latency Based Forwarding (gLBF) for bounded latency with low jitter and asynchronous forwarding in Deterministic Networks
> > > > >  > > > > > >> >  > Document date: 2023-07-07
> > > > >  > > > > > >> >  > Group:         Individual Submission
> > > > >  > > > > > >> >  > Pages:         39
> > > > >  > > > > > >> >  > URL:           https://www.ietf.org/archive/id/draft-eckert-detnet-glbf-01.txt
> > > > >  > > > > > >> >  > Status:        https://datatracker.ietf.org/doc/draft-eckert-detnet-glbf/
> > > > >  > > > > > >> >  > Htmlized:      https://datatracker.ietf.org/doc/html/draft-eckert-detnet-glbf
> > > > >  > > > > > >> >  > Diff:          https://author-tools.ietf.org/iddiff?url2=draft-eckert-detnet-glbf-01
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  > Abstract:
> > > > >  > > > > > >> >  >    This memo proposes a mechanism called "guaranteed Latency Based
> > > > >  > > > > > >> >  >    Forwarding" (gLBF) as part of DetNet for hop-by-hop packet forwarding
> > > > >  > > > > > >> >  >    with per-hop deterministically bounded latency and minimal jitter.
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  >    gLBF is intended to be useful across a wide range of networks and
> > > > >  > > > > > >> >  >    applications with a need for high-precision deterministic networking
> > > > >  > > > > > >> >  >    services, including in-car networks or networks used for industrial
> > > > >  > > > > > >> >  >    automation on factory floors, all the way to ++100Gbps country-wide
> > > > >  > > > > > >> >  >    networks.
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  >    Contrary to other mechanisms, gLBF does not require network-wide
> > > > >  > > > > > >> >  >    clock synchronization, nor does it need to maintain per-flow state at
> > > > >  > > > > > >> >  >    network nodes, avoiding drawbacks of other known methods while
> > > > >  > > > > > >> >  >    leveraging their advantages.
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  >    Specifically, gLBF uses the queuing model and calculus of Urgency
> > > > >  > > > > > >> >  >    Based Scheduling (UBS, [UBS]), which is used by TSN Asynchronous
> > > > >  > > > > > >> >  >    Traffic Shaping [TSN-ATS]. gLBF is intended to be a plug-in
> > > > >  > > > > > >> >  >    replacement for TSN-ATS or a parallel mechanism beside TSN-ATS,
> > > > >  > > > > > >> >  >    because it allows keeping the same controller-plane design which is
> > > > >  > > > > > >> >  >    selecting paths for TSN-ATS, sizing TSN-ATS queues, calculating
> > > > >  > > > > > >> >  >    latencies and admitting flows to calculated paths for calculated
> > > > >  > > > > > >> >  >    latencies.
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  >    In addition to reducing the jitter compared to TSN-ATS by additional
> > > > >  > > > > > >> >  >    buffering (dampening) in the network, gLBF also eliminates the need
> > > > >  > > > > > >> >  >    for the per-flow, per-hop state maintenance required by TSN-ATS. This
> > > > >  > > > > > >> >  >    avoids the need to signal per-flow state to every hop from the
> > > > >  > > > > > >> >  >    controller-plane and the associated scaling problems. It also reduces
> > > > >  > > > > > >> >  >    implementation cost for high-speed networking hardware due to the
> > > > >  > > > > > >> >  >    avoidance of additional high-speed read/write memory accesses to
> > > > >  > > > > > >> >  >    retrieve, process and update per-flow state variables for a large
> > > > >  > > > > > >> >  >    number of flows.
> > > > >  > > > > > >> >  >
> > > > >  > > > > > >> >  > The IETF Secretariat
> > > > >  > > > > > >> >
> > > > >  > > > > > >> >  _______________________________________________
> > > > >  > > > > > >> >  detnet mailing list
> > > > >  > > > > > >> >  detnet@ietf.org
> > > > >  > > > > > >> >  https://www.ietf.org/mailman/listinfo/detnet
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >> --
> > > > >  > > > > > >> ---
> > > > >  > > > > > >> tte@cs.fau.de
> > > > >  > > > > > >>
> > > > >  > > > > > >> _______________________________________________
> > > > >  > > > > > >> detnet mailing list
> > > > >  > > > > > >> detnet@ietf.org
> > > > >  > > > > > >> https://www.ietf.org/mailman/listinfo/detnet
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >>
> > > > >  > > > > > >
> --
> ---
> tte@cs.fau.de
>