Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt

Jinoo Joung <jjoung@smu.ac.kr> Tue, 19 September 2023 08:13 UTC

From: Jinoo Joung <jjoung@smu.ac.kr>
Date: Tue, 19 Sep 2023 17:13:19 +0900
Message-ID: <CA+8ZkcRTMgSifH6LOyTC6C04UcFn_zsOwrSuH-YxGbaqLcDWeg@mail.gmail.com>
To: peng.shaofu@zte.com.cn
Cc: Toerless Eckert <tte@cs.fau.de>, DetNet WG <detnet@ietf.org>, draft-eckert-detnet-glbf@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/detnet/xK71DYkePD7nBNZTagetrT_7mgM>

Hello Shaofu.

Thanks for this most relevant question,
which, as you said, is related to Toerless' last question regarding
"reshaping on merge points".

If I may repeat Toerless' question below:

"If you are relying on the same math as what rfc2212 claims, just replacing
stateful WFQ with stateless, then it seems to me that you would equally need
the shapers demanded by rfc2212 on merge-points. I do not see them in
C-SCORE."

And your question is:

" "Pay Bursts Only Once"  may be only applied in the case that the network
provides
a dedicated service rate to a flow. Our network naturally aggregates flows
at every node,
therefore does not dedicate a service rate to a flow, and PBOO does not
apply."

My answer is:

"Pay Bursts Only Once" is applied in the case that the network provides
a GUARANTEED service rate to a flow.
Fair queuing, C-SCORE, or even FIFO scheduler can guarantee a service rate
to a flow.
As long as a flow, as a single entity, is guaranteed a service rate, it is
not considered aggregated or merged.
Therefore reshaping is not necessary, and PBOO holds.

Below is my long answer. Take a look if you'd like.

Fair queuing, deficit round robin, or even a FIFO scheduler guarantees a
service rate to a flow,
provided the total arrival rate is less than the link capacity.
The only caveat is that FIFO can only guarantee a service rate equal to
the arrival rate,
while FQ and DRR can set the service rate to be larger than the arrival
rate.
If such rate-guaranteeing schedulers are deployed throughout a network,
then a flow is guaranteed to be served with a certain service rate,
and is not considered "aggregated" at the intermediate nodes.

RFC2212, pages 9-10, in the subsection "Policing", states the following:

"Reshaping is done at all heterogeneous source branch points and at all
source merge points."

"Reshaping need only be done if ..."

"A heterogeneous source branch point is a spot where the
multicast distribution tree from a source branches to multiple
distinct paths, and the TSpec’s of the reservations on the various
outgoing links are not all the same."

"A source merge point is where the distribution paths or trees from two
different sources
(sharing the same reservation) merge."

In short, RFC2212 states that reshaping CAN be necessary at flow
aggregation and deaggregation points.

Flow aggregation and deaggregation usually happen at network boundaries,
between networks, and so on,
with careful planning. Flow multiplexing into a FIFO is not considered an
aggregation.
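
To make the PBOO arithmetic concrete, here is a minimal Python sketch
(all numbers are illustrative assumptions of mine, not from any draft):

    # PBOO: the burst term (b - M)/R is paid once, regardless of hop count.
    R    = 125e6   # guaranteed service rate [bytes/s] (1 Gb/s), assumption
    b    = 10e3    # flow's max burst [bytes], assumption
    M    = 1.5e3   # max packet length of the flow [bytes], assumption
    Lmax = 1.5e3   # max packet length on the link [bytes], assumption
    LC   = 1.25e9  # link capacity [bytes/s] (10 Gb/s), assumption
    H    = 10      # hop count

    pboo_bound  = (b - M) / R + H * (M / R + Lmax / LC)
    # Naive composition that pays the burst again at every hop:
    per_hop_sum = H * ((b - M) / R + M / R + Lmax / LC)

    print(f"PBOO E2E bound   : {pboo_bound * 1e6:7.1f} us")   # ~200 us
    print(f"Sum of hop bounds: {per_hop_sum * 1e6:7.1f} us")  # ~812 us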

Best,
Jinoo


On Tue, Sep 19, 2023 at 10:57 AM <peng.shaofu@zte.com.cn> wrote:

>
> Hi Jinoo, Toerless,
>
>
> Sorry to interrupt your discussion.
>
>
> According to the NetCal book
> (https://leboudec.github.io/netcal/latex/netCalBook.pdf),
> "Pay Bursts Only Once" may only be applied in the case that the network
> provides a dedicated service rate (perhaps protected by a dedicated
> queue, or even a dedicated sub-link) to the observed flow, such as the
> guaranteed service defined in RFC2212. In brief, there are no other
> flows sharing the service rate with the observed flow. That is, there is
> no fan-in, no so-called "competing flows belonging to the same traffic
> class at the intermediate node".
>
> Traffic class is often used to identify flow aggregation, which is not
> the case in which "Pay Bursts Only Once" may be applied. It seems that
> in an IP/MPLS network, flow aggregation is natural. The picture Toerless
> showed in the previous mail is exactly related to flow aggregation,
> i.e., the observed flow may be interfered with by some competing flows
> belonging to the same traffic class at node A, and again interfered with
> by other competing flows belonging to the same traffic class at node B,
> separately.
>
>
> Please correct me if I misunderstood.
>
>
> Regards,
>
> PSF
>
>
>
>
> Toerless,
> It seems that you argued two things in your last email.
>
> 1) Your first argument: The E2E latency is the sum of per-hop latencies.
>
> You are half-right.
> According to RFC2212, page 3, the generic expression for the E2E latency
> bound of a flow is:
>
> [(b-M)/R*(p-R)/(p-r)] + (M+Ctot)/R + Dtot.    (1)
>
> Let's call this expression (1).
>
> Here, b is the max burst, M the max packet length, R the service rate, p
> the peak rate, r the arrival rate;
> and Ctot and Dtot are the sums of C and D, which are so called "error
> terms",  over the hops.
> Thus, the E2E latency bound can be a linear function of the hop count,
> since Ctot and Dtot are functions of hop count.
> However, the first term [(b-M)/R*(p-R)/(p-r)], which includes b, is not.
> So you can see the E2E latency is NOT just the sum of per-hop latencies.
>
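> (To see the nonlinearity numerically, a small sketch; all numbers are
> purely illustrative assumptions, not taken from RFC2212:)
>
>     # Evaluate (1): (b-M)/R * (p-R)/(p-r) + (M+Ctot)/R + Dtot
>     # Illustrative values; C = M per hop and D = 10 us per hop are
>     # assumptions for the example only.
>     def e2e_bound(H, b=10e3, M=1.5e3, R=125e6, p=250e6, r=62.5e6,
>                   C_hop=1.5e3, D_hop=10e-6):
>         Ctot, Dtot = H * C_hop, H * D_hop
>         return (b - M) / R * (p - R) / (p - r) + (M + Ctot) / R + Dtot
>
>     for H in (1, 5, 10):
>         print(H, "hops:", round(e2e_bound(H) * 1e6, 1), "us")
>
> Doubling H does not double the bound: the first (burst) term stays
> constant, and only (M+Ctot)/R + Dtot scales with the hop count.
>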
> 2) Your second argument: C-SCORE cannot be free from burst accumulation
> and other flows' bursts.
> My short answer: It IS free from burst accumulation and other flows'
> bursts.
>
> Imagine an ideal system, in which your flow is completely isolated.
> It is ALONE in the network, whose links all have the rate R at every hop.
> No other flows at all.
>
> Assume the flow's arrival process is indeed rt+b. At time 0, b arrives
> instantly.
> (This is the worst-case arrival, the one that yields the worst latency.)
> Then your flow experiences the worst latency b/R at the first node,
> and M/R (transmission delay of a packet) at the subsequent nodes.
>
> I.e.  (b-M)/R + H*M/R, where H is the hop count.
>
> This is a special case of (1), where R is equal to r, C is M, and D
> is zero.
>
> Fair queuing and C-SCORE are the best approximations of such an ideal
> system, with D equal to Lmax/LC,
> where Lmax and LC are the max packet length on the link and the capacity
> of the link, respectively.
> Therefore the E2E latency bound of C-SCORE is
>
> (b-M)/R + H*(M/R + Lmax/LC),
>
> which is again another special case of  (1).
> Note that b is divided by R, not LC.
>
> It is well known that a FIFO scheduler's D (error term) is a function of
> the sum of other flows' bursts.
> Their E2E latency bounds are approximately
> (b-M)/R + H*[M/R + (Sum of Bursts)/LC].
>
> ATS or UBS also relies on FIFO, but the bursts are suppressed to their
> initial values,
> and it therefore enjoys a much better E2E latency expression.
>
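> (A quick numeric comparison of the two expressions above, with purely
> illustrative assumptions; the 50 competing flows are made up:)
>
>     b, M, R     = 10e3, 1.5e3, 125e6  # burst, packet [bytes]; rate [B/s]
>     Lmax, LC, H = 1.5e3, 1.25e9, 10   # link max packet, capacity, hops
>     sum_bursts  = 50 * b              # assumed 50 competing flows' bursts
>
>     cscore = (b - M) / R + H * (M / R + Lmax / LC)        # own burst once
>     fifo   = (b - M) / R + H * (M / R + sum_bursts / LC)  # all bursts/hop
>     print(f"C-SCORE: {cscore*1e6:7.1f} us")  # ~ 200 us
>     print(f"FIFO   : {fifo*1e6:7.1f} us")    # ~4188 us
>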
> Hope this helps.
>
> Best,
> Jinoo
>
>
>
> On Sun, Sep 17, 2023 at 1:42 AM Toerless Eckert <tte@cs.fau.de> wrote:
>
>> On Thu, Sep 14, 2023 at 08:08:24PM +0900, Jinoo Joung wrote:
>> > Toerless, thanks for the reply.
>> > In short:
>> >
>> > C-SCORE's E2E latency bound is NOT affected by the sum of bursts, but
>> only
>> > by the flow's own burst.
>>
>>                     +----+
>>           If1   --->|    |
>>           ...       |  R |x--->
>>           if100 --->|    |
>>                     +----+
>>
>> If you have a router with, for example, 100 input interfaces, all
>> sending packets to the same output interface, they all will have
>> uncorrelated flows arriving from the different interfaces, and at least
>> each interface can have a packet arriving at the same time in R.
>>
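>> (A toy calculation of that merge point, under assumed uniform values:
>> if one max-size packet arrives on each of the 100 input interfaces at
>> the same instant, the last packet queued waits for the other 99.)
>>
>>     n_if = 100      # input interfaces, as in the picture
>>     L    = 1500     # packet size [bytes], assumption
>>     rate = 1.25e9   # output rate [bytes/s] (10 Gb/s), assumption
>>
>>     worst_wait = (n_if - 1) * L / rate  # waits for 99 other packets
>>     print(f"last packet waits {worst_wait * 1e6:.1f} us")  # ~118.8 us
>>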
>> The basic calculus of UBS is simply the simplest and hence most
>> conservative, assuming all flows' packets can arrive from different
>> interfaces without rate limits. But of course you can do the latency
>> calculations in a tighter fashion for UBS. Would be interesting to
>> learn if IEEE for TSN-ATS (Qcr) was looking into any of this. E.g.:
>> applying line shaping for the aggregate of flows from the same input
>> interface and incorporating the service curve of the node.
>>
>> In any case, whatever tighter latency calculus you do for Qcr, it
>> is equally applicable to gLBF.
>>
>> > It is bounded by (B-L)/r + H(L/r + Lmax/R), where B is the flow's max
>> > burst size.
>>
>> See above picture. How could the packet in question not also suffer the
>> latency introduced by the other 99 packets randomly arriving at almost
>> the same time, just a tad earlier?
>>
>> > You can also see that B appears only once, and not multiplied by the hop
>> > count H.  So the burst is paid only once.
>>
>> C-SCORE is a stateless fair queuing (FQ) mechanism, but making FQ
>> stateless does not change the fact that FQ itself does not eliminate
>> the burst accumulation at merge points such as shown in the picture.
>> rfc2212, which for example also recommends FQ, independently mandates
>> the use of reshaping.
>>
>> > Please see inline marked with JJ.
>>
>> yes, more inline.
>>
>> > On Thu, Sep 14, 2023 at 3:34 AM Toerless Eckert <tte@cs.fau.de> wrote:
>> [cutting off dead leaves]
>>
>> > > I was only talking about the E2E latency bound guaranteeable by DetNet
>> > > Admission Control (AC).
>> >
>> > JJ: Admission control has two aspects.
>> > First, guaranteeing that the total arrival rate (or allocated service
>> > rate) does not exceed the link capacity.
>> > Second, assuring a service level agreement (SLA), or a requested
>> > latency bound (RSpec), to a flow.
>> > By "E2E latency bound guaranteeable by DetNet Admission Control (AC)",
>> > I think you mean negotiating the SLA first, and then, based on the
>> > negotiated latency, allocating a per-hop latency to a flow.
>> > I suggest this per-hop latency allocation, and enforcing that value in
>> > a node, is not a good idea,
>> > since it can harm the advantage of the "pay bursts only once (PBOO)"
>> > property.
>>
>> It's the only way, AFAIK, to achieve scalability in the number of hops
>> and the number of flows: being able to calculate latency as a linear
>> composition of per-hop latencies. This is what rfc2212 does, this is
>> what Qcr does, *CQF, gLBF and so on.
>>
>> > PBOO can be interpreted roughly as: If your burst is resolved at a
>> > point in a network, then it is resolved and does not bother you
>> > anymore.
>> > However, in the process of resolving, the delay is inevitable.
>>
>> Remember that the problematic burstiness is an increase in the
>> intra-flow burstiness resulting from merge-point-based unexpected
>> latency to packet p-1 of the flow followed by no such delay for packet
>> p of the same flow, hence clustering p-1 and p closer together and
>> making the flow exceed its reserved burst size.
>>
>> This cannot be compensated for in a work-conserving way by simply
>> capturing the per-hop latency of each packet p-1 and p alone; it
>> requires a shaper (or UR) to get rid of.
>>
>> > JJ: Because you don't know exactly where the burst is resolved,
>> > when you calculate the per-node latency bound, the latency due to the
>> > burst has to be added as a portion of the latency bound.
>> > Thus the sum of per-node bounds is much larger than the E2E latency
>> > bound calculated by seeing the network as a whole.
>>
>> Consider a flow passing through 10 hops. On every hop, you potentially
>> have a merge point with new traffic flows and new burst collisions. All
>> that we do in the simple UBS/Qcr calculus is take the worst case into
>> account, where the worst case may not even be admitted now, but could
>> be admitted in the future, and at that point in time you do not want to
>> go back and change the latency guarantee for your already admitted flow.
>>
>>         Src2  Src3  Src4  Src5  Src6  Src7  Src8   Src9 Src10
>>          |     |     |     |     |     |     |     |     |
>>          v     v     v     v     v     v     v     v     v
>>  Src1 -> R1 -> R2 -> R3 -> R4 -> R5 -> R6 -> R7 -> R8 -> R9 -> R10 -> Rcv1
>>                 |     |     |     |     |     |     |     |     |
>>                 v     v     v     v     v     v     v     v     v
>>                Rcv2  Rcv3  Rcv4  Rcv5  Rcv6  Rcv7  Rcv8  Rcv9  Rcv10
>>
>> Above is an example where Flow 1 from Src1 to Rcv1 will experience such
>> a merge-point burst accumulation issue on every hop - worst case. And
>> as mentioned before, yes, when you use the simple calculus, you're also
>> overcalculating the per-hop latency for flows that, e.g., all run in
>> parallel to Src1, but that is just a matter of using stricter network
>> calculus. And because network calculus is complex, and I didn't want to
>> start becoming an expert on it, I simply built the stateless solution
>> in a way where I can reuse a pre-existing, proven, used-in-standards
>> (Qcr) queuing model and calculus.
>>
>>
>> > JJ: If you enforce resolving it at the network entrance by a strict
>> > regulation, then you may end up resolving when it is not actually
>> > needed.
>> > However, this approach is feasible. I will think more about it.
>>
>> I think the flow-interleaving is a much more fundamental issue of
>> higher utilization with a large number of flows all with low bitrates.
>> Look at the examples of draft-eckert-detnet-flow-interleaving, and tell
>> me how else but time-division-multiplexing one would be able to solve
>> this. Forget the complex option where flows from different ingress
>> routers to different egress routers are involved. Just the simplest
>> problem of one ingress router PE1, maybe 10 hops through the network to
>> a PE2, and 10,000 acyclic flows going to the same egress router PE2.
>>
>> Seems to me quite obvious that you can as well resolve the burst on
>> ingress PE1 instead of hoping, and getting complex math, by trying to
>> do this on further hops along the path.
>>
>> Cheers
>>     Toerless
>>
>> > >
>> > > > It can be a function of many factors, such as number of flows, their
>> > > > service rates, their max bursts, etc.
>> > > > The sum of service rates is a deciding factor of utilization.
>> > >
>> > > Given how queuing latency always occurs from collision of packets
>> > > in buffers, the sum of burst sizes is a much bigger problem for
>> > > DetNet than service rates. But this is a side discussion.
>> > >
>> >
>> > JJ: That is a good point. So we should avoid queuing schemes whose
>> > latency bounds are affected by the sum of bursts.
>> > Fortunately, C-SCORE's E2E latency bound is NOT affected by the sum of
>> > bursts, but only by the flow's own burst.
>> > It is bounded by (B-L)/r + H(L/r + Lmax/R), where B is the flow's max
>> > burst size.
>> > You can also see that B appears only once, and not multiplied by the
>> > hop count H.
>> > So the burst is paid only once.
>> >
>> >
>> > > > So, based on an E2E latency bound expression, you can guess the
>> > > > bound at 100% utilization.
>> > > > But you can always fill the link with flows of any burst sizes;
>> > > > therefore the guess can be wrong.
>> > >
>> > > "guess" is not a good work for DetNet.
>> > >
>> > > A DetNet bounded latency mechanism needs a latency bound expression
>> > > (calculus) to be a guaranteeable (over)estimate of the bounded
>> > > latency, independent of what other competing traffic there may be
>> > > in the future. Not a "guess".
>> > >
>> >
>> > JJ: Right. We should not guess. We should be able to provide an exact
>> > mathematical expression for the latency bound.
>> > Because you argued in the previous mail that the latency bound should be
>> > obtained based on 100% utilization,
>> > I was trying to explain why that should not be done.
>> >
>> >
>> > > > Admission control, on the other hand, can be based on an
>> > > > assumption of a high utilization level, but not necessarily 100%.
>> > > >
>> > > > You do not assume 100% utilization when you slot-schedule, do you?
>> > >
>> > > I don't understand what you mean by slot-schedule; can you please
>> > > explain?
>> > >
>> >
>> > JJ: Slot-scheduling, which is a common term in the research community,
>> > is the mapping of flows into slots (or cycles) in slot-based (or
>> > cycle-based) queuing methods, such as CQF.
>> > When we say schedulability, it usually means whether we can allocate
>> > the requesting flows into slots (cycles) of a preconfigured cycle
>> > length and number.
>> >
>> >
>> > > > So "incremental scheduling" is now popular.
>> > >
>> > > Not sure what you mean by that term.
>> >
>> >
>> > JJ: Incremental scheduling means that when a new flow wants to join
>> > the network, the network examines the schedulability of the flow
>> > without altering the existing flows' schedule.
>> >
>> >
>> > >
>> > >
>> > > I am only thinking about "admitting" when it comes to bounded
>> > > end-to-end latency, aka: action by the AC of the DetNet
>> > > controller-plane, and yes, that needs to support "on-demand"
>> > > (incremental?), aka: whenever a new flow wants to be admitted.
>> > >
>> > > > 2) In the example I gave, the two flows travel the same path;
>> > > > thus the second link's occupancy is identical to the first one.
>> > > > Thus the competition levels in the two links are the same,
>> > > > contrary to your argument.
>> > >
>> > > I guess we started from different assumptions about details not
>> > > explicitly mentioned. For example, I guess we both assume that the
>> > > sources connect to the first router with arbitrarily fast
>> > > interfaces, so that we could ever get close to 2B/R on the first
>> > > interface.
>> > >
>> > > But then we differed in a core detail. Here is my assumption for
>> > > the topology / admission control:
>> > >
>> > >                      Src3          to R4
>> > >               +----+   \  +----+  /  +----+
>> > >       Src1 ---|    |    \-|    |-/   |    |
>> > >       Src2 ---| R1 |x-----| R2 |x----| R3 |
>> > >               |    |.     |    |.    |    |
>> > >               +----+.     +----+.    +----+
>> > >                     |           |
>> > >                 2B buffer    2B buffer
>> > >                 R srv rate   R srv rate
>> > >
>> > > Aka: in my case, I was assuming that there could be a case where
>> > > the interface from R2 to R3 could have a 2B/R queue (and not
>> > > assuming further optimizations in the calculus). E.g.: in some
>> > > other possible scenario, Src2 sends to R2, and Src3 and Src1 to
>> > > R3, for example.
>> > >
>> >
>> > JJ: You can come up with an example in which your scheme works well.
>> > But that does not negate the counterexample I gave.
>> >
>> > JJ: Again, there are only two flows.
>> > And, B is not the buffer size. B is the max burst size of a flow.
>> > R is the link capacity.
>> > Please review carefully the example I gave.
>> >
>> >
>> > >
>> > > You must have assumed that the totality of the DetNet admission
>> > > control relevant topology is this:
>> > >
>> > >               +----+      +----+     +----+
>> > >       Src1 ---|    |      |    |     |    |
>> > >       Src2 ---| R1 |x-----| R2 |x----| R3 |
>> > >               |    |.     |    |.    |    |
>> > >               +----+.     +----+.    +----+
>> > >                     |           |
>> > >                 2B buffer    2B buffer
>> > >                 R srv rate   R srv rate
>> > >
>> > > Aka: DetNet admission control would have to be able to predict that
>> > > under no permitted admission scenario would R2 build a DetNet queue,
>> > > so even when Src1 shows up as the first and only flow, the admission
>> > > control could permit a latency to R3 of 2B/R - only the maximum
>> > > delay through the R1 queue and 0 for the R2 queue.
>> > >
>> > > But if this is the whole network and the admission control logic
>> > > can come to this conclusion, then of course it could equally do the
>> > > optimization and not enable gLBF Dampening on the R2 output
>> > > interface, or e.g.: set MAX=0 or the like. And voila, gLBF
>> > > would also give 2B/R - but as said, I think it's not a
>> > > deployment-relevant example.
>> >
>> >
>> > JJ: If you can revise and advance the gLBF, that would be great.
>> > I am willing to join that effort, if you would like to.
>> >
>> >
>> > >
>> > >
>> > > Cheers
>> > >     Toerless
>> > >
>> > > > Please see inline with JJ.
>> > > >
>> > > > Best,
>> > > > Jinoo
>> > > >
>> > > > On Wed, Sep 13, 2023 at 9:06 AM Toerless Eckert <tte@cs.fau.de>
>> wrote:
>> > > >
>> > > > > On Fri, Jul 21, 2023 at 08:47:18PM +0900, Jinoo Joung wrote:
>> > > > > > Shaofu, thanks for the reply.
>> > > > > > It is my pleasure to discuss issues like this with you.
>> > > > > >
>> > > > > > The example network I gave is a simple one, but the scenario
>> > > > > > is the worst that can happen.
>> > > > > > The E2E latency bounds are thus,
>> > > > > >
>> > > > > > for Case 1: ~ 2B/R
>> > > > > > for Case 2: ~ 2 * (2B/R)
>> > > > >
>> > > > > This is a bit terse, let me try to expand:
>> > > > >
>> > > > > Case 1 is FIFO or UBS/ATS, and Case 2 is gLBF, right?
>> > > > >
>> > > >
>> > > > JJ: Correct.
>> > > >
>> > > >
>> > > > >
>> > > > > Assuming I am interpreting it right, this is inconsistent with
>> > > > > your setup: You said all links are the same, so both hops have
>> > > > > the same buffer and rates, so the admission controller also
>> > > > > expects to have to put as many flows on the second link/queue
>> > > > > that it fills up to 2B.
>> > > > >
>> > > >
>> > > > JJ: Yes, two links are identical. However, as I have mentioned,
>> > > > an E2E latency bound is calculated based on a given network
>> environment.
>> > > > We don't always consider a filled up link capacity.
>> > > > BTW, B is the max burst size of the flow.
>> > > >
>> > > >
>> > > >
>> > > > >
>> > > > > You just made an example where there was never such an amount
>> > > > > of competing traffic on the second hop. But that does not mean
>> > > > > that the admission controller in UBS/ATS could guarantee less
>> > > > > per-hop latency than 2B/R.
>> > > >
>> > > >
>> > > > JJ: Again, the two links are identical and two flows travel both
>> > > > links. The difference between Case 1 and Case 2 is not because of
>> > > > different competition levels (they are identical), but because of
>> > > > the non-work-conserving behaviour of the second link in Case 2.
>> > > >
>> > > >
>> > > > >
>> > > > > If the admission controller knew there would never be a queue
>> > > > > on the second hop, then gLBF likewise would not need to do a
>> > > > > Damper on the second hop. Hence, as I said previously, the
>> > > > > per-hop and end-to-end bounded latency guarantee is the same
>> > > > > between UBS and gLBF.
>> > > > >
>> > > > > > And again, these are the WORST E2E latencies that a packet
>> > > > > > can experience in the two-hop network in the scenario.
>> > > > >
>> > > > > It's not the worst-case latency for the UBS case. You just did
>> > > > > not have an example to create the worst-case amount of
>> > > > > competing traffic. Or you overestimated the amount of buffering
>> > > > > and hence the per-hop latency for the UBS/ATS case.
>> > > > >
>> > > > > > In any network that is more complex, the E2E latency bounds
>> > > > > > of the two schemes are very different.
>> > > > >
>> > > > > Counterexample:
>> > > > >
>> > > > > You have a network with TSN-ATS. You have an admission controller.
>> > > > > You only have one priority for simplicity of example.
>> > > > >
>> > > > > You do not want to dynamically signal changed end-to-end
>> > > > > latencies to applications... because it's difficult. So you
>> > > > > need to plan for worst-case bounded latencies under the maximum
>> > > > > amount of traffic load. In a simple case this means you give
>> > > > > each interface a queue size B(i)/r = 10usec. Whenever a new
>> > > > > flow needs to be added to the network, you find a path where
>> > > > > all the buffers have enough space for your new flow's burst,
>> > > > > and you signal to the application that the end-to-end
>> > > > > guaranteed latency is P(path)+N*10usec, where P is the physical
>> > > > > propagation latency of the path and N is the number of hops it
>> > > > > has.
>> > > > >
>> > > > > And in result, all packets from the flow will arrive with
>> > > > > a latency between P(path)...P(path)+N*10usec - depending
>> > > > > on network load/weather.
>> > > > >
>> > > > > Now we replace UBS in the routers with gLBF. What changes ?
>> > > > >
>> > > > > 1) With UBS the controller still had to signal every new and
>> > > > > to-be-deleted flow to every router along its path to set up the
>> > > > > IR for the flow. This goes away (big win).
>> > > > >
>> > > > > 2) The forwarding is in our opinion cheaper/faster to implement
>> > > > > (because of lack of memory read/write cycle of IR).
>> > > > >
>> > > > > 3) The application now sees all packets arrive at a fixed
>> > > > > latency of P(path)+N*10usec. Which, arguably, for an
>> > > > > application that MUST have bounded latency is, from all
>> > > > > examples I know, seen rather as a benefit than as a downside.
>> > > > >
>> > > > > Cheers
>> > > > >     Toerless
>> > > > >
>> > > > >
>> > > > > >
>> > > > > > Best,
>> > > > > > Jinoo
>> > > > > >
>> > > > > > On Fri, Jul 21, 2023 at 8:31 PM <peng.shaofu@zte.com.cn> wrote:
>> > > > > >
>> > > > > > >
>> > > > > > > Hi Jinoo,
>> > > > > > >
>> > > > > > >
>> > > > > > > I tried to reply briefly. If Toerless has free time, he can
>> > > > > > > confirm it.
>> > > > > > >
>> > > > > > >
>> > > > > > > Here, when we say "latency bound formula", it refers to the
>> > > > > > > worst-case latency.
>> > > > > > >
>> > > > > > >
>> > > > > > > Intuitively, the worst-case latency for gLBF (damper +
>> > > > > > > shaper + scheduler) is that:
>> > > > > > >
>> > > > > > >     damping delay per hop is always 0 (because scheduling
>> > > > > > >     delay = MAX);
>> > > > > > >
>> > > > > > >     shaping delay is always 0 (because all are eligible
>> > > > > > >     arrivals);
>> > > > > > >
>> > > > > > >     scheduling delay is always MAX (i.e., concurrent full
>> > > > > > >     burst from all eligible arrivals on each hop).
>> > > > > > >
>> > > > > > >
>> > > > > > > Similarly, the worst-case latency for UBS (shaper +
>> > > > > > > scheduler) is that:
>> > > > > > >
>> > > > > > >     shaping delay is always 0 (because all are eligible
>> > > > > > >     arrivals);
>> > > > > > >
>> > > > > > >     scheduling delay is always MAX (i.e., concurrent full
>> > > > > > >     burst from all eligible arrivals on each hop).
>> > > > > > >
>> > > > > > >
>> > > > > > > Thus, the worst-case latency of gLBF and UBS is the same.
>> > > > > > >
>> > > > > > > Your example gives a minimal latency that may be
>> > > > > > > experienced by UBS, but it is not the worst-case latency.
>> > > > > > > In fact, your example is a simple topology that only
>> > > > > > > contains a line without fan-in, which causes the scheduling
>> > > > > > > delay to be almost minimal due to no interfering flows.
>> > > > > > >
>> > > > > > >
>> > > > > > > Regards,
>> > > > > > >
>> > > > > > > PSF
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > > > Original
>> > > > > > > *From: *JinooJoung <jjoung@smu.ac.kr>
>> > > > > > > *To: *彭少富10053815;
>> > > > > > > *Cc: *tte@cs.fau.de <tte@cs.fau.de>;detnet@ietf.org <
>> > > detnet@ietf.org>;
>> > > > > > > draft-eckert-detnet-glbf@ietf.org <
>> > > draft-eckert-detnet-glbf@ietf.org>;
>> > > > > > > *Date: *2023-07-21 14:10
>> > > > > > > *Subject: **Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt*
>> > > > > > >
>> > > > > > > Hello Toerless,
>> > > > > > > I have a comment on your argument.
>> > > > > > > This is not a question, so you don't have to answer.
>> > > > > > >
>> > > > > > > You argued that gLBF + SP has the same latency bound
>> > > > > > > formula as UBS (equivalently, ATS IR + SP).
>> > > > > > > The IR is not a generalized gLBF, so they do not have the
>> > > > > > > same bound.
>> > > > > > >
>> > > > > > > In short, ATS IR is a rate-based shaper, so it enjoys the
>> > > > > > > "pay bursts only once" property.
>> > > > > > > gLBF is not, so it pays the burst at every node.
>> > > > > > >
>> > > > > > > Consider the simplest example, where there are only two
>> > > > > > > identical flows travelling the same path.
>> > > > > > > Every node and link in the path is identical.
>> > > > > > >
>> > > > > > > Case 1: Just FIFO
>> > > > > > > Case 2: gLBF + FIFO
>> > > > > > >
>> > > > > > > In the first node, the two flows' max bursts arrive almost
>> > > > > > > at the same time, but your flow is just a little late.
>> > > > > > > Then your last packet in the burst (packet of interest,
>> > > > > > > POI) suffers a latency of around 2B/R, where B is the burst
>> > > > > > > size and R is the link capacity.
>> > > > > > > This is true for both cases.
>> > > > > > >
>> > > > > > > In the next node:
>> > > > > > > In Case 1, the POI does not see any packet queued, so it is
>> > > > > > > delayed only by its own transmission delay.
>> > > > > > > In Case 2, the burst from the other flow, as well as your
>> > > > > > > own burst, awaits the POI. So the POI is again delayed by
>> > > > > > > around 2B/R.
>> > > > > > >
>> > > > > > > In the case of UBS, the max bursts are legitimate, so the
>> > > > > > > regulator does not do anything,
>> > > > > > > and the forwarding behavior is identical to Case 1.
>> > > > > > >
>> > > > > > > Best,
>> > > > > > > Jinoo
>> > > > > > >
>> > > > > > > On Fri, Jul 21, 2023 at 10:58 AM <peng.shaofu@zte.com.cn>
>> wrote:
>> > > > > > >
>> > > > > > >>
>> > > > > > >> Hi Toerless,
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> Thanks for your response, and I understand your busy
>> > > > > > >> situation.
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> A quick reply is that gLBF is really an interesting
>> > > > > > >> proposal, which is very similar to the function of
>> > > > > > >> Deadline on-time per hop. Our views are consistent on this
>> > > > > > >> point. The key benefit is to avoid burst accumulation.
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> The following example originated from the analysis of
>> > > > > > >> deadline on-time mode. I believe it also makes sense for
>> > > > > > >> gLBF. When you have free time, you may verify it. The
>> > > > > > >> result may be helpful both for gLBF and deadline on-time
>> > > > > > >> mode. Note that I didn't question the mathematical proof
>> > > > > > >> of UBS, which gets the worst-case latency based on the
>> > > > > > >> combination of "IR shaper + SP scheduler".
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> Regards,
>> > > > > > >>
>> > > > > > >> PSF
>> > > > > > >>
>> > > > > > >>
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> Original
>> > > > > > >> *From: *ToerlessEckert <tte@cs.fau.de>
>> > > > > > >> *To: *彭少富10053815;
>> > > > > > >> *Cc: *jjoung@smu.ac.kr <jjoung@smu.ac.kr>;detnet@ietf.org <
>> > > > > > >> detnet@ietf.org>;draft-eckert-detnet-glbf@ietf.org <
>> > > > > > >> draft-eckert-detnet-glbf@ietf.org>;
>> > > > > > >> *Date: *2023-07-21 06:07
>> > > > > > >> *Subject: **Re: [Detnet] FYI:
>> draft-eckert-detnet-glbf-01.txt*
>> > > > > > >>
>> > > > > > >> Thanks folks for the question and discussion. I have some
>> > > > > > >> WG chair vultures hovering over me making sure I
>> > > > > > >> prioritize building slides now (the worst one is myself
>> > > > > > >> ;-), so I will only give a brief answer and will get back
>> > > > > > >> to it later when I have more time.
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> The calculus that I used is from the [UBS] research paper
>> > > > > > >> from Johannes Specht, aka: it has the mathematical proof;
>> > > > > > >> the full reference is in the gLBF draft. There is another,
>> > > > > > >> later proof of the calculus from Jean-Yves Le Boudec in
>> > > > > > >> another research paper which I'd have to dig up, and
>> > > > > > >> depending on whom you ask, one or the other is easier to
>> > > > > > >> read. I am on the UBS research paper side because I have
>> > > > > > >> not studied Jean-Yves' calculus book. But it's really
>> > > > > > >> beautifully simple: as soon as you think of flows with
>> > > > > > >> only a burst size and a rate (or period) of those bursts,
>> > > > > > >> then your delay through the queue is really just the sum
>> > > > > > >> of bursts. And I just find beauty in simplicity. And that
>> > > > > > >> cannot be the full answer to Jinoo, but I first need to
>> > > > > > >> read up more on his WRR options.
>> > > > > > >>
>> > > > > > >> The need for doing per-hop dampening really comes, as I
>> > > > > > >> said, from two points:
>> > > > > > >>
>> > > > > > >> 1. Unless we do per-hop dampening, we will not get such a
>> > > > > > >> simple calculus and equally low latency. The two
>> > > > > > >> validation slides of the gLBF presentation show that one
>> > > > > > >> can exceed the simple calculated bounded latency already
>> > > > > > >> with as few as 9 flows across a single hop and arriving
>> > > > > > >> into one single queue, unless there is per-hop dampening
>> > > > > > >> (or a per-flow shaper).
>> > > > > > >>
>> > > > > > >> 2. I cannot imagine how to safely sell router equipment
>> > > > > > >> and build out all desirable topologies without every node
>> > > > > > >> being able to do the dampening. And I also see it as the
>> > > > > > >> right next-generation challenge and option to make that
>> > > > > > >> happen in high-speed hardware. Specifically in metro
>> > > > > > >> rings, every big aggregation ring node has potentially
>> > > > > > >> 100 incoming interfaces and hence can create a lot of
>> > > > > > >> bursts onto ring interfaces.
>> > > > > > >>
>> > > > > > >> Cheers
>> > > > > > >>    Toerless
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> P.S.: The validation picture in our slides was from our
>> > > > > > >> Springer journal article, so I cannot simply put a copy on
>> > > > > > >> the Internet now, but ping me in a PM if you want an
>> > > > > > >> author's copy.
>> > > > > > >>
>> > > > > > >> On Wed, Jul 12, 2023 at 11:48:36AM +0800, peng.shaofu@zte.com.cn wrote:
>> > > > > > >> > Hi Jinoo, Toerless,
>> > > > > > >> >
>> > > > > > >> > Also thanks to Toerless for bringing us this interesting
>> > > > > > >> > draft.
>> > > > > > >> >
>> > > > > > >> > For the question Jinoo pointed out, I guess, based on
>> > > > > > >> > the similar analysis of deadline on-time per hop, that
>> > > > > > >> > even if all flows departed from the damper and arrived
>> > > > > > >> > at the queueing subsystem at the same time, each flow
>> > > > > > >> > can still have its worst-case latency, but just consume
>> > > > > > >> > the next round of budget (i.e., the MAX value mentioned
>> > > > > > >> > in the document).
>> > > > > > >> >
>> > > > > > >> > However, consuming the next round of budget means that
>> > > > > > >> > it relies on the downstream node to compensate the
>> > > > > > >> > latency, and may result in a jitter of MAX (i.e., the
>> > > > > > >> > worst-case latency). For this reason, deadline on-time
>> > > > > > >> > per hop is temporarily removed in version-6, waiting for
>> > > > > > >> > a more strict proof and optimization.
>> > > > > > >> >
>> > > > > > >> > Anyway, gLBF can do the same things that deadline
>> > > > > > >> > on-time per hop does. The following intuitive example is
>> > > > > > >> > common to these two solutions.
>> > > > > > >> > Assume that at the last node, all received flows have
>> > > > > > >> > experienced almost 0 queueing delay on the upstream
>> > > > > > >> > nodes. Traffic class-8 has a per-hop worst-case latency
>> > > > > > >> > of 80 us (just an example, similar to a deadline delay
>> > > > > > >> > level), traffic class-7 has 70 us, ... ..., traffic
>> > > > > > >> > class-1 has 10 us. Then, at time T0, traffic class-8
>> > > > > > >> > arrives at the last node; it will be dampened 80us. At
>> > > > > > >> > time T0+10us, traffic class-7 arrives; it will be
>> > > > > > >> > dampened 70us, and so on. At T0+80us, all traffic class
>> > > > > > >> > flows will depart from the damper and be sent to the
>> > > > > > >> > same outgoing port. So, an observed packet may
>> > > > > > >> > experience another round of worst-case latency if other
>> > > > > > >> > higher-priority flows exist, or experience the best-case
>> > > > > > >> > latency (almost 0) if other higher-priority flows do not
>> > > > > > >> > exist. That is, a jitter with the value of the
>> > > > > > >> > worst-case latency still exists.
>> > > > > > >> >
>> > > > > > >> > Regards,
>> > > > > > >> >
>> > > > > > >> > PSF
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > Original
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > From: JinooJoung <jjoung@smu.ac.kr>
>> > > > > > >> > To: Toerless Eckert <tte@cs.fau.de>;
>> > > > > > >> > Cc: detnet@ietf.org <detnet@ietf.org>;
>> > > > > draft-eckert-detnet-glbf@ietf.org
>> > > > > > >>  <draft-eckert-detnet-glbf@ietf.org>;
>> > > > > > >> > Date: 2023-07-09 09:39
>> > > > > > >> > Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > Dear Toerless; thanks for the draft.
>> > > > > > >> >
>> > > > > > >> > gLBF is an interesting approach, similar in concept to
>> > > > > > >> > the Buffered Network (BN) I have introduced in the ADN
>> > > > > > >> > Framework document.
>> > > > > > >> > The difference seems to be that the BN buffers only once
>> > > > > > >> > at the network boundary, while gLBF buffers at every
>> > > > > > >> > node.
>> > > > > > >> > Therefore in the BN a buffer handles only a few flows,
>> > > > > > >> > while in gLBF a buffer needs to face millions of flows.
>> > > > > > >> >
>> > > > > > >> > The implementation complexity should be addressed in a
>> > > > > > >> > future draft, I think.
>> > > > > > >> >
>> > > > > > >> > I have a quick question below.
>> > > > > > >> >
>> > > > > > >> >    +------------------------+        +------------------------+
>> > > > > > >> >    | Node A                 |        | Node B                 |
>> > > > > > >> >    |   +-+   +-+   +-+      |        |   +-+   +-+   +-+      |
>> > > > > > >> >    |-x-|D|-y-|F|---|Q|----z-|--------|-x-|D|-y-|F|---|Q|----z-|
>> > > > > > >> >    |   +-+   +-+   +-+      |  Link  |   +-+   +-+   +-+      |
>> > > > > > >> >    +------------------------+        +------------------------+
>> > > > > > >> >            |<- A/B in-time latency ->|
>> > > > > > >> >            |<--A/B on-time latency ------->|
>> > > > > > >> >
>> > > > > > >> >                Figure 3: Forwarding with Damper and measuring
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > In Figure 3, how can F and Q guarantee the nodal latency
>> > > > > > >> > below MAX?
>> > > > > > >> > Does gLBF provide the same latency bound as that of UBS,
>> > > > > > >> > as is argued?
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > In UBS, an interleaved regulator (IR) works as the
>> > > > > > >> > damper D in gLBF.
>> > > > > > >> > The IR is essentially a FIFO, whose HoQ packet is
>> > > > > > >> > examined and leaves if eligible.
>> > > > > > >> > A packet's eligible time can be earlier than the time at
>> > > > > > >> > which it became the HoQ.
>> > > > > > >> >
>> > > > > > >> > However, in gLBF, a packet has a precise moment at which
>> > > > > > >> > it needs to be forwarded from D.
>> > > > > > >> > (Therefore, UBS is not a generalized gLBF.)
>> > > > > > >>
>> > > > > > >> > In the worst case, all the flows may want to send their
>> > > > > > >> > packets from D to F at the same time.
>> > > > > > >> > If it can be implemented as such, bursts may accumulate,
>> > > > > > >> > and the latency cannot be guaranteed.
>> > > > > > >> > If it cannot be implemented that way, you may introduce
>> > > > > > >> > another type of delay.
>> > > > > > >> >
>> > > > > > >> > Don't you need an additional mechanism for the latency
>> > > > > > >> > guarantee?
>> > > > > > >> >
>> > > > > > >> > Thanks a lot in advance, I support this draft.
>> > > > > > >> >
>> > > > > > >> > Best,
>> > > > > > >> > Jinoo
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> >
>> > > > > > >> > On Sat, Jul 8, 2023 at 12:05 AM Toerless Eckert <tte@cs.fau.de> wrote:
>> > > > > > >> >
>> > > > > > >> > Dear DetNet WG,
>> > > > > > >> >
>> > > > > > >>
>> > > > > > >> >  FYI on a newly posted bounded-latency method/proposal
>> > > > > > >> >  draft that we call gLBF
>> > > > > > >> >  (guaranteed Latency Based Forwarding).
>> > > > > > >> >
>> > > > > > >> >  gLBF, as compared to TCQF and CSQF, is proposed from
>> > > > > > >> >  our side to be a more long-term solution, because it
>> > > > > > >> >  has not been validated with high-speed forwarding
>> > > > > > >> >  hardware and requires new network header information
>> > > > > > >> >  for the damper value, whereas TCQF/CSQF of course can
>> > > > > > >> >  operate without new headers, have proven high-speed
>> > > > > > >> >  implementation PoCs, and are therefore really
>> > > > > > >> >  ready for adoption now.
>> > > > > > >> >
>> > > > > > >>
>> > > > > > >> >  gLBF is a specific variant of the damper idea that is
>> > > > > > >> >  meant to be compatible with the TSN-ATS latency
>> > > > > > >> >  calculus, so that it can use the same
>> > > > > > >> >  controller-plane/path-computation algorithms and
>> > > > > > >> >  implementations one would use for TSN-ATS. It also
>> > > > > > >> >  allows eliminating the need for hop-by-hop clock
>> > > > > > >> >  synchronization and (we hope) should be well
>> > > > > > >> >  implementable in high-speed hardware.
>> > > > > > >> >
>> > > > > > >> >  Any feedback welcome.
>> > > > > > >> >
>> > > > > > >> >  Cheers
>> > > > > > >> >      Toerless
>> > > > > > >> >
>> > > > > > >> >  In-Reply-To: <168874067601.53296.4506535864118204933@ietfa.amsl.com>
>> > > > > > >> >
>> > > > > > >> >  On Fri, Jul 07, 2023 at 07:37:56AM -0700, internet-drafts@ietf.org wrote:
>> > > > > > >> >  >
>> > > > > > >> >  > A new version of I-D, draft-eckert-detnet-glbf-01.txt
>> > > > > > >> >  > has been successfully submitted by Toerless Eckert
>> > > > > > >> >  > and posted to the IETF repository.
>> > > > > > >> >  >
>> > > > > > >> >  > Name:         draft-eckert-detnet-glbf
>> > > > > > >> >  > Revision:     01
>> > > > > > >>
>> > > > > > >> >  > Title: Deterministic Networking (DetNet) Data Plane -
>> > > > > > >> >  > guaranteed Latency Based Forwarding (gLBF) for
>> > > > > > >> >  > bounded latency with low jitter and asynchronous
>> > > > > > >> >  > forwarding in Deterministic Networks
>> > > > > > >> >  > Document date:        2023-07-07
>> > > > > > >> >  > Group:                Individual Submission
>> > > > > > >> >  > Pages:                39
>> > > > > > >> >  > URL:      https://www.ietf.org/archive/id/draft-eckert-detnet-glbf-01.txt
>> > > > > > >> >  > Status:   https://datatracker.ietf.org/doc/draft-eckert-detnet-glbf/
>> > > > > > >> >  > Htmlized: https://datatracker.ietf.org/doc/html/draft-eckert-detnet-glbf
>> > > > > > >> >  > Diff:     https://author-tools.ietf.org/iddiff?url2=draft-eckert-detnet-glbf-01
>> > > > > > >> >  >
>> > > > > > >> >  > Abstract:
>> > > > > > >> >  >    This memo proposes a mechanism called "guaranteed
>> > > > > > >> >  >    Latency Based Forwarding" (gLBF) as part of DetNet
>> > > > > > >> >  >    for hop-by-hop packet forwarding with per-hop
>> > > > > > >> >  >    deterministically bounded latency and minimal
>> > > > > > >> >  >    jitter.
>> > > > > > >> >  >
>> > > > > > >> >  >    gLBF is intended to be useful across a wide range
>> > > > > > >> >  >    of networks and applications with a need for
>> > > > > > >> >  >    high-precision deterministic networking services,
>> > > > > > >> >  >    including in-car networks or networks used for
>> > > > > > >> >  >    industrial automation on factory floors, all the
>> > > > > > >> >  >    way to ++100Gbps country-wide networks.
>> > > > > > >> >  >
>> > > > > > >> >  >    Contrary to other mechanisms, gLBF does not
>> > > > > > >> >  >    require network-wide clock synchronization, nor
>> > > > > > >> >  >    does it need to maintain per-flow state at network
>> > > > > > >> >  >    nodes, avoiding drawbacks of other known methods
>> > > > > > >> >  >    while leveraging their advantages.
>> > > > > > >> >  >
>> > > > > > >> >  >    Specifically, gLBF uses the queuing model and
>> > > > > > >> >  >    calculus of Urgency Based Scheduling (UBS, [UBS]),
>> > > > > > >> >  >    which is used by TSN Asynchronous Traffic Shaping
>> > > > > > >> >  >    [TSN-ATS]. gLBF is intended to be a plug-in
>> > > > > > >> >  >    replacement for TSN-ATS or a parallel mechanism
>> > > > > > >> >  >    beside TSN-ATS, because it allows keeping the same
>> > > > > > >> >  >    controller-plane design which is selecting paths
>> > > > > > >> >  >    for TSN-ATS, sizing TSN-ATS queues, calculating
>> > > > > > >> >  >    latencies and admitting flows to calculated paths
>> > > > > > >> >  >    for calculated latencies.
>> > > > > > >> >  >
>> > > > > > >>
>> > > > > > >> >  >    In addition to reducing the jitter compared to
>> > > > > > >> >  >    TSN-ATS by additional buffering (dampening) in the
>> > > > > > >> >  >    network, gLBF also eliminates the need for
>> > > > > > >> >  >    per-flow, per-hop state maintenance required by
>> > > > > > >> >  >    TSN-ATS. This avoids the need to signal per-flow
>> > > > > > >> >  >    state to every hop from the controller-plane and
>> > > > > > >> >  >    associated scaling problems. It also reduces
>> > > > > > >> >  >    implementation cost for high-speed networking
>> > > > > > >> >  >    hardware due to the avoidance of additional
>> > > > > > >> >  >    high-speed read/write memory access to retrieve,
>> > > > > > >> >  >    process and update per-flow state variables for a
>> > > > > > >> >  >    large number of flows.
>> > > > > > >> >  >
>> > > > > > >>
>> > > > > > >> >  >
>> > > > > > >> >  >
>> > > > > > >> >  >
>> > > > > > >> >  > The IETF Secretariat
>> > > > > > >> >
>> > > > > > >>
>> > > > > > >>
>> > > > > > >>
>> > > > > > >> --
>> > > > > > >> ---
>> > > > > > >> tte@cs.fau.de
>> > > > > > >>
>> > > > > > >>
>> > > > > > >>
>> > > > > > >>
>> > > > > > >
>> > > > >
>> > > > > --
>> > > > > ---
>> > > > > tte@cs.fau.de
>> > > > >
>> > >
>> > > --
>> > > ---
>> > > tte@cs.fau.de
>> > >
>>
>> --
>> ---
>> tte@cs.fau.de
>>
>
>