Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt

"Andrea Francini (Nokia)" <andrea.francini@nokia-bell-labs.com> Sun, 24 September 2023 16:08 UTC

From: "Andrea Francini (Nokia)" <andrea.francini@nokia-bell-labs.com>
To: Jinoo Joung <jjoung@smu.ac.kr>, Toerless Eckert <tte@cs.fau.de>
CC: "peng.shaofu@zte.com.cn" <peng.shaofu@zte.com.cn>, "detnet@ietf.org" <detnet@ietf.org>, "draft-eckert-detnet-glbf@ietf.org" <draft-eckert-detnet-glbf@ietf.org>
Date: Sun, 24 Sep 2023 16:07:58 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/detnet/xDfbvOJqOWRLdTlkj-CQEUlTaDM>
Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt

Hi Jinoo,

As I understand it, C-SCORE can be seen as a network-wide instantiation of the Virtual Clock scheduler by Lixia Zhang.

When realized in a single scheduling node, Virtual Clock is a Latency-Rate server and also a Rate-Proportional Server. Is it possible to provide a formal demonstration that C-SCORE has the same end-to-end latency properties as a network of Virtual Clock servers (or, more generally, of Rate-Proportional Servers), possibly within a finite margin of error?

One basic element of concern from the draft that describes the scheme is that the real-time clocks of the ingress nodes are not required to be synchronized. Single-node Virtual Clock works (for latency enforcement only; its service fairness index was unbounded, with the negative implementation effect I bring up below) because all packets are timestamped against a common time reference. This generally cannot be assumed for C-SCORE, where the local clocks of the ingress nodes can drift apart from one another. Why not require at least a periodic re-sync of the ingress-node clocks?

Also, have you looked into a detailed specification of the range of the time representation in the timestamp (finishing-potential) field carried by the packets? A major issue with the implementation of Virtual Clock in a single node was that the timestamp of a flow could grow much larger than the system potential (the real time) if that flow was the only one supplying back-to-back packets for an extended period. The time representation allowed by a finite number of bits could then wrap around, so that the timestamps of packets from different flows were no longer meaningfully comparable. It seems the same could happen with C-SCORE.
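
To make the wraparound concern concrete, here is a minimal Python sketch (mine, purely illustrative; the field width and names are assumptions, not from the C-SCORE draft) of a finite finishing-potential field compared with RFC 1982-style serial-number arithmetic, which stays meaningful only while competing timestamps remain within half the representable range of each other:

    # Hypothetical 16-bit finishing-potential field (illustrative only).
    BITS = 16
    MOD = 1 << BITS                 # 65536 representable ticks

    def later(a, b):
        # RFC 1982-style comparison: a is "after" b only while the two
        # values stay within half the range of each other.
        return a != b and (a - b) % MOD < MOD // 2

    slow = 1000                     # finishing potential of an idle flow
    fast = (slow + 20000) % MOD     # busy flow, 20000 ticks ahead
    print(later(fast, slow))        # True: still ordered correctly
    fast = (slow + 40000) % MOD     # now ahead by more than MOD/2
    print(later(fast, slow))        # False: the ordering has broken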

Best regards,

Andrea

From: detnet <detnet-bounces@ietf.org> On Behalf Of Jinoo Joung
Sent: Friday, September 22, 2023 7:19 PM
To: Toerless Eckert <tte@cs.fau.de>
Cc: peng.shaofu@zte.com.cn; detnet@ietf.org; draft-eckert-detnet-glbf@ietf.org
Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt




Toerless,

Your argument to me and Shaofu is summarized in the following statement.

"The latency calculusfor C-SCORE is based on the assumption of a TSpec limit
for each flow. To me it seems clear that this calcullus does not hold
true when the TSpec increases across hops."

Another way to put it would be:
The increased burst size of a flow will harm the latency bound calculated based on the initial burst size of the flow.

My answer is:
The increased burst size of a flow does NOT harm the latency bound calculated based on the initial burst size of the flow.

Let's say a flow has TSpec rt+b.
The worst case is that the actual arrival exactly follows rt+b;
that is, at t=0 a burst b arrives, and then packets arrive one after another at rate r.

CLAIM: The largest E2E latency is that of the LAST packet of the initial burst b. Let's call this packet p(b).
Proof:
1) Any packet that arrives before p(b) departs the network before p(b), so the CLAIM holds.
2) Consider a packet that arrives later than p(b). Let's call it p(b)+n.
For simplicity of discussion, let's assume that every packet's length is 1.
Assume that p(b)+n joins the burst b at some point in the network.
Then, by the definition of the burst, at that point the arrival time difference between p(b) and p(b)+n is larger than or equal to n/r.
Assume that the burst to which these two packets belong is resolved at another point in the network.
Between the join point and the resolve point, they travel together.
Then the departure time difference is less than or equal to n/r, since the flow is served at a rate larger than or equal to r.
Therefore the latency difference, which is the departure time difference minus the arrival time difference,
is less than or equal to zero.
The CLAIM holds.

So the CLAIM holds in both cases.
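
A quick numerical sketch of the CLAIM (my own simplification, not part of Jinoo's argument): unit-length packets, a burst of b packets at t=0, then one packet every 1/r, pushed through a chain of FIFO hops that each serve the flow at a rate slightly above r. The worst end-to-end latency lands on packet index b-1, the last packet of the initial burst:

    r, b, hops, n_pkts = 1.0, 5, 4, 20
    arrivals = [0.0] * b + [(k + 1) / r for k in range(n_pkts - b)]

    times = arrivals[:]                  # per-packet departure times
    for _ in range(hops):
        prev, out = float("-inf"), []
        for t in times:
            prev = max(t, prev) + 1.0 / (1.25 * r)   # served at 1.25*r >= r
            out.append(prev)
        times = out

    lat = [d - a for d, a in zip(times, arrivals)]
    print(lat.index(max(lat)))           # b - 1: last packet of the burst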

I hope this helps.
The burst accumulation effect has been well known for decades.
Do you really believe that network calculus theory has not been aware of it?

Best,
Jinoo


On Sat, Sep 23, 2023 at 12:39 AM Toerless Eckert <tte@cs.fau.de> wrote:
(eliminating many lines of empty space - your mailer is curious that way...)

On Fri, Sep 22, 2023 at 04:36:16PM +0800, peng.shaofu@zte.com.cn wrote:
> Hi Toerless,
>
> Your previous mail gave a good example of how burst accumulation of a single
> flow arises from burst aggregation with other flows. Here I apologize for
> inventing a new term, "burst aggregation", to distinguish it from "burst
> accumulation": the former concerns multiple flows, while the latter concerns
> multiple service-burst intervals of a single flow.
>
> Yes, burst accumulation may cause TSpec distortion of the observed flow. I
> believe we agree on this.
>
> Where our views differ is in how well different scheduling schemes mitigate
> burst accumulation.
>
> If I get your point, you think that re-shaping (including IR), or a Damper,
> must be introduced to eliminate burst accumulation.
>
> IMO, IR/Damper is just one option (note that I take the Damper as a design
> similar to IR, e.g. per "traffic class + incoming port", except with a
> different calculation of the eligibility time of packets); other options may
> be time-rank based, such as the latency compensation in Deadline or the
> virtual finish time in C-SCORE.

Let me repeat what I just replied to Jinoo:

None of the scheduling mechanisms considered for rfc2212 treats a flow's adjacent packets
differently just because they are clustered closer together than they should be (because of
prior hops' burst aggregation/accumulation).

Shaper/IR and Damper rectify this problem directly, albeit differently - resulting
in an in-time/work-conserving vs. on-time/zero-jitter service experience, and
in a difference in whether we have per-hop state/processing scaling issues.

> We can simply understand the difference between the IR/Damper option and the
> time-rank option as below:
>
> In the IR/Damper option, an early-arrived packet (note: compared to the
> previously arrived packet) is delayed in the shaping buffer until its
> eligibility time; before that, it is REFUSED entry into the queueing
> sub-system.

Compared to the previously arrived packet ... of the same flow ... only in the
shaper/IR, not the damper. The damper delays all packets up to the
quasi-synchronous latency.

> In the time-rank option, an early-arrived packet is TOLERATED entry into the
> queueing sub-system, but with a large rank (or low priority), so as not to
> affect the scheduling urgency of eligible arrivals (which thus get a bounded
> latency).
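
The two eligibility rules being contrasted can be sketched as follows (my own simplification; the parameter names are illustrative and not taken from any draft):

    def shaper_eligibility(arrival, prev_eligible, pkt_len, flow_rate):
        # Shaper/IR: a packet of the SAME flow becomes eligible only once
        # the flow's TSpec spacing from its previous packet is restored.
        # Requires per-flow state (prev_eligible).
        return max(arrival, prev_eligible + pkt_len / flow_rate)

    def damper_eligibility(arrival, upstream_delay, d_max):
        # Damper: every packet, regardless of flow, is held until it has
        # accumulated the full worst-case latency d_max of the upstream
        # hop. Needs only the per-packet upstream_delay (e.g. carried in
        # the packet), hence no per-flow state.
        return arrival + (d_max - upstream_delay)

This also prefigures the state discussion below: the shaper rule is inherently per-flow stateful, while the damper rule can be per-hop stateless.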

I remember discussions with Yaakov Stein re. draft-stein-srtsn, which is AFAIK also an
instance of such time-rank forwarding with deadlines; he pointed out in later
parts of the discussion that his (students') research also showed that the mechanism
is a heuristic bounded-latency solution, but not a deterministic one.

> That is, even in the case of burst accumulation, a well-designed scheduler can still achieve a
> bounded latency.

And I fundamentally disagree, because I cannot find a principle in these mechanisms
that correctly restores a per-flow TSpec increase. To elaborate in more detail on my
4 * 100 Mbps flows via a Gbps link: WFQ would not restore a 1 Mbps time difference between
distorted packets p-1 and p of my example flow; it would only limit the burst to be
equal to or less than that of a 2.5 Mbps flow in the best case, because this is effectively what packet
bursts of 400 * 1 Mbps across a 1 Gbps link do get with WFQ. But if the distorted p-1 and
p packets happen to get into a burst of packets from fewer than 400 flows, then each of p-1 and
p would get an even higher share of the available 1 Gbps bandwidth and thus their distortion
would stay higher. When there is no contention on some following hop, their distortion would
not be eliminated at all.
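
A back-of-the-envelope check of the 2.5 Mbps figure above (my arithmetic, for the 400-flow best case named in the paragraph):

    link = 1e9                  # 1 Gbps
    n_backlogged = 400          # 400 * 1 Mbps flows simultaneously backlogged
    print(link / n_backlogged)  # 2.5e6: a 2.5 Mbps WFQ share per flow
    # With fewer backlogged flows each share is larger, so the compressed
    # spacing between p-1 and p is restored even less.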

> However, I agree with you that the amount of burst accumulation still differs
> between in-time and on-time mode, with different buffer cost.

My unfortunate feeling is that there is no way to build an in-time/work-conserving
bounded-latency solution that is per-hop stateless, because I need a shaper/IR for
in-time/work-conserving behavior, and I cannot build an IR/shaper per-hop stateless.

Note that our LBF work (https://ieeexplore.ieee.org/document/9110431) equally attempted to
provide such deadline- and departure-priority-calculation-based latency management - in-time.
But we also failed to figure out how to provide a bounded-latency calculus, because of this
TSpec distortion issue. Which is why we then went to the gLBF approach.

And of course it is extremely annoying, because as soon as you build such
an advanced scheduler (PIFO, per-packet calculation of departure priority/time), you do
of course minimize the effect (like WFQ, C-SCORE and LBF do), but you don't eliminate it.

Aka: close, but I fear no deterministic cigar.

And btw: I'd be happy to be proven wrong, but the per-hop math/calculus that is used in rfc2212
and elsewhere is all based on the premise of a per-hop known and unchanged TSpec for the flows.
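
As a toy illustration of the distortion mechanism itself (mine, not from the thread; transmission times are ignored for brevity): an interfering burst delays p-1 at a merge point but not p, so the two packets leave closer together than the flow's TSpec allows:

    flow_rate = 1e6                     # 1 Mbps reserved rate
    pkt_bits = 1250 * 8
    tspec_gap = pkt_bits / flow_rate    # 0.010 s conformant inter-packet gap

    a_prev, a_next = 0.0, tspec_gap     # conformant arrivals of p-1 and p
    d_prev = a_prev + 0.004             # p-1 queues 4 ms behind an alien burst
    d_next = a_next                     # p finds the queue empty again

    print(d_next - d_prev)              # 0.006 s < 0.010 s: the flow now
                                        # exceeds its TSpec at the next hop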

Cheers
    Toerless
>
> Regards,
>
> PSF
>
> Original
> From: Toerless Eckert <tte@cs.fau.de>
> To: 彭少富10053815
> Cc: jjoung@smu.ac.kr; detnet@ietf.org; draft-eckert-detnet-glbf@ietf.org
> Date: 2023-09-22 01:15
> Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
>
>
> Peng,
>
> I am always trying to escape having to get into the depths of the math, and instead
> trying to find as simple an example as possible. And the text from rfc2212 may be somewhat
> misleading when it talks about classes.
>
> The simple reasons why I think that WFQ does not help the math over FIFO are these:
>
> - I can have all flows with the same r_i and b_i, and then all flows are treated equally.
> - I can make all bursts single packets, in which case the queue build-up is just
>   one packet per flow.
>
> In these cases, I think WFQ will not treat packets differently from FIFO. Correct
> me if I am wrong. And the problem of TSpec distortion still exists.
>
> Aka: WFQ (and by extrapolation C-SCORE) would likely have some benefit in reducing the
> TSpec distortion if b_i is multiple packets, because then a back-to-back
> burst of packets from one flow would be broken up; but if the interface serving
> rate is still significantly higher than sum(r_i), then it will still continue
> to pass bursts hop-by-hop that lead to TSpec distortion (IMHO).
>
> Cheers
>     Toerless
>
>
> On Thu, Sep 21, 2023 at 12:02:01PM +0800, peng.shaofu@zte.com.cn wrote:
> > Hi Jinoo,
> >
> > Thanks for your explanation.
> >
> > I agree with you that for a single observed flow, especially with an ideal fluid
> > model, each scheduler in the network can provide a guaranteed service for it,
> > even in the case of flow aggregation.
> >
> > For example, multiple flows, each with arrival curve A_i(t) = b_i + r_i * t, may
> > belong to the same traffic class and consume the resources (burst and bandwidth)
> > of the same out-interface on an intermediate node. Suppose that the scheduler
> > provides a rate-latency service curve R*(t - T) for that traffic class, where
> > R >= sum(r_i). Then, if each flow arrives ideally, i.e., complies with its own
> > arrival curve, the observed flow is ensured a guaranteed service rate R' out of
> > the total service rate R.
> >
> > Suppose that on each node the guaranteed service curve for the observed flow
> > (e.g., flow 1) is R'*(t - T'). Then, according to the "Pay Bursts Only Once"
> > rule, the E2E latency may be: T' * hops + b_1/R'. It seems that the E2E latency
> > considers the burst of flow 1 (i.e., b_1) only once, and never considers other
> > flows.
> >
> > However, the truth is hidden in T'.
> >
> > According to the section 6 "Aggregate Scheduling" discussion in the netcal book,
> > the guaranteed service curve for flow 1, i.e., R'*(t - T'), may be deduced by:
> >
> >     R*(t - T) - sum(b_2 + r_2*t, b_3 + r_3*t, ..., b_n + r_n*t)
> >   = (R - r_2 - r_3 - ... - r_n) * (t - ((b_2 + b_3 + ... + b_n + R*T) / (R - r_2 - r_3 - ... - r_n)))
> >
> > Thus, R' equals (R - r_2 - r_3 - ... - r_n),
> > and T' equals ((b_2 + b_3 + ... + b_n + R*T) / (R - r_2 - r_3 - ... - r_n)).
> > It can be seen that the bursts of all other flows contribute to T', on each node,
> > again and again.
> >
> > If we take each b_i as 1/n of B, each r_i as 1/n of R, and T as 0, to simply
> > compare the above PBOO-based latency estimation with the traffic-class-based
> > latency estimation, we find that the former is b_1/r_1 + hops*(n-1)*b_1/r_1,
> > while the latter is b_1/r_1. It is remarkable that the former is about n times
> > the latter (already at one hop).
> >
> > Please correct me if I misunderstand.
> >
> > Regards,
> > PSF
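
A quick numeric check of the "about n times" comparison above (my arithmetic, with made-up sample numbers; the general ratio is 1 + hops*(n-1), i.e. exactly n for a single hop):

    n, B, R, hops = 10, 10000.0, 1e6, 1
    b_i, r_i = B / n, R / n               # n equal flows, T = 0
    R1 = R - (n - 1) * r_i                # leftover rate R' (= r_i here)
    T1 = (n - 1) * b_i / R1               # latency term T'
    pboo = hops * T1 + b_i / R1           # PBOO-based estimate
    per_class = b_i / r_i                 # traffic-class-based estimate
    print(pboo / per_class)               # 10.0, i.e. n; grows with hop count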
> >
> > Original
> > From: Jinoo Joung <jjoung@smu.ac.kr>
> > To: 彭少富10053815
> > Cc: Toerless Eckert <tte@cs.fau.de>; DetNet WG <detnet@ietf.org>; draft-eckert-detnet-glbf@ietf.org
> > Date: 2023-09-19 16:13
> > Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
> >
> > Hello Shaofu.
> >
> > Thanks for the most relevant question, which, as you have said, is related to
> > Toerless' last question regarding "reshaping on merge points".
> >
> > If I may repeat Toerless' question below:
> >
> > "If you are relying on the same math as what rfc2212 claims, just replacing
> > stateful WFQ with stateless, then it seems to me that you would equally need
> > the shapers demanded by rfc2212 on merge-points. I do not see them in C-SCORE."
> >
> > And your question is:
> >
> > ""Pay Bursts Only Once" may only be applied in the case that the network provides
> > a dedicated service rate to a flow. Our network naturally aggregates flows at every
> > node, therefore does not dedicate a service rate to a flow, and PBOO does not apply."
> >
> > My answer is:
> >
> > "Pay Bursts Only Once" applies whenever the network provides a GUARANTEED
> > service rate to a flow.
> >
> > Fair queuing, C-SCORE, or even a FIFO scheduler can guarantee a service rate to a flow.
> >
> > As long as a flow, as a single entity, is guaranteed a service rate, it is not
> > considered aggregated or merged. Therefore reshaping is not necessary, and PBOO holds.
> >
> > Below is my long answer. Take a look if you'd like.
> >
> > Fair queuing, Deficit Round Robin, or even a FIFO scheduler guarantees a service
> > rate to a flow, if the total arrival rate is less than the link capacity.
> > The only caveat is that FIFO can only guarantee a service rate equal to the
> > arrival rate, while FQ and DRR can adjust the service rate to be larger than
> > the arrival rate. If such rate-guaranteeing schedulers are placed in a network,
> > then a flow is guaranteed to be served at a certain service rate, and is not
> > considered "aggregated" at the intermediate nodes.
> >
> > RFC2212, pages 9-10, in the subsection "Policing", states the following:
> >
> > "Reshaping is done at all heterogeneous source branch points and at all source
> > merge points."
> >
> > "Reshaping need only be done if ..."
> >
> > "A heterogeneous source branch point is a spot where the multicast distribution
> > tree from a source branches to multiple distinct paths, and the TSpec's of the
> > reservations on the various outgoing links are not all the same."
> >
> > "A source merge point is where the distribution paths or trees from two
> > different sources (sharing the same reservation) merge."
> >
> > In short, RFC2212 states that reshaping CAN be necessary at flow aggregation
> > and deaggregation points.
> >
> > Flow aggregation and deaggregation usually happen at network boundaries, between
> > networks, etc., with careful planning. Flow multiplexing into a FIFO is not
> > considered an aggregation.
> >
> > Best,
> >
> > Jinoo
> >
> > On Tue, Sep 19, 2023 at 10:57 AM <peng.shaofu@zte.com.cn> wrote:
> >
> > Hi Jinoo, Toerless,
> >
> > Sorry to interrupt your discussion.
> >
> > According to the NetCal book (https://leboudec.github.io/netcal/latex/netCalBook.pdf),
> > "Pay Bursts Only Once" may only be applied in the case that the network provides
> > a dedicated service rate (perhaps protected by a dedicated queue, or even a
> > dedicated sub-link) for the observed flow, such as the guaranteed service
> > defined in RFC2212. In brief, there are no other flows sharing the service rate
> > with the observed flow. That is, there is no fan-in, no so-called "competing
> > flows belonging to the same traffic class at the intermediate node".
> >
> > Traffic class is often used to identify flow aggregation, which is not a case
> > where "Pay Bursts Only Once" may be applied. It seems that in an IP/MPLS network,
> > flow aggregation is natural. The picture Toerless showed in the previous mail is
> > exactly related to flow aggregation, i.e., the observed flow may be interfered
> > with by some competing flows belonging to the same traffic class at node A, and
> > again interfered with by other competing flows belonging to the same traffic
> > class at node B, separately.
> >
> > Please correct me if I misunderstood.
> >
> > Regards,
> > PSF
> >
> > Toerless,
> > It seems that you argue two things in the last email.
> >
> > 1) Your first argument: The E2E latency is the sum of per-hop latencies.
> >
> > You are half-right.
> > According to RFC2212, page 3, the generic expression for the E2E latency bound of a flow is:
> >
> > [(b-M)/R * (p-R)/(p-r)] + (M+Ctot)/R + Dtot.    (1)
> >
> > Let's call this expression (1).
> >
> > Here, b is the max burst, M the max packet length, R the service rate, p the peak rate, r the arrival rate;
> > and Ctot and Dtot are the sums of C and D, the so-called "error terms", over the hops.
> > Thus, the E2E latency bound can be a linear function of the hop count,
> > since Ctot and Dtot are functions of the hop count.
> > However, the first term [(b-M)/R * (p-R)/(p-r)], which includes b, is not.
> > So you can see the E2E latency is NOT just the sum of per-hop latencies.
> >
> > 2) Your second argument: C-SCORE cannot be free from burst accumulation and other flows' bursts.
> > My short answer: It IS free from burst accumulation and other flows' bursts.
> >
> > Imagine an ideal system, in which your flow is completely isolated.
> > It is ALONE in the network, whose link has the rate R in every hop.
> > No other flows at all.
> >
> > Assume the flow's arrival process is indeed rt+b. At time 0, b arrives instantly.
> > (This is the worst arrival, the one that produces the worst latency.)
> > Then your flow experiences the worst latency b/R at the first node,
> > and M/R (the transmission delay of a packet) at the subsequent nodes.
> >
> > I.e. (b-M)/R + H*M/R, where H is the hop count.
> >
> > This is a special case of (1), where R is the same as r, C is M, and D is zero.
> >
> >
> > Fair queuing and C-SCORE are the best approximations of such an ideal system, with D equal to Lmax/LC,
> > where Lmax and LC are the max packet length on the link and the capacity of the link, respectively.
> > Therefore the E2E latency bound of C-SCORE is
> >
> > (b-M)/R + H*(M/R + Lmax/LC),
> >
> > which is again another special case of (1).
> > Note that b is divided by R, not LC.
> >
> > It is well known that a FIFO scheduler's D (error term) is a function of the sum of the other flows' bursts.
> > FIFO E2E latency bounds are approximately
> >
> > (b-M)/R + H*[M/R + (Sum of Bursts)/LC].
> >
> > ATS or UBS also relies on FIFO, but the bursts are suppressed to their initial values,
> > and therefore it enjoys a much better E2E latency expression.
> >
> > Hope this helps.
> >
> > Best,
> > Jinoo
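
For reference, expression (1) evaluates mechanically; a sketch with made-up sample numbers (per RFC 2212, the first term applies only while p > R, and drops out when p <= R):

    def e2e_bound(b, M, R, p, r, Ctot, Dtot):
        first = (b - M) / R * (p - R) / (p - r) if p > R else 0.0
        return first + (M + Ctot) / R + Dtot

    # Sample numbers (mine): 100 kbit burst, 12 kbit packets, R = 50 Mbps,
    # p = 100 Mbps, r = 10 Mbps, 5 hops with C = M and D = 1 ms per hop.
    H, M = 5, 12e3
    print(e2e_bound(b=100e3, M=M, R=50e6, p=100e6, r=10e6,
                    Ctot=H * M, Dtot=H * 1e-3))   # ~0.0074 s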
> >
> >
> > On Sun, Sep 17, 2023 at 1:42 AM Toerless Eckert <tte@cs.fau.de> wrote:
> >
> > On Thu, Sep 14, 2023 at 08:08:24PM +0900, Jinoo Joung wrote:
> >  > Toerless, thanks for the reply.
> >  > In short:
> >  >
> >  > C-SCORE's E2E latency bound is NOT affected by the sum of bursts, but only
> >  > by the flow's own burst.
> >
> >                      +----+
> >            If1   --->|    |
> >            ...       |  R |x--->
> >            if100 --->|    |
> >                      +----+
> >
> >  If you have a router with, for example, 100 input interfaces, all sending packets
> >  to the same output interface, it will have uncorrelated flows arriving
> >  from the different interfaces, and each interface can have a packet
> >  arriving at the same time at R.
> >
> >  The basic calculus of UBS is simply the most simple and hence conservative one,
> >  assuming all flows' packets can arrive from different interfaces without
> >  rate limits. But of course you can do the latency calculations in a tighter
> >  fashion for UBS. Would be interesting to learn if IEEE for TSN-ATS (Qcr) was
> >  looking into any of this. E.g.: applying line shaping for the aggregate of
> >  flows from the same input interface and incorporating the service curve of
> >  the node.
> >
> >  In any case, whatever tighter latency calculus you do for Qcr, it
> >  is equally applicable to gLBF.
> >
> >  > It is bounded by (B-L)/r + H(L/r + Lmax/R), where B is the flow's max burst
> >  > size.
> >
> >  See the above picture. How could the packet in question not also suffer the
> >  latency introduced by the other 99 packets randomly arriving at almost
> >  the same time, just a tad earlier?
> >
> >  > You can also see that B appears only once, and not multiplied by the hop
> >  > count H.  So the burst is paid only once.
> >
> >  C-SCORE is a stateless fair queuing (FQ) mechanism, but making FQ stateless
> >  does not change the fact that FQ itself does not eliminate the
> >  burst accumulation at merge points such as shown in the picture. rfc2212,
> >  which for example also recommends FQ, independently mandates the use of
> >  reshaping.
> >
> >  > Please see inline marked with JJ.
> >
> >  yes, more inline.
> >
> >  > On Thu, Sep 14, 2023 at 3:34 AM Toerless Eckert <tte@cs.fau.de<mailto:tte@cs.fau.de>> wrote:
> >  [cutting off dead leafs]
> >
> >  > > I was only talking about the E2E latency bound guaranteeable by DetNet
> >  > > Admission Control (AC).
> >  >
> >  > JJ: Admission control has two aspects.
> >  > First, guaranteeing the total arrival rate (or allocated service rate) does
> >  > not exceed the link capacity.
> >  > Second, assuring a service level agreement (SLA), or a requested latency
> >  > bound (RSpec) to a flow.
> >  > By "E2E latency bound guaranteeable by DetNet Admission Control (AC)",
> >  > I think you mean that negotiating the SLA first, and then, based on the
> >  > negotiated latency, allocating per-hop latency to a flow.
> >  > I suggest this per-hop latency allocation, and enforcing that value in a
> >  > node, is not a good idea,
> >  > since it can harm the advantage of "pay burst only once (PBOO)" property.
> >
> >  It's the only way AFAIK to achieve scalability in the number of hops and number of
> >  flows: being able to calculate latency as a linear composition of per-hop
> >  latencies. This is what rfc2212 does, this is what Qcr does, *CQF, gLBF and so on.
> >
> >  > PBOO can be interpreted roughly as: If your burst is resolved at a point in
> >  > a network, then it is resolved and does not bother you anymore.
> >  > However, in the process of resolving, the delay is inevitable.
> >
> >  Remember that the problematic burstiness is an increase in the intra-flow
> >  burstiness resulting from merge-point-induced unexpected latency
> >  to p-1 of the flow followed by no such delay for packet p of the same flow,
> >  hence clustering p-1 and p closer together and making the flow exceed its
> >  reserved burst size.
> >
> >  This cannot be compensated for in a work-conserving way by simply capturing
> >  the per-hop latency of each packet p-1 and p alone; it requires a shaper
> >  (or IR) to get rid of.
> >
> >  > JJ: Because you don't know exactly where the burst is resolved,
> >  > when you calculate the per-node latency bound, the latency due to the burst
> >  > has to be added as a portion of the latency bound.
> >  > Thus the sum of per-node bounds is much larger than the E2E latency bound
> >  > calculated by seeing the network as a whole.
> >
> >  Consider a flow passing through 10 hops. On every hop, you potentially have a
> >  merge point with new traffic flows and new burst collisions. All that we do
> >  in the simple UBS/Qcr calculus is to take the worst case into account,
> >  where the worst case may not even be admitted now, but could be admitted
> >  in the future, and at that point in time you do not want to go back and
> >  change the latency guarantee for your already admitted flow.
> >
> >          Src2  Src3  Src4  Src5  Src6  Src7  Src8   Src9 Src10
> >           |     |     |     |     |     |     |     |     |
> >           v     v     v     v     v     v     v     v     v
> >   Src1 -> R1 -> R2 -> R3 -> R4 -> R5 -> R6 -> R7 -> R8 -> R9 -> R10 -> Rcv1
> >                  |     |     |     |     |     |     |     |     |
> >                  v     v     v     v     v     v     v     v     v
> >                 Rcv2  Rcv3  Rcv4  Rcv5  Rcv6  Rcv7  Rcv8  Rcv9  Rcv10
> >
> >  Above is an example where Flow 1 from Src1 to Rcv1 will experience such a
> >  merge-point burst accumulation issue on every hop - worst case. And as mentioned
> >  before, yes, when you use the simple calculus, you are also overestimating
> >  the per-hop latency for flows that e.g. all run in parallel to Src1, but
> >  that is just a matter of using stricter network calculus. And because
> >  Network Calculus is complex, and I didn't want to start becoming an expert in
> >  it, I simply built the stateless solution in a way where I can reuse a
> >  pre-existing, proven and used-in-standards (Qcr) queuing model and calculus.
> >
> >
> >  > JJ: If you enforce resolving it at the network entrance by a strict
> >  > regulation, then you may end up resolving when it is not actually needed.
> >  > However, this approach is feasible. I will think more about it.
> >
> >  I think the flow-interleaving is a much more fundamental issue for higher
> >  utilization with a large number of flows, all with low bitrates. Look at
> >  the examples of draft-eckert-detnet-flow-interleaving, and tell me
> >  how else but with time-division multiplexing one would be able to solve this.
> >  Forget the complex option where flows from different ingress routers to
> >  different egress routers are involved. Just take the most simple problem:
> >  one ingress router PE1, maybe 10 hops through the network to a PE2, and 10,000
> >  acyclic flows going to the same egress router PE2.
> >
> >  Seems to me quite obvious that you can just as well resolve the burst at ingress
> >  PE1 instead of hoping, and getting complex math, by trying to do this
> >  on further hops along the path.
> >
> >  Cheers
> >      Toerless
> >
> >  > >
> >  > > > It can be a function of many factors, such as number of flows, their
> >  > > > service rates, their max bursts, etc.
> >  > > > The sum of service rates is a deciding factor of utilization.
> >  > >
> >  > > Given how queuing latency always occurs from collisions of packets in
> >  > > buffers, the sum of burst sizes is a much bigger problem for DetNet than
> >  > > the service rates. But this is a side discussion.
> >  > >
> >  >
> >  > JJ: That is a good point. So we should avoid queuing schemes whose latency
> >  > bounds are affected by the sum of bursts.
> >  > Fortunately, C-SCORE's E2E latency bound is NOT affected by the sum of
> >  > bursts, but only by the flow's own burst.
> >  > It is bounded by (B-L)/r + H(L/r + Lmax/R), where B is the flow's max burst
> >  > size.
> >  > You can also see that B appears only once, and not multiplied by the hop
> >  > count H.
> >  > So the burst is paid only once.
> >  >
> >  >
> >  > > > So, based on an E2E latency bound expression, you can guess the bound at
> >  > > > 100% utilization.
> >  > > > But you can always fill the link with flows of any burst sizes, therefore
> >  > > > the guess can be wrong.
> >  > >
> >  > > "guess" is not a good work for DetNet.
> >  > >
> >  > > A DetNet bounded-latency mechanism needs a latency bound expression
> >  > > (calculus) to be a guaranteeable (over)estimate of the bounded latency,
> >  > > independent of what other competing traffic there may be in the future.
> >  > > Not a "guess".
> >  > >
> >  >
> >  > JJ: Right. We should not guess. We should be able to provide an exact
> >  > mathematical expression for latency bound.
> >  > Because you argued in the previous mail that the latency bound should be
> >  > obtained based on 100% utilization,
> >  > I was trying to explain why that should not be done.
> >  >
> >  >
> >  > > > Admission control, on the other hand, can be based on assumption of high
> >  > > > utilization level, but not necessarily 100%.
> >  > > >
> >  > > > You do not assume 100% utilization when you slot-schedule, don't you?
> >  > >
> >  > > I don't understand what you mean with slot-schedule, can you please
> >  > > explain ?
> >  > >
> >  >
> >  > JJ: Slot-scheduling, which is a common term in the research community,
> >  > is the mapping of a flow into a slot (or cycle) in slot- (or cycle-) based
> >  > queuing methods, such as CQF.
> >  > When we say schedulability, it usually means whether we can allocate the
> >  > requesting flows into slots (cycles) with a preconfigured cycle length
> >  > and number.
> >  >
> >  >
> >  > > > So "incremental scheduling" is now popular.
> >  > >
> >  > > Not sure what you mean with that term.
> >  >
> >  >
> >  > JJ: Incremental scheduling means that when a new flow wants to join the
> >  > network, the network examines the schedulability of the flow
> >  > without altering the existing flows' schedule.
> >  >
> >  >
> >  > >
> >  > >
> >  > > I am only thinking about "admitting" when it comes to bounded end-to-end
> >  > > latency, aka: action by the AC of the DetNet controller-plane, and
> >  > > yes, that needs to support "on-demand" (incremental?), aka: whenever
> >  > > a new flow wants to be admitted.
> >  > >
> >  > > > 2) In the example I gave, the two flows travel the same path, thus the
> >  > > > second link's occupancy is identical to the first one.
> >  > > > Thus the competition levels on the two links are the same, contrary to your
> >  > > > argument.
> >  > >
> >  > > I guess we started from different assumptions about details not explicitly
> >  > > mentioned. For example, I guess we both assume that the sources connect to the
> >  > > first router via arbitrarily fast interfaces, so that we could ever get close
> >  > > to 2B/R on the first interface.
> >  > >
> >  > > But then we differed in a core detail. here is my assumption for
> >  > > the topology / admission control:
> >  > >
> >  > >                      Src3          to R4
> >  > >               +----+   \  +----+  /  +----+
> >  > >       Src1 ---|    |    \-|    |-/   |    |
> >  > >       Src2 ---| R1 |x-----| R2 |x----| R3 |
> >  > >               |    |.     |    |.    |    |
> >  > >               +----+.     +----+.    +----+
> >  > >                     |           |
> >  > >                 2B buffer    2B buffer
> >  > >                 R srv rate   R srv rate
> >  > >
> >  > > Aka: in my case, I was assuming that there could be a case where
> >  > > the interface from R2 to R3 could have a 2B/R queue (and not
> >  > > assuming further optimizations in the calculus). E.g.: in some
> >  > > other possible scenario, Src2 sends to R2, and Src3 and Src1 to
> >  > > R3, for example.
> >  > >
> >  >
> >  > JJ: You can come up with an example in which your scheme works well.
> >  > But that does not negate the counterexample I gave.
> >  >
> >  > JJ: Again, there are only two flows.
> >  > And, B is not the buffer size. B is the max burst size of a flow.
> >  > R is the link capacity.
> >  > Please review carefully the example I gave.
> >  >
> >  >
> >  > >
> >  > > You must have assumed that the totality of the DetNet admission control
> >  > > relevant topology is this:
> >  > >
> >  > >               +----+      +----+     +----+
> >  > >       Src1 ---|    |      |    |     |    |
> >  > >       Src2 ---| R1 |x-----| R2 |x----| R3 |
> >  > >               |    |.     |    |.    |    |
> >  > >               +----+.     +----+.    +----+
> >  > >                     |           |
> >  > >                 2B buffer    2B buffer
> >  > >                 R srv rate   R srv rate
> >  > >
> >  > > Aka: DetNet admission control would have to be able to predict that
> >  > > under no permitted admission scenario would R2 build a DetNet queue,
> >  > > so even when Src1 shows up as the first and only flow, the admission
> >  > > control could promise a latency to R3 of 2B/R - only the maximum
> >  > > delay through the R1 queue and 0 for the R2 queue.
> >  > >
> >  > > But if this is the whole network and the admission control logic
> >  > > can come to this conclusion, then of course it could equally do the
> >  > > optimization and not enable gLBF dampening on the R2 output
> >  > > interface, or e.g. set MAX=0 or the like. Et voilà, gLBF
> >  > > would also give 2B/R - but as said, I think it's not a
> >  > > deployment-relevant example.
> >  >
> >  >
> >  > JJ: If you can revise and advance the gLBF, that would be great.
> >  > I am willing to join that effort, if you would like to.
> >  >
> >  >
> >  > >
> >  > >
> >  > > Cheers
> >  > >     Toerless
> >  > >
> >  > > > Please see inline with JJ.
> >  > > >
> >  > > > Best,
> >  > > > Jinoo
> >  > > >
> >  > > > On Wed, Sep 13, 2023 at 9:06 AM Toerless Eckert <tte@cs.fau.de> wrote:
> >  > > >
> >  > > > > On Fri, Jul 21, 2023 at 08:47:18PM +0900, Jinoo Joung wrote:
> >  > > > > > Shaofu, thanks for the reply.
> >  > > > > > It is my pleasure to discuss issues like this with you.
> >  > > > > >
> >  > > > > > The example network I gave is a simple one, but the scenario is the
> >  > > worst
> >  > > > > > that can happen.
> >  > > > > > The E2E latency bounds are thus,
> >  > > > > >
> >  > > > > > for Case 1: ~ 2B/R
> >  > > > > > for Case 2: ~ 2 * (2B/R)
> >  > > > >
> >  > > > > This is a bit terse, let me try to expand:
> >  > > > >
> >  > > > > Case 1 is FIFO or UBS/ATS, and Case 2 is gLBF, right?
> >  > > > >
> >  > > >
> >  > > > JJ: Correct.
> >  > > >
> >  > > >
> >  > > > >
> >  > > > > Assuming I am interpreting it right, then this is inconsistent with your
> >  > > > > setup: You said all links are the same, so both hops have the same
> >  > > > > buffer and rates, so the admission controller also expects to
> >  > > > > have to put as many flows on the second link/queue that it fills up to 2B.
> >  > > > >
> >  > > >
> >  > > > JJ: Yes, two links are identical. However, as I have mentioned,
> >  > > > an E2E latency bound is calculated based on a given network environment.
> >  > > > We don't always consider a filled up link capacity.
> >  > > > BTW, B is the max burst size of the flow.
> >  > > >
> >  > > >
> >  > > >
> >  > > > >
> >  > > > > You just then made an example where there was never such an amount
> >  > > > > of competing traffic on the second hop. But that does not mean that
> >  > > > > the admission controller in UBS/ATS could guarantee
> >  > > > > less per-hop latency than 2B/R.
> >  > > >
> >  > > >
> >  > > > JJ: Again, the two links are identical and the two flows travel both links.
> >  > > > The difference between Case 1 and Case 2 is not because of a different
> >  > > > competition level (they are identical),
> >  > > > but because of the non-work-conserving behaviour of the second link in
> >  > > > Case 2.
> >  > > >
> >  > > >
> >  > > > >
> >  > > > > If the admission controller knew there would never be a queue on the
> >  > > > > second hop, then gLBF likewise would not need to do a Damper on the
> >  > > > > second hop. Hence as i said previously, the per-hop and end-to-end
> >  > > > > bounded latency guarantee is the same between UBS and gLBF.
> >  > > > >
> >  > > > > > And again, these are the WORST E2E latencies that a packet can
> >  > > experience
> >  > > > > > in the two-hop network in the scenario.
> >  > > > >
> >  > > > > It's not the worst-case latency for the UBS case. You just did not have
> >  > > > > an example that creates the worst-case amount of competing traffic. Or you
> >  > > > > overestimated the amount of buffering and hence the per-hop latency for the
> >  > > > > UBS/ATS case.
> >  > > > >
> >  > > > > > In any network that is more complex, the E2E latency bounds of two
> >  > > > > schemes
> >  > > > > > are very different.
> >  > > > >
> >  > > > > Counterexample:
> >  > > > >
> >  > > > > You have a network with TSN-ATS. You have an admission controller.
> >  > > > > You only have one priority for simplicity of example.
> >  > > > >
> >  > > > > You do not want to dynamically signal changed end-to-end latencies
> >  > > > > to applications... because it's difficult. So you need to plan
> >  > > > > for worst-case bounded latencies under the maximum amount of traffic
> >  > > > > load. In a simple case this means you give each interface
> >  > > > > a queue size B(i)/r = 10usec. Whenever a new flow needs to be
> >  > > > > added to the network, you find a path where all the buffers
> >  > > > > have enough space for your new flow's burst, and you signal
> >  > > > > to the application that the end-to-end guaranteed latency is
> >  > > > > P(path)+N*10usec, where P is the physical propagation latency of
> >  > > > > the path and N is the number of hops it has.
> >  > > > >
> >  > > > > As a result, all packets from the flow will arrive with
> >  > > > > a latency between P(path) and P(path)+N*10usec - depending
> >  > > > > on network load/weather.
> >  > > > >
> >  > > > > Now we replace UBS in the routers with gLBF. What changes?
> >  > > > >
> >  > > > > 1) With UBS the controller still had to signal every new and
> >  > > > > to-be-deleted flow to every router along its path to set up the
> >  > > > > IR for the flow. This goes away (big win).
> >  > > > >
> >  > > > > 2) The forwarding is in our opinion cheaper/faster to implement
> >  > > > > (because of the lack of the memory read/write cycle of the IR).
> >  > > > >
> >  > > > > 3) The application now sees all packets arrive at the fixed latency
> >  > > > > of P(path)+N*10usec. Which, arguably, for an application that
> >  > > > > MUST have bounded latency is, in all examples I know,
> >  > > > > seen rather as a benefit than as a downside.
> >  > > > >
> >  > > > > Cheers
> >  > > > >     Toerless
> >  > > > >
> >  > > > >
> >  > > > > >
> >  > > > > > Best,
> >  > > > > > Jinoo
> >  > > > > >
> >  > > > > > On Fri, Jul 21, 2023 at 8:31 PM <peng.shaofu@zte.com.cn> wrote:
> >  > > > > >
> >  > > > > > >
> >  > > > > > > Hi Jinoo,
> >  > > > > > >
> >  > > > > > >
> >  > > > > > > I tried to reply briefly. If Toerless has free time, he can confirm it.
> >  > > > > > >
> >  > > > > > >
> >  > > > > > > Here, when we say latency bound formula, it refers to the worst-case
> >  > > > > > > latency.
> >  > > > > > >
> >  > > > > > >
> >  > > > > > > Intuitively, the worst-case latency for gLBF (damper + shaper +
> >  > > > > > > scheduler) is that:
> >  > > > > > >
> >  > > > > > >     damping delay per hop is always 0 (because scheduling delay = MAX);
> >  > > > > > >     shaping delay is always 0 (because all arrivals are eligible);
> >  > > > > > >     scheduling delay is always MAX (i.e., a concurrent full burst from
> >  > > > > > >     all eligible arrivals on each hop).
> >  > > > > > >
> >  > > > > > > Similarly, the worst-case latency for UBS (shaper + scheduler) is that:
> >  > > > > > >
> >  > > > > > >     shaping delay is always 0 (because all arrivals are eligible);
> >  > > > > > >     scheduling delay is always MAX (i.e., a concurrent full burst from
> >  > > > > > >     all eligible arrivals on each hop).
> >  > > > > > >
> >  > > > > > > Thus, the worst-case latency of gLBF and UBS is the same.
> >  > > > > > >
> >  > > > > > > Your example gives a minimal latency that may be experienced by UBS,
> >  > > > > > > but it is not the worst-case latency. In fact, your example is a simple
> >  > > > > > > topology that only contains a line without fan-in, which causes the
> >  > > > > > > scheduling delay to be almost minimal due to no interfering flows.
> >  > > > > > >
> >  > > > > > >
> >  > > > > > > Regards,
> >  > > > > > >
> >  > > > > > > PSF
> >  > > > > > >
> >  > > > > > >
> >  > > > > > >
> >  > > > > > > Original
> >  > > > > > > From: Jinoo Joung <jjoung@smu.ac.kr>
> >  > > > > > > To: 彭少富10053815
> >  > > > > > > Cc: tte@cs.fau.de; detnet@ietf.org; draft-eckert-detnet-glbf@ietf.org
> >  > > > > > > Date: 2023-07-21 14:10
> >  > > > > > > Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
> >  > > > > > >
> >  > > > > > > Hello Toerless,
> >  > > > > > > I have a comment on your argument.
> >  > > > > > > This is not a question, so you don't have to answer.
> >  > > > > > >
> >  > > > > > > You argued that gLBF + SP has the same latency bound formula as UBS
> >  > > > > > > (equivalently, ATS IR + SP).
> >  > > > > > > The IR is not a generalized gLBF, so they do not have the same bound.
> >  > > > > > >
> >  > > > > > > In short, the ATS IR is a rate-based shaper, so it enjoys the "Pay
> >  > > > > > > burst only once" property.
> >  > > > > > > gLBF is not. So it pays the burst at every node.
> >  > > > > > >
> >  > > > > > > Consider the simplest example, where there are only two identical
> >  > > > > > > flows travelling the same path.
> >  > > > > > > Every node and link on the path is identical.
> >  > > > > > >
> >  > > > > > > Case 1: Just FIFO
> >  > > > > > > Case 2: gLBF + FIFO
> >  > > > > > >
> >  > > > > > > In the first node, the two flows' max bursts arrive almost at the
> >  > > > > > > same time, but your flow is just a little late.
> >  > > > > > > Then your last packet in the burst (packet of interest, POI) suffers
> >  > > > > > > a latency of around 2B/R, where B is the burst size and R is the
> >  > > > > > > link capacity.
> >  > > > > > > This is true for both cases.
> >  > > > > > >
> >  > > > > > > In the next node:
> >  > > > > > > In Case 1, the POI does not see any packet queued, so it is delayed
> >  > > > > > > only by its own transmission delay.
> >  > > > > > > In Case 2, the burst from the other flow, as well as your own burst,
> >  > > > > > > awaits the POI. So the POI is again delayed by around 2B/R.
> >  > > > > > >
> >  > > > > > > In the case of UBS, the max bursts are legitimate, so the regulator
> >  > > > > > > does not do anything, and the forwarding behavior is identical to
> >  > > > > > > Case 1.
> >  > > > > > >
> >  > > > > > > Best,
> >  > > > > > > Jinoo
> >  > > > > > >
> >  > > > > > > On Fri, Jul 21, 2023 at 10:58 AM <peng.shaofu@zte.com.cn> wrote:
> >  > > > > > >
> >  > > > > > >>
> >  > > > > >> Hi Toerless,
> >  > > > > >>
> >  > > > > >> Thanks for your response; I understand your busy situation.
> >  > > > > >>
> >  > > > > >> A quick reply is that gLBF is a really interesting proposal, which is
> >  > > > > >> very similar in function to the Deadline on-time per-hop mode. Our
> >  > > > > >> views are consistent on this point. The key benefit is to avoid burst
> >  > > > > >> accumulation.
> >  > > > > >>
> >  > > > > >> The following example originated from the analysis of the Deadline
> >  > > > > >> on-time mode. I believe it also makes sense for gLBF. When you have
> >  > > > > >> free time, you may verify it. The result may be helpful both for gLBF
> >  > > > > >> and the Deadline on-time mode. Note that I didn't question the
> >  > > > > >> mathematical proof about UBS, which obtains the worst-case latency
> >  > > > > >> based on the combination of "IR shaper + SP scheduler".
> >  > > > > >>
> >  > > > > >> Regards,
> >  > > > > >> PSF
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >> Original
> >  > > > > > >> *From: *Toerless Eckert <tte@cs.fau.de>
> >  > > > > > >> *To: *彭少富10053815;
> >  > > > > > >> *Cc: *jjoung@smu.ac.kr <jjoung@smu.ac.kr>; detnet@ietf.org <detnet@ietf.org>;
> >  > > > > > >> draft-eckert-detnet-glbf@ietf.org <draft-eckert-detnet-glbf@ietf.org>;
> >  > > > > > >> *Date: *2023-07-21 06:07
> >  > > > > > >> *Subject: **Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt*
> >  > > > > > >>
> >  > > > > > >> Thanks folks for the question and discussion. I have some WG chair
> >  > > > > > >> vultures hovering over me making sure I prioritize building slides now
> >  > > > > > >> (the worst one is myself ;-), so I will only give a brief answer and
> >  > > > > > >> will get back to it later when I have more time.
> >  > > > > > >>
> >  > > > > > >> The calculus that I used is from the [UBS] research paper by Johannes
> >  > > > > > >> Specht, aka: it has the mathematical proof; the full reference is in
> >  > > > > > >> the gLBF draft. There is another, later proof of the calculus from
> >  > > > > > >> Jean-Yves Le Boudec in another research paper which I'd have to dig
> >  > > > > > >> up, and depending on whom you ask, one or the other is easier to read.
> >  > > > > > >> I am on the UBS-research-paper side because I have not studied
> >  > > > > > >> Jean-Yves' calculus book. But it is really beautifully simple: as soon
> >  > > > > > >> as you think of flows with only a burst size and a rate (or period) of
> >  > > > > > >> those bursts, then your delay through the queue is really just the sum
> >  > > > > > >> of the bursts. And I just find beauty in simplicity. That cannot be
> >  > > > > > >> the full answer to Jinoo, but I first need to read up more
> >  > > > > > >> on his WRR options.
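> >  > > > > > >>
> >  > > > > > >> In symbols, the shape of that per-queue bound (my simplification of
> >  > > > > > >> the [UBS] result; the exact theorem adds max-frame-size correction
> >  > > > > > >> terms that I am omitting here, so treat this as a sketch):
> >  > > > > > >>
> >  > > > > > >>   \[
> >  > > > > > >>     d_{\max} \;\approx\; \frac{\sum_{j \in Q} b_j}{\,r - \sum_{h \in H} r_h\,}
> >  > > > > > >>   \]
> >  > > > > > >>
> >  > > > > > >> where Q is the set of flows sharing the queue with burst sizes b_j,
> >  > > > > > >> r is the link rate, and H is the set of higher-priority flows with
> >  > > > > > >> rates r_h.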
> >  > > > > > >>
> >  > > > > > >> The need for doing per-hop dampening is really as i said from two
> >  > > > > points:
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >> 1. Unless we do per-hop dampening, we will not get such a simple
> >  > > > > calculus and equally low latency.
> >  > > > > > >>
> >  > > > > > >> The two validation slides of the gLBF presentation show that one
> >  > > can
> >  > > > > exceed the simple
> >  > > > > > >>
> >  > > > > > >> calculated bounded latency already with as few as 9  flows across
> >  > > a
> >  > > > > single hop and arriving
> >  > > > > > >>
> >  > > > > > >> into one single queue -  unless there is per-hop dampening (or
> >  > > > > per-flow-shaper).
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >> 2. I can not imagine how to safely sell router equipment and build
> >  > > > > out all desirable topologies without
> >  > > > > > >>
> >  > > > > > >> every node is able to do the dampening. And i also see it as the
> >  > > > > right next-generation challenge
> >  > > > > > >>
> >  > > > > > >> and option to make that happen in high speed hardware.
> >  > > Specifically
> >  > > > > in metro rings, every big aggregation
> >  > > > > > >>
> >  > > > > > >> ring node has potentially 100 incoming interfaces and hence can
> >  > > > > create a lot of bursts onto ring interfaces.
> >  > > > > > >>
> >  > > > > > >> Cheers
> >  > > > > > >>    Toerless
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >> P.S.: The validation picture in our slides was from our Springer
> >  > > > > > >> journal article, so I cannot simply put a copy on the Internet now,
> >  > > > > > >> but ping me in a PM if you want an author's copy.
> >  > > > > > >>
> >  > > > > > >> On Wed, Jul 12, 2023 at 11:48:36AM +0800, peng.shaofu@zte.com.cn wrote:
> >  > > > > > >> > Hi Jinoo, Toerless
> >  > > > > > >> >
> >  > > > > > >> > Also, thanks to Toerless for bringing us this interesting draft.
> >  > > > > > >> >
> >  > > > > > >> > For the question Jinoo pointed out, I guess, based on the similar
> >  > > > > > >> > analysis of the Deadline on-time per-hop mode, that even if all
> >  > > > > > >> > flows departed from the damper and arrived at the queueing subsystem
> >  > > > > > >> > at the same time, each flow can still keep its worst-case latency,
> >  > > > > > >> > but it just consumes the next round of budget (i.e., the MAX value
> >  > > > > > >> > mentioned in the document).
> >  > > > > > >> >
> >  > > > > > >> > However, consuming the next round of budget means relying on the
> >  > > > > > >> > downstream node to compensate the latency, and it may result in a
> >  > > > > > >> > jitter of up to MAX (i.e., the worst-case latency). For this reason,
> >  > > > > > >> > the Deadline on-time per-hop mode was temporarily removed in
> >  > > > > > >> > version-6, waiting for a stricter proof and optimization.
> >  > > > > > >> >
> >  > > > > > >> > Anyway, gLBF can do the same things that Deadline on-time per hop
> >  > > > > > >> > does. The following intuitive example is common to these two
> >  > > > > > >> > solutions.
> >  > > > > > >> >
> >  > > > > > >> > Assume that at the last node, all received flows have experienced
> >  > > > > > >> > almost 0 queueing delay on the upstream nodes. Traffic class-8 has a
> >  > > > > > >> > per-hop worst-case latency of 80 us (just an example, similar to the
> >  > > > > > >> > delay level of Deadline), traffic class-7 has 70 us, ..., traffic
> >  > > > > > >> > class-1 has 10 us.
> >  > > > > > >> >
> >  > > > > > >> > Then, at time T0, traffic class-8 arrives at the last node; it will
> >  > > > > > >> > dampen 80 us. At time T0+10us, traffic class-7 arrives; it will
> >  > > > > > >> > dampen 70 us, and so on. At T0+80us, all traffic-class flows will
> >  > > > > > >> > depart from the damper and be sent to the same outgoing port. So an
> >  > > > > > >> > observed packet may experience another round of worst-case latency
> >  > > > > > >> > if other higher-priority flows exist, or experience the best-case
> >  > > > > > >> > latency (almost 0) if no other higher-priority flows exist. That is,
> >  > > > > > >> > a jitter with the value of the worst-case latency still exists.
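> >  > > > > > >> >
> >  > > > > > >> > A tiny sketch of that schedule (illustrative numbers only: a per-hop
> >  > > > > > >> > bound of 10*c us for class c, arrivals spaced 10 us apart, and the
> >  > > > > > >> > damper holding each packet for its full per-hop bound because the
> >  > > > > > >> > upstream queueing delay was ~0):
> >  > > > > > >> >
> >  > > > > > >> >   T0 = 0.0
> >  > > > > > >> >   for i, c in enumerate(range(8, 0, -1)):   # classes 8 .. 1
> >  > > > > > >> >       arrival = T0 + 10.0 * i               # class 8 at T0, class 7 at T0+10, ...
> >  > > > > > >> >       damper  = 10.0 * c                    # per-hop worst case of class c, in us
> >  > > > > > >> >       print(f"class {c}: arrives {arrival:4.1f} us, "
> >  > > > > > >> >             f"releases {arrival + damper:4.1f} us")
> >  > > > > > >> >   # every class releases at T0 + 80 us: the dampers line the bursts
> >  > > > > > >> >   # back up onto the same outgoing port, which is the jitter source
> >  > > > > > >> >   # described above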
> >  > > > > > >> >
> >  > > > > > >> > Regards,
> >  > > > > > >> >
> >  > > > > > >> > PSF
> >  > > > > > >> >
> >  > > > > > >> > Original
> >  > > > > > >> >
> >  > > > > > >> > From: Jinoo Joung <jjoung@smu.ac.kr>
> >  > > > > > >> > To: Toerless Eckert <tte@cs.fau.de>;
> >  > > > > > >> > Cc: detnet@ietf.org <detnet@ietf.org>; draft-eckert-detnet-glbf@ietf.org <draft-eckert-detnet-glbf@ietf.org>;
> >  > > > > > >> > Date: 2023-07-09 09:39
> >  > > > > > >> > Subject: Re: [Detnet] FYI: draft-eckert-detnet-glbf-01.txt
> >  > > > > > >> >
> >  > > > > > >> > Dear Toerless; thanks for the draft.
> >  > > > > > >> >
> >  > > > > > >> > gLBF is an interesting approach, similar in concept to the Buffered
> >  > > > > > >> > Network (BN) I have introduced in the ADN Framework document.
> >  > > > > > >> > The difference seems to be that the BN buffers only once, at the
> >  > > > > > >> > network boundary, while gLBF buffers at every node.
> >  > > > > > >> > Therefore in the BN a buffer handles only a few flows, while in gLBF
> >  > > > > > >> > a buffer needs to face millions of flows.
> >  > > > > > >> > The implementation complexity should be addressed in a future
> >  > > > > > >> > version of the draft, I think.
> >  > > > > > >> >
> >  > > > > > >> > I have a quick question below.
> >  > > > > > >> >
> >  > > > > > >> >    +------------------------+        +------------------------+
> >  > > > > > >> >    | Node A                 |        | Node B                 |
> >  > > > > > >> >    |   +-+   +-+   +-+      |        |   +-+   +-+   +-+      |
> >  > > > > > >> >    |-x-|D|-y-|F|---|Q|---z--|--------|-x-|D|-y-|F|---|Q|---z--|
> >  > > > > > >> >    |   +-+   +-+   +-+      |  Link  |   +-+   +-+   +-+      |
> >  > > > > > >> >    +------------------------+        +------------------------+
> >  > > > > > >> >            |<--- A/B in-time latency --->|
> >  > > > > > >> >            |<------ A/B on-time latency ------>|
> >  > > > > > >> >
> >  > > > > > >> >                Figure 3: Forwarding with Damper and measuring
> >  > > > > > >> >
> >  > > > > > >> > In Figure 3, how can F and Q guarantee the nodal latency below MAX?
> >  > > > > > >> > Does gLBF provide the same latency bound as that of UBS, as is
> >  > > > > > >> > argued?
> >  > > > > > >> >
> >  > > > > > >> > In UBS, an interleaved regulator (IR) works as the damper D does in
> >  > > > > > >> > gLBF.
> >  > > > > > >> > The IR is essentially a FIFO, whose head-of-queue (HoQ) packet is
> >  > > > > > >> > examined and leaves if eligible.
> >  > > > > > >> > A packet's eligible time can be earlier than the time at which it
> >  > > > > > >> > became the HoQ.
> >  > > > > > >> > However, in gLBF, a packet has a precise moment at which it needs
> >  > > > > > >> > to be forwarded from D.
> >  > > > > > >> > (Therefore, UBS is not a generalized gLBF.)
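> >  > > > > > >> >
> >  > > > > > >> > Schematically (my own contrast of the two release rules, not
> >  > > > > > >> > pseudocode from either specification; the names are illustrative):
> >  > > > > > >> >
> >  > > > > > >> >   def ir_release(eligible_time: float, hoq_time: float) -> float:
> >  > > > > > >> >       # ATS interleaved regulator: the packet leaves once it is both
> >  > > > > > >> >       # at the head of the FIFO and eligible; eligibility reached
> >  > > > > > >> >       # while still buried in the queue costs nothing extra.
> >  > > > > > >> >       return max(eligible_time, hoq_time)
> >  > > > > > >> >
> >  > > > > > >> >   def glbf_release(damper_release_time: float) -> float:
> >  > > > > > >> >       # gLBF damper: the packet leaves at exactly its precomputed
> >  > > > > > >> >       # release time, neither earlier nor later.
> >  > > > > > >> >       return damper_release_time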
> >  > > > > > >> >
> >  > > > > > >> > In the worst case, all the flows may want to send their packets
> >  > > > > > >> > from D to F at the same time.
> >  > > > > > >> > If it can be implemented as such, bursts may accumulate, and the
> >  > > > > > >> > latency cannot be guaranteed.
> >  > > > > > >> > If it cannot be implemented that way, you may introduce another
> >  > > > > > >> > type of delay.
> >  > > > > > >> > Don't you need an additional mechanism for the latency guarantee?
> >  > > > > > >> >
> >  > > > > > >> > Thanks a lot in advance; I support this draft.
> >  > > > > > >> >
> >  > > > > > >> > Best,
> >  > > > > > >> > Jinoo
> >  > > > > > >> >
> >  > > > > > >> >
> >  > > > > > >> >
> >  > > > > > >> >
> >  > > > > > >> > On Sat, Jul 8, 2023 at 12:05 AM Toerless Eckert <tte@cs.fau.de> wrote:
> >  > > > > > >> >
> >  > > > > > >> > Dear DetNet WG,
> >  > > > > > >> >
> >  > > > > > >> > FYI on a newly posted bounded-latency method/proposal draft that we
> >  > > > > > >> > call gLBF (guaranteed Latency Based Forwarding).
> >  > > > > > >> >
> >  > > > > > >> > gLBF, as compared to TCQF and CSQF, is proposed from our side as a
> >  > > > > > >> > more long-term solution, because it has not been validated with
> >  > > > > > >> > high-speed forwarding hardware and requires new network header
> >  > > > > > >> > information for the damper value, whereas TCQF/CSQF of course can
> >  > > > > > >> > operate without new headers, have proven high-speed PoC
> >  > > > > > >> > implementations, and are therefore really ready for adoption now.
> >  > > > > > >> >
> >  > > > > > >> > gLBF is a specific variant of the damper idea that is meant to be
> >  > > > > > >> > compatible with the TSN-ATS latency calculus, so that it can use the
> >  > > > > > >> > same controller-plane/path-computation algorithms and
> >  > > > > > >> > implementations one would use for TSN-ATS. It also eliminates the
> >  > > > > > >> > need for hop-by-hop clock synchronization and (we hope) should be
> >  > > > > > >> > well implementable in high-speed hardware.
> >  > > > > > >> >
> >  > > > > > >> > Any feedback welcome.
> >  > > > > > >> >
> >  > > > > > >> > Cheers
> >  > > > > > >> >     Toerless
> >  > > > > > >> >
> >  > > > > > >> >  In-Reply-To: <168874067601.53296.4506535864118204933@ietfa.amsl.com>
> >  > > > > > >> >
> >  > > > > > >> >  On Fri, Jul 07, 2023 at 07:37:56AM -0700, internet-drafts@ietf.org wrote:
> >  > > > > > >> >  >
> >  > > > > > >> >  > A new version of I-D, draft-eckert-detnet-glbf-01.txt
> >  > > > > > >> >  > has been successfully submitted by Toerless Eckert and posted to
> >  > > > > > >> >  > the IETF repository.
> >  > > > > > >> >  >
> >  > > > > > >> >  > Name:          draft-eckert-detnet-glbf
> >  > > > > > >> >  > Revision:      01
> >  > > > > > >> >  > Title:         Deterministic Networking (DetNet) Data Plane - guaranteed Latency Based Forwarding (gLBF) for bounded latency with low jitter and asynchronous forwarding in Deterministic Networks
> >  > > > > > >> >  > Document date: 2023-07-07
> >  > > > > > >> >  > Group:         Individual Submission
> >  > > > > > >> >  > Pages:         39
> >  > > > > > >> >  > URL:           https://www.ietf.org/archive/id/draft-eckert-detnet-glbf-01.txt
> >  > > > > > >> >  > Status:        https://datatracker.ietf.org/doc/draft-eckert-detnet-glbf/
> >  > > > > > >> >  > Htmlized:      https://datatracker.ietf.org/doc/html/draft-eckert-detnet-glbf
> >  > > > > > >> >  > Diff:          https://author-tools.ietf.org/iddiff?url2=draft-eckert-detnet-glbf-01
> >  > > > > > >> >  >
> >  > > > > > >> >  > Abstract:
> >  > > > > > >> >  >    This memo proposes a mechanism called "guaranteed Latency Based
> >  > > > > > >> >  >    Forwarding" (gLBF) as part of DetNet for hop-by-hop packet
> >  > > > > > >> >  >    forwarding with per-hop deterministically bounded latency and
> >  > > > > > >> >  >    minimal jitter.
> >  > > > > > >> >  >
> >  > > > > > >> >  >    gLBF is intended to be useful across a wide range of networks
> >  > > > > > >> >  >    and applications with a need for high-precision deterministic
> >  > > > > > >> >  >    networking services, including in-car networks or networks used
> >  > > > > > >> >  >    for industrial automation on factory floors, all the way to
> >  > > > > > >> >  >    100Gbps+ country-wide networks.
> >  > > > > > >> >  >
> >  > > > > > >> >  >    Contrary to other mechanisms, gLBF does not require network-wide
> >  > > > > > >> >  >    clock synchronization, nor does it need to maintain per-flow
> >  > > > > > >> >  >    state at network nodes, avoiding drawbacks of other known
> >  > > > > > >> >  >    methods while leveraging their advantages.
> >  > > > > > >> >  >
> >  > > > > > >> >  >    Specifically, gLBF uses the queuing model and calculus of
> >  > > > > > >> >  >    Urgency Based Scheduling (UBS, [UBS]), which is used by TSN
> >  > > > > > >> >  >    Asynchronous Traffic Shaping [TSN-ATS]. gLBF is intended to be a
> >  > > > > > >> >  >    plug-in replacement for TSN-ATS, or a parallel mechanism beside
> >  > > > > > >> >  >    TSN-ATS, because it allows keeping the same controller-plane
> >  > > > > > >> >  >    design which selects paths for TSN-ATS, sizes TSN-ATS queues,
> >  > > > > > >> >  >    calculates latencies, and admits flows to the calculated paths
> >  > > > > > >> >  >    for the calculated latencies.
> >  > > > > > >> >  >
> >  > > > > > >> >  >    In addition to reducing jitter compared to TSN-ATS through
> >  > > > > > >> >  >    additional buffering (dampening) in the network, gLBF also
> >  > > > > > >> >  >    eliminates the need for the per-flow, per-hop state maintenance
> >  > > > > > >> >  >    required by TSN-ATS. This avoids the need to signal per-flow
> >  > > > > > >> >  >    state to every hop from the controller-plane and the associated
> >  > > > > > >> >  >    scaling problems. It also reduces implementation cost for
> >  > > > > > >> >  >    high-speed networking hardware due to the avoidance of
> >  > > > > > >> >  >    additional high-speed read/write memory accesses to retrieve,
> >  > > > > > >> >  >    process and update per-flow state variables for a large number
> >  > > > > > >> >  >    of flows.
> >  > > > > > >> >  >
> >  > > > > > >> >  > The IETF Secretariat
> >  > > > > > >> >
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >>
> >  > > > > > >> --
> >  > > > > > >> ---
> >  > > > > > >> tte@cs.fau.de
> >  > > > > > >>
> >  > > > > > >> _______________________________________________
> >  > > > > > >> detnet mailing list
> >  > > > > > >> detnet@ietf.org
> >  > > > > > >> https://www.ietf.org/mailman/listinfo/detnet
--
---
tte@cs.fau.de