Re: [Lsr] Working Group Last Call for draft-ietf-lsr-ip-flexalgo-04 - "IGP Flexible Algorithms (Flex-Algorithm) In IP Networks"

Ketan Talaulikar <ketant.ietf@gmail.com> Wed, 13 April 2022 08:53 UTC

To: Peter Psenak <ppsenak@cisco.com>
Cc: "Acee Lindem (acee)" <acee=40cisco.com@dmarc.ietf.org>, "lsr@ietf.org" <lsr@ietf.org>, "draft-ietf-lsr-ip-flexalgo@ietf.org" <draft-ietf-lsr-ip-flexalgo@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/lsr/Y-VJ31mbWVVbZtU2KJ-SKH2pNbo>

Hi Peter,

I will not press this point further if I am the only one who finds this to
be complexity without benefit. :-)

Please check inline below for some clarifications with KT3.


On Wed, Apr 13, 2022 at 12:47 PM Peter Psenak <ppsenak@cisco.com> wrote:

> Hi Ketan,
>
>
> please see inline (##PP3):
>
> On 13/04/2022 06:00, Ketan Talaulikar wrote:
> > Hi Peter,
> >
> > Please check inline below with KT2. I am trimming everything other than
> > the one point of continuing debate.
> >
> >      >      >
> >      >      > 2) The relationship between the algo usage for IP FlexAlgo
> >     and other
> >      >      > data planes (e.g. FlexAlgo with SR) is not very clear.
> >     There arise
> >      >      > complications when the algo usage for IP FlexAlgo overlap
> >     with other
> >      >      > (say SR) data planes since the FAD is shared but the node
> >      >     participation
> >      >      > is not shared. While Sec 9 suggests that we can work
> >     through these
> >      >      > complications, I question the need for such complexity.
> >     The FlexAlgo
> >      >      > space is large enough to allow it to be shared between
> >     various data
> >      >      > planes without overlap. My suggestion would be to neither
> >     carve out
> >      >      > parallel algo spaces within IGPs for various types of
> >     FlexAlgo data
> >      >      > planes nor allow the same algo to be used by both IP and
> >     SR data
> >      >     planes.
> >      >      > So that we have a single topology computation in the IGP
> >     for a given
> >      >      > algo based on its FAD and data plane participation and
> >     then when it
> >      >      > comes to prefix calculation, the results could involve
> >      >     programming of
> >      >      > entries in respective forwarding planes based on the
> >     signaling of
> >      >     the
> >      >      > respective prefix reachabilities. The coverage of these
> >     aspects in a
> >      >      > dedicated section upfront will help.
> >      >
> >      >     ##PP
> >      >     I strongly disagree.
> >      >
> >      >     FAD is data-plane/app independent. Participation is
> data-plane/app
> >      >     dependent. Base flex-algo specification is very clear about
> >     that. That
> >      >     has advantages and we do not want to modify that part.
> >      >
> >      >
> >      > KT> No issue with this part.
> >      >
> >      >
> >      >     Topology calculation for algo/data-plane needs to take both
> >     FAD and
> >      >     participation into account. You need independent calculation
> >     for each
> >      >     data-plane/app in the same algo.
> >      >
> >      >
> >      > KT> So, an implementation now needs to potentially support
> >     performing
> >      > multiple topology computations for each algo. This is a
> >     complication for
> >      > which I do not see the justification. Why not just pick different
> >      > algorithms for different data planes for those (rare?)
> >     deployments where
> >      > someone wants multiple data planes?
> >
> >     ##PP2
> >     flex-algo architecture supports multiple apps/data-planes per algo,
> >     with
> >     unique participation per app/data-plane. That requires per-algo/per
> >     app/data-plane calculation. What is complicated about it?
> >
> >
> > KT2> This specific and precise statement that you have provided is not
> > covered in either draft-ietf-lsr-flex-algo or this document. For
> > starters, this needs to be clarified and covered so that it gets the
> > attention of any reader during the review. This has implications for
> > implementations.
>
> ##PP3
> sure we can add it explicitly there, but if you read the base flex-algo
> draft carefully, it is quite clear. I will add that exact statement in
> the next re-spin of the base spec.
>

KT3> Thanks. I think we may also need to carefully scrub the use of the
term "application", since it seems to invite different interpretations
because of the "application" in ASLA. It would be better to use the term
"application" only with the same semantics as in ASLA, which means that
FlexAlgo is a single "application". We could perhaps use the term "traffic
flows" or "service flows" as an alternative to "application flows" for the
flows that are steered over or use a FlexAlgo. And then, when it comes to
Node Participation in a FlexAlgo, we could use the term "FlexAlgo
Forwarding Mechanism" instead of "Applications' Forwarding for FlexAlgo".
Thoughts?


> >
> >
> >     If your implementation does not want to support it, fine, but the
> >     architecture allows it and there is/are implementation(s) that
> already
> >     support it. This is not defined in this draft, it's defined in base
> >     flex-algo spec.
> >
> >
> > KT2> I am not sure if it is really an option for implementation once it
> > is in the specification. And this is not about "my" implementation :-).
> > So it is not that because some implementations can do (or does) it that
> > it should be in the specification. The determination on whether it
> > should be in a specification needs to be based on the tradeoff between
> > requiring multiple computations per algo with the potential benefit or
> > use case that is enabled by it.
>
> ##PP3
> again, this is how things have been defined from day one, and for a good
> reason. Requiring per app flex-algo even though I want to use the same
> metric and constraints for both apps would be inefficient.
>

KT3> Just to confirm my understanding: the only inefficiency that you are
referring to with the "separate algo per FlexAlgo forwarding mechanism"
approach is the duplicate FAD advertisement. Am I missing anything else?
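
To put rough numbers on the comparison (purely illustrative, and assuming
the FAD and the participation are identical across the two data planes):

# Rough, illustrative comparison; not from either draft.

def shared_algo(data_planes):
    # One algo shared by several data planes: a single FAD advertisement,
    # and up to one computation per data plane (a single computation if
    # the participation sets happen to be identical).
    return {"fad_advertisements": 1, "computations": len(data_planes)}

def algo_per_data_plane(data_planes):
    # A separate algo per data plane: the (identical) FAD is advertised
    # once per algo, and each algo needs its own computation anyway.
    return {"fad_advertisements": len(data_planes),
            "computations": len(data_planes)}

print(shared_algo(["SR", "IP"]))          # 1 FAD advertisement, 2 computations
print(algo_per_data_plane(["SR", "IP"]))  # 2 FAD advertisements, 2 computations

That is why the duplicate FAD advertisement is the only extra cost I can
see for the "separate algo per forwarding mechanism" model.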


>
> >
> >
> >
> >      >
> >      >
> >      >     The fact that the same FAD is shareable between all apps has
> its
> >      >     advantages and use cases - e.g. if the participation for algo
> >     X is the
> >      >     same in SR and IP data-planes, one can use SR to protect IP
> >     in that
> >      >     algo.
> >      >
> >      >
> >      > KT> Would this protection use case not violate the base FlexAlgo
> >     rule
> >      > that the protection has to remain within the specific topology.
> >     If there
> >      > is an SR data plane, then why would one want an IP data plane as
> >     well?
> >
> >     ##PP2
> >     if the participation in two app/data-planes is the same for the algo,
> >     the resulting topology is the same. If your implementation is smart,
> it
> >     can only run a single computation for that case. There is no
> violation
> >     here whatsoever.
> >
> >
> > KT2> If the resulting topology is the same between SR data plane and IP
> > data plane, what is the need to enable the IP data plane? Why not just
> > steer the IP traffic over the FlexAlgo data plane? And when it is not
> > the same topology, then we cannot really do the protection for IP
> > FlexAlgo using SR FlexAlgo. So what is really the use case or benefit
> > for enabling this?
>
> ##PP3
> I just gave you an example where this might be useful. You may not like
> it, but it will have no impact on the defined architecture.
>

KT3> Ack - we can agree to disagree on this.
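
For the record, my reading of the protection example is that it only works
when the participation (and hence the topology) for the algo is identical
across the SR and IP data planes. A sketch of that check, with invented
names, purely for illustration:

# Illustrative check only; identifiers are invented for this example.

def can_protect_ip_with_sr(participation, algo):
    # SR Flex-Algo paths can back up IP Flex-Algo forwarding for 'algo'
    # only if the SR and IP participation sets are identical, so that the
    # primary and backup paths are computed over the same topology.
    sr_members = participation.get(("SR", algo), set())
    ip_members = participation.get(("IP", algo), set())
    return bool(ip_members) and sr_members == ip_members

participation = {
    ("SR", 128): {"R1", "R2", "R3"},
    ("IP", 128): {"R1", "R2", "R3"},
}
assert can_protect_ip_with_sr(participation, 128)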


>
> >
> >
> >
> >
> >      > IP forwarding can be steered over the SR-based FlexAlgo topology
> >     along
> >      > with the protection provided by it. Am I missing something?
> >
> >     ##PP2
> >     topology for both primary and backup computation must be the same.
> >
> >
> > KT2> I see the primary use case for IP FlexAlgo (or another data plane)
> > to be that the data plane is used by itself. In the (rare?) case where
> > multiple data planes are required to coexist, it is simpler both from
> > implementation and deployment POV to use different algos. It would be
> > good to have operator inputs here. The only cost that I see for this is
> > that the same FAD may get advertised twice only in the case where it is
> > identical for multiple data planes. So I am still not seeing the benefit
> > of enabling multiple (i.e. per data plane) computations for a single
> > algo rather than just keeping it a single computation per algo where a
> > single data plane is associated with a specific algo.
>
> ##PP3
> I really do not see the problem. As you stated above, repeating the same
> FAD for multiple algos would be inefficient. The beauty of FAD is that
> it is app independent and can be used by many of them.
>
> If you like to repeat it, fine; it will still work. But we do not want to
> mandate that in the spec.
>

KT3> There is currently no normative text in draft-ietf-lsr-flex-algo that
specifies that an implementation needs to support a "per FlexAlgo
forwarding mechanism" computation for each algo. So when this clarification
is added, can it be a MAY or perhaps a SHOULD, so that an implementation
has the choice not to do this and still remains compliant with the spec?

Thanks,
Ketan


>
>
> thanks,
> Peter
>
> >
> > Thanks,
> > Ketan
>
>