Re: [Lsr] Working Group Last Call for draft-ietf-lsr-ip-flexalgo-04 - "IGP Flexible Algorithms (Flex-Algorithm) In IP Networks"

Ketan Talaulikar <ketant.ietf@gmail.com> Wed, 13 April 2022 13:56 UTC

From: Ketan Talaulikar <ketant.ietf@gmail.com>
Date: Wed, 13 Apr 2022 19:26:26 +0530
Message-ID: <CAH6gdPzQ-nPuwoMm1HpK8b6Fxbh=MkD-+D01DLT2DV4QUHc6TA@mail.gmail.com>
To: Peter Psenak <ppsenak@cisco.com>
Cc: "Acee Lindem (acee)" <acee=40cisco.com@dmarc.ietf.org>, "lsr@ietf.org" <lsr@ietf.org>, "draft-ietf-lsr-ip-flexalgo@ietf.org" <draft-ietf-lsr-ip-flexalgo@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/lsr/TZbG6EdSiYgd3IRwWlEOGBmaB0I>
Subject: Re: [Lsr] Working Group Last Call for draft-ietf-lsr-ip-flexalgo-04 - "IGP Flexible Algorithms (Flex-Algorithm) In IP Networks"

Hi Peter,

I would still reiterate the need to clarify the usage of the "application"
terminology in the base FlexAlgo spec. We don't need to call it
"data-plane"; I was suggesting "forwarding mechanism", but it could be
something else as well.

Just my 2c

Thanks,
Ketan


On Wed, Apr 13, 2022 at 2:35 PM Peter Psenak <ppsenak@cisco.com> wrote:

> Hi Ketan,
>
> please see inline (##PP4):
>
>
> On 13/04/2022 10:52, Ketan Talaulikar wrote:
> > Hi Peter,
> >
> > I will not press this point further if I am the only one who finds this
> > to be complexity without any benefit. :-)
> >
> > Please check inline below for some clarifications with KT3.
> >
> >
> > On Wed, Apr 13, 2022 at 12:47 PM Peter Psenak <ppsenak@cisco.com> wrote:
> >
> >     Hi Ketan,
> >
> >
> >     please see inline (##PP3):
> >
> >     On 13/04/2022 06:00, Ketan Talaulikar wrote:
> >      > Hi Peter,
> >      >
> >      > Please check inline below with KT2. I am trimming everything
> >      > other than the one point of continuing debate.
> >      >
> >      >      >      >
> >      >      >      > 2) The relationship between the algo usage for IP
> >      >      >      > FlexAlgo and other data planes (e.g. FlexAlgo with
> >      >      >      > SR) is not very clear. Complications arise when the
> >      >      >      > algo usage for IP FlexAlgo overlaps with other (say
> >      >      >      > SR) data planes, since the FAD is shared but the node
> >      >      >      > participation is not. While Sec 9 suggests that we
> >      >      >      > can work through these complications, I question the
> >      >      >      > need for such complexity. The FlexAlgo space is large
> >      >      >      > enough to allow it to be shared between various data
> >      >      >      > planes without overlap. My suggestion would be to
> >      >      >      > neither carve out parallel algo spaces within IGPs
> >      >      >      > for various types of FlexAlgo data planes nor allow
> >      >      >      > the same algo to be used by both IP and SR data
> >      >      >      > planes, so that we have a single topology computation
> >      >      >      > in the IGP for a given algo based on its FAD and data
> >      >      >      > plane participation; then, when it comes to prefix
> >      >      >      > calculation, the results could involve programming of
> >      >      >      > entries in the respective forwarding planes based on
> >      >      >      > the signaling of the respective prefix
> >      >      >      > reachabilities. The coverage of these aspects in a
> >      >      >      > dedicated section upfront will help.
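
(Restating the suggestion above as a rough Python sketch, purely for
illustration -- all names and values here are made up and nothing below is
text from either draft: one computation per algo, a single forwarding
mechanism bound to each algo, and prefix programming driven by what is
signaled for that algo.)

    # Illustrative sketch only; names and values are hypothetical.
    def spf(nodes):
        # Stand-in for the real constrained SPF; just returns the node set.
        return set(nodes)

    # Each algo is bound to exactly one forwarding mechanism / data plane.
    algo_data_plane = {128: "SR", 129: "IP"}

    # Nodes advertising participation, per algo.
    algo_participation = {
        128: {"R1", "R2", "R3"},
        129: {"R1", "R2", "R4"},
    }

    for algo, nodes in algo_participation.items():
        reachable = spf(nodes)            # single computation per algo
        plane = algo_data_plane[algo]     # one forwarding plane per algo
        print(f"algo {algo}: program {plane} entries for {sorted(reachable)}")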
> >      >      >
> >      >      >     ##PP
> >      >      >     I strongly disagree.
> >      >      >
> >      >      >     FAD is data-plane/app independent. Participation is
> >      >      >     data-plane/app dependent. The base flex-algo
> >      >      >     specification is very clear about that. That has
> >      >      >     advantages and we do not want to modify that part.
> >      >      >
> >      >      >
> >      >      > KT> No issue with this part.
> >      >      >
> >      >      >
> >      >      >     Topology calculation for algo/data-plane needs to take
> >      >      >     both FAD and participation into account. You need
> >      >      >     independent calculation for each data-plane/app in the
> >      >      >     same algo.
> >      >      >
> >      >      >
> >      >      > KT> So, an implementation now needs to potentially support
> >      >      > performing multiple topology computations for each algo.
> >      >      > This is a complication for which I do not see the
> >      >      > justification. Why not just pick different algorithms for
> >      >      > different data planes for those (rare?) deployments where
> >      >      > someone wants multiple data planes?
> >      >
> >      >     ##PP2
> >      >     The flex-algo architecture supports multiple apps/data-planes
> >      >     per algo, with unique participation per app/data-plane. That
> >      >     requires a per-algo, per-app/data-plane calculation. What is
> >      >     complicated about it?
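
(For clarity, a rough Python sketch of my reading of the model described
above -- all names and values here are made up and nothing below is text
from either draft: the FAD is shared per algo, participation is advertised
per app/data-plane, so each (algo, app) pair gets its own computation.)

    # Illustrative sketch only; names and values are hypothetical.
    def spf(nodes):
        # Stand-in for the real constrained SPF; just returns the node set.
        return set(nodes)

    # One FAD per algo, shared by every app/data-plane.
    fad = {128: "metric-type=TE, exclude-affinity=RED"}

    # Participation is advertised per app/data-plane within the algo.
    participation = {
        128: {"SR": {"R1", "R2", "R3"}, "IP": {"R1", "R2"}},
    }

    for algo, per_app in participation.items():
        for app, nodes in per_app.items():
            reachable = spf(nodes)        # one computation per (algo, app)
            print(f"algo {algo}/{app}, FAD '{fad[algo]}': {sorted(reachable)}")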
> >      >
> >      >
> >      > KT2> This specific and precise statement that you have provided
> >      > is not covered in either draft-ietf-lsr-flex-algo or this
> >      > document. For starters, this needs to be clarified and covered so
> >      > that it gets the attention of any reader during the review. This
> >      > has implications for implementations.
> >
> >     ##PP3
> >     Sure, we can add it explicitly there, but if you read the base
> >     flex-algo draft carefully, it is quite clear. I will add that exact
> >     statement in the next re-spin of the base spec.
> >
> >
> > KT3> Thanks. I think we may also need to carefully scrub the use of the
> > term "application" since it seems to invite different interpretations
> > thanks to the "application" in ASLA. It is better if we use the term
> > "application" only with the same semantics as in ASLA - this means that
> > FlexAlgo is a single "application". We can perhaps use the term "traffic
> > flows" or "service flows" as an alternative to "application flows" for
> > the flows that are steered over or use a FlexAlgo. And then, when it
> > comes to Node Participation in a FlexAlgo, we could use the term
> > "FlexAlgo Forwarding Mechanism" instead of "Applications' Forwarding for
> > FlexAlgo". Thoughts?
>
> ##PP4
> The term "application" has been used in the base flex-algo spec from day
> one. It was chosen because it is generic enough to describe whatever
> flex-algo may be used for down the road. We could have used 'data-plane'
> instead, but that could be quite restrictive IMHO.
>
>
> >
> >      >
> >      >
> >      >     If your implementation does not want to support it, fine, but
> >      >     the architecture allows it and there is/are implementation(s)
> >      >     that already support it. This is not defined in this draft;
> >      >     it is defined in the base flex-algo spec.
> >      >
> >      >
> >      > KT2> I am not sure it is really optional for implementations once
> >      > it is in the specification. And this is not about "my"
> >      > implementation :-). The fact that some implementations can do (or
> >      > already do) it does not by itself mean it should be in the
> >      > specification. That determination needs to be based on the
> >      > tradeoff between requiring multiple computations per algo and the
> >      > potential benefit or use case that is enabled by it.
> >
> >     ##PP3
> >     Again, this is how things have been defined from day one, and for a
> >     good reason. Requiring a per-app flex-algo even though I want to use
> >     the same metric and constraints for both apps would be inefficient.
> >
> >
> > KT3> For my understanding, the only inefficiency that you are referring
> > to with the "separate algo per FlexAlgo forwarding mechanism" is a
> > duplicate FAD advertisement. Am I missing anything else?
>
> ##PP4
> Right. But the point is that nothing in the architecture itself prevents
> multiple apps from using the same algo. And I see no good reason for such
> a restriction.
> >
> >
> >      >
> >      >
> >      >
> >      >      >
> >      >      >
> >      >      >     The fact that the same FAD is shareable between all
> >      >      >     apps has its advantages and use cases - e.g. if the
> >      >      >     participation for algo X is the same in the SR and IP
> >      >      >     data-planes, one can use SR to protect IP in that algo.
> >      >      >
> >      >      >
> >      >      > KT> Would this protection use case not violate the base
> >      >      > FlexAlgo rule that the protection has to remain within the
> >      >      > specific topology? If there is an SR data plane, then why
> >      >      > would one want an IP data plane as well?
> >      >
> >      >     ##PP2
> >      >     If the participation in two apps/data-planes is the same for
> >      >     the algo, the resulting topology is the same. If your
> >      >     implementation is smart, it can run just a single computation
> >      >     for that case. There is no violation here whatsoever.
> >      >
> >      >
> >      > KT2> If the resulting topology is the same between the SR and IP
> >      > data planes, what is the need to enable the IP data plane? Why
> >      > not just steer the IP traffic over the FlexAlgo data plane? And
> >      > when it is not the same topology, then we cannot really do the
> >      > protection for IP FlexAlgo using SR FlexAlgo. So what is really
> >      > the use case or benefit of enabling this?
> >
> >     ##PP3
> >     I just gave you an example where this might be useful. You may not
> >     like it, but it will have no impact on the defined architecture.
> >
> >
> > KT3> Ack - we can agree to disagree on this.
> >
> >
> >      >
> >      >
> >      >
> >      >
> >      >      > IP forwarding can be steered over the SR-based FlexAlgo
> >      >      > topology along with the protection provided by it. Am I
> >      >      > missing something?
> >      >
> >      >     ##PP2
> >      >     The topology for both the primary and the backup computation
> >      >     must be the same.
> >      >
> >      >
> >      > KT2> I see the primary use case for IP FlexAlgo (or another data
> >      > plane) to be that the data plane is used by itself. In the
> >      > (rare?) case where multiple data planes are required to coexist,
> >      > it is simpler, from both an implementation and a deployment POV,
> >      > to use different algos. It would be good to have operator input
> >      > here. The only cost that I see for this is that the same FAD may
> >      > get advertised twice, and only in the case where it is identical
> >      > for multiple data planes. So I am still not seeing the benefit of
> >      > enabling multiple (i.e. per-data-plane) computations for a single
> >      > algo rather than just keeping a single computation per algo,
> >      > where a single data plane is associated with a specific algo.
> >
> >     ##PP3
> >     I really do not see the problem. As you stated above, repeating the
> >     same FAD for multiple algos would be inefficient. The beauty of the
> >     FAD is that it is app independent and can be used by many of them.
> >
> >     If you would like to repeat it, fine, it will still work. But we do
> >     not want to mandate that in the spec.
> >
> >
> > KT3> There is currently no normative text in draft-ietf-lsr-flex-algo
> > that specifies that an implementation needs to support a "per FlexAlgo
> > forwarding mechanism" computation for each algo. So when this
> > clarification is added, can it be a MAY or perhaps a SHOULD, so that an
> > implementation has the choice not to do this and still remain compliant
> > with the spec?
>
> ##PP4
> I'm fine to make that optional.
>
> thanks,
> Peter
> >
> > Thanks,
> > Ketan
> >
> >
> >
> >     thanks,
> >     Peter
> >
> >      >
> >      > Thanks,
> >      > Ketan
> >
>
>