Re: [Lsr] Working Group Last Call for draft-ietf-lsr-ip-flexalgo-04 - "IGP Flexible Algorithms (Flex-Algorithm) In IP Networks"

Peter Psenak <ppsenak@cisco.com> Wed, 13 April 2022 09:05 UTC

Message-ID: <e7cc29b8-00fb-6294-5c87-4409428b8ae2@cisco.com>
Date: Wed, 13 Apr 2022 11:05:32 +0200
To: Ketan Talaulikar <ketant.ietf@gmail.com>
Cc: "Acee Lindem (acee)" <acee=40cisco.com@dmarc.ietf.org>, "lsr@ietf.org" <lsr@ietf.org>, "draft-ietf-lsr-ip-flexalgo@ietf.org" <draft-ietf-lsr-ip-flexalgo@ietf.org>
From: Peter Psenak <ppsenak@cisco.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/lsr/lubU71kugBOBCM87SfmgaGG5beo>
List-Id: Link State Routing Working Group <lsr.ietf.org>

Hi Ketan,

please see inline (##PP4):


On 13/04/2022 10:52, Ketan Talaulikar wrote:
> Hi Peter,
> 
> I will not press this point further if I am the only one who finds
> this complexity without any benefit. :-)
> 
> Please check inline below for some clarifications with KT3.
> 
> 
> On Wed, Apr 13, 2022 at 12:47 PM Peter Psenak <ppsenak@cisco.com>
> wrote:
> 
>     Hi Ketan,
> 
> 
>     please see inline (##PP3):
> 
>     On 13/04/2022 06:00, Ketan Talaulikar wrote:
>      > Hi Peter,
>      >
>      > Please check inline below with KT2. I am trimming everything
>     other than
>      > the one point of continuing debate.
>      >
>      >      >      >
>      >      >      > 2) The relationship between the algo usage for IP
>      >      >      > FlexAlgo and other data planes (e.g. FlexAlgo with
>      >      >      > SR) is not very clear. There arise complications
>      >      >      > when the algo usage for IP FlexAlgo overlaps with
>      >      >      > other (say SR) data planes, since the FAD is shared
>      >      >      > but the node participation is not shared. While Sec
>      >      >      > 9 suggests that we can work through these
>      >      >      > complications, I question the need for such
>      >      >      > complexity. The FlexAlgo space is large enough to
>      >      >      > allow it to be shared between various data planes
>      >      >      > without overlap. My suggestion would be to neither
>      >      >      > carve out parallel algo spaces within IGPs for
>      >      >      > various types of FlexAlgo data planes nor allow the
>      >      >      > same algo to be used by both IP and SR data planes,
>      >      >      > so that we have a single topology computation in
>      >      >      > the IGP for a given algo based on its FAD and data
>      >      >      > plane participation. Then, when it comes to prefix
>      >      >      > calculation, the results could involve programming
>      >      >      > of entries in respective forwarding planes based on
>      >      >      > the signaling of the respective prefix
>      >      >      > reachabilities. The coverage of these aspects in a
>      >      >      > dedicated section upfront will help.
>      >      >
>      >      >     ##PP
>      >      >     I strongly disagree.
>      >      >
>      >      >     FAD is data-plane/app independent. Participation is
>      >      >     data-plane/app dependent. The base flex-algo
>      >      >     specification is very clear about that. That has
>      >      >     advantages and we do not want to modify that part.
>      >      >
>      >      >
>      >      > KT> No issue with this part.
>      >      >
>      >      >
>      >      >     Topology calculation for an algo/data-plane needs to
>      >      >     take both FAD and participation into account. You
>      >      >     need an independent calculation for each
>      >      >     data-plane/app in the same algo.
>      >      >
>      >      >
>      >      > KT> So, an implementation now needs to potentially
>      >      > support performing multiple topology computations for
>      >      > each algo. This is a complication for which I do not see
>      >      > the justification. Why not just pick different algorithms
>      >      > for different data planes for those (rare?) deployments
>      >      > where someone wants multiple data planes?
>      >
>      >     ##PP2
>      >     the flex-algo architecture supports multiple
>      >     apps/data-planes per algo, with unique participation per
>      >     app/data-plane. That requires a per-algo, per-app/data-plane
>      >     calculation. What is complicated about it?
>      >
>      >
>      > KT2> This specific and precise statement that you have
>      > provided is not covered in either draft-ietf-lsr-flex-algo or
>      > this document. For starters, this needs to be clarified and
>      > covered so that it gets the attention of any reader during the
>      > review. This has implications for implementations.
> 
>     ##PP3
>     sure we can add it explicitly there, but if you read the base flex-algo
>     draft carefully, it is quite clear. I will add that exact statement in
>     the next re-spin of the base spec.
> 
> 
> KT3> Thanks. I think we may also need to carefully scrub the use of
> the term "application" since it seems to bring out different
> interpretations thanks to the "application" in ASLA. It is better if
> we use the term "application" only with the same semantics as in
> ASLA; this means that FlexAlgo is a single "application". We can
> perhaps use the term "traffic flows" or "service flows" as an
> alternative to "application flows" that are steered over or use a
> FlexAlgo. And then, when it comes to Node Participation in a
> FlexAlgo, we could use the term "FlexAlgo Forwarding Mechanism"
> instead of "Applications' Forwarding for FlexAlgo". Thoughts?

##PP4
the term 'application' has been used in the base flex-algo spec from 
day one. It was chosen because it is generic enough to describe 
whatever flex-algo may be used for down the road. We could have used 
'data-plane' instead, but that would be quite restrictive IMHO.


> 
>      >
>      >
>      >     If your implementation does not want to support it, fine,
>      >     but the architecture allows it and there are
>      >     implementations that already support it. This is not
>      >     defined in this draft; it's defined in the base flex-algo
>      >     spec.
>      >
>      >
>      > KT2> I am not sure it is really optional for an implementation
>      > once it is in the specification. And this is not about "my"
>      > implementation :-). The fact that some implementations can do
>      > (or already do) it does not mean it should be in the
>      > specification. The determination of whether it should be in a
>      > specification needs to be based on the tradeoff between
>      > requiring multiple computations per algo and the potential
>      > benefit or use case that is enabled by it.
> 
>     ##PP3
>     again, this is how things have been defined from day one, and
>     for a good reason. Requiring a separate flex-algo per app even
>     though I want to use the same metric and constraints for both
>     apps would be inefficient.
> 
> 
> KT3> For my understanding, the only inefficiency that you are referring 
> to with the "separate algo per FlexAlgo forwarding mechanism" is a 
> duplicate FAD advertisement. Am I missing anything else?

##PP4
right. But the point is that nothing in the architecture itself 
prevents multiple apps from using the same algo, and I see no good 
reason for such a restriction.
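
To illustrate the sharing, here is a rough sketch (hypothetical Python, not from any draft or implementation; all names are illustrative) of how a single shared FAD plus per-app participation can drive per-(algo, app) computations, with the computation deduplicated whenever two apps advertise identical participation for an algo:

```python
# Hypothetical sketch: shared FAD per algo, per-app participation,
# and dedup of identical per-(algo, app) topology computations.
from dataclasses import dataclass

@dataclass(frozen=True)
class FAD:
    metric_type: str            # e.g. "igp", "te", "delay"
    exclude_affinity: frozenset

# One FAD per algo, shared by every application/data-plane.
fads = {128: FAD("delay", frozenset({"red"}))}

# Participation is advertised per application (e.g. "sr", "ip").
participation = {
    ("sr", 128): frozenset({"A", "B", "C"}),
    ("ip", 128): frozenset({"A", "B", "C"}),  # identical to SR here
}

def compute_topology(fad, nodes):
    # Stand-in for the real constrained SPF over participating nodes.
    return (fad, nodes)

# Cache keyed by (algo, participation set): when two apps advertise
# the same participation for an algo, one computation serves both.
cache = {}
topologies = {}
for (app, algo), nodes in participation.items():
    key = (algo, nodes)
    if key not in cache:
        cache[key] = compute_topology(fads[algo], nodes)
    topologies[(app, algo)] = cache[key]
```

With identical participation only one computation runs; if the IP and SR participation sets differ, the cache naturally falls back to one computation per app.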
> 
> 
>      >
>      >
>      >
>      >      >
>      >      >
>      >      >     The fact that the same FAD is shareable between all
>      >      >     apps has its advantages and use cases - e.g. if the
>      >      >     participation for algo X is the same in the SR and
>      >      >     IP data-planes, one can use SR to protect IP in that
>      >      >     algo.
>      >      >
>      >      >
>      >      > KT> Would this protection use case not violate the base
>      >      > FlexAlgo rule that the protection has to remain within
>      >      > the specific topology? If there is an SR data plane,
>      >      > then why would one want an IP data plane as well?
>      >
>      >     ##PP2
>      >     if the participation in two apps/data-planes is the same
>      >     for the algo, the resulting topology is the same. If your
>      >     implementation is smart, it can run only a single
>      >     computation for that case. There is no violation here
>      >     whatsoever.
>      >
>      >
>      > KT2> If the resulting topology is the same between the SR
>      > data plane and the IP data plane, what is the need to enable
>      > the IP data plane? Why not just steer the IP traffic over the
>      > FlexAlgo data plane? And when it is not the same topology,
>      > then we cannot really do the protection for IP FlexAlgo using
>      > SR FlexAlgo. So what is really the use case or benefit for
>      > enabling this?
> 
>     ##PP3
>     I just gave you an example where this might be useful. You may not like
>     it, but it will have no impact on the defined architecture.
> 
> 
> KT3> Ack - we can agree to disagree on this.
> 
> 
>      >
>      >
>      >
>      >
>      >      > IP forwarding can be steered over the SR-based FlexAlgo
>      >      > topology along with the protection provided by it. Am I
>      >      > missing something?
>      >
>      >     ##PP2
>      >     topology for both primary and backup computation must be
>      >     the same.
>      >
>      >
>      > KT2> I see the primary use case for IP FlexAlgo (or another
>      > data plane) to be that the data plane is used by itself. In
>      > the (rare?) case where multiple data planes are required to
>      > coexist, it is simpler from both an implementation and a
>      > deployment POV to use different algos. It would be good to
>      > have operator inputs here. The only cost that I see for this
>      > is that the same FAD may get advertised twice, and only in the
>      > case where it is identical for multiple data planes. So I am
>      > still not seeing the benefit of enabling multiple (i.e. per
>      > data plane) computations for a single algo rather than just
>      > keeping it a single computation per algo, where a single data
>      > plane is associated with a specific algo.
> 
>     ##PP3
>     I really do not see the problem. As you stated above, repeating
>     the same FAD for multiple algos would be inefficient. The beauty
>     of the FAD is that it is app independent and can be used by many
>     of them.
> 
>     If you like to repeat it, fine, it will still work. But we do
>     not want to mandate that in the spec.
> 
> 
> KT3> There is currently no normative text in
> draft-ietf-lsr-flex-algo that specifies that an implementation needs
> to support a "per FlexAlgo forwarding mechanism" computation for
> each algo. So when this clarification is added, can it be a MAY or
> perhaps a SHOULD, so that an implementation has the choice not to do
> this and still remain compliant with the spec?

##PP4
I'm fine to make that optional.

thanks,
Peter
> 
> Thanks,
> Ketan
> 
> 
> 
>     thanks,
>     Peter
> 
>      >
>      > Thanks,
>      > Ketan
>