Re: [Lsr] https://tools.ietf.org/html/draft-wang-lsr-prefix-unreachable-annoucement-05

Robert Raszuk <robert@raszuk.net> Wed, 10 March 2021 10:29 UTC

In-Reply-To: <14e8038e-338f-599e-3c40-fdaac247fc10@cisco.com>
From: Robert Raszuk <robert@raszuk.net>
Date: Wed, 10 Mar 2021 11:29:05 +0100
Message-ID: <CAOj+MMF4xdh2TsMWVEmw_qUxxTS-zFbtE4xK8-cL-cw3xmcrgg@mail.gmail.com>
To: Peter Psenak <ppsenak@cisco.com>
Cc: Tony Li <tony.li@tony.li>, Gyan Mishra <hayabusagsm@gmail.com>, Aijun Wang <wangaijun@tsinghua.org.cn>, Aijun Wang <wangaj3@chinatelecom.cn>, lsr <lsr@ietf.org>, "Acee Lindem (acee)" <acee@cisco.com>, draft-wang-lsr-prefix-unreachable-annoucement <draft-wang-lsr-prefix-unreachable-annoucement@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/lsr/iU8DUV_a-nMsAaHOasjgG9xp-QA>

Peter,

> But suddenly the DOWN event distribution is considered
> problematic. Not sure I follow.

For routing and IP reachability we use p2mp distribution and flooding, as
they are required to provide any-to-any connectivity.

Such a spray model no longer fits services, where not every endpoint
participates in every service.

So my point is that just because the transport is ready, we should not
continue to announce either good or bad news for services in a spray
fashion.

Sure it works, but it is hardly good design or sound architecture.

It happened to BGP: the convenience of already having TCP sessions between
nodes was so great that we piled loads of stuff on top of basic routing
reachability.

And now, it seems, the time has come to do the same with IGPs :).

I think that unless we stop and define a real pub-sub messaging protocol
(like Facebook does with Open/R), we will continue down this path.
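To make the contrast concrete, here is a minimal sketch of the two distribution models being debated: flooding sprays a PE-down event to every node, while pub/sub delivers it only to nodes that subscribed. All names (classes, topics, PE labels) are illustrative, not taken from Open/R or any IGP implementation:

```python
from collections import defaultdict

class FloodingFabric:
    """Spray model: every node receives every event, interested or not."""
    def __init__(self):
        self.nodes = set()

    def join(self, node):
        self.nodes.add(node)

    def publish(self, event):
        # Delivered to all joined nodes, regardless of interest.
        return {node: event for node in self.nodes}

class PubSubFabric:
    """Pub/sub model: a node receives only events for topics it subscribed to."""
    def __init__(self):
        self.subs = defaultdict(set)  # topic -> set of subscriber nodes

    def subscribe(self, node, topic):
        self.subs[topic].add(node)

    def publish(self, topic, event):
        # Delivered only to subscribers of this topic.
        return {node: event for node in self.subs[topic]}

flood = FloodingFabric()
pubsub = PubSubFabric()
for pe in ("PE1", "PE2", "PE3", "PE4", "PE5"):
    flood.join(pe)

# Only PE2 and PE3 share a service with PE1, so only they subscribe.
pubsub.subscribe("PE2", "pe1-reachability")
pubsub.subscribe("PE3", "pe1-reachability")

down_event = {"prefix": "PE1-loopback", "state": "down"}
print(len(flood.publish(down_event)))                       # all 5 PEs get the bad news
print(len(pubsub.publish("pe1-reachability", down_event)))  # only the 2 interested PEs
```

The point of the sketch is only the fan-out difference: in the flooding case every PE keeps state it may never use, while in the pub/sub case the DOWN event reaches exactly the endpoints that expressed interest.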

And to me it is like building a tower of cards ... the higher you go, the
more likely the entire tower is to collapse.

Cheers,
R.

PS.

> with MPLS, the loopback address of all PEs is advertised everywhere.

Is this a feature, or a day-one design bug later fixed by RFC 5283?




On Wed, Mar 10, 2021 at 9:10 AM Peter Psenak <ppsenak@cisco.com> wrote:

> Robert,
>
>
> On 09/03/2021 19:30, Robert Raszuk wrote:
> > Hi Peter,
> >
> >      > Example 1:
> >      >
> >      > If the session to PE1 goes down, withdraw all RDs received
> >      > from that PE.
> >
> >     still dependent on RDs, and BGP-specific.
> >
> >
> > To me this does sound like a feature ... to you I think it was rather
> > pejorative.
>
> not sure I understand your point with "pejorative"...
>
> There are other ways to provide services outside of BGP - think GRE,
> IPsec, etc. The solution should cover them all.
>
> >
> >     We want an app-independent way of
> >     signaling the reachability loss. In the end, that's what IGPs do
> >     without summarization.
> >
> >
> > Here you go. I suppose you just drafted the first use case for OSPF
> > Transport Instance.
>
> you said it, not me.
>
>
> >
> > I suppose you just run a new ISIS or OSPF instance and flood info about
> > PE down events to all other instance nodes (hopefully just PEs and no
> > Ps, as such a plane would be an OTT one). Still, you will be flooding
> > this to 100s of PEs which may never need this information at all, which
> > I think is the main issue here. Such bad news IMHO should be distributed
> > on a pub/sub basis only. First you subscribe, then you get updates ...
> > not get everything and then keep the junk till it gets removed or
> > expires.
>
> with MPLS, the loopback address of all PEs is advertised everywhere. So
> you keep the state while the remote PE loopback is up, and you get a
> state withdrawal when the remote PE loopback goes down.
>
> In SRv6, with summarization we can reduce the amount of UP state to a
> minimum. But suddenly the DOWN event distribution is considered
> problematic. Not sure I follow.
>
> thanks,
> Peter
>
> >
> > Many thx,
> > Robert
> >
>
>