Re: [Anima] ANIMA when there is a system-wide issue

Michael Richardson <mcr+ietf@sandelman.ca> Thu, 11 February 2021 22:40 UTC

From: Michael Richardson <mcr+ietf@sandelman.ca>
To: Toerless Eckert <tte@cs.fau.de>, Anima WG <anima@ietf.org>
In-Reply-To: <20210211201910.GA48871@faui48f.informatik.uni-erlangen.de>
References: <136aa329-41a5-8b65-ef9e-fadf089696eb@gmail.com> <704b66e9-d41c-f7e9-7e4b-f2d934ec9158@gmail.com> <PR3PR07MB68265F26A2CFB818D9CFDFBCF3C30@PR3PR07MB6826.eurprd07.prod.outlook.com> <20210128160356.GB54347@faui48f.informatik.uni-erlangen.de> <17274.1611866107@localhost> <20210211201910.GA48871@faui48f.informatik.uni-erlangen.de>
Date: Thu, 11 Feb 2021 17:40:25 -0500
Message-ID: <18842.1613083225@localhost>
Archived-At: <https://mailarchive.ietf.org/arch/msg/anima/imR-UtHOIzTH3biMlokUNJyhlRY>
Subject: Re: [Anima] ANIMA when there is a system-wide issue

Toerless Eckert <tte@cs.fau.de> wrote:
    > draft-ACP section 7 explains how to implement ACP on any low-end
    > L2-only as well as L2/L3 device so that the device can operate ACP as
    > if it were just an L3 device, while in parallel operating unchanged on
    > some or all ports as an L2 switch with STP: ACP packets are
    > software-routed and software IPsec en/decrypted.

}   Predictable scaling requirements for ACP neighbors can most easily be
}   achieved if in topologies such as these, ACP capable L2 switches can
}   ensure that discovery messages terminate on them so that neighboring
}   ACP routers and switches will only find the physically connected ACP
}   L2 switches as their candidate ACP neighbors.

So your solution is to have L2 switches not forward based upon the L3 multicast
address.  That address is deterministically mapped to a unique L2 multicast MAC,
so maybe this can work, but, as you say below, that's not primarily how
punting has been done in the past.
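For reference, the deterministic mapping in question is the RFC 2464 rule: an IPv6 multicast address maps to the Ethernet MAC 33:33 followed by the low-order 32 bits of the address. A minimal sketch (the ff02::13 group is ALL_GRASP_NEIGHBORS from RFC 8990, which DULL GRASP discovery uses):

```python
import ipaddress

def ipv6_multicast_to_mac(addr: str) -> str:
    """RFC 2464 section 7: 33:33 plus the low-order 32 bits
    of the IPv6 multicast address."""
    low32 = ipaddress.IPv6Address(addr).packed[-4:]
    return "33:33:" + ":".join(f"{b:02x}" for b in low32)

# ALL_GRASP_NEIGHBORS link-local multicast group (RFC 8990)
print(ipv6_multicast_to_mac("ff02::13"))  # 33:33:00:00:00:13
```

So a switch filtering on that one MAC would catch DULL GRASP discovery, but nothing distinguishes it from any other group whose low 32 bits collide.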


    > The magic to this is also explained: it's simply to per-port punt DULL
    > GRASP packets, and later the encapsulated ACP packets, to the CPU, as is
    > done today for STP, CDP, LLDP, and EAPOL packets. The rest is then a
    > standard ACP implementation, as on any L3 ACP device.

okay.
My only request is that we avoid adding a new thing to the list of per-port
things, because they take up ASIC / NPU space.

    > In my past implementation experience, the punt option that always works
    > is one where you punt a specific ethertype.

Great, so one of the options I proposed is to have a new ethertype for
IPv6-DULL messages, as you also say below.

    > This is so because all the
    > L2 switch hardware was designed based on supporting 802.1X, and that is
    > where LLDP and EAPOL came into play, and the minimum HW selector to
    > decide between punt/forward/block was simply the ethertype.

    > Aka: The simplest extension we would need to drastically improve the
    > ability for ACP to get into lucrative markets is to define an ACP
    > encap using its own ethertype.  That is primarily an issue of finding,
    > I think, 2000 USD and arguing with the ethertype registry owner about
    > the long-term value of that ethertype.

I think that the IEEE/IETF liaison can get us an ethertype, given an
appropriate standards-track draft.
Tell me if you think this is within the ANIMA WG's charter.
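For concreteness, the ethertype-keyed punt decision being proposed might look like this sketch. The LLDP and EAPOL ethertypes are real IEEE allocations; the ACP entry is the hypothetical new allocation under discussion and its value is made up here:

```python
# Per-port punt table keyed on EtherType, as a switch forwarding
# plane might implement it.
PUNT_ETHERTYPES = {
    0x88CC: "LLDP",   # IEEE 802.1AB
    0x888E: "EAPOL",  # IEEE 802.1X
    0xB0A7: "ACP",    # HYPOTHETICAL new ACP/DULL-GRASP ethertype
}

def classify(ethertype: int) -> str:
    """Punt matching frames to the CPU; forward everything else."""
    return "punt-to-CPU" if ethertype in PUNT_ETHERTYPES else "forward"
```

The point is that this one table lookup already exists in deployed silicon, so a new ACP ethertype rides on it with no new per-port state.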

    > Forget using multicast MAC destinations. Maybe I can find the time
    > to try to remember all the horrible things that could go wrong with it.

okay.

--
Michael Richardson <mcr+IETF@sandelman.ca>   . o O ( IPv6 IøT consulting )
           Sandelman Software Works Inc, Ottawa and Worldwide