Re: [lisp] WG Last Call draft-ietf-lisp-impact-01

Ross Callon <rcallon@juniper.net> Mon, 13 April 2015 21:16 UTC

From: Ross Callon <rcallon@juniper.net>
To: Florin Coras <fcoras@ac.upc.edu>, Damien Saucez <damien.saucez@gmail.com>
Thread-Topic: [lisp] WG Last Call draft-ietf-lisp-impact-01
Date: Mon, 13 Apr 2015 21:15:46 +0000
Message-ID: <BY1PR0501MB1430DE38963719B1C8CE20BAA5E70@BY1PR0501MB1430.namprd05.prod.outlook.com>
References: <B339BFE7-7E19-4AAA-8B2C-276402024C74@gigix.net> <BY1PR0501MB14304477BFF1F86BAFC810B3A5F50@BY1PR0501MB1430.namprd05.prod.outlook.com> <5518A89C.3090108@joelhalpern.com> <BY1PR0501MB14306D728BC2F0CAE037DE58A5F50@BY1PR0501MB1430.namprd05.prod.outlook.com> <5C76220B-7DC6-46B9-8C57-A30D977FA7C8@gmail.com> <BY1PR0501MB1430FCF009B4994C006EB3A4A5FB0@BY1PR0501MB1430.namprd05.prod.outlook.com> <552C1064.9000500@ac.upc.edu>
In-Reply-To: <552C1064.9000500@ac.upc.edu>
Accept-Language: en-US
Content-Language: en-US
Archived-At: <http://mailarchive.ietf.org/arch/msg/lisp/HNKEAEYx4dypf4HUMmDvZu9EAow>
Cc: LISP mailing list list <lisp@ietf.org>, "draft-ietf-lisp-impact@tools.ietf.org" <draft-ietf-lisp-impact@tools.ietf.org>
Subject: Re: [lisp] WG Last Call draft-ietf-lisp-impact-01

> I'm pretty sure you mean the granularity of the entries cached (the 
> width of the EID-prefixes).

Yes. I have been summarizing this using the term "granularity of the cache" or "cache granularity", but of course I mean the granularity / "width" of the entries which are maintained in the cache. 

> The goal of our experiments was to understand the
> performance of LISP map-caches if edge
> networks already owning their address space (PI address owners) were to
> switch to LISP. Speculating if and how PA-owning edge networks are to
> switch to LISP was outside the scope.

Yes, I think that this text gets to the point of my biggest issue. I am concerned that people may miss this assumption in the study and assume that its results apply to a potential wider deployment of LISP.

Perhaps the note that we need in the impact document would include something like: "[CCD12] analyzes the performance of the LISP map cache if only the current edge networks already owning their address space (PI address owners) were to switch to LISP. It is not known what the performance implications would be if LISP were more widely deployed, or if the availability of LISP or other factors were to cause a significant increase in the number of PI addresses." 

Thanks, Ross

-----Original Message-----
From: Florin Coras [mailto:fcoras@ac.upc.edu] 
Sent: Monday, April 13, 2015 2:52 PM
To: Ross Callon; Damien Saucez
Cc: Joel Halpern; Luigi Iannone; LISP mailing list list; draft-ietf-lisp-impact@tools.ietf.org
Subject: Re: [lisp] WG Last Call draft-ietf-lisp-impact-01

Hi Ross,

Apologies for the belated reply; it was a busy week. First of all, 
thanks for taking the time to read the paper and for the good comments. 
More inline.

On 4/8/15 5:29 PM, Ross Callon wrote:
>> See [CCD12]; this document provides a generic model that works for
>> any de-aggregation assumption as long as the map-cache is implemented
>> with LRU.
>>
>> Damien Saucez
> [CCD12] is of course:
>
>     Coras, F., Cabellos-Aparicio, A., and J. Domingo-Pascual, "An 
> Analytical Model
>     for the LISP Cache Size", In Proc. IFIP Networking 2012, May 2012.
>
> I just finished reading this through. To me the analytic model looks 
> mostly fine, except for one issue mentioned below, but the results (in 
> terms of how large a cache is needed) depend upon the assumption 
> about the granularity of the cache.

I'm pretty sure you mean the granularity of the entries cached (the 
width of the EID-prefixes). And this is indeed so: given the fixed size 
of the 32-bit (or 128-bit) address space, granularity controls the 
number of prefixes, and thereby the number of destinations traffic could 
have. Note, though, that the relevant metric for a cache is the number 
of destinations rather than the granularity itself.
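
To make this concrete, here is a toy Python sketch (not from the paper; 
the traffic is just uniform random addresses, purely for illustration) 
showing how aggregating the same set of destination addresses at coarser 
prefix widths shrinks the number of distinct entries a map-cache could 
ever need to hold:

    import random

    random.seed(1)
    # Hypothetical destination IPs seen by an ITR (uniform random, for illustration).
    dests = [random.getrandbits(32) for _ in range(10000)]

    for width in (8, 16, 24, 32):
        mask = (0xFFFFFFFF << (32 - width)) & 0xFFFFFFFF
        entries = {d & mask for d in dests}
        print(f"/{width} granularity: {len(entries)} distinct entries "
              f"(upper bound 2**{width})")

Real traffic is of course far more clustered than uniform random 
addresses, which is exactly why the popularity distribution discussed 
below matters more than the raw count.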
>
> The last paragraph of section 2 of this paper (just before section 3)
> includes the text:
>         "... One can easily observe that the map-cache is most 
> efficient in
>     situations when destination EIDs present high temporal and/or spatial
>     locality and that its size depends upon the diversity of the visited
>     destinations. As a result, a cache's performance depends entirely on
>     its provisioned size and on traffic characteristics."
>
> Of course the desired size of the cache depends upon the likelihood,
> for a given cache size, that the next EID that comes along will match
> an entry which is already in the cache, and that in turn depends on the
> width (granularity) of the entries in the cache. [As one absurd extreme
> example, if you had a single default route in the cache, every EID
> would match it and the cache could be of size 1. As a less absurd but
> still slightly extreme example, if you could get away with nothing
> finer than a /8 in the cache, then 256 entries would be the largest
> possible cache size.]
> Thus the text is true for any given granularity of entries in the
> cache. The paper might more accurately have said "As a result, a
> cache's performance depends entirely on its provisioned size, on
> traffic characteristics, and on the granularity of cache entries".

There are two things mixed together here: i) the parameters influencing 
cache performance and ii) the assumptions needed to practically evaluate 
a cache's performance.

One of the traffic's characteristics is the number of destinations it 
has. As explained before, when dealing with EID-prefix caches, 
granularity influences the cardinality of the set of destinations, so 
its effect is indirectly considered. Therefore, I believe our statement 
is general enough.

Regarding the second point, when setting up an experiment to evaluate 
map-cache performance, one has to consider the granularity of 
EID-prefixes. As you've noted, we've used BGP-prefix granularity to fix 
the number of destinations in our experiments. Understandably, this is 
debatable, but more on that below.
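
For what it's worth, the preprocessing step amounts to something like 
the following sketch (illustrative only, with an invented stand-in for 
the ~240,000-entry iPlane prefix list): each destination address in the 
trace is mapped, via longest-prefix match, to its covering BGP prefix, 
and the cache evaluation is then driven by that prefix stream rather 
than by raw addresses.

    import ipaddress

    # Invented stand-in for the BGP prefix list used in the experiments.
    bgp_prefixes = [ipaddress.ip_network(p) for p in
                    ("192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24",
                     "198.18.0.0/15")]

    def to_eid_prefix(addr_str):
        """Longest-prefix match of a destination address against the list."""
        addr = ipaddress.ip_address(addr_str)
        covering = [p for p in bgp_prefixes if addr in p]
        return max(covering, key=lambda p: p.prefixlen) if covering else None

    trace = ["192.0.2.10", "192.0.2.77", "198.18.5.1", "203.0.113.9"]
    eid_stream = [to_eid_prefix(a) for a in trace]
    print(eid_stream)  # the map-cache is fed this prefix stream, not raw IPs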
>
> In section 4.1, the second paragraph begins:
>
>     Both for the evaluation of the working set in section 3.3 and for
>     the cache performance evaluation to be presented in section 4.3,
>     IP addresses had to be mapped to their corresponding prefixes. We
>     considered EID-prefixes to be of BGP-prefix granularity".
>
> Of course some assumption of the granularity of prefixes is needed. It 
> is clear that what we assume for the granularity affects the result. I 
> don't agree that BGP-prefix granularity is the right assumption.

I've gleaned from your previous emails that you believe aggregation of 
PA assignments hides the actual size of the EID-prefix namespace. I 
guess this is why you disagree with our assumption. The goal of our 
experiments was to understand the performance of LISP map-caches if edge 
networks already owning their address space (PI address owners) were to 
switch to LISP. Speculating if and how PA-owning edge networks are to 
switch to LISP was outside the scope.

However, an important thing to consider is that cache performance is not 
primarily driven by the number of destinations but by their 
popularity distribution. To give an absurd example: even if there are 
billions of destinations in the mapping system, if a site's users only 
visit Google and Facebook, the site's map-cache will only need 2 
entries. In fact, we've shown in [1] that the cache model can be 
entirely determined from the popularity distribution. Therefore, to 
understand a site's map-cache asymptotic scalability it is enough to 
characterize the popularity distribution of the EID-prefixes visited.

So, what does the popularity distribution look like? Since we cannot 
compute the popularity of EID-prefixes without any assumptions on 
granularity, we can instead look at web traffic as a good approximation. 
This has been shown to follow a power law, often referred to as 
Zipf-like (see [2]), i.e., most of the traffic goes to a small set of 
destinations while the majority of destinations are sent few or no 
packets at all. Note though that we obtained a similar law for our 
traces when considering EID-prefixes to be of BGP-prefix granularity, 
and this law has been shown to hold in many other fields (see [1] and 
the references therein).

All this means that cache miss rates decrease very quickly with cache 
size (see eq. (11) in [1]), but the exact 'speed' will depend on the 
characteristics of each site. In the worst case, if for some strange 
reason the popularity distribution becomes flat (i.e., all EID-prefixes 
are equally popular), then the map-cache size will have to be 
proportional to the number of EID-prefixes to ensure a reasonably good 
hit rate.
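
If it helps, the qualitative behaviour can be reproduced with a small 
LRU simulation (a toy model, not the code behind [1] or [2]; the Zipf 
exponent and prefix count are assumed values): with Zipf-like popularity 
the miss rate drops quickly as the cache grows, while with flat 
popularity it only improves in proportion to the cache size.

    import random
    from collections import OrderedDict

    def miss_rate(cache_size, weights, n_refs=200_000, seed=7):
        rng = random.Random(seed)
        prefixes = list(range(len(weights)))
        cache, misses = OrderedDict(), 0
        for dst in rng.choices(prefixes, weights=weights, k=n_refs):
            if dst in cache:
                cache.move_to_end(dst)        # hit: refresh LRU position
            else:
                misses += 1                   # miss: resolve and install mapping
                cache[dst] = True
                if len(cache) > cache_size:
                    cache.popitem(last=False) # evict least recently used entry
        return misses / n_refs

    n = 10_000                                        # assumed number of visited EID-prefixes
    zipf = [1.0 / (rank + 1) for rank in range(n)]    # Zipf-like popularity, exponent ~1
    flat = [1.0] * n                                  # worst case: flat popularity
    for size in (100, 500, 1000, 5000):
        print(size, round(miss_rate(size, zipf), 3), round(miss_rate(size, flat), 3))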

[1] F. Coras, J. Domingo, A. Cabellos. "On the scalability of LISP 
mappings caches". Technical Report. URL: 
http://personals.ac.upc.edu/fcoras/publications/2015-fcoras-scalability.pdf
[2] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker. 
"Web caching and Zipf-like distributions: Evidence and implications." 
In Proc. IEEE INFOCOM, vol. 1, pp. 126-134, 1999.

>
> The other issue that I found with the paper refers to a later sentence 
> in the same second paragraph of section 4.1:
>
>     The only preprocessing we performed was to filter out more specific
>     prefixes. Generally such prefixes are used for traffic engineering
>     purposes. However, the LISP protocol provides mechanisms for a more
>     efficient management of these functions that do not require 
> EID-prefix
>     de-aggregation.
>
> My understanding of this is that what some service providers do today
> is to announce some more specific prefixes (that match a PA prefix
> assigned to them) at some interconnection points, and some different
> more specific prefixes at other interconnection points, in order to
> draw traffic in where they want it.

If the more specifics are meant only for steering the traffic, then LISP 
offers better mechanisms for ingress TE (with only one mapping) by means 
of RLOC priority and weights. If instead you are suggesting those more 
specifics are advertised per *individual client* (which you seem to do 
below) then we're back at the point above. That is, we might be 
underestimating the size of the EID-prefix namespace. Out of curiosity, 
do you happen to know of any reference suggesting this is the norm 
rather than the exception?
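
As a rough illustration of what I mean (the addresses and values are 
invented; the per-RLOC priority and weight fields follow the semantics 
of the LISP mapping record, lowest priority preferred and weights 
splitting load among equal-priority locators), a single mapping can 
steer ingress traffic across several ETRs without de-aggregating the 
EID-prefix:

    import hashlib

    mapping = {
        "eid_prefix": "203.0.113.0/24",
        "rlocs": [
            {"rloc": "192.0.2.1",    "priority": 1, "weight": 75},   # preferred ETRs
            {"rloc": "198.51.100.1", "priority": 1, "weight": 25},
            {"rloc": "198.51.100.2", "priority": 2, "weight": 100},  # backup ETR
        ],
    }

    def select_rloc(mapping, flow_id):
        """ITR-side choice: best (lowest) priority, then weighted split by flow hash."""
        best = min(r["priority"] for r in mapping["rlocs"])
        candidates = [r for r in mapping["rlocs"] if r["priority"] == best]
        total = sum(r["weight"] for r in candidates)
        point = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % total
        for r in candidates:
            point -= r["weight"]
            if point < 0:
                return r["rloc"]

    print(select_rloc(mapping, "10.1.1.1->203.0.113.9:443"))

Changing the weights in the single mapping shifts the ingress traffic 
split, which is the job the de-aggregated more specifics do in BGP today.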

> If with LISP you were to map all of these prefixes in one mapping 
> table entry then you would be mapping them all to the same RLOC (or 
> set of RLOCs). Ignoring the issue that any one large ISP could have 
> thousands of PE routers (with thousands more CE routers attached to 
> them), if all of these were mapped to one RLOC then you would not be 
> able to do traffic engineering on a finer scale (unless you were to do 
> deep packet inspection, which I believe is not the plan). Thus I 
> understand how LISP might potentially prevent these more specific 
> prefixes from being advertised into the global routing table, but they 
> will still need to be in the mapping table.
>
Here you seem to assume that all of the ISP's PA clients will turn on 
LISP on their CPEs (I really hope you're right ;-) ). If that happens, 
yes, all their prefixes will need to be registered in the mapping 
system. But again, what matters is popularity, not the absolute number 
of EID-prefixes. That is, there's nothing saying that a site sending 
traffic to the PA prefix will have to store all of the ISP's clients' 
prefixes. Most probably it will cache just one.

Also, there's an alternative here. The ISP could turn on LISP at its 
upstream ASBRs, and its customers, since they use PA space, would be 
unaffected. In this case, any site communicating with the ISP's clients 
would need to store just one mapping, and the ISP would take care of 
routing the decapsulated packets to their intended destinations. This is 
somewhat similar to Instagram and Netflix being EIDs in Amazon's 
datacenters.
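
In map-cache terms the difference looks roughly like this (an invented 
/15 aggregate and client prefixes, purely to show the orders of 
magnitude):

    # LISP at the ISP's upstream ASBRs: remote sites cache one mapping for the
    # whole PA aggregate.
    coarse = {"198.18.0.0/15": ["isp-asbr-1", "isp-asbr-2"]}

    # LISP on every customer CPE: up to one cache entry per active client prefix.
    fine = {f"198.{18 + i // 256}.{i % 256}.0/24": [f"cpe-{i}"] for i in range(512)}

    print(len(coarse), "entry vs up to", len(fine), "entries for the same address space")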

Thanks,
Florin
> Thanks, Ross
>
> -----Original Message-----
> From: Damien Saucez [mailto:damien.saucez@gmail.com]
> Sent: Thursday, April 02, 2015 3:16 AM
> To: Ross Callon
> Cc: Joel Halpern; Luigi Iannone; LISP mailing list list; 
> draft-ietf-lisp-impact@tools.ietf.org
> Subject: Re: [lisp] WG Last Call draft-ietf-lisp-impact-01
>
>
> On 30 Mar 2015, at 03:50, Ross Callon <rcallon@juniper.net> wrote:
>
>> I suppose that there are two options:
>>
>> 1. Admit that we have no clue what the cache size or control overhead 
>> will be.
>>
> See [CCD12]; this document provides a generic model that works for
> any de-aggregation assumption as long as the map-cache is implemented
> with LRU.
>
> Damien Saucez
>
>> 2. Repeat the cited studies, but with a pessimistic (worst case) 
>> rather than optimistic (better than best case) assumption about cache 
>> granularity.
>>
>> The second option would allow us to actually have some idea what 
>> would happen if LISP were deployed on an Internet-wide scale, but 
>> would of course take more time and more work.
>>
>> Ross
>>
>> PS: I will be offline at meetings all of this week, so I might be a 
>> bit slower to respond for the next few days.
>>
>> -----Original Message-----
>> From: Joel M. Halpern [mailto:jmh@joelhalpern.com]
>> Sent: Sunday, March 29, 2015 9:36 PM
>> To: Ross Callon; Luigi Iannone; LISP mailing list list
>> Cc: draft-ietf-lisp-impact@tools.ietf.org
>> Subject: Re: [lisp] WG Last Call draft-ietf-lisp-impact-01
>>
>> Putting aside the TBD (which of course needs to be fixed),  I have
>> trouble figuring out what you want us to say about the main issue in
>> your review.  On the one hand, this is the very issue that we have been
>> asked to comment on, and on the other hand you say that we don't know.
>> What do you think it is reasonable to say?
>>
>> Yours,
>> Joel
>>
>> On 3/29/15 9:03 PM, Ross Callon wrote:
>>> Generally I think that this document needs more work before it will be
>>> ready to submit for publication. Some comments on 
>>> draft-ietf-lisp-impact-01:
>>>
>>> First of all, I assume that the TBD at the end of section 3 will be
>>> fixed. This reads:
>>>
>>>   TBD: add a paragraph to explain the operational difference while
>>>   dealing with a pull model instead of a push.
>>>
>>> Also in section 3, the third paragraph begins:
>>>
>>>   In addition, Iannone and Bonaventure [IB07] show that the number of
>>>   mapping entries that must be handled by an ITR of a campus network
>>>   with 10,000 users is limited to few tens of thousands, and does not
>>>   represent more than 3 to 4 Megabytes of memory.
>>>
>>> Reference [IB07] is of course:
>>>
>>>     Iannone, L. and O. Bonaventure, "On the cost of caching
>>>     locator/id mappings", In Proc. ACM CoNEXT 2007, December 2007.
>>>
>>> This paper states:
>>>
>>>    In our analysis, we assume that the granularity of the
>>>    EID-to-RLOC mapping is the prefix blocks assigned by
>>>    RIRs. We call it /BGP granularity. In particular, we
>>>    used the list of prefixes made available by the iPlane
>>>    Project [15], containing around 240,000 entries. Using
>>>    /BGP granularity means that each EID is first mapped
>>>    on a /BGP prefix. The cache will thus contain /BGP
>>>    to RLOC mappings. This is a natural choice, since
>>>    routing locators are supposed to be border routers.
>>>
>>> The authors should be aware that there is some aggregation /
>>> summarization being done in the operation of BGP routing, and that the
>>> granularity of routes which appear in the default-free BGP routing
>>> tables is fundamentally different from the granularity of enterprise
>>> network / ISP boundaries across which traffic is exchanged.
>>>
>>> The same paragraph cites Kim et al. [KIF11], which is Kim, J.,
>>> Iannone, L., and A. Feldmann, "Deep Dive into the LISP Cache and What
>>> ISPs Should Know about It", In Proc. IFIP Networking 2011, May 2011.
>>> From this document:
>>>
>>>    In addition, we use a local BGP prefixes database, fed with the
>>>    list of BGP prefixes published by the iPlane Project [17]. The
>>>    database is used to group EID-to-RLOCs mappings with the
>>>    granularity of existing BGP prefixes, because, as for today, there
>>>    is no sufficient information to predict what will be the
>>>    granularity of mappings in a LISP-enabled Internet.
>>>
>>> I agree that "there is not sufficient information to predict what will
>>> be the granularity of mappings in a LISP-enabled Internet". However, 
>>> not
>>> knowing what the mapping granularity will be does not justify using an
>>> extremely optimistic guess, and then acting as if the results are
>>> meaningful. These assumptions are clearly off by some number of orders
>>> of magnitude, but how many orders of magnitude is unknown. We will note
>>> that the current internet default-free routing table includes a few
>>> hundred thousand entries (roughly twice the 240,000 entries that 
>>> existed
>>> when this study was done).
>>>
>>> For example, we might assume that the intended global deployment model
>>> involves xTRs at the boundary between enterprise networks and service
>>> providers, and might note that there are several million companies in
>>> the USA alone (most of these are relatively small companies, of 
>>> course).
>>> Thus there may be very roughly on the order of a hundred million or so
>>> companies worldwide. If each one had a separate entry in the mapping
>>> table, then the number of entries will be nearly 1,000 times larger 
>>> than
>>> BGP-prefix granularity.
>>>
>>> Section 4 mentions as one possible use of LISP: "enable mobility of
>>> subscriber end points". If individual end points are advertised into
>>> LISP, then the granularity of the mapping table may be on the order of
>>> individual systems. In this case the number of mapping table entries
>>> that could exist globally might be on the same order of magnitude as 
>>> the
>>> number of people in the world, or very roughly 7 Billion entries. This
>>> would suggest that the mapping table might be roughly 30,000 times 
>>> finer
>>> grained than was assumed in the referenced studies.
>>>
>>> I don't see how we can accurately predict the control plane load of 
>>> LISP
>>> without some sense for what the granularity of the mapping table will
>>> be. It should however be possible to bound the control plane load. The
>>> referenced studies give a lower bound on possible control plane load 
>>> (it
>>> won't be any less), but give neither an accurate measurement nor an
>>> upper bound on the potential control plane load. I don't think that the
>>> document can claim to explain the impact of LISP without there being an
>>> attempt to measure an upper bound on the control plane load.
>>>
>>> Finally, perhaps I missed it but I didn't see any discussion of the
>>> volume of overhead related to OAM traffic used for liveness detection
>>> (the need for ITRs to determine the reachability of ETRs).
>>>
>>> Thanks, Ross
>>>
>>> From: lisp [mailto:lisp-bounces@ietf.org] On Behalf Of Luigi Iannone
>>> Sent: Monday, March 09, 2015 10:22 AM
>>> To: LISP mailing list list
>>> Cc: draft-ietf-lisp-impact@tools.ietf.org
>>> Subject: [lisp] WG Last Call draft-ietf-lisp-impact-01
>>>
>>> Hi All,
>>>
>>> the authors of the LISP Impact document
>>> [https://tools.ietf.org/html/draft-ietf-lisp-impact-01] requested the
>>> Working Group Last Call.
>>>
>>> This email starts a WG Last Call, to end March 30th, 2015.
>>>
>>> Because the pre-meeting period is usually already overloaded, the LC
>>> duration is set to three weeks.
>>>
>>> Please review this updated WG document and let the WG know if you agree
>>> that it is ready for handing to the AD.
>>>
>>> If you have objections, please state your reasons why, and explain what
>>> it would take to address your concerns.
>>>
>>> Any raised issue will be discussed during the WG meeting in Dallas.
>>>
>>> Thanks
>>>
>>>
>>> Luigi & Joel
>>>
>>>
>>>
>>> _______________________________________________
>>> lisp mailing list
>>> lisp@ietf.org
>>> https://www.ietf.org/mailman/listinfo/lisp
>>>
>> _______________________________________________
>> lisp mailing list
>> lisp@ietf.org
>> https://www.ietf.org/mailman/listinfo/lisp