Re: [Yot] [core] YANG notification within CoMI

Carsten Bormann <cabo@tzi.org> Tue, 12 June 2018 09:48 UTC

Cc: Peter van der Stok <stokcons@bbhmail.nl>, Andy Bierman <andy@yumaworks.com>, Alexander Pelov <a@ackl.io>, "Eric Voit (evoit)" <evoit@cisco.com>, Henk Birkholz <henk.birkholz@sit.fraunhofer.de>, "yot@ietf.org" <yot@ietf.org>, Core <core@ietf.org>
To: Michel Veillette <Michel.Veillette@trilliant.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/yot/YU0syG0YmJBjMNM6W212AoH60qM>

Hi Michel,

sorry for being slow to answer; I finally had a nice discussion with Henk today about your concerns, which I want to summarize here.

Let me propose some terminology first:  “server” is a CoAP term, so when we talk about “heavy boxes with noisy fans installed in a rack”, I’ll use “machines” (as opposed to “devices”, the light bulbs being managed).  Not great terminology, but better than using the term “server” for both.

You write about the observe-based notifications proposed for CoMI:

> 1) This approach is incompatible with load balancers; notifications are returned directly to the specific server within the cluster that initiated the observe request.

Whether that is true depends a lot on what kind of load balancer you are thinking about.
For UDP CoAP, an anycast mechanism is the obvious choice for a load balancer.

Clearly, for any load balancing mechanism to be useful, the machines sharing the load need to share state (symbolized by “database” in your diagram).
If they are clients, they need to share communication state as well as application state (e.g., the addresses of the servers they talk to, the tokens they use for outgoing requests).  They don’t need to do this for every single transaction (an occasional loss is not a problem), but they do have to share observe state.
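
To make this concrete, here is a rough sketch (plain Python, no CoAP library; names such as ObserveState and store_key are made up for illustration) of the per-observation record a cluster of client machines would have to keep in a shared store, so that a notification arriving at whichever machine the anycast routing picks can be matched to the right observation:

import json
from dataclasses import dataclass, asdict

@dataclass
class ObserveState:
    token: bytes           # token of the outgoing GET carrying Observe: 0
    device_addr: str       # address of the CoAP server (the device)
    resource: str          # observed resource, e.g. a CoMI event stream
    last_observe_seq: int  # last Observe option value seen (RFC 7641 reordering)

def store_key(state: ObserveState) -> str:
    # The (device address, token) pair identifies the observation when a
    # notification arrives at any machine in the cluster.
    return f"observe:{state.device_addr}:{state.token.hex()}"

def save(shared_store: dict, state: ObserveState) -> None:
    rec = asdict(state)
    rec["token"] = state.token.hex()   # bytes -> hex for serialization
    shared_store[store_key(state)] = json.dumps(rec)

def lookup(shared_store: dict, device_addr: str, token: bytes):
    raw = shared_store.get(f"observe:{device_addr}:{token.hex()}")
    if raw is None:
        return None
    rec = json.loads(raw)
    rec["token"] = bytes.fromhex(rec["token"])
    return ObserveState(**rec)

The point is just that this state is small and well-defined: peer address, token, observed resource, and the last Observe sequence number seen.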

When people talk about load balancers and resilience, they sometimes mean that the peer with initiative (here: the CoAP server sending another notification) needs to perform a full rendezvous with the other side (e.g., in the Big Web, a fresh DNS lookup that might lead to a completely different machine, potentially a new TLS session).  See (3) below for how we see this; for now, let’s just say that we are trying to achieve generally rendezvous-free notifications.

> 2) Typical CoAP implementations (e.g., Californium) don't support persistence of the observe context. These contexts can’t be recovered after a server restart and can't be shared between servers.

That is indeed a problem.  But the real problem is that they are not ready for sharing their client state at all.  Once they are, adding persistence to that sharing becomes a rather straightforward exercise.
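
As a purely illustrative continuation of the sketch above (not Californium API), once those records live in a shared, persistent store, restart recovery reduces to reloading them and handing them back to the local stack:

import json

def recover_observations(shared_store: dict, resume_observation) -> int:
    # Hypothetical restart recovery: reload the shared observe records
    # (written by the sketch above) and hand each one back to the local
    # CoAP stack.  resume_observation() is a placeholder for a hook that
    # re-installs a token/observe binding; today's implementations
    # don't offer one.
    restored = 0
    for key, raw in shared_store.items():
        if not key.startswith("observe:"):
            continue
        rec = json.loads(raw)
        rec["token"] = bytes.fromhex(rec["token"])
        resume_observation(rec)
        restored += 1
    return restored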

So what I’m reading out of your message is that, to employ load balancing for CoMI notifications, CoAP implementations need to grow support for sharing communication state (preferably with a persistent mode).  That is an important message to implementers, and thank you for highlighting this.

> 3) Registrations to event streams are not resilient; they can terminate unexpectedly upon a transmission error or reception of a Reset message.

Now how does the resilience come in?

In the Big Web situation mentioned above, it comes from redoing the rendezvous each time a notification is needed (potentially with some caching, both of DNS state [leading to defined periods of blackholing] and of any connections still open [which will hopefully time out if there is a problem]).

In a rendezvous-free world, we have to do this explicitly.  A CoAP server that cares about delivering its notifications will send (at least some of) them as confirmable messages [it actually has to, once every 24 hours, but can do so more often if resilience calls for it].  So it will notice when the recipient of the notifications is no longer there.  [It will also notice if the recipient is confused enough to send a Reset, but persistence of communication state is supposed to make this a non-event.]  RFC 7641 tells us that the CoAP server is to cease delivering notifications when the client machine goes away; it doesn’t tell us what else the implementation might want to do in that event.  In a CoMI call-home scenario, I would expect the device to notice that its relationship to home broke and redo the rendezvous (call home): once, when needed, not for every transaction with caching as the only mitigation.
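
Sketched as device-side logic (illustrative Python again; send_confirmable, send_non_confirmable, and call_home stand in for whatever the CoAP stack and the CoMI call-home procedure actually provide):

import time

CONFIRMABLE_INTERVAL = 24 * 3600   # RFC 7641: a confirmable notification at least every 24 hours

class NotificationSender:
    # Hypothetical device-side helper, not a real API.
    def __init__(self, observer, send_confirmable, send_non_confirmable, call_home):
        self.observer = observer
        self.send_confirmable = send_confirmable
        self.send_non_confirmable = send_non_confirmable
        self.call_home = call_home
        self.last_confirmable = float("-inf")

    def notify(self, payload: bytes) -> None:
        now = time.monotonic()
        if now - self.last_confirmable >= CONFIRMABLE_INTERVAL:
            self.last_confirmable = now
            if not self.send_confirmable(self.observer, payload):
                # Retransmissions exhausted, or a Reset came back: the
                # observer is gone.  Stop notifying (RFC 7641) and redo
                # the rendezvous once, rather than before every message.
                self.call_home()
        else:
            self.send_non_confirmable(self.observer, payload)

The failure detection rides on traffic the device has to send anyway, and the expensive re-rendezvous happens only after a detected failure.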

So, in effect, we can have all the advantages of the “do the rendezvous always, with caching” world with much less black-holing and unnecessary message overhead.

For the above to be actionable, we do have to have implementations that:
— on the big machines, can share enough communication state so they can take part in anycast-based load balancing,
— on the small devices, can react to loss of an observation interest by redoing a call-home transaction.

The second one clearly is an implementation quality requirement.  Let’s work on that with the implementers.
The first could be thought to call for a protocol for coordinating the machines.  The IETF has not been very successful in establishing “reliable server pooling”.  I would actually expect implementations that want to provide that coordination to come with their own high-performance mechanisms, involving the usual state-sharing databases such as Redis; they already have to do this for the management (application) state shared between the machines.
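
Purely as an illustration of that expectation (redis-py is just a convenient client here, and the key layout is my own assumption), the observe records from the earlier sketch map directly onto such a database:

import json
import redis  # redis-py, shown only as an example of a shared store

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_shared(device_addr: str, token: bytes, resource: str, seq: int) -> None:
    # One record per observation, visible to every machine in the cluster
    # (and to a machine that restarts).
    key = f"observe:{device_addr}:{token.hex()}"
    r.set(key, json.dumps({"resource": resource, "last_observe_seq": seq}))

def load_all_shared() -> dict:
    # Scan the shared records so a (re)starting machine can resume them.
    return {key: json.loads(r.get(key)) for key in r.scan_iter(match="observe:*")}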

Regards, Carsten