Re: Draft minutes - Toronto EID BOF

Noel Chiappa <jnc@ginger.lcs.mit.edu> Mon, 17 October 1994 10:57 UTC

Received: from ietf.nri.reston.va.us by IETF.CNRI.Reston.VA.US id aa01176; 17 Oct 94 6:57 EDT
Received: from CNRI.Reston.VA.US by IETF.CNRI.Reston.VA.US id aa01172; 17 Oct 94 6:57 EDT
Received: from murtoa.cs.mu.OZ.AU by CNRI.Reston.VA.US id aa02929; 17 Oct 94 6:56 EDT
Received: from mailing-list by murtoa.cs.mu.OZ.AU (8.6.9/1.0) id UAA11912; Mon, 17 Oct 1994 20:30:09 +1000
Received: from munnari.oz.au by murtoa.cs.mu.OZ.AU (8.6.9/1.0) with SMTP id UAA11887; Mon, 17 Oct 1994 20:11:01 +1000
Precedence: list
Received: from ginger.lcs.mit.edu by munnari.oz.au with SMTP (5.83--+1.3.1+0.50) id AA08177; Mon, 17 Oct 1994 04:02:30 +1000 (from jnc@ginger.lcs.mit.edu)
Received: by ginger.lcs.mit.edu id AA18892; Sun, 16 Oct 94 14:02:23 -0400
Date: Sun, 16 Oct 1994 14:02:23 -0400
Sender: ietf-archive-request@IETF.CNRI.Reston.VA.US
From: Noel Chiappa <jnc@ginger.lcs.mit.edu>
Message-Id: <9410161802.AA18892@ginger.lcs.mit.edu>
To: dcrocker@mordor.stanford.edu, jnc@ginger.lcs.mit.edu
Subject: Re: Draft minutes - Toronto EID BOF
Cc: big-internet@munnari.oz.au, jnc@ginger.lcs.mit.edu

    From: dcrocker@mordor.stanford.edu (Dave Crocker)

    I view the mechanism as more transport layer than not.  A key distinction
    between transport and internet ... is whether it's in the internet
    infrastructure or only in the end-system leaf nodes.

I don't think this is a workable distinction. There are a number of things at
the internet layer (in which I include ICMP) which are host-host, and not used
by the intermediate forwarding nodes. Of course, since the internet layer is
mostly concerned with getting the traffic across the forwarding nodes, most of
the functions at that layer are indeed related to forwarding. However, as I'll
explain below, on thinking about it I do think this is more of a transport
level mobility mechanism (something which I think is bad), but not for the
reason you give above.

    On the other hand, the control channel mechanism ends up serving as a
    "shim" mini-layer below regular transport and above regular internet.

I don't see that there's a major utility for a host-host datagram layer on top
of the forwarding layer; I think rolling those functions into the internetwork
layer does not result in any decreased flexibility, etc. I thus think the
internet layer ought to contain everything that is below the individual
transports and done network-wide; i.e. addressing, routing, resource
allocation, etc. As I mentioned in a previous message, whether you make
something an option, as opposed to dedicating a field in the fixed header, is
a 'trivial' engineering decision based on the frequency of use. However, to
some degree it's just terminology, so let's skip it.


    Also, as you observe, there are significant changes that must be put into
    the regular transport layer.

Yes. It's this point that caused me the most thought; not that you propose
changes, but the nature of the changes.

The reason I had advocated naming endpoints was to allow all transport layers
access to a uniform mechanism for identifying the other end of their channels,
irrespective of the location of that endpoint, which interface it was using
(on a multi-homed host), etc. The intent was that transport levels would not
know about addresses at all, so they wouldn't have to get involved in the
details of changing addresses; those grubby details would all be invisible to
them. They would operate solely on endpoint names.

The change you propose unfortunately doesn't have this characteristic. While
transport *connections* may be identified by an endpoint name (the DNS name),
the *packets* are not. The only thing that identifies which connection a
packet belongs to is the address. As a result, transport layers still have to
know about addresses, and have to know when they change. Transport layers have
to be prepared to map from addresses to what they use to identify connections
(TCN's, including DNS names). Not such a big deal, you say? Well, there are a
number of issues.

To start with, there's a great irony. One of the big "points" raised against
variable length addresses on this list was that it would make finding the
connection control block much slower, due to the cost of handling a variable
length data item. Of course, this was an utterly bogus argument, since any
implementor who's vaguely competent would realize that checking port numbers
first would speed this up to more or less what it was before. However, that
didn't seem to make people happy with variable length addresses. Your TCCL,
which contains a variable number of addresses (any of which might match the
one in the packet) is just such a variable length data item, which must be
inspected to find the matching CCB. If variable length addresses were
unacceptable, why are variable length TCCL's any more acceptable?
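(A rough sketch of the demultiplexing order I mean, in Python; the field and
variable names here are all hypothetical, and the address list stands in for
a TCCL:)

```python
# Hypothetical sketch: find the connection control block (CCB) for an
# incoming packet.  The port pair is fixed-length, so compare it first;
# only then do the expensive scan of the variable-length address list
# (standing in for a TCCL), any member of which may match the packet.

def find_ccb(packet, ccbs):
    for ccb in ccbs:
        # Cheap fixed-length check first ...
        if (ccb["local_port"], ccb["remote_port"]) != \
           (packet["dst_port"], packet["src_port"]):
            continue
        # ... then the variable-length membership test.
        if packet["src_addr"] in ccb["remote_addrs"]:
            return ccb
    return None

ccbs = [
    {"local_port": 23, "remote_port": 1024, "remote_addrs": ["A", "B", "C"]},
    {"local_port": 25, "remote_port": 2048, "remote_addrs": ["D"]},
]
pkt = {"dst_port": 23, "src_port": 1024, "src_addr": "B"}
assert find_ccb(pkt, ccbs) is ccbs[0]
```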

(As a sidelight, that was one of the goals of EID's as endpoint names; as
single, fixed-length, shortish names which never changed, transport layers
could be converted to using them without a lot of upheaval.)

However, the variable length thing is not a problem for me. I think competent
programmers can make this stuff work. What *is* a concern (if not quite a
problem) to me, one I hadn't really noticed until now (it applies to ESEL's
too), is that the packet does not fully identify which connection it belongs
to (using the same info the application names the connection with); i.e. the
endpoint name is not in the packet. This represents a weak point, which could
produce either failures (albeit low-probability ones) or loss of flexibility.

For an example of the former, consider a host H2 which moves to address X just
after some other host H1 has left that address. H2 receives a delayed packet
that was meant for H1; the packet is a TCP packet which happens to match an
open connection, and is a reset. The packet will be accepted, and the
connection closed. Of course, this particular scenario is very unlikely, but
the fact that I can think one up so quickly is a bit worrisome.
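(The scenario can be sketched mechanically; everything here is hypothetical,
but it shows that nothing in the packet names the endpoint, so demultiplexing
by address and ports alone lets H1's stale reset kill H2's connection:)

```python
# Hypothetical sketch of the failure: connections are demultiplexed by
# (address, ports) alone -- the endpoint name is not in the packet -- so
# a delayed RST meant for the address's previous occupant H1 matches a
# connection that the new occupant H2 has open at the same address X.

connections = {}

def open_connection(local_addr, local_port, remote_port):
    connections[(local_addr, local_port, remote_port)] = "ESTABLISHED"

def deliver(pkt):
    key = (pkt["dst_addr"], pkt["dst_port"], pkt["src_port"])
    if key in connections and pkt.get("rst"):
        connections[key] = "CLOSED"    # reset accepted blindly

# H2 moves to address X and opens a connection using the same port pair
# that H1 happened to use before leaving X.
open_connection("X", 4000, 23)

# A delayed RST addressed to H1-at-X arrives; nothing in it names H1.
deliver({"dst_addr": "X", "dst_port": 4000, "src_port": 23, "rst": True})

assert connections[("X", 4000, 23)] == "CLOSED"   # H2's connection dies
```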

One way to attack this part of the problem (lack of endpoint identification in
the packet) might be to include the endpoint name in the packet, but I think
that's probably too brute force. A more space-efficient way is to use another
field which is already there; i.e. to mandate authentication in the transport
checksum.  Including the endpoint name as part of the pseudo-header is
probably the cheapest, as this can be precomputed when the connection is
opened, and thus introduces no extra run-time cost. If you want to get really
serious, and tamper-proof, encrypt the transport checksum with the endpoint's
private key.  That private key is as unique an endpoint name as you could hope
for, and the approach of encrypting the checksum doesn't actually use any more
bits in the packet (although there is more computational cost).
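(A sketch of the pseudo-header idea, with made-up endpoint names; the point
is that the name's contribution to the ones'-complement sum is computed once
at connection open, so per-packet cost is unchanged, while a receiver holding
a different endpoint name computes a different checksum and drops the packet:)

```python
# Hypothetical sketch: fold the endpoint name into the transport checksum
# via the pseudo-header.  The name's contribution is precomputed when the
# connection is opened, adding no extra run-time cost per packet.

def ones_sum(data, acc=0):
    # 16-bit ones'-complement sum, Internet-checksum style.
    if len(data) % 2:
        data += b"\x00"
    for i in range(0, len(data), 2):
        acc += (data[i] << 8) | data[i + 1]
        acc = (acc & 0xFFFF) + (acc >> 16)   # end-around carry
    return acc

def checksum(payload, endpoint_sum):
    # endpoint_sum is the precomputed contribution of the endpoint name.
    return ~ones_sum(payload, endpoint_sum) & 0xFFFF

h1 = ones_sum(b"endpoint-H1")    # precomputed at connection open
h2 = ones_sum(b"endpoint-H2")    # a different endpoint's precomputation

pkt = b"some transport segment"
ck = checksum(pkt, h1)           # sender stamps the packet

assert checksum(pkt, h1) == ck   # intended endpoint verifies
assert checksum(pkt, h2) != ck   # wrong endpoint's check fails
```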


Still, none of these are my main objection, which is that each and every
transport layer which wishes to support mobility is going to have to include
individual mechanisms to support TCCL's, etc. In this sense, it *is* a
transport layer mechanism, as I stated above. It's just that you have taken
some of the information distribution mechanism and moved that into a single
protocol. (BTW, I think ICMP would be the way to go, not UDP).

So, it's not as clean as a solution involving use of an endpoint name at the
interface between the internet layer and the transport layer. I.e. the binding
between address and endpoint name would be done within the internet layer.
That would be optimal, and no, none of the mechanism for that has to be in
*any* router, and no, it doesn't mean you have to change the internetwork
packet header format.
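(Again a hypothetical sketch, to show the shape of what I mean: the binding
table lives in the internet layer of the *end systems* only, and the
transport interface traffics purely in endpoint names, so a move updates the
table without the transport ever seeing an address:)

```python
# Hypothetical sketch: the internet layer on each host (no router
# involvement) holds the endpoint-name-to-address binding; the transport
# layer sends to and receives from endpoint names only.

bindings = {"EID-42": "addr-X"}      # per-host internet-layer table

def internet_send(eid, segment, wire):
    wire.append((bindings[eid], segment))      # bind name -> address

def internet_recv(wire):
    addr, segment = wire.pop(0)
    rev = {a: e for e, a in bindings.items()}  # address -> name
    return rev[addr], segment                  # hand transport a name

wire = []
internet_send("EID-42", "SEG1", wire)
assert internet_recv(wire) == ("EID-42", "SEG1")

bindings["EID-42"] = "addr-Y"    # the peer moves; transport never sees it
internet_send("EID-42", "SEG2", wire)
assert internet_recv(wire) == ("EID-42", "SEG2")
```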


    Christian Huitema apparently is suggesting that the list of IP addresses
    be exchanged via TCP options, rather than a separate control channel.

In addition to the points above, this will mean that you need a separate
mobility protocol for each transport layer.


    > If you view the ability of an unmodified host to talk to an actively
    > mobile host as "continuing to function", your assertion is not true. You
    > clearly need to have this protocol implemented on both ends before one
    > end can be mobile.

    Any mobile systems that you can talk to today you can talk to tomorrow.
    ... You do not need to modify all of the intermediate nodes in order to
    get mobility running. Just the two participating end nodes.

Yes, but an unmodified sessile host cannot talk to a mobile host which is
using your new protocol. I'm not saying this is bad; we don't have any SIPP
hosts, so we could get this into them all. I just don't think your original
statement in the document is completely correct.


    > Now, maybe you just want to blow off supporting mobile applications.

    Yup. ... Show me some real world requirements for mobile applications,
    solving various issues of process migration separate from the protocol
    portion, and show me these requirements emerging in productions systems,
    and THEN I'll agree that we should treat that deficiency as a showstopper.

I seem to recall some from the previous debate, but I'm not an application
wizard. Anyway, I don't care whether or not you support them, just wanted
to make sure you realized this limitation. It might be nice to make a note
of this design decision in your document.


    > I'm talking about cases in which the addresses on each end stay the
    > same, but the path through the fabric changes.

    how is this visible to the end points?  How does recovery from it result
    in anything different than sending to or from a different host interface?
    What specific suggestion are you making for changing the proposal or what
    alternative mechanism are you proposing?

I was saying that if you hide the binding from address to endpoint name in the
internetwork layer, so the transport layer doesn't even see it, the result to
the transport layer, when mobility happens, is the same as having a path
change. I.e. there's no *harm* to the transport layer in hiding that binding.
It doesn't lose any info it's not already losing in other cases.


    > I'm pretty astonished by your characterization of the internetwork layer
    > as "fragile".

    I'm thinking of the amount and nature of the effort it takes to keep it
    running and how often there are serious operational problems.  The thing
    works, but we should not go around messing with it.

True, but I don't see what this has to do with that. That's almost 100% caused
by the routing. (Since SIPP doesn't include any features which are not in CIDR
IPv4 to attack routing problems, I fail to see how SIPP is going to help with
that any, but I digress. That case is closed anyway.)

	Noel