Re: [P2PSIP] New draft: HIP BONE

Pekka Nikander <pekka.nikander@nomadiclab.com> Thu, 27 December 2007 13:31 UTC

To: "Henderson, Thomas R" <thomas.r.henderson@boeing.com>
Cc: P2PSIP Mailing List <p2psip@ietf.org>

>> In general, we tried to clarify the proposed relationship between  
>> HIP and the peer protocols by the architectural analogy [...],  
>> where we state that HIP acts in a role similar to IP [...] and the  
>> peer protocols in a role similar to routing protocols [...].

<snip>

>> A better way to state what we propose might be "Nodes
>> participating in an overlay forward I1 packets in a hop-by-hop
>> fashion over the HIP BONE using the forwarding tables, which in
>> turn are built based on the routing table constructed by the peer
>> protocol for the overlay."

<snip>

>> [...] my understanding is that a forwarding node would look up the
>> next-hop ORCHID in the forwarding table, and then pass the
>> packet to the HIP implementation.  The HIP implementation would
>> then detect whether there is an active HIP association towards that
>> ORCHID.  If there is, the I1 packet is passed over that HIP
>> association.  If there is none, but there is a valid locator
>> associated with the next-hop ORCHID, then the I1 packet is passed
>> over IP using the locator.

<snip>

>> Further note that there are at least two choices of what to use as
>> the index for the forwarding table lookup:
>> 1) one can use the Peer ID, presumably carried in a new HIP
>> parameter in the I1 packet, or
>> 2) one can use the ORCHID, carried in the source HIT field of the
>> I1 packet.

<snip>

> If I understand correctly, you are suggesting that there would be  
> HIP forwarding table(s) with next hop ORCHIDs.  There may be next  
> hop IP addresses or active HIP tunnels corresponding to each next  
> hop ORCHID.

Yes, though I'd prefer to say "next hop locators" instead of "next
hop IP addresses", since I could imagine such locators also taking
other forms, such as UDP tunnel endpoints for HIP signalling traffic.
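
To make that concrete, below is a minimal Python sketch of the
next-hop decision described in the quoted text, with the locator
kept deliberately generic.  All the names (ForwardingEntry,
forward_i1, and the two send helpers) are invented for illustration,
not anything defined in the draft.

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class ForwardingEntry:
      next_hop_orchid: bytes             # 128-bit ORCHID of the next hop
      hip_association: Optional[object]  # active HIP association, if any
      locator: Optional[str]             # IP address, UDP tunnel endpoint, ...

  def send_over_hip_association(assoc, pkt):
      print("I1 sent over existing HIP association", assoc)

  def send_over_locator(locator, pkt):
      print("I1 sent over plain IP/UDP to locator", locator)

  def forward_i1(i1_packet, entry):
      """Pass an I1 towards the next-hop ORCHID, preferring an
      existing HIP association over a raw locator."""
      if entry.hip_association is not None:
          send_over_hip_association(entry.hip_association, i1_packet)
      elif entry.locator is not None:
          send_over_locator(entry.locator, i1_packet)
      else:
          raise RuntimeError("no usable next hop for this I1")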

> There may be multiple peer protocols populating this HIP table.

I hadn't really thought about that, so here we go beyond my
previous thinking.

> Then, it seems to be the case that there must be multiple HIP  
> forwarding tables, one for each overlay.  [...]  Furthermore, nodes  
> in a DHT will be responsible for portions of the space and these  
> portions will overlap from overlay to overlay.

Yes, you are right.  Furthermore, the sets of nodes in the different
overlays may be partially disjoint.
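
As a minimal sketch of that multi-table arrangement (all names
invented; nothing here comes from the draft), one could keep one
forwarding table per overlay and select the table by an overlay
identifier before the next-hop lookup:

  from typing import Dict

  # overlay identifier -> (lookup key -> next-hop ORCHID); whether
  # the key is a Peer ID or an ORCHID is left open, as in the text
  ForwardingTable = Dict[bytes, bytes]
  forwarding_tables: Dict[bytes, ForwardingTable] = {}

  def lookup_next_hop(overlay_id, key):
      """Select the table for the given overlay, then resolve the
      next-hop ORCHID; raises KeyError if either is unknown."""
      return forwarding_tables[overlay_id][key]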

> If so, then there is a requirement to be able to look up the right  
> forwarding table from an I1, which I think you are suggesting above  
> in the last paragraph.

Well, as far as I can see, that depends on the assumptions about the
number of partially overlapping overlays.  If that number is small,
then I could imagine that instead of picking just one forwarding
table to use, one could simply replicate the I1 to all overlays when
the right overlay is not known.  However, when replicating an I1, it
would most probably make sense to tag it with the names of all the
overlays to which it is being sent, or at least to denote the primary
overlay it was sent to.  But the details depend on detailed
requirements that I don't understand, e.g., the exact semantics of
having multiple overlays.
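
A rough Python sketch of that replication idea, building on the
per-overlay tables sketched above; tag_with_overlays() and
send_towards() are placeholders for the tagging and per-hop
forwarding, not proposed mechanisms:

  def replicate_i1(i1_packet, forwarding_tables, lookup_key):
      """Send a copy of the I1 into every overlay that has a route
      for the key, tagging each copy with the overlay names."""
      overlay_ids = sorted(forwarding_tables)
      for oid in overlay_ids:
          next_hop = forwarding_tables[oid].get(lookup_key)
          if next_hop is None:
              continue  # this overlay knows no route for the key
          tagged = tag_with_overlays(i1_packet, overlay_ids, primary=oid)
          send_towards(next_hop, tagged)

  def tag_with_overlays(pkt, overlay_ids, primary):
      # Placeholder: a real implementation would add a HIP parameter
      # listing the overlays, or at least the primary one.
      return pkt

  def send_towards(next_hop_orchid, pkt):
      print("I1 forwarded towards ORCHID", next_hop_orchid.hex())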

> So the I1 (and probably all HIP packets) needs to carry some  
> metadata about the ORCHID type in use, either in a new parameter or  
> embedded in a certificate.

Right; that makes sense once a HIP signalling packet is being
forwarded within an overlay.  I think a new parameter makes more
sense, as it would most probably be more efficient to handle.
For other reasons, I was already thinking of a parameter that would
carry an ORCHID Context ID.
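
As a sketch of what such a parameter might look like on the wire,
using the usual HIP TLV parameter layout (2-byte type, 2-byte length
of the contents, contents, then zero padding so that the whole
parameter is a multiple of 8 octets); the type number below is made
up, and no such parameter has been defined anywhere yet:

  import struct

  OVERLAY_CTX_TYPE = 0x0FB0  # invented type number, illustration only

  def encode_context_id_param(context_id):
      """Encode a 128-bit ORCHID Context ID as a HIP TLV parameter."""
      assert len(context_id) == 16
      body = struct.pack("!HH", OVERLAY_CTX_TYPE, len(context_id)) + context_id
      return body + b"\x00" * ((-len(body)) % 8)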

> Basically, there needs to be a new namespace in use at the HIP  
> forwarding layer -- the namespace
> uniquely identifying overlays.  This could perhaps be the ORCHID  
> Context ID.

I concur.

> Another related question I have is whether you think the legacy
> application API (based on sockets) would still work with ORCHIDs:
> if you get a raw ORCHID across the sockets API, you can't tell by
> inspecting it which overlay it belongs to, because the context ID
> is not visible.

Hmm.  The situation may be worse.  It may well be that the app has
no information at all about the overlays, so that it may not even be
possible to determine the right overlay.  In such a case my
suggestion above, i.e., sending the packet to all overlays
simultaneously, might make sense.

> It seems that you would need to pass metadata about the ORCHID
> across the sockets API, so that the node knows
> which ORCHID-based overlay it belongs to, or else you need to
> use LSIs even in the IPv6 case (as suggested by Philip Matthews at
> the Vancouver meeting).

As far as I can see, that depends on the requirements.  If the
requirement is simply to create a secured connection to the peer if
it is reachable at all, then IMHO it would make sense to try all of
the overlays at the same time.  As long as the number of overlays is
relatively small, the amount of additional traffic will be quite
small, and it originates from the source, i.e., it is not replicated
in the network.

--Pekka

