Re: [Roll] [roll] #105: trickle-mcast: how to determine scope of MPL domain

Robert Cragie <robert.cragie@gridmerge.com> Thu, 08 November 2012 23:54 UTC


Hi Dario,

Comments inline, bracketed by <RCC></RCC>

Robert

On 08/11/2012 7:10 PM, Dario Tedeschi wrote:
> Hi Jonathan
>
> On 02/11/2012 10:18 PM, Jonathan Hui (johui) wrote:
>>
>> Hi Dario,
>>
>> Thanks for the detailed example - I see our disconnect now.
>>
>> With your approach (require link-local in the outer header), the IPv6 
>> multicast address identifies the application endpoints *and* the MPL 
>> domain.  For that reason, your approach really only needs a single 
>> identifier to both limit the flooding scope and determine the 
>> application endpoints.
> It depends on what you mean by MPL domain. In my view, FF02::MPL 
> identifies the MPL domain, while the inner IPv6 destination address 
> identifies the application endpoint.
>
>
>>  I can see how that would work (as you demonstrated) if we make the 
>> restriction that the IPv6 multicast addresses used within an MPL 
>> domain have the same prefix that identifies the MPL domain itself. 
>>  The trouble comes when you want to support the full generality that 
>> IPv6 multicast addresses used by application endpoints can be arbitrary.
>
> The "generality" you talk of is why protocols like MLD exist. MLD 
> informs routers of mc addresses other devices are interested in. 
> Essentially it provides routing information. How could we support the 
> "full generality" of mc addresses without this information (whether 
> implied or from something like MLD)?  With this in mind, I don't 
> understand the need for non-link-local scope in the outer header, 
> because the "generality" you seek would be determined by the mc 
> address of the original packet (i.e. the mc address of the inner header). 
<RCC>Peter van der Stok had a use case where this might be useful: a 
PAN is subdivided into two subnets, using e.g. Prefix1 and Prefix2. A 
site-local multicast would be encapsulated in a packet using a 
unicast-prefix-based multicast address for e.g. Prefix1. It would then 
be forwarded only through the Prefix1 subnet, perhaps to a single 
border router, and emanate on another interface of that border router. 
Prefix2 MPL forwarders would not do anything. I don't see how you could 
do that with link-local multicast, as the inner packet's scope has no 
relation in this case.</RCC>
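<RCC>As an aside, the unicast-prefix-based multicast addresses discussed 
here follow the RFC 3306 construction (FF | flgs=3 | scop | reserved | 
plen | 64-bit prefix | 32-bit group ID). A minimal sketch in Python 
(function name my own) of how a unicast prefix such as FD01::/64 maps 
to its FF35:0040:FD01::/96 multicast prefix:

```python
import ipaddress

def unicast_prefix_based_mcast(prefix: str, scope: int, group_id: int) -> ipaddress.IPv6Address:
    """Build an RFC 3306 unicast-prefix-based IPv6 multicast address.

    Layout: FF | flags=3 (P=1, T=1) | scope | reserved=0 | plen |
            64-bit network prefix | 32-bit group ID
    """
    net = ipaddress.IPv6Network(prefix)
    plen = net.prefixlen                          # e.g. 64 (0x40)
    prefix_bits = int(net.network_address) >> 64  # top 64 bits of the unicast prefix
    addr = (0xFF << 120) | (0x3 << 116) | (scope << 112)
    addr |= (plen << 96) | (prefix_bits << 32) | group_id
    return ipaddress.IPv6Address(addr)

# Unicast prefix FD01::/64, site-local scope (5), group ID 1:
print(unicast_prefix_based_mcast("FD01::/64", 5, 1))  # ff35:40:fd01::1
```
</RCC>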
> All my approach is really saying is that only the original/inner mc 
> address determines how far a packet will propagate, regardless of 
> routing domain. MPL could just be one of many routing domains a mc 
> packet must traverse before reaching its furthermost boundary. Or MPL 
> may be the only routing domain, where the mc packet only reaches a 
> sub-set of devices within the domain (i.e. a multicast group or a set 
> based on unicast-prefix-based mc).
<RCC>In the last case, you don't need encapsulation of course</RCC>
>
>
>>
>> For example, how does MPL support an application that subscribes to a 
>> well-known non-link-local IPv6 multicast address?  I guess one 
>> approach is to say that if the IPv6 multicast address is not a 
>> unicast-prefix-based multicast address, then it disseminates across 
>> the entire region of connected MPL forwarders.
>
> Granted one could have a situation where all routers hear an mc packet 
> that is only intended for a subset of devices, but that does not mean 
> all routers need to forward that packet or pass it to a higher layer. 
> Again, this would depend on the inner mc address and the routing 
> information available to routers.

<RCC>See above case where this may be different</RCC>
> The routers without the appropriate routing information would not 
> forward. Similarly, routers without mc membership information from an 
> app would not pass the packet to the next higher layer.
<RCC>
It is important to distinguish forwarding from processing.

1. In the unencapsulated case:

1a. Whether to forward or not is based on the destination address and 
the MPL option
1b. Whether to process or not is based on the destination address

2. In the MDM encapsulated case:

2a. Whether to forward or not is based on the outer destination address 
and the MPL option
2b. Whether to process or not is based on the inner destination address

3. In the LLM encapsulated case:

3a. Whether to forward or not is based on the inner destination address 
and the MPL option
3b. Whether to process or not is based on the inner destination address

When choosing between (2) and (3), consider these statements:

(1a) is consistent with (2a) but not with (3a). On that basis, (2) 
makes more sense.
(2b) and (3b) are the same, so both are acceptable.
(2b) may seem inconsistent with (2a), i.e. it bases the decision to 
process on the inner header, but it clearly separates the endpoint 
domain from the forwarding domain. (3) cannot make that distinction.
</RCC>
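<RCC>The comparison can be made concrete with a minimal sketch (Python, 
purely illustrative; the function and address strings are my own, not 
from any draft) of which address each decision in cases (1)-(3) is 
based on:

```python
def decision_basis(case, outer_dst, inner_dst=None):
    """Return (forward_basis, process_basis) for each case above.

    For case 1 (unencapsulated) there is only one header, so outer_dst
    is simply the packet's destination. The forwarding decision in every
    case additionally consults the MPL option, not modelled here.
    """
    if case == 1:                      # unencapsulated
        return outer_dst, outer_dst    # (1a), (1b): the destination address
    if case == 2:                      # MDM-encapsulated
        return outer_dst, inner_dst    # (2a) outer, (2b) inner
    if case == 3:                      # LLM-encapsulated
        return inner_dst, inner_dst    # (3a), (3b): the inner destination
    raise ValueError(case)

outer, inner = "ff02::mpl", "ff35:40:fd01::1"
print(decision_basis(1, inner))          # forward and process on the one header
print(decision_basis(2, outer, inner))   # forward on outer, process on inner
print(decision_basis(3, outer, inner))   # forward and process on inner
```

Cases (1) and (2) key the forwarding decision off the same (outer) 
header; only (3) keys it off the inner one.</RCC>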



>
>
>
>>
>> One minor point with your approach is that the delivery requires 
>> processing the MPL Option of the outer header and the inner IPv6 
>> header.  That isn't so nice from an architectural perspective, but 
>> that is what we did with RFC 6553.
>
> Using non-link-local in the outer header does not mitigate that. The 
> forwarder still needs to look at the inner header to determine if the 
> inner mc address is one an app is listening on. In fact implementing 
> this is a bit messy compared to my approach, because the forwarder has 
> to look ahead into the packet before decapsulating. 
<RCC>I don't see that. It has to decapsulate anyway for processing. If 
the address means it doesn't get processed, it throws it away. The 
forwarding logic may come to the same conclusion.</RCC>
> My approach always requires decapsulation before making any decision 
> about where the packet must go next. It's simpler and more consistent. 
> I've actually had the fortune/misfortune of implementing both and I 
> can safely say the link-local approach was cleaner.
<RCC>I would disagree on the single point that you need an additional 
mechanism for handling forwarding in the unencapsulated case. I agree 
it is a trivial addition, but an addition nevertheless.</RCC>
>
>
>>
>> In my approach (allow non-link-local in the outer header), I tried to 
>> separate out the identifiers for the application endpoints and the 
>> MPL domain.  That is why I used the outer header's destination 
>> address to identify the MPL domain and the inner header's destination 
>> address to identify the application endpoints.  With this approach, 
>> it actually becomes feasible to address situations where the devices 
>> within an MPL domain subscribe to arbitrary IPv6 multicast addresses 
>> - not just ones that are based on the unicast prefix.
>
> Firstly, yes I agree the inner destination address should determine 
> the application endpoint. What I'm not clear on is why we need an MPL 
> domain to cover more than the LLN or why we need to support multiple 
> MPL domains in one LLN. If the latter case is required to allow for 
> different sets of MPL propagation parameters, then I'd imagine that 
> should rather be handled by the HbH option.
<RCC>It could be done that way, but you are now introducing another 
"label" by which to determine forwarding, when address scope and group 
ID seem perfectly adequate to me.</RCC>

>
> - Dario
>
>>
>> --
>> Jonathan Hui
>>
>> On Nov 2, 2012, at 12:34 PM, Dario Tedeschi <dat@exegin.com 
>> <mailto:dat@exegin.com>> wrote:
>>
>>> On 01/11/2012 7:12 PM, Jonathan Hui (johui) wrote:
>>>> On Nov 1, 2012, at 6:47 PM, Dario Tedeschi<dat@exegin.com>  wrote:
>>>>
>>>>> I don't understand what benefit is gained by allowing the use of non-link-local in the outer header, if encapsulation is required. Supporting both link-local and higher scopes in the outer header just serves to complicate the forwarder.
>>>> The purpose is to limit the extent to which MPL disseminates a packet to something smaller than the entire LLN (item 2).
>>>
>>> Isn't that what multicast groups and/or unicast-prefix-based 
>>> multicasts are for? That is to say, to reach a defined set of devices.
>>>
>>>
>>>>> Is item 2 a requirement that a subset of devices in the LLN participate in MPL forwarding and others don't, or is it that there are two MPL domains, or is it that one subset of devices are listening on multicast address A while others are listening on multicast address B? In any case, I don't see how the use of link-local scope in the *outer* header would not work.
>>>> As mentioned above, the purpose is to limit the physical extent of MPL forwarders that disseminate a message.  If we use a link-local destination address in the outer header, how do you propose to limit the region?
>>>
>>> The destination in the inner header determines if the packet needs 
>>> to be forwarded or not, or forwarded on a different interface.
>>>
>>>
>>>>> As for encapsulation, using an MPL multicast address of the form FF02::00XX in the outer header would only add three bytes to the packet after 6LoWPAN compression.
>>>> I agree.
>>>>
>>>> Maybe you could describe a concrete example of how using link-local addresses in the outer header would address Peter's scenario that he posted to the list?
>>>
>>> Example: Two border routers (BR1 and BR2) each forming a network:
>>>
>>> --- Network 1 (BR1) ---
>>> Unicast prefix: FD01::/64
>>> Unicast-prefix-based multicast address prefix: FF35:0040:FD01::/96
>>>
>>> --- Network 2 (BR2) ---
>>> Unicast prefix: FD02::/64
>>> Unicast-prefix-based multicast address prefix: FF35:0040:FD02::/96
>>>
>>>  1. A non-MPL aware node in network 1 wishes to send a multicast to
>>>     all nodes in network 1.
>>>  2. It sends to multicast address FF35:0040:FD01::1, un-encapsulated.
>>>  3. The packet is received by an MPL router in network 2 (N2R1).
>>>  4. N2R1 finds no higher layer listening to FF35:0040:FD01::1 and,
>>>     therefore, does not pass the packet up.
>>>  5. N2R1 finds no matching routing information for FF35:0040:FD01::1
>>>     and does not forward the packet. The packet is, therefore,
>>>     discarded.
>>>  6. The packet is also received by an MPL router in network 1 (N1R1).
>>>  7. N1R1 finds a higher layer listening to FF35:0040:FD01::1 and
>>>     passes a copy of the packet up. Note: This would depend on
>>>     whether or not any higher layers were actually interested in the
>>>     mc group. Also, this step is not a prerequisite for the next
>>>     step to occur.
>>>  8. N1R1 finds matching routing information for FF35:0040:FD01::1,
>>>     because it is a member of network FD01::/64
>>>  9. N1R1 encapsulates the packet with an MPL HbH option such that the
>>>     outer and inner destination addresses appear as:
>>>     [FF02::MPL][FF35:0040:FD01::1], respectively.
>>> 10. N1R1 transmits the new resulting packet.
>>> 11. The packet is received by another MPL router in network 1 (N1R2).
>>> 12. Seeing that the destination address is FF02::MPL, N1R2
>>>     decapsulates the packet (i.e. the original packet exits the
>>>     tunnel).
>>> 13. N1R2 finds a higher layer listening to FF35:0040:FD01::1 and
>>>     passes a copy of the inner packet up. Note: This step is not a
>>>     prerequisite for the next step to occur.
>>> 14. N1R2 also finds matching routing information for
>>>     FF35:0040:FD01::1, because it is a member of network FD01::/64.
>>> 15. N1R2 re-encapsulates the packet with the *original* MPL HbH
>>>     option such that the outer and inner destination addresses
>>>     appear as: [FF02::MPL][FF35:0040:FD01::1], respectively.
>>> 16. N1R2 transmits the resulting packet.
>>> 17. The packet is received by yet another MPL router in network 2
>>>     (N2R2).
>>> 18. Seeing that the destination address is FF02::MPL, N2R2
>>>     decapsulates the packet (i.e. the original packet exits the
>>>     tunnel).
>>> 19. N2R2 finds no matching routing information or listener for
>>>     FF35:0040:FD01::1 and, therefore, discards the packet.
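<RCC>The common pattern in steps 3-19 can be sketched as follows 
(Python; the dict model and helper are hypothetical, not from the 
draft): decapsulate if addressed to FF02::MPL, pass up if a listener 
exists, re-encapsulate and forward if routing information matches.

```python
def handle(router, pkt, encapsulated):
    """One router's handling of a received multicast, per steps 3-19 above.

    `router` is a dict modelling what the node knows (hypothetical model):
      "listeners": mc addresses a higher layer is subscribed to
      "routes":    mc addresses the node has routing information for
    `encapsulated` stands in for "outer destination is FF02::MPL".
    Returns (passed_up, forwarded).
    """
    # Steps 12/18: if tunneled, the original packet exits the tunnel first.
    inner = pkt["inner"] if encapsulated else pkt["dst"]
    passed_up = inner in router["listeners"]  # steps 4, 7, 13: deliver to app?
    forwarded = inner in router["routes"]     # steps 5, 8, 14: re-encapsulate, forward?
    return passed_up, forwarded               # neither -> discard (steps 5, 19)

# N1R1 (network 1) vs N2R1 (network 2) receiving FF35:0040:FD01::1:
group = "ff35:40:fd01::1"
n1r1 = {"listeners": {group}, "routes": {group}}
n2r1 = {"listeners": set(), "routes": set()}
print(handle(n1r1, {"dst": group}, encapsulated=False))  # (True, True)
print(handle(n2r1, {"dst": group}, encapsulated=False))  # (False, False)
```
</RCC>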
>>>
>>>
>>> Note:
>>> I chose a non-MPL aware originator of a multicast packet, because I 
>>> wanted to be more thorough. I could have chosen an example where the 
>>> originator of the packet *was* an MPL-aware device. In such a case, 
>>> it would have encapsulated with its own MPL HbH option as if it were 
>>> forwarding the packet (i.e. outer and inner destinations would have 
>>> been [FF02::MPL][FF35:0040:FD01::1]). One complication of non-MPL 
>>> aware devices sending non-link-local multicasts is the problem of 
>>> fan-out: If such a device multicasts/broadcasts at the link-layer 
>>> for IPv6 multicasts, then many MPL routers may hear the packet and 
>>> try to forward it with their own seeds. Although this wouldn't cause a 
>>> real packet-storm, it would cause something close to it, depending 
>>> on how many routers were in earshot of the originator. However, this 
>>> is a general problem that has nothing to do with MPL's address scope.
>>>
>>> Secondly, notice that FF02::MPL can be viewed as a well-defined 
>>> address for a "tunnel exit point". It just so happens that it 
>>> actually identifies multiple physical "exit points".
>>>
>>> - Dario
>>>
>>>
>>
>
>
>
> _______________________________________________
> Roll mailing list
> Roll@ietf.org
> https://www.ietf.org/mailman/listinfo/roll