Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

Alia Atlas <akatlas@gmail.com> Fri, 24 January 2014 00:22 UTC

Date: Thu, 23 Jan 2014 19:22:13 -0500
From: Alia Atlas <akatlas@gmail.com>
To: Edward Crabbe <edc@google.com>
Cc: "mpls@ietf.org" <mpls@ietf.org>, IETF discussion list <ietf@ietf.org>, Noel Chiappa <jnc@mercury.lcs.mit.edu>, Joe Touch <touch@isi.edu>
Subject: Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

On Thu, Jan 23, 2014 at 7:08 PM, Edward Crabbe <edc@google.com> wrote:

> I think there also needs to be a network boundary statement with
> recommendations for filtering and usage in terms of intra- vs. inter-domain
> protocol usage.
>

Agreed - I think that's part of describing how the applicability statement
can be enforced by a service provider.

Alia
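
As a concrete illustration of the boundary statement Ed is asking for, the
enforcement point is essentially a filter on the encapsulation's UDP
destination port at inter-domain interfaces. A minimal sketch in Python,
assuming a placeholder port constant (draft -04 has no IANA-assigned port)
and hypothetical packet/interface attribute names chosen for illustration:

MPLS_IN_UDP_PORT = 0  # placeholder: no IANA-assigned port exists as of draft -04

def accept_mpls_in_udp(pkt, rx_interface, trusted_domains):
    """Permit MPLS-in-UDP only within the local domain or from explicitly
    configured cooperating domains; drop it at other domain boundaries."""
    if pkt.udp_dst_port != MPLS_IN_UDP_PORT:
        return True   # not tunnel traffic; this filter does not apply
    if not rx_interface.is_interdomain:
        return True   # intra-domain use, within the applicability statement
    return pkt.src_domain in trusted_domains  # inter-domain: trusted peers only
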


>
>
> On Thu, Jan 23, 2014 at 4:07 PM, Alia Atlas <akatlas@gmail.com> wrote:
>
>> I don't want to get in the way of vehement discussion, but I thought we
>> were on the verge of finding an actual solution...
>>
>> IMHO, that was a combination of an applicability statement, using SHOULD
>> for congestion control and checksum, and defining a longer-term OAM-based
>> approach (as Stewart Bryant suggested) to be able to verify that packet
>> corruption or excessive drops aren't happening.
>>
>> Does that sound like an acceptable set?
>>
>> Alia
>>
>>
>> On Thu, Jan 23, 2014 at 6:56 PM, Joe Touch <touch@isi.edu> wrote:
>>
>>>
>>>
>>> On 1/23/2014 3:32 PM, Edward Crabbe wrote:
>>>
>>>> Joe, thanks for your response. Comments inline:
>>>>
>>>>
>>>>     On 1/23/2014 1:27 PM, Edward Crabbe wrote:
>>>>
>>>>         Part of the point of using UDP is to make use of
>>>>         lowest-common-denominator forwarding hardware in introducing
>>>>         entropy to protocols that lack it (this is particularly true of
>>>>         the GRE-in-UDP use case also under discussion elsewhere).
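
For context, the entropy mechanism Ed refers to is the draft's use of the UDP
source port to carry a flow hash, so that transit routers whose ECMP/LAG
hashing only parses the outer IP/UDP headers can still load-balance the
tunneled MPLS traffic. A minimal sketch; the hash inputs, hash function, and
port range below are illustrative choices, not requirements of the draft:

import hashlib
import struct

EPHEMERAL_MIN, EPHEMERAL_MAX = 49152, 65535  # illustrative range only

def entropy_source_port(label_stack, inner_flow_key):
    """Derive a UDP source port from inner flow identifiers so packets of
    one flow stay on one ECMP path while distinct flows are spread.
    label_stack is a list of MPLS label values; inner_flow_key is bytes,
    e.g. a packed inner 5-tuple."""
    key = struct.pack("!%dI" % len(label_stack), *label_stack) + inner_flow_key
    digest = hashlib.sha1(key).digest()
    value = struct.unpack("!H", digest[:2])[0]
    return EPHEMERAL_MIN + value % (EPHEMERAL_MAX - EPHEMERAL_MIN + 1)
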
>>>>
>>>>         The tunnel is not the source of the traffic.  The _source of the
>>>>         traffic_ is the source of the traffic.
>>>>
>>>>
>>>>     To the Internet, the tunnel encapsulator is the source of traffic.
>>>>     Tracing the data back further than that is a mirage at best - and
>>>>     irrelevant.
>>>>
>>>>
>>>> The 'internet' cares about characteristics of reactivity to congestion.
>>>>   This is guaranteed by the /source of the traffic/ independent of any
>>>> intermediate node.
>>>>
>>>
>>> Are you prepared to make that a requirement of this document, i.e., that
>>> the only MPLS traffic that can be UDP encapsulated is known to react to
>>> congestion?
>>>
>>> How exactly can you know that?
>>>
>>>
>>>>     The tunnel head-end is responsible for the tunnel walking, talking,
>>>>     and quacking like a duck (host). When the tunnel head-end knows
>>>>     something about the ultimate origin of the traffic - whether real,
>>>>     imagined, or from Asgard - then it has done its duty (e.g., that
>>>>     it's already congestion controlled).
>>>>
>>>>     But that head end is responsible, regardless of what it knows or
>>>>     doesn't. And when it doesn't know, the only way to be responsible is
>>>>     to put in its own reactivity.
>>>>
>>>> This is not fact; it's actually precisely the principle  we're currently
>>>> arguing about.  ;)
>>>>
>>>
>>> Actually, it's a paraphrasing of Section 3.1.3 of RFC5405.
>>>
>>> We can continue to debate it, but until it's been *changed* by a
>>> revision, it remains BCP.
>>>
>>>
>>>> I would posit:
>>>>
>>>> The tunnel doesn't have to know anything about congestion or performance
>>>> characteristics because the originating application must.
>>>>
>>>
>>> That works only if you know that fact about the originating application.
>>> However, there are plenty of applications whose traffic goes over MPLS and
>>> isn't congestion reactive or bandwidth-limited.
>>>
>>>
>>>> See GRE, MPLS, many other tunnel types,
>>>>
>>>
>>> This isn't an issue for all tunnels until they enter the Internet...
>>>
>>>
>>>> including several existing within the
>>>> IETF that make use of an outer UDP header.
>>>>
>>>
>>> Which are all already supposed to follow the recommendations of RFC5405.
>>> To the extent that they don't, they don't agree with that BCP.
>>>
>>> I'm not saying such things never can or will exist, but I don't think
>>> the IETF should be self-contradictory. We already agreed as a group on such
>>> BCPs and other standards, and new standards-track docs need to follow them.
>>>
>>>
>>>>         The originating application
>>>>         whose traffic is being tunneled should be responsible for
>>>>         congestion control, or lack thereof.
>>>>
>>>>     Perhaps it should be, but that's an agreement between whoever
>>>>     implements/deploys the tunnel headend and whoever provides the
>>>>     originating traffic to them. The problem is that this isn't true for
>>>>     the typical use case for this kind of encapsulation.
>>>>
>>>> How so?  As mentioned before, this is the same case as standard GRE/MPLS
>>>> etc.
>>>>
>>>
>>> It's putting MPLS inside UDP. That's a different case, and the reason
>>> RFC5405 applies.
>>>
>>>
>>>>     I.e., if we were talking about MPLS traffic that already was
>>>>     reactive, we wouldn't be claiming the need for additional
>>>>     encapsulator mechanism. It's precisely because nothing is known
>>>>     about the MPLS traffic that the encapsulator needs to act.
>>>>
>>>> The MPLS traffic doesn't have to be reactive; it's the applications
>>>> being encapsulated / traversing a particular tunnel that are responsible
>>>> for and aware of path and congestion characteristics.  Because the MPLS
>>>> head end knows nothing about the /end to end application 'session'/
>>>> characteristics, it /shouldn't/ have anything to do with congestion
>>>> management.
>>>>
>>>
>>> OK, so what you're saying is that "traffic using this encapsulation MUST
>>> be known to be congestion reactive". Put that in the doc and we'll debate
>>> whether we believe it.
>>>
>>> But right now you're basically saying that because you think it's
>>> someone else's problem (the originating application), it isn't yours. The
>>> difficulty with that logic is that you (the tunnel headend) are responsible
>>> for ensuring that this is true - either by *knowing* that the originating
>>> traffic is congestion reactive, or by putting in your own mechanism to
>>> ensure that this happens if the originating application isn't.
>>>
>>>
>>>>         Are we advocating a return to intermediate
>>>>         congestion control (I like X.25 as much as the next guy,
>>>>         but...).  This is a very stark change of direction.
>>>>
>>>>         I think mandating congestion control is not technically sound
>>>>         from either a theoretical (violation of end to end principle,
>>>>         stacking of congestion control algorithms leading to complex and
>>>>         potentially suboptimal results) or economic perspective (as a
>>>>         very large backbone, we've been doing just fine without
>>>>         intermediate congestion management, thank you very much, and I
>>>>         have 0 desire to pay for a cost-prohibitive, unnecessary feature
>>>>         in silicon.)
>>>>
>>>>     Write that up, and we'll see how it turns out in the IETF. However,
>>>>     right now, the IETF BCPs do require reactive congestion management
>>>>     of transport streams.
>>>>
>>>> Which part?  The end-to-end principle, or the aversion to congestion
>>>> control stacking?  These have been implicit in all tunneling protocols
>>>> produced by the IETF for the modern internet.
>>>>
>>>
>>> Sure, and that's reflected in RFC5405 already. However, please, PLEASE
>>> appreciate that NOBODY here is asking you to put in "congestion control
>>> stacking"; that happens when you run two dynamic, reactive control
>>> algorithms using the same timescale on top of each other.
>>>
>>> Equally well-known in CC circles is that you CAN - and often *should* -
>>> stack different kinds of mechanisms at different layers with different
>>> timescales. E.g., that's why we have an AQM WG - because even when all the
>>> traffic is TCP, that's not quite enough inside the network. That's also why
>>> Lars was suggesting something coarse on a longer timescale - a circuit
>>> breaker - rather than AIMD on an RTT basis.
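
The coarse, long-timescale mechanism being contrasted with AIMD here can be
very simple. A minimal sketch of a tunnel-level circuit breaker, assuming the
head-end can periodically obtain sent/received counters for the tunnel (for
instance via the kind of OAM Stewart suggested); the interval, threshold, and
function names are illustrative assumptions, not taken from the draft:

import time

LOSS_THRESHOLD = 0.10        # illustrative: trip above 10% sustained loss
MEASUREMENT_INTERVAL = 60.0  # illustrative: minutes scale, not per-RTT

def run_circuit_breaker(read_counters, disable_tunnel):
    """Unlike AIMD, this never adjusts the rate per RTT; it only shuts the
    tunnel down when sustained loss suggests the encapsulated traffic is
    not reacting to congestion."""
    prev_sent, prev_rcvd = read_counters()  # (packets sent, packets received)
    while True:
        time.sleep(MEASUREMENT_INTERVAL)
        sent, rcvd = read_counters()
        delta_sent = sent - prev_sent
        delta_rcvd = rcvd - prev_rcvd
        prev_sent, prev_rcvd = sent, rcvd
        loss = (delta_sent - delta_rcvd) / float(delta_sent) if delta_sent else 0.0
        if loss > LOSS_THRESHOLD:
            disable_tunnel()
            return
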
>>>
>>> Keep in mind as well that the E2E argument says that you can't get an
>>> E2E service by composing the equivalent HBH one; it also says that HBH
>>> mechanisms can be required for efficiency. That's what we're talking about
>>> here - the efficiency impact of congestion, not the overall correctness of
>>> E2E control.
>>>
>>>
>>>>     If you don't want/like that, then either don't use transport
>>>>     encapsulation, or change the BCPs.
>>>>
>>>> These BCPs are defined for an originating /application/.
>>>>
>>>
>>> Yes, and I don't understand why you (and others) keep thinking it
>>> matters that there are layers of things behind the tunnel head end. It
>>> doesn't - unless you KNOW what those layers are, and can ensure that they
>>> behave as you expect.
>>>
>>>
>>>> In this case
>>>> the UDP header is simply a shim header applied to existing application
>>>> traffic.
>>>>
>>>
>>> It's not "simply a shim" - if that were the case, use IP and we're done. No
>>> need for congestion control.
>>>
>>> The reason congestion issues arise is that you're inserting a header
>>> ****THAT YOU EXPECT PARTS OF THE INTERNET YOU TRAVERSE TO REACT TO****.
>>>
>>> If you put in a UDP-like header that nobody in the Internet would
>>> interpret, this wouldn't be an issue.
>>>
>>> But you simply cannot expect the Internet to treat you like
>>> "application" traffic if you won't enforce acting like that traffic too.
>>>
>>>
>>>> The tunnel head does not introduce traffic independent of the
>>>> originating application.
>>>>
>>>
>>> The Internet ****neither knows nor cares****.
>>>
>>> To the Internet, the head-end is the source. Whatever data the head end
>>> puts inside the UDP packets *is application data* to the rest of the
>>> Internet.
>>>
>>> Again, if you are saying that you know so much about the originating
>>> source that you know you don't need additional mechanism at the headend,
>>> say so - but then live by that requirement.
>>>
>>> If *any* MPLS traffic could show up at the headend, then it becomes the
>>> headend's responsibility to do something.
>>>
>>> ---
>>>
>>> Consider the following case:
>>>
>>>         - video shows up inside the OS, destined for the network
>>>
>>>         - software X bundles that video and sends it to go out
>>>
>>>         - software Y puts that data into UDP packets to go
>>>         to the Internet
>>>
>>> So what's the "application" here? To the Internet, it's software Y --
>>> the thing that puts the 'application' data into UDP packets. The previous
>>> steps are irrelevant - just as irrelevant as the singer your video camera
>>> is filming, as irrelevant as the sun that created the light that is
>>> reflected off the singer to your camera.
>>>
>>> If software Y knows so much about the steps that lead to its input data
>>> that it knows it's congestion reactive, nothing more need be done.
>>>
>>> If NOT (and that's the relevant corollary here), then it becomes
>>> software Y's responsibility to put in some reactivity.
>>>
>>> Joe
>>>
>>
>>
>