Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

Edward Crabbe <edc@google.com> Fri, 24 January 2014 00:09 UTC

From: Edward Crabbe <edc@google.com>
Date: Thu, 23 Jan 2014 16:08:24 -0800
To: Alia Atlas <akatlas@gmail.com>
Cc: "mpls@ietf.org" <mpls@ietf.org>, IETF discussion list <ietf@ietf.org>, Noel Chiappa <jnc@mercury.lcs.mit.edu>, Joe Touch <touch@isi.edu>
Subject: Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

I think there also needs to be a network boundary statement, with
recommendations for filtering and for intra- vs. inter-domain use of the
protocol.
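
For concreteness, a rough sketch (in Python) of the kind of boundary
policy I mean. The port number and interface names below are placeholders,
not values from the draft; the point is only to illustrate filtering the
encapsulation at inter-domain boundaries while permitting it intra-domain:

    # Illustrative only: drop MPLS-in-UDP arriving from outside the domain,
    # permit it on intra-domain interfaces.  Port and interface names are
    # placeholders, not values from the draft.
    MPLS_IN_UDP_PORT = 6635                        # placeholder destination port
    INTERDOMAIN_IFACES = {"ge-0/0/1", "xe-1/0/3"}  # interfaces facing other domains

    def permit(ingress_iface, udp_dst_port):
        """Return False for MPLS-in-UDP received across a domain boundary."""
        if udp_dst_port == MPLS_IN_UDP_PORT and ingress_iface in INTERDOMAIN_IFACES:
            return False                           # filter at the network boundary
        return True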


On Thu, Jan 23, 2014 at 4:07 PM, Alia Atlas <akatlas@gmail.com> wrote:

> I don't want to get in the way of vehement discussion, but I thought we
> were on the verge of finding an actual solution...
>
> IMHO, that was a combination of an applicability statement, using SHOULD
> for congestion control and checksum, and defining a longer-term OAM-based
> approach (as Stewart Bryant suggested) to be able to verify that packet
> corruption or excessive drops aren't happening.
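
For concreteness, the OAM-based verification piece could be as simple as
the decapsulator-side sketch below (assumed details, not text from the
draft or from Stewart's proposal): probes carry a sequence number and a
payload digest, so excessive drops and corruption can both be counted.

    # Sketch only: count drops (sequence gaps) and corruption (digest
    # mismatches) from periodic OAM probes sent through the tunnel.
    import hashlib

    class OamMonitor:
        def __init__(self):
            self.expected_seq = 0
            self.drops = 0
            self.corrupted = 0

        def on_probe(self, seq, payload, digest):
            """Handle one received probe; 'digest' was computed by the encapsulator."""
            if seq > self.expected_seq:
                self.drops += seq - self.expected_seq  # missing probes count as drops
            if hashlib.sha256(payload).digest() != digest:
                self.corrupted += 1                    # payload damaged in transit
            self.expected_seq = seq + 1
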
>
> Does that sound like an acceptable set?
>
> Alia
>
>
> On Thu, Jan 23, 2014 at 6:56 PM, Joe Touch <touch@isi.edu> wrote:
>
>>
>>
>> On 1/23/2014 3:32 PM, Edward Crabbe wrote:
>>
>>> Joe, thanks for your response. Comments inline:
>>>
>>>
>>>     On 1/23/2014 1:27 PM, Edward Crabbe wrote:
>>>
>>>         Part of the point of using UDP is to make use of lowest common
>>>         denominator forwarding hardware in introducing entropy to
>>>         protocols that lack it (this is particularly true of the GRE in
>>>         UDP use case also under discussion elsewhere).
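
For concreteness: the entropy in question is the encapsulator deriving the
outer UDP source port from the inner flow, so that existing hardware that
hashes only the outer IP/UDP header still spreads flows across ECMP paths.
A minimal sketch - the hash, key fields, and port-range policy here are
illustrative assumptions, not taken from the draft:

    # Sketch: derive a stable 16-bit entropy value for the outer UDP source
    # port from fields of the encapsulated flow, so legacy ECMP hashing on
    # the outer IP/UDP header load-balances without parsing the MPLS payload.
    import struct
    import zlib

    def entropy_source_port(top_label, inner_src, inner_dst):
        key = struct.pack("!I", top_label) + inner_src + inner_dst
        h = zlib.crc32(key) & 0xFFFF
        return h | 0xC000   # example policy: keep the port in the dynamic range
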
>>>
>>>         The tunnel is not the source of the traffic.  The _source of the
>>>         traffic_ is the source of the traffic.
>>>
>>>
>>>     To the Internet, the tunnel encapsulator is the source of traffic.
>>>     Tracing the data back further than that is a mirage at best - and
>>>     irrelevant.
>>>
>>>
>>> The 'internet' cares about characteristics of reactivity to congestion.
>>>   This is guaranteed by the /source of the traffic/ independent of any
>>> intermediate node.
>>>
>>
>> Are you prepared to make that a requirement of this document, i.e., that
>> the only MPLS traffic that can be UDP encapsulated is known to react to
>> congestion?
>>
>> How exactly can you know that?
>>
>>
>>>     The tunnel head-end is responsible for the tunnel walking, talking,
>>>     and quacking like a duck (host). When the tunnel head-end knows
>>>     something about the ultimate origin of the traffic - whether real,
>>>     imagined, or from Asgard - then it has done its duty (e.g., that
>>>     it's already congestion controlled).
>>>
>>>     But that head end is responsible, regardless of what it knows or
>>>     doesn't. And when it doesn't know, the only way to be responsible is
>>>     to put in its own reactivity.
>>>
>>> This is not fact; it's actually precisely the principle we're currently
>>> arguing about.  ;)
>>>
>>
>> Actually, it's a paraphrase of Section 3.1.3 of RFC5405.
>>
>> We can continue to debate it, but until it's been *changed* by a
>> revision, it remains BCP.
>>
>>
>>> I would posit:
>>>
>>> The tunnel doesn't have to know anything about congestion or performance
>>> characteristics because the originating application must.
>>>
>>
>> That works only if you know that fact about the originating application.
>> However, there are plenty of applications whose traffic goes over MPLS that
>> isn't congestion reactive or bandwidth-limited.
>>
>>
>>> See GRE, MPLS, many other tunnel types,
>>>
>>
>> This isn't an issue for all tunnels until they enter the Internet...
>>
>>
>>> including several existing within the
>>> IETF that make use of an outer UDP header.
>>>
>>
>> Which are all already supposed to follow the recommendations of RFC5405.
>> To the extent that they don't, they don't agree with that BCP.
>>
>> I'm not saying such things never can or will exist, but I don't think the
>> IETF should be self-contradictory. We already agreed as a group on such
>> BCPs and other standards, and new standards-track docs need to follow them.
>>
>>
>>>         The originating application whose traffic is being tunneled
>>>         should be responsible for congestion control, or lack thereof.
>>>
>>>     Perhaps it should be, but that's an agreement between whomever
>>>     implements/deploys the tunnel headend and whomever provides the
>>>     originating traffic to them. The problem is that this isn't true for
>>>     the typical use case for this kind of encapsulation.
>>>
>>> How so?  As mentioned before, this is the same case as standard GRE/MPLS
>>> etc.
>>>
>>
>> It's putting MPLS inside UDP. That's a different case, and the reason
>> RFC5405 applies.
>>
>>
>>>     I.e., if we were talking about MPLS traffic that already was
>>>     reactive, we wouldn't be claiming the need for additional
>>>     encapsulator mechanism. It's precisely because nothing is known
>>>     about the MPLS traffic that the encapsulator needs to act.
>>>
>>> The MPLS traffic doesn't have to be reactive; it's the applications
>>> being encapsulated / traversing a particular tunnel that are responsible
>>> for and aware of path and congestion characteristics.  Because the MPLS
>>> head end knows nothing about the /end to end application 'session'/
>>> characteristics, it /shouldn't/ have anything to do with congestion
>>> management.
>>>
>>
>> OK, so what you're saying is that "traffic using this encapsulation MUST
>> be known to be congestion reactive". Put that in the doc and we'll debate
>> whether we believe it.
>>
>> But right now you're basically saying that because you think it's someone
>> else's problem (the originating application), it isn't yours. The
>> difficulty with that logic is that you (the tunnel headend) are responsible
>> for ensuring that this is true - either by *knowing* that the originating
>> traffic is congestion reactive, or by putting in your own mechanism to
>> ensure that this happens if the originating application isn't.
>>
>>
>>>         Are we advocating a return to intermediate congestion control
>>>         (I like X.25 as much as the next guy, but...).  This is a very
>>>         stark change of direction.
>>>
>>>         I think mandating congestion control is not technically sound
>>>         from either a theoretical (violation of the end to end principle,
>>>         stacking of congestion control algorithms leading to complex and
>>>         potentially suboptimal results) or an economic perspective (as a
>>>         very large backbone, we've been doing just fine without
>>>         intermediate congestion management, thank you very much, and I
>>>         have 0 desire to pay for a cost prohibitive, unnecessary feature
>>>         in silicon).
>>>
>>>     Write that up, and we'll see how it turns out in the IETF. However,
>>>     right now, the IETF BCPs do require reactive congestion management
>>>     of transport streams.
>>>
>>> Which part?  The end-to-end principle, or the aversion to congestion
>>> control stacking?  These have been implicit in all tunneling protocols
>>> produced by the IETF for the modern internet.
>>>
>>
>> Sure, and that's reflected in RFC5405 already. However, please, PLEASE
>> appreciate that NOBODY here is asking you to put in "congestion control
>> stacking"; that happens when you run two dynamic, reactive control
>> algorithms using the same timescale on top of each other.
>>
>> Equally well-known in CC circles is that you CAN - and often *should* -
>> stack different kinds of mechanisms at different layers with different
>> timescales. E.g., that's why we have an AQM WG - because even when all the
>> traffic is TCP, that's not quite enough inside the network. That's also why
>> Lars was suggesting something coarse on a longer timescale - a circuit
>> breaker - rather than AIMD on a RTT basis.
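
For concreteness, the coarse, long-timescale breaker being described might
look like the sketch below (thresholds, intervals, and the trip action are
illustrative assumptions, not from any draft): measure tunnel loss over
minutes, e.g. from OAM or interface counters, and disable the tunnel only
if loss stays high, rather than adapting per RTT the way AIMD does.

    # Sketch of a coarse tunnel circuit breaker: react on a timescale of
    # minutes, not RTTs, and only to sustained loss above a threshold.
    class TunnelCircuitBreaker:
        def __init__(self, loss_threshold=0.10, trip_after=3):
            self.loss_threshold = loss_threshold  # e.g. 10% loss per interval
            self.trip_after = trip_after          # consecutive bad intervals before tripping
            self.bad_intervals = 0
            self.tunnel_enabled = True

        def report_interval(self, sent, delivered):
            """Feed per-interval counters (e.g. from OAM) once per measurement interval."""
            if sent == 0:
                return
            loss = 1.0 - (delivered / sent)
            self.bad_intervals = self.bad_intervals + 1 if loss > self.loss_threshold else 0
            if self.bad_intervals >= self.trip_after:
                self.tunnel_enabled = False       # stop encapsulating until an operator intervenes
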
>>
>> Keep in mind as well that the E2E argument says that you can't get an E2E
>> service by composing the equivalent HBH one; it also says that HBH
>> mechanisms can be required for efficiency. That's what we're talking about
>> here - the efficiency impact of congestion, not the overall correctness of
>> E2E control.
>>
>>
>>>     If you don't want/like that, then either don't use transport
>>>     encapsulation, or change the BCPs.
>>>
>>> These BCPs are defined for an originating /application/.
>>>
>>
>> Yes, and I don't understand why you (and others) keep thinking it matters
>> that there are layers of things behind the tunnel head end. It doesn't -
>> unless you KNOW what those layers are, and can ensure that they behave as
>> you expect.
>>
>>
>>> In this case
>>> the UDP header is simply a shim header applied to existing application
>>> traffic.
>>>
>>
>> It's not "simply a shim" - if that's the case, use IP and we're done. No
>> need for congestion control.
>>
>> The reason congestion issues arise is because you're inserting a header
>> ****THAT YOU EXPECT PARTS OF THE INTERNET YOU TRAVERSE TO REACT TO****.
>>
>> If you put in a UDP-like header that nobody in the Internet would
>> interpret, this wouldn't be an issue.
>>
>> But you simply cannot expect the Internet to treat you like "application"
>> traffic if you won't enforce that it acts like that traffic too.
>>
>>
>>> The tunnel head does not introduce traffic independent of the
>>> originating application.
>>>
>>
>> The Internet ****neither knows nor cares****.
>>
>> To the Internet, the head-end is the source. Whatever data the head end
>> puts inside the UDP packets *is application data* to the rest of the
>> Internet.
>>
>> Again, if you are saying that you know so much about the originating
>> source that you know you don't need additional mechanism at the headend,
>> say so - but then live by that requirement.
>>
>> If *any* MPLS traffic could show up at the headend, then it becomes the
>> headend's responsibility to do something.
>>
>> ---
>>
>> Consider the following case:
>>
>>         - video shows up inside the OS, destined for the network
>>
>>         - software X bundles that video and sends it to go out
>>
>>         - software Y puts that data into UDP packets to go
>>         to the Internet
>>
>> So what's the "application" here? To the Internet, it's software Y -- the
>> thing that puts the 'application' data into UDP packets. The previous steps
>> are irrelevant - just as irrelevant as the singer your video camera is
>> filming, as irrelevant as the sun that created the light that is reflected
>> off the singer to your camera.
>>
>> If software Y knows so much about the steps that lead to its input data
>> that it knows it's congestion reactive, nothing more need be done.
>>
>> If NOT (and that's the relevant corollary here), then it becomes software
>> Y's responsibility to put in some reactivity.
>>
>> Joe
>>
>
>