Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

Curtis Villamizar <curtis@ipv6.occnc.com> Fri, 24 January 2014 04:05 UTC

To: Alia Atlas <akatlas@gmail.com>
From: Curtis Villamizar <curtis@ipv6.occnc.com>
Subject: Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard
In-reply-to: Your message of "Thu, 23 Jan 2014 19:07:25 -0500." <CAG4d1rf+wAJuD2GvYfm14bOoEvbhqq0azN5fOq35aPJDUvg=gw@mail.gmail.com>
Date: Thu, 23 Jan 2014 23:05:10 -0500
Cc: "mpls@ietf.org" <mpls@ietf.org>, IETF discussion list <ietf@ietf.org>, Noel Chiappa <jnc@mercury.lcs.mit.edu>

+1

Alia - Nice concise summary.  Thanks.

Curtis


In message <CAG4d1rf+wAJuD2GvYfm14bOoEvbhqq0azN5fOq35aPJDUvg=gw@mail.gmail.com>
Alia Atlas writes:
> 
> I don't want to get in the way of vehement discussion, but I thought
> we were on the verge of finding an actual solution...
>  
> IMHO, that was a combination of an applicability statement, using
> SHOULD for congestion control and checksum, and defining a longer-term
> OAM-based approach (as Stewart Bryant suggested) to be able to verify
> that packet corruption or excessive drops aren't happening.
>  
> Does that sound like an acceptable set?
>  
> Alia
>  
>  
> On Thu, Jan 23, 2014 at 6:56 PM, Joe Touch <touch@isi.edu> wrote:
>  
> >
> >
> > On 1/23/2014 3:32 PM, Edward Crabbe wrote:
> >
> >> Joe, thanks for your response. Comments inline:
> >>
> >>
> >>     On 1/23/2014 1:27 PM, Edward Crabbe wrote:
> >>
> >>         Part of the point of using UDP is to make use of lowest common
> >>         denominator forwarding hardware in introducing entropy to
> >>         protocols that
> >>         lack it ( this is particularly true of the GRE in UDP use case
> >> also
> >>         under discussion elsewhere).
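[For context: the "entropy" Ed refers to usually means deriving the outer UDP source port from a hash of the inner flow, so that ECMP hardware which only inspects the outer IP/UDP header still load-balances MPLS traffic. A minimal sketch of that idea in Python; the hash inputs and the use of the 49152-65535 dynamic port range are illustrative assumptions, not text from the draft:]

```python
import hashlib
import struct

def entropy_source_port(label_stack: list, payload_prefix: bytes = b"") -> int:
    # Hash the inner MPLS label stack (and optionally a few payload
    # bytes) so that packets of one flow always get the same outer
    # source port, while distinct flows spread across ECMP paths.
    h = hashlib.sha256()
    for label in label_stack:
        h.update(struct.pack("!I", label))
    h.update(payload_prefix)
    # Map the first two digest bytes into the dynamic port range
    # 49152-65535, a common convention for entropy-bearing ports.
    digest = int.from_bytes(h.digest()[:2], "big")
    return 49152 + digest % (65536 - 49152)
```

[The point is that the encapsulator adds no per-flow state: the port is a pure function of the inner headers, which is exactly why commodity forwarding hardware can exploit it.]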
> >>
> >>         The tunnel is not the source of the traffic.  The _source of the
> >>         traffic_ is the source of the traffic.
> >>
> >>
> >>     To the Internet, the tunnel encapsulator is the source of traffic.
> >>     Tracing the data back further than that is a mirage at best - and
> >>     irrelevant.
> >>
> >>
> >> The 'internet' cares about the traffic's reactivity to congestion.
> >> This is guaranteed by the /source of the traffic/, independent of any
> >> intermediate node.
> >>
> >
> > Are you prepared to make that a requirement of this document, i.e., that
> > the only MPLS traffic that can be UDP encapsulated is known to react to
> > congestion?
> >
> > How exactly can you know that?
> >
> >
> >      The tunnel head-end is responsible for the tunnel walking, talking,
> >>     and quaking like a duck (host). When the tunnel head-end knows
> >>     something about the ultimate origin of the traffic - whether real,
> >>     imagined, or from Asgard - then it has done its duty (e.g., that
> >>     it's already congestion controlled).
> >>
> >>     But that head end is responsible, regardless of what it knows or
> >>     doesn't. And when it doesn't know, the only way to be responsible is
> >>     to put in its own reactivity.
> >>
> >> This is not fact; it's actually precisely the principle we're currently
> >> arguing about.  ;)
> >>
> >
> > Actually, it's a paraphrasing of Section 3.1.3 of RFC5405.
> >
> > We can continue to debate it, but until it's been *changed* by a revision,
> > it remains BCP.
> >
> >
> >  I would posit:
> >>
> >> The tunnel doesn't have to know anything about congestion or performance
> >> characteristics because the originating application must.
> >>
> >
> > That works only if you know that fact about the originating application.
> > However, there are plenty of applications whose traffic goes over MPLS that
> > isn't congestion reactive or bandwidth-limited.
> >
> >
> >> See GRE,
> >> MPLS, many other tunnel types,
> >>
> >
> > This isn't an issue for all tunnels until they enter the Internet...
> >
> >
> >  including several existing within the
> >> IETF that make use of an outer UDP header.
> >>
> >
> > Which are all already supposed to follow the recommendations of RFC5405.
> > To the extent that they don't, they don't agree with that BCP.
> >
> > I'm not saying such things never can or will exist, but I don't think the
> > IETF should be self-contradictory. We already agreed as a group on such
> > BCPs and other standards, and new standards-track docs need to follow them.
> >
> >
> >          The originating application
> >>         whose traffic is being tunneled should be responsible for
> >> congestion
> >>         control, or lack thereof.
> >>
> >>     Perhaps it should be, but that's an agreement between whomever
> >>     implements/deploys the tunnel headend and whomever provides the
> >>     originating traffic to them. The problem is that this isn't true for
> >>     the typical use case for this kind of encapsulation.
> >>
> >> How so?  As mentioned before, this is the same case as standard GRE/MPLS
> >> etc.
> >>
> >
> > It's putting MPLS inside UDP. That's a different case, and the reason
> > RFC5405 applies.
> >
> >
> >      I.e., if we were talking about MPLS traffic that already was
> >>     reactive, we wouldn't be claiming the need for additional
> >>     encapsulator mechanism. It's precisely because nothing is known
> >>     about the MPLS traffic that the encapsulator needs to act.
> >>
> >> The MPLS traffic doesn't have to be reactive, it's the applications
> >> being encapsulated / traversing a particular tunnel that are responsible
> >> for and aware of path and congestion characteristics.  Because the MPLS
> >> head end knows nothing about the /end to end application 'session'/
> >> characteristics it /shouldn't/ have anything to do with congestion
> >> management.
> >>
> >
> > OK, so what you're saying is that "traffic using this encapsulation MUST
> > be known to be congestion reactive". Put that in the doc and we'll debate
> > whether we believe it.
> >
> > But right now you're basically saying that because you think it's someone
> > else's problem (the originating application), it isn't yours. The
> > difficulty with that logic is that you (the tunnel headend) are responsible
> > to ensure that this is true - either by *knowing* that the originating
> > traffic is congestion reactive, or by putting in its own mechanism to
> > ensure that this happens if the originating application isn't.
> >
> >
> >>         Are we advocating a return to intermediate
> >>         congestion control (I like X.25 as much as the next guy,
> >>         but...).  This
> >>         is a very stark change of direction.
> >>
> >>         I think mandating congestion control is not technically sound
> >> from
> >>         either a theoretical (violation of end to end principle, stacking
> >> of
> >>         congestion control algorithms leading to complex and potentially
> >>         suboptimal results) or economic perspective (as a very large
> >>         backbone,
> >>         we've been doing just fine without intermediate congestion
> >>         management
> >>         thank you very much, and I have 0 desire to pay for a cost
> >>         prohibitive,
> >>         unnecessary feature in silicon.)
> >>
> >>     Write that up, and we'll see how it turns out in the IETF. However,
> >>     right now, the IETF BCPs do require reactive congestion management
> >>     of transport streams.
> >>
> >> Which part?  The end-to-end principle, or the aversion to congestion
> >> control stacking?  These have been implicit in all tunneling protocols
> >> produced by the IETF for the modern internet.
> >>
> >
> > Sure, and that's reflected in RFC5405 already. However, please, PLEASE
> > appreciate that NOBODY here is asking you to put in "congestion control
> > stacking"; that happens when you run two dynamic, reactive control
> > algorithms using the same timescale on top of each other.
> >
> > Equally well-known in CC circles is that you CAN - and often *should* -
> > stack different kinds of mechanisms at different layers with different
> > timescales. E.g., that's why we have an AQM WG - because even when all the
> > traffic is TCP, that's not quite enough inside the network. That's also why
> > Lars was suggesting something coarse on a longer timescale - a circuit
> > breaker - rather than AIMD on an RTT basis.
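[For readers unfamiliar with the term: a transport circuit breaker is exactly this coarse, long-timescale check - count packets sent and lost over a long window, and shut the tunnel down only on sustained excessive loss, rather than adjusting a rate every RTT. A hypothetical sketch; the threshold and window semantics are illustrative assumptions, not from this thread:]

```python
class TunnelCircuitBreaker:
    """Coarse safety valve for a non-congestion-controlled tunnel:
    instead of reacting per-RTT like TCP's AIMD, evaluate loss over
    a long measurement window and trip (disable the tunnel) only on
    sustained, excessive loss."""

    def __init__(self, loss_threshold: float = 0.10) -> None:
        self.loss_threshold = loss_threshold  # illustrative value
        self.sent = 0
        self.lost = 0
        self.tripped = False

    def record(self, sent: int, lost: int) -> None:
        # Accumulate counters; in practice these would come from OAM
        # probes or encapsulator/decapsulator statistics.
        self.sent += sent
        self.lost += lost

    def evaluate(self) -> bool:
        # Called once per window (e.g. tens of seconds): trip if the
        # loss fraction over the whole window exceeds the threshold,
        # then reset the counters for the next window.
        if self.sent and self.lost / self.sent > self.loss_threshold:
            self.tripped = True
        self.sent = self.lost = 0
        return self.tripped
```

[Once tripped, the breaker stays off until an operator or an OAM check re-enables it - a slow on/off control, deliberately unlike a per-RTT rate controller, so the two never "stack".]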
> >
> > Keep in mind as well that the E2E argument says that you can't get an E2E
> > service by composing the equivalent HBH one; it also says that HBH
> > mechanisms can be required for efficiency. That's what we're talking about
> > here - the efficiency impact of congestion, not the overall correctness of
> > E2E control.
> >
> >
> >      If you don't want/like that, then either don't use transport
> >>     encapsulation, or change the BCPs.
> >>
> >> These BCPs are defined for an originating /application/.
> >>
> >
> > Yes, and I don't understand why you (and others) keep thinking it matters
> > that there are layers of things behind the tunnel head end. It doesn't -
> > unless you KNOW what those layers are, and can ensure that they behave as
> > you expect.
> >
> >
> >  In this case
> >> the UDP header is simply a shim header applied to existing application
> >> traffic.
> >>
> >
> > It's not "simply a shim" - if that's the case, use IP and we're done. No
> > need for congestion control.
> >
> > The reason congestion issues arise is because you're inserting a header
> > ****THAT YOU EXPECT PARTS OF THE INTERNET YOU TRAVERSE TO REACT TO****.
> >
> > If you put in a UDP-like header that nobody in the Internet would
> > interpret, this wouldn't be an issue.
> >
> > But you simply cannot expect the Internet to treat you like "application"
> > traffic if you won't enforce acting like that traffic too.
> >
> >
> >  The tunnel head does not introduce traffic independent of the
> >> originating application.
> >>
> >
> > The Internet ****neither knows nor cares****.
> >
> > To the Internet, the head-end is the source. Whatever data the head end
> > puts inside the UDP packets *is application data* to the rest of the
> > Internet.
> >
> > Again, if you are saying that you know so much about the originating
> > source that you know you don't need additional mechanism at the headend,
> > say so - but then live by that requirement.
> >
> > If *any* MPLS traffic could show up at the headend, then it becomes the
> > headend's responsibility to do something.
> >
> > ---
> >
> > Consider the following case:
> >
> >         - video shows up inside the OS, destined for the network
> >
> >         - software X bundles that video and sends it to go out
> >
> >         - software Y puts that data into UDP packets to go
> >         to the Internet
> >
> > So what's the "application" here? To the Internet, it's software Y -- the
> > thing that puts the 'application' data into UDP packets. The previous steps
> > are irrelevant - just as irrelevant as the singer your video camera is
> > filming, as irrelevant as the sun that created the light that is reflected
> > off the singer to your camera.
> >
> > If software Y knows so much about the steps that lead to its input data
> > that it knows it's congestion reactive, nothing more need be done.
> >
> > If NOT (and that's the relevant corollary here), then it becomes software
> > Y's responsibility to put in some reactivity.
> >
> > Joe