Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

<l.wood@surrey.ac.uk> Fri, 24 January 2014 00:35 UTC

From: l.wood@surrey.ac.uk
To: touch@isi.edu, akatlas@gmail.com
Date: Fri, 24 Jan 2014 00:33:49 +0000
Message-ID: <290E20B455C66743BE178C5C84F1240847E63346E6@EXMB01CMS.surrey.ac.uk>
In-Reply-To: <52E1AF6E.1000108@isi.edu>
Cc: mpls@ietf.org, jnc@mercury.lcs.mit.edu, ietf@ietf.org
Subject: Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

I wouldn't say the checksum issue has petered out - I've provided Xiaohu with replacement paragraphs of text that will hopefully resolve it.
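[For context, the checksum being argued over is UDP's 16-bit one's-complement checksum (RFC 768, computed per RFC 1071); the question in this thread is how strongly MPLS-in-UDP should require it. A minimal sketch of the computation itself, for reference:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum as used by UDP (RFC 768),
    computed per the folding method of RFC 1071."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return (~total) & 0xFFFF
```

A payload followed by its own checksum sums to zero under this function, which is how a receiver verifies it.]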

Lloyd Wood
http://about.me/lloydwood
________________________________________
From: ietf [ietf-bounces@ietf.org] On Behalf Of Joe Touch [touch@isi.edu]
Sent: 24 January 2014 00:10
To: Alia Atlas
Cc: mpls@ietf.org; Edward Crabbe; Noel Chiappa; IETF discussion list
Subject: Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

Hi, Alia,

On 1/23/2014 4:07 PM, Alia Atlas wrote:
> I don't want to get in the way of vehement discussion, but I thought we
> were on the verge of finding an actual solution...
>
> IMHO, that was a combination of an applicability statement, using SHOULD
> for congestion control and checksum, and defining a longer-term
> OAM-based approach (as Stewart Bryant suggested) to be able to verify
> that packet corruption or excessive drops aren't happening.
>
> Does that sound like an acceptable set?

It answers my concerns (I was concerned about the checksum issue too,
though it seems to have petered out). I wasn't tracking whether there
were other issues that this doesn't address that were raised, though.

Joe

> Alia
>
>
> On Thu, Jan 23, 2014 at 6:56 PM, Joe Touch <touch@isi.edu
> <mailto:touch@isi.edu>> wrote:
>
>
>
>     On 1/23/2014 3:32 PM, Edward Crabbe wrote:
>
>         Joe, thanks for your response. Comments inline:
>
>
>              On 1/23/2014 1:27 PM, Edward Crabbe wrote:
>
>                  Part of the point of using UDP is to make use of lowest
>         common
>                  denominator forwarding hardware in introducing entropy to
>                  protocols that
>                  lack it ( this is particularly true of the GRE in UDP
>         use case also
>                  under discussion elsewhere).
>
>                  The tunnel is not the source of the traffic.  The
>         _source of the
>                  traffic_ is the source of the traffic.
>
>
>              To the Internet, the tunnel encapsulator is the source of
>         traffic.
>              Tracing the data back further than that is a mirage at best
>         - and
>              irrelevant.
>
>
>         The 'internet' cares about characteristics of reactivity to
>         congestion.
>            This is guaranteed by the /source of the traffic/ independent
>         of any
>         intermediate node.
>
>
>     Are you prepared to make that a requirement of this document, i.e.,
>     that the only MPLS traffic that can be UDP encapsulated is known to
>     react to congestion?
>
>     How exactly can you know that?
>
>
>              The tunnel head-end is responsible for the tunnel walking,
>         talking,
>              and quaking like a duck (host). When the tunnel head-end knows
>              something about the ultimate origin of the traffic -
>         whether real,
>         imagined, or from Asgard - then it has done its duty
>         (e.g., that
>              it's already congestion controlled).
>
>              But that head end is responsible, regardless of what it
>         knows or
>              doesn't. And when it doesn't know, the only way to be
>         responsible is
>              to put in its own reactivity.
>
>         This is not fact; it's actually precisely the principle we're
>         currently
>         arguing about.  ;)
>
>
>     Actually, it's a paraphrasing of Section 3.1.3 of RFC5405.
>
>     We can continue to debate it, but until it's been *changed* by a
>     revision, it remains BCP.
>
>
>         I would posit:
>
>         The tunnel doesn't have to know anything about congestion or
>         performance
>         characteristics because the originating application must.
>
>
>     That works only if you know that fact about the originating
>     application. However, there are plenty of applications whose traffic
>     goes over MPLS that isn't congestion reactive or bandwidth-limited.
>
>
>      > See GRE,
>
>         MPLS, many other tunnel types,
>
>
>     This isn't an issue for all tunnels until they enter the Internet...
>
>
>         including several existing within the
>         IETF that make use of an outer UDP header.
>
>
>     Which are all already supposed to follow the recommendations of
>     RFC5405. To the extent that they don't, they don't agree with that BCP.
>
>     I'm not saying such things never can or will exist, but I don't
>     think the IETF should be self-contradictory. We already agreed as a
>     group on such BCPs and other standards, and new standards-track docs
>     need to follow them.
>
>
>                  The originating application
>                  whose traffic is being tunneled should be responsible
>         for congestion
>                  control, or lack thereof.
>
>              Perhaps it should be, but that's an agreement between whomever
>              implements/deploys the tunnel headend and whomever provides the
>              originating traffic to them. The problem is that this isn't
>         true for
>              the typical use case for this kind of encapsulation.
>
>         How so?  As mentioned before, this is the same case as standard
>         GRE/MPLS
>         etc.
>
>
>     It's putting MPLS inside UDP. That's a different case, and the
>     reason RFC5405 applies.
>
>
>              I.e., if we were talking about MPLS traffic that already was
>              reactive, we wouldn't be claiming the need for additional
>              encapsulator mechanism. It's precisely because nothing is known
>              about the MPLS traffic that the encapsulator needs to act.
>
>         The MPLS traffic doesn't have to be reactive, it's the applications
>         being encapsulated / traversing a particular tunnel that are
>         responsible
>         for and aware of path and congestion characteristics. Because
>         the MPLS
>         head end knows nothing about the /end to end application 'session'/
>         characteristics it /shouldn't/ have anything to do with congestion
>         management.
>
>
>     OK, so what you're saying is that "traffic using this encapsulation
>     MUST be known to be congestion reactive". Put that in the doc and
>     we'll debate whether we believe it.
>
>     But right now you're basically saying that because you think it's
>     someone else's problem (the originating application), it isn't
>     yours. The difficulty with that logic is that you (the tunnel
>     headend) is responsible to ensure that this is true - either by
>     *knowing* that the originating traffic is congestion reactive, or by
>     putting in its own mechanism to ensure that this happens if the
>     originating application isn't.
>
>
>               > Are we advocating a return to intermediate
>
>                  congestion control (I like X.25 as much as the next guy,
>                  but...).  This
>                  is a very stark change of direction.
>
>                  I think mandating congestion control is not
>         technically sound from
>                  either a theoretical (violation of end to end
>         principle, stacking of
>                  congestion control algorithms leading to complex and
>         potentially
>                  suboptimal results) or economic perspective (as a very
>         large
>                  backbone,
>                  we've been doing just fine without intermediate congestion
>                  management
>                  thank you very much, and I have 0 desire to pay for a cost
>                  prohibitive,
>                  unnecessary feature in silicon.)
>
>              Write that up, and we'll see how it turns out in the IETF.
>         However,
>              right now, the IETF BCPs do require reactive congestion
>         management
>              of transport streams.
>
>         Which part?  The end-to-end principle, or the aversion to congestion
>         control stacking?  These have been implicit in all tunneling
>         protocols
>         produced by the IETF for the modern internet.
>
>
>     Sure, and that's reflected in RFC5405 already. However, please,
>     PLEASE appreciate that NOBODY here is asking you to put in
>     "congestion control stacking"; that happens when you run two
>     dynamic, reactive control algorithms using the same timescale on top
>     of each other.
>
>     Equally well-known in CC circles is that you CAN - and often
>     *should* - stack different kinds of mechanisms at different layers
>     with different timescales. E.g., that's why we have an AQM WG -
>     because even when all the traffic is TCP, that's not quite enough
>     inside the network. That's also why Lars was suggesting something
>     coarse on a longer timescale - a circuit breaker - rather than AIMD
>     on a RTT basis.
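[The coarse, long-timescale mechanism alluded to here can be sketched as a simple tunnel circuit breaker: measure loss over a long interval and disable the tunnel only after sustained excessive loss, rather than adjusting the sending rate every RTT as AIMD does. The class below is an illustrative sketch; the interval, loss threshold, and trip count are assumptions, not values from any specification:

```python
class TunnelCircuitBreaker:
    """Coarse safeguard for a non-congestion-controlled tunnel: trip
    (stop forwarding) only after loss stays excessive for several
    consecutive long measurement intervals."""

    def __init__(self, loss_threshold=0.10, trip_count=3):
        self.loss_threshold = loss_threshold  # fraction of packets lost
        self.trip_count = trip_count          # consecutive bad intervals
        self.consecutive_bad = 0
        self.tripped = False

    def report(self, sent: int, delivered: int) -> bool:
        """Feed one interval's counters; returns True while the tunnel
        may keep forwarding, False once the breaker has tripped."""
        if sent > 0:
            loss = 1.0 - delivered / sent
            if loss > self.loss_threshold:
                self.consecutive_bad += 1
            else:
                self.consecutive_bad = 0
        if self.consecutive_bad >= self.trip_count:
            self.tripped = True  # stays tripped until operator action
        return not self.tripped
```

Because it reacts over minutes rather than round trips, such a breaker does not stack a second reactive control loop on top of the encapsulated traffic's own.]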
>
>     Keep in mind as well that the E2E argument says that you can't get
>     an E2E service by composing the equivalent HBH one; it also says
>     that HBH mechanisms can be required for efficiency. That's what
>     we're talking about here - the efficiency impact of congestion, not
>     the overall correctness of E2E control.
>
>
>              If you don't want/like that, then either don't use transport
>              encapsulation, or change the BCPs.
>
>         These BCPs are defined for an originating /application/.
>
>
>     Yes, and I don't understand why you (and others) keep thinking it
>     matters that there are layers of things behind the tunnel head end.
>     It doesn't - unless you KNOW what those layers are, and can ensure
>     that they behave as you expect.
>
>
>         In this case
>         the UDP header is simply a shim header applied to existing
>         application
>         traffic.
>
>
>     It's not "simply a shim" - if that's the case, use IP and we're
>     done. No need for congestion control.
>
>     The reason congestion issues arise is because you're inserting a
>     header ****THAT YOU EXPECT PARTS OF THE INTERNET YOU TRAVERSE TO
>     REACT TO****.
>
>     If you put in a UDP-like header that nobody in the Internet would
>     interpret, this wouldn't be an issue.
>
>     But you simply cannot expect the Internet to treat you like
>     "application" traffic if you won't enforce acting like that traffic too.
>
>
>         The tunnel head does not introduce traffic independent of the
>         originating application.
>
>
>     The Internet ****neither knows nor cares****.
>
>     To the Internet, the head-end is the source. Whatever data the head
>     end puts inside the UDP packets *is application data* to the rest of
>     the Internet.
>
>     Again, if you are saying that you know so much about the originating
>     source that you know you don't need additional mechanism at the
>     headend, say so - but then live by that requirement.
>
>     If *any* MPLS traffic could show up at the headend, then it becomes
>     the headend's responsibility to do something.
>
>     ---
>
>     Consider the following case:
>
>              - video shows up inside the OS, destined for the network
>
>              - software X bundles that video and sends it to go out
>
>              - software Y puts that data into UDP packets to go
>              to the Internet
>
>     So what's the "application" here? To the Internet, it's software Y
>     -- the thing that puts the 'application' data into UDP packets. The
>     previous steps are irrelevant - just as irrelevant as the singer
>     your video camera is filming, as irrelevant as the sun that created
>     the light that is reflected off the singer to your camera.
>
>     If software Y knows so much about the steps that lead to its input
>     data that it knows it's congestion reactive, nothing more need be done.
>
>     If NOT (and that's the relevant corollary here), then it becomes
>     software Y's responsibility to put in some reactivity.
>
>     Joe
>
>