Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

jnc@mercury.lcs.mit.edu (Noel Chiappa) Wed, 22 January 2014 17:29 UTC

To: ietf@ietf.org, mpls@ietf.org
Message-Id: <20140122172930.3D31A18C13B@mercury.lcs.mit.edu>
Date: Wed, 22 Jan 2014 12:29:30 -0500
From: jnc@mercury.lcs.mit.edu
Cc: jnc@mercury.lcs.mit.edu
Subject: Re: [mpls] Last Call: <draft-ietf-mpls-in-udp-04.txt> (Encapsulating MPLS in UDP) to Proposed Standard

    > From: "Eggert, Lars" <lars@netapp.com>

    > I would like the document to specify at the very least a circuit
    > breaker mechanism, that stops the tunneled traffic if severe packet
    > loss is detected along the path.
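
For concreteness, here is a rough sketch, in Python, of the sort of
per-tunnel circuit breaker being asked for; the loss threshold, the
measurement window, and where the loss feedback comes from are all
assumptions of mine, not anything the draft specifies:

    import time

    class TunnelCircuitBreaker:
        # Assumed numbers: trip if more than 10% loss is seen over a
        # 30-second window; stay tripped for 5 minutes before trying
        # the tunnel again.
        LOSS_THRESHOLD = 0.10
        WINDOW = 30.0
        HOLD_DOWN = 300.0

        def __init__(self):
            self.sent = 0
            self.lost = 0
            self.window_start = time.monotonic()
            self.tripped_until = 0.0

        def record(self, sent, lost):
            # 'lost' has to come from some feedback channel between the
            # tunnel endpoints (e.g. a periodic counter exchange).
            self.sent += sent
            self.lost += lost
            now = time.monotonic()
            if now - self.window_start >= self.WINDOW:
                if self.sent and self.lost / self.sent > self.LOSS_THRESHOLD:
                    self.tripped_until = now + self.HOLD_DOWN
                self.sent = self.lost = 0
                self.window_start = now

        def may_forward(self):
            # While tripped, the encapsulator stops feeding the tunnel.
            return time.monotonic() >= self.tripped_until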

I think people are looking at this from the wrong perspective, focusing in on
UDP and what its specs say, and not on the larger engineering picture.

Envision the following 4 (or more) scenarios for one Border Tunneling Router
(BTR), BTR A, to send packets to another BTR, BTR B, on the path from ultimate
source S (somewhere before BTR A) to destination D (somewhere after BTR B); a
sketch of case 4, the UDP encapsulation, follows the list.

- Plain IP
- Some existing encapsulation like GRE
- A new, custom encapsulation
- Encapsulation using UDP
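
To make case 4 concrete, here is roughly what the encapsulation in the
draft amounts to, sketched in Python; the destination port value, the
source-port range, and the particular hash feeding the entropy value
are placeholders of mine, not something to take from this note:

    import hashlib
    import struct

    MPLS_IN_UDP_PORT = 6635    # placeholder; use whatever IANA assigns

    def entropy_source_port(inner_packet):
        # Hash whatever inner headers the encapsulator can see into a
        # 16-bit value, so that transit ECMP/LAG hashing on the outer
        # UDP header spreads distinct inner flows across paths.
        digest = hashlib.sha1(inner_packet[:40]).digest()
        port = struct.unpack("!H", digest[:2])[0]
        return port | 0xC000   # keep it in the dynamic range (assumption)

    def mpls_in_udp(label, ttl, inner_packet):
        # One MPLS label stack entry: 20-bit label, 3-bit TC, S bit, TTL.
        lse = struct.pack("!I", (label << 12) | (1 << 8) | ttl)
        sport = entropy_source_port(inner_packet)
        length = 8 + len(lse) + len(inner_packet)
        # UDP header: source port, destination port, length, checksum
        # (zero checksum shown, i.e. the IPv4 case); the outer IP header
        # is left to the sending stack.
        udp = struct.pack("!HHHH", sport, MPLS_IN_UDP_PORT, length, 0)
        return udp + lse + inner_packet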

What you seem to be claiming is that in case 4 we need to have congestion
detection and response at the intermediate forwarding node BTR A - but it
would not be required in cases 1-3? This makes no sense.

Even better, suppose that BTR A implements _both_ one of the first three,
_and_ UDP encapsulation. If its response to UDP congestion on the path to BTR
B is to... switch to a _different_ encapsulation for traffic to that
intermediate forwarding node, one for which it's not required to detect and
respond to congestion, does that really help?

Similarly, suppose the people doing tunnels ditched UDP in favor of some
other encapsulation - assuming they could find one that gets through as many
filters as UDP does, has the same load-spreading properties that UDP does,
etc, etc (or maybe not, if giving those up is the price they have to pay to
be free of the grief they are getting for using UDP). Would that do anything
at all about any potential congestion from their traffic? No, it would still
be there, obviously.
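
As an aside on that load-spreading point: a transit router's ECMP hash
typically keys on the outer five-tuple, so a GRE tunnel's traffic all
lands on one path, while the per-flow entropy in the UDP source port
spreads the same tunnel's traffic out. A toy illustration in Python
(the hash, addresses and port values are made up):

    import hashlib
    import struct

    def ecmp_path(src_ip, dst_ip, proto, sport, dport, n_paths):
        # A stand-in for a transit router's five-tuple hash; real
        # routers use their own (vendor-specific) hash functions.
        key = "%s|%s|%d|%d|%d" % (src_ip, dst_ip, proto, sport, dport)
        digest = hashlib.md5(key.encode()).digest()
        return struct.unpack("!I", digest[:4])[0] % n_paths

    # GRE (protocol 47) has no ports the router can hash on, so every
    # flow between the same two BTRs maps to the same path.
    print(ecmp_path("192.0.2.1", "198.51.100.1", 47, 0, 0, 8))

    # MPLS-in-UDP: a different entropy source port per inner flow
    # spreads the tunnel's traffic across the parallel paths.
    for sport in (49153, 52007, 61311):
        print(ecmp_path("192.0.2.1", "198.51.100.1", 17, sport, 6635, 8))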


Look, the current architectural model of the Internet for dealing with
congestion is that the _application endpoints_ have to notice it, and slow
down. Intermediate forwarding nodes don't have any particular responsibility
other than to drop packets if they have too many.
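
Crudely, that division of labour looks like this (toy Python, purely
illustrative - none of the numbers mean anything):

    from collections import deque

    class Router:
        # An intermediate forwarding node: its only congestion duty is
        # to drop packets when its queue is full.
        def __init__(self, depth=64):
            self.queue = deque()
            self.depth = depth

        def enqueue(self, pkt):
            if len(self.queue) >= self.depth:
                return False           # tail drop; nothing more is owed
            self.queue.append(pkt)
            return True

    class Endpoint:
        # The application endpoint notices loss and slows down
        # (a toy additive-increase / multiplicative-decrease window).
        def __init__(self):
            self.cwnd = 10.0

        def on_ack(self):
            self.cwnd += 1.0 / self.cwnd

        def on_loss(self):
            self.cwnd = max(1.0, self.cwnd / 2.0)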

You are quite right that if we take some application that doesn't detect and
respond to congestion (perhaps because it was written for a local environment,
and some bright spark is now tunnelling its L2 protocol over the Internet),
that can cause problems - but that's because we are violating the Internet's
architectural assumption about how, where, and by whom congestion control is
done.

I don't have any particularly brilliant suggestions on how to respond to
situations in which applications don't detect and respond to congestion.
Architecturally, if we are to keep to the existing congestion control scheme
(endpoints are responsible), the responsibility has to go back to the ultimate
source of the traffic somehow...

But saying that _intermediate forwarding nodes_ have to detect down-stream
congestion, and respond, represents a fundamental change to the Internet's
architecture for congestion control.

	Noel