Re: [Spud] SPUD's open/close are unconvincing

Toerless Eckert <eckert@cisco.com> Thu, 09 April 2015 01:22 UTC

Date: Wed, 8 Apr 2015 18:22:29 -0700
From: Toerless Eckert <eckert@cisco.com>
To: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
Message-ID: <20150409012229.GG24286@cisco.com>
References: <87iod631nv.fsf@alice.fifthhorseman.net> <DM2PR0301MB06555C7D7F32A69214405D44A8FC0@DM2PR0301MB0655.namprd03.prod.outlook.com> <20150408193920.GD24286@cisco.com> <871tju2rdq.fsf@alice.fifthhorseman.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <871tju2rdq.fsf@alice.fifthhorseman.net>
User-Agent: Mutt/1.4.2.2i
Archived-At: <http://mailarchive.ietf.org/arch/msg/spud/MqOFBXnGc2Ukt5mN31plrF5no5k>
Cc: spud@ietf.org
Subject: Re: [Spud] SPUD's open/close are unconvincing
List-Id: Session Protocol Underneath Datagrams <spud.ietf.org>

On Wed, Apr 08, 2015 at 06:21:21PM -0400, Daniel Kahn Gillmor wrote:
> >    What is the best possible design we can do to make UDP flows
> >    be equal or better permissible across WELL BEHAVED FW and similar
> >    middleboxes ? Please define "WELL BEHAVED"as part of your answer.
> 
> As i wrote to Joe, i'm not convinced that this is an answerable question
> considering that no one has provided a technical argument yet for why
> the enterprise firewall operators can't already do what we're talking
> about here.

Sorry, i need to give a multi-tiered answer:

A) Lowest layer: what i said initially - mobility, load-splitting,
multiplexing.

B) Next layer: Yes, the FW admin could do for UDP what she does for TCP,
but in today's ecosystem she will not:

   UDP has
     I) bad karma, due to a history of bad UDP apps in the past
    II) more filtering on UDP than on equivalent TCP traffic, because
        there is so much business-critical TCP crap
   III) not a lot of business-critical apps riding on UDP
    IV) a historic architecture view that "doing reliable transport
        on top of UDP is a bad, undesirable workaround"

   Seeded by I, II-IV form a vicious cycle. 

C) A decades-long, ridiculously slow and cumbersome evolution
   of transport layer functionality. We should have done the bloody
   AQM work a decade or more ago.

   Reason IMHO is that the transport layer is in the kernel, resulting
   in a totally broken agility model for the transport layer
   compared to middleware/app-layer.

Altogether, this means to me that the goal of the SPUD exercise,
architecturally, is:

Redefine the Internet architecture to say:

 -> Transport Layer needs to be split up into sub-layers:

    1) application multiplexing layer - UDP (running in the OS), unchanged
    2) common, middlebox-friendly connection/packet signaling layer
    3) transport connection QoE layer
       (reliability (FEC, ARQ), rate-control, measurement, jitter control, ...)
       Aka: short term, just the existing Internet transport protocols
       (RTP, TCP, SCTP, ...), but if one were to design a new transport
       protocol, one would not redo what's already in 2).

    2) and 3) can run in app/middleware (per app) and/or in the kernel.

SPUD is the first attempt at 2).
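As a rough sketch of what sub-layer 2) might look like on the wire: a small, common header that a middlebox can parse without understanding the transport riding above it. Field sizes, command codes, and names here are invented for illustration, not the actual SPUD prototype format.

```python
import struct

# Hypothetical signaling-layer header: a tube ID plus a command byte.
# These values are illustrative assumptions, not the SPUD wire format.
CMD_DATA, CMD_OPEN, CMD_CLOSE = 0, 1, 2
HDR = struct.Struct("!QB")  # 64-bit tube ID + 1-byte command

def encode(tube_id, cmd, payload):
    """Prepend the common signaling sub-layer (2) to a transport payload (3)."""
    return HDR.pack(tube_id, cmd) + payload

def decode(datagram):
    """What a middlebox or receiver parses, whatever transport rides above."""
    tube_id, cmd = HDR.unpack_from(datagram)
    return tube_id, cmd, datagram[HDR.size:]

# Layer 1) - UDP - just carries this as opaque datagram payload.
pkt = encode(42, CMD_OPEN, b"hello")
assert decode(pkt) == (42, CMD_OPEN, b"hello")
```

The point of the sketch: the FW inspects one fixed header in every case, instead of one parser per reliable-transport-over-UDP.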

If we didn't do 2), but just 3) over 1), we would make the
chicken & egg problem of 1) even worse, because we would just be asking
FWs to inspect multiple reliable transports inside UDP instead
of (with 2)) giving them one common single layer to inspect. And
of course we also would not solve C), because there is a bunch
of things like mobility, load-splitting, per-packet marking and so
on that we know we could get from the network to benefit apps, but
without moving it into user-land, it's not going to go anywhere fast.

> > c) If i read it correctly, SPUD can 
> >
> >    a) set up potentially multiple independent pipes across the same 5-tuple.
> >    b) A single pipe can go across different 5-tuples (multipath/mobility).
> >    
> >    These two features should explain why tracking 5-tuple as you
> >    suggest alone would not be sufficient.
> >
> >    Btw: I am a fan of b), i don't think a) should be encouraged given
> >    experience with RTCweb. In that sense, the total number of
> >    pipes would even be equal or smaller than the number of 5 tuples.
> 
> This is an interesting point, thanks.  I'd be curious to hear more
> details about the RTCweb experience that have convinced you that (a)
> should not be encouraged.

Yeah, let me reword, first time around it wasn't precise:

If an RTCweb session requires 10 5-tuple flows, it may take
10 times as long to set up, and you may end up with 1/10th the maximum
number of parallel sessions due to the number of NAT/FW 5-tuple pinholes
you need to build.  Because we want NAT/FW to be SPUD-tube aware,
SPUD SHOULD NOT go out and claim:

"you can replace 10 5-tuple UDP flows with 1 UDP flow with 10 SPUD tubes,
 and your NAT/FW pinhole problem goes away". 

Aka: An RTCweb app worried about pinhole setup should opt for one
tube, but that in itself would already give it a lot of other benefits:
mobility, load-splitting, per-packet marking, ...
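To make the multiplexing side of this concrete, here is a toy sketch (all names invented) of several tubes sharing one UDP 5-tuple - exactly the pattern that should not be oversold as a pinhole cure, since a tube-aware NAT/FW would still track per-tube state:

```python
# Toy demultiplexer: one UDP flow (one NAT/FW pinhole), many tubes.
# Tube IDs and handler wiring are illustrative assumptions.
handlers = {}
received = []

def register(tube_id, handler):
    handlers[tube_id] = handler

def dispatch(tube_id, payload):
    # Every registered tube shares the single 5-tuple; a non-tube-aware
    # NAT/FW sees just one flow here.
    handlers[tube_id](payload)

register(1, lambda p: received.append(("audio", p)))
register(2, lambda p: received.append(("video", p)))
dispatch(1, b"a")
dispatch(2, b"v")
```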

Btw: IMHO performance of NAT/FW pinhole setup is not a limiting factor
if you just buy the right NAT/FW product, but by now i don't
think anymore that all packets of a transport-level session should
have the same QoS.

SPUD can solve this as well with per-packet markings, and
once you have that, the only reason why you would want multiple
tubes for an app-session IMHO is if you explicitly want to give 
FWs the ability to dissect your traffic (e.g., drop the video portion
but permit audio).
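A middlebox policy keyed on such a per-packet marking could look roughly like this (marking values and policy table are invented for illustration):

```python
# Hypothetical per-packet marking: the FW can dissect one tube's traffic
# without parsing the transport protocol riding above it.
POLICY = {"audio": "permit", "video": "drop"}  # illustrative policy

def filter_packet(marking, payload):
    """Return the payload if permitted, None if the FW drops it."""
    if POLICY.get(marking, "permit") == "permit":
        return payload
    return None

assert filter_packet("audio", b"pcm") == b"pcm"
assert filter_packet("video", b"h264") is None
```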

> The trouble with (b), of course, is that in the multipath or mobility
> case, the tube will continue its flow over different network segments,
> which violates the goals people have described for open and close.
> 
> As i move to a new path, i'm deliberately not sending a Close message
> (because i want to keep the tube open, right?) -- so what do the
> middleboxes do?

I think the first order of business was to be no worse than TCP,
without yet trying to finalize what the most widely agreeable
improvements beyond that are.

> And after i've moved to a new network and want to continue the same
> tube, surely i won't indicate that the flow is opening (it's already
> open).  As a result, the equipment on the new path won't see the Open
> message -- should they discard it now?
> 
> These arrangements seem in conflict to me.

I can think of ways to improve the signaling. The main
issue is the IETF's history of on-path signaling and the resulting
resistance in the IETF against touching that subject.

But consider the opportunity that redundant or clustered
NAT/FWs could share SPUD tube-IDs amongst themselves to support
the mobility case. No new standardized signaling required.
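A minimal sketch of that idea, assuming an invented cluster design: two FW nodes share one tube table, so a tube that re-appears on a new 5-tuple after mobility is still recognized as open, with no new on-path signaling.

```python
# Sketch: a redundant NAT/FW pair sharing tube state. All class and
# method names here are invented for illustration.
class FwNode:
    def __init__(self, shared_tubes):
        self.tubes = shared_tubes  # table shared across the cluster

    def on_open(self, tube_id, five_tuple):
        # Record the pinhole when the tube's Open is seen on any node.
        self.tubes[tube_id] = five_tuple

    def on_packet(self, tube_id, five_tuple):
        if tube_id not in self.tubes:
            return "drop"                 # no Open ever seen for this tube
        self.tubes[tube_id] = five_tuple  # rebind the tube to the new path
        return "permit"

shared = {}
fw_a, fw_b = FwNode(shared), FwNode(shared)
fw_a.on_open(7, ("10.0.0.1", 5000, "192.0.2.1", 443))
# After mobility, the same tube shows up at the other node on a new 5-tuple:
assert fw_b.on_packet(7, ("10.0.0.2", 6000, "192.0.2.1", 443)) == "permit"
```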

Cheers
    Toerless

> Regards,
> 
>         --dkg
> 
> > P.S.: Cool domain name. Is that indicative of the role you're planning
> > to take in the discussion ? ;-))) (sorry, i can never resist these questions).
> 
> :)