Re: [Spud] SPUD's open/close are unconvincing

Tom Herbert <tom@herbertland.com> Thu, 09 April 2015 03:46 UTC

In-Reply-To: <20150409012229.GG24286@cisco.com>
References: <87iod631nv.fsf@alice.fifthhorseman.net> <DM2PR0301MB06555C7D7F32A69214405D44A8FC0@DM2PR0301MB0655.namprd03.prod.outlook.com> <20150408193920.GD24286@cisco.com> <871tju2rdq.fsf@alice.fifthhorseman.net> <20150409012229.GG24286@cisco.com>
Date: Wed, 8 Apr 2015 20:46:24 -0700
Message-ID: <CALx6S35NH9yPZxeARTic10b0jFEi8aC4Gmt79cxuzF_VpYYqLA@mail.gmail.com>
From: Tom Herbert <tom@herbertland.com>
To: Toerless Eckert <eckert@cisco.com>
Content-Type: text/plain; charset=UTF-8
Archived-At: <http://mailarchive.ietf.org/arch/msg/spud/4jytEJRXpB_zd3eE_jYBMhv44KY>
Cc: Daniel Kahn Gillmor <dkg@fifthhorseman.net>, spud@ietf.org
Subject: Re: [Spud] SPUD's open/close are unconvincing

On Wed, Apr 8, 2015 at 6:22 PM, Toerless Eckert <eckert@cisco.com> wrote:
> On Wed, Apr 08, 2015 at 06:21:21PM -0400, Daniel Kahn Gillmor wrote:
>> >    What is the best possible design we can do to make UDP flows
>> >    be equal or better permissible across WELL BEHAVED FW and similar
>> >    middleboxes? Please define "WELL BEHAVED" as part of your answer.
>>
>> As I wrote to Joe, I'm not convinced that this is an answerable question,
>> considering that no one has provided a technical argument yet for why
>> the enterprise firewall operators can't already do what we're talking
>> about here.
>
> Sorry, I need to give a multi-tiered answer:
>
> A) Lowest layer: what I said initially - mobility, load-splitting,
> multiplexing.
>
> B) Next layer: Yes, the FW admin could do for UDP what she does for TCP,
> but in today's ecosystem she will not:
>
>    UDP has
>      I) bad karma due to a history of bad UDP apps
>     II) more filtering than equivalent TCP traffic, because
>         there is so much business-critical TCP crap
>    III) not a lot of business-critical apps riding on it
>     IV) a historic architecture view that "doing reliable transport
>         on top of UDP is a bad, undesirable workaround"
>
>    Seeded by I, II-IV form a vicious cycle.
>
> C) A decades-long, ridiculously slow and cumbersome evolution
>    of transport-layer functionality. We should have done the bloody
>    AQM work a decade or more ago.
>
>    The reason, IMHO, is that the transport layer is in the kernel, resulting
>    in a totally broken agility model for the transport layer
>    compared to middleware/app-layer.
>
> Altogether, this means to me that the goal of the SPUD exercise,
> architecturally, is:
>
> Redefine the Internet architecture to say:
>
>  -> The transport layer needs to be split up into sub-layers:
>
>     1) application multiplexing layer - UDP (running in the OS), unchanged
>     2) common, middlebox-friendly connection/packet signaling layer
>     3) transport connection QoE layer
>        (reliability (FEC, ARQ), rate control, measurement, jitter control, ...)
>        Aka: short term, just the existing Internet transport protocols
>        (RTP, TCP, SCTP, ...), but if one were to design a new transport
>        protocol, one would not redo what's already in 2).
>
>     2) and 3) can run in app/middleware (per app) and/or in the kernel.
>
> SPUD is the first attempt at 2).
>
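To make that split concrete, here is a rough sketch of what a datagram might carry, with 1) provided by the OS socket and 2) as a small per-packet header in front of the 3) payload. The field layout below is made up for illustration, not taken from any draft:

```python
import struct

# Hypothetical on-the-wire layout for sub-layer 2): a magic number so
# middleboxes can recognize the signaling layer, a 64-bit tube ID, and an
# open/close/running command byte.  All values here are placeholders.
MAGIC = 0xD80000D8
CMD_RUNNING, CMD_OPEN, CMD_CLOSE = 0x0, 0x1, 0x2

def signaling_header(tube_id, cmd):
    """Pack the sub-layer-2 header that precedes the transport payload."""
    return struct.pack("!IQB", MAGIC, tube_id, cmd)

def encapsulate(tube_id, cmd, transport_payload):
    # Layer 1) (UDP) is supplied by the OS; we only build what goes inside
    # the UDP datagram: layer 2) header + layer 3) transport bytes.
    return signaling_header(tube_id, cmd) + transport_payload

pkt = encapsulate(tube_id=42, cmd=CMD_OPEN, transport_payload=b"SYN-equivalent")
```

The point of the common header is that a firewall can parse 2) once, for every transport stacked above it, instead of learning each transport separately.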
> If we didn't do 2), but just 3) over 1), we would make the
> chicken-and-egg problem of 1) even worse, because we would just be asking
> FWs to inspect multiple reliable transports inside UDP instead of
> (with 2)) giving them one common single layer to inspect. And
> of course we also would not solve C), because there is just a bunch
> of things like mobility, load-splitting, per-packet marking and so
> on that we know we could get from the network to benefit apps, but
> without moving it into user-land, it's not going to go anywhere fast.
>
I think the kernel/user-land argument is a red herring. The problem is
that middleboxes routinely participate in transport layer protocols,
a role that was never architected -- transport layer protocols are
inherently end-to-end protocols.  It's relatively easy to change the
client and server sides to accommodate new transport functionality, but
pretty much impossible for us to change all the possible middleboxes in
a path in a timely fashion. Just one middlebox in the path that decides
to drop our packet because it doesn't understand our new option, or
doesn't like our new flags, can spoil everything -- it is really
difficult to work around interoperability failures like this.  So, yes,
the net effect of this is that we have become very conservative with
transport layer changes, and when we do make changes at the transport
layer we often have to masquerade them in something that is considered
generally palatable to middleboxes. This is why we intend to run
transport protocols over UDP in the first place, and it has even
motivated brazen attempts to overload the TCP protocol number with
other things (e.g., STT). SPUD is a good opportunity to generalize and
standardize middlebox/transport protocol interactions, but if its
benefits are dependent on middleboxes being updated, that is not going
to go anywhere fast either!
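To illustrate the masquerade pattern: a new transport's segment rides as opaque UDP payload, so middleboxes on the path only ever see a plain UDP datagram they already know how to handle. The segment header below (32-bit sequence number, 16-bit flags) is made up for the sketch:

```python
import socket
import struct

def new_transport_segment(seq, flags, data):
    # Hypothetical segment format of some new reliable transport.
    return struct.pack("!IH", seq, flags) + data

def send_over_udp(dst, seq, flags, data):
    # The OS and every middlebox see ordinary UDP; the new transport's
    # header is just payload bytes to them.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.sendto(new_transport_segment(seq, flags, data), dst)
    finally:
        s.close()
```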

>> > c) If i read it correctly, SPUD can
>> >
>> >    a) set up potentially multiple independent pipes across the same 5-tuple.
>> >    b) A single pipe can go across different 5-tuples (multipath/mobility).
>> >
>> >    These two features should explain why tracking 5-tuple as you
>> >    suggest alone would not be sufficient.
>> >
>> >    Btw: I am a fan of b); I don't think a) should be encouraged, given
>> >    the experience with RTCweb. In that sense, the total number of
>> >    pipes would even be equal to or smaller than the number of 5-tuples.
>>
>> This is an interesting point, thanks.  I'd be curious to hear more
>> details about the RTCweb experience that have convinced you that (a)
>> should not be encouraged.
>
> Yeah, let me reword; the first time around it wasn't precise:
>
> If an RTCweb session requires 10 5-tuple flows, it may take
> 10 times as long to set up, and you may end up with 1/10th the maximum
> number of parallel sessions due to the number of NAT/FW 5-tuple pinholes
> you need to build.  Because we want NAT/FW to be SPUD tube aware,
> SPUD SHOULD NOT go out and claim:
>
> "you can replace 10 5-tuple UDP flows with 1 UDP flow with 10 SPUD tubes,
>  and your NAT/FW pinhole problem goes away".
>
> Aka: an RTCweb app worried about pinhole setup should opt for one
> tube, but that in itself would already give it a lot of other benefits:
> mobility, load-splitting, per-packet marking, ...
>
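For reference, the multiplexing in a) boils down to something like this on the receive side -- one 5-tuple, one NAT/FW pinhole, many tubes fanned out by tube ID. The 8-byte tube-ID prefix below is a hypothetical layout:

```python
import struct

tubes = {}  # tube_id -> list of payloads (stand-in for per-tube state)

def deliver(datagram):
    # All datagrams arrive on the same UDP socket (one 5-tuple, one
    # pinhole); the endpoint demultiplexes by tube ID, not by port.
    tube_id, = struct.unpack_from("!Q", datagram, 0)
    tubes.setdefault(tube_id, []).append(datagram[8:])

# Ten tubes share a single 5-tuple / NAT pinhole:
for tid in range(10):
    deliver(struct.pack("!Q", tid) + b"payload")
```

This is exactly what collapses 10 pinholes into 1 -- and also why a pinhole-aware FW loses per-flow visibility unless it understands the tube layer.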
> Btw: IMHO the performance of NAT/FW pinhole setup is not a limiting factor
> if you just buy the right NAT/FW product, but by now I don't
> think anymore that all packets of a transport-level session should
> have the same QoS.
>
> SPUD can solve this as well with per-packet markings, and
> once you have that, the only reason why you would want multiple
> tubes for an app session, IMHO, is if you explicitly want to give
> FWs the ability to dissect your traffic (e.g., drop the video portion
> but permit audio).
>
>> The trouble with (b), of course, is that in the multipath or mobility
>> case, the tube will continue its flow over different network segments,
>> which violates the goals people have described for open and close.
>>
>> As I move to a new path, I'm deliberately not sending a Close message
>> (because I want to keep the tube open, right?) -- so what do the
>> middleboxes do?
>
> I think the first order of business was to be no worse than TCP,
> without yet trying to finalize what the most widely agreeable
> improvements beyond that are.
>
>> And after I've moved to a new network and want to continue the same
>> tube, surely I won't indicate that the flow is opening (it's already
>> open).  As a result, the equipment on the new path won't see the Open
>> message -- should they discard it now?
>>
>> These arrangements seem in conflict to me.
>
> I can think of ways to improve the signaling. The main
> issue is the IETF's history of on-path signaling and the resulting
> resistance in the IETF against touching that subject.
>
> But consider the opportunity that a redundant pair or cluster of
> NAT/FWs will share SPUD tube IDs amongst them to support
> the mobility case. No new standardized signaling required.
>
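That opportunity can be sketched as a cluster-wide table keyed by tube ID rather than by 5-tuple, so a tube that reappears on a new 5-tuple is still admitted without new signaling. All names and structures below are hypothetical:

```python
class FwCluster:
    """Hypothetical NAT/FW cluster sharing tube state amongst its members."""

    def __init__(self):
        # tube_id -> last-seen 5-tuple; replicated cluster-wide.
        self.tubes = {}

    def admit(self, tube_id, five_tuple, is_open):
        if is_open:
            self.tubes[tube_id] = five_tuple   # Open seen: create shared state
            return True
        if tube_id in self.tubes:
            self.tubes[tube_id] = five_tuple   # known tube: rebind 5-tuple (mobility)
            return True
        return False                           # unknown tube, no Open: drop

cluster = FwCluster()
cluster.admit(42, ("10.0.0.1", 5000, "192.0.2.1", 443), is_open=True)
# The endpoint moves to a new network; same tube, new 5-tuple, no new Open:
moved = cluster.admit(42, ("172.16.0.9", 6000, "192.0.2.1", 443), is_open=False)
```

The key design choice is that the pinhole's identity is the tube ID, not the 5-tuple, which is what lets the flow survive the path change dkg asks about.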
> Cheers
>     Toerless
>
>> Regards,
>>
>>         --dkg
>>
>> > P.S.: Cool domain name. Is that indicative of the role you're planning
>> > to take in the discussion? ;-))) (sorry, I can never resist these questions).
>>
>> :)
>
> _______________________________________________
> Spud mailing list
> Spud@ietf.org
> https://www.ietf.org/mailman/listinfo/spud