Re: [iccrg] [Stackevo-discuss] New Version Notification for draft-welzl-irtf-iccrg-tcp-in-udp-00.txt

Michael Welzl <michawe@ifi.uio.no> Fri, 25 March 2016 11:48 UTC

From: Michael Welzl <michawe@ifi.uio.no>
In-Reply-To: <20160325105448.GM88304@verdi>
Date: Fri, 25 Mar 2016 12:48:23 +0100
Message-Id: <294CAD03-ABEB-4EC7-84F8-FD2C668518C2@ifi.uio.no>
To: John Leslie <john@jlc.net>
Archived-At: <http://mailarchive.ietf.org/arch/msg/iccrg/lx0SeeUzgNYRsngCJFDT-gKcPSk>
Cc: iccrg@irtf.org

> On 25 Mar 2016, at 11:54, John Leslie <john@jlc.net> wrote:
> 
> Michael Welzl <michawe@ifi.uio.no> wrote:
>> ...
>> so then how do you solve the 2-path problem I explained above? ...
>> Consider an empty network. Path 1 has capacity X. Path 2 has capacity
>> 3*X. Some device in the network decides to round-robin schedule packets
>> from a single TCP connection across paths 1 and 2.
>> Congestion will first appear on path 1. TCP will react, halving its
>> cwnd. Most of the capacity of path 2 will always remain unused that way.
>> 
>> How do you solve this with e2e congestion control?
> 
>   Obviously, as you design the example, you don't.
> 
>   Neither the e2e sender nor the e2e receiver has enough knowledge if
> some device along the path decides to switch paths without notice.
> 
>   (Fortunately, we don't have to discuss the case where the receiver
> gets notice of which path was used but the sender must guess which
> path will be used!)
> 
>> ... 
>> The only solution I can see to my problem above is not to schedule
>> traffic as I described it. Obviously the scheduler should send more
>> packets on path 2 than on path 1.
> 
>   Whadaya mean, "the" scheduler? I see at least two of them here!

Sorry, I meant the in-network scheduler that decides which of the two paths packets will take.
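To make the arithmetic concrete, here's a toy AIMD sketch of that example (the capacities are made-up numbers: path 1 has X = 10 pkts/RTT, path 2 has 3X = 30; the 50/50 split models the round-robin scheduler):

```python
# Toy AIMD model of the two-path round-robin example above.
# Hypothetical numbers: path 1 capacity X = 10 pkts/RTT, path 2
# capacity 3X = 30 pkts/RTT. One TCP loop; the in-network scheduler
# sends half of each cwnd's worth of packets down each path.

PATH1_CAP = 10   # X
PATH2_CAP = 30   # 3X

def simulate(rtts=1000):
    cwnd, delivered = 1.0, 0.0
    for _ in range(rtts):
        per_path = cwnd / 2                   # round-robin split
        delivered += min(per_path, PATH1_CAP) + min(per_path, PATH2_CAP)
        if per_path > PATH1_CAP:              # path 1 congests first...
            cwnd /= 2                         # ...and the single cwnd halves
        else:
            cwnd += 1                         # additive increase per RTT
    return delivered / rtts                   # avg throughput, pkts/RTT

util = simulate() / (PATH1_CAP + PATH2_CAP)
print(f"utilization: {util:.0%}")             # well under half of capacity
```

The cwnd sawtooths between roughly 10 and 20 because path 1's drops govern the whole loop, so path 2 never carries more than about a third of its capacity.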


>> Doing this correctly requires knowing the capacities of these links.
> 
>   Indeed. If the round-robin scheduler chooses to ignore the difference
> in capacities, the TCP scheduler can't expect to overcome that.
> 
>   (There are operators which do this sort of thing: usually they manually
> adjust things so the capacities are "close enough" and make _some_ effort
> to avoid this sub-network becoming a bottleneck.)

Interesting!
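For comparison, the same toy model with a capacity-proportional split (1:3, assuming the scheduler somehow knows the capacities) lets both paths hit their bottlenecks together:

```python
# Same toy AIMD model as above, but the in-network scheduler splits
# traffic in proportion to the (assumed known) capacities:
# 1/4 on path 1, 3/4 on path 2. All numbers remain hypothetical.

PATH1_CAP = 10   # X
PATH2_CAP = 30   # 3X
W1 = PATH1_CAP / (PATH1_CAP + PATH2_CAP)      # 0.25

def simulate(rtts=1000):
    cwnd, delivered = 1.0, 0.0
    for _ in range(rtts):
        p1, p2 = cwnd * W1, cwnd * (1 - W1)   # capacity-proportional split
        delivered += min(p1, PATH1_CAP) + min(p2, PATH2_CAP)
        if p1 > PATH1_CAP or p2 > PATH2_CAP:  # both paths now fill together
            cwnd /= 2
        else:
            cwnd += 1
    return delivered / rtts

util = simulate() / (PATH1_CAP + PATH2_CAP)
print(f"utilization: {util:.0%}")             # close to the usual AIMD ~75%
```

With the weighted split the one sawtooth spans both bottlenecks at once, which is exactly the "requires knowing the capacities" condition above.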


>> Considering a more dynamic network situation, it requires knowledge
>> about where the bottleneck currently is, and how much capacity is
>> available per link. This is a control loop.
> 
>   We have been spoiled by bottlenecks being mostly stable. They won't
> always be.
> 
>   The question of where a bottleneck "currently is" isn't really knowable.
> They tend (in 2015) to be stable enough that it _seems_ knowable; but
> even "currently" doesn't really mean what we think it does.
> 
>   Operators find that certain "bottlenecks" can be quickly fixed by
> shifting/adding capacity, while others resist that solution. Consequently,
> they play _that_ whack-a-mole game; and this tends to drive "current"
> bottlenecks close to the end-users. 
> 
>> If you put such a control loop in the network, you may want to give it
>> a different name than me, but this is what I meant when I said
>> "this would require putting congestion controls inside the network".
> 
>   I suspect that "a different name" would be very helpful...
> 
>   There is a lot of history of "congestion control" in networks, mostly
> based upon admission-control. This seems to me less than compatible with
> the TCP model of congestion control.

…though these were just research musings anyway. I’m not at all proposing to design that in-network control loop with this draft.


>> As per my example above, this would mean that you'd need something
>> quite similar to congestion control inside the network, or the available
>> capacity for packets on all these paths must be equal, all the way to
>> the receiver - else you'll end up underutilizing your network.
> 
>   Operators _don't_ have the same horror about "underutilizing" that
> you do.
> 
>   It turns out to be very practical to drive the bottlenecks close to
> the endpoints; and aggregation tends to ameliorate under-utilization…

Sure, that all makes sense to me - but you're talking about operators dealing with today's system. I don't think we want to design a congestion control on the assumption that underutilization won't matter anyway?

Cheers,
Michael