Re: [tcpm] Why is Cubic said to be RTT-fair?

Tom Sanders <toms.sanders@gmail.com> Sat, 23 May 2015 01:16 UTC

Date: Sat, 23 May 2015 06:46:32 +0530
From: Tom Sanders <toms.sanders@gmail.com>
To: Mirja Kühlewind <mirja.kuehlewind@tik.ee.ethz.ch>
Cc: "tcpm@ietf.org" <tcpm@ietf.org>
Subject: Re: [tcpm] Why is Cubic said to be RTT-fair?

Hi Mirja,

>
>
> the text below only says that if you have two Cubic flows with different
> base RTTs competing at the same bottleneck, they'll still converge to share
> the link equally. It does not mean that if you change the RTT strongly
> during one connection, Cubic is able to catch up fast.
>
> In fact this is one weakness of Cubic that I have observed. If there are
> changes in the base delay/RTT or strong increases in the available capacity,
> Cubic needs a rather long time to catch up. This is because Cubic has a
> 'target value' to which it increases its sending rate/window quickly, but
> if the capacity cap is not somewhere close to this value (as expected),
> it will probe very conservatively around this value, and it takes a rather
> long time until it gets back into a phase where it increases its congestion
> window fast.
>

Then which congestion control algorithm fares better there? I thought Cubic
was suited for long, fat pipes -- paths with a large bandwidth-delay product.


>
> So what do you mean by 'I immediately saw that the TCP performance went
> down'? How long did you run your test? And over which period of time did
> the performance go down?
>

I did a simple test.

I connected two machines on my 1 Gbps LAN, ran a TCP data-transfer test, and
got around ~900 Mbps. I then added a delay using qdisc on my outgoing
Ethernet interface to simulate a longer RTT: where my ping was earlier around
0.255 ms, it became 120 ms. I repeated the test and saw that I could only
send about 200 Mbps; I had to use multiple TCP flows to fill up my 1 Gbps
link with Cubic. I ran this experiment for about 2 minutes, and Cubic was not
able to catch up. I don't have long-standing flows and only need to send
bursts of data, hence I did not leave it running for very long.
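
For reference, the delay was added with a single netem rule along the lines
of "tc qdisc add dev eth0 root netem delay 120ms", with eth0 standing in for
my actual interface. A quick back-of-the-envelope sketch (the numbers are
just the ones from the test above) of the window a single flow needs at that
RTT:

    # Bandwidth-delay product for a 1 Gbps link at 120 ms RTT.
    # Illustrative only; the values are the ones from my test above.
    link_bps = 1e9   # 1 Gbps LAN
    rtt_s = 0.120    # emulated 120 ms RTT
    bdp_bytes = link_bps / 8 * rtt_s
    print("%.1f MB in flight needed to fill the link" % (bdp_bytes / 1e6))
    # -> 15.0 MB

So a single flow needs roughly 15 MB of window in flight to fill the link,
which is also why the window/buffer sizing Rick mentions below (tcp_rmem and
tcp_wmem) matters at this RTT.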

I was under the impression, after having read the original Cubic paper, that
Cubic's window growth rate is independent of RTT, and hence I wasn't
expecting any difference in throughput.
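
For concreteness, my reading of the paper's growth function is the sketch
below (C = 0.4 and beta = 0.2 are the constants from the paper; this is only
an illustration of the curve, not the kernel implementation):

    # Sketch of CUBIC's window growth from the Rhee/Xu paper:
    #   W(t) = C * (t - K)^3 + W_max,  with  K = cbrt(W_max * beta / C),
    # where t is the wall-clock time since the last loss. Two flows with
    # different RTTs trace the same curve -- that is the sense in which the
    # growth is "independent of RTT".
    def cubic_window(t, w_max, C=0.4, beta=0.2):
        K = (w_max * beta / C) ** (1.0 / 3.0)  # seconds to climb back to W_max
        return C * (t - K) ** 3 + w_max

    # Near t = K the curve is almost flat -- the conservative probing phase
    # Mirja describes -- and growth only becomes fast again well past W_max.
    for t in (0, 5, 10, 15, 20):
        print("t=%2ds  cwnd=%7.1f" % (t, cubic_window(t, w_max=1000.0)))

That flat region around t = K would explain why, in my test, the window takes
so long to grow toward the much larger capacity of the delayed path.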

Toms.

>
> Mirja
>
>
> > On 21.05.2015 at 02:51, Tom Sanders <toms.sanders@gmail.com> wrote:
> >
> > Hi Neal,
> >
> > This is odd.
> >
> > In the original Cubic paper
> > (http://www4.ncsu.edu/~rhee/export/bitcp/cubic-paper.pdf) by Rhee and Xu,
> > they explicitly state that CUBIC's window growth is independent of RTT.
> >
> > "The main feature of CUBIC is that its window growth function is defined
> in real-time so that its growth will be independent of RTT. Our work was
> partially inspired by HTCP [5], whose window growth function is also based
> on real time. The congestion epoch period of CUBIC is determined by the
> packet loss rate alone. As TCP’s throughput is defined by the packet loss
> rate as well as RTT, the throughput of CUBIC is defined by only the packet
> loss rate. Thus, when the loss rate is high and/or RTT is short, CUBIC can
> operate in a TCP mode. Moreover, since the growth function is independent
> of RTT, its RTT fairness is guaranteed as different RTT flows will still
> grow their windows at the same rate."
> >
> > Am I missing something here?
> >
> > Thanks, Toms
> >
> > On 20 May 2015 at 23:13, Neal Cardwell <ncardwell@google.com> wrote:
> > On Wed, May 20, 2015 at 1:18 PM, Tom Sanders <toms.sanders@gmail.com> wrote:
> > > Thanks Rick, this was very helpful.
> > >
> > > So if there are no setsockopts, etc., then CUBIC's performance, unlike
> > > other TCP variants, should not get impacted by RTTs, right? Is that a
> > > fair statement to make?
> >
> > CUBIC, like most TCP congestion control variants, has significant
> > aspects that evolve on the scale of RTTs: for example, in slow start,
> > loss recovery, or when the CUBIC function is very steep, CUBIC's
> > behavior is heavily dependent on RTT (send some packets, wait for an
> > ACK, send some more packets, ...). So CUBIC will not be perfectly
> > "RTT-fair".
> >
> > I think it might be fair to say something like: once a path is
> > saturated with CUBIC flows, as long as loss recovery or send/receive
> > buffers are not a bottleneck, then competition between CUBIC flows
> > should be more RTT-fair than Reno, in the long run, since for large
> > parts of the life cycle of the CUBIC flow the evolution of cwnd is
> > constrained by wall clock time rather than ACK arrival.
> >
> > neal
> >
> >
> > > Toms
> > >
> > > On 20 May 2015 at 19:51, Rick Jones <perfgeek@mac.com> wrote:
> > >>
> > >>
> > >> On May 20, 2015, at 6:20 AM, Tom Sanders <toms.sanders@gmail.com> wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > As per my understanding, unlike other TCP variants, CUBIC claims to
> > >> > be RTT-fair. This is because the window growth rate is independent of
> > >> > the RTT, and hence the performance is the same on low- and high-RTT
> > >> > paths. Others are dependent upon RTT since their windows grow as they
> > >> > receive ACKs from their peers, while CUBIC's doesn't (window growth
> > >> > is a function of time). This is theory.
> > >> >
> > >> > In practice, CUBIC (which is the default now on most Linux
> > >> > distributions) also depends upon RTT. I connected two machines on a
> > >> > LAN and did an FTP transfer. I got a certain bandwidth. Next, I
> > >> > artificially inserted delay using "tc qdisc" in the path. I increased
> > >> > the delay on the Ethernet interface connecting my Linux machine to
> > >> > the LAN to be around 100 ms. I immediately saw that the TCP
> > >> > performance went down.
> > >> >
> > >> > To bring it up to the same level as before I had to use multiple TCP
> > >> > streams. To me this means that CUBIC performance is dependent on the
> > >> > RTT. So why do we call it RTT-fair?
> > >>
> > >> I cannot speak to CUBIC specifically, but I would think you would want
> > >> to make sure you were seeing effects of the congestion window and not
> > >> of the classic TCP window. For example, if the FTP client/server you
> > >> were using makes an explicit setsockopt() call to set the socket buffer
> > >> sizes, and by extension the classic TCP window, your bumping of the RTT
> > >> to around 100 ms may have taken the bandwidth down as a consequence of
> > >> the classic:
> > >>
> > >> Throughput <= WindowSize/RTT
> > >>
> > >> Similarly, since you mention Linux, if the FTP client/server didn't
> > >> make explicit setsockopt() calls, the sysctl values for tcp_rmem and
> > >> tcp_wmem may have been such that the auto-tuning of the window size
> > >> couldn't allow the classic TCP window to grow "enough."
> > >>
> > >> I suspect that the experiment needs to be set up to have the same
> > >> classic window size in both cases, sized per the above to be
> > >> sufficient to achieve the peak throughput in the high-RTT case. Then
> > >> you will have eliminated the classic window as a factor.
> > >>
> > >> Also, depending on the version of the Linux kernel you are using,
> > >> there may be some effects from TCP small queues or perhaps (a
> > >> stretch?) even byte queue limits which may not be playing well with
> > >> netem. (Just guessing there, really.)
> > >>
> > >> happy benchmarking,
> > >>
> > >> rick jones
> > >
> > >
> > >
> > >
> > > --
> > > Toms.
> > >
> >
> >
> >
> > --
> > Toms.
>
>


-- 
Toms.