Re: Fwd: Re: [tcpm] FW: Call for Adoption: TCP Tuning for HTTP

Willy Tarreau <w@1wt.eu> Thu, 03 March 2016 22:06 UTC

Date: Thu, 03 Mar 2016 23:01:08 +0100
From: Willy Tarreau <w@1wt.eu>
To: Patrick McManus <mcmanus@ducksong.com>
Cc: Joe Touch <touch@isi.edu>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20160303220108.GB23875@1wt.eu>
References: <56D74C23.5010705@isi.edu> <56D76A7E.7090507@isi.edu> <20160302232125.GA18275@1wt.eu> <56D77892.2000308@isi.edu> <20160303065545.GA18412@1wt.eu> <56D87BAC.4060204@isi.edu> <20160303184418.GA18774@1wt.eu> <CAOdDvNokUDxmfy87VrQNLoQvQknP6L3h6fLbuFeVpOiDN4szAQ@mail.gmail.com>
In-Reply-To: <CAOdDvNokUDxmfy87VrQNLoQvQknP6L3h6fLbuFeVpOiDN4szAQ@mail.gmail.com>
Subject: Re: Fwd: Re: [tcpm] FW: Call for Adoption: TCP Tuning for HTTP
Archived-At: <http://www.w3.org/mid/20160303220108.GB23875@1wt.eu>

Hi Pat,

On Thu, Mar 03, 2016 at 03:43:38PM -0500, Patrick McManus wrote:
> Hi Willy, Joe,
> 
> This message is a bit of a diversion from the discussion so far. Sorry about
> that.

No problem; anyway, we're discussing the issues our various implementations
have to face in the field.

> On Thu, Mar 3, 2016 at 1:44 PM, Willy Tarreau <w@1wt.eu> wrote:
> 
> > I've seen people
> > patch their kernels to lower the TIME_WAIT down to 2 seconds to address
> > such shortcomings! Quite frankly, this workaround *is* causing trouble!
> >
> 
> really? That's fascinating to me. Can you provide background or citations
> on what kind of trouble has been attributed to this and the scenario where
> it was done?

Yes, that's very simple. Someone runs a load balancer that has to close
server connections itself and that does not abuse SO_LINGER to force an
RST to be emitted. At one point this person notices that the load starts
to show spikes approaching 1000 connections per second, becomes very
irregular, and that a lot of connection errors are reported to the clients.
The person runs netstat and sees thousands of established connections
from hosts everywhere on the net, and very few established connections
to the servers. Looking closer, it appears that the total number of
connections per server is 64512, all states included. That number sounds
familiar to everyone who has already extended their source port range from
1024 to 65535. The problem is that almost all connections are in the
TIME_WAIT state; very few are established on this side. The reason is
that the point has been reached where connections are created faster than
they expire. The person then searches the net and finds that this
problem is well known and that people use different methods depending
on the OS. On Solaris it was common to reduce the time_wait timeout, which
was configurable (I believe it was the same as the close_wait but I could
be wrong, it was 10 years ago at least). On Linux people often suggest
enabling tcp_tw_recycle (which you must never do).
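To put rough numbers on the wall being hit here, a back-of-the-envelope
sketch (the fixed 60 s TIME_WAIT length is a Linux assumption; other stacks
and tunings differ):

    /* Sustainable close rate towards one destination before every ephemeral
     * source port sits in TIME_WAIT. Assumes the widened 1024-65535 port
     * range and Linux's fixed 60 s TIME_WAIT; adjust for other stacks. */
    #include <stdio.h>

    int main(void)
    {
        int ports = 65535 - 1024 + 1;      /* 64512 usable source ports */
        double time_wait = 60.0;           /* seconds a closed port stays reserved */

        printf("max ~%.0f conn/s per destination\n", ports / time_wait);
        return 0;
    }

That's roughly 1075 connections per second, which is exactly where the
~1000 conn/s spikes described above come from.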

In both cases (the Solaris and the Linux workaround), that appears to solve
the issue, but then the person starts noticing that the connection times are
very erratic and that an abnormal number of TCP resets are reported by
firewalls. And worse, since the symptoms are the same as at the beginning of
the problem, the person further reduces the timeouts.

What happens is that from time to time, under high load, you'll experience
lost packets, especially in networks involving lots of connection opening
and closing: you can't count on the flow control imposed by the congestion
control algorithm for connections that are independent of each other, so
there are packet storms that can exceed some switch buffers or even the
destination NIC's buffers.

When the lost packet is the last one (the ACK that the side in FIN_WAIT
sends in response to the FIN before entering the TIME_WAIT state), the
other side will retransmit the FIN. Please keep in mind that we were
considering the case where the TCP client closes first, so this TIME_WAIT
side is the client. At low enough loads, the first bad actor is a firewall
between the LB and the server (more and more often this firewall is
implemented on the server itself or, worse, on the underlying hypervisor,
which nobody controls or monitors). Many firewalls are tuned with aggressive
FIN_WAIT/TIME_WAIT timeouts which cause their session entry to expire before
the other side dares to retransmit. The server then remains for a long time
in the LAST_ACK state, retransmitting its FIN for a while before giving up
on ever seeing that last ACK.

But by this time, our nice client has already used all the other ports and
needs to reuse this one. Since its TIME_WAIT timeout was reduced to
something lower than the server's LAST_ACK timeout, it believes the port is
free and reuses it. It sends a SYN which passes through the firewall; this
SYN reaches the server, which disagrees and sends back either an RST (when
the client's new SYN carries a sequence number above the end of the previous
window) or an ACK, which will generally be blocked by the firewall or, if
the firewall accepts it, will be transmitted to the client, which will then
send an RST, wait one second and send the SYN again. As you can imagine,
this dance is quite counter-productive for performance, since it turns
connection setups of hundreds of microseconds into multiples of seconds.

And above a certain connection rate, you don't even need a firewall to
magnify the problem. If the client wraps its ports once a second or so, it
is often not possible anymore to set the TIME_WAIT timeout to a lower value
(due to the resolution of the settings), so people have to use tcp_tw_recycle
to bypass TIME_WAIT entirely. You get the same effect as above, with some
connections being responded to with an RST and others with an ACK instead of
a SYN-ACK, to which the client sends an RST, waits one second or so and
tries again.

And guess what? I've seen people rush haproxy into production with
"option nolinger" set on both sides to send RSTs to both sides just to
work around this, without even realizing that they were only hiding the
problem, transforming long connect times into truncated responses :-(
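For reference, that SO_LINGER trick boils down to something like this
minimal C sketch (the point being that close() then emits an RST instead
of a FIN, so no TIME_WAIT is left behind, and no pending data survives
either):

    /* Force an RST on close(): l_onoff=1 with l_linger=0 aborts the
     * connection instead of shutting it down cleanly, so no TIME_WAIT
     * is created -- but any unread or in-flight data is destroyed. */
    #include <sys/socket.h>

    static void set_nolinger(int fd)
    {
        struct linger l = { .l_onoff = 1, .l_linger = 0 };
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
    }

Which is why the symptom merely moves from slow connects to truncated
responses.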

> You don't need to go through the theoretical - I know what TW could
> conceptually catch - but the assertion about the shorter timeout causing
> field problems is something I'd love to understand better.

The TW state is very important if you want to guarantee the correctness
of your sessions in a situation where ports wrap quickly (and that's the
only case we're interested in here, BTW).

> For TW to be useful protection it also has to be paired with
> re-transmission and some application states that will be impacted by the
> screwup which reduces its utility, particularly for HTTP. I read the above
> statement as saying that TW is indeed useful in the field at the
> application layer - but maybe it is referencing some side effect I'm not
> thinking of rather than the vulnerability of not using it.

No, the application doesn't even know about it (until it fails to connect
because of it, of course). However, when the application is aware of how it
works, what protection it provides and what issues it can cause, it can try
to do its best not to have to cheat on it nor to be impacted by it. For
example, I remember that on Solaris 8 you had 32768 source ports and a 240 s
time_wait timeout by default. That's a limit of only 136 connections per
second before being blocked if the client closes first. I remember it even
caused an Apache benchmark to fail at a customer site; they said that
"Apache couldn't go beyond 100 connections per second and it had a bug
because it couldn't use all the available CPU"!

> Those kinds of post mortem war stories where it is seen in the field are
> pretty interesting and help inform the discussion about whether the cure is
> worse than the disease.

Definitely. And I remember your sentence "pcap or it didn't happen" :-)

> My inclination has generally been that TW doesn't
> help a lot in practice and has some limitations and causes pain (as well
> documented in this thread.). So it would be interesting to look at the
> fallout of a situation it could have helped with.

The real issue we have is that TCP stacks continue to use large timeouts
by default, and these, multiplied by the number of retries, quickly end up
being fairly large. It still makes sense over congested links to wait a few
seconds, but we really want to wait for a few *retransmits*, not a few
seconds. Mobile operating systems seem to retransmit very fast: I've seen
delays of around 150 ms between two SYNs on a smartphone in 2010, and the
exponential back-off didn't immediately kick in. That contributes to RAN
congestion, of course.
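For what it's worth, an application that doesn't want to live with the
stack's defaults can bound part of this itself. A minimal, Linux-only
sketch (TCP_SYNCNT and TCP_USER_TIMEOUT are Linux socket options; the
values below are purely illustrative):

    /* Bound retransmission behaviour per socket rather than relying on the
     * system-wide defaults. Linux-specific. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static void bound_retries(int fd)
    {
        int syn_retries = 3;               /* stop retrying connect() after ~3 SYNs */
        unsigned int user_timeout = 10000; /* abort if data stays unacked for 10 s */

        setsockopt(fd, IPPROTO_TCP, TCP_SYNCNT, &syn_retries, sizeof(syn_retries));
        setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &user_timeout,
                   sizeof(user_timeout));
    }

It doesn't fix the defaults, but at least the few connections that matter
don't have to wait for minutes.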

> This feels a bit like the musing over the subpar utility of the tcp
> checksum on high bandwidth networks. For that one, the answer at least in
> the http space is 'use https for integrity and sort out the rare error at
> the application level'. I'm wondering if that's the right advice in the
> time_wait space for http as well.. we're really still talking about
> integrity. Go ahead and turn it off - just make sure you're running a
> higher level protocol that won't confuse old data with new data.

It doesn't work well for the connection setup, as explained above. I could
agree with you that, regarding integrity, multiple solutions are offered;
we even have MD5 signatures in the standard if needed. But you need to be
able to recover quickly from an unclear situation, and TIME_WAIT helps you
never end up in that unclear situation in the first place.

> (Fun paper: http://conferences2.sigcomm.org/imc/2015/papers/p303.pdf showed
> that at the tail of a ping survey, 1 % of replies from 1% of addresses
> needed >= 145 seconds to arrive. And that's just delay - not
> retransmission. A truly protective TW is a very large number.)

Not surprised. I've seen the same order of magnitude in 2010 on a mobile
operator's RAN. The issue comes from deep buffers. Since the network link
can transparently hand over between different speeds and the bandwidth is
huge, they need to store megabytes of data, literally. Well, at some point
you have to wait for these megabytes to reach the LAN over a shared link...
And there TCP's retransmits can hurt, because at the output of the pipe
you see a lot of duplicates, sometimes indicating that 20-50% of the data
were already retransmits.

Given all this, I'm not surprised at all by the ongoing work to move the
transport to userland that we saw last summer in Münster! There's amazing
room for improvement that's left unexploited, and that cannot realistically
be exploited just by tuning TCP without breaking compatibility with
everything already deployed!

Cheers,
Willy