Re: [tcpm] Is this a problem?

Joe Touch <touch@ISI.EDU> Mon, 29 October 2007 22:43 UTC

Date: Mon, 29 Oct 2007 15:42:41 -0700
From: Joe Touch <touch@ISI.EDU>
User-Agent: Thunderbird (Windows/20070728)
MIME-Version: 1.0
To: Mahesh Jethanandani <>
Subject: Re: [tcpm] Is this a problem?

Mahesh Jethanandani wrote:
> Folks,
> We have documented a case of HTTP servers that are prone to resource
> starvation with the use of a small user level program. The program does
> not require any special privileges or changes in the kernel. The user
> level program on the client opens a connection to a HTTP server, sends a
> GET request for a large file (larger than the advertised window of the
> client) but never reads the response.
> Three well-known, public sites were tested for this vulnerability. 
> The two most common HTTP servers, Apache and IIS, were the targets.
> While one site had put a mitigation technique in place, the others had
> none. With the latter two we were able to hold connections in the
> ESTABLISHED state for days. The former site's mitigation used a fixed
> timeout of 11 min., which was easy to guess and work around.
> We (the authors) believe that this is a huge problem. What do you
> folks feel?
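For concreteness, the user-level program described above amounts to
something like the following sketch (the host, port, and path are
placeholders, not the actual sites tested):

```python
# Hedged sketch of the described client: open a TCP connection to an
# HTTP server, request a file larger than the advertised window, and
# never read the response. Host/port/path are illustrative only.
import socket

def hold_connection(host="example.com", port=80, path="/large-file"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    request = (f"GET {path} HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               "Connection: keep-alive\r\n\r\n")
    s.sendall(request.encode("ascii"))
    # Never call s.recv(): the client's receive buffer fills, its
    # advertised window shrinks to zero, and the server's sending side
    # blocks with the connection held in ESTABLISHED.
    return s
```

No special privileges or kernel changes are needed; this is ordinary
socket code.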

I agree with Caitlin and your previous responses; this is an application
problem.

> Previous responses to this documentation have been that it is an
> application problem. It is clear from our experimentation that most
> HTTP servers (and FTP servers too) have not implemented any mitigation
> techniques. We believe that this problem exists across the whole range
> of TCP-based applications prevalent on the Internet, although our
> experiments were limited to the web application. Where applications
> have tried to put mitigation techniques in place, workarounds have
> been easy. This is mainly because applications do not have the same
> visibility as TCP does into the state of the connection.

Applications know when a connection hasn't closed. They also know when a
write would block due to insufficient socket resources. Either or both
of these provide sufficient visibility to avoid the problem entirely.
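As a minimal sketch of that visibility (the helper name and timeout
value are illustrative, not from this thread): an application can wait,
with a deadline, for the socket to become writable before each write,
and drop the connection when the peer has stopped draining data:

```python
# Hypothetical application-level mitigation: guard each write with a
# writability timeout. If the peer never reads (zero window, full
# socket buffer), the write stays blocked and we abort the connection.
import select
import socket

STALL_TIMEOUT = 30.0  # seconds a write may stall before we give up (assumed)

def send_with_stall_guard(sock, data, timeout=STALL_TIMEOUT):
    sock.setblocking(False)
    view = memoryview(data)
    while view:
        _, writable, _ = select.select([], [sock], [], timeout)
        if not writable:
            # The peer is not reading: its advertised window is closed
            # and our socket buffer is full. Treat it as a stalled client.
            sock.close()
            raise TimeoutError("client stalled; connection dropped")
        try:
            sent = sock.send(view)
        except BlockingIOError:
            continue
        view = view[sent:]
```

A randomized or load-dependent timeout would avoid the fixed-value
guessing problem noted above.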

As you note, applications can solve this easily. Since it's their
obligation to do so, and it's easy for them to do so, there's no reason
to complicate the transport layer with this responsibility.

Furthermore, does the proposed solution solve the DoS problem? Aren't
there other ways to keep a source stalled? (E.g., what about continued
SACKs, or repeated ACKs indicating a lost segment? At the very least, we
could ACK a byte at a time, sending 3-4 duplicate ACKs for each byte,
which would keep the sender from opening its window and stall things
nearly as effectively.)
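A back-of-the-envelope estimate shows how severe byte-at-a-time ACKing
would be (the RTT and file size below are assumed values, purely for
illustration):

```python
# Rough throttling estimate for a receiver that acknowledges one new
# byte per round trip. All numbers are illustrative assumptions.
rtt = 0.1                # seconds per ACK round trip (assumed)
bytes_per_rtt = 1        # one new byte acknowledged per RTT
file_size = 1_000_000    # a 1 MB response (assumed)

seconds = file_size / bytes_per_rtt * rtt
days = seconds / 86400
print(f"{days:.1f} days to drain 1 MB")  # → "1.2 days to drain 1 MB"
```

So even without holding the window fully closed, a misbehaving receiver
can stretch a modest transfer into days.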


tcpm mailing list