RE: [tcpm] Is this a problem?

"Caitlin Bestler" <Caitlin.Bestler@neterion.com> Thu, 08 November 2007 17:55 UTC

Subject: RE: [tcpm] Is this a problem?
Date: Thu, 08 Nov 2007 12:55:25 -0500
Message-ID: <78C9135A3D2ECE4B8162EBDCE82CAD77027EA415@nekter>
In-Reply-To: <47333FD9.8010508@cisco.com>
References: <121882.10140.qm@web31702.mail.mud.yahoo.com> <4730B50A.1030102@isi.edu> <20071106190845.GC5881@elb.elitists.net> <4730BC89.5000909@isi.edu> <20071106192746.GE5881@elb.elitists.net> <20071106193912.GF5881@elb.elitists.net> <4730C9D6.1020700@cisco.com> <20071106203212.GG5881@elb.elitists.net> <47333FD9.8010508@cisco.com>
From: Caitlin Bestler <Caitlin.Bestler@neterion.com>
To: tcpm@ietf.org

*If* you were going to solve this problem as a *transport* problem,
then you would have to define it in transport terms. I'll attempt
to offer such a description, and show that it is still in conflict
with the application-layer problem.

The apparent transport-layer issue is that the application does not
know whether the lack of credits is due to a lack of receive buffers
or to network congestion. The presumption here is that the server is
willing to suffer a bit if the problem is in the network rather
than in the client.
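That opacity is easy to see at the sockets interface. Here is a
minimal sketch of my own (a hypothetical localhost demo, not from
the original discussion): the sender keeps writing to a peer that
never reads, and all it ever observes is that send() stops
accepting data. Whether the peer is out of buffers, descheduled,
malicious, or the path is congested is invisible at this interface.

```python
import socket
import time

# Listener with a deliberately small receive buffer (set before
# listen() so accepted sockets inherit it on common stacks).
srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket()
cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
cli.connect(srv.getsockname())
conn, _ = srv.accept()   # this peer never calls recv()
cli.setblocking(False)

sent, stalled = 0, False
for _ in range(200):
    try:
        sent += cli.send(b"x" * 4096)
    except BlockingIOError:
        # All the sending application learns: no more credits.
        # The cause -- peer buffers, peer scheduling, congestion --
        # is not visible here.
        stalled = True
        break
    time.sleep(0.01)  # pacing so the kernel can drain what it can

print("stalled:", stalled, "bytes accepted:", sent)
```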

The problem is that as long as the client is using TCP over a
conventional stack, the receive buffers actually belong to the
OS and not to the client. So the absence or presence of receive
buffers does not really tell us anything about the client's state.
For example, the reason the available receive buffering for the
connection is so small might be that the OS or hypervisor has
swapped the client out.
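A small illustration of that ownership (my own sketch, not part of
the original post): even the buffer size is only a hint to the
kernel, which is free to round, clamp, or (on Linux) double the
requested value.

```python
import socket

s = socket.socket()
# The application *requests* a receive buffer size; the OS decides
# what is actually granted, because the memory is the kernel's.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)  # on Linux this is typically double the request
```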

So the lack of an adequate receive window can have *three* causes:
network congestion, a lack of OS buffering resources, or a failure
of the client to drain the connection. And the latter case splits
further: the failure to drain may stem from malice and/or coding
sloppy enough that terminating the connection is justified, or it
may simply be that the client is not getting scheduled for OS
reasons.

There is no way for the transport layer to distinguish among the
latter three cases. There are many reasons for a lack of receive
buffering, and they all look the same on the wire.
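Any server-side defense therefore has to be a blunt policy. A
hypothetical application-layer sketch (the names and the 60-second
threshold are my assumptions, not anything from the thread): abort
any connection whose send side makes no progress for too long,
accepting that this necessarily also kills clients that are merely
swapped out.

```python
import time

STALL_LIMIT = 60.0  # seconds; an arbitrary, assumed threshold

class ConnState:
    """Tracks send-side progress for one connection."""
    def __init__(self):
        self.bytes_acked = 0
        self.last_progress = time.monotonic()

def on_ack(conn, total_bytes_acked):
    # Any forward progress resets the stall clock.
    if total_bytes_acked > conn.bytes_acked:
        conn.bytes_acked = total_bytes_acked
        conn.last_progress = time.monotonic()

def should_abort(conn, now):
    # Fires alike for a malicious client, a buggy client, a
    # swapped-out client, or a badly congested path -- the policy
    # cannot tell these apart from the wire.
    return (now - conn.last_progress) > STALL_LIMIT

conn = ConnState()
on_ack(conn, 1000)
print(should_abort(conn, time.monotonic()))            # recent progress
print(should_abort(conn, conn.last_progress + 120.0))  # stalled past limit
```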

Now you could use a transport that guarantees the availability of
specific, dedicated buffers. You might even call it something like
Remote Direct Memory Access. But before you could deploy this
solution you would have to convince the clients to use it. If you
are doing this as a DoS protection, then you must convince *all*
clients to make the shift so that you can turn off support for
legacy modes. If you were actually to attempt such a project, you
should not do it with TCP: HTTP would be far better implemented
over SCTP and/or RDMA than over TCP (if you are willing to ignore
the billion or so installed clients).

More generally, while there is some merit in separating feedback
about end-to-end buffer credits from congestion tracking, such
a change would not be a "minor" tweak to TCP. If the market were
willing to make major changes to TCP, it would already be using
SCTP instead.




_______________________________________________
tcpm mailing list
tcpm@ietf.org
https://www1.ietf.org/mailman/listinfo/tcpm