Re: [tcpm] PoC for draft-moncaster-tcpm-rcv-cheat-02

Rob Sherwood <capveg@cs.umd.edu> Fri, 28 March 2008 14:59 UTC

On Thu, Mar 27, 2008 at 07:14:44PM -0400, Caitlin Bestler wrote:
> On very high speed links the NIC and/or driver has very little
> choice but to implement one or more strategies to reduce the number
> of interactions with the TCP consumer. Delivering each TCP segment
> as soon as it is deliverable is not a viable option.
> 
> These techniques could include interrupt coalescing, Large Receive
> Offload, TCP offload and others. Large Receive Offload in particular
> will be negatively impacted whenever the TCP sender does not faithfully
> send entire messages in order and as promptly as the TCP congestion
> control algorithms expect. LRO seeks to reduce the number of distinct
> notifications required to deliver a sequence of TCP segments for the
> same connection, but in a manner that minimizes added latency.
> 
> When a gap indicates a lost packet, the answer is simple: aggregate
> what you have and deliver it. When a gap really means that the sender
> is testing you, you still aggregate what you have and deliver it,
> and then deliver the deliberately misordered segment separately.
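
(For concreteness, the flush-on-gap behavior described above might
look roughly like the following; this is an illustrative sketch, not
any particular NIC or driver's code.)

    # Coalesce in-order segments for one connection; flush the
    # aggregate when a sequence gap appears, then deliver the
    # out-of-order segment separately.
    class LroContext:
        def __init__(self, next_seq):
            self.next_seq = next_seq   # next expected sequence number
            self.pending = []          # in-order payloads not yet delivered

        def receive(self, seq, payload, deliver):
            if seq == self.next_seq:
                # In order: aggregate into one later notification.
                self.pending.append(payload)
                self.next_seq += len(payload)
            else:
                # Gap (loss, or a sender testing us): flush what we
                # have, then hand up the misordered segment alone.
                if self.pending:
                    deliver(b"".join(self.pending))
                    self.pending.clear()
                deliver(payload)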

Again, I think we have missed each other's point here.  I think you are
overestimating the _frequency_ with which the test need be applied.  How
often the test is applied is a local decision made by the sender, a
trade-off between efficiency and how proactively attackers are
detected.  My intuition as to why the test has such low overhead is
that it is applied at a rate below the regular packet loss rate
(caused by congestion and the TCP sawtooth), so it does not increase
the apparent packet loss rate significantly.  Applied to your
high-speed connections, this means that the rate of notifications
between the layers would not increase significantly either.
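
To put invented numbers on it: if congestion losses already run at
roughly 1% and the sender applies the test to 1 segment in 200 (an
extra 0.5%), the apparent loss rate the receiver's stack sees rises
from 1% to at most 1.5%, a small change against the background of
ordinary congestion loss.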

Further, the parameters for how often the test is applied are tunable:
my experiments recommend skipping a segment once every 1-200 segments,
which, for the machines I had available to me, seemed to be a
reasonable compromise between overhead and detection.  The higher-rate
connections that you speak of may be better served by different
parameters: skipping a segment every 1-2000, or something similar.
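
As a rough sketch of what that tunable knob could look like at the
sender (all names here are invented for illustration; this is just the
policy decision the test leaves to the sender, not a specified
mechanism):

    import random

    SKIP_RATE = 200   # tunable; e.g. 2000 for very high-speed links

    def should_test_receiver():
        # Apply the misordering test to roughly 1 in SKIP_RATE
        # segments, keeping the induced "loss" well below the
        # ordinary congestion loss rate.
        return random.randrange(SKIP_RATE) == 0

    def send_segment(conn, seg):
        if should_test_receiver():
            # Invented helper: transmit this segment late, then
            # check whether returning ACKs claim it anyway.
            conn.hold_for_reorder(seg)
        else:
            conn.transmit(seg)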

In short, I don't see how you can say that this solution is too
inefficient; one can reduce the overhead of the test arbitrarily (at
the cost of reduced detection).


> Indeed, if a sender engaged in deliberate TCP segment misordering
> on a regular basis, it could be considered a denial-of-service attack.

I believe that the security implications of this design are independent
of the test I've proposed here.


> This is something that can very easily be solved at the application
> layer, by simply requiring the application layer to periodically
> echo an application layer cookie back to the sender. A trivial AJAX
> script could handle this without a single change to the TCP layer.
> Using timestamps would also probably work. 
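
(A minimal sketch of the kind of cookie echo described above, with
invented names and framing, might be:)

    import hashlib, hmac, os

    SECRET = os.urandom(16)   # per-server secret, invented here

    def make_cookie(conn_id, bytes_sent):
        # The server periodically sends this alongside the data...
        msg = ("%s:%d" % (conn_id, bytes_sent)).encode()
        return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

    def verify_echo(conn_id, bytes_sent, echoed):
        # ...and throttles the connection if no valid echo comes
        # back, since a correct echo proves the data arrived.
        expected = make_cookie(conn_id, bytes_sent)
        return hmac.compare_digest(expected, echoed)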


I guess this is another argument that I don't understand: you're saying
it's better to change all applications individually than to solve this
in one single place?  I agree that adding a challenge-response to all
applications would solve the problem, but it assumes modifying the
entire world several times over: all applications * all clients * all
servers.  I just don't see how that is feasible, to say nothing of
having to do a per-application analysis of which challenge-response
scheme works best for the least overhead.

> And if the client was
> unwilling to provide these echoes, just limit the bandwidth that
> you will provide to them.

As I mentioned in my previous mail, using "each host limits the
bandwidth each connection gets", even as a fallback for compatibility,
does not stop the attack.  The attacker can simply target more victims
to generate the same aggregate amount of traffic.
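
To make that concrete with invented numbers: cap every connection at
1 Mbit/s, and an attacker who had been coaxing 100 Mbit/s out of a
single victim can simply open optimistic-ACK connections to 100
victims; the aggregate traffic converging on the congested link is
unchanged.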

> As for the "network" implications, this attack is a very poor 
> candidate for a successful DoS attack because it requires the
> attacker to saturate a link that actually leads to them. The
> pattern of the attack would be easily detected, and once detected
> the destination IP address is easily traceable.

Assuming the source ISP was responsive, this might work.  But I
contend that most operators looking at the traffic patterns would
assume that the attack was backwards, i.e., that it was the victims
targeting the attacker.  I don't believe that the actions an operator
would be likely to take to stop the mistaken backwards attack
(blocking traffic from victim to attacker) would stop the actual
attack.  This attack works (with reduced efficiency) even when the
attacker is totally blind with respect to the victims' traffic.  But
yes, IF the source ISP cared, IF they detected the attack, and IF they
did the right thing by blocking the sender's ACKs, then the attack
would stop for that one attacker.  It just seems like a lot of "if"s
and a lot of dependence on other people doing the right thing when it
is your network that is congested.

> Work in the IEEE's Data Center Bridging task group should lead to 
> reduction or elimination of congestion drops within IP subnets.
> That means that TCP handling optimizations in NICs, drivers or
> TCP stacks that expect in-order delivery will be even more effective.
> 
> The NICs and Bridges supporting these techniques will be going through
> a lot of extra work to provide largely lossless and in-order delivery
> of TCP segments. Deliberately sending them in the wrong order borders
> on being ungrateful.

At least from your description here, it would seem that these
techniques have DoS implications independent of this test.  I will
have to read up on these issues; thank you for pointing them out.

- Rob
_______________________________________________
tcpm mailing list
tcpm@ietf.org
https://www.ietf.org/mailman/listinfo/tcpm