Re: [tcpm] Is this a problem?

MURALI BASHYAM <> Tue, 20 November 2007 08:07 UTC

List-Id: TCP Maintenance and Minor Extensions Working Group <>

----- Original Message ----
From: Mark Allman <>
Cc: John Heffner <>; Mahesh Jethanandani <>;
Sent: Monday, November 19, 2007 12:49:19 PM
Subject: Re: [tcpm] Is this a problem? 

> The problem stems directly from TCP's choice to persist
> indefinitely. It seems a very simple notion to let the
> application be the master here (borrowing Joe's words :-)) by
> providing a ceiling on how long this behaviour will continue. This is
> fully in the spirit of how the other TCP timers have evolved and been
> added over the years. Now, this alone does not address a
> co-ordinated distributed DoS attack, but the point here is that how
> long the connection should be allowed to (re)try sending data is
> purely an *application* decision, i.e., it MUST be under application
> control. It is bad design to have an indefinite retry like this in the
> transport layer without providing an override to the app.

I don't follow this line of thinking.  Let's see ...

  + I don't know where we have added standard timers to TCP except
    where we have to (e.g., for time-wait or something).  We basically
    add timers when the only thing we can count on is the passage of
    time.

  + It doesn't seem to me a problem that a connection does this persist
    business indefinitely because both ends are consenting.  It isn't
    like one end is silent or going away.

  + The application *is* in control.  The application can close a
    connection whenever it wants to close a connection.  Giving it a
    way to tell TCP when to kill a connection is a distinction without
    a difference.

Merely closing the connection does not accomplish this; the FIN will simply
get queued behind the existing data. What is required here is an abort. But an
abort is a drastic action for the application, one it cannot take without some
sort of explicit feedback from TCP...
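For illustration, one way an application can force such an abortive close with
the standard sockets API is SO_LINGER with a zero linger time, which makes
close() discard queued data and send a RST rather than a FIN. This is a sketch
in Python (the helper name is mine, not from the thread), and note that it does
nothing to tell the application *when* an abort is warranted:

```python
import socket
import struct

def abortive_close(sock):
    """Close `sock` with a RST instead of a graceful FIN handshake.

    With SO_LINGER enabled (l_onoff=1) and a zero timeout (l_linger=0),
    close() discards any data still queued in the send buffer and sends
    a reset, so the close cannot be held up by a zero-window peer.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack('ii', 1, 0))  # l_onoff=1, l_linger=0
    sock.close()
```

A plain close() on the same socket would instead queue a FIN behind the unsent
data, exactly as described above.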

I don't think you answered my question in any way.  Why do we have to
standardize this?  It seems to me that if some server wants to adopt
a policy that says "a connection can stay in zero window persist for 60
seconds" then great.  Fine.  I don't care.  Might work fine for the use
case of that server.  Might cause problems for its connections.  But,
if that is its policy then wonderful.  Who am I to say that is right or
wrong?  Change the policy and all that still applies.  Seems perfectly
consistent with lots of other things ....

When and how does the application know that the connection entered and exited
the persist condition? Where is the required feedback here? I am not aware of
any... It seems like we are exporting TCP-specific state and knowledge all the
way into the application to accomplish what seems a simple matter: start a
timer on entry into the persist condition, stop it on exit, and abort on
timeout (if a cap was specified by the application).
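The start/stop/abort policy described above amounts to a small state machine.
The sketch below is purely illustrative (no stack exposes this exact
interface; the class and method names are mine), with the clock passed in
explicitly so the logic is easy to follow:

```python
import time

class PersistWatchdog:
    """Illustrative sketch of the proposed policy: start a timer when
    the peer advertises a zero window, stop it when the window reopens,
    and signal an abort once the application-specified cap expires."""

    def __init__(self, limit_seconds):
        self.limit = limit_seconds
        self.entered = None  # timestamp when the persist condition began

    def on_window_update(self, advertised_window, now=None):
        now = time.monotonic() if now is None else now
        if advertised_window > 0:
            self.entered = None       # exit persist: stop the timer
            return 'ok'
        if self.entered is None:
            self.entered = now        # enter persist: start the timer
        if now - self.entered >= self.limit:
            return 'abort'            # cap exceeded: connection should be reset
        return 'ok'
```

The point of the sketch is that all of this state lives naturally in the
transport layer, which already sees every window update.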

  E.g., who am I to say which SYNs a TCP should accept?  If they are
  from "bad" IPs and I want to drop them on the floor who are you to
  tell me I am wrong?

  E.g., who am I to say how many retransmits you should conduct before
  determining the peer is somehow gone and you give up?

Why does this persist stuff need to be done in a standard fashion?

Because a zero window is a transport-layer notion, and it is the
responsibility of the transport layer to provide a robust and fair solution to
this issue, one that benefits ALL applications immediately; at least that's
the way I view it. Why does congestion control require standardization, when
every client/server application out there is perfectly capable of doing it? To
achieve consistent behaviour across the widest range of applications...



tcpm mailing list