Re: [tcpm] Is this a problem?

Mark Allman <mallman@icir.org> Tue, 20 November 2007 12:13 UTC

To: MURALI BASHYAM <murali_bashyam@yahoo.com>
From: Mark Allman <mallman@icir.org>
Subject: Re: [tcpm] Is this a problem?
In-Reply-To: <299249.88905.qm@web31703.mail.mud.yahoo.com>
Organization: ICSI Center for Internet Research (ICIR)
Song-of-the-Day: Who Made Who
Date: Tue, 20 Nov 2007 07:12:38 -0500
Message-Id: <20071120121238.740442F6E0C@lawyers.icir.org>
Cc: tcpm@ietf.org
Reply-To: mallman@icir.org

(hat off, clearly)

> > I don't follow this line of thinking.  Let's see ...
> > 
> >   + I don't know where we have added standard timers to TCP except
> >     where we have to (e.g., for time-wait or something).  We
> >     basically add timers when the only thing we can count on is the
> >     passage of time.
> > 
> >   + It doesn't seem to me a problem that a connection does this
> >     persist business indefinitely because both ends are consenting.
> >     It isn't like one end is silent or going away.
> > 
> >   + The application *is* in control.  The application can close a
> >     connection whenever it wants to close a connection.  Giving it
> >     a way to tell TCP when to kill a connection is a distinction
> >     without a difference.
> 
> Merely closing a connection does not accomplish this; the FIN will
> simply get queued behind the existing data.  What's required here is
> an abort to terminate the connection.

Yes -- sorry for being overly glib in my language in the last bullet.
You are right ... abort, not close.
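
For concreteness, the difference at the sockets layer -- a minimal
sketch, assuming "fd" is a connected TCP socket; the SO_LINGER idiom
for forcing a RST is the usual BSD-sockets one:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Graceful close: the FIN is queued *behind* any unsent data, so
     * a peer sitting in zero-window persist stalls the teardown too. */
    void close_gracefully(int fd)
    {
        close(fd);
    }

    /* Abort: SO_LINGER with l_onoff=1 and l_linger=0 makes close()
     * discard queued data and send an immediate RST instead. */
    void abort_connection(int fd)
    {
        struct linger lg = { .l_onoff = 1, .l_linger = 0 };
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
        close(fd);
    }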

> But abort is a drastic action for the application, one which it
> cannot issue without some sort of explicit feedback from TCP...

I don't think so.  I think applications care about getting data through
the network.  If they can't do it then they abort regardless.  This has
been said better by others, however.  I recognize that we disagree here.

> > I don't think you answered my question in any way.  Why do we have to
> > standardize this?  It seems to me that if some server wants to
> > implement a policy that says "a connection can stay in zero window
> > persist for 60 seconds" then great.  Fine.  I don't care.  Might work
> > fine for the use case of that server.  Might cause problems for its
> > connections.  But, if that is its policy then wonderful.  Who am I to
> > say that is right or wrong?  Change the policy and all that still
> > applies.  Seems perfectly consistent with lots of other things ....
> 
> When and how does the application know that the connection entered
> and exited the persist condition?  Where is the required feedback
> here?  I am not aware of any...  It seems like we are exporting
> TCP-specific state and knowledge all the way into the application to
> accomplish what seems to be a simple matter of starting a timer on
> entry into the persist condition, stopping it on exit, and aborting
> on timeout (if specified by the application).
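
Mechanically, I agree that what you describe is simple -- roughly the
sketch below, where every name is invented for illustration and a real
stack would structure this differently:

    #include <stdbool.h>

    struct persist_timer {
        bool     armed;
        unsigned expires_in;   /* seconds until abort */
    };

    /* Peer advertised a zero window: start the abort clock, but only
     * if the application asked for one (app_timeout in seconds). */
    void on_enter_persist(struct persist_timer *t, unsigned app_timeout)
    {
        if (app_timeout) {
            t->armed = true;
            t->expires_in = app_timeout;
        }
    }

    /* Window opened again: stop the clock. */
    void on_exit_persist(struct persist_timer *t)
    {
        t->armed = false;
    }

    /* Driven once a second by the stack's slow timer; returns true
     * when the connection should be aborted rather than left in
     * persist. */
    bool persist_tick(struct persist_timer *t)
    {
        return t->armed && --t->expires_in == 0;
    }

But that the mechanism is easy to sketch says nothing about whether it
needs to be standardized.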

I should have been more careful.  I was thinking of this policy as a TCP
or OS-level policy.  Certainly it could be in the app, too---regardless
of whether the app knows exactly why it has not made progress over some
timescale that it cares about.  Again, this is about *where* to
standardize something and this has been argued better by others.  I am
more interested in the question of whether we even need to have the
"where?" argument because I don't think we need to standardize it
anywhere.  (See below.)
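
To make that concrete: a minimal sketch of such an app-level policy,
where the names, the 60 second threshold, and the Linux/BSD-style
MSG_DONTWAIT flag are all just assumptions for illustration.  Note the
application never asks *why* it is stuck (persist, dead peer, ...); it
only measures its own lack of progress:

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    #define PROGRESS_TIMEOUT 60         /* seconds; pure local policy */

    struct conn {
        int    fd;                      /* connected TCP socket */
        time_t last_progress;           /* set to time(NULL) at connect */
    };

    /* Try to push data; returns -1 if the connection was aborted for
     * lack of progress, 0 otherwise.  Once the socket buffer fills up
     * behind a zero window, send() starts failing and the clock runs. */
    int send_with_watchdog(struct conn *c, const void *buf, size_t len)
    {
        ssize_t n = send(c->fd, buf, len, MSG_DONTWAIT);

        if (n > 0) {
            c->last_progress = time(NULL);  /* progress: reset the clock */
            return 0;
        }
        if (time(NULL) - c->last_progress > PROGRESS_TIMEOUT) {
            /* Same RST idiom as the earlier sketch: abort, not close. */
            struct linger lg = { .l_onoff = 1, .l_linger = 0 };
            setsockopt(c->fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
            close(c->fd);
            return -1;
        }
        return 0;                           /* stalled, but tolerable */
    }

Whether that lives in the application, a library, or the OS is exactly
the "where" question -- but nothing in it requires TCP to export its
persist state.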

> >   E.g., who am I to say which SYNs a TCP should accept?  If they are
> >   from "bad" IPs and I want to drop them on the floor, who are you to
> >   tell me I am wrong?
> > 
> >   E.g., who am I to say how many retransmits you should conduct
> >   before determining the peer is somehow gone and giving up?
> > 
> > Why does this persist stuff need to be done in a standard fashion?
> 
> Because zero window is a transport-layer notion, and it is the
> responsibility of the transport layer to provide a robust and fair
> solution to this issue, one which benefits ALL applications
> immediately; at least, that's the way I view it.

My view is that there need not be one true notion of "robust and fair".
Why should there be?

> Why does congestion control require standardization, when every
> client/server application out there is perfectly capable of doing it?
> To achieve consistent behaviour across the widest range of
> applications...

I think this is a complete mischaracterization.  The answer is that
congestion control is standardized because congestion control is about
dealing with a *shared resource*.  We can do that control from a number
of places in the stack and people have advocated for each of them at
different points.  But, *where* that functionality exists is a second
question after we establish that the functionality needs to exist and we
need to standardize on it.  This persist business is not about
controlling a shared resource, but about controlling a *local
resource*.  Why should the community standardize local resource control?
That seems absurd to me.  So arguing about where to mitigate the
persist stuff is putting the cart before the horse.  First, we'd need to
establish some reason to standardize local resource control.

allman


