Re: [p2pi] [tsv-area] TANA proposed charter

Stanislav Shalunov <> Tue, 21 October 2008 17:05 UTC

From: Stanislav Shalunov <>
To: Nicholas Weaver <nweaver@ICSI.Berkeley.EDU>
Date: Tue, 21 Oct 2008 10:06:48 -0700
Cc: TSV Area <>, "Eddy, Wesley M. (GRC-RCN0)[VZ]" <>
Subject: Re: [p2pi] [tsv-area] TANA proposed charter

On Oct 21, 2008, at 9:32 AM, Nicholas Weaver wrote:

> Overall, I like the charter.
> The one problem I have, however, is not technical but the economic  
> model.
> One can do a fairly decent (not great, but OK) approximation today  
> by using delay-based (e.g., TCP Vegas) congestion control, as it's  
> designed to minimize queues and tends to get outcompeted by TCP  
> Reno (the two big desired properties).
> Especially since intuition suggests that you could do this  
> "single-ended": have the TCP stack on ONE side do delay-based  
> estimation in both directions, and request window resizing to  
> control the other side's sending rate.
> Yet why would a bulk-data content provider want to USE it?
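
[Purely as an illustration of the delay-based approach sketched above: a Vegas-style sender infers queueing from RTT inflation and backs off before losses occur, which is what makes it lose out to Reno. None of this code is from any real stack; the class, names, and constants are all hypothetical.]

```python
# Hypothetical sketch of a Vegas-style delay-based sender.
# All names and constants are illustrative, not from any real TCP stack.

class DelayBasedSender:
    """Shrinks its window as soon as queueing delay appears, so it
    yields to loss-based (Reno-style) flows at the same bottleneck."""

    def __init__(self, alpha=2.0, beta=4.0):
        self.cwnd = 2.0          # congestion window, in packets
        self.base_rtt = None     # lowest RTT seen ~ propagation delay
        self.alpha = alpha       # lower bound on estimated queued packets
        self.beta = beta         # upper bound on estimated queued packets

    def on_ack(self, rtt):
        if self.base_rtt is None or rtt < self.base_rtt:
            self.base_rtt = rtt
        # Packets estimated to be sitting in the bottleneck queue
        # (the Vegas "diff" between expected and actual throughput).
        queued = self.cwnd * (1 - self.base_rtt / rtt)
        if queued < self.alpha:
            self.cwnd += 1.0 / self.cwnd   # queue nearly empty: grow
        elif queued > self.beta:
            self.cwnd -= 1.0 / self.cwnd   # queue building: back off
        return self.cwnd
```

[The same delay estimate could in principle drive the advertised receive window instead of cwnd, which is the "single-ended" variant mentioned above.]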

This is a great question and it deserves a more detailed answer than  
this, but let me try, briefly, to provide one reason to use it:

Because the user wants to use the Internet while the P2P app is doing  
whatever it is doing.  Or the user will uninstall the app.

The content provider serves at the leisure of the user.  The user is  
willing to share uplink capacity as long as the sharing is largely  
invisible.  The user will be upset by uploads that affect the user's  
ability to use the network for interactive applications and web  
browsing.

(There are other scenarios, particularly with shared congestion and  
traffic management, that generally push the content provider to be  
nicer, so the above is just one example.)

-- Stas

> If the business model is "Video or other bulk data NOW", you don't  
> want it friendlier than TCP, you only want to make sure you don't  
> kill the queues on a customer's access device (if P2P).  Thus the  
> goal is ONLY to minimize queues, because being friendlier than TCP  
> otherwise might get your data squeezed out in the rest of the network.
> If anything, you might want to be more HOSTILE, because you have a  
> minimum aggregate data rate (averaged over, say, 10-30 seconds of  
> buffer) that you need to maintain to keep a realtime display going.
> If the business model is "Video or other bulk data OVERNIGHT", then  
> you are competing with the US Postal Service for legal content,  
> which is pretty darn cheap for bandwidth.
> I'd really like a "Less Than Best Effort" data class/congestion  
> control that does not populate queues and yields completely (on the  
> order of a couple of RTTs) to conventional TCP.  But I really wonder  
> who'd use it?
> On Oct 21, 2008, at 8:06 AM, Bruce Davie wrote:
>> I'll start by saying that I support the WG and think the new  
>> charter is pretty good. I have a few suggestions for small changes.
>>> Applications that transmit large amounts of data for a long
>>> time with congestion-limited TCP, but without ECN fill the
>>> buffer at the head of the bottleneck link.
>> Replace ECN with Active Queue Management (AQM)
> I'd actually say "ECN or Active Queue Management"
>>> In the best case,
>>> with an ideally sized buffer of one RTT, the delay doubles. In
>>> some cases, the extra delay may be much larger.
>> Since there isn't complete agreement on the ideal size of a buffer,  
>> just say:
>> "If the buffer size is one RTT, the delay doubles...."
> Especially since the bottleneck devices in question are sometimes  
> obscenely overbuffered.
>>> (1) An experimental congestion control algorithm for
>>> less-than-best-effort "background" transmissions, i.e., an
>>> algorithm that attempts to scavenge otherwise idle bandwidth
>>> for its transmissions in a way that minimizes interference
>>> with regular best-effort traffic.
>>> Desired features of such an algorithm are:
>>> * saturate the bottleneck,
>>> * eliminate long standing queues and thus keep delay low when
>>> no other traffic is present,
>>> * quickly yield to regular best-effort traffic that uses
>>> standard TCP congestion control,
>> perhaps it would be more precise to say:
>> "quickly yield to traffic sharing the same bottleneck queue that  
>> uses standard TCP congestion control"
> One question is "How MUCH yielding".  Should the nicer-than-TCP flow  
> go to 0?  Or should there be some equilibrium value (e.g., with 2  
> flows, one TCP and one friendly-bulk-data, should it be something  
> like 75/25)?
> _______________________________________________
> p2pi mailing list

Stanislav Shalunov
