Re: [httpstreaming] Current Status and Our Goal

"Ali C. Begen (abegen)" <abegen@cisco.com> Fri, 15 October 2010 10:40 UTC

Date: Fri, 15 Oct 2010 03:37:13 -0700
From: "Ali C. Begen (abegen)" <abegen@cisco.com>
To: Qin Wu <sunseawq@huawei.com>, Roni Even <Even.roni@huawei.com>, "David A. Bryan" <dbryan@ethernot.org>
Cc: httpstreaming@ietf.org
Subject: Re: [httpstreaming] Current Status and Our Goal

> -----Original Message-----
> From: Qin Wu [mailto:sunseawq@huawei.com]
> Sent: Friday, October 15, 2010 3:14 AM
> To: Ali C. Begen (abegen); Roni Even; David A. Bryan
> Cc: httpstreaming@ietf.org
> Subject: Re: [httpstreaming] Current Status and Our Goal
> 
> ----- Original Message -----
> From: "Ali C. Begen (abegen)" <abegen@cisco.com>
> To: "Qin Wu" <sunseawq@huawei.com>; "Roni Even" <Even.roni@huawei.com>; "David A. Bryan" <dbryan@ethernot.org>
> Cc: <httpstreaming@ietf.org>
> Sent: Friday, October 15, 2010 1:14 PM
> Subject: RE: [httpstreaming] Current Status and Our Goal
> 
> 
> > But, in http streaming scenario, the ratio of download/upload (from the client's perspective) is much larger than 1. BTW, if
> > you keep your chunk duration relatively longer, the amount of requests that you will end up sending will be almost nil
> > compared to what you will receive.
> >
> > [Qin]: No, I just compare pull and push with the same chunk duration.
> 
> You can choose whatever chunk size you wanna use. Does not matter. The fact remains the same. Unless someone uses
> chunks of a few hundred ms, it won't matter.
> 
> [Qin]: Suppose 10 chunks are available at the server side. In the pull model, the client needs to send at least 10 requests and
> receive 10 responses. In the push model, the client may only need to send one request and then receive 10 responses.
> Compared with pull, the push model saves 9 requests.
> Also, if you look at WebSocket, push has a more lightweight header than pull.
> However, as you said:
> "if you keep your chunk duration relatively longer, the amount of requests that you will end up sending will be almost nil
> compared to what you will receive."
> Based on your understanding, suppose 10 chunks are still available at the server side: in the pull model, the client sends 10
> requests to fetch them and may receive thousands of responses. Is that what you said here?

You missed my point. The size of the 10 request messages you sent in the example above is not much compared to the size of the 10 chunks you receive. And if the chunk duration gets larger, the difference will be even bigger.
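[Editor's note: the overhead argument can be made concrete with back-of-the-envelope arithmetic. The request size, bitrates, and chunk durations below are illustrative assumptions, not figures from this thread.]

```python
# Rough pull-model overhead: bytes spent on HTTP GET requests versus
# bytes received as media chunks. A plain GET with typical headers is
# assumed to be on the order of a few hundred bytes.

def request_overhead(chunk_duration_s, bitrate_bps, request_bytes=400):
    """Fraction of total traffic spent on the request for one chunk."""
    chunk_bytes = bitrate_bps / 8 * chunk_duration_s
    return request_bytes / (request_bytes + chunk_bytes)

# A 2 s chunk of 1 Mb/s video is ~250 kB, so a ~400-byte GET is well
# under 1% of the traffic; longer chunks shrink the overhead further.
for duration_s in (2, 10):
    print(duration_s, round(request_overhead(duration_s, 1_000_000), 5))
```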

> ...
> > Sorry if "playback time" meant the duration of the playback. I rather meant the start time of the playback. So, if a client
> > keeps a larger buffer hoping that it will avoid buffer underruns, this may delay the playback start time.
> >
> > [Qin]: Right. So we need to choose an appropriate buffer size to strike a balance between buffer underflow and startup
> > delay. The buffer size can be changed at any time.
> 
> It is not up to me or you. Any implementation (depending on the device, scenario, network type, etc.) can choose any size for
> the buffer.
> 
> [Qin]: I think it is an issue pertaining to buffer management. Since RTP can handle this smoothly, why not HTTP streaming?

Sorry, I am not following. RTP does not deal with this. Maybe you are referring to RTSP streaming, and even there the buffer duration is something chosen by the client, not the protocol.
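[Editor's note: the buffer-size vs. startup-delay trade-off discussed here can be sketched numerically. All figures (bitrates, link speed, buffer target) are hypothetical.]

```python
# Startup delay grows with the buffer target the client chooses: the
# player waits until the buffered media reaches the target before it
# starts playing.

def startup_delay_s(buffer_target_s, bitrate_bps, download_bps):
    """Seconds until buffer_target_s worth of media is downloaded."""
    media_bytes = bitrate_bps / 8 * buffer_target_s
    return media_bytes / (download_bps / 8)

# With a 2 Mb/s link and 1 Mb/s video, a 10 s buffer delays start by 5 s;
# a larger buffer lowers underrun risk but pushes the start time out.
print(startup_delay_s(10, 1_000_000, 2_000_000))
```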
 
> > Or, the client can start the playback quickly and build the buffer over time by requesting lower-bitrate chunks.
> >
> > [Qin]: I think the presumption is that the server should prepare multiple streams of the same content at different bitrates.
> > The server may have a heavy load if the content is live.
> 
> It is not the server that prepares the content. Not necessarily, and not often. So serving the same content at different bitrates
> does not pose any extra load on the server (except where caching comes into play).
> 
> [Qin]: Yes, it does not matter whether streaming component is collocated with web server or separated.
> 
> > Whether it paces the presentation speed down or not in order to avoid buffer underruns is something up to the client. Most
> > implementations I have seen simply switch to a lower-bitrate profile well before the buffer is fully drained. So, they don't
> > experience buffer underruns and they don’t need to pace the video presentation speed down.
> >
> > [Qin]: I don't doubt that giving more control to the client has its advantages. But I think giving some control to the server
> > also has its advantages. E.g., you may not need to prepare multiple streams of the same content at different bitrates.
> 
> Well, AFAICT the method people (including me) are interested in is this so-called multi-bitrate streaming, which
> implies the same content being encoded at multiple bitrates.
> 
> [Qin]: I am interested too. But I suspect there are other ways to decrease such overhead. E.g., transcoding may be the best
> choice for live content.
> 
> > the server controls the transmission rate based on the network conditions and client requirements.
> 
> How would this work if you have only one bitrate on the server? Is your approach as follows:
> 
> Rather than offering the same content encoded at multiple bitrates and serving them based on what the client wants, you
> wanna serve the content encoded at a single bitrate, and by manipulating the transmission rate on the server (based on the
> server's knowledge of the network and the client), you "adaptively" send it. Is that the scenario you have in mind?
> 
> [Qin]: Personally, that is what I am trying to look for. A single bitrate for live content may change over time, which can be
> realized by transcoding.

If you are saying that a server doing transcoding for adaptation scales better than an HTTP server serving client requests for different chunks, then I think you should make your arguments clear in the problem-statement draft and see what others think. To me, it is a losing proposition.
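[Editor's note: for contrast, the client-driven multi-bitrate approach argued for in this thread can be sketched as a simple buffer-based rate-selection heuristic. The profile bitrates, thresholds, and the 80% safety margin are hypothetical, not from any cited implementation.]

```python
# Minimal client-side rate selection for multi-bitrate pull streaming:
# pick the highest profile the measured throughput supports, but drop
# to the lowest profile well before the buffer drains, so the client
# adapts without any server-side transcoding or rate control.

PROFILES_BPS = [400_000, 1_000_000, 2_500_000]  # hypothetical encodings
LOW_BUFFER_S = 5  # switch down before an underrun occurs

def next_bitrate(buffer_level_s, measured_throughput_bps):
    """Bitrate to request for the next chunk."""
    if buffer_level_s < LOW_BUFFER_S:
        return PROFILES_BPS[0]  # protect against buffer underrun
    # highest profile that fits within ~80% of measured throughput
    fitting = [p for p in PROFILES_BPS if p <= 0.8 * measured_throughput_bps]
    return fitting[-1] if fitting else PROFILES_BPS[0]

print(next_bitrate(3, 3_200_000))   # low buffer -> lowest profile
print(next_bitrate(20, 3_200_000))  # healthy buffer -> highest fitting profile
```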

-acbegen