[httpstreaming] Push Vs Pull Re: Current Status and Our Goal

Qin Wu <sunseawq@huawei.com> Mon, 18 October 2010 05:43 UTC

Date: Mon, 18 Oct 2010 13:45:04 +0800
From: Qin Wu <sunseawq@huawei.com>
To: "Ali C. Begen (abegen)" <abegen@cisco.com>, Roni Even <Even.roni@huawei.com>, "David A. Bryan" <dbryan@ethernot.org>
Message-id: <02d501cb6e87$9dc75dc0$30298a0a@china.huawei.com>
Cc: httpstreaming@ietf.org
Subject: [httpstreaming] Push Vs Pull Re: Current Status and Our Goal

----- Original Message ----- 
From: "Ali C. Begen (abegen)" <abegen@cisco.com>
To: "Qin Wu" <sunseawq@huawei.com>; "Roni Even" <Even.roni@huawei.com>; "David A. Bryan" <dbryan@ethernot.org>
Cc: <httpstreaming@ietf.org>
Sent: Friday, October 15, 2010 6:37 PM
Subject: RE: [httpstreaming] Current Status and Our Goal

> -----Original Message-----
> From: Qin Wu [mailto:sunseawq@huawei.com]
> Sent: Friday, October 15, 2010 3:14 AM
> To: Ali C. Begen (abegen); Roni Even; David A. Bryan
> Cc: httpstreaming@ietf.org
> Subject: Re: [httpstreaming] Current Status and Our Goal
> 
> ----- Original Message -----
> From: "Ali C. Begen (abegen)" <abegen@cisco.com>
> To: "Qin Wu" <sunseawq@huawei.com>; "Roni Even" <Even.roni@huawei.com>; "David A. Bryan" <dbryan@ethernot.org>
> Cc: <httpstreaming@ietf.org>
> Sent: Friday, October 15, 2010 1:14 PM
> Subject: RE: [httpstreaming] Current Status and Our Goal
> 
> 
> > But, in http streaming scenario, the ratio of download/upload (from the client's perspective) is much larger than 1. BTW, if
> > you keep your chunk duration relatively longer, the amount of requests that you will end up sending will be almost nil
> > compared to what you will receive.
> >
> > [Qin]: No, I just compare pull and push with the same chunk duration.
> 
> You can choose whatever chunk size you wanna use. Does not matter. The fact remains the same. Unless someone uses
> chunks of a few hundreds of ms, it won't matter.
> 
> [Qin]: Suppose 10 chunks are available at the server side. In the pull model, the client needs to send at least 10 requests
> and receive 10 responses. In the push model, the client may only need to send one request and then receive 10 responses.
> Compared with pull, the push model saves 9 requests. Also, if you look at WebSocket, push has a more lightweight header
> than pull. However, as you said:
> "if you keep your chunk duration relatively longer, the amount of requests that you will end up sending will be almost nil
> compared to what you will receive."
> Based on that understanding, suppose again that 10 chunks are available at the server side: in the pull model, the client
> sends 10 requests to fetch them and may receive thousands of responses. Is that what you said here?

You missed my point. The size of the 10 request messages you sent in the example above is not much compared to the size of the 10 chunks you receive. And if the chunk duration gets larger, the difference will be even bigger.

[Qin]: Good point, but I think the size of the 10 request messages may not be what counts; rather, the timing of the responses is controlled by those 10 request messages, which may make it hard to satisfy the latency requirements of real-time streaming.
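The request-overhead point in the exchange above can be made concrete with a back-of-the-envelope calculation. This is an illustrative sketch only: the 500-byte request size and 2 MiB chunk size are assumptions for the sake of arithmetic, not figures taken from either mail.

```python
# Back-of-the-envelope comparison of upstream request overhead when
# delivering N media chunks by pull (one HTTP GET per chunk) versus
# push (one subscribe request, then the server sends all chunks).
# All sizes below are illustrative assumptions.

def pull_overhead(n_chunks, request_bytes, chunk_bytes):
    """Pull: one request per chunk. Returns (upstream bytes sent,
    ratio of request traffic to media received)."""
    upstream = n_chunks * request_bytes
    return upstream, upstream / (n_chunks * chunk_bytes)

def push_overhead(n_chunks, request_bytes, chunk_bytes):
    """Push: a single subscribe request, independent of n_chunks."""
    upstream = request_bytes
    return upstream, upstream / (n_chunks * chunk_bytes)

if __name__ == "__main__":
    N = 10                    # chunks available on the server
    REQ = 500                 # assumed bytes per HTTP request (headers)
    CHUNK = 2 * 1024 * 1024   # assumed 2 MiB chunk (~4 s of 4 Mbps video)

    pull_up, pull_ratio = pull_overhead(N, REQ, CHUNK)
    push_up, push_ratio = push_overhead(N, REQ, CHUNK)
    print(f"pull: {pull_up} B upstream, overhead {pull_ratio:.6%}")
    print(f"push: {push_up} B upstream, overhead {push_ratio:.6%}")
```

With these assumed numbers, both overhead ratios are a tiny fraction of a percent (Ali's point), while push still sends 10x fewer upstream bytes (Qin's point); the disagreement is really about whether request timing, not request size, matters for latency.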

> ...
> > Sorry if "playback time" meant the duration of the playback. I rather meant the start time of the playback. So, if a client
> > keeps a larger buffer hoping that it will avoid buffer underruns, this may delay the playback start time.
> >
> > [Qin]: Right. So we need to choose an appropriate buffer size to strike a balance between buffer underflow and startup
> > delay. The buffer size can be changed at any time.
> 
> It is not up to me or you. Any implementation (depending on the device, scenario, network type, etc.) can choose any size for
> the buffer.
> 
> [Qin]: I think it is an issue pertaining to buffer management. Since RTP can handle this smoothly, why not HTTP streaming?

Sorry I am not following. RTP does not deal with this. Maybe you are referring to RTSP streaming and then buffer duration is again something chosen by the client not the protocol.

[Qin]: Although the client controls the buffer duration for playout at the client side, it cannot control the buffer for encoding at the server side.
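The client-driven adaptation described later in this thread (switching to a lower-bitrate profile well before the buffer is fully drained) can be sketched as a simple watermark policy. The bitrate ladder and watermark values below are illustrative assumptions, not taken from any specification or the mails above.

```python
# Minimal sketch of client-side buffer management: the client, not the
# protocol, picks buffer thresholds and switches down to a lower
# bitrate before the buffer can underrun. Ladder and watermarks are
# assumed values for illustration.

LADDER = [4_000_000, 2_000_000, 1_000_000]  # bps, highest first (assumed)
LOW_WATERMARK = 5.0    # buffered seconds that trigger a downswitch
HIGH_WATERMARK = 15.0  # buffered seconds above which we try a rung up

def next_bitrate(current, buffered_seconds):
    """Pick the bitrate for the next chunk from the buffer level alone."""
    i = LADDER.index(current)
    if buffered_seconds < LOW_WATERMARK and i < len(LADDER) - 1:
        return LADDER[i + 1]  # switch down well before underrun
    if buffered_seconds > HIGH_WATERMARK and i > 0:
        return LADDER[i - 1]  # headroom built up: switch back up
    return current
```

Real players weigh throughput estimates as well as buffer level, but even this two-threshold version shows why the choice of buffer size is an implementation decision rather than something the protocol fixes.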
 
> > Or, the client can start the playback quickly and build the buffer over time by requesting lower-bitrate chunks.
> >
> > [Qin]: I think the presumption is that the server should prepare multiple streams of the same content at different bitrates.
> > The server may bear a heavy load if the content is live.
> 
> It is not the server who prepares the content. Not necessarily and not often. So, serving the same content at different bitrates
> does not pose any extra load on the server (except where the caching comes into play).
> 
> [Qin]: Yes, it does not matter whether the streaming component is collocated with the web server or separate.
> 
> > Whether it paces the presentation speed down or not in order to avoid buffer underruns is something up to the client. Most
> > implementations I have seen simply switch to a lower-bitrate profile well before the buffer is fully drained. So, they don't
> > experience buffer underruns and they don’t need to pace the video presentation speed down.
> >
> > [Qin]: I don't doubt that giving more control to the client has its advantages. But I think giving some control to the server
> > also has advantages; e.g., you may not need to prepare multiple streams of the same content at different bitrates.
> 
> Well, AFAICT the method in which people (including me) are interested in is this so-called multi-bitrate streaming, which
> implies the same content being encoded at multiple bitrates.
> 
> [Qin]: I am interested too. But I suspect there are other ways to decrease such overhead; e.g., transcoding may be the best
> choice for live content.
> 
> > the server control the transmission rate based on the network condition and client requirements.
> 
> How would this work if you have only one bitrate on the server? Is your approach as follows:
> 
> Rather than offering the same content encoded at multiple bitrates and serving them based on what the client wants, you
> wanna serve the content encoded at a single bitrate and by manipulating the transmission rate on the server (based on
> server's knowledge of network and client), you "adaptively" send it. Is that the scenario you have in mind?
> 
> [Qin]: Personally, that is what I am trying to look for: a single bitrate for live content that may change over time, which
> can be realized by transcoding.

If you are saying a server doing transcoding for adaptation scales better than an HTTP server serving client requests for different chunks, then I think you should make your argument clear in the problem statement draft and see what others think. To me, it is a losing proposition.

[Qin]: Okay, thanks for pointing this out.
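The single-bitrate, server-paced scenario debated above could, in principle, reduce to a rate-smoothing loop on the server. The sketch below is hypothetical: the smoothing factor `alpha`, the bandwidth-estimate input, and the rate floor are all assumed parameters, and this does not describe any existing draft or implementation from the thread.

```python
# Hypothetical sketch of server-controlled transmission: the server
# adjusts its send rate toward an estimate of the client's available
# bandwidth instead of offering multiple pre-encoded bitrates. The
# smoothing constant, floor, and the bandwidth feed are assumptions.

def paced_send_rate(measured_bps, prev_rate_bps, alpha=0.3, floor_bps=200_000):
    """Exponentially smooth the measured bandwidth into the next send
    rate, never dropping below a minimum usable rate."""
    est = alpha * measured_bps + (1 - alpha) * prev_rate_bps
    return max(est, floor_bps)
```

Note that pacing alone only changes *when* bits arrive; matching the media bitrate to this send rate is what would require the live transcoder Qin mentions, which is exactly the scaling cost Ali objects to.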

-acbegen