Re: [httpstreaming] Current Status and Our Goal

Qin Wu <sunseawq@huawei.com> Fri, 15 October 2010 01:49 UTC

Return-Path: <sunseawq@huawei.com>
X-Original-To: httpstreaming@core3.amsl.com
Delivered-To: httpstreaming@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 6DE343A6956 for <httpstreaming@core3.amsl.com>; Thu, 14 Oct 2010 18:49:34 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.995
X-Spam-Level:
X-Spam-Status: No, score=0.995 tagged_above=-999 required=5 tests=[AWL=-0.263, BAYES_00=-2.599, FH_RELAY_NODNS=1.451, HELO_MISMATCH_COM=0.553, MIME_BASE64_TEXT=1.753, RDNS_NONE=0.1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ZQAbkqa0N17c for <httpstreaming@core3.amsl.com>; Thu, 14 Oct 2010 18:49:33 -0700 (PDT)
Received: from szxga02-in.huawei.com (unknown [119.145.14.65]) by core3.amsl.com (Postfix) with ESMTP id 920DA3A67E2 for <httpstreaming@ietf.org>; Thu, 14 Oct 2010 18:49:32 -0700 (PDT)
Received: from huawei.com (szxga02-in [172.24.2.6]) by szxga02-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LAB00EBG6GI1N@szxga02-in.huawei.com> for httpstreaming@ietf.org; Fri, 15 Oct 2010 09:50:42 +0800 (CST)
Received: from huawei.com ([172.24.2.119]) by szxga02-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LAB00B1W6GIRU@szxga02-in.huawei.com> for httpstreaming@ietf.org; Fri, 15 Oct 2010 09:50:42 +0800 (CST)
Received: from w53375 ([10.138.41.48]) by szxml04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPA id <0LAB00C896GH2B@szxml04-in.huawei.com> for httpstreaming@ietf.org; Fri, 15 Oct 2010 09:50:41 +0800 (CST)
Date: Fri, 15 Oct 2010 09:50:41 +0800
From: Qin Wu <sunseawq@huawei.com>
To: "Ali C. Begen (abegen)" <abegen@cisco.com>, Roni Even <Even.roni@huawei.com>, "David A. Bryan" <dbryan@ethernot.org>
Message-id: <022c01cb6c0b$5fc75490$30298a0a@china.huawei.com>
MIME-version: 1.0
X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2900.3664
X-Mailer: Microsoft Outlook Express 6.00.2900.3664
Content-type: text/plain; charset="Windows-1252"
Content-transfer-encoding: base64
X-Priority: 3
X-MSMail-priority: Normal
References: <00df01cb5de2$2ac49730$4f548a0a@china.huawei.com> <AANLkTimB3-=zWGnT=uq9Qcb-N8Pq+-RR0WMN12BZ9pr4@mail.gmail.com> <03f501cb65a1$50699d70$f13cd850$%roni@huawei.com> <04CAD96D4C5A3D48B1919248A8FE0D540D5BEADB@xmb-sjc-215.amer.cisco.com> <03f901cb65a5$7ee4bc80$7cae3580$%roni@huawei.com> <04CAD96D4C5A3D48B1919248A8FE0D540D5BEB08@xmb-sjc-215.amer.cisco.com> <074201cb66c1$1a192d50$4f548a0a@china.huawei.com> <04CAD96D4C5A3D48B1919248A8FE0D540D5BF360@xmb-sjc-215.amer.cisco.com> <017101cb6924$bc093410$30298a0a@china.huawei.com> <04CAD96D4C5A3D48B1919248A8FE0D540D5BF70B@xmb-sjc-215.amer.cisco.com> <03ce01cb6b67$fdb9d910$30298a0a@china.huawei.com> <04CAD96D4C5A3D48B1919248A8FE0D540D689385@xmb-sjc-215.amer.cisco.com> <009c01cb6c03$f2125320$30298a0a@china.huawei.com> <04CAD96D4C5A3D48B1919248A8FE0D540D689412@xmb-sjc-215.amer.cisco.com>
Cc: httpstreaming@ietf.org
Subject: Re: [httpstreaming] Current Status and Our Goal
X-BeenThere: httpstreaming@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Network based HTTP Streaming discussion list <httpstreaming.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/httpstreaming>, <mailto:httpstreaming-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/httpstreaming>
List-Post: <mailto:httpstreaming@ietf.org>
List-Help: <mailto:httpstreaming-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/httpstreaming>, <mailto:httpstreaming-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 15 Oct 2010 01:49:34 -0000

Hi,
----- Original Message ----- 
From: "Ali C. Begen (abegen)" <abegen@cisco.com>
To: "Qin Wu" <sunseawq@huawei.com>; "Roni Even" <Even.roni@huawei.com>; "David A. Bryan" <dbryan@ethernot.org>
Cc: <httpstreaming@ietf.org>
Sent: Friday, October 15, 2010 9:11 AM
Subject: RE: [httpstreaming] Current Status and Our Goal




> -----Original Message-----
> From: Qin Wu [mailto:sunseawq@huawei.com]
> Sent: Thursday, October 14, 2010 8:58 PM
> To: Ali C. Begen (abegen); Roni Even; David A. Bryan
> Cc: httpstreaming@ietf.org
> Subject: Re: [httpstreaming] Current Status and Our Goal
> 
> Hi,
> ----- Original Message -----
> From: "Ali C. Begen (abegen)" <abegen@cisco.com>
> To: "Qin Wu" <sunseawq@huawei.com>; "Roni Even" <Even.roni@huawei.com>; "David A. Bryan" <dbryan@ethernot.org>
> Cc: <httpstreaming@ietf.org>
> Sent: Friday, October 15, 2010 6:49 AM
> Subject: RE: [httpstreaming] Current Status and Our Goal
> 
> 
> 
> 
> > -----Original Message-----
> > From: Qin Wu [mailto:sunseawq@huawei.com]
> > Sent: Thursday, October 14, 2010 2:21 AM
> > To: Ali C. Begen (abegen); Roni Even; David A. Bryan
> > Cc: httpstreaming@ietf.org
> > Subject: Re: [httpstreaming] Current Status and Our Goal
> >
> > Hi,
> > ----- Original Message -----
> > From: "Ali C. Begen (abegen)" <abegen@cisco.com>
> > To: "Qin Wu" <sunseawq@huawei.com>; "Roni Even" <Even.roni@huawei.com>; "David A. Bryan" <dbryan@ethernot.org>
> > Cc: <httpstreaming@ietf.org>
> > Sent: Tuesday, October 12, 2010 8:09 AM
> > Subject: RE: [httpstreaming] Current Status and Our Goal
> >
> > > -----Original Message-----
> > > From: Qin Wu [mailto:sunseawq@huawei.com]
> > > Sent: Monday, October 11, 2010 5:15 AM
> > > To: Ali C. Begen (abegen); Roni Even; David A. Bryan
> > > Cc: httpstreaming@ietf.org
> > > Subject: Re: [httpstreaming] Current Status and Our Goal
> > >
> > > Hi,
> > > ----- Original Message -----
> > > From: "Ali C. Begen (abegen)" <abegen@cisco.com>
> > > To: "Qin Wu" <sunseawq@huawei.com>; "Roni Even" <Even.roni@huawei.com>; "David A. Bryan"
> <dbryan@ethernot.org>
> > > Cc: <httpstreaming@ietf.org>
> > > Sent: Sunday, October 10, 2010 12:46 AM
> > > Subject: RE: [httpstreaming] Current Status and Our Goal
> > >
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Qin Wu [mailto:sunseawq@huawei.com]
> > > > Sent: Friday, October 08, 2010 4:16 PM
> > > > To: Ali C. Begen (abegen); Roni Even; David A. Bryan
> > > > Cc: httpstreaming@ietf.org
> > > > Subject: Re: [httpstreaming] Current Status and Our Goal
> > > >
> > > > > also a need for video synchronization to start rendering.
> > > >
> > > > Synchronization among the viewers you mean? That could be a concern but seriously, since client implementations will
> > > > differ, network capacities will differ, i.e., pretty much everything will differ for different clients, I don't think there is a
> > > > solution to this. Better said, I don't think there is a problem.
> > > >
> > > > [Qin] I think synchronization between the server and the client is one issue we may look at. Since the server has a
> > > > buffer for encoding and the client has a buffer for playout, we definitely need a streaming media synchronization
> > > > mechanism which may help reduce delay.
> > >
> > > Sorry, I don't get this. The server or someone else advertises what is available to the client, and the client fetches
> > > whatever it wants (and is available). Why is there a need for synchronization here?
> > >
> > > [Qin]: We may look at two different use cases: the push model and the pull model.
> > > In the pull model, you are right: the client controls the timing of fetching the chunks by driving the HTTP requests; it
> > > only requests what it needs and can handle. However, client-based pull means polling for new data each time, which is
> > > not an efficient way to deliver real-time streaming content.
> >
> > By polling I suppose you mean sending HTTP requests. Well, that is pretty much implied by using HTTP, which is a
> > request-response protocol, and this makes HTTP as stateless as possible. And IMO sending individual requests offers more
> > advantages than disadvantages.
> >
> > [Qin]: Suppose ten chunks are available at the server. In the pull model, the client MUST send ten requests to fetch all
> > the chunks, and the server answers with ten responses, one per chunk.
> > In the push model, however, I think the client only needs to initiate one request; the server then controls the delivery
> > and pushes the ten chunks to the client one by one.
> > Isn't the push model more efficient than the pull model from a transport perspective?
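Qin's ten-chunk comparison can be sketched with a toy message counter. This is a minimal sketch: the function names are invented and no real HTTP stack is involved; it only counts the messages each model would exchange.

```python
# Toy comparison of message counts for pull vs. push delivery of N chunks.
# All names here are illustrative; this is not a real HTTP client.

def pull_model_messages(num_chunks):
    """Client sends one GET per chunk; server sends one response per chunk."""
    requests = num_chunks    # one HTTP request per available chunk
    responses = num_chunks   # one response carrying each chunk
    return requests, responses

def push_model_messages(num_chunks):
    """Client sends a single initial request; server pushes every chunk."""
    requests = 1             # one subscribe-style request from the client
    responses = num_chunks   # server pushes each chunk as it becomes ready
    return requests, responses

pull = pull_model_messages(10)   # (10, 10): 20 messages in total
push = push_model_messages(10)   # (1, 10): 11 messages in total
```

The counts capture Qin's transport argument, though, as Ali notes below, message count alone is a narrow measure of efficiency.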
> 
> Depends. Just because the push model has fewer request messages does not mean it is more efficient. That would be a bad
> definition of efficiency. The pull model brings many advantages that the push model cannot offer (at the same
> flexibility). I am not saying one is better than the other, but your scope for determining efficiency is rather limited.
> 
> [Qin]: Okay. But if you look at the WebSocket protocol as a push model, you will probably find that it can be used to
> accelerate browsers and improve efficiency.

But, in an HTTP streaming scenario, the download/upload ratio (from the client's perspective) is much larger than 1. BTW, if you keep your chunk duration relatively long, the number of requests you end up sending will be almost nil compared to what you will receive.

[Qin]: No, I am just comparing pull and push with the same chunk duration.
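Ali's chunk-duration point can be put in numbers: for a fixed-length session the request rate scales inversely with chunk duration. A quick illustrative calculation, with invented figures:

```python
# Request-overhead arithmetic for pull-based streaming: how many HTTP
# requests per hour of playback a given chunk duration implies.
# The chunk durations below are examples, not recommendations.

def requests_per_hour(chunk_duration_s):
    """One request per chunk, so 3600 s divided by the chunk duration."""
    return 3600 / chunk_duration_s

short_chunks = requests_per_hour(2)   # 2 s chunks -> 1800 requests/hour
long_chunks = requests_per_hour(10)   # 10 s chunks -> 360 requests/hour
```

So a fivefold increase in chunk duration cuts the request count fivefold, at the cost of coarser-grained bitrate switching.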
 
> > > In the push model, the client does not know when the content is available at the server, since the client will not
> > > send a request for each chunk. The client clock easily falls out of sync with the encoder clock.
> >
> > Well, the server can push the content (in a push model), but it still is dependent on the client implementation to
> > determine what to do with that content. One implementation, if it wants, can start playing it 10 hours later for all we
> > care. I still don't get what you are trying to do here.
> >
> > [Qin]: In the push model, the server does not know the client's capability to consume streaming data, i.e., the server
> > does not know how fast the client processes the live stream. If the encoding rate at the server is faster than the
> > consuming rate at the client, the buffer will overflow; if the encoding rate is slower than the consuming rate, the
> > buffer will underflow.
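The overflow/underflow behavior Qin describes can be illustrated with a minimal playout-buffer simulation. The rates, capacity, and starting level below are all invented for illustration; real players measure these continuously.

```python
# Minimal sketch of a client playout buffer fed by a pushing server.
# If the push (encoding) rate exceeds the consumption rate, the buffer
# eventually exceeds its capacity (overflow); if it is slower, the
# buffer drains (underflow). Rates are in chunks per tick.

def simulate_buffer(push_rate, consume_rate, capacity, ticks, start_level=5):
    level = start_level
    for _ in range(ticks):
        level += push_rate              # server pushes new chunks
        if level > capacity:
            return "overflow"           # client cannot store the excess
        if level < consume_rate:
            return "underflow"          # not enough buffered to keep playing
        level -= consume_rate           # decoder consumes at its own rate
    return "stable"

# simulate_buffer(2, 1, 10, 20) -> "overflow"   (push faster than consume)
# simulate_buffer(1, 2, 10, 20) -> "underflow"  (push slower than consume)
# simulate_buffer(1, 1, 10, 20) -> "stable"     (rates matched)
```

As Ali points out, on a long-term average the two rates have to match; the simulation only shows what happens in the interim when they do not.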
> 
> For both on-demand and live streaming, the encoding rate is supposed to equal the consumption rate (which sounds to me
> like the rate at which the decoder is consuming). What you are referring to as buffer overflow or underflow is related to
> the streaming or transmission rate. On the long-term average, all of these need to be equal anyway.
> 
> [Qin]: But how? Is there any way to do this, or do clients just wait for rebuffering and then recover by themselves?
>
> But, I will just repeat myself. Neither has anything to do with clock synchronization between the server and client.
> 
> [Qin]: Okay.
> 
> The client can choose the buffer duration and the actual playback time.
> 
> [Qin]: Suppose the chunks are not ready in the playout buffer at the client. Does that mean the client just slows down the
> playback speed, waits until subsequent chunks arrive in the playback buffer, and then resumes the normal playback speed?

Sorry if "playback time" read as the duration of the playback. I rather meant the start time of the playback. So, if a client keeps a larger buffer hoping that it will avoid buffer underruns, this may delay the playback start time.

[Qin]: Right. So we need to choose an appropriate buffer size to strike a balance between buffer underflow and startup delay. The buffer size can be changed at any time.
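Qin's tradeoff can be put into back-of-the-envelope numbers: the time to fill a pre-roll buffer before playback starts grows linearly with the buffer target. A sketch, with invented bitrate and link-speed figures:

```python
# Startup delay implied by a pre-roll buffer target: the client must
# download buffer_target_s seconds of media before it starts playing.
# All figures below are illustrative.

def startup_delay_s(buffer_target_s, bitrate_bps, download_bps):
    """Seconds to fill the pre-roll buffer over the given link."""
    bytes_needed = buffer_target_s * bitrate_bps / 8   # media bytes to buffer
    return bytes_needed / (download_bps / 8)           # link bytes per second

# A 10 s pre-roll at 2 Mb/s over a 4 Mb/s link takes 5 s to fill;
# halving the buffer target halves the wait (but risks more underruns).
d10 = startup_delay_s(10, 2_000_000, 4_000_000)   # 5.0 seconds
d5 = startup_delay_s(5, 2_000_000, 4_000_000)     # 2.5 seconds
```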

Or, the client can start the playback quickly and build the buffer over time by requesting lower-bitrate chunks.

[Qin]: I think the presumption is that the server should prepare multiple streams of the same content at different bitrates. The server may have a big load if the content is live.

Whether it paces the presentation speed down in order to avoid buffer underruns is up to the client. Most implementations I have seen simply switch to a lower-bitrate profile well before the buffer is fully drained. So, they don't experience buffer underruns and they don't need to pace the video presentation speed down.
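The down-switch heuristic Ali describes can be sketched roughly as follows; the bitrate ladder and watermark thresholds are hypothetical, not taken from any real player.

```python
# Sketch of a client rate-adaptation heuristic: step down to a
# lower-bitrate profile well before the playout buffer drains, and
# step back up only when the buffer is comfortably full.
# The ladder and thresholds are invented for illustration.

PROFILES_BPS = [400_000, 1_200_000, 2_500_000]   # hypothetical bitrate ladder

def choose_profile(buffer_s, current_bps, low_water_s=8, high_water_s=20):
    """Return the bitrate to request for the next chunk."""
    idx = PROFILES_BPS.index(current_bps)
    if buffer_s < low_water_s and idx > 0:
        return PROFILES_BPS[idx - 1]      # switch down before an underrun
    if buffer_s > high_water_s and idx < len(PROFILES_BPS) - 1:
        return PROFILES_BPS[idx + 1]      # comfortable buffer: step up
    return current_bps                    # otherwise hold the current rate
```

The gap between the two watermarks acts as hysteresis, so the client does not oscillate between profiles on small buffer fluctuations.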

[Qin]: I don't doubt that giving more control to the client has its advantages. But I think giving some control to the server also has advantages: e.g., you may not need to prepare multiple streams of the same content at different bitrates, and the server can control the transmission rate based on network conditions and client requirements.

-acbegen