Re: [clue] Expressing encoding information in CLUE

Christian Groves <Christian.Groves@nteczone.com> Mon, 14 October 2013 02:44 UTC

Return-Path: <Christian.Groves@nteczone.com>
X-Original-To: clue@ietfa.amsl.com
Delivered-To: clue@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DA68A21F9D0E for <clue@ietfa.amsl.com>; Sun, 13 Oct 2013 19:44:16 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.129
X-Spam-Level:
X-Spam-Status: No, score=-0.129 tagged_above=-999 required=5 tests=[AWL=-2.344, BAYES_40=-0.185, J_CHICKENPOX_13=0.6, J_CHICKENPOX_15=0.6, J_CHICKENPOX_18=0.6, J_CHICKENPOX_51=0.6]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 1qXjaRFXU1VS for <clue@ietfa.amsl.com>; Sun, 13 Oct 2013 19:44:15 -0700 (PDT)
Received: from ipmail06.adl2.internode.on.net (ipmail06.adl2.internode.on.net [IPv6:2001:44b8:8060:ff02:300:1:2:6]) by ietfa.amsl.com (Postfix) with ESMTP id C118621F90A7 for <clue@ietf.org>; Sun, 13 Oct 2013 19:44:13 -0700 (PDT)
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: ApQBAJNZW1J20brS/2dsb2JhbAANTIM/wjAJgTCDGQEBAQQBAQEXHhsbBAYNBAsRBAEBAQkWCAcJAwIBAgEVHwkIEwYCAQEXh3eqE5J3BI4JC4E5DAaEHQOiU4psgVMBBAIC
Received: from ppp118-209-186-210.lns20.mel6.internode.on.net (HELO [127.0.0.1]) ([118.209.186.210]) by ipmail06.adl2.internode.on.net with ESMTP; 14 Oct 2013 13:14:05 +1030
Message-ID: <525B5A73.4070800@nteczone.com>
Date: Mon, 14 Oct 2013 13:44:03 +1100
From: Christian Groves <Christian.Groves@nteczone.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
MIME-Version: 1.0
To: clue@ietf.org
References: <5253C3C5.6080207@cisco.com> <52544BB9.7060403@cisco.com> <525578E9.6010500@alum.mit.edu> <009901cec511$73984d00$5ac8e700$@gmail.com> <5255AD6F.8070302@alum.mit.edu> <52574E9E.6000701@nteczone.com> <52584FB0.5060507@alum.mit.edu>
In-Reply-To: <52584FB0.5060507@alum.mit.edu>
Content-Type: text/plain; charset="ISO-8859-1"; format="flowed"
Content-Transfer-Encoding: 7bit
Subject: Re: [clue] Expressing encoding information in CLUE
X-BeenThere: clue@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: CLUE - ControLling mUltiple streams for TElepresence <clue.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/clue>, <mailto:clue-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/clue>
List-Post: <mailto:clue@ietf.org>
List-Help: <mailto:clue-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/clue>, <mailto:clue-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 14 Oct 2013 02:44:17 -0000

Hello Paul,

My understanding is that 3GPP uses the media capability negotiation 
framework (i.e. 3GPP TS 24.292). However, I'm not sure whether they 
utilise latent configurations.

With respect to the usage of label, the examples do show the use of label 
within configurations in the context of the BFCP protocol, so I can't see 
why it wouldn't be valid to use label for CLUE. I don't think we would be 
breaking any new ground.
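For reference, RFC 4583 ties a BFCP floor to a media stream via the label attribute roughly as follows (a sketch modelled on the RFC 4583 examples; ports and identifier values are illustrative):

```
m=application 50000 TCP/BFCP *
a=floorctrl:s-only
a=confid:4321
a=userid:1234
a=floorid:1 mstrm:10
m=video 50002 RTP/AVP 96
a=sendonly
a=label:10
```

The a=floorid line references the video stream's label, which is the same kind of cross-reference CLUE would need to make.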

Regards, Christian

On 12/10/2013 6:21 AM, Paul Kyzivat wrote:
> Christian,
>
> I had given passing thought to capneg, but no more than that. It's 
> worth discussing. (Note that I don't know much about it.)
>
> I have yet to hear of anyone actually using this mechanism. Presumably 
> somebody has. But IMO it remains to be seen whether this is going to fly. 
> If somebody can cite deployments, that could ease my concerns.
>
> If we went that way, then we would need to reference parts of a 
> latent configuration from the CLUE protocol. I think we would be 
> breaking new ground there. It's highly questionable whether references 
> to a=label attributes embedded within a latent configuration are well 
> defined. That may be true of references to latent a=mid attributes as 
> well.
>
> (In fact, attempting to use the grouping framework within a latent 
> configuration might present difficulties.)
>
> This doesn't mean we can't make this work. It does mean it's going to 
> take some effort to do so.
>
>     Thanks,
>     Paul
>
> On 10/10/13 9:04 PM, Christian Groves wrote:
>> With regards to not allocating resources isn't that what RFC 6871 and
>> the latent configuration attribute is for? i.e.
>>
>> Latent Configuration: A latent configuration indicates which
>>     combinations of capabilities could be used in a future negotiation
>>     for the session and its associated media stream components. Latent
>>     configurations are neither ready for use nor offered for actual or
>>     potential use in the current offer/answer exchange.  Latent
>>     configurations merely inform the other side of possible
>>     configurations supported by the entity.  Those latent configurations
>>     may be used to guide subsequent offer/answer exchanges, but they are
>>     not offered for use as part of the current offer/answer exchange
>>
>>
>> An Advertiser could offer the "base" audio and video and then the extra
>> captures as latent configurations. The latent configurations can then be
>> linked to the CLUE captures through a=label etc.
>>
>> RFC 7006 also provides a means of indicating bandwidth capabilities.
>> This could be used instead of having CLUE provide this.
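Putting the two together, an Advertiser's SDP might sketch latent extra captures along these lines (a very rough illustration only; the exact capability attributes and their parameters are defined in RFC 6871 and RFC 7006 and would need to be checked against those ABNFs):

```
m=video 50004 RTP/AVP 96
a=label:main
a=sendrecv
a=rmcap:1 video H264/90000
a=bcap:1 AS:1024
a=lcfg:1 [latent video configuration referencing media capability 1 and bandwidth capability 1]
```

The bracketed portion stands in for the actual latent-configuration parameter list, which is not reproduced here.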
>>
>> Regards,
>> Christian
>>
>>
>> On 10/10/2013 6:24 AM, Paul Kyzivat wrote:
>>> On 10/9/13 1:03 PM, Roni Even wrote:
>>>> Hi,
>>>> Some quick points, I still need to look more into the options and I
>>>> was busy
>>>> this week and could not attend the Tuesday call.
>>>>
>>>> As for Paul's concern about the resource allocation when providing the
>>>> encodings in SDP: I think this is not an issue, since the m-lines
>>>> specify sendonly streams to be used by the advertisement (see the
>>>> section 6 examples of the signaling draft).
>>>
>>> My understanding of when/how the bandwidth is managed is pretty
>>> fuzzy. But it would be nice if it isn't a problem in this case.
>>>
>>> Reserving ports, ICE, etc. is still an issue, but bundle will help
>>> with that.
>>>
>>>     Thanks,
>>>     Paul
>>>
>>>> My general view on the encoding was not favorable from the start. I
>>>> could not understand why we need resource management as part of CLUE.
>>>> My past experience was that trying to optimize resources per codec is
>>>> not easy; it is better to use abstract compute units that are codec
>>>> independent but allow for multiples of the unit based on capabilities.
>>>> For example, H.264 CIF would be one unit while H.264 HD and VP8 HD
>>>> would each use 4 units (not real numbers, just an example). Still, I
>>>> am not sure why in CLUE, where we wanted to address relations between
>>>> streams (spatial), we invest so much in resource management.
>>>>
>>>> Roni
>>>>
>>>>> -----Original Message-----
>>>>> From: clue-bounces@ietf.org [mailto:clue-bounces@ietf.org] On 
>>>>> Behalf Of
>>>>> Paul Kyzivat
>>>>> Sent: 09 October, 2013 6:40 PM
>>>>> To: clue@ietf.org
>>>>> Subject: Re: [clue] Expressing encoding information in CLUE
>>>>>
>>>>> Rob,
>>>>>
>>>>> Thanks for digging into this and getting the discussion started.
>>>>> This is an important decision, so we need some vigorous
>>>>> discussion. I'm looking for feedback from everybody who is actively
>>>>> involved in this.
>>>>>
>>>>> I've been on both sides of this. I've pursued the encoding-in-SDP
>>>>> approach in the signaling document. In part because of the
>>>>> consequences that showed up in doing that, I've been pushing the
>>>>> exploration of the alternative(s). Now we have that.
>>>>>
>>>>> With the encoding-in-SDP approach I remain disturbed by the need for
>>>>> the advertiser to commit media resources (ports, bandwidth) for all
>>>>> the encodings it wants to mention in an advertisement. I think this
>>>>> could be quite burdensome, especially for an MCU, which may be
>>>>> advertising many more options than any one endpoint is likely to use.
>>>>> This will be less of a problem with bundling, but it may still be a
>>>>> burden with respect to bandwidth commitments. It would be unfortunate
>>>>> if the MCU had to limit the number of encodings it offers to avoid
>>>>> the call being blocked for demanding too much bandwidth.
>>>>>
>>>>> OTOH, I appreciate Rob's concern about reinventing syntax for
>>>>> codec-specific constraints.
>>>>>
>>>>> There is no perfect answer here - we must pick our poison.
>>>>>
>>>>> I'm not going to make a choice of my own now (maybe ever). I want
>>>>> to see what others say.
>>>>>
>>>>>     Thanks,
>>>>>     Paul
>>>>>
>>>>> On 10/8/13 2:15 PM, Robert Hansen wrote:
>>>>>> The previous was an attempt at a relatively objective look at the
>>>>>> different syntactic options for conveying encoding information in
>>>>>> CLUE.
>>>>>> However, I do have opinions on the options available.
>>>>>>
>>>>>> The proposal for negotiating both encodings and multiplexing
>>>>>> behaviour in CLUE was included because it's something we have
>>>>>> discussed heavily in the past. However, there were good reasons we
>>>>>> moved away from it - it provides a CLUE-specific method of
>>>>>> multiplexing that won't play well with the syntax being defined in
>>>>>> the wider MMUSIC sphere, as well as all the problems inherent in
>>>>>> multiplexing on a single m-line.
>>>>>>
>>>>>> The other two options explored use SDP for multiplexing, but one
>>>>>> expresses the encoding limitations in SDP while the other does so in
>>>>>> CLUE. I think both are workable solutions, but my preference is for
>>>>>> the SDP option.
>>>>>>
>>>>>> While the 'expressing encoding limitations in CLUE' approach does
>>>>>> provide a number of advantages - fewer m-lines, fewer O/As, more
>>>>>> accurate encoder limitations and (as Paul pointed out in the design
>>>>>> call today) the ability to express encoder limitations without
>>>>>> committing resources - I think the need to come up with syntax for
>>>>>> the encoding limits in CLUE is a painful one. If these limits can be
>>>>>> expressed in abstract, codec-independent ways then that's one thing,
>>>>>> but each time this issue comes up the upshot seems to be that
>>>>>> abstract, codec-independent limitations aren't actually possible,
>>>>>> which would leave us needing to reinvent syntax for any codec that
>>>>>> could be used.
>>>>>>
>>>>>> I'd be very happy to be proved wrong about this, but I don't think
>>>>>> the advantages of expressing the encoding constraints in CLUE are
>>>>>> sufficient to outweigh the cost of such codec-specific work, and I
>>>>>> don't think we'll be able to come up with a nice abstraction that
>>>>>> everyone will sign off on.
>>>>>>
>>>>>> Rob
>>>>>>
>>>>>> On 08/10/2013 09:35, Robert Hansen wrote:
>>>>>>> At the design meeting last week I volunteered to take a stab at
>>>>>>> evaluating alternate approaches to conveying all encoding 
>>>>>>> information
>>>>>>> in SDP. The text got a bit long for an email, so I've also attached
>>>>>>> it as a text document. I'll also put together some slides on the
>>>>>>> topic for this call this afternoon.
>>>>>>>
>>>>>>> One of the fundamental issues we have in CLUE is how to
>>>>>>> negotiate the details of the encodings defined in
>>>>>>> draft-ietf-clue-framework. Unlike other entities such as captures,
>>>>>>> which are unique to CLUE, encodings share many properties with
>>>>>>> limits that are currently expressible in SDP, but they have
>>>>>>> additional requirements and limits that are a less good fit for
>>>>>>> SDP.
>>>>>>>
>>>>>>> The current assumption we have been working from, as detailed in
>>>>>>> draft-kyzivat-clue-signaling, is that the encodings (defined in
>>>>>>> draft-ietf-clue-framework) will be expressed in SDP and not in the
>>>>>>> CLUE signalling. However, this has only ever been a working
>>>>>>> assumption rather than something we've reached consensus on, and
>>>>>>> this is a fundamental design decision that impacts not just the
>>>>>>> message syntax but also the call flows and state machines inherent
>>>>>>> in the signalling. As such, I volunteered to examine the
>>>>>>> alternative approach, to hopefully allow us to resolve this issue
>>>>>>> so we can move forward.
>>>>>>>
>>>>>>> To do this I am going to examine two alternate options (for
>>>>>>> details of the SDP approach see draft-kyzivat-clue-signaling). One
>>>>>>> is to, where possible, negotiate the limits and use of multiple
>>>>>>> video and audio subsessions as part of the CLUE
>>>>>>> Advertisement/Configure exchange, with the negotiated SDP m-lines
>>>>>>> treated as an 'envelope' subdivided by CLUE; this approach is the
>>>>>>> one that had more initial exploration by the CLUE WG before we
>>>>>>> moved towards the SDP approach. The other approach is a half-way
>>>>>>> house that has not yet been explored in any detail; this is to
>>>>>>> negotiate each channel in SDP, but to express the encoding
>>>>>>> constraints within the CLUE signalling.
>>>>>>>
>>>>>>> *** Encodings in CLUE ***
>>>>>>>
>>>>>>> In this approach the encodings available, and the constraints
>>>>>>> associated with them, are expressed in the CLUE Advertisement. The
>>>>>>> far end then selects which encodings it wishes to receive, and the
>>>>>>> limits on those encodings, as part of the CLUE Configure message.
>>>>>>>
>>>>>>> In the simplest (and, most likely, most common) case this would
>>>>>>> allow a single sendrecv m-line per media type to be used for all
>>>>>>> sent and received CLUE-controlled streams. The various streams must
>>>>>>> then be demultiplexed; this can be done via explicit SSRC for
>>>>>>> static streams or via an additional header such as appId. SSRC is
>>>>>>> used in the examples; replacing it with appId indicators should be
>>>>>>> roughly equivalent for streams with dynamic SSRCs. As such the
>>>>>>> initial SDP, and then Advertisement, might look something like:
>>>>>>>
>>>>>>> EXAMPLE SDP OFFER (from A)
>>>>>>> m=video ...
>>>>>>> ...
>>>>>>> a=ssrc:1234
>>>>>>> a=ssrc:2345
>>>>>>> a=ssrc:3456
>>>>>>> a=sendrecv
>>>>>>>
>>>>>>> EXAMPLE ADVERTISEMENT (from A)
>>>>>>>    Capture Scene 1:
>>>>>>>     Capture 1: Left (Encoding Group 1)
>>>>>>>     Capture 2: Center (Encoding Group 1)
>>>>>>>     Capture 3: Right (Encoding Group 1)
>>>>>>>     Capture 4: Switched
>>>>>>>     Capture Scene Entry 1: 1,2,3
>>>>>>>     Capture Scene Entry 2: 4
>>>>>>>     Simultaneous Sets: 1,2,3,4
>>>>>>>    Encoding Group 1:
>>>>>>>     Encoding 1: H264, 1080p30, ssrc=1234
>>>>>>>     Encoding 2: H264, 1080p30, ssrc=2345
>>>>>>>     Encoding 3: H264, 1080p30, ssrc=3456
>>>>>>>
>>>>>>> The SSRC/appId attributes are ignorable and hence sendable in the
>>>>>>> initial SDP, though implementations may prefer to leave them out
>>>>>>> initially and then readvertise once CLUE is established.
>>>>>>>
>>>>>>> I've also included SSRCs alongside the encodings. In this case all
>>>>>>> encodings are on the same m-line, but there are scenarios where
>>>>>>> this multiplexing may not be appropriate or even possible, such as
>>>>>>> the disaggregated case where different streams must be sent to
>>>>>>> different IP addresses. In such circumstances there is a need to be
>>>>>>> able to distinguish which encodings can be sent on which m-line. An
>>>>>>> alternative would be to have the SSRCs only in SDP, and use
>>>>>>> something else to tie them to the encodings (such as a label
>>>>>>> attribute).
>>>>>>>
>>>>>>> Having the encodings split across multiple m-lines will present
>>>>>>> additional challenges, which will be detailed later. For now,
>>>>>>> consider the rest of the simple case. The far end will complete the
>>>>>>> SDP negotiation, and send a Configure message choosing what to
>>>>>>> receive, something like this:
>>>>>>>
>>>>>>> EXAMPLE SDP ANSWER (from B)
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=sendrecv
>>>>>>>
>>>>>>> EXAMPLE CONFIGURATION (from B)
>>>>>>>    Capture 1, Encoding 1
>>>>>>>    Capture 2, Encoding 2
>>>>>>>    Capture 3, Encoding 3
>>>>>>>
>>>>>>> At this point the sender can begin sending three media streams, 
>>>>>>> which
>>>>>>> the receiver can demultiplex by SSRC.
>>>>>>>
>>>>>>> However, one major issue remains even for the multiplexed case:
>>>>>>> how to express the receiver's encoding limitations. These are
>>>>>>> currently expressed in SDP, and are defined to apply to a single
>>>>>>> media stream. However, in the case above three streams are being
>>>>>>> sent to that port, and we have to decide what those limits mean.
>>>>>>>
>>>>>>> One answer would be that it provides the *overall* limit, but this
>>>>>>> quickly runs into problems. One issue is that not all parameters
>>>>>>> can be sensibly subdivided: for H.264, max-br and max-mbps could be
>>>>>>> divided into three relatively easily, but max-fs makes less sense.
>>>>>>> And even avoiding this, it would allow the far end to validly send
>>>>>>> a single stream that matched the combined limit, which the receiver
>>>>>>> might not be able to cope with; existing 3-screen systems often
>>>>>>> have three separate decoders that cannot pool their decode
>>>>>>> resources (e.g., they might be able to decode three 720p streams,
>>>>>>> but not one 1080p stream).
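To make the arithmetic concrete (illustrative numbers, using the H.264 fmtp parameters from RFC 6184): a 720p30 stream occupies 3600 macroblocks per frame (max-fs) and 108000 macroblocks per second (max-mbps), so a combined limit for three such decoders would read:

```
a=fmtp:96 profile-level-id=42e01f;max-fs=10800;max-mbps=324000
```

A single 1080p30 stream (max-fs 8160, max-mbps 244800) fits comfortably within those combined numbers, yet none of the three individual decoders could handle it; only a per-stream limit of max-fs=3600 captures the real constraint. (The profile-level-id value here is illustrative.)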
>>>>>>>
>>>>>>> As such, a better answer is that the SDP receive limits express
>>>>>>> the limit for *any* individual stream; in the example above each of
>>>>>>> the three streams could be sent at up to 720p30. However, this
>>>>>>> still poses an issue: what happens if the receiver wants to receive
>>>>>>> different capture encodings with different constraints? In the
>>>>>>> example above most receivers will probably want all three streams
>>>>>>> at similar resolutions, but in the scenario where a device wants to
>>>>>>> receive one primary capture encoding to view full-screen and six
>>>>>>> capture encodings to use as small picture-in-picture insets, the
>>>>>>> receiver needs to be able to express these different constraints.
>>>>>>>
>>>>>>> One option is for the SDP codec limit to provide a set of maxima
>>>>>>> for any given stream, with the ability to express *additional*
>>>>>>> limits for each capture encoding as part of the Configure message;
>>>>>>> a highly simplified version might look like this:
>>>>>>>
>>>>>>> EXAMPLE CONFIGURATION (from B)
>>>>>>>    Capture 1, Encoding 1, 720p30
>>>>>>>    Capture 2, Encoding 2, 360p30
>>>>>>>    Capture 3, Encoding 3, 360p30
>>>>>>>
>>>>>>> These would provide additional constraints on what could be sent
>>>>>>> for each stream; they could only lower (not raise) the constraints
>>>>>>> expressed in SDP. On the plus side, the 'encodings in CLUE'
>>>>>>> approach already means we need to create syntax to express
>>>>>>> codec-specific encoder limitations for the Advertisement messages,
>>>>>>> and much of that syntax could be reused here. On the minus side,
>>>>>>> this means replicating functionality that is already part of SDP.
>>>>>>>
>>>>>>> If we don't want to go down the route of re-expressing codec
>>>>>>> limits in CLUE there is another option: receivers that want to
>>>>>>> express different receive limits for the different capture
>>>>>>> encodings they wish to receive will need to create new m-lines in
>>>>>>> the SDP so they can differentiate between those capture encodings.
>>>>>>> The plus side of this approach is that this is a requirement of the
>>>>>>> 'Encodings in CLUE' approach anyway, as it's needed for the case
>>>>>>> where the initial answerer is a disaggregated system that doesn't
>>>>>>> want to multiplex its streams; these scenarios will also need to
>>>>>>> reoffer with new m-lines and split up the streams.
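In the full-screen-plus-insets scenario described earlier, B's re-offer under this option might look like this (following the shorthand of the examples above):

```
EXAMPLE SDP OFFER (from B)
m=video ...
... H264@720p30
a=sendrecv
m=video ...
... H264@360p30
a=sendrecv
```

with the first m-line carrying the full-screen capture encoding and the second (repeated or multiplexed as needed) carrying the lower-resolution insets.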
>>>>>>>
>>>>>>> The downside of this approach is that coming up with a general
>>>>>>> way to mix and match multiple m-lines, each of which may multiplex
>>>>>>> different streams, with the ability to change the number of m-lines
>>>>>>> in use and which encodings are multiplexed onto each, is painful -
>>>>>>> to the point where I don't have a particularly good handle on it.
>>>>>>> I've included some initial speculation in an appendix at the end to
>>>>>>> avoid eating up too much space here.
>>>>>>>
>>>>>>> Advantages of 'Encodings in CLUE' relative to 'Encodings in SDP':
>>>>>>> * 'Encodings in SDP' approach can't express all H.264 send
>>>>>>> limitations using existing SDP syntax
>>>>>>> * In the straightforward multiplexed case fewer (or no) additional
>>>>>>> O/As and m-lines
>>>>>>>
>>>>>>> Disadvantages:
>>>>>>> * Need to reinvent syntax to express H264 (and any other codec)
>>>>>>> limitations
>>>>>>> * Problems with moving from multiplexed to non-multiplexed (and
>>>>>>> similar)
>>>>>>> * Media-specific information in CLUE messaging, potentially 
>>>>>>> limiting
>>>>>>> interoperability with other ongoing multistream work
>>>>>>>
>>>>>>> *** Encoding Constraints in CLUE ***
>>>>>>>
>>>>>>> This scenario presents a half-way house between the 'Encodings in
>>>>>>> CLUE' approach expressed above and the 'Encodings in SDP' approach
>>>>>>> expressed in draft-kyzivat-clue-signaling. In this case there is a
>>>>>>> separate m-line per stream, just as with the 'Encodings in SDP'
>>>>>>> approach. However, in this approach the encoding constraints are
>>>>>>> expressed in the CLUE Advertisement.
>>>>>>>
>>>>>>> Let us explore the three-screen example from before with this new
>>>>>>> approach. There will be an initial SDP O/A to establish the CLUE
>>>>>>> channel and verify that both sides support CLUE. There is then a
>>>>>>> new SDP offer with an m-line per available stream, and a matching
>>>>>>> Advertisement:
>>>>>>>
>>>>>>> EXAMPLE SDP OFFER (from A)
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=label:A
>>>>>>> a=sendrecv
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=label:B
>>>>>>> a=sendrecv
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=label:C
>>>>>>> a=sendrecv
>>>>>>>
>>>>>>> EXAMPLE ADVERTISEMENT (from A)
>>>>>>>    Capture Scene 1:
>>>>>>>     Capture 1: Left (Encoding Group 1)
>>>>>>>     Capture 2: Center (Encoding Group 1)
>>>>>>>     Capture 3: Right (Encoding Group 1)
>>>>>>>     Capture 4: Switched
>>>>>>>     Capture Scene Entry 1: 1,2,3
>>>>>>>     Capture Scene Entry 2: 4
>>>>>>>     Simultaneous Sets: 1,2,3,4
>>>>>>>    Encoding Group 1:
>>>>>>>     Encoding 1: H264, 1080p30, label=A
>>>>>>>     Encoding 2: H264, 1080p30, label=B
>>>>>>>     Encoding 3: H264, 1080p30, label=C
>>>>>>>
>>>>>>> I've used label above to tie the encodings to the m-lines, but
>>>>>>> this could alternatively be SSRC (if SSRC were being used), appId
>>>>>>> (if appId were being used) or anything else.
>>>>>>>
>>>>>>> This closely matches the examples in draft-kyzivat-clue-signaling.
>>>>>>> However, the sender can use 'sendrecv' rather than 'sendonly' for
>>>>>>> the new m-lines, as the encoder limits are expressed in CLUE
>>>>>>> instead. This also has the advantage that the CLUE syntax for these
>>>>>>> limits allows the precise H.264 send limits to be expressed
>>>>>>> (whereas SDP restricts which H.264 parameters can be used on a
>>>>>>> sendonly stream).
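As a sketch of what such explicit send limits might look like, the encoding entries in the Advertisement above could carry RFC 6184-style parameters directly (hypothetical CLUE syntax, not something any draft currently defines):

```
Encoding Group 1:
 Encoding 1: H264, max-fs=8160, max-mbps=244800, max-br=8000, label=A
 Encoding 2: H264, max-fs=8160, max-mbps=244800, max-br=8000, label=B
 Encoding 3: H264, max-fs=8160, max-mbps=244800, max-br=8000, label=C
```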
>>>>>>>
>>>>>>> If the far end wants to receive the three static captures then it
>>>>>>> sends an SDP answer which accepts those, along with a Configure
>>>>>>> message specifying which captures it wants to receive:
>>>>>>>
>>>>>>> EXAMPLE SDP ANSWER (from B)
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=sendrecv
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=sendrecv
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=sendrecv
>>>>>>>
>>>>>>> EXAMPLE CONFIGURATION (from B)
>>>>>>>    Capture 1, Encoding 1
>>>>>>>    Capture 2, Encoding 2
>>>>>>>    Capture 3, Encoding 3
>>>>>>>
>>>>>>> The receiver no longer needs to express the receive limits in
>>>>>>> CLUE; with an m-line per received stream, the standard SDP
>>>>>>> parameters already allow full expression of the limitations. Now it
>>>>>>> just needs to state which encoding it wants to use for each capture
>>>>>>> it receives.
>>>>>>>
>>>>>>> Also, in the SDP the new m-lines are marked as 'sendrecv'; if we
>>>>>>> consider the other direction (B sending to A), then if B has three
>>>>>>> streams or fewer it will not have to make a new SDP offer. For
>>>>>>> instance, if B supported up to two encodings its response could
>>>>>>> instead have been:
>>>>>>>
>>>>>>> EXAMPLE SDP ANSWER (from B)
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=sendrecv
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=sendrecv
>>>>>>> m=video ...
>>>>>>> ... H264@720p30
>>>>>>> a=recvonly
>>>>>>>
>>>>>>> And it could immediately send an Advertisement with its capture
>>>>>>> and encoding information. A new offer would be needed if B had more
>>>>>>> encodings than A, or if A wanted to change its receive capabilities
>>>>>>> based on the capture encodings it wanted from B.
>>>>>>>
>>>>>>> This approach is very much like the 'Encodings in SDP' approach,
>>>>>>> with the following advantages and disadvantages:
>>>>>>>
>>>>>>> Advantages relative to 'Encodings in SDP':
>>>>>>> * 'Encodings in SDP' approach can't express all H.264 send
>>>>>>> limitations using existing SDP syntax
>>>>>>> * Many m-lines can be sendrecv, in many cases will need fewer O/As
>>>>>>>
>>>>>>> Disadvantages:
>>>>>>> * Still need to reinvent syntax to express H264 (and any other 
>>>>>>> codec)
>>>>>>> encoding limitations
>>>>>>>
>>>>>>> *** Summary ***
>>>>>>>
>>>>>>> Appealing as the 'Encodings in CLUE' approach is when all streams
>>>>>>> of a given media type can be multiplexed onto a single m-line,
>>>>>>> treating SDP as an 'envelope' that CLUE can subdivide, I don't
>>>>>>> believe this approach is actually feasible. Having to support the
>>>>>>> multiple m-line case, and to be able to sensibly change the number
>>>>>>> of m-lines, is painful, while the need to replicate in CLUE receive
>>>>>>> codec limits that are already expressed very capably in SDP is
>>>>>>> unlikely to be popular.
>>>>>>>
>>>>>>> I believe the 'Encoding *constraints* in CLUE' approach is
>>>>>>> considerably more plausible, though the variation from the
>>>>>>> 'Encodings in SDP' approach is fairly small. Principally it comes
>>>>>>> down to whether the work required to develop CLUE syntax to express
>>>>>>> the send limits for the various codecs is worth the added accuracy
>>>>>>> of what can be expressed in CLUE over what is currently available
>>>>>>> in SDP, along with the advantage of less need to add new m-lines
>>>>>>> (and the associated O/As involved).
>>>>>>>
>>>>>>> *** Appendix ***
>>>>>>>
>>>>>>> Let us first consider the case of wanting to split an offered set
>>>>>>> of streams. The sender (A) can send up to five capture encodings,
>>>>>>> and so sends an SDP offer along the lines of:
>>>>>>>
>>>>>>> m=video ...
>>>>>>> ...
>>>>>>> a=ssrc:1234
>>>>>>> a=ssrc:2345
>>>>>>> a=ssrc:3456
>>>>>>> a=ssrc:4567
>>>>>>> a=ssrc:5678
>>>>>>> a=sendrecv
>>>>>>>
>>>>>>> The receiver (B) wants to receive one of those streams on one
>>>>>>> m-line, three more on a second m-line, and the fifth not at all.
>>>>>>> The rules of O/A mean that B must send an initial answer with the
>>>>>>> single m-line, which we won't illustrate here, and then send a new
>>>>>>> offer. Some method must be provided to show which streams are to be
>>>>>>> received where; in this case I'm using the max-ssrc draft:
>>>>>>>
>>>>>>> m=video ...
>>>>>>> ...
>>>>>>> a=max-recv-ssrc:{*:1}
>>>>>>> a=sendrecv
>>>>>>> m=video ...
>>>>>>> ...
>>>>>>> a=max-recv-ssrc:{*:3}
>>>>>>> a=sendrecv
>>>>>>> m=video ... 0
>>>>>>>
>>>>>>> A will then need to reorder its allocation of SSRCs to m-lines to
>>>>>>> fit this sensibly. And if later in the call B decides that it now
>>>>>>> wants to multiplex all of the streams, this will need to be done in
>>>>>>> reverse, with B sending a new offer with only one active video
>>>>>>> m-line and a suitable max-recv-ssrc value, and hoping that A will
>>>>>>> correctly interpret this. This reallocation of streams between
>>>>>>> m-lines is the part I think will be difficult to define
>>>>>>> normatively.
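Continuing the SSRC numbering above, A's answer to B's new offer might allocate the streams like this (with the fifth stream dropped along with the zeroed m-line):

```
EXAMPLE SDP ANSWER (from A)
m=video ...
a=ssrc:1234
a=sendrecv
m=video ...
a=ssrc:2345
a=ssrc:3456
a=ssrc:4567
a=sendrecv
m=video ... 0
```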
>>>>>>>
>>>>>>> Rob
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> clue mailing list
>>>>>>> clue@ietf.org
>>>>>>> https://www.ietf.org/mailman/listinfo/clue
>>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> clue mailing list
>>>>>> clue@ietf.org
>>>>>> https://www.ietf.org/mailman/listinfo/clue
>>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> clue mailing list
>>>>> clue@ietf.org
>>>>> https://www.ietf.org/mailman/listinfo/clue
>>>>
>>>>
>>>
>>> _______________________________________________
>>> clue mailing list
>>> clue@ietf.org
>>> https://www.ietf.org/mailman/listinfo/clue
>>>
>>
>> _______________________________________________
>> clue mailing list
>> clue@ietf.org
>> https://www.ietf.org/mailman/listinfo/clue
>>
>
> _______________________________________________
> clue mailing list
> clue@ietf.org
> https://www.ietf.org/mailman/listinfo/clue
>