Re: [Slim] Moving forward on draft-ietf-slim-negotiating-human-language

Gunnar Hellström <gunnar.hellstrom@omnitor.se> Tue, 21 November 2017 19:45 UTC

To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: slim@ietf.org
From: Gunnar Hellström <gunnar.hellstrom@omnitor.se>
Date: Tue, 21 Nov 2017 20:45:04 +0100
Archived-At: <https://mailarchive.ietf.org/arch/msg/slim/bicZKPuCM6N4GEd3t3AZeAPOXbk>

On 2017-11-21 at 18:06, Bernard Aboba wrote:
> Paul said:
>
> "When using lip sync, is there any necessity to put the language tag 
> on the video?"
>
> [BA] Good point.
>
> "   By including a language tag for spoken language in an audio
>    description and using the "lip sync" grouping mechanism defined
>    in [RFC5888] to group it with a video media stream it is possible
>    to indicate synchronized audio and video so as to support lip
>    reading."
>
> [BA] This seems like an improvement.
<GH> I do not think that a lip sync grouping can be assumed to mean
that the user promises to be seen in video. I suspect that most
products implementing lip sync grouping apply it to all calls,
regardless of whether the user wants to provide or see lips in sync.
But it is a good feature to use if you want to see a speaker.
The 'hlang' attribute in a video description is, on the other hand, a
clear indication that you want to provide or receive language in the
video media stream.
Therefore I think we should return to saying either that a
spoken/written language tag in a video media description means a view
of the speaker when there is also a lip sync grouping, or drop the
dependency on lip sync grouping altogether. (There is a risk that
coupling lip sync grouping and language use introduces tricky corner
cases. Suppose further work agrees on a way to indicate written
captions in MPEG-4 video, and we want to indicate that in a product
that always applies lip sync grouping: that would cause a conflict.)
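
To make the comparison concrete, here is a minimal SDP sketch of
Paul's wording, with the spoken-language tag only in the audio
description and the two streams tied together by an RFC 5888 "LS"
(lip sync) group. The attribute names hlang-send/hlang-recv are taken
from the current draft; addresses and payload types are only
illustrative:

    v=0
    o=alice 2890844526 2890844526 IN IP4 198.51.100.1
    s=-
    c=IN IP4 198.51.100.1
    t=0 0
    a=group:LS 1 2
    m=audio 49170 RTP/AVP 0
    a=mid:1
    a=hlang-send:en
    a=hlang-recv:en
    m=video 51372 RTP/AVP 31
    a=mid:2

Note that the video description itself says nothing about language
here; support for lip reading is implied only by the LS group
together with the language tag in the audio description.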

Randall recently commented that the use of text captions in the video
stream is a far-fetched use case. MPEG-4 defines caption elements that
can be carried in media declared as video, but it may well be that
this is rarely or never used in conversational calls. If we can agree
on that, we could simply return to saying that a spoken/written
language tag in a video description means a view of a speaker, and
drop the requirement to link it to the language in the audio stream.
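
For comparison, a sketch of that alternative, with the spoken-language
tag placed directly in the video description to mean "a view of a
speaker", and no LS grouping required (again, values are only
illustrative):

    m=video 51372 RTP/AVP 31
    a=hlang-recv:en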

Gunnar
>
> On Tue, Nov 21, 2017 at 8:44 AM, Paul Kyzivat <pkyzivat@alum.mit.edu 
> <mailto:pkyzivat@alum.mit.edu>> wrote:
>
>     On 11/21/17 10:59 AM, Bernard Aboba wrote:
>
>         [BA] LGTM.  Do you recall what the objection was to the term
>         "spoken/written language"?
>
>         Gunnar had said:
>
>         By including a language tag for spoken language in a video
>         description and using the "lip sync" grouping mechanism
>         defined in [RFC5888] it is possible to indicate synchronized
>         audio and video so as to support lip reading.
>
>
>     When using lip sync, is there any necessity to put the language
>     tag on the video? ISTM that is irrelevant, as long as it is on the
>     synced audio media. ISTM it would be better to say:
>
>        By including a language tag for spoken language in an audio
>        description and using the "lip sync" grouping mechanism defined
>        in [RFC5888] to group it with a video media stream it is possible
>        to indicate synchronized audio and video so as to support lip
>        reading.
>
>             Thanks,
>             Paul
>
>

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288