Re: [Slim] IETF last call for draft-ietf-slim-negotiating-human-language (Section 5.4)

Gunnar Hellström <gunnar.hellstrom@omnitor.se> Wed, 15 February 2017 21:36 UTC

To: "Phillips, Addison" <addison@lab126.com>, Randy Presuhn <randy_presuhn@alumni.stanford.edu>, "ietf@ietf.org" <ietf@ietf.org>, "slim@ietf.org" <slim@ietf.org>

Addison,

On 2017-02-15 at 18:12, Phillips, Addison wrote:
> Gunnar replied:
>> On 2017-02-14 at 21:39, Phillips, Addison wrote:
>>> I have some allergy to the SHALL language: there is no way to automatically
>>> determine conformance. Many language tags represent nonsensical values, due
>>> to the nature of language tag composition. Content providers need to use care
>>> in selecting the tags that they use, and this section is merely pointing out good
>>> guidance for tag selection, albeit in a heavy-handed way. BCP 47 (RFC 5646)
>>> Section 4.1 [1] already provides most of this guidance, and a reference to that
>>> source might be useful here, if only because that document requires it:
>>> <quote>
>>>      Standards, protocols, and applications that reference this document
>>>      normatively but apply different rules to the ones given in this
>>>      section MUST specify how language tag selection varies from the
>>>      guidelines given here.
>>> </quote>
>>>
>>> I would suggest reducing the SHALL items to SHOULD.
>> Accepted.
>> That also opens up another use we have discussed before but been advised
>> not to use: indicating written language by attaching a script subtag, even
>> if the script subtag we use is suppressed by BCP 47. We can drop that need,
>> however, with the use of the Zxxx script subtag for non-written content, and
>> clearly include that usage in our specification, as required by BCP 47.
> I don't necessarily think that mandating a script subtag as a signal of written content (vs. spoken content) is that useful. In most protocols, the written nature of the content is indicated by the presence of text. Trying to coerce the language modality via language tags seems complicated, especially since most language tags are harvested from the original source. Introducing processes to evaluate and insert or remove script subtags seems unnecessary to me. That said, I have no objection to content using script subtags if they are useful.
In this case we are negotiating the use of media streams before they 
are established, so that the connection is made between the most 
capable devices and call participants. There is no indication available 
in the media coding parameters to tell whether text will be carried in 
video. So, if we want to be able to specify the three modalities 
possible in video, we need differentiated notations for them: 1) view 
of a signing person, 2) view of a speaking person, 3) text. For 1), the 
signing person, it is simple, because the language subtags are explicit 
in that they indicate a sign language. But for the other two I was not 
aware of any useful notation before I was informed about the Zxxx 
script subtag.
The reasoning about the need to distinguish these led me to specify 
that for the view of the speaking person we use the Zxxx script subtag, 
and that for any text we do not need to specify any script subtag 
(sketched below).
The view of the speaking person is the most important of the four 
identified "silly states", and it was already included in section 5.2. 
But both Bernard and I wanted to see the "silly states" chapter 
sharpened up, with the real alternatives sorted out and specified.
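To make the distinction concrete, here is a minimal SDP sketch 
(illustrative only: the humintlang-send attribute name is taken from 
the current draft, the exact syntax for listing alternatives follows 
the draft, and the tags are examples; alternatives are shown one per 
line only for readability):

    m=video 49170 RTP/AVP 99
    a=humintlang-send:ase
    a=humintlang-send:en-Zxxx
    a=humintlang-send:en

Here "ase" (American Sign Language) indicates a view of a signing 
person, "en-Zxxx" (English, non-written) a view of a speaking person, 
and plain "en" text captions carried in the video.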
>
>>> I'm not sure what #2 really means. Shouldn't text captions be indicated by the
>>> written language rather than the spoken language? And I'm not sure what
>>> "spoken/written language" means.
>> #2 was: "2. Text captions included in the video stream SHALL be indicated
>> by a Language-Tag for spoken/written language."
>>
>> Yes, the intention is to use written language in the video stream. There are
>> technologies for that.
> I'm aware of that. My concern is that in this case "spoken/written" is applied to "text captions", which are not spoken by definition. This section is talking about the differences between identifying spoken and written language. The text captions fall on the written side of the equation, no?
>
> I'd probably prefer to see something like "2. Text captions included in the video stream SHOULD include a Language-Tag to identify the language."
Yes, that is a way to avoid mentioning spoken/written, which is 
apparently confusing when the tag in this case is used for the written 
modality.
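For example (a sketch using the draft's humintlang attribute), English 
captions carried in the video stream would then simply be indicated as:

    a=humintlang-send:en

with no script subtag needed, since the view of a speaking person is 
the case that carries the Zxxx subtag.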
>
>> Since the language subtags in the IANA registry are combined for spoken
>> languages and written languages, I call them Language-Tags for spoken/written
>> language.
> The language subtags are for languages--all modalities. My comment here is that "spoken/written" adds no information.
Spoken/written is different from signed, which is the "normal" modality 
for video.
>
>> It would be misleading to say that we use a Language-Tag for a written
>> language, because the same tag could in another context mean a spoken
>> language.
> One uses a Language-Tag for indicating the language. When the text is written, sometimes the user will pick a different language tag (zh-Hant-HK) than they might choose for spoken content (yue-HK, zh-cmn-HK, etc.). Sometimes (actually, nearly all the time except for special cases) the language tag for the spoken and written language is the same tag (en-US, de-CH, ja-JP, etc.). Again, the modality of the language is a separate consideration from the language. Nearly always, it is better to use the same tag for both spoken and written content rather than trying to use the tag to distinguish between them: different Content-Types require different decoders anyway, but it is really useful to say "give me all of the 'en-US' content you have" or "do you have content for a user who speaks 'es'?"
>
>> Since we have the script subtag Zxxx for non-written content, we do not need
>> to construct an explicit tag for written language; our specification of its
>> use in our case should be sufficient.
> In case it isn't clear above, I oppose introducing the 'Zxxx' subtag save for cases where the non-written nature of the content is super-important to the identification of the language.
There is a possible alternative in RFC 4796, the SDP content attribute, 
where a "speaker" view can be indicated. But that does not easily allow 
describing alternative uses of video for sign language or for a view of 
a speaking person.

So, as I see it, the alternative to using Zxxx is to not be able to 
specify text in the video stream. Good interoperability of text in the 
text stream is much more important, so I am prepared to go that way if 
needed. Bernard's view would be interesting here.
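For reference, the RFC 4796 alternative would look like this in the 
video media description (a sketch; "speaker" and "sl" are among the 
content values defined there):

    m=video 49170 RTP/AVP 99
    a=content:speaker

But a=content describes what the stream carries; it gives no way to tie 
the view to a language, or to negotiate per-language alternatives such 
as sign language versus a view of a speaking person.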
>> In my most recent proposal, I still have very similar wording. Since you had
>> problems understanding it, there might still be a need to tune it. Can you
>> propose wording?
>> This is the current proposal:
>>
>> "   2.    Text captions included in the video stream SHOULD be indicated
>>     by a humintlang attribute with Language-Tag for spoken/written language.
>> "
> I did that above. I think it is useful not to over-think it. When I see "Content-Type: video/mpeg; Content-Language: en-GB", I rather expect audio content in English and not written content (although the video stream might also include pictures of English text, such as the titles in a movie). When, as in this case, setting up a negotiated language experience, interoperability is most aided by matching the customer's language preferences to available resources. This is easiest when customers do not get carried away with complex language tags (ranges in BCP 47 parlance, e.g. tlh-Cyrl-AQ-fonupa) and systems do not have to introspect the language tags, inserting and removing script subtags to match the various language modes.
>
> Addison
/Gunnar

-- 
-----------------------------------------
Gunnar Hellström
Omnitor
gunnar.hellstrom@omnitor.se
+46 708 204 288