Re: [codec] comparative quality testing

David Virette <david.virette@huawei.com> Fri, 15 April 2011 14:47 UTC

Date: Fri, 15 Apr 2011 16:46:58 +0200
From: David Virette <david.virette@huawei.com>
To: 'Roman Shpount' <roman@telurix.com>
Cc: codec@ietf.org
Subject: Re: [codec] comparative quality testing

Dear Roman,

Do you have specific conditions you would like to see tested for G.729:
clean and/or noisy speech, error conditions, input levels? This is just to
get an idea of what needs to be added or modified in the test plan.
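
For illustration, such a test-condition matrix could be enumerated along
the following lines. This is only a Python sketch; the condition names,
error rates, and levels below are hypothetical placeholders, not items
from the actual test plan:

    # Hypothetical listening-test condition matrix for a G.729 comparison.
    # All values are illustrative, not taken from the draft test plan.
    from itertools import product

    speech_types = ["clean", "noisy"]   # e.g. office or babble noise
    error_rates  = [0, 3, 6]            # frame-erasure rate, percent
    input_levels = [-26, -16, -36]      # input level in dBov

    for speech, fer, level in product(speech_types, error_rates, input_levels):
        print(f"Opus vs G.729: {speech} speech, {fer}% FER, {level} dBov")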

Best regards,

David


David Virette
HUAWEI TECHNOLOGIES CO., LTD.
Building C
Riesstrasse 25
80992 Munich, Germany
Tel: +49 89 158834 4148
Fax: +49 89 158834 4447
Mobile: +49 1622047469
E-mail: david.virette@huawei.com
www.huawei.com

From: codec-bounces@ietf.org [mailto:codec-bounces@ietf.org] On Behalf Of Roman Shpount
Sent: Thursday, 14 April 2011 20:21
To: Gregory Maxwell
Cc: codec@ietf.org
Subject: Re: [codec] comparitive quality testing


I think part of the confusion comes from the fact that there are two
purposes for the comparative testing. One is to validate that the codec
meets the WG requirements. The other is to show how the new codec compares
to the industry-dominant codecs. For me, the second goal is more important
than the first. I think if we care about the adoption of Opus, we should
consider making the comparative test results a deliverable for the working
group. It is very hard for a real company in the open market to justify
doing something like adopting a new codec without a compelling reason.
Knowing how this codec compares to other existing codecs is a big part of
providing such a reason. If we look at the tests from this point of view,
we need to see how Opus compares to G.729 and AMR in narrowband, and to
AMR-WB and G.722 in wideband. Since there are no existing deployments of a
meaningful size (apart from closed proprietary systems, like Skype) for UWB
and FB, we can compare Opus there with industry leaders, such as G.719.
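
Restating those pairings as data, a minimal sketch (the grouping simply
mirrors the paragraph above; nothing here goes beyond it):

    # Proposed reference codecs per audio bandwidth, as listed above.
    reference_codecs = {
        "narrowband": ["G.729", "AMR"],
        "wideband": ["AMR-WB", "G.722"],
        "UWB/fullband": ["G.719"],  # no large open deployments to compare against
    }

    for band, codecs in reference_codecs.items():
        print(f"{band}: compare Opus against {', '.join(codecs)}")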

One can argue that we should also compare Opus with patent-free codecs,
which adds iLBC and Speex to the list, but I personally see this as less of
a requirement. iLBC never managed to get market traction outside of the
open source world, and even there nobody bothered to write even a
moderately optimized version of it. Speex is known for audio quality
problems, so it would be an easy target to beat. On the other hand, beating
it would probably not be much of a milestone and would not tell anybody
much about Opus quality.

There were several tests that compared Opus with non-interactive codecs,
but once again this is not something that would affect choosing Opus over
other codecs, since non-interactive codecs are clearly inappropriate for
Opus's intended purposes.

We can argue about adding more codecs to the list, but I am not sure that
would make a difference. We only need to compare against a very few codecs
to give everybody a clear idea of Opus quality. As far as defining the
criteria for the codec being acceptable for standardization, all we really
need is comparable quality (not worse than the other codecs by some defined
margin). This is not a competition where Opus needs to win every race to be
successful.
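
As a rough sketch of what such an acceptance check could look like,
assuming per-condition MOS results were available (the scores and the
0.2 MOS margin below are invented for illustration; a real test plan would
use proper statistical analysis over many listeners, not a raw mean):

    # Hypothetical non-inferiority check: Opus passes if its mean MOS is
    # not worse than the reference codec's by more than a defined margin.
    opus_mos = [3.9, 4.1, 3.8, 4.0]   # per-condition MOS for Opus (invented)
    ref_mos  = [4.0, 4.0, 3.9, 4.1]   # per-condition MOS for reference (invented)
    margin   = 0.2                    # allowed gap in MOS points (invented)

    mean_diff = sum(o - r for o, r in zip(opus_mos, ref_mos)) / len(opus_mos)
    print(f"mean MOS difference: {mean_diff:+.2f}")
    print("acceptable" if mean_diff >= -margin else "not acceptable")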

The whole reason I am interested in formal comparative testing of Opus is
that I am impressed by its quality. I think having well-documented test
results that have been cross-checked by multiple people might make a
critical difference in Opus adoption and, as a result, in the success of
this working group.

No hats, just my two cents...
_____________
Roman Shpount



On Thu, Apr 14, 2011 at 10:25 AM, Gregory Maxwell <gmaxwell@juniper.net>
wrote:

Roni Even [ron.even.tlv@gmail.com] wrote:
> I do not mind if the WG will decide to remove the quality claim and
> continue with developing a royalty free codec with "good enough" quality,
> not saying it is better than other codecs.
> I just think that it should be clear from the charter and requirements
> what the purpose of the work is.

It's funny how we can argue and argue, only to later realize that it comes
down to a simple mutual misunderstanding.

I thought everyone was already on the same page with respect to the
goals: it's good to be as good as possible, but the chartered purpose
of the WG was only to do a "good quality" codec that was suited
to the listed applications and deployments.

As a developer I know that quality testing is important, and of course
we've done a lot of it, of various types. I strongly believe in scientific
testing, so of course my first instinct would have been to do it here, but
perhaps the reality of the consensus process makes that less reasonable;
as others have pointed out, most other WGs don't really do anything
comparable to quality testing.

Likewise, making sure the outcome is as legally unencumbered as I can make
it is also very important to me, but because of the vagaries of the process
and the law, this isn't something that the working group itself makes
promises about.

So, perhaps it makes sense for the working group to not make any quality
promises in the same way it makes no promises about patents.

It seems clear enough to me now that we can much more easily come to
consensus about achieving good-enough status than about formal testing
gates and requirements.

We should accept your suggestion: drop all the comparative quality
requirements from the requirements draft, stop discussing comparative
quality here, and instead make some progress on technology rather than
continue bickering about details where we are not going to come to
consensus.

The market can figure out the comparative quality question on its own.

_______________________________________________
codec mailing list
codec@ietf.org
https://www.ietf.org/mailman/listinfo/codec