Re: [codec] comparative quality testing

Roman Shpount <roman@telurix.com> Fri, 15 April 2011 17:01 UTC

To: David Virette <david.virette@huawei.com>
Cc: codec@ietf.org

David,

We should compare with standard G.729 (not Annex A or B) at 8 kbit/s, under
0% and 5% packet loss, with both clean speech and speech mixed with at least
one noise sample. We can use the ITU-T audio library for both the speech and
the noise samples.
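
For concreteness, a minimal sketch of the resulting condition matrix (the
labels are illustrative placeholders, not actual ITU-T audio library file
names):

    from itertools import product

    # Proposed conditions: Opus vs. standard G.729 (not Annex A or B),
    # both at 8 kbit/s, with and without packet loss, clean and noisy speech.
    codecs = ["opus", "g729"]
    loss_rates = [0.00, 0.05]          # 0% and 5% packet loss
    materials = ["clean_speech", "speech_plus_noise"]

    for codec, loss, material in product(codecs, loss_rates, materials):
        print(f"{codec} @ 8 kbit/s, {loss:.0%} loss, {material}")
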
_____________
Roman Shpount


On Fri, Apr 15, 2011 at 10:46 AM, David Virette <david.virette@huawei.com> wrote:

> Dear Roman,
>
> Would you have some specific conditions you would like to see tested for
> G.729: clean speech and/or noisy speech experiments, error conditions,
> levels? It is just to have an idea of what has to be added/modified in the
> test plan.
>
> Best regards,
>
> David
>
>
>
> David Virette
> HUAWEI TECHNOLOGIES CO.,LTD.
>
>
>
> Building C
> Riesstrasse 25
> 80992 Munich, Germany
> Tel: +49 89 158834 4148
> Fax: +49 89 158834 4447
> Mobile: +49 1622047469
> E-mail: david.virette@huawei.com
> www.huawei.com
>
> *From:* codec-bounces@ietf.org [mailto:codec-bounces@ietf.org] *On Behalf
> Of *Roman Shpount
> *Sent:* Thursday, April 14, 2011 20:21
> *To:* Gregory Maxwell
> *Cc:* codec@ietf.org
> *Subject:* Re: [codec] comparative quality testing
>
>
>
> I think part of the confusion comes from the fact that there are two
> purposes for the comparative testing. One is to validate that the codec
> meets the WG requirements. The other is to show how the new codec compares
> to the industry-dominant codecs. For me, the second goal is more important
> than the first. I think if we care about the adoption of Opus, we should
> consider making the comparative test results a deliverable for the working
> group. It is very hard for a real company in the open market to justify
> doing something like adopting a new codec without a compelling reason.
> Knowing how this codec compares to other existing codecs is a big part of
> providing such a reason. If we look at the tests from this point of view,
> we need to see how Opus compares to G.729 and AMR in narrowband, and to
> AMR-WB and G.722 in wideband. Since there are no existing deployments of
> meaningful size (apart from closed proprietary systems, like Skype) for
> UWB and FB, we can compare Opus there with the industry leaders, such as
> G.719.
>
> One can argue that we should also compare Opus with patent-free codecs,
> which adds iLBC and Speex to the list, but I personally see this as less
> of a requirement. iLBC never managed to get market traction outside of
> the open-source world, and even there nobody bothered to write even a
> moderately optimized version of it. Speex is known for audio quality
> problems, so it would be an easy target to beat; on the other hand,
> beating it would not be much of a milestone and would not tell anybody
> a lot about Opus quality.
>
> There were several tests that compared Opus with non-interactive codecs,
> but once again this is not something that would affect choosing Opus over
> other codecs, since those codecs are clearly inappropriate for the
> interactive applications Opus targets.
>
> We can argue about adding more codecs to the list, but I am not sure that
> would make a difference. We only need to compare with a very few codecs to
> give everybody a clear idea of Opus quality. As far as defining the
> criteria for the codec being acceptable for standardization, all we really
> need is comparable quality (not worse than the other codecs by some
> defined margin). This is not a competition where Opus needs to win every
> race to be successful.
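>
> For concreteness, a minimal sketch of what such a "not worse by a defined
> margin" check could look like on per-condition MOS scores (the margin and
> the scores below are made up for illustration, not real test data):
>
>     import math
>
>     def non_inferior(mos_new, mos_ref, margin=0.2, z=1.96):
>         """True if the new codec is not worse than the reference by more
>         than `margin` MOS, using a normal-approximation lower confidence
>         bound on the difference of mean listener scores."""
>         n_a, n_b = len(mos_new), len(mos_ref)
>         mean_a, mean_b = sum(mos_new) / n_a, sum(mos_ref) / n_b
>         var_a = sum((x - mean_a) ** 2 for x in mos_new) / (n_a - 1)
>         var_b = sum((x - mean_b) ** 2 for x in mos_ref) / (n_b - 1)
>         lower = (mean_a - mean_b) - z * math.sqrt(var_a / n_a + var_b / n_b)
>         return lower > -margin
>
>     # Hypothetical listener scores for one test condition:
>     print(non_inferior([3.8, 4.0, 3.9, 4.1, 3.7], [3.9, 4.0, 3.8, 4.0, 3.9]))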
>
> The whole reason why I am interested in formal comparative testing of Opus
> is because I am impressed by its quality. I think having well documented
> test results which were cross-checked by multiple people might make a
> critical difference in Opus adoption, and as a result the success of this
> working group.
>
> No hats, just my two cents...
> _____________
> Roman Shpount
>
> On Thu, Apr 14, 2011 at 10:25 AM, Gregory Maxwell <gmaxwell@juniper.net>
> wrote:
>
> Roni Even [ron.even.tlv@gmail.com] wrote:
> > I do not mind if the WG will decide to remove the quality claim and
> > continue with developing a royalty-free codec with "good enough"
> > quality, not saying it is better than other codecs.
> > I just think that it should be clear from the charter and requirements
> > what the purpose of the work is.
>
> It's funny how we can argue and argue, only to later realize that it comes
> down to a simple mutual misunderstanding.
>
> I thought everyone was already on the same page with respect to the
> goals: it's good to be as good as possible, but the chartered purpose
> of the WG was only to produce a "good quality" codec suited to the
> listed applications and deployments.
>
> As a developer I know that quality testing is important, and of course
> we've done a lot of it, of various types. I strongly believe in
> scientific testing, so of course my first instinct would have been to do
> it here, but perhaps the reality of the consensus process makes that less
> reasonable; as others have pointed out, most other WGs don't really do
> anything comparable to quality testing.
>
> Likewise, making sure the outcome is as legally unencumbered as possible
> is also very important to me, but because of the vagaries of the process
> and the law, this isn't something that the working group itself makes
> promises about.
>
> So, perhaps it makes sense for the working group to not make any quality
> promises in the same way it makes no promises about patents.
>
> It seems clear enough to me now that we can much more easily come to
> consensus about achieving good-enough status than about formal testing
> gates and requirements.
>
> We should accept your suggestion—drop all the comparative quality
> requirements from the requirements draft, and stop discussing comparative
> quality here—and then make some progress on technology, rather than
> continue bickering about details where we are not going to come to
> consensus.
>
> The market can figure out the comparative quality question on its own.