Re: [Cfrg] Publicly verifiable benchmarks

Michael Hamburg <> Fri, 10 October 2014 19:00 UTC

From: Michael Hamburg <>
Date: Fri, 10 Oct 2014 12:00:44 -0700
To: "D. J. Bernstein" <>
List-Id: Crypto Forum Research Group <>

I’ll add a +1 in favor of SUPERCOP as a better-than-all-the-others reproducible benchmarking system.  It’s a bit finicky, though; in particular, typing “sh do” and waiting a week isn’t the most convenient development cycle.  Furthermore, some of the benchmarking machines have quirks (Turbo Boost, mismatched cycle-counter frequency, etc.) which can make the data difficult to interpret and reproduce.

If you are considering submitting a system, I’d strongly recommend testing it in advance.  You should also make sure to install the compilers DJB uses (GCC 4.8.1 and Clang 3.2 on titan0, or GCC 4.6.3 and Clang 3.0 on h6sandy, for example) to make sure that your system compiles and runs passably well with those compilers.  I didn’t do this last time, which is (part of?) why the numbers from my own benchmarks do not match DJB’s numbers; see below.

> On Oct 10, 2014, at 12:18 AM, D. J. Bernstein <> wrote:
> Parkinson, Sean writes:
>> Making a decision on new elliptic curves based on data that hasn't
>> been corroborated by a 3rd party is bad practice.
> More than 1500 implementations of various cryptographic functions have
> been contributed to, and are publicly available as part of, the
> state-of-the-art SUPERCOP benchmarking framework:
> Practically all of the implementations are free to use, and many of them
> are in fact widely used. Most of the researchers producing speed records
> for cryptography are contributing their software to the same system.
> The eBACS benchmarking project systematically and reproducibly measures
> these implementations on more than 20 different microarchitectures:
> It's easy for other people to download and run the benchmarks on their
> favorite CPU, and to contribute the results to the central system, where
> detailed speed reports are posted publicly for other people to see and
> verify. eBACS has practically eliminated the measurement errors and
> disputes that plagued previous approaches to cryptographic benchmarking.
> eBASH, the hash component of eBACS, was mentioned 30 times in NIST's
> final report on the SHA-3 competition.
> As a concrete example, now that Mike has sent crypto_dh/ed448goldilocks
> in for benchmarking, eBACS is automatically filling lines into
> whenever machines finish benchmark runs: 529066 cycles on titan0, 689020
> cycles on hydra8, 757676 cycles on h6sandy, etc. These don't exactly
> confirm Mike's comparisons to the Sandy Bridge numbers that Microsoft
> claimed in, but they do seem
> adequate to support his point about ed448goldilocks hitting a sweet spot
> on the security/speed curve while Microsoft's design strategy
> compromises the security/speed tradeoff:
>   * ed448goldilocks isn't quite twice as fast as numsp512t1
>     (ed-512-mers): 757676 cycles vs. 1293000 cycles.
>   * ed448goldilocks is about 23% slower than numsp384t1 (ed-384-mers):
>     757676 cycles vs. 617000 cycles.
> Of course, if Mike or anyone else thinks that ed448goldilocks can be
> computed more efficiently, he's welcome to prove it by contributing a
> better implementation of that function to SUPERCOP, and then the
> benchmarks will be updated appropriately. He can also raise reasonable
> questions about the accuracy of Microsoft's claims; if Microsoft's
> numbers are actually correct then Microsoft can dispel the skepticism
> by contributing their own code to SUPERCOP.

For the next SUPERCOP release, please pull the latest BAT from

The posted build has a bug which prevents compilation with older 64-bit Clangs, such as Clang 3.2 on titan0 and Clang 3.0 on h6sandy.

So for example on ProteusMachine (Haswell Core i7-4790 CPU @ 3.60GHz, HT and TB disabled) with supercop-fastbuild's clang_-g_-O3_-march=native_-mtune=native_-fomit-frame-pointer 4.2.1_Compatible_Ubuntu_Clang_3.5_(trunk), the timings are 160431 keypair cycles and 479118 dh cycles.

— Mike

> As a more detailed example of reproducibility, let's look at what the
> benchmarks say about X25519 on Haswell. Checking
> we see a median of 145907 cycles (quartiles: 144894 and 147191 cycles)
> for the crypto_dh/curve25519 software on an Intel Xeon E3-1275 V3.
> Clicking on "titan0" shows more information: the best speeds found for
> crypto_scalarmult/curve25519 on this machine used
>   gcc-4.8.1 -m64 -O -march=native -mtune=native -fomit-frame-pointer
> to compile the "amd64-51" implementation. Anyone can use the same free
> implementation with the same free compiler and will obtain the same
> compiled code running in the same number of Haswell cycles:
>   wget
>   tar -xf supercop-20140924.tar.bz2
>   cd supercop-20140924
>   # compile and measure everything: nohup sh data-do &
>   # alternatively, extract X25519 as follows:
>   mkdir x25519
>   cp measure-anything.c x25519
>   cp crypto_scalarmult/measure.c x25519
>   cp crypto_scalarmult/curve25519/amd64-51/* x25519
>   cp include/randombytes.h x25519
>   cp cpucycles/amd64cpuinfo.h x25519/cpucycles.h
>   cp cpucycles/amd64cpuinfo.c x25519/cpucycles.c
>   cp cpucycles/osfreq.c x25519/osfreq.c
>   cd x25519
>   ( sed s/CRYPTO_/crypto_scalarmult_/ < api.h
>     echo '#define crypto_scalarmult_IMPLEMENTATION "amd64-51"'
>     echo '#define crypto_scalarmult_VERSION "-"'
>   ) > crypto_scalarmult.h
>   echo 'static const char cpuid[] = {0};' > cpuid.h
>   gcc -m64 -O -march=native -mtune=native -fomit-frame-pointer \
>   -D COMPILER='"gcc"' \
>   -D LOOPS=1 \
>   -o measure measure-anything.c measure.c cpucycles.c \
>   mont* fe*.c *.s
>   ./measure
> For example, on one core of Andrey's 3.4GHz i7-4770 (Haswell), this
> X25519 code will take the same ~146000 cycles: i.e., more than 23000
> operations/second, whereas the latest Haswell-optimized OpenSSL NIST
> P-256 ECDH code that he measured was only 15000 operations/second.
> This is, by the way, rather old Curve25519 code optimized for Nehalem,
> the microarchitecture of the first Core i7 CPUs in 2008---but on Intel's
> latest Haswell CPUs it's still solidly beating NIST P-256 code that's
> optimized for Haswell. There's ample literature explaining that
>   * reductions mod 2^255-19 are faster than reductions mod
>     2^256-2^224+2^192+2^96-1 on a broad range of platforms, and that
>   * Montgomery scalarmult is faster than Weierstrass scalarmult,
> so the performance gap is unsurprising.
> Why did Andrey report only 17289 operations/second for X25519 on
> Haswell? The answer, in a nutshell, is that there's an active ecosystem
> of Curve25519/X25519/Ed25519 implementations, and it's easy to find
> implementations that prioritize simplicity over speed---including the
> one implementation included in Andrey's manual benchmarks. Of course,
> any application developer who needs more speed will look for, and find,
> the faster X25519 implementations.
> ---Dan
> _______________________________________________
> Cfrg mailing list