Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

Olafur Gudmundsson <> Wed, 02 April 2014 02:38 UTC

From: Olafur Gudmundsson <>
Date: Tue, 1 Apr 2014 22:37:54 -0400
To: Nicholas Weaver <>
Cc:, Matthäus Wander <>, Bill Woodcock <>
Subject: Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

On Apr 1, 2014, at 9:05 AM, Nicholas Weaver <> wrote:

> On Apr 1, 2014, at 5:39 AM, Olafur Gudmundsson <> wrote:
>> Doing these big jumps is the wrong thing to do, increasing the key size increases three things:
>> 	time to generate signatures  
>> 	bits on the wire
>> 	verification time. 
>> I care more about verification time than bits on the wire (as I think that is a red herring).
>> Signing time increase is a self-inflicted wound, so that is immaterial. 
>>                 sign    verify    sign/s verify/s
>> rsa 1024 bits 0.000256s 0.000016s   3902.8  62233.2
>> rsa 2048 bits 0.001722s 0.000053s    580.7  18852.8
>> rsa 4096 bits 0.012506s 0.000199s     80.0   5016.8
>> Thus doubling the key size decreases the verification performance by roughly 70%. 
>> KSK verification times affect the time to traverse the DNS tree, thus: 
>> If 1024 is too short, 1280 is fine for now. 
>> If 2048 is too short, a 2400-bit key is much harder to break, so it should be fine. 
>> Just a plea for key-use policy sanity, not picking on Bill in any way.
> NO!  FUCK THAT SHIT.  Seriously.

Watch your language; just because I'm calling you on the carpet for a simplistic world view does not mean you need to use foul language. 

> There is far far far too much worrying about "performance" of crypto, in cases like this where the performance just doesn't matter!

I disagree strongly; you are only looking at part of the whole picture. 
Verification adds resolution latency, and verification adds extra queries, which means more latency:
	latency == unhappy eyeballs.  

> Yes, you can only do 18K verifies per CPU per second for 2048b keys.  Cry me a river.  Bite the bullet, go to 2048 bits NOW, especially since the servers do NOT have resistance to roll-back-the-clock attacks.

Why not go to a good ECC algorithm instead? (Not sure which one, but not P-256 or P-384.) 

18K answers/second is a fraction of what larger resolver servers do today during peak times; granted, not all answers need validation.
BUT you need to take into account that in some cases there is going to be lots of redundancy in verification in large resolver clusters:
if your query stream hits 5 different hosts, all of them may end up doing almost 5x the work, so adding servers does not scale. 
Yes, people can create anycast clusters in depth where only the front-end hosts do verification and the back-end ones only answer queries, but
that has different implications. 
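To make the cluster-redundancy point concrete, here is a minimal sketch (the host count, name count, and round-robin spreading are illustrative assumptions, not a model of any particular resolver): with per-host validation caches, the same signature gets validated once on every host it lands on, while a shared cache validates it once for the whole cluster.

```python
import random

def validations(queries, hosts, shared_cache=False):
    """Count cache-miss signature validations for a query stream.

    Queries are spread round-robin over `hosts`; with per-host caches
    the same name can be validated independently on every host.
    """
    caches = [set()] if shared_cache else [set() for _ in range(hosts)]
    total = 0
    for i, name in enumerate(queries):
        cache = caches[0] if shared_cache else caches[i % hosts]
        if name not in cache:
            total += 1        # cache miss: one signature validation
            cache.add(name)   # cached: no revalidation on this host
    return total

random.seed(1)
stream = [random.randrange(100) for _ in range(10_000)]  # 100 popular names
per_host = validations(stream, hosts=5)
shared = validations(stream, hosts=5, shared_cache=True)
print(per_host, shared)  # per-host caches do roughly 5x the validations
```

The 5x factor is exactly the "adding servers does not scale" effect: each additional independent cache re-does the warm-up work.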

Remember, it is not average load that matters, it is peak load, even if the peak lasts only 30 minutes on one day of the year. 

Over the years I have been saying: use keys that are appropriate. For someone like PayPal it makes sense to have strong keys,
but for my private domain does it matter what key size I use? 
I do not buy the one-size-fits-all model for crypto. People should not use unsafe crypto, but one size fits all is not the
right answer, just like not every zone needs a KSK and ZSK split. (I use a single 1280-bit RSA key with NSEC.) 

A real-world analogy: not everyone needs the same kind of car; some people need big cars, others small ones, or even no car. 

Furthermore, using larger keys than your parent zone is nonsensical, as that just moves the cheapest point of a key-cracking attack to the parent. 
> In a major cluster validating recursive resolver, like what Comcast runs with Nominum or Google uses with Public DNS, the question is not how many verifies it can do per second per CPU core, but how many verifies it needs to do per second per CPU core.

I have no doubt that CPUs can keep up, but the point I was trying to make is that increasing the key sizes by this big a jump
invalidates people's assumptions about what the load is going to be in the near term. 

> And at the same time, this is a problem we already know how to parallelize, and which is obscenely parallel, and which also caches…

Do we? Some high-performance DNS software is still single-threaded, and many resolvers run in VMs with a low number of cores 
exported to the VM. 

> Lets assume a typical day of 1 billion external lookups for a major ISP centralized resolver, and that all are verified.  Thats less 1 CPU core-day to validate every DNSSEC lookup that day at 2048b keys.  

1B is low, due to low TTLs and the synthetic names used for transactions, and as I said before, it is peak load that matters, not average. 
DNSSEC processing is just one part of the whole processing model. 
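As a sanity check on the core-day arithmetic, a quick sketch using the single-core verify rates from the openssl-speed table quoted earlier in the thread (the billion-lookup figure is Weaver's assumption; this deliberately ignores peaks, cluster redundancy, and all non-DNSSEC work):

```python
# verifies/second on one core, from the quoted openssl speed output
VERIFIES_PER_SEC = {1024: 62233.2, 2048: 18852.8, 4096: 5016.8}

def core_days(verifies, key_bits):
    """CPU core-days of pure signature verification, average-rate only."""
    return verifies / VERIFIES_PER_SEC[key_bits] / 86_400  # 86400 s/day

for bits in sorted(VERIFIES_PER_SEC):
    print(f"rsa {bits}: {core_days(1e9, bits):.2f} core-days per 1e9 verifies")
```

At 2048 bits the average-rate cost is indeed under one core-day per billion verifies, but the 4096-bit figure is over two core-days, and none of this accounts for the peak-to-average ratio.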

> And yeah, DNS is peaky, but that's also why this task is being run on a cluster already, and each cluster node has a lot of CPUs.

That costs money, and effort to operate.