Re: [openpgp] Followup on fingerprints

ianG <> Sun, 09 August 2015 16:23 UTC


Hi Bill,

On 9/08/2015 16:49 pm, Bill Frantz wrote:
> I am more and more convinced of the wisdom of Alan Karp when he insists
> that any system which uses a hash must specify what happens when there
> is a hash collision. He points out that anytime data longer than the
> hash output is hashed, there is the possibility of a collision, which is
> true when calculating key fingerprints.
>
> Now the consequences may be severe or trivial. If a PGP message routing
> application uses the fingerprint to select the destination, the
> consequence of a collision may be as trivial as routing messages to
> recipients who can't decrypt them, or the more serious consequence of
> not sending messages to the recipient who can decrypt them. The exercise
> of figuring out what will happen results in better design.
>
> There has also been an undertone of, "If we can't come up with an
> attack, there aren't any." in this thread.

I disagree with that characterisation.  I would rather see it like this:

       If we can't come up with an attack,
       then we should not defend against it.
       We should *accept the risk*.

1. It is likely the attacker won't come up with one either.

2. If the attack-space is theoretical or exotic (e.g. quantum), the 
likelihood of it developing into a practical attack is low and/or slow. 
Then, waiting for more information is the smarter thing to do, unless 
there is a cheap, easy fix.

3. When an algorithm is designed to be very strong, this gives us the 
ability to lean on it and rely on it entirely.  This delivers benefits 
in code simplicity, and frees up resources to concentrate on what we do 
know is likely to break and is hurting our users.

The one thing we can say about hashes is that they are darn strong. 
They've never really failed us.  The only black mark was MD5, and then 
only because some projects were negligent and didn't move off it [0]. 
Lean on hashes.

4. How can you defend against an attack you can't come up with?

5. And finally, concentrating on attacks that we imagine might happen in 
the lab, but can't build even a PoC for, leads to isolation and myopia. 
We are in the business of serving our users, and they have plenty of 
things harming them right now.  Let's concentrate on delivering what 
addresses their pain, not what confuses us.

(Although I agree that the risk-analysis methodology -- which explicitly 
includes the willingness to accept a risk -- has not found widespread 
favour in our community :)

> I find this attitude very
> dangerous as new classes of attacks (e.g. power analysis) are constantly
> being discovered.

1. When that information develops, we can "come up with the attack." 
This stops us drifting into lab myopia.

2. How is power analysis going to affect a hash?  Surely, if I can do 
power analysis, I'm not that concerned about the hash; I'd rather suck 
out the key bits.  OK, I know it was only an example... but it is an 
example of "not an attack."

> I would suggest wording in the security considerations section something
> like:
> "During the design process, any application using key fingerprints
> SHOULD characterize the consequences of a fingerprint collision on the
> application's security and implementation integrity, particularly when
> using fewer bits than the output of the fingerprint hash."

If we're talking about a mid-range published key fingerprint of, say, 
100 bits, then there is certainly the possibility of collisions, and 
perhaps preimages, in the future.

But we have a built-in mechanism for that already: increase to 150, 
then 200 bits.  So how about:

"During the design process of any application using shortened key 
fingerprints, attention should be paid to a recovery strategy in the 
event that the shortened fingerprint becomes subject to collisions or 
preimage attacks."
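That recovery strategy works because a truncated fingerprint is just a 
prefix of the full digest, so lengthening it never invalidates shorter 
references.  A sketch of the idea, assuming SHA-256 as the fingerprint 
hash (illustrative only; real OpenPGP fingerprints are computed over a 
specific serialization of the key packet, and `truncated_fingerprint` 
is a hypothetical helper):

```python
import hashlib

def truncated_fingerprint(pubkey: bytes, bits: int) -> str:
    """Return the leading `bits` bits of a SHA-256 digest as hex.
    Restricted to nibble boundaries to keep the hex prefix exact."""
    if bits % 4 != 0 or not 0 < bits <= 256:
        raise ValueError("bits must be a positive multiple of 4, up to 256")
    return hashlib.sha256(pubkey).hexdigest()[: bits // 4]

key = b"example public key material"
for bits in (100, 152, 200):  # the 100 -> ~150 -> 200 upgrade path
    print(bits, truncated_fingerprint(key, bits))
```

Because each longer fingerprint extends the shorter one, an application 
can compare on whatever prefix length both sides support and ratchet 
upward over time.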

And at full hash-length in the fingerprint I'd be silent in the 
document.  Accept the risk.  Lean on the full SHAx.  It's been the most 
solid rock in cryptography so far.


[0] MD5 should have been deprecated in the 1993-1995 timescale, and 
SHA1 in the 2000-2004 timescale, if you are at risk of collision 
attacks.