Re: [TLS] Another INRIA bug in TLS

Daniel Kahn Gillmor <dkg@fifthhorseman.net> Fri, 22 May 2015 20:55 UTC

From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
To: tls@ietf.org
In-Reply-To: <1432317148442.5357@microsoft.com>
References: <9A043F3CF02CD34C8E74AC1594475C73AB029727@uxcn10-tdc05.UoA.auckland.ac.nz> <1432317148442.5357@microsoft.com>
User-Agent: Notmuch/0.20~rc1 (http://notmuchmail.org) Emacs/24.4.1 (x86_64-pc-linux-gnu)
Date: Fri, 22 May 2015 16:55:29 -0400
Message-ID: <87pp5snxha.fsf@alice.fifthhorseman.net>
Archived-At: <http://mailarchive.ietf.org/arch/msg/tls/MqS-HybeBrN6_bqrunQEmjVIMhk>
Subject: Re: [TLS] Another INRIA bug in TLS

[ i fixed typo in the Subject line because it bothered me; sorry if that
  kills threading in anyone's MUA ]

On Fri 2015-05-22 13:52:29 -0400, Santiago Zanella-Beguelin wrote:
>>>A miTLS client maintains a table of known trusted parameters, including the
>>>subgroup order for parameters with non-safe primes. When receiving unknown
>>>parameters from a server, it tests the primality of p and (p-1)/2 to check if
>>>p is a safe prime (and caches a positive result in the table).
>> 
>>So you do a full primality test (Miller-Rabin or whatever) for each connection?
>
> No. We do primality tests only if the prime isn't in the cache. That happens the
> first time the client connects to a server, or when the server refreshes its
> parameters. That is, assuming the parameters aren't already in the pre-populated
> cache.
>
>>Doesn't that make it awfully slow?
>
> No. The typical cost of validation is just one table lookup. This works
> extremely well when people use just a few thousand parameters and don't change
> them often. Even if that changes in light of Logjam (I hope it does!), online
> primality tests for every connection are not prohibitive. I did a quick test
> using Sage and the stock is_pseudoprime: testing if a 2048-bit prime is safe
> takes ~180ms on a laptop, while computing two DH operations (exponentiations)
> takes ~17ms. The amortized cost using a cache is of course much lower.
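(If anyone wants to reproduce that measurement without Sage, a
bare-bones version in plain Python looks something like the following;
the function names and the round count are mine, not miTLS's:)

```python
import random

def is_probable_prime(n, rounds=100):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    # trial division by a few small primes first
    for q in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % q == 0:
            return n == q
    # write n - 1 as 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness of compositeness
    return True

def is_safe_prime(p):
    """p is a safe prime iff p and (p-1)/2 are both prime."""
    return p % 2 == 1 and is_probable_prime(p) \
        and is_probable_prime((p - 1) // 2)
```

For a 2048-bit candidate the (p-1)/2 test dominates, which is where
most of that ~180ms goes.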

I'm considering adding a Miller-Rabin primality check for the group
modulus to negotiated-ff-dhe's "Local Policy for Custom Groups" section:

 https://tools.ietf.org/html/draft-ietf-tls-negotiated-ff-dhe-09#section-3.1

I'm also considering adding a suggestion for checking that the modulus
is a safe prime, if it is not already known to the client to be an
acceptable modulus.  What do others think about either of these
additions?

----

I see how the idea of every server choosing its own groups makes sense
from a crypto-theory point of view, and if everyone does it right, it
would indeed probably be the best situation.

I have concerns about it from a practical perspective, though.

A client encountering a novel group now has to make a choice about
whether to perform some tests on the group (as suggested above) or to
proceed without testing (hoping that the server picked the right
parameters).  Since TLS clients are in tension between the demands of
efficiency (make connections faster and cheaper for your user) and
security (ensure the integrity and confidentiality of your user's
communications), many will be tempted to skip the extra checks, since if
the remote servers are well-configured, the check should be unnecessary.
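Concretely, the choice a client faces looks something like this
(hypothetical names throughout; the cache-then-test structure is the
miTLS approach described above, and the Miller-Rabin helper is inlined
just so the sketch stands on its own):

```python
import random

def _mr_prime(n, rounds=40):
    # bare-bones Miller-Rabin, just enough for the sketch
    if n < 2:
        return False
    for q in (2, 3, 5, 7, 11, 13):
        if n % q == 0:
            return n == q
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# hypothetical pre-populated cache of vetted groups, keyed by (p, g)
KNOWN_GOOD = {}

def accept_dh_params(p, g, cache=KNOWN_GOOD):
    """Cheap table lookup on the common path; the expensive
    safe-prime test runs only when a novel group shows up."""
    if (p, g) in cache:
        return True
    if not (2 <= g <= p - 2):
        return False
    if not (_mr_prime(p) and _mr_prime((p - 1) // 2)):
        return False          # not a safe prime: reject the group
    cache[(p, g)] = True      # amortize: remember the positive result
    return True
```

The tempting shortcut is obvious: drop the primality branch and the
handshake still works, and nothing visibly breaks until a server offers
a bad group.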

Furthermore, generating a large safe-prime group is expensive and slow,
which puts it out of reach of many devices that act as TLS servers; in
practice, that means we're delegating the job of prime selection to
implementors or software distributors.

As a case in point, the SSH protocol supports custom moduli, but i
believe most SSH servers ship with a set of default moduli (e.g. the
OpenSSH project ships a default set of moduli), and everyone just uses
those.  In the OpenSSH case, the upstream moduli appear to be generated,
tested and found to be safe primes on the basis of 100 rounds of
Miller-Rabin (see moduli(5) and /etc/ssh/moduli on a debian system, for
example), instead of generating proofs of primality (which are
significantly more expensive).
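(For scale, each Miller-Rabin round lets a worst-case composite survive
with probability at most 1/4, so 100 rounds push the error below
4^-100:)

```python
from fractions import Fraction

rounds = 100
# worst-case chance a composite passes every round
bound = Fraction(1, 4) ** rounds
print(f"error bound: 4^-{rounds} ~ {float(bound):.2e}")
```

That's far below any plausible hardware failure rate, which is why
screening with probable-prime tests is usually considered good enough
in practice.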

Yesterday, OpenSSH updated the set of moduli they'll be distributing for
the first time in 3 years [0] (this is better than the Oakley groups,
which haven't been updated!).  The new ones were also tested on the
basis of 100 rounds of Miller-Rabin.  There is also discussion on the
openssh-unix-dev mailing list that different distributions might
consider shipping different sets of moduli, though this could
potentially introduce a distro-fingerprinting attack in places where
such an attack might not already exist.

We need protocol designs that simplify things for implementors and
deployers, and that remove the need for expensive security checks that
might be skipped for efficiency reasons.  Sufficiently large known
groups seem to solve that problem, as long as we deprecate smaller
groups promptly.  I'm not sure that encouraging custom groups does,
even though it sounds like it would be the safer option when everyone
is playing by the best-practice guidelines.

      --dkg

[0] https://anongit.mindrot.org/openssh.git/commit/?id=8b02481143d75e91c49d1bfae0876ac1fbf9511a