Re: [TLS] Another IRINA bug in TLS

Dan Brown <> Thu, 21 May 2015 20:45 UTC

From: Dan Brown <>
Date: Thu, 21 May 2015 20:45:04 +0000
Subject: Re: [TLS] Another IRINA bug in TLS

> -----Original Message-----
> From: Watson Ladd []
> On Thu, May 21, 2015 at 12:51 PM, Dan Brown <>
> wrote:
> > In the Imperfect Forward Secrecy paper, Section 5 Recommendations, the
> > medium term recommendation to avoid fixed-prime DHE groups seems to
> > conflict with the current direction TLS is taking towards named only 
> > groups.
> Is that *really* what the recommendation says? It's true that using the same
> group repeatedly enables an attacker who does an L(1/3) calculation to
> quickly compute all discrete logarithms together. But instead of making
> countermeasures to this, it's far easier to ensure that the price of this
> precomputation is large, by using larger groups.

[DB] That's how I read their medium term recommendation.  I agree with them, 
and you, that larger groups should be the first defense (they call it short 
term).  I guess that by "medium term", they mean to apply it in addition to 
the short term, perhaps at a later date.  I think such countermeasures should 
be optional.

It probably does make sense to postpone this discussion, with other priorities 
taking precedence.

This paper seems to demonstrate that this well-known fact of pre-computation,
which perhaps should have been anticipated, could potentially have real-world
consequences.  Past deployments of 1024-bit DH (and smaller) seem not to have
anticipated this risk adequately.  So, it does illustrate this issue here.

> >
> > Choosing a larger key size is the most efficient and simple remedy
> > against these pre-computations.
> Exactly. No need to proceed further.

[DB] The imperfect forward secrecy paper cites Semaev's example of DHE groups 
with easier-than-average discrete logs as a reason not to rely entirely on 
fixed-prime groups.  The current FFDHE I-D derives its primes from the digits 
of e to provide a nothing-up-my-sleeve (NUMS) property, so this seems like a 
concern that has already been considered.  It seems to me almost certain that 
the current I-D does indeed achieve the NUMS property, much like the Brainpool 
curves do for ECC.  So, this particular risk seems minute, but it can be hard 
to shake off suspicions.
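To make the NUMS idea concrete: if memory serves, the I-D's primes have the
shape p = 2^b - 2^(b-64) - 1 + 2^64 * (floor(2^(b-130) * e) + X), with X the
smallest value making p a safe prime.  Below is a toy analogue at a
deliberately small 160-bit size; the function names, the reduced size, and
the details are an illustrative sketch, not the draft's actual parameters.

```python
from fractions import Fraction

def floor_e_times(nbits):
    """floor(2**nbits * e), computed exactly from the series e = sum 1/k!."""
    s, term = Fraction(0), Fraction(1)
    for k in range(1, 120):          # 120 terms: error far below 2**-nbits here
        s += term
        term /= k
    return (s.numerator << nbits) // s.denominator

def is_probable_prime(n):
    """Miller-Rabin with fixed small bases (probabilistic for very large n)."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for b in bases:
        if n % b == 0:
            return n == b
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def nums_safe_prime(bits):
    """Smallest X >= 0 such that
       p = 2^bits - 2^(bits-64) - 1 + 2^64 * (floor(2^(bits-130) * e) + X)
    is a safe prime.  The top and bottom 64 bits of p are forced to all-ones;
    the middle bits come from the binary expansion of e, so nobody could have
    hidden a trapdoor choice in them."""
    base = (1 << bits) - (1 << (bits - 64)) - 1
    mid = floor_e_times(bits - 130)
    x = 0
    while True:
        p = base + ((mid + x) << 64)
        if is_probable_prime(p) and is_probable_prime((p - 1) // 2):
            return p, x
        x += 1
```

At real sizes (2048 bits and up) the same search applies, only the safe-prime
hunt takes far longer.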

> >
> > Allowing custom groups provides a second defense against such
> > pre-computation, but supporting custom groups creates other problems.
> > Peers need to verify the custom groups, and the groups need to be
> > generated.  The computation and communication spent on verifying
> > custom groups could perhaps be better spent on using larger fixed groups.
> You've just explained why I'm against it, better than I could. So what
> actually is the reason you support it?
[DB] Well, as I said, it's only a second line of defense.  I would only 
support it as an option for those who can afford the extra cost: the reasons 
to use it are not enough to mandate this countermeasure.
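The verification cost mentioned in the quoted text can be sketched, under the
assumption that a custom group would be a safe-prime group; the function name
and the min_bits floor here are hypothetical, not from any draft.

```python
def is_probable_prime(n):
    """Miller-Rabin with fixed small bases (probabilistic for very large n)."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for b in bases:
        if n % b == 0:
            return n == b
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def validate_custom_group(p, g, min_bits=2048):
    """Accept a peer-supplied (p, g) only if p is a (probable) safe prime of
    acceptable size and g generates the prime-order subgroup."""
    if p.bit_length() < min_bits:
        return False
    q = (p - 1) // 2
    if not (is_probable_prime(p) and is_probable_prime(q)):
        return False
    if not (2 <= g <= p - 2):       # excludes the order-1 and order-2 elements
        return False
    return pow(g, q, p) == 1        # q prime, so this forces g to have order q
```

The two Miller-Rabin tests plus the subgroup exponentiation are the per-group
computation that, as noted above, might instead be spent on a larger fixed
group.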

Here's an unlikely situation in which it may help thwart mass surveillance 
(rather than help a targeted individual or session).

The trend in factoring and FFDLP algorithms is that new algorithms are only 
faster than old algorithms above some key size.  In other words, a plot of the 
new FFDLP algorithm's runtime against keysize rises sharply then flattens out, 
eventually undercutting the older algorithms.  If this trend continues, then a 
future (or secret) attack may be flatter than the current best known FFDLP 
algorithms, in which case the gain from larger keysize is not as much as 
expected by looking at the published FFDLP algorithms.  (Of course, the best 
countermeasure to this small risk would be ECDHE!)  If all goes well, this 
flatter attack will never (has never) happen(ed).  But if it does happen, and 
everybody has used the same larger group, the extra effort may not help a 
non-targeted user as much as if users had been spread across many smaller, 
different groups.
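The "rises sharply then flattens" shape can be made concrete with the usual
subexponential L-notation, L_N[a, c] = exp(c (ln N)^a (ln ln N)^(1-a)).  A
minimal sketch: the constants below are the textbook ones for a QS-like
L[1/2, 1] algorithm and a GNFS-like L[1/3, (64/9)^(1/3)] algorithm, and the
crossover size it finds is heuristic, not a hard prediction.

```python
import math

def L(bits, alpha, c):
    """Heuristic L-notation cost L_N[alpha, c] for a modulus N of `bits` bits."""
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** alpha * math.log(ln_n) ** (1 - alpha))

GNFS_C = (64 / 9) ** (1 / 3)   # ~1.923, the GNFS exponent constant

def cheaper_attack(bits):
    """Which asymptotic family is cheaper at this key size?"""
    return "L[1/3]" if L(bits, 1 / 3, GNFS_C) < L(bits, 1 / 2, 1.0) else "L[1/2]"

# The newer, asymptotically flatter L[1/3] algorithm only wins above a
# crossover size; below it, the older L[1/2] algorithm is still cheaper.
crossover = next(b for b in range(64, 4096) if cheaper_attack(b) == "L[1/3]")
```

This is exactly the pattern in the paragraph above: a hypothetical future
algorithm with a still-flatter curve would again beat the current best only
past some size, eroding part of the expected gain from a larger key.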

The risk mitigations provided by increased group size versus diversification 
of groups are rather different in quality, and are thus hard to compare.  But 
users more interested in avoiding mass surveillance than targeted attacks 
might prefer to spend their spare computational power accordingly.

> >
> > My preference is to recommend larger, vetted, fixed groups (e.g. at
> > least 2048-bit DH and 256-ish-bit ECDHE) just as TLS and CFRG are
> > currently doing, but to still keep custom groups (of similar size) as
> > an option, perhaps to be added later after some further discussion
> > about the best way to specify custom groups.
> Why do custom groups help? We've seen how they can hurt, by complicating
> implementations, making possible cross-protocol attacks, and other kinds of
> nastiness.

[DB] Yes, custom groups have led to attacks, but that seems to be due mainly 
to fumbles in encoding them unambiguously.  I hope that once the dust settles 
a bit, and TLS puts more effort into *DHE instead of RSA key exchange, such 
ambiguity can be resolved.  In other words, any proposal for custom groups 
should be done quite carefully.

> You're asking that we adopt them so that an attacker who does a
> massive calculation gets only one useful result, instead of having to
> repeat a calculation they already did a few times. How many months of
> extra time does this get us?

[DB] Everybody sharing a group creates a large incentive and a single target 
for an attacker.  If the computational power of the attacker tops out at some 
value (i.e. at some max bit-ops per dollar), and we've chosen a group size 
just below this top-power attack, then the difference is whether that attacker 
can compromise a small targeted set of users of one group, versus conduct 
mass surveillance of nearly all users.  To make it concrete: if the group size 
tops out the attack at two months per group, then group diversity buys a 
non-targeted (mass-surveilled) user a number of months equal to the number of 
groups that have been used.
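The arithmetic above, as a toy model; the two-month figure and the uniform
spread of users across groups are assumptions for illustration only.

```python
def months_to_surveil_all(groups, months_per_group=2.0):
    """Toy model: the precomputation must be redone once per distinct group."""
    return groups * months_per_group

def fraction_compromised(budget_months, groups, months_per_group=2.0):
    """Fraction of users (spread uniformly over the groups) whose group the
    attacker has precomputed within the given budget."""
    broken = min(groups, int(budget_months // months_per_group))
    return broken / groups
```

With one shared group, a two-month budget covers everyone; with a thousand
groups in use, that same budget covers only a thousandth of the users.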

Again, I emphasize that this would only be a second line of defense: the 
first line is choosing the right size of group.