Re: [saag] AD review of draft-iab-crypto-alg-agility-06

Viktor Dukhovni <> Wed, 02 September 2015 21:29 UTC


On Wed, Sep 02, 2015 at 04:48:38PM -0400, Russ Housley wrote:

> Pulling some thoughts from the things Kenny said earlier this week, I suggest:
>    Cryptographic algorithms age; they become weaker with time.  As new
>    cryptanalysis techniques are developed and computing capabilities
>    improve, the work required to break a particular cryptographic
>    algorithm will reduce, making an attack on the algorithm more
>    feasible for more attackers.  While it is unknown how cryptanalytic
>    attacks will evolve, it is certain that they will get better.  It is
>    unknown how much better they will become, or when the advances will
>    happen.  Protocol designers need to assume that advances in computing
>    power or advances in cryptanalytic techniques will eventually make any
>    algorithm obsolete.  For this reason, protocols need mechanisms to
>    migrate from one algorithm suite to another over time.

Works for me.
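As an aside, the kind of phased migration that paragraph calls for is easy to
sketch.  The snippet below is purely illustrative (suite names and statuses
are hypothetical, not from any registry): each suite carries a status, so an
implementation can first stop offering a weakening suite and only later stop
accepting it, avoiding a flag day:

```python
# Hypothetical suite table for phased algorithm-suite migration.
# Statuses: "preferred"/"acceptable" suites are offered and accepted;
# "deprecated" suites are no longer offered but still accepted, so
# peers that lag behind keep interoperating during the transition.
SUITES = {
    "AES256-GCM-SHA384": "preferred",
    "AES128-GCM-SHA256": "acceptable",
    "3DES-CBC-SHA":      "deprecated",
}

def offer():
    """Suites we advertise to peers: everything not yet deprecated."""
    return [s for s, status in SUITES.items() if status != "deprecated"]

def accept(suite):
    """Suites we still accept from a peer during the transition."""
    return suite in SUITES

assert "3DES-CBC-SHA" not in offer()
assert accept("3DES-CBC-SHA")   # still interoperates with laggards
assert not accept("RC4-SHA")    # fully removed suites are rejected
```

Removing a suite from the table entirely is the final step of the
migration, taken once telemetry shows no peer still needs it.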

> > I find section 2.5 too vague, not sure what point it is really
> > trying to make.
> I'm not sure how to handle this comment.  I guess the point that I am
> trying to make is that there can be too many levels of indirection and
> still preserve the integrity of the first part of the negotiation.

    2.5.  Cryptographic Key Management

       Traditionally, protocol designers have avoided more than one approach
       to key management because it makes the security analysis of the
       overall protocol more difficult.  When frameworks such as EAP and
       GSSAPI are employed, the key management is very flexible, often
       hiding many of the details from the application.  This results in
       protocols that support multiple key management approaches.  In fact,
       the key management approach itself may be negotiable, which creates a
       design challenge to protect the negotiation of the key management
       approach before it is used to produce cryptographic keys.

       Protocols can negotiate a key management approach, derive an initial
       cryptographic key, and then authenticate the negotiation.  However,
       if the authentication fails, the only recourse is to start the
       negotiation over from the beginning.

       Some environments will restrict the key management approaches by
       policy.  Such policies tend to improve interoperability within a
       particular environment, but they cause problems for individuals that
       need to work in multiple incompatible environments.

Reading section 2.5 again, and being well versed in both TLS and
GSSAPI (I commit code to both OpenSSL and Heimdal), I still have
no idea what it is saying.  In what sense is "key management" (which
for me means how keys are deployed and rotated) "negotiated" in
the protocol?

Is this talking about "Key Exchange" rather than "Key Management"?
Is the problem you have in mind that when, for example, negotiating
"GSSAPI" in SASL one might not know what that entails before deciding
to use GSSAPI over some other SASL mechanism?

Whatever this section is trying to say, I'm just not smart enough
to figure it out, even with the hint in this response.

> Merging with comments from others, the current text is:
>    Without clear mechanisms for algorithm and suite transition,
>    preserving interoperability becomes a difficult social problem.  For
>    example, consider web browsers.  Dropping support for an algorithm
>    suite can break connectivity to some web sites, and the browser
>    vendor will lose users by doing so.  This situation creates
>    incentives to support algorithm suites that would otherwise be
>    deprecated in order to preserve interoperability.

Works for me.

> > 	   Institutions, being large or dominate users within a large
> > 	   user base, can assist by coordinating the demise of an
> > 	   algorithm suite, making the rollover easier for their own
> > 	   users as well as others.
> > 
> >    Somehow the meaning of the above eludes me.  It needs a rewrite.
> The point is that big customers can help with the social part of the
> transition by putting pressure on their suppliers.  I'm not sure what part
> to change to make that more clear for you.

    s/rollover/transition/ and change the introductory clause:

    Dominant, or otherwise sufficiently large, market players can ...

> > Section 2,7:
> > 
> >       When selecting or negotiating a suite of cryptographic algorithms,
> >       the strength of each algorithm SHOULD be considered.  The algorithms
> >       in a suite SHOULD be roughly equal;
> > 
> >    s/roughly equal/comparably strong/ or (to really spell it out):
> >                   /have comparable best known attack work-factors/
> > 
> >    However if a particular element of a suite is believed stronger
> >    than the rest, we don't need to get too pedantic about that.
> >    Slightly lop-sided choices are OK if the stronger outlier is
> >    adequately fast, and weaker variants are not widely used.
> That was the point of "roughly".  Also, the second paragraph brings in the point about performance being a factor.
> How about this:
>    When selecting or negotiating a suite of cryptographic algorithms,
>    the strength of each algorithm SHOULD be considered.  The algorithms
>    in a suite SHOULD be roughly equal; however, the security service
>    provided by each algorithm in a particular context needs to be
>    considered when making the selection.  Algorithm strength needs to be
>    considered at the time a protocol is designed.  It also needs to be
>    considered at the time a protocol implementation is deployed and
>    configured.  Advice from experts is useful, but in reality, such
>    advice is often unavailable to system administrators that are
>    deploying a protocol implementation.  For this reason, protocol
>    designers SHOULD provide clear guidance to implementors, leading to
>    balanced options being available at the time of deployment.

I do not think the greater length makes it clearer, perhaps the
original shorter version will do.

> >   Overall I think this text is wrong to weasel out.  Failure to
> >   negotiate the DH parameter size has proved rather problematic
> >   in TLS, with servers needing to guess at universally interoperable
> >   prime bit length.  This is being addressed, with the DH groups
> >   extension now supporting standard prime groups as well as standard
> >   EC curves.  Underspecified algorithms MUST NOT be used.  Either
> >   fix the parameters, or negotiate them.
> You raise an important point.  I suggest:
>    Performance is always a factor in selecting cryptographic algorithms.
>    Performance and security need to be balanced.  Some algorithms offer
>    flexibility in their strength by adjusting the key size, number of
>    rounds, authentication tag size, prime group size, and so on.  For
>    example, TLS cipher suites include Diffie-Hellman or RSA without
>    specifying a particular public key length.  If the algorithm
>    identifier or suite identifier named a particular public key length,
>    migration to longer ones would be more difficult.  On the other hand,
>    inclusion of a public key length would make it easier to migrate away
>    from short ones when computational resources available to an attacker
>    dictate the need to do so.  The flexibility on asymmetric key length
>    has led to interoperability problems, and to avoid these problems in
>    the future any aspect of the algorithm not specified by the algorithm
>    identifier MUST be negotiated, including key size and parameters.

Better, but of course negotiating the strength of long-term public
keys is generally not possible; the server can't choose these on
the fly.  So the MUST is perhaps too strong.  Rather, protocol
designs SHOULD try to avoid unilateral choices of cryptographic
parameters to the extent possible.  Thus we should encourage
specification of a small set of explicit sizes or set of explicit
groups, ... and then negotiate their use.
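To make that last point concrete, here is a hypothetical sketch of such a
negotiation: a small fixed set of named groups (names loosely following the
TLS 1.3 supported_groups registry), with the server picking by its own
preference order and failing closed when there is no overlap.  The selection
logic is illustrative, not any real stack's:

```python
# Hypothetical sketch: negotiating a key-exchange group from a fixed,
# named set, rather than letting one side improvise a prime bit length.

# Server's supported groups, strongest (most preferred) first.
SERVER_GROUPS = ["x25519", "secp256r1", "ffdhe3072", "ffdhe2048"]

def select_group(client_offered):
    """Pick the first server-preferred group the client also offered.

    Returns None when there is no overlap, in which case the handshake
    must fail rather than fall back to an unnamed, unilaterally chosen
    parameter set.
    """
    client_set = set(client_offered)
    for group in SERVER_GROUPS:
        if group in client_set:
            return group
    return None

assert select_group(["ffdhe2048", "secp256r1"]) == "secp256r1"
assert select_group(["ffdhe1024"]) is None  # legacy-only client: refuse
```

Because every group in the set has explicit, vetted parameters, "migrating
to longer keys" reduces to adding a new name to the set and eventually
dropping the old one, with no guessing on either side.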