Re: [TLS] Wrapping up cached info

Yoav Nir <> Thu, 20 May 2010 04:41 UTC

From: Yoav Nir <>
To: "'Stefan Santesson'" <>, "" <>, Joseph Salowey <>
Date: Thu, 20 May 2010 07:42:03 +0300
List-Id: "This is the mailing list for the Transport Layer Security working group of the IETF." <>

In my case the silence means a national holiday.

I still don't see why we need a secure cryptographic hash there as well. No one has yet outlined an attack, even in the presence of collisions.

Given this, I don't see any reason to move away from FNV-1a.
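For context, the contested hash is trivial to state. A minimal sketch of FNV-1a in Python, assuming the 64-bit parameters (the draft may pin a different width):

```python
def fnv1a_64(data: bytes) -> int:
    """64-bit FNV-1a: fast and simple, but not collision-resistant."""
    h = 0xcbf29ce484222325                 # FNV-1a 64-bit offset basis
    for byte in data:
        h ^= byte                          # XOR the octet in first (the "1a" ordering)
        h = (h * 0x100000001b3) % 2**64    # multiply by the 64-bit FNV prime, mod 2^64
    return h
```

The whole point of the debate is that this function is cheap but offers no collision resistance; the question on the table is whether that matters here.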

-----Original Message-----
From: [] On Behalf Of Stefan Santesson
Sent: Wednesday, May 19, 2010 4:43 PM
To: Stefan Santesson;; Joseph Salowey
Subject: Re: [TLS] Wrapping up cached info

Not sure how I should interpret the silence?

I guess it means that everyone agrees with me :)

To repeat briefly, here is what I suggest based on the latest traffic:

1) A client caching information also caches which hash algorithm was used to
calculate the finished message at the time of caching.

2) On repeated connections, the client indicates cached info by sending a
hash of the cached object using the cached hash algorithm. (No use of FNV.)

3) The server accepts by returning the received hash instead of the cached
object.
4) The only hash agility provided is that the client will send a hash
algorithm identifier with the hash.

5) The client MUST NOT send more than one hash per cached object, and MUST
use the cached hash algorithm.

This solves all issues raised (it securely binds the cached data to the
finished calculation) and removes the need for hash agility.
Syntactically it just requires adding a hash identifier and adjusting the
vector length for the hash data.
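As an illustration only (all names and data structures here are invented for the sketch; the real extension encodes this in TLS extension structures), the five steps might look like:

```python
import hashlib

# Hypothetical client-side cache: object type -> (hash algorithm id, object bytes)
CACHE = {}

def cache_object(obj_type: str, data: bytes, finished_hash_alg: str) -> None:
    # Step 1: at caching time, remember which hash algorithm the
    # Finished-message calculation used on that connection.
    CACHE[obj_type] = (finished_hash_alg, data)

def client_hello_entry(obj_type: str):
    # Steps 2, 4 and 5: send exactly one hash per cached object, computed
    # with the cached algorithm, together with the algorithm identifier.
    alg, data = CACHE[obj_type]
    return (alg, hashlib.new(alg, data).digest())

def server_accepts(entry, server_objects):
    # Step 3 (sketch): if the server recognizes the hash, it echoes it
    # back instead of resending the cached object; otherwise it declines.
    alg, digest = entry
    for data in server_objects:
        if hashlib.new(alg, data).digest() == digest:
            return digest
    return None
```

Note the asymmetry the proposal relies on: the hash algorithm is never negotiated in the extension itself, it is simply whatever the Finished computation used when the object was cached.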

So I basically wonder:
- Would this be acceptable?
- Who could NOT live with this solution?
- Who thinks it is worth the effort to agree on a better solution, and why?


On 10-05-18 12:24 AM, "Stefan Santesson" <> wrote:

> As Martin pointed out to me privately, the hash used in the finished
> calculation becomes known to the client only after receiving the server
> hello.
> It would therefore be natural for the client to use the hash function used
> to calculate the finished message at the time when the data was cached.
> The client would then indicate which hash algorithm it used and upon
> acceptance, the server will honor the request.
> /Stefan
> On 10-05-17 10:34 PM, "Stefan Santesson" <> wrote:
>> Guys,
>> Where I come from we have a saying: "don't cross the river to get to the water".
>> And to me this proposal to change the finished calculation is just that.
>> Look at it.
>> The proposal is to bind the cached data by adding a hash of the cached data
>> to the finished calculation.
>> The proposal is further to avoid hash agility by picking the hash algorithm
>> used by TLS's Finished message computation.
>> Now there are two ways to achieve this goal.
>> 1) The crossing the river to get water approach:  Exchange FNV hashes of the
>> cached data in the handshake protocol exchange and then inject hashes of the
>> same data into the finished calculation through an alteration of the TLS
>> protocol.
>> 2) The simple approach: Use the hash algorithm of the finished calculation
>> to hash the cached data (according to the current draft).
>> Alternative 2 securely binds the hashed data into the finished message
>> calculation without altering the algorithm.
>> Alternative 2 requires at most a hash algorithm identifier added to the
>> protocol, if at all. We don't need to add negotiation since we always use
>> the hash of the finished message calculation. Adding this identifier would
>> be the only change made to the current draft.
>> Alternative 2 doesn't require additional security analysis. If the hash used
>> to calculate the finished message is broken, then we are screwed anyway.
>> /Stefan
>> On 10-05-17 9:16 PM, "Martin Rex" <> wrote:
>>> Joseph Salowey wrote:
>>>> I agree with Uri, that if you determine you need SHA-256 then you should
>>>> plan for hash agility.  TLS 1.2 plans for hash agility.
>>>> What about Nico's proposal where a checksum is used to identify the
>>>> cached data and the actual handshake contains the actual data hashed
>>>> with the algorithm used in the PRF negotiated with the cipher suite?
>>>> This way we don't have to introduce hash agility into the extension, but
>>>> we have cryptographic hash agility where it matters in the Finished
>>>> computation.  Does it solve the problem?
>>> Yes, I think so.
>>> This approach should solve the issue at the technical level.
>>> Going into more detail, one would hash/MAC only the data that actually
>>> got replaced in the handshake, each item prefixed by a (locally computed)
>>> length field.
>>> -Martin
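Martin's closing detail, prefixing each replaced item with a length field before hashing, can be sketched as follows; the 4-byte big-endian prefix and the function name are assumptions for illustration, not taken from any draft:

```python
import hashlib
import struct

def hash_replaced_items(hash_alg: str, replaced_items) -> bytes:
    # Hash only the handshake data that actually got replaced by cached
    # references. Each item is prefixed with a locally computed length
    # field so that item boundaries cannot be confused.
    h = hashlib.new(hash_alg)
    for item in replaced_items:
        h.update(struct.pack(">I", len(item)))  # 4-byte big-endian length (assumed encoding)
        h.update(item)
    return h.digest()
```

The length prefix is what does the work here: [b"ab", b"c"] and [b"a", b"bc"] concatenate to the same bytes, but hash differently once each item carries its own length.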