Re: [TLS] [POSSIBLE SPAM] Re: Collisions (Re: Consensus Call: FNV vs SHA1)

"Kemp, David P." <> Wed, 12 May 2010 17:50 UTC

Date: Wed, 12 May 2010 13:25:48 -0400
From: "Kemp, David P." <>
To: <>
Subject: Re: [TLS] [POSSIBLE SPAM] Re: Collisions (Re: Consensus Call: FNV vs SHA1)

It's not obvious that cache substitution could not be integrated cleanly
into the TLS record layer.  A hypothetical
CompressionMethod.CachedObject algorithm would know nothing about the
specific types of objects being cached; it would simply provide a
register-object method that creates an equivalence between an opaque
object-id string and an opaque object-content string.

Higher-level code that knows about TAs, certificates, etc. would hash
the objects it has cached and register them with the compression
algorithm at application startup, and from that point on the record
layer would match and compress/expand any registered objects
encountered in the data stream (with escaping as necessary).  This
should all be invisible to the handshake layer; I can't think of a
reason why it would require "retroactive" data modification.  It might
require modification of the TLS spec, since currently a compression
algorithm is not permitted to expand the content length by more than
1024 bytes and the best cached objects are larger than that.
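A rough sketch of what such an interface might look like.  Everything
here is hypothetical (the method names, the two-byte marker, the
truncated-hash object-id); a real design would need proper escaping of
marker bytes in the payload, as noted above.

```python
import hashlib

class CachedObjectCompressor:
    """Sketch of the hypothetical CompressionMethod.CachedObject:
    an equivalence between opaque object-id strings and opaque
    object-content strings, applied to the record-layer byte stream.
    It knows nothing about what the objects are."""

    # Assumed sentinel prefixing a substituted object-id; a real
    # algorithm would also escape any literal occurrence of it.
    MARKER = b"\xff\x00"

    def __init__(self):
        self._by_id = {}       # object-id -> content
        self._by_content = {}  # content -> object-id

    def register_object(self, content: bytes) -> bytes:
        """Register an object at application startup; the id here is
        a truncated hash of the content (an assumption)."""
        obj_id = hashlib.sha256(content).digest()[:8]
        self._by_id[obj_id] = content
        self._by_content[content] = obj_id
        return obj_id

    def compress(self, data: bytes) -> bytes:
        # Replace each registered object's content with MARKER + id.
        for content, obj_id in self._by_content.items():
            data = data.replace(content, self.MARKER + obj_id)
        return data

    def expand(self, data: bytes) -> bytes:
        # Inverse substitution on the receiving side.
        for obj_id, content in self._by_id.items():
            data = data.replace(self.MARKER + obj_id, content)
        return data
```

The point of the sketch is that registration is the only part that
knows about certificates or TAs; compress/expand operate on opaque
bytes.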

Of course this is just a napkin sketch; it certainly might turn out to
be impractical to implement.  But I don't see any big impediments yet.


-----Original Message-----
From: Marsh Ray [] 
Sent: Tuesday, May 11, 2010 6:02 PM
To: Kemp, David P.
Subject: Re: [TLS] [POSSIBLE SPAM] Re: Collisions (Re: Consensus Call:
FNV vs SHA1)
Importance: Low

On 5/11/2010 4:16 PM, Kemp, David P. wrote:
> The security analysis should focus on the bizarre Finished message
> calculation rather than on the hash algorithm.  The essence of caching
> is that cached data has the same effect as transmitted data, only
> :-).  Section 4 violates that assumption:
>    "The handshake protocol will proceed using the cached data as if it
>    was provided in the handshake protocol. The Finished message will
>    however be calculated over the actual data exchanged in the
>    protocol."

It is definitely a mismatch, but I think it's the simplest and least
inconsistent of the possible ways it could be defined.

> If the Finished message is not calculated as if the data were actually
> transmitted, then it cannot ensure the integrity of that data.

Not as easily and directly, but it can if we can prove either:

A. the values transmitted are perfect equivalents for the values used,

B. a mismatch (no matter how carefully engineered by the attacker)
cannot make a semantic difference to any reasonable application code
other than what would result from an ordinary Finished verify failure.

> Strike
> the second sentence and the problem goes away.

And you gain a harder one in return...

> The transmitter has to
> perform Finished calculations on the original datastream, then
> post-process it to substitute hashes where possible.  The receiver
> has to expand hashes into data, and then perform handshake operations
> including Finished calculations.

But these things happen at different layers of the protocol.

The Finished messages are calculated on the handshake layer as carried
by the record layer (IIRC this includes the HandshakeType and length
bytes). Currently it can be implemented by keeping a running hash of the
bytes without actually buffering them in memory. That code shouldn't
need to change at all.
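That running-hash structure can be sketched in a few lines.  The hash
choice is simplified (a single SHA-256 standing in for the version- and
cipher-suite-dependent PRF input); the point is only that the transcript
is consumed incrementally, never buffered whole.

```python
import hashlib

class HandshakeTranscript:
    """Running hash over handshake-layer bytes as delivered by the
    record layer, including the HandshakeType and length bytes.
    Simplified: one fixed hash instead of the negotiated PRF hash."""

    def __init__(self):
        self._hash = hashlib.sha256()

    def feed(self, handshake_bytes: bytes) -> None:
        # Called per fragment as it arrives; nothing is retained
        # beyond the fixed-size hash state.
        self._hash.update(handshake_bytes)

    def finished_hash(self) -> bytes:
        # Input to the Finished verify_data computation.
        return self._hash.digest()
```

Feeding the stream in arbitrary fragments yields the same digest as
hashing the whole transcript at once, which is why the code needn't
change when cached-object substitution happens a layer below.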

Operations like iterating a list of TAs to find a client cert or
validating a server cert are typically going to be performed at a higher
layer in the code that probably already has similar functions defined
for processing those items.

If you needed to calculate a Finished message by retroactively modifying
the handshake data stream, it would be harder. You would have to do
things like fix up length fields (potentially recursing up multiple
layers). That would certainly make introducing new
CachedInformationTypes more disruptive. It would also effectively
require implementations to buffer everything until all substitutions
were resolved.
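The nesting that makes this painful is easy to see in the TLS 1.2
Certificate message, where three length fields stack up: one per
certificate, one for the list, and one in the handshake header.  A
small illustrative builder (not from the draft):

```python
import struct

def u24(n: int) -> bytes:
    """Encode a 24-bit big-endian length, as TLS uses for these fields."""
    return struct.pack(">I", n)[1:]

def certificate_message(cert_list) -> bytes:
    """Build a TLS 1.2 Certificate handshake message (type 11).
    Each certificate carries a 3-byte length, the whole list carries
    a 3-byte length, and the handshake header carries a third."""
    certs = b"".join(u24(len(c)) + c for c in cert_list)
    body = u24(len(certs)) + certs
    return b"\x0b" + u24(len(body)) + body
```

Retroactively substituting, say, a 32-byte hash for a 1500-byte
certificate inside an already-built message would invalidate all three
length fields at once, which is exactly the recursion up multiple
layers described above.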

That's why I think the draft does it the best way.

- Marsh