Re: [TLS] New Cached info draft

"Brian Smith" <brian@briansmith.org> Tue, 30 March 2010 18:30 UTC

Return-Path: <brian@briansmith.org>
X-Original-To: tls@core3.amsl.com
Delivered-To: tls@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 11FD33A6A16 for <tls@core3.amsl.com>; Tue, 30 Mar 2010 11:30:18 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.216
X-Spam-Level:
X-Spam-Status: No, score=-0.216 tagged_above=-999 required=5 tests=[AWL=1.254, BAYES_00=-2.599, DNS_FROM_OPENWHOIS=1.13]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id MbSsGUK0pyIL for <tls@core3.amsl.com>; Tue, 30 Mar 2010 11:30:16 -0700 (PDT)
Received: from mxout-08.mxes.net (mxout-08.mxes.net [216.86.168.183]) by core3.amsl.com (Postfix) with ESMTP id 77BBE3A6A65 for <tls@ietf.org>; Tue, 30 Mar 2010 11:30:16 -0700 (PDT)
Received: from T60 (unknown [70.134.204.209]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by smtp.mxes.net (Postfix) with ESMTPSA id 72D42509DC; Tue, 30 Mar 2010 14:30:38 -0400 (EDT)
From: "Brian Smith" <brian@briansmith.org>
To: "'Stefan Santesson'" <stefan@aaa-sec.com>, <tls@ietf.org>
References: <201003301723.o2UHNoc5008008@fs4113.wdf.sap.corp> <C7D80ACE.9BDF%stefan@aaa-sec.com>
In-Reply-To: <C7D80ACE.9BDF%stefan@aaa-sec.com>
Date: Tue, 30 Mar 2010 13:30:37 -0500
Message-ID: <003501cad037$1b37c960$51a75c20$@briansmith.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 14.0
thread-index: AQH2zpkx6SXE20QoIjYvnNXk//OcBgFxK2uqAU89Q70=
Content-Language: en-us
Subject: Re: [TLS] New Cached info draft
X-BeenThere: tls@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: "This is the mailing list for the Transport Layer Security working group of the IETF." <tls.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/tls>, <mailto:tls-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/tls>
List-Post: <mailto:tls@ietf.org>
List-Help: <mailto:tls-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/tls>, <mailto:tls-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 30 Mar 2010 18:30:18 -0000

Stefan Santesson wrote:
> On 10-03-30 6:23 PM, "Martin Rex" <mrex@sap.com> wrote:
> > I do not think that he suggested to not return the extension _and_
> > replace cached data.
> 
> I interpreted the ServerCachedInformation structure as a separate
> extension sent only by the server.

No, I meant for the client and the server to use the same extension ID, but
with different syntax for the extension_data. That is allowed. In fact the
current draft already has slightly different syntax for the client and
server extension data; the client digest_value is fixed at 8 bytes and the
server digest value can be either 0 or 8 bytes.
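That asymmetry can be sketched roughly as follows. This is a hypothetical encoder, not code from the draft: the type constants and the exact wire layout (length prefixes, field order) are assumptions made for illustration.

```python
import struct

# CachedInformationType values -- assumed for illustration only
CERTIFICATE_CHAIN = 1
TRUSTED_CAS = 2

def encode_client_cached_info(entries):
    """Client form: every entry carries a fixed 8-byte digest_value,
    so no per-entry length byte is needed."""
    body = b""
    for info_type, digest in entries:
        assert len(digest) == 8, "client digest_value is fixed at 8 bytes"
        body += struct.pack("!B", info_type) + digest
    # 2-byte extension_data length prefix
    return struct.pack("!H", len(body)) + body

def encode_server_cached_info(entries):
    """Server form: digest_value may be empty (0 bytes) or 8 bytes,
    so a 1-byte length prefix distinguishes the two cases."""
    body = b""
    for info_type, digest in entries:
        assert len(digest) in (0, 8), "server digest_value is 0 or 8 bytes"
        body += struct.pack("!BB", info_type, len(digest)) + digest
    return struct.pack("!H", len(body)) + body
```

Sharing one extension ID while branching on the sender's role keeps the code paths separate without burning a second ID.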

> >> On 10-03-30 5:34 PM, "Brian Smith" <brian@briansmith.org> wrote:
> >>> * The draft says that CachedInformation.cached_info can be up to
> >>> 590KB in size. extension_data can't be larger than 64KB, so the max
> >>> bound for the CachedInformation.cached_info array must be 7281 or
> >>> less. But, really, sending more than a few hashes per type of cached
> >>> info is likely to run into DoS countermeasures. It would be better to
> >>> have the specification require and/or at least recommend that there
> >>> not be more than one (or at most a few) hashes per information type
> >>> in the client hello.
> >
> > To me, allowing the client to cache distinct values for the same
> > server leads to cache management problems.  How should a client expire
> > outdated content from his cache?  If the client only caches one item
> > per "server:port" pair, then expiring of outdated cached information
> > is a non-issue.
> 
> It's a non-issue in any case. A timer, for example, works well. Nothing
> prevents the client from refusing to cache more than one object per type
> and server, but that restriction doesn't strike me as necessary.

It is good to keep the maximum size of extensions small so that the server
can allocate and reuse fixed-size buffers that are as small as possible. I
don't see the use for allowing multiple values per information type, but at
least I think a small cap on the total size of the extension_data (say, 1KB)
would be useful. There's no need for a server to waste resources to support
clients that send dozens, hundreds, or thousands of digests.
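The arithmetic behind the 590KB and 7281 figures checks out if one assumes each cached_info entry is a 1-byte CachedInformationType plus the fixed 8-byte client digest (9 bytes per entry):

```python
# Back-of-the-envelope check of the size bounds discussed above.
ENTRY_SIZE = 1 + 8               # type byte + fixed 8-byte digest (assumed)
MAX_EXTENSION_DATA = 2**16 - 1   # extension_data length is a 16-bit field

# The cached_info vector's declared bound would allow up to 2^16 - 1
# entries, i.e. roughly 590KB:
declared_max = (2**16 - 1) * ENTRY_SIZE
print(declared_max)              # 589815 bytes, ~590KB

# But extension_data itself caps the entry count far lower:
max_entries = MAX_EXTENSION_DATA // ENTRY_SIZE
print(max_entries)               # 7281
```

A 1KB cap would still leave room for over a hundred digests under these assumptions, which is far more than any reasonable client needs.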

> >>> * The draft says "A present non-empty digest_value indicates that the
> >>> server will honor caching of objects of the specified type that
> >>> matches the present digest value." I don't see why this is necessary.
> >>> The server should always be supporting the digests of the values that
> >>> it most recently returned, for the information items it claims to
> >>> support, so the semantics for empty digest_values in the server
> >>> extension are good enough.
> >
> > I would also appreciate semantics as suggested here.
> > Allow the server to return a ServerHelloExtension that explicitly list
> > the types of information for which the server supports caching, but
> > _without_ a digest_value, both on discovery and on actual use of
> > the caching extension by the client, so that the server does not
> > have to pre-calculate this data of future handshake message
> > while it is composing ServerHello.
> >
> 
> The server doesn't have to send digest values in the current draft.

AFAICT, there's nothing in the draft that says that the client should use
this information in any way. As long as the client is free to ignore the
server-sent digest_values when present, it doesn't hurt. But, I don't see
how it really helps either. It's better to keep the syntax as simple as
possible.

Again, it is best to require that the server explicitly list the information
types for which it supports caching. It costs the server basically nothing
to provide the few extra bytes, and it is very useful information for the
client to have.
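For scale, the "few extra bytes" claim is easy to verify. In a hypothetical encoding with a 1-byte type and a 1-byte zero digest length per entry (assumed layout, matching nothing normative in the draft), advertising two supported types costs six bytes:

```python
import struct

def advertise_supported_types(types):
    """Server discovery response: list supported CachedInformationType
    values with empty (zero-length) digest_values."""
    body = b"".join(struct.pack("!BB", t, 0) for t in types)
    # 2-byte extension_data length prefix + 2 bytes per advertised type
    return struct.pack("!H", len(body)) + body

print(len(advertise_supported_types([1, 2])))  # 6 bytes on the wire
```

Since no digest is pre-computed, the server can emit this while composing the ServerHello with no extra hashing work.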

Regards,
Brian