Re: [TLS] Downgrade protection, fallbacks, and server time

Viktor Dukhovni <> Thu, 02 June 2016 15:33 UTC

> On Jun 2, 2016, at 11:16 AM, David Benjamin <> wrote:
> I've mused on something like that (I was the main driver behind painstakingly removing the existing version fallback in Chrome), but I don't think non-determinism is a good idea. Site owners need to be able to reproduce the failures their users see.
> But, yes, I will of course be monitoring the true metrics (my probing a list of sites is only an approximation) and seeing what can be done here, as I did previously.

Opening a new window or tab and trying again a couple of times is not
a major reproducibility barrier.  The odds of failure would increase
with time, and would not be small to start with.

It would be important to roll the dice just once for a given site within
a given window or tab (at least until the user navigates to a new domain)
so that once contact is successful, further disruption does not render the
site unusable.  Basically, resume with the highest protocol that worked
consistently until such state is safe to flush, but reduce the odds of
initial success over a well-publicized time-frame.