Re: [apps-discuss] [websec] [kitten] [saag] HTTP authentication: the next generation

Phillip Hallam-Baker, Sat, 18 December 2010 16:46 UTC


I think that we need to distinguish between an authentication mechanism and
an authentication infrastructure.

Part of the problem with HTTP authentication is that it was
quickly superseded by HTML-based authentication mechanisms. And these in
turn suffer from the problem that password authentication fails when people
share their passwords across sites, which of course they have no choice but
to do when every stupid web site requires them to create yet another stupid
account.

Since Digest Authentication became an RFC, I don't think more than about
six weeks have ever elapsed without someone suggesting to me that we
include SHA-1 or SHA-2 as a digest algorithm. Which is of course pointless
when the major flaw in the authentication infrastructure is the lack of an
authentication infrastructure. The original reason for designing Digest the
way that I did was that public key cryptography was encumbered. Had public
key cryptography been available, I would have used it.

By authentication infrastructure, I mean an infrastructure that allows the
user to employ the same credentials at multiple sites with minimal or no
user interaction. I do not mean a framework that allows for the use of 20
different protocols for verifying a username and password.

We do have almost as many proposals for federated authentication as
authentication schemes of course. But each time there seems to be an
obsession with things that technocrats obsess about and at best contempt for
the actual user.

OpenID almost succeeded. But why on earth did we have to adopt URIs as the
means of representing a user account? And why was it necessary to design a
spec around the notion that what mattered most in the design of the spec was
the ability to hack together an account manager using obsolete versions of
common scripting languages?

Another feature of that debate I cannot understand is why we had to start
talking about 'identity' as if it was some new and somehow profound problem
that had only just been discovered.

There is of course a standard for representing federated user accounts that
has already emerged on the net. And once that is realized, the technical
requirements of a solution become rather obvious.

As Web sites discover that their account holders cannot remember their
username, most have adopted email addresses as account identifiers. That is
what we should use as the basis for federated web authentication.

So if the user account identifier looks like user@example.com, how does
an entity verify that a purported user has a valid claim to that account?

The obvious mechanism in my view is to use DNS-based discovery of an
authentication service. For example, we might use the ESRV scheme I have
been working on:

   example.com  ESRV 0 prot "_saml._ws"
   example.com  ESRV 0 prot "_xcat._ws"

Which declares that the SAML and 'XCAT' (presumably kitten in XML) protocols
may be used to resolve authentication requests.
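A minimal sketch of how a relying party might interpret such records once fetched (ESRV is not a deployed DNS record type, so the record data appears here as plain strings; the parsing is illustrative, not from any spec):

```python
def parse_esrv(rdata):
    """Split ESRV-style rdata like '0 prot "_saml._ws"' into
    (priority, flags, service tag)."""
    priority, flags, service = rdata.split()
    return int(priority), flags, service.strip('"')

def discovery_order(rdatas):
    """Return the service tags to try, lowest priority value first."""
    return [svc for _, _, svc in sorted(parse_esrv(r) for r in rdatas)]

records = ['0 prot "_saml._ws"', '0 prot "_xcat._ws"']
```

A client would walk the resulting list and use the first protocol it supports.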

One major advantage of this approach is that it makes it easy for sites to
move to using the new federated auth scheme. Most sites already store an
email address that is used to validate the account.

The actual mechanism by which the authentication claim is verified is not
very interesting, nor does it particularly need to be standardized. What
does require standardization is the ability to embed the protocol in 'the
Web' in a fluent and secure manner.

Here is how I suggest this be achieved:

1) HTTP header

The Web browser attaches an offer of authentication by means of an account
attached to a specific domain to (potentially) every request:

   Auth-N: example.com

If the server does not support Auth-N, the header will simply be ignored.
Otherwise the server can ask for automated authentication.
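As a sketch of the client side (the Auth-N header name is from the proposal above; the account alice@example.com is illustrative), note that only the domain needs to be offered, not the account name itself:

```python
def offer_header(account):
    """Build the Auth-N offer for an account like alice@example.com.
    Only the domain is disclosed at this stage, not the local part."""
    domain = account.rsplit("@", 1)[1]
    return "Auth-N: " + domain
```

The browser could attach this to every request without leaking the username.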

2) HTTP Response

If the server decides to use the authentication mechanism, it responds with
information that tells the client what level of authentication is required.
For example, a bank might require a two-factor scheme. There is going to be,
at a minimum, a nonce.

Auth-N: snonce=<128bits>
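On the server side this step is little more than minting a fresh nonce (sketch only; a real response would presumably also carry the required authentication level):

```python
import secrets

def challenge():
    """Server opt-in response: a fresh 128-bit nonce, hex-encoded.
    16 random bytes = 128 bits of entropy."""
    return "Auth-N: snonce=" + secrets.token_hex(16)
```

Each response gets its own nonce, which is what ties the client's later proof to this exchange.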

3) HTTP Request

It should be possible for the client to prove that it has ownership of the
authentication token corresponding to the account.

It is not necessarily the case that the account owner wants to reveal to the
site all their information. For example, they may not even want the site to
know the account name. This is all fairly easy to set up using symmetric key
cryptography.

Auth-N: example.com; blindedaccount=<...>; snonce=<128bits>
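One hypothetical way to realize the blinded account with symmetric primitives is an HMAC keyed with a secret shared between the client and its authentication service (this construction is my illustration, not part of any proposal above):

```python
import hashlib
import hmac

def blinded_account(shared_key, account, snonce):
    """Opaque, per-nonce account token: HMAC(key, account || snonce).
    The relying site sees only this value; it can forward it to the
    authentication service for verification without ever learning
    the account name."""
    msg = (account + "|" + snonce).encode()
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()
```

Because the nonce is folded in, the token differs per exchange, so sites cannot correlate a user across requests by comparing blinded values.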

One feature that the OpenID work has highlighted the need for is some form
of user directed account manager. If the user is going to be in control of
this process, they need to be able to specify what information is made
available to specific sites.


I think that what we require here is not yet another authentication
framework or protocol. What we need is the glue to bind it into an
infrastructure that makes it useful.

The most important design decision is to use RFC 822 email address format
as the format for federated authentication accounts.
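The split that everything else hangs off is trivial (naive sketch; it ignores the quoted local parts that RFC 822 permits):

```python
def account_parts(identifier):
    """Split an RFC 822-style identifier into (local part, discovery domain).
    The domain is what drives DNS-based service discovery; it is
    case-insensitive, so normalize it. Local parts are left untouched."""
    local, domain = identifier.rsplit("@", 1)
    return local, domain.lower()
```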

Once that decision is made, the rest will simply fall out of stating the
requirements precisely.

The risk here is that yet again we end up redoing the parts that we know
how to build rather than focusing on the real problem, which is fitting them
together.

Above all, the user has to be the first priority in any design.