Re: [apps-discuss] [websec] [kitten] [saag] HTTP authentication: the next generation

Adrien de Croy <> Sun, 19 December 2010 11:10 UTC

Date: Mon, 20 Dec 2010 00:12:02 +1300
From: Adrien de Croy <>
To: Phillip Hallam-Baker <>
Cc: General discussion of application-layer protocols <>, websec <>, Common Authentication Technologies - Next Generation <>, "" <>, "" <>, " Group" <>
Subject: Re: [apps-discuss] [websec] [kitten] [saag] HTTP authentication: the next generation

I think we need to go a bit further and consider the issue of trust.

One problem with delegating account-holding back to a domain under the control of the account-holder is that you have no trust. I can set up whatever account I like. Websites have no information about whether I'm trustworthy or not, and have to build up their own individual profile of me.

To be really useful, the account-holding must be with a trusted independent organisation that other websites can rely on.

The organisation then has the opportunity to add value by

a) verifying the true identity of the account holder
b) maintaining reputation information about the account holder
c) revoking abusive accounts.

This ends up looking a lot like the X.509 certificate infrastructure. Imagine if everyone needed a client certificate to send any mail: we'd have no spam.

Of course these sorts of concepts are completely unpalatable to many people on account of privacy issues. Some of these activities are the sort of things that governments should really be doing (and already are in many cases).

Solving this problem has implications for all internet use, not just HTTP.



On 19/12/2010 5:48 a.m., Phillip Hallam-Baker wrote:
I think that we need to distinguish between an authentication mechanism and an authentication infrastructure.

Part of the problem with HTTP authentication is that it was quickly superseded by HTML-based authentication mechanisms. These in turn suffer from the problem that password authentication fails when people share their passwords across sites, which of course they have no choice but to do when every stupid web site requires them to create yet another stupid account.

Since Digest Authentication became an RFC, I don't think more than about six weeks have ever elapsed without someone suggesting to me that we include SHA-1 or SHA-2 as a digest algorithm. Which is of course pointless when the major flaw in the authentication infrastructure is the lack of an authentication infrastructure. The original reason for designing Digest the way I did was that public key cryptography was encumbered. Had public key cryptography been available, I would have used it.

By authentication infrastructure, I mean an infrastructure that allows the user to employ the same credentials at multiple sites with minimal or no user interaction. I do not mean a framework that allows for the use of 20 different protocols for verifying a username and password.

Of course, we have almost as many proposals for federated authentication as we have authentication schemes. But each time there seems to be an obsession with the things technocrats obsess about, and at best contempt for the actual user.

OpenID almost succeeded. But why on earth did we have to adopt URIs as the means of representing a user account? And why was it necessary to design a spec around the notion that what mattered most in the design of the spec was the ability to hack together an account manager using obsolete versions of common scripting languages?

Another feature of that debate I cannot understand is why we had to start talking about 'identity' as if it were some new and somehow profound problem that had only just been discovered.

There is of course a standard for representing federated user accounts that has already emerged on the net. And once that is realized, the technical requirements of a solution become rather obvious.

As Web sites discover that their account holders cannot remember their usernames, most have adopted email addresses as account identifiers. That is what we should use as the basis for federated web authentication.

So if the user account identifier looks like an email address, how does an entity verify that a purported user has a valid claim to that account?

The obvious mechanism in my view is to use DNS based discovery of an authentication service. For example, we might use the ESRV scheme I have been working on:

_auth._<domain>  ESRV 0 prot "_saml._ws"
_auth._<domain>  ESRV 0 prot "_xcat._ws"

These records declare that the SAML and 'XCAT' (presumably kitten in XML) protocols may be used to resolve authentication requests.
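The discovery step above can be illustrated with a short Python sketch. The record syntax and the `_auth._<domain>` query name follow the example records; the parsing helpers and the simulated DNS answer are my own assumptions, since ESRV is a draft record type that ordinary resolvers would not know how to query.

```python
# Hypothetical sketch of ESRV-based discovery of an authentication
# service for an email-style account identifier. The "DNS answer" is
# simulated as a list of record strings, not fetched from a resolver.
import re

def auth_query_name(email):
    """Derive the discovery query name from the account's mail domain."""
    local, _, domain = email.rpartition("@")
    if not local or not domain:
        raise ValueError("not an email-style account identifier")
    return "_auth._" + domain

def parse_esrv(record):
    """Parse 'ESRV <priority> prot "<service>"' into (priority, service)."""
    m = re.match(r'ESRV\s+(\d+)\s+prot\s+"([^"]+)"', record)
    if not m:
        raise ValueError("malformed ESRV record: " + record)
    return int(m.group(1)), m.group(2)

def choose_protocol(records, supported):
    """Pick the lowest-priority advertised service the client supports."""
    for _prio, service in sorted(parse_esrv(r) for r in records):
        if service in supported:
            return service
    return None

answers = ['ESRV 0 prot "_saml._ws"', 'ESRV 0 prot "_xcat._ws"']
print(auth_query_name("alice@example.com"))     # _auth._example.com
print(choose_protocol(answers, {"_xcat._ws"}))  # _xcat._ws
```

A relying site that only implements one of the advertised protocols simply passes that one protocol in `supported` and falls back to ordinary password login if `choose_protocol` returns None.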

One major advantage of this approach is that it makes it easy for sites to move to using the new federated auth scheme. Most sites already store an email address that is used to validate the account. 

The actual mechanism by which the authentication claim is verified is not very interesting, nor does it particularly need to be standardized. What does require standardization is the ability to embed the protocol in 'the Web' in a fluent and secure manner.

Here is how I suggest this be achieved:

1) HTTP header

The Web browser attaches to (potentially) every request an offer to authenticate by means of an account attached to a specific domain:

Auth-N: domain="<domain>"

If the server does not support Auth-N, the header will simply be ignored. Otherwise the server can ask for automated authentication.

2) HTTP Response

If the server decides to use the authentication mechanism, it responds with information telling the client what level of authentication is required. For example, a bank might require a two-factor scheme. At a minimum, there is going to be a nonce.

Auth-N: snonce=<128bits>

3) HTTP Request

It should be possible for the client to prove that it has ownership of the authentication token corresponding to the account. 

It is not necessarily the case that the account owner wants to reveal all their information to the site. For example, they may not even want the site to know the account name. This is all fairly easy to set up using symmetric techniques.

Auth-N: domain="<domain>"; blindedaccount=<>; snonce=<128bits>; cnonce=<128bits>
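A minimal sketch of how the blinded-account exchange above might work with symmetric techniques, assuming an HMAC construction over the nonces and a key shared between the client and its authentication service. The function names, field layout, and HMAC choice are illustrative assumptions, not part of any spec:

```python
# Hypothetical sketch of the three-message Auth-N exchange with a
# blinded account name. Assumes client and auth service share `key`.
import hmac, hashlib, os

def make_snonce():
    """Server side: generate the 128-bit challenge nonce."""
    return os.urandom(16)

def client_response(key, account, domain, snonce):
    """Client side: blind the account name so the relying site never
    sees it; only a holder of `key` can recompute the same HMAC."""
    cnonce = os.urandom(16)
    blinded = hmac.new(key, account.encode() + snonce + cnonce,
                       hashlib.sha256).hexdigest()
    header = ('Auth-N: domain="%s"; blindedaccount=%s; snonce=%s; cnonce=%s'
              % (domain, blinded, snonce.hex(), cnonce.hex()))
    return header, cnonce

def service_verify(key, accounts, snonce, cnonce, blinded):
    """Auth service: find which known account produced the blinded value."""
    for account in accounts:
        probe = hmac.new(key, account.encode() + snonce + cnonce,
                         hashlib.sha256).hexdigest()
        if hmac.compare_digest(probe, blinded):
            return account
    return None

key = os.urandom(32)
snonce = make_snonce()
header, cnonce = client_response(key, "alice@example.com",
                                 "example.com", snonce)
blinded = header.split("blindedaccount=")[1].split(";")[0]
print(service_verify(key, ["bob@example.com", "alice@example.com"],
                     snonce, cnonce, blinded))  # alice@example.com
```

The relying site forwards the blinded value and nonces to the discovered authentication service and learns only "valid"/"invalid" (plus whatever attributes the user chooses to release), never the raw account name.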

One feature the OpenID work has highlighted the need for is some form of user-directed account manager. If the user is to be in control of this process, they need to be able to specify what information is made available to specific sites.


I think that what we require here is not yet another authentication framework or protocol. What we need is the glue to bind it into an infrastructure that makes it useful.

The most important design decision is to use the RFC 822 email address format as the format for federated authentication accounts.

Once that decision is made, the rest will simply fall out of stating the requirements precisely. 

The risk here is that yet again we end up redoing the parts we know how to build rather than focusing on the real problem, which is fitting them together.

Above all, the user has to be the first priority in any design. 

Adrien de Croy - WinGate Proxy Server