Re: [dix] Updated phishing requirements draft

"Ben Laurie" <benl@google.com> Mon, 03 July 2006 13:22 UTC

Date: Mon, 03 Jul 2006 14:22:07 +0100
From: Ben Laurie <benl@google.com>
To: Digital Identity Exchange <dix@ietf.org>
Subject: Re: [dix] Updated phishing requirements draft
In-Reply-To: <tslac7x6x98.fsf@cz.mit.edu>

On 6/28/06, Sam Hartman <hartmans-ietf@mit.edu> wrote:
>
>
> Hi.  I submitted a 01 version of my phishing requirements draft Sunday
> evening, but it has not yet popped out the other side so I'm including
> it below.
>
> I've tried to improve it based on review comments.  I did not get a
> chance to address everything; I was focusing on some obvious issues of
> clarity and on refining my thoughts so that the draft reflects my
> current understanding of the area.  I do thank all those who sent
> comments both on the list and privately.  I also thank those who were
> willing to walk through the requirements with me and help me confirm
> that the requirements are sufficient for what I'm trying to achieve.
>
>
>
>
>
>
> Network Working Group                                         S. Hartman
> Internet-Draft                                                       MIT
> Expires: December 27, 2006                                 June 25, 2006
>
>
>        Requirements for Web Authentication Resistant to Phishing
>                  draft-hartman-webauth-phishing-01.txt
>
> Status of this Memo
>
>    By submitting this Internet-Draft, each author represents that any
>    applicable patent or other IPR claims of which he or she is aware
>    have been or will be disclosed, and any of which he or she becomes
>    aware will be disclosed, in accordance with Section 6 of BCP 79.
>
>    Internet-Drafts are working documents of the Internet Engineering
>    Task Force (IETF), its areas, and its working groups.  Note that
>    other groups may also distribute working documents as Internet-
>    Drafts.
>
>    Internet-Drafts are draft documents valid for a maximum of six months
>    and may be updated, replaced, or obsoleted by other documents at any
>    time.  It is inappropriate to use Internet-Drafts as reference
>    material or to cite them other than as "work in progress."
>
>    The list of current Internet-Drafts can be accessed at
>    http://www.ietf.org/ietf/1id-abstracts.txt.
>
>    The list of Internet-Draft Shadow Directories can be accessed at
>    http://www.ietf.org/shadow.html.
>
>    This Internet-Draft will expire on December 27, 2006.
>
> Copyright Notice
>
>    Copyright (C) The Internet Society (2006).
>
> Abstract
>
>    This memo proposes requirements for protocols between web identity
>    providers and users and for protocols between identity providers
>    and relying parties.  These requirements minimize
>    the likelihood that criminals will be able to gain the credentials
>    necessary to impersonate a user or be able to fraudulently convince
>    users to disclose personal information.  To meet these requirements,
>    browsers must change.  Websites must never receive information such
>    as passwords that can be used to impersonate the user to third
>    parties.  Browsers should perform mutual authentication and flag
>    situations in which the target website is not authorized to accept
>    the identity being offered, as this is a strong indication of fraud.
>
>
> Table of Contents
>
>    1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  3
>    2.  Requirements notation  . . . . . . . . . . . . . . . . . . . .  5
>    3.  Threat Model . . . . . . . . . . . . . . . . . . . . . . . . .  6
>      3.1.  Capabilities of Attackers  . . . . . . . . . . . . . . . .  6
>      3.2.  Attacks of Interest  . . . . . . . . . . . . . . . . . . .  7
>    4.  Requirements for Preventing Phishing . . . . . . . . . . . . .  8
>      4.1.  Support for Passwords  . . . . . . . . . . . . . . . . . .  8
>      4.2.  Trusted UI . . . . . . . . . . . . . . . . . . . . . . . .  8
>      4.3.  No Password Equivalents  . . . . . . . . . . . . . . . . .  9
>      4.4.  Mutual Authentication  . . . . . . . . . . . . . . . . . .  9
>      4.5.  Authentication Tied to Resulting Page  . . . . . . . . . . 10
>      4.6.  Restricted Identity Providers  . . . . . . . . . . . . . . 11
>      4.7.  Protecting Enrollment  . . . . . . . . . . . . . . . . . . 11
>    5.  Is it the right Server?  . . . . . . . . . . . . . . . . . . . 13
>    6.  Security Considerations  . . . . . . . . . . . . . . . . . . . 14
>    7.  References . . . . . . . . . . . . . . . . . . . . . . . . . . 16
>      7.1.  Normative References . . . . . . . . . . . . . . . . . . . 16
>      7.2.  Informative References . . . . . . . . . . . . . . . . . . 16
>    Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 17
>    Intellectual Property and Copyright Statements . . . . . . . . . . 18
>
> 1.  Introduction
>
>    Typically, web sites ask users to send a user name and password in
>    order to log in and authenticate their identity to the website.  The
>    user name and plaintext password are often sent over a TLS [RFC4346]
>    encrypted connection.  As a result, the server learns the
>    password and can pretend to be the user to any other system where the
>    user has used the same password.  The security of passwords over TLS
>    depends on making sure that the password is sent to the right,
>    trusted server.  TLS implementations typically confirm that the name
>    entered by the user in the URL corresponds to the certificate as
>    described in [RFC2818].
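
For concreteness, a minimal sketch (not part of the draft; the hostname is
illustrative) of the RFC 2818-style check, using Python's standard ssl
module: the certificate must chain to a trusted CA and must match the name
the user asked for.

    import socket
    import ssl

    hostname = "www.example.com"             # the name the user typed in the URL
    context = ssl.create_default_context()   # verifies the chain and the hostname

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # Reaching this point means the certificate chained to a trusted CA
            # and its subjectAltName/subject matched `hostname` (RFC 2818).
            print(tls.getpeercert()["subject"])

As the draft goes on to argue, passing this check only shows that the user
reached the host named in the URL, not that it is the host they meant to
reach.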
>
>    One serious security threat on the web today is phishing.  Phishing
>    is a form of fraud in which an attacker convinces a user to provide
>    confidential information to the attacker while believing that they are
>    providing it to a party they trust with that information.  For
>    example, an email claiming to be from a user's bank may direct the
>    user to go to a website and verify account information.  The attacker
>    captures the user name and password and potentially other sensitive
>    information.  Domain names that look like target websites, links in
>    email, and many other factors contribute to phishers' ability to
>    convince users to trust them.
>
>    It is useful to distinguish two targets of phishing.  Sometimes
>    phishing targets web authentication credentials such as user
>    name and password.  Sometimes phishing targets other
>    confidential information.  This memo presents requirements that
>    significantly reduce the effectiveness of the first category of
>    phishing: these requirements guarantee that even if the user
>    authenticates to the wrong server, that server cannot impersonate the
>    user to a third party.  However, to combat phishing targeted at other
>    confidential information, the best we can do is try to help the user
>    detect fraud before they release confidential information.
>
>    So, the approach taken by these requirements is to handle these two
>    types of phishing differently.  Users are given some trusted
>    mechanism to determine whether they are typing their password into a
>    secure browser component that will authenticate them to the web
>    server or whether they are typing their password into a legacy
>    mechanism that will send their password to the server.  If the user
>    types a password into the trusted browser component, they have strong
>    assurances that their password has not been disclosed and that the
>    page returned from the web server was generated by a party that
>    either knows their password or who is authenticated by an identity
>    provider who knows their password.  The web server can then use
>    confidential information known to the user and web server to enhance
>    the user's trust in its identity beyond what is available given the
>    social engineering attacks against TLS server authentication.  If a
>    user enters their password into the wrong server but discovers this
>    before they give that server any other confidential information, then
>    their exposure is very limited.
>
> 2.  Requirements notation
>
>    The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
>    "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
>    document are to be interpreted as described in [RFC2119].
>
> 3.  Threat Model
>
>    This section describes the assumed capabilities of phishers, the
>    assumptions made about web security, and the vulnerabilities that
>    remain.
>
>    We assume that the browser and operating system are secure and can be
>    trusted by the end user.  There are many attacks against browsers and
>    operating systems.  However, without this assumption we cannot make
>    progress in securing web authentication.  So we will assume that
>    browsers and operating systems are secure and note that other work to
>    improve the security of browsers and operating systems is critical to
>    the security of the entire web authentication system.
>
>    We assume that users have limited motivation to combat phishing.
>    Users cannot be expected to read the source of web pages, understand
>    how DNS works well enough to look out for spoofed domains, or
>    understand URI encoding.  Users do not typically understand
>    certificates and cannot make informed decisions about whether the
>    subject name in a certificate corresponds to the entity they are
>    attempting to communicate with.
>
> 3.1.  Capabilities of Attackers
>
>    Attackers can convince the user to go to a website of their choosing.
>    Since the attacker controls the website and the user chose to go to
>    it, the TLS certificate will verify and the website will appear to
>    be secure.  The certificate will typically not be
>    issued to the entity the user thinks they are communicating with, but
>    as discussed above, the user will not notice this.
>
>    The attacker can convincingly replicate any part of the UI of the
>    website being spoofed.  The attacker can also spoof trust markers
>    such as the security lock, URL bar and other parts of the browser UI.
>    There is one limitation to the attacker's ability to replicate UI.
>    The attacker cannot replicate a UI that depends on information the
>    attacker does not know.  For example, an attacker could generally
>    replicate the UI of a banking site's login page.  However the
>    attacker probably could not replicate the account summary page until
>    the attacker learned the user name and password.
>
>    The attacker can convince the user to do anything with the phishing
>    site that they would do with the real target site.  As a consequence,
>    if we want to avoid the user giving the attacker their password, we
>    must transition to a solution where the user would not give the real
>    target site their password.  Instead they will need to
>    cryptographically prove that they know their password without
>    revealing it.
>
> 3.2.  Attacks of Interest
>
>    The primary attack of interest is an attack in which the user sends
>    confidential information to an unintended recipient.

This contradicts the introduction, which says you are only interested
in authentication credentials.

>    Another significant attack is an attack in which a recipient gains
>    sufficient credentials to impersonate the user to other recipients.
>    The obvious version of this attack is when the recipient learns the
>    password of the user.  However even giving the recipient a time-
>    limited token that can be used to impersonate the user would be an
>    instance of this attack.  Note that some authentication systems such
>    as Kerberos [RFC4120] provide a facility to delegate the ability to
>    act as the user to the target of the authentication.  Such a facility
>    when used with an inappropriately trusted target would be an instance
>    of this attack.
>
>    Of less serious concern at the present time are attacks on data
>    integrity, in which a phisher provides false or misleading
>    information to a user.
>
> 4.  Requirements for Preventing Phishing
>
>    This section describes requirements for web authentication solutions.
>    These solutions are intended to prevent phishing targeted at
>    obtaining web authentication credentials.  These requirements will
>    make it more difficult for phishers to target other confidential
>    information.
>
>    The requirements discussed here are similar to the principles
>    outlined in "Limits to Anti-Phishing" [ANTIPHISHING].  Most of this
>    work was discovered independently but work from that paper has been
>    integrated where appropriate.  Google's perspective on phishing is
>    very interesting because of their operational experience.
>
> 4.1.  Support for Passwords
>
>    The web authentication solution MUST support passwords and MUST be
>    secure even when passwords are commonly used.

It seems to me you are presupposing the solution to your real
requirement, which is that the user must be able to walk up to any
machine and use it to log in. It is not obvious to me that this means
it must support passwords (at least not to more than one site, say the
one where I've stored all my credentials).

>  In many environments,
>    users need the ability to walk up to a computer they have never used
>    before and log in to a website.  Carrying a smart card or USB token
>    significantly increases the deployment cost of the website and
>    decreases user convenience.  The smart card is costly to deploy
>    because it requires a process for replacing smart cards, requires
>    support staff to be trained in answering questions regarding smart
>    cards and requires a smart card to be issued when an identity is
>    issued.  Smart cards are less convenient because users cannot gain
>    access to protected resources without having their card physically
>    with them.  Many public access computers do not have smart cards
>    available and do not provide access to USB ports; when they do, they
>    tend not to support smart cards.  It is important not to
>    underestimate the training costs (either in money or user
>    frustration) of teaching people used to remembering a user name and
>    password about a new security technology.
>
>    It is desirable that a solution support other forms of authentication
>    such as smart cards and one-time passwords as these are useful in
>    some environments.
>
> 4.2.  Trusted UI
>
>    Users need the ability to trust components of the UI in order to know
>    that the UI is being presented by a trusted component of the browser.
>    The primary concern is to make sure that the user knows the password
>    is being given to trusted software rather than being filled into an
>    HTML form element that will be sent to the server.
>
>    There are three basic approaches to establishing a trusted UI.  The
>    first is to use a dynamic UI based on a secret known by the UI; the
>    Google paper [ANTIPHISHING] recommends this approach.  A second
>    approach is to provide a UI action that highlights trusted or non-
>    trusted components in some way.  This could work similarly to the
>    Exposé feature in Apple's OS X, where a keystroke visually
>    distinguishes structural elements of the UI.  Of course such a
>    mechanism would only be useful if users actually used it.  Finally,
>    the multi-level security community has done extensive research in
>    designing UIs to display classified, compartmentalized information.
>    It is critical that these UIs be able to label information and that
>    these labels not be spoofable.
>
>    See Section 5 for another case where confidential information in a UI
>    can be used to build trust.
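
As an illustration of the first approach, here is a hedged sketch of a
trusted browser prompt that displays a phrase known only to the local
browser profile (the storage path and function are invented for the
example); a page-rendered imitation cannot reproduce the phrase because it
never leaves the machine.

    import json
    import pathlib

    # Assumption: the browser stored a user-chosen phrase locally at setup time.
    PROFILE = pathlib.Path.home() / ".browser-profile" / "trusted-ui.json"

    def trusted_password_prompt(site: str) -> str:
        phrase = json.loads(PROFILE.read_text())["secret_phrase"]
        print(f"[Trusted browser dialog -- your phrase: {phrase!r}]")
        print(f"Authenticating to {site}.  The password entered here is used")
        print("only to compute a cryptographic proof; it is not sent to the site.")
        return input("Password: ")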
>
> 4.3.  No Password Equivalents
>
>    A critical requirement is that when a user authenticates to a
>    website, the website MUST NOT receive a strong password equivalent
>    [IABAUTH].  A strong password equivalent is anything that would allow
>    a phisher to authenticate as a user with a different identity
>    provider.  Weak password equivalents MAY only be sent when a new
>    identity is being enrolled or a password is changed.  A weak password
>    equivalent allows a party to authenticate to a given identity
>    provider as the user.

Again, you are presupposing the necessity of passwords. Surely this
should be "no authentication credential equivalents".

>    There are two implications of this requirement.  First, a strong
>    cryptographic authentication protocol needs to be used instead of
>    sending the password encrypted over TLS.  The zero-knowledge class of
>    password protocols such as those discussed in section 8 of the IAB
>    authentication mechanisms document [IABAUTH] seem potentially useful
>    in this case.  Note that mechanisms in this space tend to have
>    significant deployment problems because of intellectual property
>    issues.
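
To make the first implication concrete, the sketch below shows the core
idea behind verifier-based (SRP-like) protocols: the client derives a
verifier from the password and only the verifier is ever stored by the
other side.  The group parameters are toy values chosen for readability,
not ones anyone should deploy.

    import hashlib
    import secrets

    N = 2**127 - 1      # small Mersenne prime, used only to keep the sketch short
    g = 2

    def derive_verifier(password: str) -> tuple[bytes, int]:
        salt = secrets.token_bytes(16)
        x = int.from_bytes(hashlib.sha256(salt + password.encode()).digest(), "big")
        v = pow(g, x, N)        # the verifier stored in place of the password
        return salt, v

    salt, verifier = derive_verifier("correct horse battery staple")
    # The identity provider keeps (salt, verifier).  In the later exchange the
    # client proves knowledge of the password without transmitting it, so no
    # website ever receives a strong password equivalent.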
>
>    The second implication of this requirement is that if an
>    authentication token is presented to a website, the website MUST NOT
>    be able to modify the token to authenticate as the user to a third
>    party.  The party generating the token must cryptographically bind it
>    to either the website that will receive the token or to a key known
>    only to the user.

Once more you are presupposing the solution. One-time passwords work
fine for this purpose but are not (necessarily) cryptographically
bound to anything.

> If tokens are bound to keys, the user MUST prove
>    knowledge of this key as part of the authentication process.  The key
>    MUST NOT be disclosed to the server unless the token is bound to the
>    server and the key is only used with that token.
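
A hedged sketch of the second implication follows.  The field names, key
arrangement, and MAC construction are illustrative (a real identity
provider would sign, or use a key shared only with that relying party); the
point is that the token names the audience it was issued for and is useless
anywhere else.

    import hashlib
    import hmac
    import json
    import time

    IDP_KEY = b"identity-provider-key"     # assumption: held by the issuing party

    def issue_token(user: str, audience: str) -> dict:
        body = {"user": user, "aud": audience, "exp": int(time.time()) + 300}
        payload = json.dumps(body, sort_keys=True).encode()
        mac = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
        return {"body": body, "mac": mac}

    def accept_token(token: dict, my_origin: str) -> bool:
        payload = json.dumps(token["body"], sort_keys=True).encode()
        ok = hmac.compare_digest(
            token["mac"], hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest())
        return (ok and token["body"]["aud"] == my_origin
                and token["body"]["exp"] > time.time())

    tok = issue_token("alice", "https://bank.example")
    assert accept_token(tok, "https://bank.example")
    assert not accept_token(tok, "https://phisher.example")  # bound to the wrong site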
>
> 4.4.  Mutual Authentication
>
>    The Google paper [ANTIPHISHING] describes a requirement for mutual
>    authentication.  A common phishing practice is to accept a user name
>    and password as part of an attempt to make the phishing site
>    authentic.  The real target is some other confidential information.
>    The user name and password are captured, but are not verified.  After
>    the user name and password are entered, the phishing site collects
>    other confidential information.
>
>    Authentication of the server at the TLS level and authentication of
>    the client within the TLS session are not sufficient.  As discussed
>    previously, the attacker can direct the user to a site that the
>    attacker controls, so the TLS authentication will succeed.  So an
>    authentication solution for phishing MUST detect the situation where
>    the server ignores or does not participate in the authentication.

This doesn't follow - the situation you have described is one where
the server _does_ participate in authentication. The problem is that
they are not the server the user thinks they are.

>  If
>    authentication is based on a shared secret such as a password, then
>    the authentication protocol MUST prove that the secret or a suitable
>    verifier is known by both parties.  Interestingly, the existence of a
>    shared secret provides better assurance that the right server is
>    being contacted than public key credentials do.  By their
>    nature, public key credentials allow parties to be contacted without
>    a prior security association.

However if the public keys _are_ associated with the server, then they
do provide an assurance (not, of course, that this is current
practice, generally).

>  In protecting against phishing
>    targeted at obtaining other confidential information, this may prove
>    a liability.  However public key credentials provide strong
>    protection against phishing targeted at obtaining authentication
>    credentials because they are not vulnerable to dictionary attacks.
>    Such dictionary attacks are a significant weakness of shared secrets
>    such as passwords intended to be remembered by humans.  For public
>    key protocols, this requirement would mean that the server typically
>    needs to sign an assertion of what identity it authenticated.
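
A minimal sketch of what "prove that the secret is known by both parties"
can look like with a shared secret.  The construction is illustrative, not
a recommendation; a key derived directly from a password, as here, still
needs PAKE-style protection against offline guessing.

    import hashlib
    import hmac
    import secrets

    shared_key = b"key-derived-from-the-shared-secret"   # known to user and server

    client_nonce = secrets.token_bytes(16)
    server_nonce = secrets.token_bytes(16)

    # The server answers first, proving that it also knows the secret ...
    server_proof = hmac.new(shared_key, b"srv" + client_nonce + server_nonce,
                            hashlib.sha256).digest()

    # ... and the client checks that proof before sending its own.
    expected = hmac.new(shared_key, b"srv" + client_nonce + server_nonce,
                        hashlib.sha256).digest()
    assert hmac.compare_digest(server_proof, expected)

    client_proof = hmac.new(shared_key, b"cli" + client_nonce + server_nonce,
                            hashlib.sha256).digest()
    # A site that merely collects credentials without verifying them cannot
    # produce server_proof, which is the failure the requirement targets.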
>
> 4.5.  Authentication Tied to Resulting Page
>
>    Users expect that whatever party they authenticate to will be the
>    party that generates the content they see.  One possible phishing
>    attack is to insert the phisher between the user and the real site as
>    a man-in-the-middle.  On today's websites, the phisher typically
>    gains the user's user name and password.  Even if the other
>    requirements of this specification are met, the phisher could gain
>    access to the user's session on the target site.  This attack is of
>    particular concern to the banking industry.  A man-in-the-middle may
>    gain access to the session which may give the phisher confidential
>    information or the ability to execute transactions on the user's
>    behalf.
>
>    The authentication system MUST guarantee to the user and the target
>    server that the response is generated by the target server and will
>    only be seen by parties authorized by either the target server or the
>    user.

The requirement, surely, is that the response is not _useful_ to
eavesdroppers, rather than not seen. There are plenty of protocols that
can be run totally in the clear that leave nothing useful for an
observer (e.g. Diffie-Hellman key agreement).
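
For instance, a bare finite-field Diffie-Hellman exchange run entirely in
the clear gives a passive observer nothing useful.  The modulus below is a
small prime chosen purely for readability, and the sketch says nothing
about active men in the middle, which the surrounding requirements address
separately.

    import secrets

    p = 2**127 - 1                       # toy prime; real groups are standardized
    g = 5

    a = secrets.randbelow(p - 2) + 1     # user's ephemeral secret
    b = secrets.randbelow(p - 2) + 1     # server's ephemeral secret

    A = pow(g, a, p)                     # exchanged in the clear
    B = pow(g, b, p)                     # exchanged in the clear

    assert pow(B, a, p) == pow(A, b, p)  # both ends derive the same key; an
                                         # eavesdropper sees only A and B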

>  This can be done in several ways:
>
>    1.  Assuming that only certificates from trusted CAs are accepted and
>        the user has not bypassed certificate validation, it is
>        sufficient to confirm that the identity of the server at the TLS
>        level is the same as the identity at the HTTP authentication
>        level.  In the case of TLS client authentication this is
>        trivially true.
>
>    2.  Another alternative is to bind the authentication exchange to the
>        channel created by the TLS session.  The general concept behind
>        channel binding is discussed in section 2.2.2 of [BTNS].  This
>        paragraph needs to be expanded to point to proposals for doing
>        channel binding with TLS. xxx
>
>    3.  Redirect-based schemes in which the identity provider is told
>        what site to return the user to meet this requirement, provided
>        again that certificate validation is done at the TLS layer.

And, presumably, many, many others.
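
A rough sketch of option 2 above: the client mixes a value identifying this
particular TLS channel into its authentication proof, so the exchange
cannot be relayed over a different TLS session by a man in the middle.  The
tls-unique value shown is a placeholder; Python's ssl module exposes the
real one via SSLSocket.get_channel_binding().

    import hashlib
    import hmac

    auth_key = b"key-derived-from-the-user-credential"        # illustrative
    tls_unique = b"placeholder-finished-message-of-this-session"

    proof = hmac.new(auth_key, b"channel-binding:" + tls_unique,
                     hashlib.sha256).hexdigest()
    # The server recomputes the proof over the tls-unique value of the TLS
    # connection it actually terminated; a proof relayed from another
    # session will not verify.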

>
> 4.6.  Restricted Identity Providers
>
>    Some identity providers will allow anyone to accept their identity.
>    However, particularly for financial institutions and large service
>    providers, it will be common that only authorized business partners
>    will be able to accept the identity.  The confirmation that the
>    relying party is such a business partner will often be a significant
>    part of the value provided by the identity provider, so it is
>    important that the protocol enable this.  For such identities, the
>    user MUST be assured that the target server is authorized by the
>    identity provider to accept identities from that identity provider.
>    Several mechanisms could be used to accomplish this:
>
>    1.  The target server can provide a certificate issued by the
>        identity provider as part of the authentication.
>
>    2.  The identity provider can explicitly approve the identity.  For
>        example in a redirect-based scheme the identity provider knows
>        the identity of the relying party before providing claims of
>        identity to that party.  A similar situation happens with
>        Kerberos.

A general criticism, but particularly apropos in this section: you
appear to have no concern for the privacy of the user. It should be
possible to authenticate without revealing to the identity provider
who you are authenticating to.

Incidentally, you've suddenly started talking about identity providers
without saying what they are.
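
A sketch of the second mechanism (the allow-list, names, and assertion
format are all invented for the example): in a redirect-based scheme the
identity provider simply refuses to issue an assertion for a relying party
it has not authorized.

    AUTHORIZED_RELYING_PARTIES = {
        "https://partner-bank.example",
        "https://payroll.example",
    }

    def issue_assertion(user: str, return_to: str) -> str:
        if return_to not in AUTHORIZED_RELYING_PARTIES:
            raise PermissionError(
                f"{return_to} is not authorized to accept this identity")
        # In practice the assertion would be signed or MACed and bound to
        # return_to, as Section 4.3 requires for tokens generally.
        return f"assertion(user={user}, aud={return_to})"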


>
> 4.7.  Protecting Enrollment
>
>    One area of particular vulnerability to phishing is enrollment of a
>    new identity in an identity provider.  Protecting against phishing
>    targeted at obtaining other confidential information as a new service
>    is established is outside the scope of this document.  If
>    confidential information such as credit card numbers is provided as
>    part of account setup, then this may be a target for phishing.
>
>    However there is one critical aspect in which enrollment impacts the
>    security of authentication.  During enrollment, a password is
>    typically established for an account at an identity provider.  The
>    process of establishing a password MUST NOT provide a strong password
>    equivalent to the identity provider.  That is, the identity provider
>    MUST NOT gain enough information to impersonate the user to a third
>    party while establishing a password.
>
> 5.  Is it the right Server?
>
>    In Section 4, requirements were presented for web authentication
>    solutions to minimize the risk of phishing targeted at web
>    authentication credentials.  This section discusses, in a
>    non-normative manner, various mechanisms for determining that the
>    right server has been contacted.  Authenticating to the right party
>    is an important part of
>    reducing the risk of phishing targeted at other confidential
>    information.
>
>    Validation of the certificates used in TLS and verification that the
>    name in the URI maps to these certificates can be useful.  As
>    discussed in Section 3, attackers can spoof the name in the URI.
>    However the TLS checks do defeat some attacks.  Also, as discussed in
>    Section 4.5, TLS validation may be important to higher-level checks.
>
>    A variety of initiatives propose to assign trust to servers.  These
>    include proposals that let users indicate that certain servers are
>    trusted based on information the users enter, and proposals that let
>    third parties, including parties established for this purpose and
>    existing certificate authorities, indicate trust.
>    These proposals will almost certainly make phishing more difficult.
>
>    In the case where there is an existing relationship, these
>    requirements provide a way that information about the relationship
>    can be used to provide assurance that the right party has been
>    contacted.
>
>    In Section 4.2, we discuss how a secret between the user and their
>    local computer can be used to let the user know when a password will
>    be handled securely.  A similar mechanism can be used to help the
>    user once they are authenticated to the website.  The website can
>    present information based on a secret shared between the user and
>    website to convince the user that they have authenticated to the
>    correct site.  This depends critically on the requirements of
>    Section 4.5 to guarantee that the phisher cannot obtain the secret.
>    It is tempting to use this form of trusted UI before authentication.
>    For example, a website could request a user name and then display
>    information based on a secret for that user before accepting a
>    password.  The problem with this approach is that phishers can obtain
>    this information, because it can be obtained without knowing the
>    password.  However, if the secret is displayed only after
>    authentication, phishers cannot obtain it.  This is one of the many
>    reasons why it is important to prevent phishing targeted at
>    authentication credentials.
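
A small sketch of the pattern described above (the stored phrase and
function are illustrative): the personalization secret is revealed only
after authentication completes, so a phisher who cannot complete the
authentication never learns it.

    SHARED_SECRETS = {"alice": "green teapot on a red shelf"}  # set at enrollment

    def page_after_login(user: str, authenticated: bool) -> str:
        if not authenticated:
            # Showing the phrase on a pre-authentication page (e.g. keyed only
            # by the user name) would let a phisher harvest and replay it.
            return "Please log in."
        return f"Welcome {user}.  Your phrase: {SHARED_SECRETS[user]}"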
>
> 6.  Security Considerations
>
>    This memo discusses the security of web authentication and how to
>    minimize the risk of phishing in web authentication systems.  This
>    section discusses the security of the overall system and discusses
>    how components of the system that are not directly within the scope
>    of this document affect the security of web transactions.  This
>    section also discusses residual risks that remain even when the
>    requirements proposed here are implemented.
>
>    The approach taken here is to separate the problem of phishing into
>    phishing targeted at web authentication credentials and phishing
>    targeted at other information.  Users are given some trusted
>    mechanism to determine whether they are typing their password into a
>    secure browser component that will authenticate them to the web
>    server or whether they are typing their password into a legacy
>    mechanism that will send their password to the server.  If the user
>    types a password into the trusted browser component, they have strong
>    assurances that their password has not been disclosed and that the
>    page returned from the web server was generated by a party that
>    either knows their password or who is authenticated by an identity
>    provider who knows their password.  The web server can then use
>    confidential information known to the user and web server to enhance
>    the user's trust in its identity beyond what is available given the
>    social engineering attacks against TLS server authentication.  If a
>    user enters their password into the wrong server but discovers this
>    before they give that server any other confidential information, then
>    their exposure is very limited.
>
>    This model assumes that the browser and operating system are a
>    trusted component.  As discussed in Section 3, there are numerous
>    attacks against host security.  Appropriate steps should be taken to
>    minimize these risks.  If the host security is compromised, the
>    password can be captured as it is typed by the user.
>
>    This model assumes that users will only enter their passwords into
>    trusted browser components.  There are several potential problems
>    with this assumption.  First, users need to understand the UI
>    distinction and know what it looks like when they are typing into a
>    trusted component and what a legacy HTML form looks like.  Users must
>    care enough about the security of their passwords to only type them
>    into trusted components.  The browser must be designed in such a way
>    that the server cannot create a UI component that appears to be a
>    trusted component but is actually a legacy HTML form; Section 4.2
>    discusses this requirement.
>
>    In addition, a significant risk that users will type their password
>    into legacy HTML forms comes from the incremental deployment of any
>    web authentication technology.  Websites will need a way to work with
>    older web browsers that do not yet support mechanisms that meet these
>    requirements.  Not all websites will immediately adopt these
>    mechanisms.  Users will sometimes browse from computers that have
>    mechanisms meeting these requirements and sometimes from older
>    browsers.  They only gain protection from phishing when they type
>    passwords into trusted components.  If a password is sometimes used
>    with websites that meet these requirements and sometimes with legacy
>    websites, and if the password is captured by a phisher targeting a
>    legacy website, then that captured password can be used even on
>    websites meeting these requirements.  Similarly, if a user is tricked
>    into using HTML forms when they should not, passwords can be exposed.
>    Users can significantly reduce this risk by using different passwords
>    for websites that use trusted browser authentication than for those
>    that still use HTML forms.
>
>    The risk of dictionary attack is always a significant concern for
>    password systems.  Users can (but typically do not) minimize this
>    risk by choosing long, hard to guess phrases for passwords.  The risk
>    can be removed once a password is already established by using a
>    zero-knowledge password protocol.

This just isn't true. An attacker can always assume the role of the
client and try to authenticate using their dictionary of passwords.
This works no matter what the authentication mechanism is. If the
attacker is the server then they can do this very efficiently (since
they do not have to do it over the network).

The only defence against dictionary attacks is to use a strong password.
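
To illustrate: anyone who holds the stored verifier, or who can play one
end of the protocol, can test candidate passwords offline, so a
zero-knowledge exchange removes only the eavesdropper's dictionary attack.
The hash below is a stand-in for whatever derivation a real protocol
specifies.

    import hashlib

    def verifier(salt: bytes, password: str) -> bytes:
        return hashlib.sha256(salt + password.encode()).digest()

    salt = b"per-user-salt"
    stored = verifier(salt, "letmein")          # what the server holds

    dictionary = ["password", "123456", "letmein", "qwerty"]
    guessed = next(p for p in dictionary if verifier(salt, p) == stored)
    print(guessed)    # "letmein" -- only password strength limits this attack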

>  However the risk of dictionary
>    attack is always present when setting up a new password or changing a
>    password.  Minimizing the number of services that use the same
>    password can minimize this risk.  When zero-knowledge password
>    protocols are used, being extra careful to make sure the right server
>    is used when establishing a password can significantly reduce this
>    risk.
>
> 7.  References
>
> 7.1.  Normative References
>
>    [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
>               Requirement Levels", BCP 14, RFC 2119, March 1997.
>
> 7.2.  Informative References
>
>    [ANTIPHISHING]
>               Nelson, J. and D. Jeske, "Limits to Anti Phishing",
>               January 2006.
>
>               Proceedings of the W3C Security and Usability Workshop;
>               http://www.w3.org/2005/Security/usability-ws/papers/37-google/
>
>    [BTNS]     Touch, J., "Problem and Applicability Statement for Better
>               Than Nothing Security",
>               draft-ietf-btns-prob-and-applic-02.txt (work in progress),
>               February 2006.
>
>    [IABAUTH]  Rescorla, E., "A Survey of Authentication Mechanisms",
>               draft-iab-auth-mech-05.txt (work in progress),
>               February 2006.
>
>    [RFC2818]  Rescorla, E., "HTTP Over TLS", RFC 2818, May 2000.
>
>    [RFC4120]  Neuman, C., Yu, T., Hartman, S., and K. Raeburn, "The
>               Kerberos Network Authentication Service (V5)", RFC 4120,
>               July 2005.
>
>    [RFC4346]  Dierks, T. and E. Rescorla, "The Transport Layer Security
>               (TLS) Protocol Version 1.1", RFC 4346, April 2006.
>
> Author's Address
>
>    Sam Hartman
>    Massachusetts Institute of Technology
>
>    Email: hartmans-ietf@mit.edu
>
> Intellectual Property Statement
>
>    The IETF takes no position regarding the validity or scope of any
>    Intellectual Property Rights or other rights that might be claimed to
>    pertain to the implementation or use of the technology described in
>    this document or the extent to which any license under such rights
>    might or might not be available; nor does it represent that it has
>    made any independent effort to identify any such rights.  Information
>    on the procedures with respect to rights in RFC documents can be
>    found in BCP 78 and BCP 79.
>
>    Copies of IPR disclosures made to the IETF Secretariat and any
>    assurances of licenses to be made available, or the result of an
>    attempt made to obtain a general license or permission for the use of
>    such proprietary rights by implementers or users of this
>    specification can be obtained from the IETF on-line IPR repository at
>    http://www.ietf.org/ipr.
>
>    The IETF invites any interested party to bring to its attention any
>    copyrights, patents or patent applications, or other proprietary
>    rights that may cover technology that may be required to implement
>    this standard.  Please address the information to the IETF at
>    ietf-ipr@ietf.org.
>
>
> Disclaimer of Validity
>
>    This document and the information contained herein are provided on an
>    "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
>    OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
>    ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
>    INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
>    INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
>    WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
>
>
> Copyright Statement
>
>    Copyright (C) The Internet Society (2006).  This document is subject
>    to the rights, licenses and restrictions contained in BCP 78, and
>    except as set forth therein, the authors retain all their rights.
>
>
> Acknowledgment
>
>    Funding for the RFC Editor function is currently provided by the
>    Internet Society.
>

_______________________________________________
dix mailing list
dix@ietf.org
https://www1.ietf.org/mailman/listinfo/dix