Re: [VoT] VoT Identity Proofing and individual claims

Justin Richer <> Wed, 13 September 2017 21:00 UTC

From: Justin Richer <>
Date: Wed, 13 Sep 2017 16:59:57 -0400
To: Phil Hunt <>
List-Id: Vectors of Trust discussion list <>

Phil, responses inline.

> On Sep 13, 2017, at 3:09 PM, Phil Hunt <> wrote:
> Justin,
> I don’t believe making things arbitrarily out of scope helps simplify.  In this case, I believe it complicates matters.

I don’t understand your argument. We’re solving one problem, and it is not the problem you’re trying to solve. It feels complicated because you’re trying to use a tool that isn’t fit for the problem you’re presenting; that doesn’t mean the tool is complicated for the problem it was designed for. Keeping other things out of scope is exactly what makes this a tractable problem within particular boundaries. The scope of this work is hardly arbitrary or ill-informed, so dismissing it as such both misunderstands the work and doesn’t help the conversation.

> Firstly, missing from the spec….a definition of Identity.

We didn’t try to define “identity” as a term because there is no single agreed-upon definition of identity, yet the concept is broadly enough understood to make identity systems work. We do discuss the identity model at play in section 1.2, though. I’m assuming you’ve read that, so please submit text that would improve that section.

>> 2.1.  Identity Proofing
>>    The Identity Proofing dimension defines, overall, how strongly the
>>    set of identity attributes have been verified and vetted.  In other
>>    words, this dimension describes how likely it is that a given digital
>>    identity transaction corresponds to a particular (real-world)
>>    identity subject.
>>    This dimension SHALL be represented by the "P" demarcator and a
>>    single-character level value, such as "P0", "P1", etc.  Most
>>    definitions of identity proofing will have a natural ordering, as
>>    more or less stringent proofing can be applied to an individual.  In
>>    such cases it is RECOMMENDED that a digit style value be used for
>>    this component.
> What is meant by overall?  Is it an average?  The lowest value of all of the attributes (claims)?

Again, that depends on the specific trust framework that defines what the values mean. For NIST, for example, there are baseline proofings required for all mentioned attributes at a given IAL. If you don’t meet that level for all of those attributes, then you don’t meet that IAL. Other frameworks could provide something else, but I’ve yet to see a different approach. Do you know of a trust framework that defines an “average” proofing level for a user’s attributes? 
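To make that rule concrete, here is a minimal sketch of the “meet the baseline for every attribute” logic described above. The attribute names, numeric levels, and framework table are all made up for illustration; a real trust framework defines its own requirements.

```python
# Hypothetical illustration of the baseline rule: an IdP achieves a
# given proofing level only if EVERY attribute that level requires
# was verified at or above that level. All names/levels are invented.

def overall_proofing_level(achieved, framework):
    """achieved: dict of attribute -> verified level (int).
    framework: dict of level (int) -> set of required attributes.
    Returns the highest level whose requirements are all met, else 0."""
    best = 0
    for level in sorted(framework):
        required = framework[level]
        if all(achieved.get(attr, 0) >= level for attr in required):
            best = level
    return best

framework = {
    1: {"name"},
    2: {"name", "address"},
    3: {"name", "address", "date_of_birth"},
}

# The address was only verified at level 2, so level 3 is not met
# even though the other attributes would individually qualify.
achieved = {"name": 3, "address": 2, "date_of_birth": 3}
print(overall_proofing_level(achieved, framework))  # 2
```

Note that under this rule an over-verified attribute buys you nothing: the weakest required attribute caps the overall level, which matches the “lowest common denominator” behavior of the NIST IAL example.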

Also interesting to note: if you’re providing additional attributes that aren’t listed in the framework, like “has a medical degree”, then you’re on your own as the framework doesn’t address those. If you need to address those separately, you’ll need a framework (and vector definitions) that cover those. 

And if you want to have different sets of information for each attribute, then you’ll want a different solution that’s not VoT.

> What is meant by a real world identity subject?  Do you mean a legal person?  Do you mean a device/thing?

Sure, those would all work. It all depends on what the IdP is providing identity assertions for. We’re not going to artificially limit that, though we generally have “a person using a computer system” in mind. Again, see section 1.2 for more details on that model.

> An overall level of confidence in how strongly a transaction corresponds to a particular identity subject sounds an awful lot like authentication confidence.

An overall confidence would be that, but that isn’t what we’re saying: this component is about how strongly the attributes are tied to a real person. How strongly that gets tied to the transaction is covered by the other vector components, such as authentication credentials and assertions; that’s the whole reason for splitting it out separately. So yes, it’s not as granular as individual attributes, which you’re asking about below, but it’s more granular than the single level of confidence you describe here. It’s in between, like we say in the introduction.

> I can see how you might arrive at this definition looking at an internal data system like a payroll system. You can assess the procedures and establish an overall confidence level in the quality of data.  But as soon as you want to share that payroll data with other systems.  
> I began my experience in Identity management building large scale meta-directories. We once assumed business systems like payroll systems were authoritative over things like employee addresses.  Turns out we were wrong.  Payroll doesn’t care about addresses, they care about bank account deposit numbers.  

Then, by your trust framework and context, you wouldn’t ask payroll for addresses, right? So if your trust framework requires you to verify the address at, say, P3, but you don’t do that, then you’re not doing P3 for those. And if you need to split out classes of users where you’ve got “authoritative bank account numbers” and “authoritative bank account numbers AND addresses”, then you define two different categories to represent those classes of users. If you want a system that can describe the assurance of an arbitrary attribute for a given transaction, then VoT doesn’t work for you. Go find a screwdriver and stop trying to figure out why this hammer doesn’t work for you.

> Conclusion:  Just because the payroll department has a high level of confidence in the attributes they have *internally*, it does not mean there is high confidence for secondary uses.  If you ask payroll about sharing data, they might only share employee number, salary, and bank accounts as being accurate (assuming there was a valid reason to share).  Should an IDP based on payroll only assert high level data?  Can they assert low-level data for the purpose of correlation/confirmation?
> Another case, look at Facebook and Twitter. They have variable assertions as well.  Some users are verified and some are not.  What does that mean? How would VoT help in this case?  Would it mean that the out of scope information in the name claim is deemed accurate?  Are we talking about the same Michael Douglas or are we talking about Michael Keaton (Michael Keaton’s real name is Michael Douglas).

VoT would help in exactly this case by being able to assert different proofing based on different accounts. But it all comes down to what you trust the IdP to assert given its trust framework context — if they agreed to proof the name, and they say they did, and you trust them to make that claim, well then you’re all set. If they don’t agree to that (as in it’s not stipulated in the trust agreements), or they agreed to it and lied about it, or they did it but you don’t trust them to say it… then I can’t really help you with this.

In the end, VoT is all about one simple thing: conveying bundles of information about mostly-orthogonal aspects of an identity transaction. It’s about an IdP telling an RP: “I did the following things from a list of things that we agreed upon”. Just like saying “LOA3” in the US Government context meant something pretty specific, saying “P2.Cc.Ab” also means something specific in that kind of context. But saying “LOA3” in Canada means something a little bit different, and saying “LOA3” elsewhere could be completely unrelated to either case. In all of these, both the LoA and VoT versions, interpretation of the result to a particular real-world meaning is outside the conveyance protocol or encoding. There’s a massive amount of context that is taken into account, and that context will tell you if you can trust the name or the address or anything else when you get a particular value.
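Mechanically, a value like “P2.Cc.Ab” is just a dot-separated list of components, each a one-letter demarcator (P for proofing, and so on) followed by a level value. A minimal parsing sketch, purely illustrative and not a conformance implementation:

```python
# Minimal sketch: split a vector-of-trust value such as "P2.Cc.Ab"
# into its components. Each component is an uppercase demarcator
# letter plus a level value. What any value *means* still depends
# entirely on the trust framework in force, as discussed above.

def parse_vot(vector):
    """Return a dict mapping demarcator -> list of values."""
    components = {}
    for part in vector.split("."):
        if len(part) < 2 or not (part[0].isalpha() and part[0].isupper()):
            raise ValueError(f"malformed component: {part!r}")
        components.setdefault(part[0], []).append(part[1:])
    return components

print(parse_vot("P2.Cc.Ab"))  # {'P': ['2'], 'C': ['c'], 'A': ['b']}
print(parse_vot("P3.Pm"))     # {'P': ['3', 'm']}
```

The second example shows that a demarcator can appear more than once, as in the hypothetical “P3.Pm” value from earlier in this thread; the interpretation of every component is left to the trust framework, exactly as with the LoA comparison above.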

You’re trying to fit it into something it’s not meant for and not good at, and asking why it doesn’t work; to me, it’s no surprise that it doesn’t work for that. VoT does not do what you’re describing, and that’s by design; that part is spelled out in the introduction. By the same token, your lawnmower makes a pretty bad toothbrush. But like I keep reiterating: if you have a different problem, use a different tool. I am perfectly happy with this technology not being a silver bullet or panacea or what have you. A lot of people seem to have the problem that this tool solves, so we’re going to keep solving it. I look forward to seeing your solutions for attribute trust and metadata in the future.

 — Justin

> Phil
> Oracle Corporation, Identity Cloud Services Architect
> @independentid
>> On Sep 13, 2017, at 10:57 AM, Justin Richer < <>> wrote:
>> It applies to the IdP’s judgement across the entire assertion. How that’s calculated when the trust in different attributes differs will vary depending on the IdP, and probably defined by rules within the trust framework that defines the vector values themselves. NIST 800-63-3 volume A, for example, is very clear about how to handle all the attributes collected at different levels. If you were using NIST’s encoding of VoT, specified in the upcoming volume D of that document, then you’d be bound by those rules as an IdP. A university group, like you’re talking about below, might have different rules under its own trust framework and vector component definitions.
>> It looks like you might be asking whether this is about per-attribute metadata. The introduction explicitly states that we’re not trying to solve per-attribute metadata. If you’d like that kind of data, you’re looking for a different kind of solution, of which there have been numerous attempts over the years. Generally speaking, attribute metadata systems look great on paper but are incredibly complicated for RPs to implement. When we wrote VoT, we deliberately decided to take a different approach, something between the granularity of attributes and the single scale of LOA. 
>> This level of granularity is very, very useful in the real world; otherwise we wouldn’t have dozens of international trust frameworks based around the concept of proofing an individual and tying that account to a set of proofed attributes. There isn’t an easy way to express something as complex as “We’re sure this is Justin but aren’t sure about his medical degree” in any VoT implementation that I’ve seen to date, and especially not in the example values given in the spec itself. But if you really wanted to, you could have something like:
>>  - P3: High level of confidence in individual’s name, address, phone, eye color, and shoe size
>>  - Pm: In person verification of a medical degree by making the claimant perform surgery on the verifier.
>> So if you wanted to express that, you could say “P3.Pm” or just “P3” if you weren’t so sure about that medical degree. However, like I described above, I don’t think this is a good solution, as you’d need to get really specific with each attribute. If you really need that level of expressiveness, you’ll want a different solution; VoT doesn’t work for you there. However, just because it doesn’t solve this use case doesn’t mean it isn’t useful in many others. VoT is a tool fit for a purpose that we tried to express in the intro text; it fits in between attribute metadata and single scalar measurements. In fact, you could use it side by side in the same system, and I see no conflict in this.
>> Don’t deny the world a hammer just because you think you need a screwdriver.
>>  — Justin
>>> On Sep 13, 2017, at 1:01 PM, Phil Hunt < <>> wrote:
>>> Section 3.1
>>> Does 3.1 apply to the identifier issued, to the whole assertion? 
>>> An Identity is usually an identifier and a set of claims.
>>> So what about claims?   Some claims may be issued by a provider (and thus are P3) while others may be provided as self-asserted by the subject.  Some, as in banking, may have involved physical documents or other mechanisms, and thus all claims are not equal.
>>> I have trouble determining the effect of P0-P3 and worry that privilege escalation will occur since not all claims are equal.  There isn’t really a way to say “We’re confident this is Justin, we’re just not so sure about his medical degree”.
>>> Consider a university knows student numbers and degrees and courses completed. Is it authoritative over nationality, residence, addresses?  Maybe. Maybe not.
>>> Consider a social network. In many cases they can be considered authoritative over the social network identity (P3) but know nothing about most users.
>>> I’m just not sure identity proofing as expressed is actually useful.
>>> Phil
>>> Oracle Corporation, Identity Cloud Services Architect
>>> @independentid
>>> _______________________________________________
>>> vot mailing list