Re: [Rats] [CoRIM] The use case for TDX- and SEV-SNP-measured virtual firmware

Dionna Amalie Glaze <dionnaglaze@google.com> Fri, 12 January 2024 18:40 UTC

From: Dionna Amalie Glaze <dionnaglaze@google.com>
Date: Fri, 12 Jan 2024 10:40:25 -0800
Message-ID: <CAAH4kHboQ1KT9xoOa1tXveP+mZQA9soWjgR5sku5F+8FAUj+3Q@mail.gmail.com>
To: Henk Birkholz <henk.birkholz@sit.fraunhofer.de>
Cc: Tom Jones <thomasclinganjones@gmail.com>, rats@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/rats/RWU9jjezm0_d3qpl6lawUrHtLM8>
Subject: Re: [Rats] [CoRIM] The use case for TDX- and SEV-SNP-measured virtual firmware

On Thu, Jan 11, 2024 at 11:40 PM Henk Birkholz
<henk.birkholz@sit.fraunhofer.de> wrote:
>
> Hi Dionna,
>
> before starting to phrase more detailed replies (and I will come back to
> that!), I'd like to start with a few clarifying question about your
> goals, if you allow me, in-line below.
>
>
> Viele Grüße,
>
> Henk
>
> On 12.01.24 00:31, Dionna Amalie Glaze wrote:
> > On Tue, Jan 9, 2024 at 1:06 PM Henk Birkholz
> > <henk.birkholz@sit.fraunhofer.de> wrote:
> >>
> >> Hi Dionna,
> >>
> >> thank you for bringing the CVM goals here! I think they are a great
> >> addition to the mix. Let's try to figure out some answers to your
> >> questions step by step.
> >>
> >> You are touching on a lot of topics, so I am focusing on adding context
> >> to the RATS side of things first and in the interest of the list's
> >> subscribers only point quickly to one recent SCITT activity up front:
> >>
> >>> https://github.com/ietf-wg-scitt/draft-ietf-scitt-architecture/pull/156
> >>
> >> Pending the approval of that PR, the initial attempt to facilitate DIDs
> >> as a first citizen identifier is a thing of the past.
> >>
> >>
> >> So, RATS: please let me try to add some additional context about in-toto
> >> attestations in the context of RATS & CoRIM to start with, in the way
> >> how I currently understand it.
> >>
> >> The attestations you refer to are described here, I think:
> >>
> >>> https://github.com/in-toto/attestation/tree/main/spec/v1
> >>
> >> In remote attestation land, the concept that can be found at that
> >> pointer is potentially best compared with a RATS Endorsement or NIST's
> >> 3rd-Party Attestation and maybe also NIST's 1st-Party Attestation (a
> >> "self-attestation"), I think. It seems not to be RATS Evidence, I think,
> >> which is the input to a RATS Verifier, and I am happy to exchange more
> >> thoughts about that.
> >>
> >
> > They are certainly more on the software side of things, but I wouldn't
> > suggest that an in-toto attestation couldn't be a carrier of evidence.
> > * We could define a predicate that both reflects a TEE attestation
> > report's fields and includes the TEE attestation report from the
> > device for verification, and its verifier extension would not accept
> > the predicate if it's not properly signed by the manufacturer.
> > * We can define a "fully-double-check SLSA" predicate that will both
> > test that the provenance is properly signed and also test that the
> > input sources and toolchain container build binaries with the same
> > digest. This is evidence for build transparency of measured artifacts
> > in an attested environment, which can then be translated to accepted
> > claims about provenance that are stronger than unchecked endorsements.
> > As a middle ground, you might require that a signed build attestation
> > has a non-repudiation proof with a code transparency service that the
> > verifier must query to accept the build attestation. I'm not sure we
> > have agreement on what counts as evidence for a claim.
>
> 1.) When you write TEE attestation report, are you referring to Evidence
> consumed by Verifiers or to Attestation Results consumed by Relying Parties?
>

Evidence. A TEE attestation report is a device-signed blob that
contains an attested measurement of the target environment.
Verifiers consume this report to produce claims in a signed
"attestation result" token that relying parties consume.
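
To make that split concrete, here's a rough Go sketch of the shape I
have in mind. Every type and function name here is hypothetical (this
is not from go-tpm-tools or any existing verifier), and the actual
signature verification against the manufacturer's root is elided:

    package main

    import (
        "errors"
        "fmt"
    )

    // Evidence stands in for a device-signed blob from the attesting
    // environment, e.g. an SEV-SNP attestation report or a TDX quote.
    type Evidence struct {
        Report      []byte   // raw attestation report
        Measurement []byte   // launch measurement extracted from the report
        CertChain   [][]byte // VCEK/PCK chain back to the manufacturer root
    }

    // AttestationResult is the claims set the verifier signs for relying parties.
    type AttestationResult struct {
        MeasurementOK bool
        SVN           uint
    }

    // appraise plays the verifier role: check the evidence is well formed
    // (real code would verify Report against CertChain and the vendor root),
    // then compare the measurement against provisioned reference values.
    func appraise(ev Evidence, refSVNs map[string]uint) (*AttestationResult, error) {
        if len(ev.Report) == 0 || len(ev.CertChain) == 0 {
            return nil, errors.New("missing evidence or certificate chain")
        }
        svn, ok := refSVNs[fmt.Sprintf("%x", ev.Measurement)]
        if !ok {
            return nil, errors.New("measurement not among reference values")
        }
        return &AttestationResult{MeasurementOK: true, SVN: svn}, nil
    }

    func main() {
        refs := map[string]uint{"abcd": 3} // hypothetical measurement -> SVN
        ev := Evidence{Report: []byte{1}, Measurement: []byte{0xab, 0xcd}, CertChain: [][]byte{{2}}}
        fmt.Println(appraise(ev, refs))
    }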

> 2.) What I now think you are saying is that you would like to increase
> the assurances ("trustworthiness") of toolchain activities, including
> authenticity and accountability of toolchain components. SLSA today
> provides a way to document toolchain behavior. You would like to enrich
> that assertion set with RATS Attestation Results, so you can show that
> the assertions about the toolchain are trustworthy because the toolchain
> was composed exactly as it was intended to be and did what it was
> intended to do and nothing else?
>

I want to increase the assurances on reference values so that the
trustworthiness is not purely in the social contract for the CoRIM
issuer's key management, yes.
So while the CoRIM issuer's key could be considered associated with a
level of operational security, it does not come with provenance
claims. We want to have a direct link to the sources and toolchain
binaries (build container) that were used to produce the binaries
endorsed by a CoRIM. There are plans to further the SLSA security
level assurances to the point where the build environment itself is
attestable, so you don't need to trust the operational security claims
about the builder's sources of non-hermeticity, like network access.
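
As a sketch of what that direct link could look like mechanically,
here's a minimal Go check that the artifact digest a CoRIM endorses is
also the subject of an in-toto v1 Statement carrying SLSA provenance.
The struct is a trimmed subset of the in-toto/SLSA v1 fields, and the
check itself is my own illustration rather than anything in a spec:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Trimmed subset of an in-toto v1 Statement with a SLSA provenance
    // predicate. resolvedDependencies is where the source tree and the
    // toolchain (build container) digests would appear.
    type Statement struct {
        Type          string    `json:"_type"`
        Subject       []Subject `json:"subject"`
        PredicateType string    `json:"predicateType"`
        Predicate     struct {
            BuildDefinition struct {
                ResolvedDependencies []Subject `json:"resolvedDependencies"`
            } `json:"buildDefinition"`
        } `json:"predicate"`
    }

    type Subject struct {
        Name   string            `json:"name"`
        Digest map[string]string `json:"digest"`
    }

    // endorsedByProvenance reports whether the digest a CoRIM endorses is
    // the same artifact the provenance says was built.
    func endorsedByProvenance(provenanceJSON []byte, endorsedSHA256 string) (bool, error) {
        var st Statement
        if err := json.Unmarshal(provenanceJSON, &st); err != nil {
            return false, err
        }
        for _, s := range st.Subject {
            if s.Digest["sha256"] == endorsedSHA256 {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        doc := []byte(`{"_type":"https://in-toto.io/Statement/v1",
          "subject":[{"name":"firmware.fd","digest":{"sha256":"deadbeef"}}],
          "predicateType":"https://slsa.dev/provenance/v1","predicate":{}}`)
        fmt.Println(endorsedByProvenance(doc, "deadbeef"))
    }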

> 3.) You would like to explore the possibility, if the assertion formats
> used by SLSA (in-toto attestations) can not only convey the assertions
> about the toolchain and its activities, but also not only convey (as,
> e.g., by including an EAT) but natively express (and I am uncertain
> here!) remote attestation Evidence if the toolchain environment can also
> takes on the role of an Attester?
>

An "attestation result" is itself evidence from the verifier as an
attester. It attests that the provided evidence matches configured
policy.
The JWT EAT is closer to an in-toto attestation in representation than
CoRIM is, given the JSON encoding, but I do appreciate the desire for
a compact representation with CBOR. I think the two projects have a
lot of goals in common.

The CoRIM spec describes topologies of attesters going into creating
evidence for a verifier, and I don't see much difference between that
and using a verifier as an intermediate attester to translate EATs and
other evidence into simplified EATs with more holistic claims that
downstream users can more easily appraise.
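
For what "verifier as intermediate attester" could look like in code,
here's a small sketch that re-signs already-appraised claims as a JWT,
assuming the github.com/golang-jwt/jwt/v5 library; the claim names are
illustrative, not registered EAT claims:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "fmt"
        "time"

        "github.com/golang-jwt/jwt/v5"
    )

    // mintResult re-signs appraised claims as a compact token that a relying
    // party can check against the verifier's key alone, instead of appraising
    // the original vendor-specific evidence itself.
    func mintResult(key *ecdsa.PrivateKey, measurementOK bool, svn uint) (string, error) {
        tok := jwt.NewWithClaims(jwt.SigningMethodES256, jwt.MapClaims{
            "iss":            "example-verifier", // hypothetical issuer
            "exp":            time.Now().Add(5 * time.Minute).Unix(),
            "measurement_ok": measurementOK,
            "svn":            svn,
        })
        return tok.SignedString(key)
    }

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        token, err := mintResult(key, true, 3)
        fmt.Println(len(token) > 0, err)
    }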

> 4.) As toolchains are potentially running in CVMs, corresponding
> Verifiers must also assess the confidentiality capabilities of the
> Attester and express them in the Attestation Result (see, for example,
> https://www.ietf.org/archive/id/draft-ietf-rats-ar4si-05.html#name-specific-claims)
>

Yes, see also https://github.com/slsa-framework/slsa/issues/975
Verifiers themselves can run in attested environments, producing
simpler evidence for relying parties to appraise as part of
attestation result appraisal, for due diligence.
Passing along specific claims doesn't seem to me to be about
toolchains so much as CoRIM validity. The question becomes,

1. do we want to force all CoRIM issuers to run a service to maintain
the freshness of CoRIMs' claims and keep the validity time range short
(NVIDIA's strategy with NRAS), or
2. do we want to allow for more of a state update model where CoRIM
validity in the format itself is dropped and validity can be long and
based on signing key certificate expiration/revocation (keys can be
revoked with OCSP), or
3. do we want to allow for CoRIM documents themselves to have an
Online CoRIM Status Protocol? This would allow verifiers to
determine more readily what specific claims to attach for a particular
attestation (a rough sketch of such a check follows below). The CoRIM
status could be more in-depth about which CVEs etc. are associated
with it, as well as other risk factors, like the vendor having gone
out of business.
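
To be clear, option 3 is entirely hypothetical; nothing like an
"Online CoRIM Status Protocol" exists today. A verifier-side check
could look roughly like the Go sketch below, where the endpoint, query
parameter, and response fields are all invented for illustration (the
analogy is an OCSP responder keyed by CoRIM tag-id rather than by
certificate serial):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // CorimStatus is a made-up response shape for the hypothetical status
    // protocol described above.
    type CorimStatus struct {
        TagID     string    `json:"tag_id"`
        Good      bool      `json:"good"`       // analogous to OCSP "good"
        NotAfter  time.Time `json:"not_after"`  // freshness window for this answer
        KnownCVEs []string  `json:"known_cves"` // risk factors to surface as claims
    }

    // checkStatus queries a hypothetical responder for a CoRIM tag-id, much
    // like an OCSP responder is queried for a certificate serial number.
    func checkStatus(endpoint, tagID string) (*CorimStatus, error) {
        resp, err := http.Get(endpoint + "?tag=" + tagID)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var st CorimStatus
        if err := json.NewDecoder(resp.Body).Decode(&st); err != nil {
            return nil, err
        }
        if !st.Good || time.Now().After(st.NotAfter) {
            return &st, fmt.Errorf("CoRIM %s is not currently endorsable", tagID)
        }
        return &st, nil
    }

    func main() {
        // Example only; there is no real responder at this URL.
        fmt.Println(checkStatus("https://corim-status.example", "example-tag-uuid"))
    }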

CoRIMs are about static software whose behavior we understand
dynamically, so we ought to have a more federated model for
maintaining the epistemics of the software ecosystem, with multiple
providers that can offer their data for free, through paid
subscriptions, or what-have-you.


> >
> >> in-toto attestations' outer layer are in-toto Envelopes with a JSON
> >> encoding, which are signed using DSSE:
> >>
> >>> https://github.com/secure-systems-lab/dsse/blob/v1.0.0/envelope.md
> >>
> >> DSSE - Dead Simple Signing Envelope - is an alternative approach to
> >> JOSE's JWS. Some reasoning about why it is defined as it is can be found
> >> here:
> >>
> >>> https://github.com/secure-systems-lab/dsse/#why-not
> >>
> >> In CoRIM work, we are also defining semantics that could be viewed as a
> >> type of pre-defined set of predicates as defined by in-toto:
> >>
> >>> https://github.com/in-toto/attestation/tree/main/spec/predicates
> >>
> >> I am under the impression (maybe wrongfully so!) that the
> >> similarities seem to end there.
> >
> > A predefined set of predicates does indeed sound to me like what CoRIM
> > is, and it's why I think the extensibility model is clunky. If you
> > were to make it cleanly extensible such that a profile has full
> > control over its evidence schema, then your differences from in-toto
> > would be JSON vs CBOR.
> >
> > I like the idea of different platform providers coming together to
> > try to fit into a similar representation to simplify verification,
> > but I don't like the resulting abstraction being what every platform
> > provider henceforth needs to find a way to model themselves on top of.
> > I think we'd be telling a different story about reference values and
> > measurement-values-map if there were a standard evidence format that
> > RVPs and manufacturers alike would target. Right now though, there is
> > an implicit "turn the blob of evidence into a form that fits all these
> > triple record types".
> > What is really abstractable into the same type such that the
> > underlying technology doesn't actually matter?
> >
> > I think we have the document's validity time range (which could be
> > modeled in the signing key's certificate expiration) and security
> > version numbers, but the rest is extra that I don't really get.
> > Why is there a SWID used in the document when it's meant to be
> > describing evidence and endorsements as claims, when SWID is purely
> > metadata? Isn't this kind of thing more related to supply chain
> > integrity in the SCITT workstream?
> >
> > For another example of overgeneralization, an environment-map has the
> > class, instance, and group fields. Class suggests some amount of
> > detail, but no, it could be a class-id that's arbitrary bytes. Instance
> > can also be arbitrary bytes. Group also arbitrary bytes. They have
> > been expanded so much to fit a few use cases that they no longer carry
> > meaning.
> > I think there's value in understanding how to translate a concise
> > binary object into something more human-readable, which the initial
> > fields attempted to do. There's value in not needing a pretty printer
> > for every profile, but I think that maybe schemas should be more
> > shareable and reusable without creating a fragile base class [1].
> > Indeed you can have URLs to cddls that name some type(s) that you then
> > include in the cddl of your profile for the names to get printed
> > nicely.
> >
> > There are multiple types nested inside a corim that have extension
> > points that are subject to interpretation by a profile, and some types
> > are closed with some number of alternates while others are open with
> > some initial alternates.
> > A profile could extend or restrict those alternates, so I don't really
> > see why every profile would need to paste itself on top of the
> > existing identity types and triples.
> >
> > Profiles can then be schemas (again like in-toto) that have their own
> > interpretation for translating evidence into claims. The "own
> > interpretation" is how CoRIM currently defines profiles anyway: "A
> > profile allows the base CoRIM schema to be customised to fit a
> > specific Attester."
> >
> > I think that a lot of the definitions described so far have been
> > abstractions to fit a small number of use cases (DICE, ARM CCA's PSA,
> > TPM), and I think that the types defined in CoRIM can perhaps be
> > reusable in their profiles, but I don't think that the heft of the
> > CoRIM document is serving itself.
> >
> > The description of conditional triples and conditional series triples
> > and multiple environment conditional triples... it feels very much
> > like an awkward virtual machine to program with a verification policy.
> > There's a whole expression language that Intel defined in their
> > profile for CoRIM draft. I don't know where reference value ends and
> > policy description (allegedly out of scope for CoRIM) begins.
> >
> > [1] https://en.wikipedia.org/wiki/Fragile_base_class
> >
> >>
> >> Two of CoRIM's goals - in a simplified nutshell - are compactness
> >> combined with standardized signing. The JSON encoding prescribed by
> >> in-toto seems to be going down a different path, but hypothetically it
> >> might be possible to transfer all semantics covered in CoRIM into
> >> in-toto predicates. The DSSE uses a signing scheme that seems to make
> >> some unique choices on flexibility and extensibility, of which I would
> >> be careful to assume that they can be simply adopted as is. I am happy
> >> to exchange more thoughts on these topics, too.
> >>
> >> We definitely agree on the goal to not overly burden Verifiers with
> >> respect to their duty of appraisal of Evidence. There is a lot more in
> >> your email, but I am stopping here for now so that we can work through
> >> your illustrated goals and corresponding questions iteratively, if that
> >> is okay for you.
> >>
> >>
> >> Viele Grüße,
> >>
> >> Henk
> >>
> >> p.s. I started the email off with your reply to Tom instead of your
> >> initial email. Sorry!
> >>
> >>
> >> On 09.01.24 20:34, Dionna Amalie Glaze wrote:
> >>> On Mon, Jan 8, 2024 at 5:15 PM Tom Jones <thomasclinganjones@gmail.com> wrote:
> >>>>
> >>>> I don't understand the process that would allow a 3rd party attestor to make any assertion about what happened in the Google cloud (or any other cloud for that matter.)  Does anyone else understand the basis for virtual instances being attested as secure?
> >>>
> >>> The process is by transferring trust to a third party attester that
> >>> you do trust, and ensuring that the attested environment is heavily
> >>> protected from host tampering. This is the idea behind Trusted
> >>> Execution Environments, and is a very different threat model than
> >>> folks typically work with.
> >>> For AMD SEV-SNP or Intel TDX, you have to trust that the chip is doing
> >>> its job right, such that you can take the chip's signature of a
> >>> virtual instance's boot state at face value, so long as the key
> >>> certificate roots back to the manufacturer's published root of trust.
> >>> What Google then certifies is what the measurement of the boot means,
> >>> since we're providing the firmware. At first this will just be "we
> >>> signed the measurement", but then we'll add claims like, "this
> >>> measurement is producible through documented means on a binary that
> >>> was built from sources X and toolchain container Y"
> >>>    and then you can go further down the rabbit hole of whether you trust the
> >>> builder, or if X + Y have the difficult-to-attain property that a
> >>> clean rebuild yields exactly the same bits. You can audit the sources
> >>> to establish trust in the firmware, and you can continue the "who
> >>> built the toolchain container and do I trust them?" unfathomably long
> >>> chain of builders building builders.
> >>>
> >>>> ..tom
> >>>>
> >>>>
> >>>> On Mon, Jan 8, 2024 at 4:33 PM Dionna Amalie Glaze <dionnaglaze=40google.com@dmarc.ietf.org> wrote:
> >>>>>
> >>>>> Hi y'all, I've touched on the issue of confidential VMs (CVMs) a few times in my issues and emails to this list, but I'd like to lay out exactly what we'd like to be able to enable with RATS.
> >>>>>
> >>>>> # Goals
> >>>>>
> >>>>> Our goal is for CVM hardware attestations of Google-provided TCB to be linked to
> >>>>> 1. verifiable authenticity: signed corim measurements
> >>>>> 2. auditable measurements: the signed measurement also points to a supply chain transparency report for the measured binary. A document or software package we publish shows how to calculate the measurement from the binary, and the transparency report binds the binary to an auditable source tree at commit X built with toolchain container Y, signed by an organizationally endorsed builder key attesting that the build follows SLSA L3 operational security requirements.
> >>>>>
> >>>>> and ephemeral claims, e.g.,
> >>>>> 3. vulnerability reporting: short-lived certificates of firmware status, like "has the most up to date security version number" or "is subject to CVE xyz. Restart your instance to get on the latest version". This could be modeled as a CoRIM endorsement of a claim like "uptodate as of TIMESTAMP".
> >>>>> 4. Platform security reporting: short-lived certificates of platform firmware status, like "you can be sure that an attestation's TCB is >= x anywhere in the fleet"
> >>>>>
> >>>>> # Supply chain standards
> >>>>>
> >>>>> There are further things you can do with the transparency report like non-repudiation through hosting the build attestation with a transparency service that has append-only logs after identity-proofing, but there seems to be a fundamental disagreement between the IETF SCITT workstream and the sigstore.dev project on how to achieve that, since SCITT wants W3C DID identities, and sigstore.dev is already built to use OIDC. I don't know how that all is supposed to be consonant with RATS, since there's nothing in the corim or eat documents about using DID for identities. There is EAT binding to OIDC tokens though. Is there anyone in the RATS group that is participating in the SCITT effort that can explain this to me?
> >>>>>
> >>>>> The SLSA provenance schema itself is defined in terms of a completely different attestation format called in-toto (https://in-toto.io), and communication I've had with them is that in-toto should be considered an alternative carrier format to CoRIM to fit into the RATS framework. If we want to link the reference measurement to an in-toto attestation, that seems like something verifier-specific that we'd need to say, "hey if you want to ensure the firmware measurement is not only signed, but built transparently, then download the SLSA attestation in dependent-rims. By the way if there's more than one thing in dependent-rims, you can understand any url with prefix X to be a firmware build attestation from Google" which is an unfortunate complexity.
> >>>>>
> >>>>> # Modeling CVM attestation
> >>>>>
> >>>>> I'm trying to understand how to fit all these goals into the RATS framework such that we can propose extensions to open source verifiers that aren't overly burdensome or highly specific to each particular package we want to provide reference values (and provenances) for.
> >>>>>
> >>>>> In terms of the firmware measurement, we can deliver a CoRIM through a UEFI variable pointed to by the NIST SP 800-155 unmeasured event, and we can give the AMD SEV-SNP VCEK certificate through extended guest request, but everything else seems to be up to the verifier to collect independently of the VM.
> >>>>>
> >>>>> ## Evidence collection
> >>>>>
> >>>>> The way we're collecting attestations at the moment is through a recommended software package https://github.com/google/go-tpm-tools that wraps up a vTPM quote with a TEE quote and supporting certificates as a protocol buffer. I'm not clear if this unsigned bundling process should be modeled as any particular thing in the RATS framework. I think we're working with the "passport model" of attestation.
> >>>>>
> >>>>> I don't have a sense of how the WG foresees how evidence should be bundled to give to a verifier. I'm working from a vendor-specific understanding at the moment that whatever verifier service you use, you need to use their format and API, but of course ideally I'd like this to be more of a federated arena where you can have n-of-k verifiers say some evidence matches policy, and the evidence is not too vendor-specific for that to be out of the question.
> >>>>>
> >>>>> ## CVM Profiles
> >>>>>
> >>>>> Whereas Google has an attestation verifier service that generates an EAT with its own claims bound to an OIDC token (for the Confidential Space product), we'd like to use more standard claims, like AMD SEV-SNP measurement, Intel TDX MRTD, etc. Azure's attestation service has their own x-ms-* extensions for this that will hopefully help AMD and Intel align on how claims should be proposed for the CoRIM format.
> >>>>>
> >>>>> Supposing we do get profiles from Intel and AMD for their CVM attesting environments (more below), those environments sign quotes / attestation reports that serve as evidence for the claims defined in those profiles.
> >>>>>
> >>>>> I as a Reference Value Provider want to be able to provide a document that says something that covers 1 and 2 up front like, "if your AMD measurement is contained in {x, ...} or your TDX measurement is contained in {y, ...}, then you're running Google-authentic virtual firmware with security version n. The firmware this measures can be found at z".
> >>>>>
> >>>>> My understanding of how to do this is for the firmware CoRIM to have a single CoMID tag and the SLSA provenance linked from dependent RIMs.
> >>>>> The CoMID tag will have lang: en-us, tag-identity: some-uuid we generate before signing, and triples-map containing some reference triples.
> >>>>> We have reference triples for both AMD and TDX by using different environment-maps with different class fields.
> >>>>> AMD SEV-SNP's class is up to AMD to profile, but let's just say it's a class-id for the VCEK extension oid prefix 1.3.6.1.4.1.3704.1.1. The measurement-map for this can have an mkey or not. If we had one, I'm unsure if it's something that Google would define or if it's still up to AMD. If Google, we could use a uuid that stands for Google Compute Engine?
> >>>>> The mval as a measurement-values-map would then contain our AMD firmware svn, and AMD profile-specific claims, but I think we'd just give the measurements and some form of acceptable policy specification. We just have one guest policy we apply everywhere, but if that changes we probably need the AMD profile to have expressions like ranges, lower- and upper-bounds for policy components.
> >>>>> For Intel, they'd need a similar profile for the TDREPORT components as claims.
> >>>>>
> >>>>> I say measurements and not measurement even though we're talking about a single firmware binary because both AMD and TDX can have multiple measurements based on the VM construction, such as how many vCPUs it launched with (AMD has VMSAs and Intel has TDVPS).
> >>>>> For now our security version number matches what we measure as EV_S_CRTM_VERSION in PCR0, but that may change if there are technology-specific changes.
> >>>>>
> >>>>> As far as I understand, the Intel profile for CoRIM only supports the boot chain up to the quoting enclave (QE) in terms of its TCB version, but the profile does not describe the QE as its own attesting environment for SGX enclave or TDX VM. The attesting key is generated in the QE and is signed by the PCE's hold on the PCK, which is per-machine-per-TCB (ppid + pceid). The quote wraps around the attesting key's signature for verification against their non-x.509 format.
> >>>>>
> >>>>> AMD similarly does not have a profile for the SNP firmware as an attesting environment for an SEV-SNP VM.
> >>>>>
> >>>>> # Evidence Appraisal
> >>>>>
> >>>>> Setting aside evidence formats, I want to really understand how we go from a signed CoRIM and a CVM attestation to an attestation result (which I'll handwave is some JWT representation of the accepted claims).
> >>>>>
> >>>>> We somehow get the VCEK or PCK certificate and attestation report / quote, and the Google firmware CoRIM to the verifier. The verifier can verify the evidence back to the manufacturer with this forwarded (or cached) collateral and introduce every quote/report field as claims of the target environment.
> >>>>> Let's say Google's code signing root key is in the trust anchor, so any CoRIM we sign is trusted.
> >>>>>
> >>>>> If I read the CoRIM document about matching reference values against evidence, the document starts talking about conditional endorsements instead, which are a different triple from reference-value-triples. We discussed a little in the Github issues that reference values are a special kind of endorsement, but it's still jarring. It goes on to say that reference-value-triples is essentially redundant with the conditional-endorsement-triples, but you can use either. Then there's "In the reference-triple-record these are encoded together. In other triples multiple Reference Values are represented more compactly by letting one environment-map apply to multiple measurement-maps."
> >>>>>
> >>>>> It seems "Conditional Endorsement" is philosophical, and "conditional-endorsement-triples" is one implementation of the idea, and "Reference Value" is philosophical, but "reference-value-triples" is one implementation of the idea. Another implementation of "Reference Value" as an mkey of a "conditional-endorsement-triples", and the mval is more explicit about what claims are introduced. For "reference-value-triples", I don't see any explicit representation of a claim, rather, reference-value-triples lead to "authorized-by" getting added to fields of an Accepted Claim Set entry which itself is only a conceptual type to help understand appraisal, but not an actual claim itself–is this where a profile-defined claim needs to clarify meaning? I see this authorized-by as conceptually different from the optional field of a measurement-map, since that is from the CoRIM that I've signed and isn't part of an attestation result representation.
> >>>>>
> >>>>> If I'm looking at a JWT with an AMD profile claim about the measurement value, I'd like another claim that the measurement value is signed by Google, or a stronger claim that the measurement value was signed by a trusted source, and the build provenance is [some google URL to the SLSA provenance].
> >>>>> Again though, if at all possible these claims should appeal more broadly than just Google.
> >>>>>
> >>>>> --
> >>>>> -Dionna Glaze, PhD (she/her)
> >>>>> _______________________________________________
> >>>>> RATS mailing list
> >>>>> RATS@ietf.org
> >>>>> https://www.ietf.org/mailman/listinfo/rats
> >>>
> >>>
> >>>
> >
> >
> >



--
-Dionna Glaze, PhD (she/her)