[Trans] Review of draft-ietf-trans-threat-analysis-15

Ryan Sleevi <ryan-ietf@sleevi.com> Tue, 18 September 2018 22:00 UTC

With the publication of -15, and with the Chairs suggesting [1] again [2]
that it's appropriate to take a thorough review, I've tried to review this
doc in full again. Pre-emptive apologies if the formatting has gotten
messed up, and my kingdom for a decent review tool that would let me leave
these comments more manageably inline.
Apologies for the delays, as I work to translate my notes in the margin
into concrete and actionable feedback, suggestions, and hopefully
productive explanations of the concerns and considerations.

At a high level, I think it’s worth echoing the concerns Andrew Ayer raised
on 2018-05-07 at [3]. Specifically, Andrew highlighted that the document’s
structure, and attempted hierarchy, leads to some duplication of
information and potentially understates culpability and threats. I think this
remains accurate, and the suggestion to focus on describing the attacks
still seems a worthwhile consideration. I believe that such an attempt
would likely be able to resolve a number of comments below, which appear to
be the result of trying to coerce the threats into the hierarchy.

With such a restructure, I think it’d be useful to avoid distinguishing
between syntactic and semantic mis-issuance - the distinction that forms
the foundation of this document’s hierarchy. The document establishes that
semantic mis-issuance means a misleading Subject or Subject Alternative
Name, while everything else is syntactic. However, this understanding of
syntactic mis-issuance muddies the waters between what historically have
been syntactic issues - e.g. BER encoding instead of DER - and what’s been
seen as a semantics issue, such as granting a capability or extended key
usage without the appropriate procedural controls or validation.

The early emphasis on syntax and semantics causes a whole class of
adversarial models to be missed, ones which have been at the forefront of
concerns from existing Log operators and CT consumers. Such attacks include
the ‘malicious’ logging of certificates - that is, syntax and semantics
that are fully conformant with the relevant profiles, but which a Log may
find undesirable to carry in an append-only record. For example,
certificate extensions with ‘objectionable’ content provide such a model.
Another attack that is omitted includes that of denial of service through
‘flooding’ - an issue that has seen more than one Log struggle to keep up -
in which the existing corpus of certificates not present within a given Log
are suddenly presented to that Log. This same adversarial model applies to
revoked certificates, particularly revoked CA certificates, in that they
allow for large corpora of certificates to be created, potentially with
problematic content, and be submitted to Logs.

Similarly, this treatment of syntax and semantics leads to suggestions that
would actively undermine some of the goals of Certificate Transparency. In
particular, it has been pointed out in previous discussions that having CT
Logs perform syntax validation hinders, rather than helps, transparency,
in that it allows an entire class of syntax violations to be swept under
the rug by colluding Logs, preventing transparency around the CA’s
operations. This fundamentally redefines the goals of Certificate
Transparency, in a way that is inconsistent with its widely-deployed usage
among root programs to supervise participants and review CA operations,
and it would be a mistake not to mention this.

Another high-level point is that the document makes heavy use of
parentheticals. While my own writing is clearly just as guilty of this, if
perhaps through abuse of hyphens and commas, I think many of the
parenthetical remarks fall into categories that either derail the
discussion at hand with asides and speculation, or which should otherwise
be appropriately integrated into the text and the document structure
itself. At times, there are even nested or unclosed parentheticals, which
stick out rather substantially.

One last high-level point, before the more detailed line-item feedback,
which is that this document broadly speaks to the notion of “trusting”
Logs, as if that is something that either Monitors or RPs do. In
particular, in the discussions about misbehaving Logs, it’s frequently
suggested that Monitors should not “trust” the Log, or Log misbehavior is
presented as an ‘attack’ against Monitors. While it’s certainly true that
Monitors need to be careful regarding what a Log omits, there’s no risk
or harm in examining what a Log includes - indeed, it’s particularly
beneficial to clients to widely consume data from Logs, even those no
longer recognized by CT clients for purposes of SCT enforcement. Andrew
touched on this in his previous reply, and I don’t think draft-15 really
meaningfully addresses that concern.

[Page 2]
1. Introduction
> Subjects are web sites and RPs are browsers employing HTTPS to access
these web sites.

While only present in the introduction, this definition of RP is overly
narrow, and perhaps not aligned with its common usage. There is the RP as
the human user, and the RP as the software performing the validation. The
definition here focuses only on the latter, and yet conflates the benefits
received with a definition that includes the former.

This doesn’t seem correct, because many of the benefits of CT are derived
not during the processing and validation of certificates by software, but
by the human users that are relying upon that software. If we take a purely
programmatic approach to validation, then that would suggest CT does not
benefit OV/EV certificates, nor their named Subjects, since that
information is not used by browsers to access HTTPS. However, because the
value of that information is nominally in that human users rely on it, it
seems we really have a fourth category of beneficiaries to consider -
end-users.


[Page 3]
1. Introduction
> When a Subject is informed of certificate mis-issuance
> by a Monitor, the Subject is expected to request/demand
> revocation of the bogus certificate. Revocation of a
> bogus certificate is the primary means of remedying
> mis-issuance.

One of the substantial ways that CT has improved the ecosystem is through
its ability to detect mis-issuance by CAs - whether syntactic or semantic -
perhaps best demonstrated through [4]. This paragraph focuses only on the
risk to Subscribers through actively issued certificates in their name,
while all Subscribers (of all CAs) benefit from
having a consistently applied and enforced set of standards. This is most
likely a result of failing to consider trust store vendors as a distinct
category from relying parties (whether software or human users), or perhaps
prematurely aggregating them with Monitors.


[Page 3]
> Certificate Revocations Lists (CRLs) [RFC5280] are the primary means
> of certificate revocation established by IETF standards.
> Unfortunately, most browsers do not make use of CRLs to check the
> revocation status of certificates presented by a TLS Server (Subject).

Two words stand out here as perhaps unnecessary - “primary” and
“unfortunately.” While structurally this entire paragraph feels unnecessary
to the Introduction, the emphasis on CRLs as somehow superior to OCSP
seems like unnecessary editorializing, especially given the Web PKI’s
requirement that both CRLs and OCSP be supported.

Similarly, “unfortunately” reads as unnecessary editorializing,
particularly in light of some of the later discussions around misbehaving
CAs and revocation.


[Page 3]
> The Certification Authority and Browser Forum (CABF)
> baseline requirements and extended validation guidelines do mandate

The full name of the CA/Browser Forum is the “Certification Authority
Browser Forum” - as noted in the CA/Browser Forum Bylaws.

Similarly, the Baseline Requirements and Extended Validation Guidelines are
both titles of documents, and so should be capitalized as such.


[Page 3]
> most browser vendors employ proprietary
> means of conveying certificate revocation status information to their

The choice of “most” versus “many” implies a particular group that can be
quantified here. It’s unclear what that grouping includes - so it may be
more appropriate to suggest “some” or, if appropriate, “many,” but a word
like “most” seems to require a proper definition of what the set includes.


[Page 3]
> Throughout the
> remainder of this document references to certificate revocation as a
> remedy encompass this and analogous forms of browser behavior, if
> available.

While this is located within the introduction, this seems to set up an
assumption that ‘revocation’ refers to specific certificates or to the
combination of Issuer Name and Serial Number. This can be seen later in the
document, most notably in the contentious Section 3.4, but doesn’t reflect
the practiced capabilities of clients. This, like much of the discussion
preceding it, is a result of an unclear focus as to what the Web PKI
constitutes, and whether this document views such revocations (of SPKI) as,
well, revocation. To the extent the document relies on revocation as a
remedy, it may be worthwhile to split out and discuss the different
approaches to revocation that have been practiced - by SPKI, by Issuer &
Serial Number, by Subject Name and SPKI - since these seem to materially
affect the subsequent discussion of threats and remediation.
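
For illustration, a minimal sketch (in Go, with hypothetical naming) of how
two of these approaches differ in what they actually catch - revocation by
SPKI hits every certificate carrying the key, while Issuer & Serial Number
only hits the one certificate:

    package revocation

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
    )

    // spkiHash returns the SHA-256 digest of the certificate's
    // SubjectPublicKeyInfo. Blocking on this value revokes every
    // certificate carrying the same key, regardless of issuer or
    // serial - the behaviour several browsers already implement.
    func spkiHash(cert *x509.Certificate) string {
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return hex.EncodeToString(sum[:])
    }

    // revokedByIssuerSerial matches the CRL/OCSP model: only the
    // exact (issuer, serial) pair is affected, so a re-issuance of
    // the same key under a different serial escapes the revocation.
    func revokedByIssuerSerial(cert *x509.Certificate, issuer, serial string) bool {
        return cert.Issuer.String() == issuer && cert.SerialNumber.String() == serial
    }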


[Page 3]
> A relying party (e.g., browser) benefits from CT if it rejects a
> bogus certificate, i.e., treats it as invalid.

While understandably the introduction is trying to be brief, it ignores the
substantial primary benefit already being derived by multiple browsers,
which is the benefit to ecosystem analysis. Whether this analysis is in
assessing which CAs are responsible for which forms of misissuance [5] or
about assessing compatibility risks, there are substantial benefits to be
derived independent of the treatment of bogus certificates.


[Page 4]
> and rejects the certificate if the
> Signed certificate Timestamp (SCT) [I-D.ietf-trans-rfc6962-bis] is
> invalid.

On an editorial point, it seems this should be Signed Certificate
Timestamp. On a functional point, however, validity here is left
ill-defined. Validity here covers a spectrum: from a Log simply not being
recognized for that purpose, to the SCT being syntactically malformed, to
it carrying an invalid signature, to something being wrong with the
associated Log structure as a result of examining that SCT (e.g. no
inclusion proof, no associated entry).


[Page 4]
> If an RP verified that a certificate that claims to have
> been logged has a valid log entry, the RP probably would have a
> higher degree of confidence that the certificate is genuine.
> However, checking logs in this fashion imposes a burden on RPs and on
> logs.

Here, it’s ambiguous as to what “genuine” means, as this is the only
occurrence in the document. It’s unclear if it’s meant to be an antonym to
bogus (that is, semantic misissuance in the document’s terminology), or
whether it’s also meant to include adversarial misissuance.

It’s also ambiguous as to what the checking entails, since verifying the
entry and its correlation to the certificate can be distinct from the
verification of the inclusion proof.


[Page 4]
> Finally, if an RP were to check logs for
> individual certificates, that would disclose to logs the identity of
> web sites being visited by the RP, a privacy violation.

A potential issue with this construction is that it seems to miss the
existence of read-only Logs - that is, Logs that mirror the full Merkle
tree of the SCT-issuing log, and which can present proofs without having
access to the primary Log’s key material.

It’s unclear whether this is solely referring to online checking - that is,
checking for inclusion proofs at time of validation - in which case, it
seems like 6962-bis already discusses this point in Sections 6.3 and 8.1.4,
along with privacy-preserving approaches. It’s unclear if this document is
referring to something different, then, or if it’s rejecting both the
documented alternatives and those discussed in the community.


[Page 4]
> If Monitors inform Subjects of mis-issuance, and if a CA
> revokes a certificate in response to a request from the certificate's
> legitimate Subject,

This assumption here - that Monitors inform Subjects of mis-issuance -
seems to imply that Monitors are all-knowing. The design of CT is such that
Monitors can be alerted to potential certificates of interest, and Subjects
make determinations about mis-issuance. The most common example of this is
a certificate that was duly authorized and approved, but in whose issuance
the person operating the Subject was not directly involved (e.g. the
domains team versus the marketing team).

This affects some of the adversarial modelling later on, in that Monitors
are presumed to be able to determine mis-issuance, and thus failure to make
that determination, or requiring human intervention, is seen as
adversarial. However, the design of CT, and its practical deployment, is
that Monitors surface interesting events and the Subject makes the
determination as to misissuance or not.


[Page 4]
> Logging of certificates is intended to deter mis-issuance,

I think this conflates detection with deterrence. While it’s true that
detection can have a deterring effect, it seems more accurate to reflect
that the purpose is detection, as reflected in the language of 6962-bis.


[Page 5]
> Monitors ought not trust logs that are detected misbehaving.

I mentioned this in the high-level feedback, but this notion here about log
‘trust’ is one that deserves unpacking. Monitors have strong incentives to
continue to examine certificates from Logs, even those that have been shown
to be providing a split view. As it relates specifically to Monitors,
however, they may not trust a Log to be a complete set of all issued
certificates, but they can certainly trust a Log to contain interesting
certificates, provided the signatures validate.
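
To make that concrete, a rough sketch of the Monitor side (in Go, log URL
hypothetical), polling the RFC 6962 get-sth endpoint - nothing in this
requires ‘trusting’ the log:

    package monitor

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // SignedTreeHead mirrors the JSON returned by the RFC 6962
    // get-sth endpoint.
    type SignedTreeHead struct {
        TreeSize          uint64 `json:"tree_size"`
        Timestamp         uint64 `json:"timestamp"`
        SHA256RootHash    string `json:"sha256_root_hash"`
        TreeHeadSignature string `json:"tree_head_signature"`
    }

    // fetchSTH polls a log for its latest tree head. A Monitor can
    // keep consuming entries this way even from a log it no longer
    // "trusts" for completeness: the signatures still validate, and
    // the certificates inside remain interesting.
    func fetchSTH(logURL string) (*SignedTreeHead, error) {
        resp, err := http.Get(logURL + "/ct/v1/get-sth")
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var sth SignedTreeHead
        if err := json.NewDecoder(resp.Body).Decode(&sth); err != nil {
            return nil, fmt.Errorf("decoding STH from %s: %w", logURL, err)
        }
        return &sth, nil
    }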


[Page 6]
Figure 1

I think this figure is misleading or inaccurate, particularly around steps
1-5. This approach manifests in some of the later discussions around syntax
checking by Logs, but as it relates to this specific figure, it misses the
other parties that may perform the certificate logging. There’s no
requirement - in 6962, in -bis, or as practiced - that a CA log 100%
of its certificates, so Steps 1-5 are more complicated. This can be seen
existence today, as large organizations like Google and Cloudflare perform
the logging and inclusion operation themselves, both for new and existing
certificates.


[Page 7]
> Because certificates are issued by CAs, the top level differentiation
> in this analysis is whether the CA that mis-issued a certificate did
> so maliciously or not.

This split - on intent - is perhaps one of the most disastrous ones to the
stability and functioning of the PKI. Malicious intent here - which itself
leaves ambiguities around compromise that are touched on later in the
document - is difficult to judge or make informed evaluations of. As this
ends up serving as the basis for the ontology, it leaves ambiguities around
things like malicious incompetence and the difficulty in applying Hanlon’s
Razor effectively.


[Page 7]
2. Threats
> An adversary who is not motivated to attack a system is not a threat.

Enough bytes have been spilled on the topic of this definition that I’m not
sure we’d see much progress made. I think it’s worth calling out the
potentially troublesome definition here - which is to say if an adversary
isn’t motivated to attack it, it’s not a threat. Consider an example such
as BygoneSSL [6], which this definition would rule as not a threat until an
adversary was motivated to exploit it, while I think many others would
recognize it as a vulnerability needing to be mitigated.


[Page 7]
2. Threats
> As noted above, the goals of CT are to deter, detect, and facilitate
> remediation of attacks that result in certificate mis-issuance in the
> Web PKI.

I do not believe this aligns with the stated goals of CT, which center on
detection. Deterrence and remediation are important, but in the
intersection of policy and technology, it's important to recognize that CT
cannot and does not try to fix everything related to PKI.


[Page 8]
> The latter attack may be possible for
> criminals and is certainly a capability available to a nation state
> within its borders.

As worded, this seems to suggest uncertainty about the criminal viability,
and through mentioning nation-state attackers, likely implies a greater
degree of complexity and rarity than is perhaps appropriate. I think this
sort of subjective evaluation of probabilities is unlikely to age well;
concretely, this sentence could be deleted, along with the previous and
following ones.


[Page 8]
> It seems unlikely that a
> compromised, non-malicious, log would persist in presenting multiple
> views of its data, but a malicious log would.

I think this sentence either reflects or can lead to a misunderstanding
about what is meant by a split-view. Rather than being a measurement of a
single point in time, because the tree head is signed, once a tree that
does not incorporate a given certificate has been signed and published, the
log is forever presenting a split view, because it cannot reconcile the two
trees.

Similar to the previous note, it seems the document would also be stronger
without speculating on probabilities or likelihoods.


[Page 9]
> Finally, note that a browser trust store may include a CA that is
> intended to issue certificates to enable monitoring of encrypted
> browser sessions.

I think I take issue with this framing here. While factually accurate that,
yes, locally trusted anchors may be used for this purpose, I think as
worded it’s misleading. From a point of view of agency, the browser trust
store is not actively choosing to include such CAs - rather, they are
locally-trusted CAs that the browser trust store is permitting - but the
current language leaves that ambiguous. Further, I think the framing here
is meant to imply that this is the only purpose - that is, monitoring of
such communications - while ignoring other purposes, such as serving as a
local root of trust for an organization or enterprise.

The conclusion of this paragraph suggests that CT is not designed to
counter this, while at the same time it does not acknowledge that 6962-bis
does not rule it out of scope. While 6962 attempted to rule it out, in its
discussion of the spam problem, there was ample discussion regarding the
prescriptiveness of the language around accepted trust anchors, with the
result that CT itself is left policy-agnostic. One could require that even
such MITM certificates, chaining to a locally trusted anchor, be logged.

Based on past feedback being concerned about what is possible, rather than
what is practically implemented, it seems that because it is possible for
CT to address this, and because of the issues with the wording, this
paragraph might be suitable to delete entirely without any loss.


[Page 9]
3.1 Non-malicious Web PKI CA context
> In this section, we address the case where the CA has no intent to
> issue a bogus certificate.

Structurally, I think it is problematic to suggest that intent is
relevant or discernible for the case of CA operations. An actively
malicious CA would be able to mask maliciousness as incompetence, and an
actively incompetent CA is indistinguishable from an actively malicious
one. By orienting the document around intent, it significantly undermines
one of the most valuable contributions that CT brings - namely, that intent
becomes largely irrelevant once you have transparency, because it’s not
what you say, it’s what you do.


[Page 9]
> In the event of a technical attack, a CA may
> have no record of a bogus certificate.

This sentence stood out as reflective of the overall structural issues with
the approach. At only a few pages in, the reader is supposed to know that
there are “bogus” certificates (semantic misissuance) and “not-bogus”
certificates (syntactic misissuance), per Section 1. In Section 2, it’s
established that a CA that is the victim of an attack may be non-malicious,
but the action is equivalent to that of a malicious CA, which is also
“bogus.” The distinction between maliciousness and non-maliciousness of the
CA seems solely to be relegated to a discussion around whether or not the
CA participates in revocation publication, without seemingly any other
criteria at this point. By the time this paragraph is reached, yet another
term has been introduced - “erroneous certificate” - a seeming super-set of
the bogus and non-bogus certificates, distinguished based on the intent of
the CA to issue that certificate, not the intent of the CA to publish
revocation for it.


[Page 10]
3.1.1.2. Misbehaving log
> A
> misbehaving log probably will suppress a bogus certificate log entry,
> or it may create an entry for the certificate but report it
> selectively

Similar to the past discussion about probabilities, I think the ‘probably’
here does more of a disservice to the document than intended. If it’s
focused on attacks, it seems it should enumerate possible manifestations of
those, and their mitigations, without speculating as to probabilities. If
the manifestations result in the same mitigations, perhaps that’s worth
clarifying.


[Page 10]
> Unless a Monitor validates the associated
> certificate chains up to roots that it trusts, these fake bogus
> certificates could cause the Monitors to report non-existent semantic
> problems to the Subject who would in turn report them to the
> purported issuing CA.  This might cause the CA to do needless
> investigative work or perhaps incorrectly revoke and re-issue the
> Subject's real certificate.

There’s a lot going on within this paragraph, so I’ve tried to pull out
some of the most actionable bits. This is the last occurrence in the text
of “roots” - a model that 6962-bis avoids, by speaking about trust anchors,
which can include both self-issued, self-signed roots, and also
intermediate certificates. Collectively addressing this as “trust anchors”
would provide greater consistency through the text.

I think this reveals a structural issue regarding the role of Monitors that
would be worth resolving, as I think it may substantially alter some of the
subsequent modelling. The question is whether or not a Monitor is in a
place to authoritatively report mis-issuance, or whether they merely act as
a signal for potential investigation. In this document, Monitor encompasses
both first-party and third-party monitoring, but only the first-party can
affirmatively determine misissuance. This may refer to the Subject, as the
named entity,  or potentially the Subscriber, as there may be multiple
Subscribers for a given Subject certificate, and only one of those parties
can authoritatively determine misissuance.

As to reporting, the phrase “purported issuing CA” is somewhat problematic,
and is closely correlated with ambiguity in the use of “it” in “up to roots
that it trusts”. A Monitor does not need a set of trust anchors, because it
can evaluate certificate paths against the set of trust anchors the log
presents. If the defense is against a misbehaving log misreporting a chain,
evaluating signatures against the log’s self-reported trust anchors is one
way to mitigate that. Whether or not a given certificate path is
‘interesting’ to the Monitor is not going to be based on a factor of what
it “trusts”, but rather, what certificates it is interested in. For
example, a Monitor doesn’t need to “trust” a given CA to be concerned about
its operations.

Another element to consider in the description of this attack is that it
can arise without requiring any Log misbehaviour. The consideration here is
about Monitors/Subjects doing ‘needless’ work, which is a model I don’t
agree with, as such work is both an expected and essential part of
practical Web PKI usage of CT. Needless or not, either the Monitor or the
Subscriber must determine who the relevant operator of that CA is, in
order to request revocation. Consider a scenario where a
given CA operator, Foo, has two variants of a CA certificate - a
self-signed root and a cross-signed intermediate, where the cross-signature
is provided by the CA operator Bar. Leaf certificates from Foo are issued
through an intermediate that chains to their ‘root’ (self-signed or
cross-signed is irrelevant). A Log may report the chain as terminating to
Foo’s root, suggesting Foo is responsible, it may terminate at Bar’s root,
suggesting Bar is responsible, or it might just have Foo’s intermediate as
the trust anchor, which requires the Monitor/Subject knowing that Foo
operates that intermediate.

While this reads as fairly convoluted, the point is that investigative work is
inherently necessary, not needless, and that a separate set of Monitor
“trusted” roots is unnecessary to mitigating these attacks.


[Page 10]
> So creating a log entry for a
> fake bogus certificate marks the log as misbehaving.

Again, the terminology issues rear their head, in that now we have to
consider fake-bogus and real-bogus in interpreting the scenarios.
Fake-bogus is clarified as those certificates that haven’t been issued by
the indicated CA, but the method of creating such a fake is ill-defined. It
seems we’re discussing
“certificates whose signatures don’t validate,” but that’s not necessarily
indicative of Log misbehaviour. As a demonstration of this, consider an
implementation that performs BER decoding of the TBSCertificate, re-encodes
it in DER, and compares the signature. If the CA has syntactically
misissued in the classic sense - that is, rather than the document’s
broader definition, focusing purely on X.509 and RFC5280’s DER-encoding of
the ASN.1 - then it’s possible that the signature is over the malformed
BER-data, not the (client re-encoded) DER-data. That the Log accepted the
signature over the raw value of the TBSCertificate is not indicative of Log
misbehaviour.
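
A short sketch of the distinction, using Go’s crypto/x509: verification
has to run over the raw, as-submitted TBSCertificate bytes, not a
decoded-and-re-encoded copy, precisely because the signature may cover a
non-DER encoding:

    package verify

    import "crypto/x509"

    // checkIssuerSignature validates the signature the way a Log (or
    // any verifier) should: over the raw TBSCertificate bytes as
    // submitted. An implementation that BER-decodes the
    // TBSCertificate, re-encodes it as DER, and verifies over the
    // re-encoding will reject certificates whose signatures cover a
    // non-DER encoding - a CA syntax problem, not Log misbehaviour.
    func checkIssuerSignature(cert, issuer *x509.Certificate) error {
        return issuer.CheckSignature(cert.SignatureAlgorithm,
            cert.RawTBSCertificate, cert.Signature)
    }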

Similarly, there are CAs that have, in various ways, botched the Signature
or SPKI encoding in ways that clients understand and compensate for -
unnecessary or omitted ASN.1 NULLs are my favorite example. That a Log
understood this compatibility behaviour and accepted the certificate is not
indicative of Log misbehaviour.

That this is listed as an attack at all seems to derive from the later
suggestion regarding Logs performing syntax checks on certificates, which,
while vastly undermining the goal of transparency, would be an attack if
the Log was expected to reject certificates that clients are willing to
accept.


[Page 10]
3.1.1.2.1.  Self-monitoring Subject & Benign third party Monitor
> It is anticipated that logs that are
> identified as persistently misbehaving will cease to be trusted by
> Monitors, non-malicious CAs, and by browser vendors.

The phrase “trusted by” is better replaced with “relied upon”. Neither CAs
nor Monitors ‘trust’ logs, and in the context of verifying the data
structures, neither do Browsers. While they rely upon the Log for various
things depending on their role - Monitors and CAs most concerned with
availability - it’s not to be confused and conflated with trust. A Monitor
is most interested in every possible certificate it can potentially
process, and even a misbehaving Log or CA does not undermine the utility in
looking at these certificates.


[Page 10]
> In this scenario, CT relies on a distributed
> Auditing mechanism [I-D.ietf-trans-gossip] to detect log misbehavior,
> as a deterrent.

While appreciative of the considerations of Gossip, I think it’s premature
to presume that this is the only solution or mitigation. Both 6962-bis and
the community have shown that alternative approaches (such as
centrally-mediated Auditing) are potentially viable, and that they do not
necessitate Gossip.


[Page 11]
> This discrepancy can be detected if there is an exchange of
>  information about the log entries and STH between the entities
> receiving the view that excludes the bogus certificate and entities
> that receive a view that includes it, i.e., a distributed Audit
> mechanism.

I think, as worded, this suggests that clients need to distribute
information about the log entries in addition to the STH. However, for the
given purpose the STH alone is sufficient, as the log would not be able to
provide inclusion proofs to both of the diverging STHs while simultaneously
omitting the certificate from one of them.

Given the STH, determining the affected entry by local construction of the
tree and path also suffices to determine where things are ‘wonky’ and have
a gap.
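
For illustration, a sketch of the RFC 6962 tree head computation that makes
the ‘forever’ property concrete - once an STH over a tree omitting the
certificate has been signed, no consistency proof can ever link it to a
tree that includes it:

    package merkle

    import "crypto/sha256"

    // mth computes the RFC 6962 (Section 2.1) Merkle Tree Hash over
    // the given leaves. Two forked heads - one tree omitting a
    // certificate, one including it - can never be linked by a
    // consistency proof, so a published split view is permanent.
    func mth(leaves [][]byte) [32]byte {
        switch len(leaves) {
        case 0:
            return sha256.Sum256(nil)
        case 1:
            // Leaf hash: SHA-256(0x00 || leaf).
            return sha256.Sum256(append([]byte{0x00}, leaves[0]...))
        }
        // Split at the largest power of two strictly less than n.
        k := 1
        for k*2 < len(leaves) {
            k *= 2
        }
        left, right := mth(leaves[:k]), mth(leaves[k:])
        // Interior node: SHA-256(0x01 || left || right).
        node := append([]byte{0x01}, left[:]...)
        return sha256.Sum256(append(node, right[:]...))
    }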


[Page 11]
> If a malicious log does not create an entry for a bogus certificate
> (for which an SCT has been issued), then any Monitor/Auditor that
> enrounters the bogus certificate (and SCT) will detect this when it
> checks with the log for log entries and STH (see Section 3.1.2.)

I believe this should say checks with the log for inclusion proofs, rather
than entries.


[Page 11]
3.1.1.3  Misbehaving third party Monitor
> Note that independent of any mis-issuance on the part of the CA, a
> misbehaving Monitor could issue false warnings to a Subject that it
> protects.

As noted previously, I think this misstates the role of a Monitor, in that
it presumes a Monitor is a trusted arbiter of truth, rather than as a
signal of particular issues to investigate. While it’s true that an “Evil
Monitor” attack could suppress notification of interesting certificates, I
think this second paragraph - discussing the Monitor being vigilant - is at
odds with the expected Monitor functionality.


[Page 11]
3.1.2.  Certificate not logged
> If the CA does not submit a pre-certificate to a log, whether a log
> is benign or misbehaving does not matter.

This is problematic in that the assumption here is that CAs are the ones
performing the logging. Throughout the document, the description of CAs
performing logging ignores the ability of Subscribers and certificate
holders to perform the logging at their discretion, up to and including
seconds before performing the TLS handshake. As a consequence, it misses an
entire class of attacks that can arise when an inclusion proof for an SCT
cannot yet be obtained against an STH, because the SCT is newer than the
most recently published STH.
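
A rough sketch of the timing check involved (timestamps in milliseconds,
the log’s MMD assumed known):

    package audit

    // inclusionOverdue reports whether a log has violated its Maximum
    // Merge Delay for a given SCT relative to a published STH: the
    // entry was due for incorporation before the STH was signed, yet
    // no inclusion proof exists. The converse case - an SCT newer
    // than the latest published STH - is not misbehaviour: no proof
    // can exist yet, which is exactly the Subscriber-logging scenario
    // described above.
    func inclusionOverdue(sctTimestamp, sthTimestamp, mmd uint64) bool {
        return sctTimestamp+mmd < sthTimestamp
    }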

I believe this is another instance of a compelling reason to re-evaluate
the ontology of attacks and to not attempt to classify them using a
hierarchy.


[Page 12]
3.2.1.1.1.  Self-monitoring Subject
> If a Subject is checking the logs to which a certificate was
> submitted and is performing self-monitoring, it will be able to
> detect the bogus certificate and will request revocation.

It would be better to replace “will request revocation” with “may request
revocation.”.

A bogus certificate may be in the Subscriber’s favor. As the framing
unfortunately presumes that there exists an appropriate entity for every
named Subject, it seems to omit consideration that some of the entities
named within a Subject may benefit from the bogusness. For example, it
could considerably save certificate validation costs if “Google Inc” could
enumerate a host of information about itself that didn’t need to be
validated - information such as postal address. Google, the Subject named
in the certificate, would have no reason to request revocation, because the
bogusness of the certificate is in its favor.


[Page 12]
> A malicious CA might revoke a bogus certificate to avoid having
> browser vendors take punitive action against the CA and/or to
> persuade them to not enter the bogus certificate on a vendor-
> maintained blacklist.

It seems entirely unnecessary to mention intent here, especially when that
‘intended reality’ does not match with ‘actual reality,’ yet leads the
reader to believe that might be a valid result. Instead, simply focusing on
what the malicious CA does, without speculating about the reasoning, seems
to provide a clearer narrative about the potential risks.


[Page 12]
> No component of CT is tasked with detecting this sort of
> misbehavior by a CA.

This feels like a stretch. We’ve already seen CT serving as the basis for
three different clients’ revocation behaviours - Google, Apple, and Mozilla.
In these cases, the view that is presented to the browser vendor is the
authoritative view on the client. Thus, revocation is equal to revocation,
and we achieve that only through CT.


[Page 13]
3.2.1.2.1.  Monitors - third party and self
> If a Monitor learns
> of misbehaving log operation, it alerts the Subjects that it is
> protecting, so that they no longer acquire SCTs from that log.  The
> Monitor also avoids relying upon such a log in the future.  However,
> unless a distributed Audit mechanism proves effective in detecting
> such misbehavior, CT cannot be relied upon to detect this form of
> mis-issuance.  (See Section 5.6 below.)

It’s ambiguous who “they” refers to here - the Monitor or the Subject.
Using the model described in Figure 1, neither of these entities is
responsible for obtaining SCTs - that’s the CA’s role - so it’s unclear
what this is trying to communicate. If it’s meant to inform the CA as
to the status of SCTs, then the communication flow would generally go
Monitor -> Browser -> CA, as the Monitor doesn’t necessarily have a
relationship with the CA, and the CA has no incentive to stop obtaining
SCTs until the Browser no longer considers them.

If it’s meant to inform Subscribers that are self-logging, then the Monitor
doesn’t have a relationship with the certificate Subscriber - just the
Subject - and so the communication flow would again go Monitor -> Browser
-> Subscriber, as Subscribers have no incentives to change until the
Browser no longer accepts.

As to the Monitor “avoids relying upon such a log in the future,” that’s
not accurate: the Monitor has every incentive to continue to examine the
Log even after it’s been demonstrated as malicious and hiding entries,
because the certificates it hasn’t hidden are still applicable to the
Monitor and the Subject. Any avoidance of reliance comes only when the
Monitor has no vested interest in the historic certificates in that Log,
which may never happen for Monitors that wish to provide historically
accurate views (e.g. to assist with investigating issues like [6]).

Finally, the phrase “unless a distributed Audit mechanism proves effective”
seems to again emphasize a design that is merely one of a number on the
table.


[Page 13]
3.2.2.  Certificate not logged
> When a CA does not submit a certificate to a log, whether a log is
> benign or misbehaving does not matter.  Also, since there is no log
> entry, there is no difference in behavior between a benign and a
> misbehaving third-party Monitor.

This attack model is seemingly based on an assumption that a CA is the only
entity that logs, and that a failure of a CA to log is (generally) a
malicious activity. I believe this entire section would need to be reworked
when considering that the entity logging may be the Subscriber, or may even
be a third-party entity, such as Google logging certificates its crawler
sees.


[Page 14]
3.2.2.1.  CT-aware browser
> Since certificates have
> to be logged to enable detection of mis-issuance by Monitors, and to
> trigger subsequent revocation, the effectiveness of CT is diminished
> in circumstances where local policy does not mandate SCT or inclusion
> proof checking.

I don’t think this statement supports its conclusion. It needs to clarify
whether it’s discussing ecosystem effectiveness or per-user effectiveness,
and more carefully describe what tradeoffs are being made.


[Page 14]
3.3.  Undetected Compromise of CAs or Logs
> Because the compromise is undetected, there will be
> no effort by a CA to have its certificate revoked or by a log to shut
> down the log.

In this context, I think “will” is ambiguous as to how far it extends in
the future. Perhaps it’s better to clarify as “Until the compromise is
detected, there will be”


[Page 14]
3.3.1.  Compromised CA, Benign Log
> In other case the goal is to cause the CA to

I think of all my remarks, this is perhaps the least important. I believe
the intent is to say “in other cases” (plural). However, speaking to
motivation and intent do not seem to be particularly beneficial in this
section, as noted in other sections.


[Page 15]
> This sort of attack may be most effective if the CA that is the
> victim of the attack has issued a certificate for the targeted
> Subject.  In this case the bogus certificate will then have the same
> certification path as the legitimate certificate, which may help hide
> the bogus certificate.

I don’t think this attack is sufficiently described, and may be resting on
implicit assumptions about the Monitoring functionality. From the context,
my best guess would be that this is trying to describe where a given
Subject has two certificates issued for it, from the same issuing
intermediate, with different SPKIs in possession of different entities. The
‘bogusness’ of the certificate is that the legitimate Subject did not
authorize the second SPKI, but the Monitor may not be examining or
considering SPKIs, or even just number of certificates, and instead only
considering certification paths.

If that’s the case, then I think it bears spelling out more. This is
another area where the document hierarchy can lead to omissions, since this
is something that Monitors (and Subjects) should be considering in how they
effectively monitor for mis-issuance.


[Page 16]
> If the compromised CA does determine that its
> private key has been stolen, it probably will take some time to
> transition to a new key pair, and reissue certificates to all of its
> legitimate Subjects.  Thus an attack of this sort probably will take
> a while to be remedied.

I think this last sentence is unnecessary speculation. It’s unclear whether
this is trying to describe the world “as spec’d” or “as practiced”, but
given the discussion of revocation and browsers using non-standard forms,
it appears to be the former. If so, then the Baseline Requirements only
permit 24 hours before revocation is required, and although recent
adoptions in the CA/Browser Forum by way of SC6 extend this for some cases,
the given attack is certainly not covered. Thus, it seems this speculation
is incorrect.

If the intent is to describe the world as practiced, then it doesn’t seem
particularly productive to speculate on how that particular scenario would
be handled, as it doesn’t seem to add any new information or value.


[Page 16]
> If the attacker has the ability to
> control the sources of revocation status data available to a targeted
> user (browser instance), then the user may not become aware of the
> attack.

It’s unclear to me what model this is imagining. If the CA is
compromised, its issuer can revoke that CA - and is in fact obligated to do
so - and thus this threat seems to be mitigated by entirely bypassing any
need to trust the revocation details from the compromised CA.

Is this model based on an assumption that if the compromised CA isn’t
revoked, and if the attacker can control all sources rather than just some,
then they can prevent revocation? Is this new information to consider? Is
it arguing for hard-fail revocation, treating the inability to check
revocation status as a failure?


[Page 16]
> A bogus certificate issued by the malicious CA will not match the SCT
> for the legitimate certificate, since they are not identical, e.g.,
> at a minimum the private keys do not match.  Thus a CT-aware browser
> that rejects certificates without SCTs (see 3.2.2.1) will reject a
> bogus certificate created under these circumstances if it is not
> logged.

The description of this attack feels like it’s describing two different
things. On first read, it appears that it’s suggesting the attacker would
issue a bogus certificate that otherwise identically matches an existing,
logged certificate, and transplant SCTs for the true certificate’s
precertificate (or -bis equivalent) onto the bogus certificate. This
‘threat’ is only possible if a client doesn’t actually implement 6962-bis -
in terms of checking that the SCTs match the certificate - and thus lets
itself be confused by “SCTs present” rather than “SCTs match”.

However, the second half describes a CT-aware browser that rejects
certificates without SCTs, which seems orthogonal to whether or not the
bogus certificate matches the SCT. Here, it seems to be saying “You can’t
transplant, and you don’t want to log, so clients reject”. If that’s the
intent, then it seems entirely unnecessary to discuss whether or not the
certificates are identical, because 6962-bis establishes that different
certificates mean different SCTs.
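
To illustrate the “SCTs match” check, a rough sketch of reconstructing the
RFC 6962-style signed structure for an X.509 (non-precert) entry; 6962-bis
encodes this differently, but the binding property is the same. Verifying
the SCT’s signature over this blob with the log’s key is what makes
transplanted SCTs detectable:

    package sct

    import "encoding/binary"

    // signedData reconstructs the RFC 6962 "digitally-signed" input
    // for an SCT over an X.509 entry. Because the certificate's DER
    // is part of the signed input, an SCT lifted from a different
    // certificate cannot verify - a client that only checks "SCTs
    // present" skips exactly this binding.
    func signedData(timestamp uint64, certDER, extensions []byte) []byte {
        buf := []byte{
            0x00, // sct_version: v1
            0x00, // signature_type: certificate_timestamp
        }
        buf = binary.BigEndian.AppendUint64(buf, timestamp)
        buf = append(buf, 0x00, 0x00) // entry_type: x509_entry
        // ASN.1Cert carries a 24-bit length prefix.
        n := len(certDER)
        buf = append(buf, byte(n>>16), byte(n>>8), byte(n))
        buf = append(buf, certDER...)
        // CtExtensions carries a 16-bit length prefix.
        buf = binary.BigEndian.AppendUint16(buf, uint16(len(extensions)))
        return append(buf, extensions...)
    }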


[Page 15 - 16]
3.3.1.  Compromised CA, Benign Log

This section seems to entirely omit the consideration of ‘malicious
content’ as a means of attacking logs. For example, if I issue a
certificate with an X.509v3 extension of an OCTET STRING that contains an
executable version of DeCSS, is that an attack worth considering?

If I issue a certificate containing libelous statements in the Subject
Alternative Name, is that an attack?

If I surreptitiously issue a billion certificates, and then execute a
distributed logging attempt, in order to attempt to cause a log to violate
its MMD, is that an attack?

If I use these billion certificates to make it difficult for Monitors to
effectively track the corpus of certificates, in order to delay the
detection of my malicious certificate buried in these billion ‘noisy’
certs, is that an attack?

If I do all of this with a revoked CA, but the Log doesn’t check
revocation, is that an attack?

If I do all of this with a CA not permitted for digitalSignature, and the
Log does not perform ‘syntax checks’, is that an attack?

These are just a few of the scenarios that have been discussed by CAs and
Log Operators, and the scenario of “compromised” or “malicious” CA is
arguably worthy of a document in itself. By virtue of being an append-only
log, CT doesn’t provide for ‘cleanup’ scenarios that might be addressed for
other forms of defacement, and generally is mitigated by shutting down the
existing log and spinning up a new one, with mitigations for whatever
attacks were made - for example, syntax checks, revocation checks,
rate-limiting by CA, contractual liability for the issuing CA by the Log
Operator, omitting the offending cert, etc. And while that provides a clear
transition point for the ecosystem, it impacts Monitors, Browsers, and CAs
in doing so.


[Page 16]
3.3.2.  Benign CA, Compromised Log
> A benign CA does not issue bogus certificates, except as a result of
> an accident or attack.  So, in normal operation, it is not clear what
> behavior by a compromised log would yield an attack.

If we accept that a CT-aware browser will require SCTs, and further accept
that CAs generally need to obtain SCTs at time of issuance for the majority
of certificates for deployment concerns, then it seems this misses a denial
of service attack against CAs that need to obtain SCTs from those logs. If
the attacker is interested in, say, reducing the utility of CT by reducing
the number of logs, thereby causing CT-aware browsers to no longer require
SCTs, thus allowing for CA compromise, then compromising a log and causing
it to violate its obligations as a Log seems like an excellent path to that
goal.

Alternatively, rather than compromising the Log in a detectable way, it can
be used to deter or discourage benign CAs. For example, rate limiting the
issuance of SCTs to one SCT per millennium, based on issuing CA, could
fully comply with the definition of a CT Log, but effectively render it
useless.


[Page 17]
3.3.3.  Compromised CA, Compromised Log
> It
> might use the private key of the log only to generate SCTs for a
> malicious CA or the evil twin of a compromised CA.  If a browser
> checks the signature on an SCT but does not contact a log to verify
> that the certificate appears in the log, then this is an effective
> attack strategy.

This is another area where the ‘evil twin-ness’ is getting in the way of
clarity. It appears this section is attempting to describe both signature
validation and inclusion proofs. That is, what’s being described is
seemingly “A compromised Log could fail to incorporate an SCT within its
MMD, or could provide a split view” - which is covered in 6962-bis, but the
discussion about evil twins and malicious CAs just detracts from that.

[Page 17]
> To detect this attack an Auditor needs to employ a
> gossip mechanism that is able to acquire CT data from diverse
> sources, a feature not yet part of the base CT system.

6962-bis provides a means for delivering inclusion proofs, including via
OCSP, so I don’t agree with the statement that this is not part of the base
CT system. This again is pushing a particular solution where many may
exist, and where the ecosystem is still working through the options (hence
6962-bis being experimental).


[Page 17]
> In this case CT would need to rely on a distributed
> gossiping audit mechanism to detect the compromise (see Section 5.6).

Same as above, it seems to ignore some of the intentional design goals of
6962-bis, and seems to promote a specific solution as an absolute
necessity, even though a number exist.


[Page 19]
3.4.  Attacks Based on Exploiting Multiple Certificate Chains
> This sort of attack might be thwarted if all intermediate (i.e., CA)
> certificates had to be logged.  In that case CA certificate 2 might
> be rejected by CT-aware browsers.

Structurally, perhaps it’s worth describing this as requiring all
intermediate certificates be disclosed. The method for that disclosure, and
how the client enforces that, seems irrelevant to the attack, as the
assumption is that it is thwarted based on disclosure.

I mention this because multiple browser programs have indicated a
willingness to explore whitelisting intermediates / requiring their
disclosure, whether through CT or through other means (e.g. CCADB [7]).


[Page 19]
> However none of these
> mechanisms are part of the CT specification
> [I-D.ietf-trans-rfc6962-bis] nor general IETF PKI standards (e.g.,
> [RFC5280]).

This gets back into the messiness about whether or not browser-mediated
revocation counts as revocation within this document’s overall threat
model. It is worth calling out that browser-based mediation is, in
practice, ahead of the IETF-standardized mechanisms - many browsers
already support revocation by SPKI or even SPKI-and-Subject (rather than
Issuer-and-Serial or Hash).


[Page 19]
3.5.  Attacks Related to Distribution of Revocation Status
> This is true irrespective of whether revocation is
> effected via use of a CRL or OCSP.

Same as above, this presumes that only these two methods constitute
revocation, while Section 1 appears to support a broader notion and usage
of the term. To make effective use of this document, there needs to be
clarity as to whether we’re talking about revocation as a concept, or
revocation via particular methods.


[Page 20]
> Only if the browser relies
> upon a trusted, third-party OCSP responder, one not part of the
> collusion, would these OCSP-based attacks fail.

I think my issue with this is “only,” as alternative solutions exist.
Adversarial stapling of “Good” responses - whether obtained through a
malicious CA or simply via unexpired, pre-revocation responses - is
somewhat well known in the PKI community, particularly since it’s a
contributing factor to the “unfortunately” mentioned earlier regarding
browser support for OCSP.

Alternatively, the issue may be that it presumes an OCSP responder is the
only mitigation for this, when alternative (non-OCSP-based) techniques can
work to mitigate.


[Page 21]
4.1.  Non-malicious Web PKI CA context
> If the
> Subject is a web site that expected to receive an EV certificate, but
> the certificate issued to it carries the DV policy OID, or no policy
> OID, relying parties may reject the certificate, causing harm to the
> business of the Subject.

This is another example of the syntactic and semantic bifurcation being
detrimental. I would have viewed this as a semantic misissuance - it’s
valid RFC 5280 syntax, but the interpretation and level of assurance that
an RP may place in the certificate is not aligned with the level of vetting
the information actually received. If this is viewed as syntactic, rather than
semantic, as the preceding sentences establish, then I fail to see why a
certificate issued to an improperly-verified-or-confused-Subject is not
equally syntactic, as it relates to the Baseline Requirements profile,
rather than semantic, as is argued by calling them “bogus”.


[Page 21]
4.1.1.1.  Benign log
> If a (pre )certificate is submitted to a benign log, syntactic mis-
> issuance can (optionally) be detected, and noted.  This will happen
> only if the log performs syntactic checks in general, and if the log
> is capable of performing the checks applicable to the submitted (pre
> )certificate.

This is not correct. I believe this mistake derives from the view of
Monitors solely as agents of Subjects. Consider [8] as an example of a
Monitor detecting syntactic mis-issuance without requiring that the Log(s)
perform such checks.

The issue I have broadly with this section is that it attempts to move the
bar from Monitors performing this role of syntax checking, as they do
today, onto Logs. This places greater trust in Logs than necessary, and
creates more risk than the value returned justifies. Within the CT model,
in which operations of the Logs don’t require trust because they are
cryptographically verifiable, this seemingly introduces a huge gap into the
ability to assess the operational compliance of a given Log.
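
As a concrete sketch of the Monitor-side alternative - assuming zlint’s v3
Go API [5], which is roughly the pipeline behind checks like [8] - syntax
checking happens downstream of the log, requiring neither trust in, nor
checks by, the log itself:

    package lintmon

    import (
        "github.com/zmap/zcrypto/x509"
        "github.com/zmap/zlint/v3"
        "github.com/zmap/zlint/v3/lint"
    )

    // lintEntry runs zlint over a certificate pulled from a log
    // entry, the way crt.sh-style Monitors surface syntactic
    // mis-issuance today.
    func lintEntry(der []byte) ([]string, error) {
        cert, err := x509.ParseCertificate(der)
        if err != nil {
            return nil, err
        }
        var findings []string
        for name, result := range zlint.LintCertificate(cert).Results {
            if result.Status == lint.Error || result.Status == lint.Fatal {
                findings = append(findings, name)
            }
        }
        return findings, nil
    }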


[Page 21]
> If this is a pre-certificate
> submitted prior to issuance, syntactic checking by a log could help a
> CA detect and avoid issuance of an erroneous certificate.

Given that the issuance of a pre-certificate is, and must be, treated as
equivalent to the issuance of an equivalent certificate, it’s unclear what
the perceived value is. I think this is somewhat at odds with that goal of
6962-bis in treating the two as equivalent, because it seems to suggest
there’s value in making a distinction.


[Page 21]
> Here too syntactic checking by a log
> enables a Subject to be informed that its certificate is erroneous
> and thus may hasten issuance of a replacement certificate.

Yes, but Monitors fulfill that function today, so it’s unclear whether
there’s a new and distinct attack being remedied.


[Page 21]
> If a certificate is submitted by a third party, that party might
> contact the Subject or the issuing CA, but because the party is not
> the Subject of the certificate it is not clear how the CA will
> respond.

It’s unclear what this is trying to describe. In the context of the
Baseline Requirements, there are procedural requirements for CAs responding
to Problem Reports that cover misissuance, including requiring revocation.

Broadly speaking, however, this fits into the category of speculative
statements that are unclear about the value being added.


[Page 22]
> This analysis suggests that syntactic mis-issuance of a certificate
> can be avoided by a CA if it makes use of logs that are capable of
> performing these checks for the types of certificates that are
> submitted, and if the CA acts on the feedback it receives.

I disagree with this, because it attempts to treat pre-certificates as
distinct from certificates in terms of both policy and practice. Given that
CT is designed to detect CA misissuance, it seems that regardless of
pre-certificate or certificate, the mitigations and response are
effectively the same, and the distinction irrelevant.


[Page 22]
> If a CA
> uses a log that does not perform such checks, or if the CA requests
> checking relative to criteria not supported by the log, then
> syntactic mis-issuance will not be detected or avoided by this
> mechanism.

This seems tautological. “If a CA does not use a Log that performs syntax
checking, the Log will not perform syntax checking”.

Given that Monitors do fill that role today, it’s not accurate to suggest
that syntactic mis-issuance won’t be detected, merely that it will not be
signaled via the presumed SCT extension for signaling syntactic
mis-issuance - which is also circular.


[Page 22]
4.1.2.  Certificate not logged

> If a CA does not submit a certificate to a log, there can be no
> syntactic checking by the log.

I highlight this as another example where the goal seems to shift from
“detection of syntax issues” to specifically Log-based detection. The
distinction about whether the Log or the Monitor performs that function is
not expanded upon in this document, so it seems an unnecessary conflict to
introduce.


[Page 22]
> A Monitor that performs syntactic checks
> on behalf of a Subject also could detect such problems, but the CT
> architecture does not require Monitors to perform such checks.

Neither does the CT architecture require that Logs perform such checks, so it’s
unclear why Logs were omitted from that context or why this was explicitly
called out for Monitors.


[Page 23]
4.1.2.1.  Self-monitoring Subject

I think this whole section suffers from not fully considering
Subject-initiated logging. In particular, even under an ideal model, there
are operational reasons why having a CA log 100% of its certificates is
not necessarily a goal; see the ample discussion of redaction for that. A
non-logged certificate is thus not indicative of an operational error, an
attack, or any other failure by the CA.

In the context of “How will the Subject detect syntactic misissuance if
they only have the cert”, it seems really unclear to me why this would even
need to be discussed in the context of CT, since CT is orthogonal if you
have the certificate.

Title-wise, the notion of self-monitoring subject seems somewhat confusing.
Earlier in the document, the notion of self-monitoring seemed to be
regarding “bogus” certificates for a given Subject/Subject Alternative
Name, but here, it’s seemingly about Monitoring for certificates that the
Subject already has (in which case, what are they monitoring?)


[Page 23]
4.1.3.2.  CT-enabled browser

I’ve raised concerns previously about this, but I think in the document
context of “How will a browser detect syntax issues with certificates it’s
evaluating”, this entire section is unnecessary and includes some
controversial statements presented rather authoritatively.

If the goal is to describe how a Browser vendor may perform a Monitor
function for syntax issues, then the discussion about client-side behaviour
seems irrelevant, because the vendor can evaluate the Logs asynchronously,
and without any of the unnecessary privacy considerations.


[Page 24]
4.2.1.1.  Benign log

> 1.  The CA may assert that the certificate is being issued w/o regard
>      to any guidelines (the "no guidelines" reserved CCID).

I can’t seem to find any definition of CCID as used in this section.
However, since it largely relates to Logs performing syntax checks, my view
has already been stated adequately: I think the document is better off
without that discussion in the first place.


[Page 25]
4.2.1.3.  CT-enabled browser

> As noted above (4.1.3.2), most browsers fail to perform thorough
> syntax checks on certificates.

I think the choice of “fail” reads a bit pejoratively, especially in the
context of RFC 5280, which specifically recommends against profile
enforcement on clients. While browsers have more recently begun to disagree
with this, in line with [9], this could alternatively be stated “do not”
instead of “fail to.” A more complete statement would be: “do not perform
thorough syntax checks on certificates, which is consistent with RFC 5280,
Section 6.1.”


[Page 26]
5.4.  Browser behavior
> Note that issuing a warning to a (human) user is
> probably insufficient,

Another somewhat subjective viewpoint that probably doesn’t belong. Either
this should be removed, as a subjective view, or the text should more
carefully explore the tradeoffs - such as an adversary switching to a
self-signed cert rather than a non-logged certificate if, for some reason,
non-logged certificates resulted in a different user experience.


[Page 27]
5.5.  Remediation for a malicious CA
> Such
> communication has not been specified, i.e., there are no standard
> ways to configure a browser to reject individual bogus or erroneous
> certificates based on information provided by an external entity such
> as a Monitor.

The purpose of this statement is unclear. Who is the agent of
configuration? Is it something that the end-user or system administrator is
doing? Or is it something that the Browser vendor is doing based on
information it receives from Monitors? If the former, how is that different
from saying that there’s no way to configure Microsoft SmartScreen
behaviour on my Slackware Linux machine, and if the latter, what’s the
value proposition of noting different software vendors do different things?


[Page 27]
5.6.  Auditing - detecting misbehaving logs
> Only if Monitors and browsers reject certificates that
> contain SCTs from conspiring logs (based on information from an
> auditor) will CT be able to detect and deter use of such logs.

As noted several times previously, Monitors have little reason to ignore a
Log, as Monitors are interested in drinking from the firehose that is the
certificate ecosystem. While I discussed above, in the ‘malicious log’
scenario, considerations that Monitors must weigh, those are independent of
the status of SCTs, and thus entirely orthogonal.


[Page 27]
> Absent a well-defined mechanism that enables Monitors to verify that
> data from logs are reported in a consistent fashion,

This begins a discussion about SCTs and privacy, but as noted above,
Monitors don’t need to contend with SCTs. A Monitor function can be
addressed by STHs, or it can move the notion of the “trusted third party”
back to the browser vendor, and expect that the browser vendor and the
clients are performing the necessary consistency and inclusion checks.
Since the browser vendor is inherently inside of the threat model for site
operators - since they could always ship code in the browser itself that
allowed for targeted MITM - it doesn’t seem to alter the security
considerations in any substantial way.

That’s not to say we don’t need to care about Log operation, but Monitors
have significantly more flexibility in what they’ll tolerate compared to
browsers/clients.


[1] https://www.ietf.org/mail-archive/web/trans/current/msg03146.html
[2] https://mailarchive.ietf.org/arch/msg/trans/UYL1RpRuKpiY0GOzIGJ5mM2bY3c
[3] https://mailarchive.ietf.org/arch/msg/trans/IijSa8IPZ0oZ9fr1xmq-ETMjnss
[4] https://wiki.mozilla.org/CA/Closed_Incidents
[5] https://zakird.com/papers/zlint.pdf
[6] https://insecure.design/
[7] https://www.ccadb.org/
[8] https://crt.sh/?caid=97708&opt=cablint
[9] https://tools.ietf.org/html/draft-iab-protocol-maintenance