[Trans] responses to Ryan's detailed comments on draft-ietf-trans-threat-analysis-15

Stephen Kent <stephentkent@gmail.com> Tue, 25 September 2018 18:41 UTC

From: Stephen Kent <stephentkent@gmail.com>
To: trans@ietf.org
Message-ID: <071bd596-07ec-fe8a-861c-3ef181fec848@gmail.com>
Date: Tue, 25 Sep 2018 14:41:32 -0400
Archived-At: <https://mailarchive.ietf.org/arch/msg/trans/2P0wmYba-t3u43MrmLf8zcu8U5Y>

Ryan,

Below are responses to your detailed comments. Also, I looked at the 
first version of this document, accepted by the WG in June 2015, and it 
used the same taxonomy. So, it has been over 3 years since the document 
structure was adopted.

> [Page 2]
> 1. Introduction
> > Subjects are web sites and RPs are browsers employing HTTPS to 
> access these web sites.
>
> While only present in the introduction, this definition of RP is 
> overly narrow, and perhaps not aligned with its common usage. There is 
> the RP as the human user, and the RP as the software performing the 
> validation. The definition here focuses only on the latter, and yet 
> conflates the benefits received with a definition that includes the 
> former.
I have changed the text to reflect that RPs are _users_ of browsers ...
> This doesn’t seem correct, because many of the benefits of CT are 
> derived not during the processing and validation of certificates by 
> software, but by the human users that are relying upon that software. 
> If we take a purely programmatic approach to validation, then that 
> would suggest CT does not benefit OV/EV certificates, nor their named 
> Subjects, since that information is not used by browsers to access 
> HTTPS. However, because the value of that information is nominally in 
> that human users rely on it, it seems we really have a fourth category 
> of beneficiaries to consider - end-users.
The changed definition of RPs now refers to users, addressing your concern.
>
>
> [Page 3]
> 1. Introduction
> > When a Subject is informed of certificate mis-issuance
> > by a Monitor, the Subject is expected to request/demand
> > revocation of the bogus certificate. Revocation of a
> > bogus certificate is the primary means of remedying
> > mis-issuance.
>
> One of the substantial ways that CT has improved the ecosystem is 
> through its ability to detect misissuances by CAs - regardless of 
> syntactical or semantic, perhaps best demonstrated through [4]. This 
> paragraph overall only focuses on the risk to Subscriber’s through 
> actively issued certificates in their name, while all Subscribers (of 
> all CAs) benefit from having a consistently applied and enforced set 
> of standards. This is most likely a result of failing to consider 
> trust store vendors as a distinct category from relying parties 
> (whether software or human users), or perhaps prematurely aggregating 
> them with Monitors.
I agree that the ecosystem can benefit overall from CT. I am aware of 
no definition of a "trust store vendor" in 6962-bis, nor in any TLS 
spec, nor any other RFC. I assume the term applies to a set of parties 
who curate sets of trust anchors that are made available to RP software, 
right? Given that there appears to be no characterization of such 
vendors in IETF docs, I am not inclined to expand the set of CT system 
elements to add them at this stage.
>
>
> [Page 3]
> > Certificate Revocations Lists (CRLs) [RFC5280] are the primary means
> > of certificate revocation established by IETF standards.
> > Unfortunately, most browsers do not make use of CRLs to check the
> > revocation status of certificates presented by a TLS Server (Subject).
>
> Two words stand out here as perhaps unnecessary - primary and 
> unfortunately. While structurally, this entire paragraph feels 
> unnecessary to the Introduction, the emphasis on CRLs as somehow being 
> superior to OCSP seems an unnecessary editorializing, especially given 
> the Web PKI’s requirement that both CRLs and OCSP be supported.
Your comment caused me to realize that I ought not to have used the term 
"Web PKI", because it does not appear to be defined in any RFCs. We did have 
a WG that was supposed to create RFCs about the Web PKI, but it failed, 
without publishing any documents, so we don't have definitions for what 
requirements exist for the Web PKI. I have removed all appearances of 
the term from the threat analysis doc.
>
> Similarly, “unfortunately” strikes as an unnecessary editorializing, 
> particularly in light of some of the later discussions around 
> misbehaving CAs and revocation.
I've deleted the sentence beginning with "Unfortunately".
>
>
> [Page 3]
> > The Certification Authority and Browser Forum (CABF)
> > baseline requirements and extended validation guidelines do mandate
>
> The full name of the CA/Browser Forum is the “Certification Authority 
> browser Forum” - as noted in the CA/Browser Forum Bylaws.
Editorial, but fixed.
>
> Similarly, the Baseline Requirements and Extended Validation 
> Guidelines are both titles of documents, and would seem to be 
> editorially capitalized.
Editorial, but fixed.
>
>
> [Page 3]
> > most browser vendors employ proprietary
> > means of conveying certificate revocation status information to their
>
> The choice of “most” versus “many” implies a particular group that can 
> be quantified here. It’s unclear what that grouping includes - so it 
> may be more appropriate to suggest “some” or, if appropriate, “many,” 
> but a word like “most” seems to require a proper definition of what 
> the set includes.
>
changed to "some"
>
> [Page 3]
> > Throughout the
> > remainder of this document references to certificate revocation as a
> > remedy encompass this and analogous forms of browser behavior, if
> > available.
>
> While this is located within the introduction, this seems to setup an 
> assumption that ‘revocation’ refers to specific certificates or to the 
> combination of Issuer Name and Serial Number. This can be seen later 
> in the document, most notably in the contentious Section 3.4, but 
> doesn’t reflect the practiced capabilities of clients. This, like 
> most of the discussion preceding this, is a result of an unclear focus as to 
> what the Web PKI constitutes, and whether this document views such 
> revocations (of SPKI) as, well, revocation. To the extent the document 
> relies on revocation as a remedy, it may be worthwhile to either split 
> out and discuss different approaches to revocation that have been 
> practiced - which include by SPKI, by Issuer & Serial Number, by 
> Subject Name and SPKI - since these seem to materially affect the 
> subsequent discussion of threats and remediation.
>
I broadened the discussion of browser revocation mechanisms to 
accommodate a comment from Rich, despite the lack of any IETF 
documentation on such proprietary mechanisms. I don't want to devote 
more text to describing such mechanisms.
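As an aside, the distinction Ryan draws between revocation granularities can be made concrete with a small sketch. This is purely illustrative - the record shapes and helper names below are invented for the example, and come from neither 6962-bis nor any browser's proprietary mechanism:

```python
import hashlib

def spki_hash(spki_der: bytes) -> str:
    """SHA-256 over the DER-encoded SubjectPublicKeyInfo."""
    return hashlib.sha256(spki_der).hexdigest()

def is_revoked(cert: dict, revocations: list) -> bool:
    """Match a cert against revocation records of three granularities:
    by (issuer, serial), by SPKI hash, or by (subject name, SPKI hash)."""
    for entry in revocations:
        kind = entry["kind"]
        if kind == "issuer_serial" and \
                (cert["issuer"], cert["serial"]) == (entry["issuer"], entry["serial"]):
            return True
        if kind == "spki" and spki_hash(cert["spki"]) == entry["spki_sha256"]:
            return True
        if kind == "subject_spki" and \
                (cert["subject"], spki_hash(cert["spki"])) == (entry["subject"], entry["spki_sha256"]):
            return True
    return False
```

A cert re-issued under a new serial but with the same key escapes an issuer-and-serial record yet still matches an SPKI-based one, which is why the choice of granularity materially affects the later discussion of revocation as a remedy.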

>
> [Page 3]
> > A relying party (e.g., browser) benefits from CT if it rejects a
> > bogus certificate, i.e., treats it as invalid.
>
> While understandably the introduction is trying to be brief, it 
> ignores the substantial primary benefit already being derived by 
> multiple browsers, which is the benefit to ecosystem analysis. Whether 
> this analysis is in assessing which CAs are responsible for which 
> forms of misissuance [5] or about assessing compatibility risks, there 
> are substantial benefits to be derived independent of the treatment of 
> bogus certificates.
>
yes, the intent in the intro is to be brief. But, there is also the 
issue that "ecosystem analysis" is an activity outside the scope of IETF 
standards, and it is not cited by 6962-bis as a motivation for CT.
> [Page 4]
> > and rejects the certificate if the
> > Signed certificate Timestamp (SCT) [I-D.ietf-trans-rfc6962-bis] is
> > invalid.
>
> On an editorial point, it seems this should be Signed Certificate 
> Timestamp.
Editorial, but I fixed it.
> On a functional point, however, validity here is being left 
> ill-defined. Validity here covers a spectrum including simply a Log 
> not being recognized for that purpose, to being improperly 
> syntactically constructed, to having an invalid signature, to having 
> something wrong with the associated Log structure as a result of 
> examining that SCT (e.g. no inclusion proof, no associated entry).
see response below.
>
>
> [Page 4]
> > If an RP verified that a certificate that claims to have
> > been logged has a valid log entry, the RP probably would have a
> > higher degree of confidence that the certificate is genuine.
> > However, checking logs in this fashion imposes a burden on RPs and on
> > logs.
>
> Here, it’s ambiguous as to what “genuine” means, as this is the only 
> occurrence in the document. It’s unclear if it’s meant to be an 
> antonym to bogus (that is, semantic misissuance in the document’s 
> terminology), or whether it’s also meant to include adversarial 
> misissuance.
Fair point- I changed "genuine" to "not bogus"
>
> It’s also ambiguous as to what the checking entails, since verifying 
> the entry and its correlation to the certificate can be distinct from 
> the verification of the inclusion proof.
The text separately discusses SCT verification and verifying a valid log 
entry. The comment about checking for valid log entry refers to fetching 
an inclusion proof. I have changed that text to cite 8.1.4 in 6962-bis.
>
>
> [Page 4]
> > Finally, if an RP were to check logs for
> > individual certificates, that would disclose to logs the identity of
> > web sites being visited by the RP, a privacy violation.
>
> A potential issue with this construction is that it seems to miss the 
> existence of read-only Logs - that is, Logs that mirror the full 
> Merkle tree of the SCT-issuing log, and which can present proofs 
> without having access to the primary Log’s key material.
I don't believe that 6962-bis describes read-only logs. As a result, 
they are not considered as part of the analysis.
>
> It’s unclear whether this is solely referring to online checking - 
> that is, checking for inclusion proofs at time of validation - in 
> which case, it seems like 6962-bis already discusses this point in 
> Sections 6.3 and 8.1.4, along with privacy-preserving approaches. It’s 
> unclear if this document is referring to something different, then, or 
> if it’s rejecting both the documented alternatives and those discussed 
> in the communities.
6.3 does not require a TLS server to send an inclusion proof. It says 
that "... if the TLS server can obtain them, it SHOULD ..." As a result, 
the analysis does not assume that a client will receive an inclusion 
proof from a TLS server. 6.3 says that sending inclusion proofs in the 
TransItemList structure helps protect client privacy - which is an 
admission that client fetching of inclusion proofs may undermine client 
privacy. 8.1.4 notes the same client privacy concern. So, I don't see 
your point here. Are you saying that since 6962-bis warns about client 
privacy concerns and offers (but does not mandate) a way to mitigate 
these concerns, that the threat analysis ought not cite this concern?
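For context, the inclusion-proof check at issue can be sketched as follows - a minimal RFC 6962-style Merkle audit path in Python (function names are mine, not from the spec). The privacy concern is visible in the interface itself: to fetch or verify a proof, the client must name the specific leaf, i.e., the certificate and hence the site it cares about.

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 leaf hash: H(0x00 || entry)
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # RFC 6962 interior node hash: H(0x01 || left || right)
    return hashlib.sha256(b"\x01" + left + right).digest()

def _split(n: int) -> int:
    # largest power of two strictly less than n
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def tree_head(entries) -> bytes:
    # Merkle Tree Hash over the list of log entries
    if len(entries) == 1:
        return leaf_hash(entries[0])
    k = _split(len(entries))
    return node_hash(tree_head(entries[:k]), tree_head(entries[k:]))

def audit_path(m: int, entries) -> list:
    # sibling hashes needed to recompute the root for leaf index m
    if len(entries) == 1:
        return []
    k = _split(len(entries))
    if m < k:
        return audit_path(m, entries[:k]) + [tree_head(entries[k:])]
    return audit_path(m - k, entries[k:]) + [tree_head(entries[:k])]

def verify_inclusion(m, tree_size, entry, path, root) -> bool:
    # note: the verifier must identify the exact leaf (cert) of interest
    if m >= tree_size:
        return False
    fn, sn = m, tree_size - 1
    r = leaf_hash(entry)
    for p in path:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)
            if fn % 2 == 0:
                while fn != 0 and fn % 2 == 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```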

> [Page 4]
> > If Monitors inform Subjects of mis-issuance, and if a CA
> > revokes a certificate in response to a request from the certificate's
> > legitimate Subject,
>
> This assumption here - that Monitors inform Subjects of mis-issuance - 
> seems to imply that Monitors are all-knowing. The design of CT is such 
> that Monitors can be alerted to potential certificates of interest, 
> and Subjects make determinations about mis-issuance. The most common 
> example of this are certificates that were duly authorized and 
> approved, but for which the human person operating the Subject was not 
> directly involved with (e.g. domains team versus marketing team).
The description of a Monitor in 6962-bis is internally inconsistent, as 
I have noted previously, so there is uncertainty about the operational 
model. Nonetheless, a Monitor may be used to watch for certs of 
"interest" and thus one may envision a circumstance in which Subjects 
rely on Monitors to watch for log entries corresponding to the Subject 
name, as a way to detect (semantic) mis-issuance. This does not imply 
that Monitors are omniscient - a Subject might rely on multiple Monitors 
to improve the odds of detecting mis-issuance that is of interest to the 
Subject.
>
> This affects some of the adversarial modelling later on, in that 
> Monitors are presumed to be able to determine mis-issuance, and thus 
> failure to make that determination, or requiring human intervention, 
> is seen as adversarial. However, the design of CT, and its practical 
> deployment, is that Monitors surface interesting events and the 
> Subject makes the determination as to misissuance or not.
I have reworded the text to say that Monitors inform a Subject of 
_potential_ mis-issuance.
>
>
> [Page 4]
> > Logging of certificates is intended to deter mis-issuance,
>
> I think this conflates detection with deterrence. While it’s true that 
> detection can have a deterring effect, it seems more accurate to 
> reflect that the purpose is detection, as reflected in the language 
> choice 6962-bis.
To which places in 6962-bis are you referring?
>
>
> [Page 5]
> > Monitors ought not trust logs that are detected misbehaving.
>
> I mentioned this in the high-level feedback, but this notion here 
> about log ‘trust’ is one that deserves unpacking. Monitors have strong 
> incentives to continue to examine certificates from Logs, even those 
> that have been shown to be providing a split view. As it relates 
> specifically to Monitors, however, they may not trust a Log to be a 
> complete set of all issued certificates, but they can certainly trust 
> a Log to contain interesting certificates, provided the signatures 
> validate.
We disagree about the use of the term "trust" here, but I have revised 
the text to note that "... misbehaving, although they may elect to 
continue to watch such logs."
>
>
> [Page 6]
> Figure 1
>
> I think this figure is misleading or inaccurate, particularly around 
> steps 1-5. This approach manifests in some of the later discussions 
> around syntax checking of Logs, but as it relates to this specific 
> figure, misses out on the potential parties that are performing the 
> certificate logging. There’s no requirement, in 6962, -bis, or as 
> practiced, to require a CA to log 100% of its certificates, so Steps 
> 1-5 are more complicated. This can be seen in existence today, as 
> large organizations like Google and Cloudflare perform the logging and 
> inclusion operation themselves, both for new and existing certificates.
As noted above, current practice that is not mandated by 6962-bis is not 
considered in this analysis. You are correct that 6962-bis does not 
require that a CA submit all certs that it issues to be logged, but 3.1 
notes that any cert not logged may be rejected by a TLS client, and 
states that this provides strong motivation for a CA to log all certs 
destined for consumption by such clients.
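That "strong motivation" comes from client-side enforcement, which can be caricatured in a few lines. This is a hypothetical policy sketch - real vendor policies differ and are deliberately left unspecified by 6962-bis:

```python
def client_accepts(scts, recognized_logs, minimum=2):
    """Hypothetical CT policy sketch: accept a cert only if it carries
    valid SCTs from at least `minimum` distinct recognized logs.
    `scts` is a list of (log_id, signature_valid) pairs; the threshold
    and the recognized-log set are client policy, not protocol."""
    distinct = {log_id for (log_id, sig_valid) in scts
                if sig_valid and log_id in recognized_logs}
    return len(distinct) >= minimum
```

Under any such policy, an unlogged cert is simply rejected, so a CA that wants its certs to work in those clients must log them.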
>
>
> [Page 7]
> > Because certificates are issued by CAs, the top level differentiation
> > in this analysis is whether the CA that mis-issued a certificate did
> > so maliciously or not.
>
> This split - on intent - is perhaps one of the most disastrous ones to 
> the stability and functioning of the PKI. Malicious intent here - 
> which itself leaves ambiguities around compromise that are touched on 
> later in the document - is difficult to judge or make informed 
> evaluations of. As this ends up serving as the basis for the ontology, 
> it leaves ambiguities around things like malicious incompetence and 
> the difficulty in applying Hanlon’s Razor effectively.
>
The analysis presents a taxonomy, not an ontology. Editorial - no changes 
made.
>
> [Page 7]
> 2. Threats
> > An adversary who is not motivated to attack a system is not a threat.
>
> Enough bytes have been spilled on the topic of this definition that 
> I’m not sure we’d see much progress made. I think it’s worth calling 
> out the potentially troublesome definition here - which is to say if 
> an adversary isn’t motivated to attack it, it’s not a threat. Consider 
> an example such as BygoneSSL [6], which this definition would rule as 
> not a threat until an adversary was motivated to exploit, while I 
> think many others would recognize it as a potential exploit needing to 
> be mitigated.
Attacks are not threats and vulnerabilities are not attacks. BygoneSSL 
is a vulnerability, which enables several forms of attacks. It has 
nothing to do with the definition adopted here, and used in other 
threat/attack analysis RFCs. No changes made.
>
>
> [Page 7]
> 2. Threats
> > As noted above, the goals of CT are to deter, detect, and facilitate
> > remediation of attacks that result in certificate mis-issuance in the
> > Web PKI.
>
> I do not believe that aligns with the stated goals for CT, which is 
> about detection. Deterrence and remediation are important, but in the 
> intersection of policy and technology, it's important to recognize 
> that CT cannot and does not try to fix everything related to PKI.
I hope we both agree that it would be preferable if 6962-bis explicitly 
stated its goals. We both agree that detection of bogus cert issuance is 
central to CT. Log entries provide the identity of a CA that mis-issued 
a cert, which facilitates remediation, e.g., when a Monitor detects a 
conflict with a cert of interest. Logging deters CAs from issuing bogus 
certs. I'll reword the sentence to say that the goals of CT _appear to be_.
>
>
> [Page 8]
> > The latter attack may be possible for
> > criminals and is certainly a capability available to a nation state
> > within its borders.
>
> As worded, this seems to suggest uncertainty about the criminal 
> viability, and through mentioning nation-state attackers, likely 
> implies a greater degree of complexity and rarity than is perhaps 
> appropriate. I think this sort of subjective evaluation of 
> probabilities is unlikely to age well, and concretely, could delete 
> this, along with the previous and following sentences.
Editorial, but I've deleted the sentences in question.
>
>
> [Page 8]
> > It seems unlikely that a
> > compromised, non-malicious, log would persist in presenting multiple
> > views of its data, but a malicious log would.
>
> I think this sentence either reflects or can lead to a 
> misunderstanding about what is meant by a split-view. Rather than 
> being a measurement of a single point in time, because the tree head 
> is signed, once a tree that does not incorporate a given certificate 
> has been signed and published, the log is forever presenting a split 
> view, because it cannot reconcile the two trees.
>
> Similar to the previous note, it seems the document would also be 
> stronger without speculating on probabilities or likelihoods.
Editorial, but I have deleted the sentence.
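Ryan's observation that a published fork is permanent can be illustrated with a minimal STH-gossip sketch from the relying-party side. The structure is hypothetical (it assumes STH signatures are verified elsewhere), but the underlying fact is from the Merkle tree design: two signed tree heads for the same tree size with different root hashes are irreconcilable, since neither tree can be consistent with the other.

```python
class SplitViewDetector:
    """Hypothetical gossip sketch: record observed signed tree heads
    and flag any conflict at the same tree size as proof of a fork."""

    def __init__(self):
        self.roots_by_size = {}  # tree_size -> first observed root hash

    def observe(self, tree_size: int, root_hash: bytes) -> bool:
        """Record an STH; return True if the log's view is still
        consistent, False if this STH proves a split view."""
        prior = self.roots_by_size.setdefault(tree_size, root_hash)
        return prior == root_hash
```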
>
>
> [Page 9]
> > Finally, note that a browser trust store may include a CA that is
> > intended to issue certificates to enable monitoring of encrypted
> > browser sessions.
>
> I think I take issue with this framing here. While factually accurate 
> that, yes, locally trusted anchors may be used for this purpose, I 
> think as worded it’s misleading. From a point of view of agency, the 
> browser trust store is not actively choosing to include such CAs - 
> rather, they are locally-trusted CAs that the browser trust store is 
> permitting - but the current language leaves that ambiguous. Further, 
> I think the framing here is meant to imply that this is the only 
> purpose - that is, monitoring of such communications - while ignoring 
> other purposes, such as serving as a local root of trust for an 
> organization or enterprise.
I have edited the text to note that monitoring of encrypted traffic is 
an _example_ of why a browser trust store may contain locally-managed CA 
certs.
>
> The conclusion of this paragraph suggests that CT is not designed to 
> counter this, while at the same time, does not acknowledge that 
> 6962-bis does not rule it out of scope. While 6962 attempted to, in 
> its discussion of the spam problem, there was ample discussion 
> regarding the prescriptiveness of the language around accepted trust 
> anchors that CT itself is left as policy-agnostic. One could require 
> that such MITM certificates be logged to a locally trusted anchor.
I changed the final sentence to note that CT mechanisms may or may not 
apply to certs such as these.
>
> Based on past feedback being concerned about what is possible, rather 
> than what is practically implemented, it seems that because it is 
> possible for CT to address this, and because of the issues with the 
> wording, this paragraph might be suitable to delete entirely without 
> any loss.
not deleted, but modified to address the issues you cite above.
>
>
> [Page 9]
> 3.1 Non-malicious Web PKI CA context
> > In this section, we address the case where the CA has no intent to
> > issue a bogus certificate.
>
> Structurally, I think this is problematic to suggest that intent is 
> relevant or discernable for the case of CA operations. An actively 
> malicious CA would be able to mask maliciousness for incompetence, and 
> an actively incompetent CA is indistinguishable from an actively 
> malicious one. By orienting the document around intent, it 
> significantly undermines one of the most valuable contributions that 
> CT brings - namely, that intent becomes largely irrelevant once you 
> have transparency, because it’s not what you say, it’s what you do.
As I noted earlier, the overall organization of the document is not 
subject to review at this late stage.
>
>
> [Page 9]
> > In the event of a technical attack, a CA may
> > have no record of a bogus certificate.
>
> This sentence stood out as reflective of the overall structural issues 
> with the approach. At only a few pages in, the reader is supposed to 
> know that there are “bogus” certificates (semantic misissuance) and 
> “not-bogus” certificates (syntactic misissuance), per Section 1. In 
> Section 2, it’s established that a CA that is the victim of an attack 
> may be non-malicious, but the action is equivalent to that of a 
> malicious CA, which is also “bogus.” The distinction between 
> maliciousness and non-maliciousness of the CA seems solely to be 
> relegated to a discussion around whether or not the CA participates in 
> revocation publication, without seemingly any other criteria at this 
> point. By the time this paragraph is reached, yet another term has 
> been introduced, which is erroneous certificate, which is a seeming 
> super-set of the bogus and non-bogus certificates distinguished based 
> on the intent of the CA to issue that certificate, not the intent of 
> the CA to publish revocation about it.
ibid.
>
>
> [Page 10]
> 3.1.1.2. Misbehaving log
> > A
> > misbehaving log probably will suppress a bogus certificate log entry,
> > or it may create an entry for the certificate but report it
> > selectively
>
> Similar to the past discussion about probabilities, I think the 
> ‘probably’ here does more of a disservice to the document than 
> intended. If it’s focused on attacks, it seems it should enumerate 
> possible manifestations of those, and their mitigations, without 
> speculating as to probabilities. If the manifestations result in the 
> same mitigations, perhaps that’s worth clarifying.
>
changed "probably" to "may"
>
> [Page 10]
> > Unless a Monitor validates the associated
> > certificate chains up to roots that it trusts, these fake bogus
> > certificates could cause the Monitors to report non-existent semantic
> > problems to the Subject who would in turn report them to the
> > purported issuing CA.  This might cause the CA to do needless
> > investigative work or perhaps incorrectly revoke and re-issue the
> > Subject's real certificate.
>
> There’s a lot going on within this paragraph, so I’ve tried to pull 
> out some of the most actionable bits. This is the last occurrence in 
> the text of “roots” - a model that 6962-bis avoids, by speaking about 
> trust anchors, which can include both self-issued, self-signed roots, 
> and also intermediate certificates. Collectively addressing this as 
> “trust anchors” would provide greater consistency through the text.
changed "roots" to "trust anchors" (but note that 6962-bis does use the 
term "root" in Section 4.2 in this fashion ...)
>
> I think this reveals a structural issue regarding the role of Monitors 
> that would be worth resolving, as I think it may substantially alter 
> some of the subsequent modelling. The question is whether or not a 
> Monitor is in a place to authoritatively report mis-issuance, or 
> whether they merely act as a signal for potential investigation. In 
> this document, Monitor encompasses both first-party and third-party 
> monitoring, but only the first-party can affirmatively determine 
> misissuance. This may refer to the Subject, as the named entity,  or 
> potentially the Subscriber, as there may be multiple Subscribers for a 
> given Subject certificate, and only one of those parties can 
> authoritatively determine misissuance.
I don't believe the document states that Monitors authoritatively report 
mis-issuance. A third-party Monitor could notify the Subject of a log 
entry for a cert of interest to that Subject, and the Subject ultimately 
decides what to do with this info. No changes made.
>
> As to reporting, the phrase “purported issuing CA” is somewhat 
> problematic, and is closely correlated with ambiguity in the use of 
> “it” in “up to roots that it trusts”. A Monitor does not need a set of 
> trust anchors, because it can evaluate certificate paths against the 
> set of trust anchors the log presents. If the defense is against a 
> misbehaving log misreporting a chain, mitigating that by evaluating 
> signatures against the self-reported trust anchors is one way to 
> mitigate. Whether or not a given certificate path is ‘interesting’ to 
> the Monitor is not going to be based on a factor of what it “trusts”, 
> but rather, what certificates it is interested in. For example, a 
> Monitor doesn’t need to “trust” a given CA to be concerned about its 
> operations.
Fair point - a Monitor need not trust a CA in order to watch for certs 
in logs on behalf of Subjects. I have revised the sentence as 
follows: "Fake bogus certificates could cause the Monitors to report 
non-existent semantic problems to a Subject who would, in turn, report 
them to the indicated issuing CA."
>
> Another element to consider in the description of this attack is that 
> it can arise without requiring any Log misbehaviour. The consideration 
> here is about Monitors/Subjects doing ‘needless’ work, which is a 
> model I don’t agree with as such work is both an expected and 
> essential part of practical, Web PKI usage of CT. In order for there 
> to be ‘needless’ work, either the Monitor or Subscriber must determine 
> who the relevant operator of that CA is, in order to request 
> revocation. Consider a scenario where a given CA operator, Foo, has 
> two variants of a CA certificate - a self-signed root and a 
> cross-signed intermediate, where the cross-signature is provided by 
> the CA operator Bar. Leaf certificates from Foo are issued through an 
> intermediate that chains to their ‘root’ (self-signed or cross-signed 
> is irrelevant). A Log may report the chain as terminating to Foo’s 
> root, suggesting Foo is responsible, it may terminate at Bar’s root, 
> suggesting Bar is responsible, or it might just have Foo’s 
> intermediate as the trust anchor, which requires the Monitor/Subject 
> knowing that Foo operates that intermediate.
>
> While this reads fairly convoluted, the point is that investigative 
> work is inherently necessary, not needless, and that a separate set of 
> Monitor “trusted” roots is unnecessary to mitigating these attacks.
fair point- investigative work may just be part of the job for a CA 
when it receives a report of a possibly bogus cert. The real danger is 
that the CA might over-react and revoke a valid cert inappropriately. I 
revised the sentence accordingly.
>
>
> [Page 10]
> > So creating a log entry for a
> > fake bogus certificate marks the log as misbehaving.
>
> Again, the terminology issues rear their head, in now we have to 
> consider fake-bogus and real-bogus in interpreting the scenarios. 
> Fake-bogus is clarified as those that haven’t been issued by the 
> indicated CA, but the method of that creation of a fake is 
> ill-defined. It seems we’re discussing “certificates whose signatures 
> don’t validate,” but that’s not necessarily indicative of Log 
> misbehaviour. As a demonstration of this, consider an implementation 
> that performs BER decoding of the TBSCertificate, re-encodes it in 
> DER, and compares the signature. If the CA has syntactically misissued 
> in the classic sense - that is, rather than the document’s broader 
> definition, focusing purely on X.509 and RFC5280’s DER-encoding of the 
> ASN.1 - then it’s possible that the signature is over the malformed 
> BER-data, not the (client re-encoded) DER-data. That the Log accepted 
> the signature over the raw value of the TBSCertificate is not 
> indicative of Log misbehaviour.
The cert is broken if the signature was generated incorrectly by the CA. 
Section 4.2 of 6962-bis says that a log MUST NOT accept a cert that does 
not chain to an accepted trust anchor. So, if it accepted the cert, the 
log acted in error (because it would have to validate the sig in each 
cert during the chain verification). I agree that this is a syntactic 
error by the log, in the traditional sense, but it is also an example of 
a log misbehaving, in any case.
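Ryan's BER-vs-DER point can be made concrete with a toy TLV re-encoder 
(a sketch under my own simplifications: a single primitive TLV rather 
than a full TBSCertificate, and names of my choosing):

```python
import hashlib

def _encode_len_der(n: int) -> bytes:
    """DER requires the minimal length encoding (short form below 128)."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def reencode_tlv_der(data: bytes) -> bytes:
    """Re-encode one primitive BER TLV with a DER (minimal) length.

    BER also permits a needlessly long "long form" length; DER forbids it.
    """
    tag, first = data[0], data[1]
    if first < 0x80:                       # short-form length
        length, off = first, 2
    else:                                  # long-form length
        nbytes = first & 0x7F
        length = int.from_bytes(data[2:2 + nbytes], "big")
        off = 2 + nbytes
    value = data[off:off + length]
    return bytes([tag]) + _encode_len_der(length) + value

# A BER encoding that uses an unnecessary long-form length for a
# 3-byte OCTET STRING; re-encoding to DER changes the bytes.
ber = bytes([0x04, 0x81, 0x03]) + b"abc"
der = reencode_tlv_der(ber)
assert der == bytes([0x04, 0x03]) + b"abc"
# The signature input differs, so a signature computed over the raw BER
# bytes cannot verify against the client's re-encoded DER bytes.
assert hashlib.sha256(ber).digest() != hashlib.sha256(der).digest()
```

This is exactly why a signature over malformed BER data can fail a 
client (or log) that verifies against its own DER re-encoding, without 
any party having tampered with the cert.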
>
> Similarly, there are CAs that have, in various ways, botched the 
> Signature or SPKI encoding in ways that clients understand and 
> compensate for - unnecessary or omitted ASN.1 NULLs are my favorite 
> example. That a Log understood this compatibility behaviour and 
> accepted the certificate is not indicative of Log misbehaviour.
6962-bis allows a log to be sloppy about syntax checking relative to 
5280, in Section 4.2.  But I interpret the initial requirement to verify 
the chain of sigs to supersede the later "flexibility" stated in 4.2. 
This is another example of ambiguity in 6962-bis leading to uncertainty 
in analyzing residual vulnerabilities.
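The NULL-parameters quirk Ryan cites is concrete enough to show in 
bytes; both encodings below are AlgorithmIdentifier values for 
sha256WithRSAEncryption (OID 1.2.840.113549.1.1.11), differing only in 
the optional NULL parameters field:

```python
# AlgorithmIdentifier for sha256WithRSAEncryption, encoded with and
# without the explicit NULL parameters field. Both forms occur in the
# wild; clients that "compensate" treat them as equivalent, yet the
# bytes (and so any signature computed over them) differ.
with_null = bytes.fromhex("300d06092a864886f70d01010b0500")
without_null = bytes.fromhex("300b06092a864886f70d01010b")

# Same OID TLV inside; the longer form just appends a NULL (05 00)
# and adjusts the outer SEQUENCE length (0x0b -> 0x0d).
assert with_null == b"\x30\x0d" + without_null[2:] + b"\x05\x00"
assert with_null != without_null
```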
>
> That this is listed as an attack at all seems to derive from the later 
> suggestion regarding Logs performing syntax checks on certificates, 
> which, while vastly undermining the goal of transparency, would be an 
> attack if the Log was expected to reject certificates that clients are 
> willing to accept.
see responses immediately above.
>
>
> [Page 10]
> 3.1.1.2.1.  Self-monitoring Subject & Benign third party Monitor
> > It is anticipated that logs that are
> > identified as persistently misbehaving will cease to be trusted by
> > Monitors, non-malicious CAs, and by browser vendors.
>
> The phrase “trusted by” is better replaced with “relied upon”. Neither 
> CAs nor Monitors ‘trust’ logs, and in the context of verifying the 
> data structures, neither do Browsers. While they rely upon the Log for 
> various things depending on their role - Monitors and CAs most 
> concerned with availability - it’s not to be confused and conflated 
> with trust. A Monitor is most interested in every possible certificate 
> it can potentially process, and even a misbehaving Log or CA does not 
> undermine the utility in looking at these certificates.
changed "trusted by" to "relied upon"
>
>
> [Page 10]
> > In this scenario, CT relies on a distributed
> > Auditing mechanism [I-D.ietf-trans-gossip] to detect log misbehavior,
> > as a deterrent.
>
> While appreciative of the considerations of Gossip, I think it’s 
> premature to presume that this is the only solution or mitigation to 
> this. Both 6962-bis and the community have shown alternative 
> approaches (such as centrally-mediated Auditing) are potentially 
> viable, and which do not necessitate Gossip.
Gossip is mentioned here because it is a WG doc developed to address 
this issue, it is cited by 6962-bis (11.3) as a way to detect 
misbehaving logs, and it is slated to be published as an RFC. As for 
what the "community" may be pursuing, that is not documented in an RFC, 
so ... Nonetheless, I have changed "CT relies on" to "may rely upon" and 
noted the Gossip I-D as an example, so that the reader does not infer 
that Gossip is the only possible solution.
>
>
> [Page 11]
> > This discrepancy can be detected if there is an exchange of
> >  information about the log entries and STH between the entities
> > receiving the view that excludes the bogus certificate and entities
> > that receive a view that includes it, i.e., a distributed Audit
> > mechanism.
>
> I think, as worded, this suggests that clients need to distribute 
> information about the log entries, in addition to the STH. However, 
> for the given purpose, the STH is sufficient, as you would not be able 
> to obtain inclusion proofs for the two different STHs while 
> simultaneously omitting a certificate.
>
> Given the STH, determining the affected entry by local construction of 
> the tree and path also suffices to determine where things are ‘wonky’ 
> and have a gap.
Changed "... information about the log entries and STH..." to "... 
relevant STHs"
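Ryan's point, that two inconsistent STHs cannot both carry valid 
inclusion proofs while omitting a certificate, rests on RFC 6962's 
Merkle audit paths. A minimal sketch of the tree hashing and proof 
verification (hash prefixes per RFC 6962 section 2.1; function names 
are mine):

```python
import hashlib

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf_hash(entry: bytes) -> bytes:
    return _h(b"\x00" + entry)            # 0x00 prefix for leaves

def node_hash(left: bytes, right: bytes) -> bytes:
    return _h(b"\x01" + left + right)     # 0x01 prefix for interior nodes

def mth(entries):
    """Merkle Tree Head over a non-empty list of entries."""
    n = len(entries)
    if n == 1:
        return leaf_hash(entries[0])
    k = 1
    while k * 2 < n:                      # largest power of two < n
        k *= 2
    return node_hash(mth(entries[:k]), mth(entries[k:]))

def inclusion_path(index, entries):
    """Audit path for entries[index] relative to mth(entries)."""
    n = len(entries)
    if n == 1:
        return []
    k = 1
    while k * 2 < n:
        k *= 2
    if index < k:
        return inclusion_path(index, entries[:k]) + [mth(entries[k:])]
    return inclusion_path(index - k, entries[k:]) + [mth(entries[:k])]

def verify_inclusion(entry, index, tree_size, path, root):
    """Recompute the root from the leaf and its audit path."""
    fn, sn = index, tree_size - 1
    h = leaf_hash(entry)
    for p in path:
        if sn == 0:
            return False
        if fn & 1 or fn == sn:
            h = node_hash(p, h)
            if not fn & 1:
                while fn and not fn & 1:
                    fn >>= 1
                    sn >>= 1
        else:
            h = node_hash(h, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and h == root

certs = [b"cert-a", b"cert-b", b"cert-c"]
root = mth(certs)
assert verify_inclusion(b"cert-c", 2, 3, inclusion_path(2, certs), root)
assert not verify_inclusion(b"cert-x", 2, 3, inclusion_path(2, certs), root)
```

Any valid path commits the leaf to the root, so a log cannot hand out 
proofs against an STH whose tree silently drops an entry.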
>
>
> [Page 11]
> > If a malicious log does not create an entry for a bogus certificate
> > (for which an SCT has been issued), then any Monitor/Auditor that
> > enrounters the bogus certificate (and SCT) will detect this when it
> > checks with the log for log entries and STH (see Section 3.1.2.)
>
> I believe this should say checks with the log for inclusion proofs, 
> rather than entries.
fixed.
>
>
> [Page 11]
> 3.1.1.3  Misbehaving third party Monitor
> > Note that independent of any mis-issuance on the part of the CA, a
> > misbehaving Monitor could issue false warnings to a Subject that it
> > protects.
>
> As noted previously, I think this misstates the role of a Monitor, in 
> that it presumes a Monitor is a trusted arbiter of truth, rather than 
> as a signal of particular issues to investigate. While it’s true that 
> an “Evil Monitor” attack could suppress notification of interesting 
> certificates, I think this second paragraph - discussing the Monitor 
> being vigilant - is at odds with the expected Monitor functionality.
Notifications to a Subject from a Monitor about certs of interest are 
warnings about potential mis-issuance, although the Subject makes the 
final determination about whether it perceives the indicated certs as 
mis-issued. No changes made.
>
>
> [Page 11]
> 3.1.2.  Certificate not logged
> > If the CA does not submit a pre-certificate to a log, whether a log
> > is benign or misbehaving does not matter.
>
> This is problematic in that the assumption here is that CAs are the 
> ones performing the logging. Throughout the document, the description 
> of CAs performing logging ignores the ability of Subscribers and 
> certificate holders to perform the logging at their discretion, up to 
> and including seconds before performing the TLS handshake. As a 
> consequence, it misses an entire class of attacks that can arise when 
> an inclusion proof for an SCT can not be obtained to an STH, because 
> the SCT is newer than the published STH.
>
> I believe this is another instance of a compelling reason to 
> re-evaluate the ontology of attacks and to not attempt to classify 
> them using a hierarchy.
Although 6962-bis does allow any entity to log a cert, the focus of the 
doc is very much on CA-based logging, as evidenced by pre-cert logging. 
No changes made.
>
>
> [Page 12]
> 3.2.1.1.1.  Self-monitoring Subject
> > If a Subject is checking the logs to which a certificate was
> > submitted and is performing self-monitoring, it will be able to
> > detect the bogus certificate and will request revocation.
>
> It would be better to replace “will request revocation” with “may 
> request revocation.”.
> A bogus certificate may be in the Subscriber’s favor. As the framing 
> unfortunately presumes that there exists an appropriate entity for 
> every named Subject, it seems to omit consideration that some of the 
> entities named within a Subject may benefit from the bogusness. For 
> example, it could considerably save certificate validation costs if 
> “Google Inc” could enumerate a host of information about itself that 
> didn’t need to be validated - information such as postal address. 
> Google, the Subject named in the certificate, would have no reason to 
> request revocation, because the bogusness of the certificate is in its 
> favor.
an obscure, but fair point. I have changed "will request" to "may request"
>
>
> [Page 12]
> > A malicious CA might revoke a bogus certificate to avoid having
> > browser vendors take punitive action against the CA and/or to
> > persuade them to not enter the bogus certificate on a vendor-
> > maintained blacklist.
>
> It seems entirely unnecessary to mention intent here, especially when 
> that ‘intended reality’ does not match with ‘actual reality,’ yet 
> leads the reader to believe that might be a valid result. Instead, 
> simply focusing on what the malicious CA does, without speculating 
> about the reasoning, seems to provide a clearer narrative about the 
> potential risks.
This subsection is within the malicious CA section, so the word 
"malicious" is consistent with the doc structure. No changes made.
>
>
> [Page 12]
> > No component of CT is tasked with detecting this sort of
> > misbehavior by a CA.
>
> This feels like a stretch. We’ve already seen CT serving as the basis 
> for three different clients revocation behaviours - Google, Apple, and 
> Mozilla. In these cases, the view that is presented to the browser 
> vendor is the authoritative view on the client. Thus, revocation is 
> equal to revocation, and we achieve that only through CT.
As noted several times above, the analysis is based primarily on what 
6962-bis says, not what is currently deployed. The goal is for this doc 
to become an Informational RFC. It will cite 6962-bis as an Experimental 
RFC. If, in the future, 6962-bis is revised to reflect deployment 
experience, then the threat analysis doc should be updated, or become 
obsolete.
>
>
> [Page 13]
> 3.2.1.2.1.  Monitors - third party and self
> > If a Monitor learns
> > of misbehaving log operation, it alerts the Subjects that it is
> > protecting, so that they no longer acquire SCTs from that log.  The
> > Monitor also avoids relying upon such a log in the future.  However,
> > unless a distributed Audit mechanism proves effective in detecting
> > such misbehavior, CT cannot be relied upon to detect this form of
> > mis-issuance.  (See Section 5.6 below.)
>
> It’s ambiguous as to who “they” is here - whether it is Monitor or the 
> Subject. Using the model described in Figure 1, neither of these 
> entities are responsible for obtaining SCTs - that’s the CA’s role - 
> so it’s unclear what is trying to be communicated here. If it’s meant 
> to inform the CA as to the status of SCTs, then the communication flow 
> would generally go Monitor -> Browser -> CA, as the Monitor doesn’t 
> necessarily have a relationship with the CA, and the CA has no 
> incentive to stop obtaining SCTs until the Browser no longer considers 
> them.
>
> If it’s meant to inform Subscribers that are self-logging, then the 
> Monitor doesn’t have a relationship with the certificate Subscriber - 
> just the Subject - and so the communication flow would again go 
> Monitor -> Browser -> Subscriber, as Subscribers have no incentives to 
> change until the Browser no longer accepts.
>
> As to the Monitor “avoids relying upon such a log in the future,” 
> that’s not accurate, as the Monitor has every incentive to continue to 
> examine the Log even after it’s been demonstrated as malicious and 
> hiding entries, as the certificates it hasn’t hidden are still 
> applicable to the Monitor and the Subject. Any avoidance of reliance 
> is only when the Monitor has no vested interest in the historic 
> certificates in that Log, which may never happen for Monitors that 
> wish to provide historically accurate views (e.g. to assist with 
> investigating stuff like [6]).
You're right that "they" is ambiguous, and inconsistent with the rest of 
the sentence. The revised sentence reads: "If a Monitor learns of 
misbehaving log operation, it alerts the Subjects that it is 
protecting." The next sentence is also revised to read: "The Monitor 
also may avoid relying upon such a log for future entries."
>
> Finally, the phrase “unless a distributed Audit mechanism proves 
> effective” seems to again emphasize a design that is merely one of a 
> number on the table.
I've changed the text to say: "However, unless a distributed Audit 
mechanism, or equivalent, proves..."


>
>
> [Page 13]
> 3.2.2.  Certificate not logged
> > When a CA does not submit a certificate to a log, whether a log is
> > benign or misbehaving does not matter.  Also, since there is no log
> > entry, there is no difference in behavior between a benign and a
> > misbehaving third-party Monitor.
>
> This attack model is seemingly based on an assumption that a CA is the 
> only entity that logs, and that a failure of a CA to log is 
> (generally) a malicious activity. I believe this entire section would 
> need to be reworked when considering that the entity logging may be 
> the Subscriber, or may even be a third-party entity, such as Google 
> logging certificates its crawler sees.
I have added the following clarification to this subsection: " (Note 
that an entity other than the issuing CA might submit a certificate 
issued by this CA to a log, if it encountered the certificate. In a 
narrowly-focused attack, such logging would not occur, i.e., only the 
target of the attack would see the certificate.)"
>
>
> [Page 14]
> 3.2.2.1.  CT-aware browser
> > Since certificates have
> > to be logged to enable detection of mis-issuance by Monitors, and to
> > trigger subsequent revocation, the effectiveness of CT is diminished
> > in circumstances where local policy does not mandate SCT or inclusion
> > proof checking.
>
> I don’t think this statement supports its conclusions. This statement 
> needs to clarify whether it’s discussing about the ecosystem 
> effectiveness or per-user effectiveness, and more carefully describe 
> what tradeoffs are being made.
I have revised the sentence to be more specific: "Certificates have to 
be logged to enable detection of possible mis-issuance by Monitors, and 
to trigger possible subsequent revocation. The effectiveness of CT in 
protecting an RP is diminished in circumstances where local policy does 
not mandate SCT or inclusion proof checking by the RP's software."
>
>
> [Page 14]
> 3.3.  Undetected Compromise of CAs or Logs
> > Because the compromise is undetected, there will be
> > no effort by a CA to have its certificate revoked or by a log to shut
> > down the log.
>
> In this context, I think “will” is ambiguous as to how far it extends 
> in the future. Perhaps it’s better to clarify as “Until the compromise 
> is detected, there will be”
Well, the heading here is "undetected compromise", which suggests that 
the compromise may not be detected, but I revised the text to say 
"Until such time that the compromise is detected ..."
>
>
> [Page 14]
> 3.3.1.  Compromised CA, Benign Log
> > In other case the goal is to cause the CA to
>
> I think of all my remarks, this is perhaps the least important. I 
> believe the intent is to say “in other cases” (plural). However, 
> speaking to motivation and intent do not seem to be particularly 
> beneficial in this section, as noted in other sections.
typo fixed.
>
>
> [Page 15]
> > This sort of attack may be most effective if the CA that is the
> > victim of the attack has issued a certificate for the targeted
> > Subject.  In this case the bogus certificate will then have the same
> > certification path as the legitimate certificate, which may help hide
> > the bogus certificate.
>
> I don’t think this attack is sufficiently described, and may be 
> resting on implicit assumptions about the Monitoring functionality. 
> From the context, my best guess would be that this is trying to 
> describe where a given Subject has two certificates issued for it, 
> from the same issuing intermediate, with different SPKIs in possession 
> of different entities. The ‘bogusness’ of the certificate is that the 
> legitimate Subject did not authorize the second SPKI, but the Monitor 
> may not be examining or considering SPKIs, or even just number of 
> certificates, and instead only considering certification paths.
> If that’s the case, then I think it bears spelling out more. This is 
> another area where the document hierarchy can lead to omissions, since 
> this is something that Monitors should be considering (and Subjects) 
> in how they effectively monitor for misissuance.
I have revised the sentence to say: "... which may help hide the bogus 
certificate (depending on details of Monitor behavior)."
>
>
> [Page 16]
> > If the compromised CA does determine that its
> > private key has been stolen, it probably will take some time to
> > transition to a new key pair, and reissue certificates to all of its
> > legitimate Subjects.  Thus an attack of this sort probably will take
> > a while to be remedied.
>
> I think this last sentence is unnecessary speculation. It’s unclear 
> whether this is trying to describe the world “as spec’d” or “as 
> practiced”, but given the discussion of revocation and browsers using 
> non-standard forms, it appears to be the former. If so, then the 
> Baseline Requirements only permit 24 hours before revocation is 
> required, and although recent adoptions in the CA/Browser Forum by way 
> of SC6 extend this for some cases, the given attack is certainly not 
> covered. Thus, it seems this speculation is incorrect.
>
> If the intent is to describe the world as practiced, then it doesn’t 
> seem particularly productive to speculate on how that particular 
> scenario would be handled, as it doesn’t seem to add any new 
> information or value.
The offending sentences have been deleted.
>
>
> [Page 16]
> > If the attacker has the ability to
> > control the sources of revocation status data available to a targeted
> > user (browser instance), then the user may not become aware of the
> > attack.
>
> It’s unclear to me the model that this is imagining. If the CA is 
> compromised, its issuer can revoke that CA - and is in fact obligated 
> to do so - and thus this threat seems to be mitigated by entirely 
> bypassing the “trust the revocation details from the compromised CA.”
>
> Is this model based on an assumption that if the compromised CA isn’t 
> revoked, and if the attacker can control all sources and not just some 
> sources, then they can prevent revocation? Is this new information to 
> consider? Is it arguing for hard-fail revocation, including 
> unavailability to check revocation?
The scenario assumes that the compromised CA has not yet been detected 
and thus its parent has not revoked that CA cert.  No changes made.
>
>
> [Page 16]
> > A bogus certificate issued by the malicious CA will not match the SCT
> > for the legitimate certificate, since they are not identical, e.g.,
> > at a minimum the private keys do not match.  Thus a CT-aware browser
> > that rejects certificates without SCTs (see 3.2.2.1) will reject a
> > bogus certificate created under these circumstances if it is not
> > logged.
>
> The description of this attack feels like it’s describing two 
> different things. On first read, it appears that it’s suggesting the 
> attacker would issue a bogus certificate that otherwise identically 
> matches an existing, logged certificate, and transplant SCTs for the 
> true certificate’s precertificate (or -bis equivalent) onto the bogus 
> certificate. This ‘threat’ is only possible if a client doesn’t 
> actually implement 6962-bis - in terms of checking that the SCTs match 
> the certificate - and thus let it be confused by “SCTs present” rather 
> than “SCTs match”.
BTW, this is an attack, not a threat :-).

The text just says that a bogus cert will not be accepted (by a careful 
browser) based on providing a SCT for the legitimate cert with the same 
Subject name, because the SCTs will not match. I think the text is 
clear. No changes made.
>
> However, the second half describes a CT-aware browser that rejects 
> certificates without SCTs, which seems orthogonal to the whether or 
> not the bogus certificate matches the SCT. Here, it seems to be saying 
> “You can’t transplant, and you don’t want to log, so clients reject”. 
> If that’s the intent, then it seems entirely unnecessary to discuss 
> whether or not the certificates are identical, because 6962-bis 
> establishes that different certificates mean different SCTs.
>
The second part of the paragraph says that if a CT-aware browser is 
careful, then the cert can be accepted if it was logged, but then the 
bogus cert is subject to detection by Monitors.  No changes made.
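The "SCTs match the certificate" check being discussed can be sketched 
with a toy MAC standing in for the log's signature (real SCTs carry the 
log's ECDSA or RSA signature over the (pre)certificate data per 6962; 
the key and cert bytes here are purely illustrative):

```python
import hashlib
import hmac

LOG_KEY = b"toy-log-key"  # stand-in for the log's private signing key

def issue_sct(cert: bytes) -> bytes:
    """Toy SCT: a MAC over the (pre)certificate bytes. A real SCT is a
    public-key signature, but the binding to the cert bytes is the same."""
    return hmac.new(LOG_KEY, cert, hashlib.sha256).digest()

def sct_matches(cert: bytes, sct: bytes) -> bool:
    """A careful browser checks that the SCT covers *this* certificate,
    not merely that an SCT is present."""
    return hmac.compare_digest(issue_sct(cert), sct)

legit = b"cert: CN=example.com, SPKI=AAAA"
bogus = b"cert: CN=example.com, SPKI=BBBB"   # same Subject, attacker's key

sct = issue_sct(legit)
assert sct_matches(legit, sct)
assert not sct_matches(bogus, sct)   # transplanted SCT fails the check
```

Since the SCT covers the certificate bytes, an SCT for the legitimate 
cert cannot be transplanted onto a bogus cert with the same Subject 
name without a careful browser noticing.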
>
> [Page 15 - 16]
> 3.3.1.  Compromised CA, Benign Log
>
> This section seems to entirely omit the consideration of ‘malicious 
> content’ as a means of attacking logs. For example, if I issue a 
> certificate with an X.509v3 extension of an OCTET STRING that contains 
> an executable version of DeCSS, is that an attack worth considering?
That would be an attack against a log based on an implementation 
vulnerability. As noted earlier, such attacks are not considered here, 
because the doc focuses on architectural vulnerabilities and the results 
of compromise of system elements, not implementation vulnerabilities. 
The abstract for the analysis doc has been revised to emphasize this.
>
> If I issue a certificate containing libelous statements in the Subject 
> Alternative Name, is that an attack?
not within the scope of this document.
>
> If I surreptitiously issue a billion certificates, and then execute a 
> distributed logging attempt, in order to attempt to cause a log to 
> violate its MMD, is that an attack?
First, if you try to log a billion certs, I doubt that the term 
"surreptitious" is applicable :-).

Section 4.2 of 6962-bis says: "(A log may decide, for example, to 
temporarily reject valid submissions to protect itself against 
denial-of-service attacks)." So, a log that is overwhelmed by a CA is 
not doing what it is allowed (by the spec) to do to protect itself 
against this sort of DoS attack. That's an implementation vulnerability 
and outside the scope of this document.
>
> If I use these billion certificates to make it difficult for Monitors 
> to effectively track the corpus of certificates, in order to delay the 
> detection of my malicious certificate buried in these billion ‘noisy’ 
> certs, is that an attack?
that sort of DoS attack is not addressed in the doc.
>
> If I do all of this with a revoked CA, but the Log doesn’t check 
> revocation, is that an attack?
It is another form of DoS attack.
>
> If I do all of this with a CA not permitted for digitalSignature, and 
> the Log does not perform ‘syntax checks’, is that an attack?
ibid.
>
> These are just a few of the scenarios that have been discussed by CAs 
> and Log Operators, and the scenario of “compromised” or “malicious” CA 
> is arguably worthy of a document in itself. By virtue of being an 
> append-only log, CT doesn’t provide for ‘cleanup’ scenarios that might 
> be addressed for other forms of defacement, and generally is mitigated 
> by shutting down the existing log and spinning up a new one, with 
> mitigations for whatever attacks were made - for example, syntax 
> checks, revocation checks, rate-limiting by CA, contractual liability 
> for the issuing CA by the Log Operator, omitting the offending cert, 
> etc. And while that provides a clear transition point for the 
> ecosystem, it impacts Monitors, Browsers, and CAs in doing so.
These are interesting attack scenarios that, you have indicated, are 
very real concerns, especially for log operators. Perhaps you should 
write an RFC discussing these aspects of CT residual vulnerabilities 
with a focus on DoS. It would complement this doc.
>
>
> [Page 16]
> 3.3.2.  Benign CA, Compromised Log
> > A benign CA does not issue bogus certificates, except as a result of
> > an accident or attack.  So, in normal operation, it is not clear what
> > behavior by a compromised log would yield an attack.
>
> If we accept that a CT-aware browser will require SCTs, and further 
> accept that CAs generally need to obtain SCTs at time of issuance for 
> the majority of certificates for deployment concerns, then it seems 
> this misses a denial of service attack against CAs that need to obtain 
> SCTs from those logs. If the attacker is interested in, say, reducing 
> the utility of CT by reducing the number of logs, thereby causing 
> CT-aware browsers to no longer require SCTs, thus allowing for CA 
> compromise, then compromising a log and causing it to violate its 
> Logginess seems like an excellent opportunity to go there.
yes, but, again, DoS attacks are not considered.
>
> Alternatively, rather than compromising the Log in a detectable way, 
> it can be used to deter or discourage benign CAs. For example, rate 
> limiting the issuance of SCTs to 1 SCT a millenia, based on issuing 
> CA, could fully comply with the definition of a CT Log, but 
> effectively render it useless.
I suspect that CAs would choose to not make use of such logs, making 
this an ineffective attack.
>
>
> [Page 17]
> 3.3.3.  Compromised CA, Compromised Log
> > It
> > might use the private key of the log only to generate SCTs for a
> > malicious CA or the evil twin of a compromised CA. If a browser
> > checks the signature on an SCT but does not contact a log to verify
> > that the certificate appears in the log, then this is an effective
> > attack strategy.
>
> This is another area where the ‘evil twin-ness’ is getting in the way 
> of clarity. It appears this section is attempting to describe both 
> signature validation and inclusion proof. That is, what’s being 
> described is seemingly “A compromised Log could not incorporate an SCT 
> within its MMD, or could provide a split view.” Which is covered in 
> 6962-bis, but the discussion about evil-twins and malicious CAs just 
> detracts from that.
The evil twin attack has been a source of confusion for me, which is why 
I accepted David Cooper's text describing the attack a while ago. Yes, 
the text was describing an attack involving SCT validation (from a 
compromised CA) with and without inclusion proof checking. I have 
revised the text to refer to inclusion proof acquisition irrespective of 
how the proof is acquired: "If a browser checks the signature on an SCT 
but does not acquire an inclusion proof, then this could be an effective 
attack strategy."
>
>
> [Page 17]
> > To detect this attack an Auditor needs to employ a
> > gossip mechanism that is able to acquire CT data from diverse
> > sources, a feature not yet part of the base CT system.
>
> 6962-bis provides a means for delivering inclusion proofs, including 
> via OCSP, so I don’t agree with this statement that it’s not part of 
> the base CT system. This again is pushing a particular solution, of 
> which many may exist, and which the ecosystem is still working through 
> (hence 6962-bis as experimental)
6962-bis does not mandate delivery of inclusion proofs during a TLS 
session. Thus it is appropriate to consider cases in which a TLS server 
does not supply such proofs. Nonetheless, I have revised this sentence 
to read: "To detect this attack an Auditor may need to employ a 
mechanism that is able to acquire CT data from diverse sources, e.g., 
[I-D.ietf-trans-gossip]."
>
>
> [Page 17]
> > In this case CT would need to rely on a distributed
> > gossiping audit mechanism to detect the compromise (see Section 5.6).
>
> Same as above, it seems to ignore some of the intentional design goals 
> of 6962-bis, and seems to promote a specific solution as an absolute 
> necessity, even though a number exist.
If only 6962-bis had a clear statement of its design goals ... As 
noted several times above, 6962-bis expressly cites the Gossip document 
so it's appropriate to use it as a reference for addressing distributed 
audit mechanisms. Also, as stated before, alternative mechanisms that 
are not documented in RFCs don't merit the same level of consideration 
here. Still, I changed the sentence to begin: "In this case CT might 
need to rely on ..."
>
>
> [Page 19]
> 3.4.  Attacks Based on Exploiting Multiple Certificate Chains
> > This sort of attack might be thwarted if all intermediate (i.e., CA)
> > certificates had to be logged.  In that case CA certificate 2 might
> > be rejected by CT-aware browsers.
>
> Structurally, perhaps it’s worth describing this as requiring all 
> intermediate certificates be disclosed. The method for that 
> disclosure, and how the client enforces that, seems irrelevant to the 
> attack, as the assumption is that it is thwarted based on disclosure.
>
> I mention this because multiple browser programs have indicated a 
> willingness to explore whitelisting intermediates / requiring their 
> disclosure, whether through CT or through other means (e.g. CCADB [7])
The text for 3.4 was supplied by David after much discussion and I am 
very reluctant to change it. Also, what is being explored is not as 
relevant here vs. what is documented in 6962-bis. No changes made.
>
>
> [Page 19]
> > However none of these
> > mechanisms are part of the CT specification
> > [I-D.ietf-trans-rfc6962-bis] nor general IETF PKI standards (e.g.,
> > [RFC5280]).
>
> This gets back into the messiness about whether or not 
> browser-mediated revocation counts as revocation within the overall 
> description of the threat models of this document. This appears to be 
> calling out that browser-based mediation is superior to 
> IETF-standardized documents - many browsers already support revocation 
> by SPKI or even SPKI-and-Subject (rather than Issuer-and-Serial or Hash).
The text in question was included based on comments from Rich, who 
wanted me to note that the attack might be mitigated by existing, 
proprietary, browser mechanisms, and I reluctantly agreed to include 
it.  The text does not state that such mechanisms are superior to the 
ones in IETF standards; it notes that they are better at addressing this 
specific attack. If my accommodation of Rich's comments is viewed as a 
basis for expanding text to include other, non-standard mechanisms, I'll 
revise this paragraph to a very short one that says essentially nothing 
about proprietary revocation mechanisms. No changes made.
>
>
> [Page 19]
> 3.5.  Attacks Related to Distribution of Revocation Status
> > This is true irrespective of whether revocation is
> > effected via use of a CRL or OCSP.
>
> Same as above, this presumes that only these two methods constitute 
> revocation, while Section 1 appears to support a broader notion and 
> usage of the term. In order to make effective use of this document, 
> there needs to be clarity whether we’re talking revocation as a 
> concept, or revocation via particular methods.
Yes, I am focusing on IETF-standard revocation mechanisms here. I have 
added the following comment after the sentence cited above to clarify 
that: "(This analysis does not consider proprietary browser revocation 
status mechanisms.)"
>
>
> [Page 20]
> > Only if the browser relies
> > upon a trusted, third-party OCSP responder, one not part of the
> > collusion, would these OCSP-based attacks fail.
>
> I think my issue with this is “only”, as alternative solutions exist. 
> An adversarial stapling of “Good” responses - whether obtained through 
> malicious CA or simply unexpired, pre-revocation responses - is 
> somewhat well known in the PKI community, particularly since it’s a 
> contributing factor to the “unfortunately” mentioned earlier regarding 
> browser support for OCSP.
>
> Alternatively, the issue may be that it presumes an OCSP responder is 
> the only mitigation for this, when alternative (non-OCSP-based) 
> techniques can work to mitigate.
I deleted "Only".
>
>
> [Page 21]
> 4.1.  Non-malicious Web PKI CA context
> > If the
> > Subject is a web site that expected to receive an EV certificate, but
> > the certificate issued to it carries the DV policy OID, or no policy
> > OID, relying parties may reject the certificate, causing harm to the
> > business of the Subject.
>
> This is another example of the syntactic and semantic bifurcation 
> being detrimental. I would have viewed this as a semantic misissuance 
> - it’s valid RFC 5280 syntax, but the interpretation and level of 
> assurance that an RP may place in the certificate is not aligned with 
> the level of vetting the information received. If this is viewed as 
> syntactic, rather than semantic, as the preceding sentences establish, 
> then I fail to see why a certificate issued to an 
> improperly-verified-or-confused-Subject is not equally syntactic, as 
> it relates to the Baseline Requirements profile, rather than semantic, 
> as is argued by calling them “bogus”.
As noted earlier, the basic structure of the document, which is based on 
the taxonomy, is not up for discussion at this point. No changes made.
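The EV-versus-DV scenario in the quoted text can be made concrete with a small 
sketch. The OIDs below are the CA/Browser Forum reserved certificate-policy 
identifiers; the function name and the mapping logic are illustrative of how a 
relying party might infer an assurance level, not a prescribed algorithm.

```python
# CA/Browser Forum reserved certificate-policy OIDs.
CABF_EV = "2.23.140.1.1"
CABF_OV = "2.23.140.1.2.2"
CABF_DV = "2.23.140.1.2.1"

def assurance_level(policy_oids):
    """Map a certificate's policy OIDs to the assurance level a relying
    party would infer.  A Subject that expected EV but whose certificate
    classifies as "DV" (or "none") is the harm scenario described in 4.1."""
    if CABF_EV in policy_oids:
        return "EV"
    if CABF_OV in policy_oids:
        return "OV"
    if CABF_DV in policy_oids:
        return "DV"
    return "none"
```

The sketch also illustrates why the taxonomy question is contentious: the 
certificate is syntactically valid RFC 5280 either way, and only the OID's 
interpretation changes what the relying party believes was vetted.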
>
>
> [Page 21]
> 4.1.1.1.  Benign log
> > If a (pre )certificate is submitted to a benign log, syntactic mis-
> > issuance can (optionally) be detected, and noted. This will happen
> > only if the log performs syntactic checks in general, and if the log
> > is capable of performing the checks applicable to the submitted (pre
> > )certificate..
>
> This is not correct. I believe this mistake derives from the view of 
> Monitors solely as agents of Subjects. Consider [8] as an example of a 
> Monitor detecting syntactic misissuance without requiring the Log(s) 
> perform such.
An I-D describing Monitor behavior did note the potential for Monitors 
to check syntax. The document was rejected by the WG. I'm glad to see 
someone has chosen to do this anyway. The text in question is discussing 
what a log could do, irrespective of what a Monitor might do, so 
omitting a discussion of potential Monitor checks for cert syntax is not 
an error. No changes made.
>
> The issue I have broadly with this section is that it attempts to move 
> the bar from Monitors performing this role of syntax checking, as they 
> do today, onto Logs. This places greater trust in Logs than necessary, 
> and creates greater opportunity of risk than the value returned. 
> Within the CT model, in which operations of the Logs don’t require 
> trust because they are cryptographically verifiable, this seemingly 
> introduces a huge gap into the ability to assess the operational 
> compliance of a given Log.
The text here does not bar Monitors from syntax checking; it merely 
focuses on what might happen IF a Log performed such checking, something 
not prohibited by 6962-bis. The text does not require Logs to perform 
such checks. No changes made.

>
>
> [Page 21]
> > If this is a pre-certificate
> > submitted prior to issuance, syntactic checking by a log could help a
> > CA detect and avoid issuance of an erroneous certificate.
>
> Given that the issuance of a pre-certificate is, and must be, treated 
> as equivalent to the issuance of an equivalent certificate, it’s 
> unclear what the perceived value is. I think this is somewhat at odds 
> with that goal of 6962-bis in treating the two as equivalent, because 
> it seems to suggest there’s value in making a distinction.
The value is that a CA that made an error could be informed of the error 
and choose to revoke the cert rather than delivering it to a Subject. 
When a cert is logged it may have already been delivered, so remedying 
the error is not as painless. No changes made.
>
>
> [Page 21]
> > Here too syntactic checking by a log
> > enables a Subject to be informed that its certificate is erroneous
> > and thus may hasten issuance of a replacement certificate.
>
> Yes, but Monitors fulfill that function today, so it’s unclear whether 
> there’s a new and distinct attack being remedied.
No Monitor is required to perform syntax checking as per 6962-bis, so 
the doc does not  assume that such checks are performed. No changes made.
>
>
> [Page 21]
> > If a certificate is submitted by a third party, that party might
> > contact the Subject or the issuing CA, but because the party is not
> > the Subject of the certificate it is not clear how the CA will
> > respond.
>
> It’s unclear what is trying to be described here. In the context of 
> the Baseline Requirements, there are procedural requirements for CAs 
> responding to Problem Reports that cover misissuance, including 
> requiring revocation.
>
> Broadly speaking, however, this fits into the category of speculative 
> statements that are unclear about the value being added.
The Baseline Requirements are cited elsewhere as an example of syntax 
constraints that might be checked by some elements of the CT system, 
even though no such checks are required. We don't refer to the 
procedural aspects of CA operation from those requirements. No changes 
made.
>
>
> [Page 22]
> > This analysis suggests that syntactic mis-issuance of a certificate
> > can be avoided by a CA if it makes use of logs that are capable of
> > performing these checks for the types of certificates that are
> > submitted, and if the CA acts on the feedback it receives.
>
> I disagree with this, because it attempts to treat pre-certificates as 
> distinct from certificates in terms of both policy and practice. Given 
> that CT is designed to detect CA misissuance, it seems that regardless 
> of pre-certificate or certificate, the mitigations and response are 
> effectively the same, and the distinction irrelevant.
See my reply above about why pre-certs are different from already-issued 
certs. No changes made.
>
>
> [Page 22]
> > If a CA
> > uses a log that does not perform such checks, or if the CA requests
> > checking relative to criteria not supported by the log, then
> > syntactic mis-issuance will not be detected or avoided by this
> > mechanism.
>
> This seems tautological. “If a CA does not use a Log that performs 
> syntax checking, the Log will not perform syntax checking”.
The text in question may be redundant, but editorial issues of this sort 
are out of scope at this point. No changes made.
>
> Given that Monitors do fill that role today, it’s not accurate to 
> suggest that syntactic mis-issuance won’t be detected, merely that it 
> will not be signaled in this presumed SCT extension for SCTs to signal 
> syntactic misissuance, which… is also circular.
Since 6962-bis does not even mention the option for Monitors to perform 
syntax checking, much less mandate it, the analysis does not consider 
this as a feature that will always be available. No changes made.
>
>
> [Page 22]
> 4.1.2.  Certificate not logged
>
> > If a CA does not submit a certificate to a log, there can be no
> > syntactic checking by the log.
>
> I highlight this as another example in where the goal seems to be 
> shifting from “detection of syntax issues” to specifically a Log-based 
> detection mechanism. The distinction about whether the Log or the 
> Monitor performs that function is not expanded upon in this document, 
> so it seems an unnecessary conflict to introduce.
Since the paragraph containing this sentence ends with an observation 
that a Monitor could check for syntax, your assertion that the document 
focuses exclusively on Log-based detection is not true. However, to 
avoid confusion I have added the following text: " (Note that a Monitor 
might choose to perform such checks, instead of a log, although this 
capability is not addressed in [I-D.ietf-trans-rfc6962-bis].)"
>
>
> [Page 22]
> > A Monitor that performs syntactic checks
> > on behalf of a Subject also could detect such problems, but the CT
> > architecture does not require Monitors to perform such checks.
>
> Neither does the CT architecture require Logs perform such checks, so 
> it’s unclear why Logs were omitted from that context or why this was 
> explicitly called out for Monitors.
We agree that 6962-bis neither requires nor prohibits syntax checks by 
either Logs or Monitors. Log checking adds a processing burden and thus 
is not as attractive as Monitor checking, if DoS of Logs is a major 
concern (despite the fact that 6962-bis allows logs to reject 
submissions if they are too busy). But Log checking offers the 
potential to allow a CA to remedy a syntax error quickly, especially for 
pre-certs, which is advantageous. Thus it is appropriate to explore 
syntax checking for both elements of the system. No changes made.
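The Log-versus-Monitor tradeoff discussed above can be sketched as the same 
syntax predicate applied at two points: synchronously at submission, where the 
CA can still withhold an erroneous pre-certificate, or asynchronously by a 
Monitor scanning logged entries. All names here are illustrative; 6962-bis 
mandates neither check.

```python
def log_submit(entry, syntax_ok, accept):
    """Synchronous check at submission time: feedback reaches the CA
    before the certificate is delivered to a Subject, but the log
    itself bears the processing burden (a DoS consideration)."""
    if not syntax_ok(entry):
        return ("rejected-or-flagged", entry)
    accept(entry)
    return ("accepted", entry)

def monitor_scan(entries, syntax_ok):
    """Asynchronous check by a Monitor: no load on the log, but any
    erroneous certificate found may already be in a Subject's hands."""
    return [e for e in entries if not syntax_ok(e)]
```

The two functions detect the same defects; they differ only in when the 
defect is caught and who pays the cost, which is the tradeoff the reply 
describes.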
>
>
> [Page 23]
> 4.1.2.1.  Self-monitoring Subject
>
> I think this whole section suffers from not fully considering 
> Subject-initiated logging. In particular, even under an ideal model, 
> there are operational reasons why having a CA Log 100% of their 
> certificates is not necessarily a goal; see the ample discussion of 
> redaction for that. A non-logged certificate is thus not indicative of 
> an operations error, an attack, or any other failure by the CA.
>
As noted before, there is no statement in this doc that says that CAs 
log 100% of the certs they issue.  The notion of redacted certs does 
point out why not all certs issued by a CA might be logged, but also 
note that special processing for such certs was removed from 
6962-bis. Given the pre-cert logging mechanism, and the admonition that 
CT-aware clients are expected to reject certs w/o SCTs, the message from 
6962-bis seems pretty clear, i.e., most certs issued by CAs should be 
logged. No changes made.
> In the context of “How will the Subject detect syntactic misissuance 
> if they only have the cert”, it seems really unclear to me why this 
> would even need to be discussed in the context of CT, since CT is 
> orthogonal if you have the certificate.
>
> Title-wise, the notion of self-monitoring subject seems somewhat 
> confusing. Earlier in the document, the notion of self-monitoring 
> seemed to be regarding “bogus” certificates for a given 
> Subject/Subject Alternative Name, but here, it’s seemingly about 
> Monitoring for certificates that the Subject already has (in which 
> case, what are they monitoring?)
Self-monitoring is used to distinguish the case where a Subject performs 
its own Monitor function, vs. relying on a third party. This discussion 
takes place in the context of syntax checking and hence it focuses on 
that (optional) Monitor function. No changes made.
>
>
> [Page 23]
> 4.1.3.2.  CT-enabled browser
>
> I’ve raised concerns previously about this, but I think in the 
> document context of “How will a browser detect syntax issues with 
> certificates it’s evaluating”, this entire section is unnecessary and 
> includes some controversial statements presented rather authoritatively.
This sounds like an editorial comment. Is there specific text that you 
identify as factually wrong?
>
> If the goal is to describe how a Browser vendor may perform a Monitor 
> function for syntax issues, then the discussion about client-side 
> behaviour seems irrelevant, because the vendor can evaluate the Logs 
> asynchronously, and without any of the unnecessary privacy considerations.
This discussion addresses browsers, not browser vendors. No changes made.
>
>
> [Page 24]
> 4.2.1.1.  Benign log
>
> > 1.  The CA may assert that the certificate is being issued w/o regard
> >      to any guidelines (the "no guidelines" reserved CCID).
>
> I can’t seem to find any definition of CCID as used in this section. 
> However, since it largely relates to Logs performing syntax checks, I 
> think my view has been stated adequately that I think the document is 
> better off without that discussion in the first place.
Good catch! CCID was defined in an I-D that has since expired. I 
tossed most of the text and replaced it with the following: "Because 
the CA is presumed to be malicious, the CA might cause the log to not 
perform checks (if the log offered this option). Because logs are not 
required to perform syntax checks, there probably would have to be a way 
for a CA to request checking; the CA might indicate that it did not 
desire such checks to be performed. Or the CA might submit a (pre-) 
certificate to a log that is known to not perform any syntactic checks, 
and thus avoid syntactic checking."
>
>
> [Page 25]
> 4.2.1.3.  CT-enabled browser
>
> > As noted above (4.1.3.2), most browsers fail to perform thorough
> > syntax checks on certificates.
>
> I think the choice of “fail” reads a bit pejoratively, especially in 
> the context of RFC 5280 which specifically recommends against 
> profile-enforcement on clients. While browsers have more recently begun 
> to disagree with this, in line with [9], this could alternatively be 
> stated “do not” instead of “fail to”. A more complete statement of 
> truth would be, “do not perform thorough syntax checks on 
> certificates, which is consistent with RFC 5280, Section 6.1”
text changed to "do not"
>
>
> [Page 26]
> 5.4.  Browser behavior
> > Note that issuing a warning to a (human) user is
> > probably insufficient,
>
> Another somewhat subjective viewpoint that probably doesn’t belong. 
> Either this should be removed, as a subjective view, or the text 
> should more carefully explore the tradeoffs - such as an adversary 
> switching to a self-signed cert rather than a non-logged certificate 
> if, for some reason, non-logged certificates resulted in a different 
> user experience.
This text was revised to reflect the fact that 6962-bis makes provision 
for local policy controls that can facilitate incremental deployment, 
despite the lack of any description of such in the text. I believe the 
text is factually accurate now. No changes made.
>
>
> [Page 27]
> 5.5.  Remediation for a malicious CA
> > Such
> > communication has not been specified, i.e., there are no standard
> > ways to configure a browser to reject individual bogus or erroneous
> > certificates based on information provided by an external entity such
> > as a Monitor.
>
> It’s unclear the purpose of this statement. Who is the agent of 
> configuration? Is it something that the end-user or system 
> administrator is doing? Or is it something that the Browser vendor is 
> doing based on information it receives from Monitors? If the former, 
> how is that different from saying that there’s no way to configure 
> Microsoft SmartScreen behaviour on my Slackware Linux machine, and if 
> the latter, what’s the value proposition of noting different software 
> vendors do different things?
I revised the sentence to read: "If a browser vendor operates its own 
Monitor, there is no need for a standard way to convey this information. 
However, there are no standard ways to convey Monitor information to a 
browser, e.g., to reject individual bogus or erroneous certificates based 
on information provided by a Monitor."
>
>
> [Page 27]
> 5.6.  Auditing - detecting misbehaving logs
> > Only if Monitors and browsers reject certificates that
> > contain SCTs from conspiring logs (based on information from an
> > auditor) will CT be able to detect and deter use of such logs.
>
> As noted several times previously, Monitors have little reason to 
> ignore a Log, as Monitors are interested in drinking from the 
> firehose that is the certificate ecosystem. While I discussed above 
> in the ‘malicious log’ scenario that Monitors need to consider, that’s 
> independent of the status of SCTs, and thus entirely orthogonal.
>
text changed to remove the reference to Monitors, and to delete "Only".
>
> [Page 27]
> > Absent a well-defined mechanism that enables Monitors to verify that
> > data from logs are reported in a consistent fashion,
>
> This begins a discussion about SCTs and privacy, but as noted above, 
> Monitors don’t need to contend with SCTs. A Monitor function can be 
> addressed by STHs, or it can move the notion of the “trusted third 
> party” back to the browser vendor, and expect that the browser vendor 
> and the clients are performing the necessary consistency and inclusion 
> checks. Since the browser vendor is inherently inside of the threat 
> model for site operators - since they could always ship code in the 
> browser itself that allowed for targeted MITM - it doesn’t seem to 
> alter the security considerations in any substantial way.
Re-reading this sentence and the preceding paragraph, I see that they 
are very confusing. I have revised the text as follows:

" If browsers reject certificates that contain SCTs from conspiring logs 
(e.g., based on information from an auditor) CT should be able to detect 
and deter use of such logs by (benign) CAs.

Section 8.3 of [I-D.ietf-trans-rfc6962-bis] specifies that auditing is 
performed by Monitors and/or browsers. If a Monitor performs the 
function, then it needs a way to communicate the results of audit 
infractions to CAs and browsers. If a browser vendor operates a Monitor 
it could use its audit information to cause browsers to reject 
certificates with SCTs from suspect logs. However, there is no standard 
mechanism defined to allow a self-monitoring Subject to convey this 
information to browsers directly.

If auditing is performed by browsers directly there may be user privacy 
concerns due to direct interaction with logs, as noted in Section 8.1.4 
of [I-D.ietf-trans-rfc6962-bis]. Also, unless browsers have ways to share 
audit information with other browsers, local detection of a misbehaving 
log does not necessarily benefit a larger community. At the time of this 
writing, one mechanism has been defined (via an RFC) for use with CT to 
achieve the necessary communication: [I-D.ietf-trans-gossip]."



Steve