[TLS] Re: -03 update to draft-beck-tls-trust-anchor-ids
Ilari Liusvaara <ilariliusvaara@welho.com> Fri, 17 January 2025 09:46 UTC
Date: Fri, 17 Jan 2025 11:46:43 +0200
From: Ilari Liusvaara <ilariliusvaara@welho.com>
To: TLS List <tls@ietf.org>
In-Reply-To: <CAF8qwaADOyC6YK9rCGom1nX37-un0aawZmsyhGR4eyqCbRhPoA@mail.gmail.com>
On Thu, Jan 09, 2025 at 09:08:36PM -0500, David Benjamin wrote:
> Thanks for the comments! Some thoughts inline.
>
> On Sat, Dec 21, 2024 at 8:59 AM Ilari Liusvaara <ilariliusvaara@welho.com> wrote:
>
> > Some issues I have been thinking about (this all concentrates on server certificates):
> >
> > 1) Certificate chain mismatches between services on the same server:
> >
> > Trust Anchor IDs uses ServiceMode HTTPS/SVCB records to store the list of available roots in DNS. These are logically per (logical) server.
> >
> > However, the available roots may differ between two services that are pointed to the same server via AliasMode HTTPS/SVCB records. Especially if considering intermediates as roots for intermediate elision (e.g., Let's Encrypt randomly issues off two intermediates).
> >
> > Furthermore, ECH adds another implicit service to every server in order to update ECH keys or to disable ECH.
>
> To make sure I follow, the comment is that the right HTTPS/SVCB record for a given route is specific not just to the hosting provider but also the particular origin that provider hosts? And that, if it happened this was your first SvcParam that varied like this, you might have gotten away with a single SVCB record before and hit some friction?
>
> draft-ietf-tls-key-share-prediction has a similar property. If you have, or later wish to have, any capability for different hosts to vary their named group preferences, you need to be able to install different records. (In my experience, it is very common for folks to ask for this.) My understanding from CDN operators is that they generally prefer to CNAME or AliasForm to origin.cdn.example for precisely this reason, because anything in the record that can vary by customer would trip this. (If dnsop wanted a richer inter-SVCB delegation model, I guess AliasForm could have had SvcParams themselves and then the resolver stitches them together at query time. That wasn't the design they picked, so I think origin.cdn.example is the conclusion.)

While origin.cdn.example solves the explicit services case, if the CDN supports ECH, there is still the implicit ECH service (which has a different owner). However, CDNs would likely have sufficient resources to ensure that ECH retries never happen.

And I did consider supported groups, but rated that as minor, because if things go wrong, it just fails to save an RTT (which is merely suboptimal performance).

> > And reconnects can pile up exponentially:
> >
> > - Client tries with bad ECH, bad trust anchors, server fixes trust anchors.
> > - Client tries with bad ECH, good trust anchors, server fixes ECH.
> > - Client tries with good ECH, bad trust anchors, server fixes trust anchors (for a different service!).
> > - Client tries with good ECH, good trust anchors. This succeeds.
>
> I think these issues are largely the same trade-off as ECH w.r.t. whether to retry on the same or different connections. For ECH, the WG did not want to build an in-handshake retry, as recovery should be rare. The thinking with this initial draft was similar, so while the interaction is not ideal, it requires both meant-to-be-rare events to go wrong at the same time.

With a careful operator (certificate scoping, hold-downs and waiting for propagation), does ECH ever get into retry? Of course, not all operators are careful...
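To illustrate the pile-up concretely, here is a rough Python sketch (hypothetical helpers and data, not from the draft) of a client whose cached DNS hints are stale for both ECH and trust anchors; the "correction" below stands in for whatever retry hint the real protocol would provide:

# Hypothetical sketch: two independent recovery mechanisms (ECH retry and
# trust-anchor retry) compounding into several reconnects when the client's
# cached DNS hints are stale for both at once.

from dataclasses import dataclass

@dataclass
class ServerConfig:
    ech_config: str            # ECH config the server currently accepts
    trust_anchors: frozenset   # trust anchor IDs the server can satisfy

def try_handshake(server, ech_guess, ta_guess):
    """Return (ok, correction); 'correction' stands in for the retry hint
    (retry_configs / updated trust-anchor info) a real server would send."""
    if ech_guess != server.ech_config:
        return False, ("ech", server.ech_config)
    if not (ta_guess & server.trust_anchors):
        return False, ("trust_anchors", server.trust_anchors)
    return True, None

def connect_with_retries(server, ech_guess, ta_guess):
    attempts = 0
    while True:
        attempts += 1
        ok, fix = try_handshake(server, ech_guess, ta_guess)
        if ok:
            return attempts
        kind, value = fix
        if kind == "ech":
            ech_guess = value              # reconnect with corrected ECH
        else:
            ta_guess = frozenset(value)    # reconnect with corrected anchors

server = ServerConfig("ech-v2", frozenset({"root-B"}))
# Both cached hints stale: three connections before success; worse if the
# trust-anchor fix learned pre-ECH-retry belongs to a different service.
print(connect_with_retries(server, "ech-v1", frozenset({"root-A"})))  # -> 3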
> That's not to say that's the only design. Even back in the Vancouver presentation, we've been talking about maybe doing an in-protocol retry instead. More broadly, I think there is a space of solutions here that are all variations on the general approach here:
>
> - Extend HRR to do the retry in-connection, so that retries are less expensive. (This interacts a bit with some feedback we've gotten, that the retry is useful for repairing more complex edge cases, so something to keep in mind when designing this.)

What are some of those more complex edge cases?

And I think "less expensive" is somewhat of an understatement...

> - Reduce the need for retry by robustly getting the DNS bits to the client (perhaps we should get good at draft-ietf-tls-wkech-like designs)

Unfortunately, DNS is slow at the best of times, and then crappy DNS servers make it even slower than it should be.

> - Reduce the need for retry by, as you suggest, tuning how the client responds to the DNS record, to try to recover even some mismatch cases

I would expect this to mostly just allow some more chain suppression, not do much to fix failure cases.

> - Reduce (remove?) the need for retry by encoding the trust anchor list more compactly (ultimately, all of these designs are just compressions of the standard certificate_authorities extension)

That's mostly Trust Expressions?

> One minor point:
>
> > In server software capable of hot-reloading certificates, such races could even occur between sending HRR and the client retrying.
>
> I think, if you're hot-reloading certificates, or any other config, you really should make sure that any in-flight connections still act on the old config when you've already made a decision based on it. Otherwise you might have sent some messages based on one config and then try to complete the handshake based on the other. (E.g. if you HRR to x25519 but then hot-reload to a P-256-only configuration, you should still finish that pending handshake at x25519 because anything else will be invalid anyway.)

In the software I was thinking of, stuff in HRR is revalidated on retry and everything else (including certificates) is redone. Choices made for HRR stick across priority changes, but the handshake is explicitly rejected if the choice is disabled.

However, this might not be such a big issue in practice with clients sending advertisements properly, as the race window is very short and the consequence is in most cases just suboptimal performance.

> > Then reconnects also bypass some TLS 1.3 downgrade protections. The client has to be very careful not to introduce downgrade attacks in the process. Furthermore, no authentication of any kind is possible here, so the previous connection might have been to an attacker.
>
> I don't think this meaningfully impacts downgrade protection. We *already* don't have downgrade protection for anything in the authentication parts of TLS (certificate, signature algorithm, transcript hash) because the downgrade signal itself depends on it.

Both client hellos are hashed into the transcript on HRR to guard against downgrade attacks. Reconnects bypass this.
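For reference, a small Python illustration of the RFC 8446 Section 4.4.1 construction being referred to (the handshake messages here are placeholder byte strings, not real TLS encodings):

import hashlib

HASH = hashlib.sha256

def hrr_transcript(client_hello1, hrr, client_hello2):
    # RFC 8446 4.4.1: on HRR, ClientHello1 is replaced in the transcript by a
    # synthetic message_hash handshake message: type 254, a 3-byte length,
    # then Hash(ClientHello1). Tampering with CH1 therefore changes the
    # transcript both sides must agree on.
    ch1_hash = HASH(client_hello1).digest()
    message_hash_msg = bytes([254]) + len(ch1_hash).to_bytes(3, "big") + ch1_hash
    return HASH(message_hash_msg + hrr + client_hello2).digest()

def reconnect_transcript(client_hello2):
    # A fresh connection after a reconnect starts a brand-new transcript;
    # the first attempt's ClientHello never enters it at all.
    return HASH(client_hello2).digest()

ch1, hrr, ch2 = b"ClientHello1", b"HelloRetryRequest", b"ClientHello2"
# With HRR, tampering with the first offer changes the transcript:
assert hrr_transcript(ch1, hrr, ch2) != hrr_transcript(b"tampered CH1", hrr, ch2)
# With a reconnect, ch1 does not feed into reconnect_transcript(ch2) at all,
# so nothing ties the two offers together.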
> > 5) Client filtering of IDs:
> >
> > Intermediate elision presumably requires including Trust Anchor IDs for intermediates, and that presents some special considerations.
> >
> > I presume that intermediate IDs should not be included without having a root ID the intermediate chains to.
> >
> > And where the CA used is known to have unpredictable/backup issuers, or to rotate issuers, it could be useful to include related issuers even if the server does not signal those (it is just a few dozen bytes at most; I think the 5-set Let's Encrypt uses is exceptionally large).
>
> I think this is already mostly covered by this text:
>
> > If doing so, the client MAY send a subset of this intersection to meet size constraints, but SHOULD offer multiple options. This reduces the chance of a reconnection if, for example, the first option in the intersection uses a signature algorithm that the client doesn't support, or if the TLS server and DNS configuration are out of sync.
>
> https://www.ietf.org/archive/id/draft-beck-tls-trust-anchor-ids-03.html#name-client-behavior

AFAICT, this does not cover adding intermediates that the server did not advertise.

> Though certainly there is plenty of room to iterate with the working group on the right things for the client to offer. I personally favor designs where the client logic doesn't need to know the relationship between intermediate and root IDs (PKIs can have sprawling structures, so this may not be well-defined).

I don't see how even the most sprawling structures could make the relationship not be well-defined. However, there might be extensions that do so. I have not looked at how exactly certificate policies and policy mappings work, but I would not be surprised if those greatly complicate matters.

I have written code that computes relationships between root and intermediate certificates (it does not consider any possible effects from extensions).

> I think that's not necessary to get most of what you describe. Since the server already will have long and short chains provisioned, the DNS record will contain both intermediate and root IDs. The client will then see both when filtering, so I think "SHOULD offer multiple" already captures it. We won't get the "related issuers" case (could be interesting to add), but the root will at least match the long chain.

Because DNS updates are slow, the server might want to specify multiple possible intermediates in the DNS record.

The point of this is to ensure that chain suppression is robust in not introducing extra failure cases (occasional suboptimal performance is acceptable). The Let's Encrypt random issuers are mentioned because of observed real-world failure cases.
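To make the filtering step concrete, here is a rough Python sketch (the ID strings, sizes and the related-issuer map are made up, not from the draft) of intersecting the DNS-advertised IDs with the client's trust store, widening with known related issuers, and trimming to a size budget while still offering multiple options:

# Sketch of the client-side filtering under discussion (placeholder IDs and
# sizes): intersect DNS-advertised trust anchor IDs with the client's trusted
# set, optionally widen with known backup/rotated issuers the server did not
# advertise, then trim to a byte budget while keeping several options.

def filter_trust_anchor_ids(dns_advertised, client_trusted, related_issuers,
                            max_bytes=64, min_options=2):
    advertised = set(dns_advertised)
    # Intersection, in the client's preference order.
    offer = [ta for ta in client_trusted if ta in advertised]

    # Also offer related issuers (backups, rotations) even when DNS does not
    # list them; a few dozen bytes at most.
    for ta in list(offer):
        for extra in related_issuers.get(ta, ()):
            if extra in client_trusted and extra not in offer:
                offer.append(extra)

    # "MAY send a subset ... but SHOULD offer multiple options": stop adding
    # once the budget is spent, but only after at least min_options are in.
    trimmed, used = [], 0
    for ta in offer:
        cost = len(ta.encode())
        if used + cost > max_bytes and len(trimmed) >= min_options:
            break
        trimmed.append(ta)
        used += cost
    return trimmed

dns = ["le-e5", "le-root-x1"]                     # from the HTTPS/SVCB record
trusted = ["le-root-x1", "le-e5", "le-e6", "other-root"]
related = {"le-e5": ["le-e6"]}                    # hypothetical backup issuer
print(filter_trust_anchor_ids(dns, trusted, related))
# -> ['le-root-x1', 'le-e5', 'le-e6']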
> > - Send all the issuer chains in every (extended) ACME response, or have a dedicated endpoint to download the issuer chain bundle per issuer.
> >
> > This avoids having the ACME client combine multiple chains from alternatives. Which is not as easy as it sounds, because the EE certificates can differ, which can lead to really bizarre edge cases.
> >
> > Having a bundle of issuer chains also fits a lot better with the configuration model of a lot of existing TLS server software.
>
> The feedback we got from Aaron Gable (CC'd) was to not tie multiple issuances to a single ACME order (e.g. imagine if you need to renew one but not the other), which suggests, at least at the ACME protocol level, to go in the other direction. That's why we've kept the single-chain MIME type and then further pared the ACME text in this draft to just the shared-EE case. Definitely the more general cases also need to be solved, but given that, it seemed better to tackle that separately.

Yes, I think multiple issuance is a bad idea. However, as long as it is possible to represent multiple issuance, clients need to handle it somehow. Currently, that is by throwing out all but one of the chains. But if chains are assembled across multiple responses, the ACME client needs to check that the EEs match.

And if multiple certificates are needed, the client should make multiple orders.

> Beyond that, what gets sent on the wire in ACME doesn't necessarily determine the interface between the ACME client and the TLS server software. If some single-file multi-chain bundle is more convenient for a lot of server operators, the ACME client can always stitch that together as it wishes. I think, for this draft, it makes sense to focus on what happens in the TLS protocol, rather than the ACME client <-> TLS server channel. That's typically what's in scope for a TLS document. After all, RFC 8446 already has a notion of certificate selection, across many axes, and is quite content doing so without saying anything at all about how to provision multiple certificates.

It is easier for the ACME client to explode the chains than to unexplode the chains. That is, it is easier to go from a single-file multi-chain bundle in ACME to multi-file single-chains in the TLS server than vice versa.

-Ilari