Re: [DNSOP] [Ext] I-D Action: draft-ietf-dnsop-svcb-https-05.txt

Brian Dickson <> Fri, 14 May 2021 22:54 UTC

From: Brian Dickson <>
Date: Fri, 14 May 2021 15:53:58 -0700
Message-ID: <>
To: Ben Schwartz <>
Cc: "libor.peltan" <>, dnsop <>, John Levine <>, Joe Abley <>, Eric Orth <>

On Thu, May 13, 2021 at 10:50 AM Ben Schwartz <> wrote:

> On Thu, May 13, 2021 at 3:56 AM libor.peltan <> wrote:
>> Hi all,
>> just my comment:
>>> Perhaps complexity is subjective.  The important thing is that the
>>> standard be reasonably implementable.  I hope that the list of published
>>> implementations [3] will serve as convincing evidence that the current
>>> draft is sufficient in that regard.
>>> --Ben
>> I agree that complexity is subjective. I have no problem implementing
>> complex procedures. But more complexity means more probability for bugs
>> (and even security issues).
>> Currently, the authoritative server (while transforming presentation to
>> wire format), MUST:
>>  - sort the SvcParams by key
>>  - verify their uniqueness
>>  - deal with list of fields nested in other fields (this includes the
>> discussed comma escaping)
>> and the client MUST:
>>  - verify that SvcParams are sorted and unique
>>  - deal with list of fields nested in other fields (at least that various
>> "lengths" match)
>> In the concurrent proposal, the sorting and deduplication will be "for
>> free", because DNS ensures this,
> DNS only ensures that each entire record appears only once, which is
> different from the current draft's requirement that each key appear only
> once (to form a map).
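
(As an aside, the per-key uniqueness rule under discussion is cheap to check once the SvcParams are parsed. A minimal sketch; the function name and the pre-parsed `(key, value)` tuple input are my own simplification, and the draft specifies the full wire-format grammar:)

```python
def svcparams_sorted_and_unique(params):
    """Check the draft's wire-format requirement that SvcParam keys
    appear in strictly increasing numeric order, which also implies
    each key appears at most once (forming a map)."""
    keys = [key for key, _value in params]
    return all(a < b for a, b in zip(keys, keys[1:]))

# A receiver would reject records whose keys are unsorted or repeated:
assert svcparams_sorted_and_unique([(1, b"h2,h3"), (4, b"\x01\x02\x03\x04")])
assert not svcparams_sorted_and_unique([(4, b""), (1, b"")])  # unsorted
assert not svcparams_sorted_and_unique([(1, b""), (1, b"")])  # duplicate key
```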

So, I gave some thought to the problem space, and went back and looked at
a bunch of RFCs regarding new RRTYPEs, handling unknown RRTYPEs, etc., as
well as giving thought to the basic goals for which SVCB/HTTPS is a
proposed solution.

It's not a great situation.
(Feel free to skip ahead; this is way too long, and possibly both moot and unnecessary.)
Here's what I've found out (or refreshed my memory on) which impacts the
design decisions and possible alternatives (with some caveats and limitations):

   - RFC5507 recommends against anything new using subtypes (particularly
   as selectors). That is an informational RFC, but authored by the IAB.
   - RFC6950 expands on RFC5507, but doesn't have much relevant guidance
   beyond what RFC5507 says in that regard.
   - RFC3597 governs handling unknown RRTYPEs. Basically it allows for them
   to be handled (thus making new RRTYPEs deployable generally), but has the
   side-effect of "no special handling" (i.e., no additional processing).
   - RFC2181 details/updates how TC=1 should be set, and how clients should
   handle TC=1 responses.
   - RFC403[345] and RFC5155 (and any updates) have guidance for any
   section other than the Answer section, versus regular RRsets and their
   RRSIGs.

The 5507 guidance suggests using either distinct RRTYPEs, or underscore
selectors, or both, rather than subtypes. However, either of those would
require multiple queries to obtain records, so there would be a performance
impact for doing that.
The 2181 rules basically mean that anything placed in the Additional
section isn't guaranteed to fit, and won't trigger TC=1 if it doesn't.
The 4035 rules require additional sections to include RRSIGs along with
RRsets, but only if they all fit. They allow responses to exclude RRSIGs
even while still including signed RRsets in the Additional section.
The original RFC1035 specifies CNAME format and handling, and this is
confirmed by 2181: it is a singleton (only one CNAME is permitted at an
owner name, and no other RRTYPE may coexist with it). (See below for why
that is important.) Other than SOA, there do not appear to be any other
"singleton" RRTYPEs enforced by authority servers or resolvers.

For some new RRTYPE to be deployable, it would need to be able to be
handled via RFC3597 rules. That precludes any new RRTYPE that has
Additional processing (before full deployment across the ecosystem), or
requirements to be a singleton RRTYPE (only one tuple of that RRTYPE permitted).
Even after full wide scale deployment, it would be difficult (if not
impossible) to ensure the singleton key rule on a subtyped RRTYPE. (That's
one of the complexity issues).
It would be much more feasible to enforce rules for singleton tuples if the
"key" were actually the RRTYPE, and that would imply needing several. RFC
5507 definitely supports this methodology, and the RRTYPE space (16 bits)
is large enough for that to be very reasonable.
However, obviously, the need for multiple queries conflicts with the goal
of low latency, i.e., getting all the records with a single query (QTYPE).
It might be possible to request a new meta-type, similar to the behavior of
the deprecated QTYPE MAILB (which matches any of 3 RRTYPEs: MB, MG, and MR).
Or, it might be possible to replicate the EDNS flag behavior of DO, to ask
for related record types to be included in the Answer section (instead of
the Additional section).
Both of those are not really feasible to expect to be deployed in the short term.

Before reading further, please be warned. To quote Douglas Adams' Deep
Thought, "You're really not going to like it."
You've been warned...

In order to have a solution that is deployable now, and can guarantee the
uniqueness at a key level (enforced by existing DNS rules), it is necessary
to use CNAME chains, and encoding of keys and values as domain names.
(Note that I'm not actually suggesting this seriously; I'm presenting it as
a reason to not expect the DNS as currently deployed to handle the
uniqueness problem.)

I said you weren't going to like it.

Here's why that would work, and how it would work, if someone wanted to
implement things this way:

   - Assume the CNAME chain goes from a representation of "key" to a
   representation of "value"
      - There can only be one CNAME record at that "key", so both the "key"
      and "value" are singletons
   - The CNAME chain is a well-ordered sequence.
      - Thus, if the UI managing this knew about the numeric values for the
      keys, it could keep them in sorted order (numerically).
   - If the entire set were in the same zone, the entire chain would
   generally be served in a single response (if it could fit in the Answer
   section without triggering TC=1). A single query would return the whole
   encoded chain.
   - A resolver would cache the whole chain, even if it did not know about
   the final RRTYPE at the end of the chain.
   - A client making a query would receive the whole chain from a resolver.
   - The presence of multiple matching records would require a combined
   (double-length) chain.
      - (Trying to encode two single-chain things at the same owner name is
      not allowed per CNAME rules.)
   - The last name in the CNAME chain would probably need to own some new
   RRTYPE (exclusively).
      - This would ensure there is no conflict resulting from using any
      existing QTYPE.
   - The names used for encoding would probably need to be
   underscore-prefixed, or at least children of an underscore-prefixed name.

I could give an example of this encoding, but I think this is already long
enough and silly enough. Unless anyone wishes to take this further.
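
(For anyone who does want to see it, here is a hypothetical sketch of the encoding. Every name, label, and the RRTYPE number below is invented purely for illustration:)

```zone
; Hypothetical only: two "key=value" bindings encoded as one CNAME chain.
; Key names CNAME to value-encoding names; each value name CNAMEs to the
; next key, so CNAME singleton rules enforce one value per key.
_key1._svcenc.example.com.          IN CNAME _val-alpn-h2._svcenc.example.com.
_val-alpn-h2._svcenc.example.com.   IN CNAME _key4._svcenc.example.com.
_key4._svcenc.example.com.          IN CNAME _val-port-8443._svcenc.example.com.
; The terminal name owns a new RRTYPE (shown here in RFC3597 unknown-type
; syntax, using a private-use type number and empty RDATA) so the chain
; has a well-defined end and a usable QTYPE.
_val-port-8443._svcenc.example.com. IN TYPE65280 \# 0
```

(If the whole thing sat in one zone, a single query for the first key name would typically return the whole chain in the Answer section.)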

> I would also add the considerations for humans who are reading or writing
> zone files.  Can they see bindings as a unit, with confidence?  Is the
> syntax familiar and self-explanatory?  Is it excessively verbose?  Are
> typos likely to be caught early, with helpful error messages?

I think it should be taken as a safe assumption that the vast majority
of end users will either be using some kind of UI (good, bad, or
ugly) that is (eventually) aware of the relevant RRTYPE(s), or using one or
more tools that do validation of the zone file (as part of the process of
adding new records), or using software for serving the zone(s) which does
the necessary checks as part of the start-up or zone-loading process (and
prevents illegal stuff, including things like "CNAME and other RRTYPE at
same owner name", or "multiple CNAMEs at same owner name").
Obviously using encoding via RFC3597 would not have any special checks, so
very early adopters would need to be aware of the issues. I don't think
that's the issue here, though.
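
(To illustrate the kind of zone-loading checks meant here, a toy validator for the two CNAME rules just mentioned. The function and its `(owner, rrtype)` input shape are invented for illustration; real servers also exempt DNSSEC types such as RRSIG/NSEC, which may legitimately sit beside a CNAME:)

```python
from collections import defaultdict

def cname_violations(records):
    """Flag the two CNAME rules for a list of (owner, rrtype) tuples:
    no owner may have more than one CNAME, and no owner may mix a
    CNAME with any other RRTYPE. Returns a list of (owner, problem)."""
    by_owner = defaultdict(list)
    for owner, rrtype in records:
        by_owner[owner.lower()].append(rrtype.upper())
    problems = []
    for owner, types in by_owner.items():
        if types.count("CNAME") > 1:
            problems.append((owner, "multiple CNAMEs at same owner name"))
        if "CNAME" in types and any(t != "CNAME" for t in types):
            problems.append((owner, "CNAME and other RRTYPE at same owner name"))
    return problems

# A lone CNAME is fine; duplicates or mixed types are flagged:
assert cname_violations([("www.example.", "CNAME")]) == []
assert cname_violations([("www.example.", "CNAME"),
                         ("www.example.", "A")]) != []
```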

Humans can edit zone files and make errors, but generally, if an RRTYPE is
known, there will be a bunch of checks to make sure the zone file conforms
to all the specified requirements.

Of course, the simpler the RRTYPE is to parse, the more reliable that
process will be (particularly for new RRTYPEs, for which the parsing and
validation is new).