Re: Call for Community Feedback: Guidance on Reporting Protocol Vulnerabilities

Toerless Eckert <> Wed, 28 October 2020 15:25 UTC

Date: Wed, 28 Oct 2020 16:25:34 +0100
From: Toerless Eckert <>
To: "Salz, Rich" <>
Cc: Roman Danyliw <>, The IETF List <>
Subject: Re: Call for Community Feedback: Guidance on Reporting Protocol Vulnerabilities

On Tue, Oct 27, 2020 at 02:52:01PM +0000, Salz, Rich wrote:
> >    So... should the protocol spec have a requirement stating that implementations
> >    MUST ensure this can not happen, and - oh, go figure out how to do that, not a
> >    protocol issue ?
> I am not sure what you are trying to say.  That it's hard to determine where the fault is sometimes?  I don't think anyone disagrees with that.

I have seen in the past, and still see, a lot of resistance in standards-track work
to going beyond mathematically provable changes of packets on a sufficiently long
physical wire. In discussions with past ADs, this has even gone as far as citing
"protocols" between two (possibly different-vendor) software components within a
single box as examples of something not appropriately called standards protocol work
for the IETF. I am not sure if you remember the history of not allowing
standardization of APIs; that has only fairly recently begun to change.

So I am concerned about dogmatic restrictions on what can and cannot be called a
"protocol" with respect to vulnerabilities, and hence I would strongly suggest not
using that word in the name.

> I worry about something like "" becoming swamped with implementation issues, but I would support this if we agreed it was a two-year experiment or something.

Too much success? We are not paying money, so why the fear? Are there similar
problems in other places?

But of course: how could we ever start something like this (that we are unfamiliar
with) without calling it experimental? The same goes for what Roman has already proposed.

> >    In patents, patent protection is only granted when the description is
> >    sufficient to build a working model. So if you want to claim that a protocol
> >    is not at fault for an attack, its description needs to be sufficient to
> >    make it clear how to build a working model protecting against the attack.
> Patents (at least in the US) typically have an "escape clause" near the beginning, often written like "As will be readily obvious to one familiar with the field". So I see the same parallel to standards: avoiding memory exhaustion under load should be readily obvious to one familiar with the field.

Except to those who develop product.
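For what it is worth, the kind of implementation-side guard the quoted text treats as "readily obvious" can be sketched in a few lines. This is a toy of my own making (the class name, the single global entry budget, and the peer-keyed table are all illustrative assumptions, not from any spec or product); the hard part in practice is choosing the budget and the shedding policy, not writing the check:

```python
# Toy sketch: a bounded per-peer state table that refuses new entries once
# a fixed budget is reached, instead of growing without limit under load.
# All names and the single-budget model are illustrative assumptions.

class StateTable:
    def __init__(self, max_entries):
        self.max_entries = max_entries   # global budget; sizing it well is the unsolved part
        self.entries = {}

    def admit(self, peer_id, state):
        """Store state for peer_id; return False if the budget is exhausted."""
        if peer_id in self.entries:
            self.entries[peer_id] = state   # updating existing state is always allowed
            return True
        if len(self.entries) >= self.max_entries:
            return False                    # shed load rather than exhaust memory
        self.entries[peer_id] = state
        return True

table = StateTable(max_entries=2)
assert table.admit("a", 1)
assert table.admit("b", 2)
assert not table.admit("c", 3)   # budget full: new peer rejected
```

The check itself is trivial; what the protocol spec rarely tells you is what a reasonable budget is, or which existing entries (if any) to evict when an attacker fills the table first.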

IMHO my one example is an ongoing, unsolved problem: how to dynamically manage
limited resources in routers among the different users of those resources. There are
no tools to predict memory utilization for routers, so you cannot even build a
simulation of a network and validate up front that it will run without memory problems.
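To make the allocation problem concrete: even the textbook answer, a max-min fair split of a fixed pool among competing users, only covers a static snapshot. Here is a minimal sketch (my own illustration for this discussion; the function name and the demand model are assumptions, not anything from a router implementation), which shows how the pool is divided but says nothing about demands that change every second:

```python
# Toy max-min fair allocation of a fixed capacity among users with known
# demands. Users needing less than the equal share keep their demand; the
# freed remainder is redistributed among the rest. Purely illustrative.

def max_min_fair(capacity, demands):
    """Return {user: allocation} for a max-min fair split of capacity."""
    alloc = {u: 0 for u in demands}
    remaining = dict(demands)
    cap = capacity
    while remaining and cap > 0:
        share = cap / len(remaining)
        satisfied = {u: d for u, d in remaining.items() if d <= share}
        if not satisfied:
            # everyone wants more than the equal share: split evenly
            for u in remaining:
                alloc[u] += share
            break
        for u, d in satisfied.items():
            alloc[u] += d
            cap -= d
            del remaining[u]
    return alloc

# 10 units of memory, three users: the two modest demands are met in
# full, the greedy one gets what is left.
print(max_min_fair(10, {"a": 2, "b": 4, "c": 10}))
```

The real difficulty the paragraph above points at is that router demands are neither known nor stable, so a one-shot split like this can only ever be the inner loop of something much harder.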

I just had a nice pondering about the millions of dollars in fines regularly levied
for network failures where life-and-death services (such as 911) run over the
network. Try to figure out what type of network and operational design you need to
proactively avoid such fines in the future (and the associated loss of life when such
services fail). And that is in the absence of evil attackers; think about
how much harder this becomes when attackers are present.

Everything is easy when, in the worst case, a service can just fail, and the worst
outcome is that people have to read a book instead of streaming a movie.