Re: [Ntp] NTPv5: big picture

Philip Prindeville <philipp@redfish-solutions.com> Mon, 04 January 2021 20:21 UTC

Return-Path: <philipp@redfish-solutions.com>
X-Original-To: ntp@ietfa.amsl.com
Delivered-To: ntp@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 729333A1039 for <ntp@ietfa.amsl.com>; Mon, 4 Jan 2021 12:21:00 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.001
X-Spam-Level:
X-Spam-Status: No, score=0.001 tagged_above=-999 required=5 tests=[RCVD_IN_DNSWL_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id akx4jFpDYxxm for <ntp@ietfa.amsl.com>; Mon, 4 Jan 2021 12:20:58 -0800 (PST)
Received: from mail.redfish-solutions.com (mail.redfish-solutions.com [45.33.216.244]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 1EF503A1037 for <ntp@ietf.org>; Mon, 4 Jan 2021 12:20:58 -0800 (PST)
Received: from [192.168.3.4] ([192.168.3.4]) (authenticated bits=0) by mail.redfish-solutions.com (8.16.1/8.16.1) with ESMTPSA id 104KKumS348270 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 4 Jan 2021 13:20:56 -0700
Content-Type: text/plain; charset="utf-8"
Mime-Version: 1.0 (Mac OS X Mail 14.0 \(3654.40.0.2.32\))
From: Philip Prindeville <philipp@redfish-solutions.com>
In-Reply-To: <1086ffe6-234a-d2d4-13d6-6031c263f4cd@rubidium.se>
Date: Mon, 04 Jan 2021 13:20:55 -0700
Cc: ntp@ietf.org
Content-Transfer-Encoding: quoted-printable
Message-Id: <B4E8F8D4-95D8-4ACB-9770-FCFEBFE002A0@redfish-solutions.com>
References: <20210101025440.ECE3340605C@ip-64-139-1-69.sjc.megapath.net> <155b7ae6-c668-f38f-2bbd-fd98fa4804db@rubidium.se> <16442E9F-DD22-4A43-A85D-E8CC53FEA3E5@redfish-solutions.com> <66534000-c3ba-8547-4fb1-1641689c6eba@rubidium.se> <E6F9312A-2080-4D13-9092-935080859750@redfish-solutions.com> <1086ffe6-234a-d2d4-13d6-6031c263f4cd@rubidium.se>
To: Magnus Danielson <magnus@rubidium.se>
X-Mailer: Apple Mail (2.3654.40.0.2.32)
X-Scanned-By: MIMEDefang 2.84 on 192.168.1.3
Archived-At: <https://mailarchive.ietf.org/arch/msg/ntp/f3_jzxgRKREpE1xw-bOGZvmP4Hk>
Subject: Re: [Ntp] NTPv5: big picture
X-BeenThere: ntp@ietf.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: <ntp.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/ntp>, <mailto:ntp-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/ntp/>
List-Post: <mailto:ntp@ietf.org>
List-Help: <mailto:ntp-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/ntp>, <mailto:ntp-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 04 Jan 2021 20:21:00 -0000


> On Jan 4, 2021, at 9:27 AM, Magnus Danielson <magnus@rubidium.se> wrote:
> 
> Philip,
> 
> On 2021-01-02 03:49, Philip Prindeville wrote:
>> Replies…
>> 
>> 
>>> On Jan 1, 2021, at 7:01 PM, Magnus Danielson <magnus@rubidium.se> wrote:
>>> 
>>> Philip,
>>> 
>>> On 2021-01-02 01:55, Philip Prindeville wrote:
>>>> Replies…
>>>> 
>>>> 
>>>> 
>>>>> On Dec 31, 2020, at 8:35 PM, Magnus Danielson <magnus@rubidium.se> wrote:
>>>>> 
>>>>> Hal,
>>>>> 
>>>>> On 2021-01-01 03:54, Hal Murray wrote:
>>>>>> Do we have a unifying theme?  Can you describe why we are working on NTPv5 in 
>>>>>> one sentence?
>>>>>> 
>>>>>> I'd like to propose that we get rid of leap seconds in the basic protocol.
>>>>> Define "get rid off". Do you meant you want the basic protocol to use a
>>>>> monotonically increasing timescale such as a shifted TAI? If so, I think
>>>>> it would make a lot of sense.
>>>>> 
>>>>> If it is about dropping leap second knowledge, it does not make sense.
>>>> I think “handle separately” makes sense here.  It shouldn’t be a blocking problem and how we handle it, assuming we handle it correctly, is orthogonal to everything else.  Or should be.  See “assuming we handle it correctly”.
>>> I think the core time should have a very well known property, and then
>>> we can provide mappings of that and provide mapping parameters so that
>>> it can be done correctly.
>> 
>> Jon Postel would frequently tell us, “protocol, not policy”.
>> 
>> The mapping doesn’t have to be embedded in the protocol if it’s well understood and unambiguous.
> 
> You are mixing the cards quite a bit here. Policy is not being discussed
> at all here.


It very much is.  It’s a “policy” decision to state, for example:

"NTP needs to support UTC and it needs to announce leap seconds before they happen.”

No, we could have a leapless monotonic timescale and leave it to the application layers or runtime environment to convert kernel time to UTC, etc.

Just as it’s a policy decision to have a machine’s clock be UTC or local time, or to have the epoch start at 1900, 1970, or 1980, etc.
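
To make that concrete: the mapping the application layer (or runtime) would have to do is trivial arithmetic.  A minimal C sketch, purely illustrative — the epoch offsets are well-known constants, the sample instant is made up, and the TAI-UTC value (37 s as of 2021) has to come from out-of-band metadata rather than from the packet format:

/* Illustrative only: a leapless count of seconds (TAI-aligned, using the
 * Unix epoch for convenience) converts to other scales with plain
 * arithmetic.  Only the TAI-UTC offset needs to be supplied as metadata. */
#include <stdint.h>
#include <stdio.h>

#define UNIX_MINUS_NTP  2208988800LL  /* 1970-01-01 minus 1900-01-01, in seconds */
#define GPS_MINUS_UNIX   315964800LL  /* 1980-01-06 minus 1970-01-01, in seconds */
#define TAI_MINUS_GPS           19LL  /* constant since the GPS epoch */

int main(void)
{
    int64_t tai_sec = 1609791655LL + 37;  /* hypothetical instant on the leapless scale */
    int     tai_utc = 37;                 /* TAI-UTC, supplied out of band */

    int64_t utc_unix = tai_sec - tai_utc;           /* POSIX-style UTC seconds */
    int64_t ntp_sec  = utc_unix + UNIX_MINUS_NTP;   /* NTP era-0 seconds */
    int64_t gps_sec  = tai_sec - TAI_MINUS_GPS - GPS_MINUS_UNIX;

    printf("UTC (Unix): %lld\nNTP era 0 : %lld\nGPS       : %lld\n",
           (long long)utc_unix, (long long)ntp_sec, (long long)gps_sec);
    return 0;
}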



> 
> "protocol" is missleading as well. Also, in the old days it was a lot of
> protocols, but lacking architectural support providing needed technical
> guidance. We moved away from that because more was needed.


Sorry, this is a bit of a generalization.  What prior parallels can you point to that apply to us, so I have a better sense of what you’re referring to?


> The mappings I talk about need to be not only known, but referenced, or
> else the "protocol" cannot be implemented in a consistent way,
> which we badly need. If a new relationship is introduced, it needs to be
> specified.


We can “reference” the mapping without needing to embed it in the protocol as we’ve previously done, yes.  Agreed.


> 
> None of this has anything to do with policy as such; that is at best a
> secondary concern that has nothing to do with what we discuss here.


What goes into a protocol is in itself a policy decision.



> 
>> 
>>>>>> Unfortunately, we have a huge installed base that works in Unix time and/or 
>>>>>> smeared time.  Can we push supporting that to extensions?  Maybe even a 
>>>>>> separate document.
>>>>> Mapping of NTPv5 time-scale into (and from!) NTP classic, TAI, UTC,
>>>>> UNIX/POSIX, LINUX, PTP, GPS time-scales is probably best treated
>>>>> separately, but needs to be part of the standard suite.
>>>> I think TAI makes sense, assuming I fully understand the other options.
>>> Do notice, the actual TAI-timescale is not very useful for us. We should
>>> have a binary set of gears that has its epoch at some known TAI time.
>>> One such may for instance be 2021-01-01T00:00:00 (TAI). As one can
>>> convert between the NTPv5 timescale and the TAI-timescale we can then
>>> use the other mappings from TAI to other timescales, assuming we have
>>> definitions and other parameters at hand. One such parameter is the
>>> TAI-UTC difference (and any upcoming change).
>> 
>> I’m hoping to avoid further proliferation of timescales if possible.
> It's a lost cause because it builds on a misconception. It's not
> creating a timescale that competes with TAI and UTC, it's the technical
> way that those time-scales are encoded for communication in order to
> solve practical problems and be adapted to those technical challenges.
> Very few use the actual TAI or UTC encoding as they communicate, for
> good technical reasons. That will continue to be so, and there is no
> real case for proliferation as such.


Again, that sounds to me like a generalization.  Or perhaps it’s an assertion that’s well understood by everyone else but me… so maybe you can enlighten me?

PTP uses TAI.  It doesn’t seem to have been an impediment for them.  What am I missing?  And why did these “good technical reasons” not apply there?



>>>> If NTP v5 sticks around as long as NTP v4 has to date, I think we shouldn’t underestimate the implications both in autonomous flight (the unpiloted taxis that are being certified right now come to mind) and in the proliferation of commercial space flight… space flight has been commoditized (in part) by the use of commercial-off-the-shelf technologies such as metal 3D printing for generating bulkheads and structural panels.
>>>> 
>>>> Why shouldn’t the time standard/format used for space flight also be COTS?
>>>> 
>>>> It seems increasingly probable over the next 20 years that interplanetary flight will become common.
>>> Things can be installed in places where we just don't have access to
>>> a normal network.
>> 
>> Can you say what a “normal network” will be in 20 years?  I can’t.
>> 
>> When I wrote RFC-1048 in 1988 I hardly thought there would be more than 4B IP-enabled smartphones with 2-way video capability less than 30 years later.
>> 
>> I’m not even sure what 10 years down the road looks like.  And I’ve been trying for almost 4 decades.  My track record is nothing to brag about.
>> 
>> What’s the shelf-life on WiFi6 or 5G?
>> 
>> Will residential US ISP’s finally have rolled out IPv6?
> 
> I think you completely missed my point. What I was really saying is
> that there is a wide range of different scenarios already today, and as
> we design a core protocol, no single one of them will provide the full
> truth.


The corollary of that is that whatever we design, there will be use cases where it doesn’t apply well… or at all.

I think it’s a fool’s errand to pursue a “one size fits all” solution to a highly technical problem.


> You only illustrate that you do not know me when you attempt to
> challenge me like that.


I’m not challenging anyone.  I’m acknowledging that there are unknowables, particularly about the future.

The longer the period one considers, the more numerous and significant the unknowables turn out to be in retrospect.



> 
>>>> Further, assuming we start colonizing the moon and Mars… stay with me here… will the length of a terrestrial day still even be relevant?  Or will we want a standard based not on the arbitrary rotation of a single planet, but based on some truly invariant measure, such as a number of wavelengths of a stable semiconductor oscillator at STP?
>>> You can set up additional time-scales and supply parameters to convert
>>> to them. If you need a Mars time, that can be arranged. JPL did that
>>> because they could.
>> 
>> KISS
>> 
>> The Mars Climate Orbiter impacted the planet in 1999 because one team’s software produced imperial units while the other’s expected metric.  That’s a catastrophic mistake arising from the existence of a SECOND scale of measurement.
>> 
>> https://everydayastronaut.com/mars-climate-orbiter/
> You are now very much of the mark here.


Sorry, typo?  “On the mark”?  “Off the mark”?


>>>>>> --------
>>>>>> 
>>>>>> Part of the motivation for this is to enable and encourage OSes to convert to 
>>>>>> non-leaping time in the kernels.  Are there any subtle details in this area 
>>>>>> that we should be aware of?  Who should we coordinate with?  ...
>>>>> I think that would be far too ambitious to rock that boat.
>>>> Divide and conquer.
>>>> 
>>>> I think POSIX clock_* attempted this by presenting mission requirements-based API’s.
>>> That was only part of the full interface.
>> 
>> Yes.  So?
> If one argues based on what the POSIX CLOCK_* does, then one will not
> get the full view.


The “view” that I am going for is that API’s can be extended to accommodate shortcomings that weren’t understood previously, but later come to light.
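
Linux already did exactly that with its clocks: CLOCK_TAI was bolted onto the existing clock_gettime() interface long after POSIX defined CLOCK_REALTIME.  A quick sketch — Linux-specific, and CLOCK_TAI only diverges from CLOCK_REALTIME once something like ntpd or chronyd has told the kernel the current TAI-UTC offset:

/* Sketch: the POSIX clock API, extended after the fact with a leapless
 * clock.  CLOCK_TAI reads as CLOCK_REALTIME plus the kernel's TAI offset,
 * which stays zero until a daemon sets it. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec utc, tai;

    if (clock_gettime(CLOCK_REALTIME, &utc) != 0 ||
        clock_gettime(CLOCK_TAI, &tai) != 0) {
        perror("clock_gettime");
        return 1;
    }

    printf("CLOCK_REALTIME : %lld\n", (long long)utc.tv_sec);
    printf("CLOCK_TAI      : %lld\n", (long long)tai.tv_sec);
    printf("kernel TAI-UTC : %lld s\n", (long long)(tai.tv_sec - utc.tv_sec));
    return 0;
}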



>>>> The next step is to have consumers of time migrate… perhaps starting with logging subsystems, since unambiguous time is a requirement for meaningful forensics.
>>> They will do what the requirements tell them. Many have hard
>>> requirements for UTC.
>> 
>> Many will also migrate to whatever provides them with unassailable (unambiguous) timestamps in the case of litigation.  I’ve worked on timing for natural gas pipelines so that catastrophic failures (i.e. explosions) could be root-caused by examining high-precision telemetry… not least to protect the operator in the case of civil suits, criminal negligence charges, etc.
> 
> There are many ways to show traceability to UTC, while technically
> avoiding problems. This however often involves knowing the TAI-UTC
> difference one way or another such that the relationship to UTC is
> maintained. The requirement to be traceable to UTC will still stand in
> many many cases. The way that one technically avoids problems is to some
> degree orthogonal to that.


I disagree with this premise:  we don’t need to be traceable to UTC.  UTC needs to be derivable from our timescale, so that it’s meaningful to applications (and eventually, humans).
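
And deriving UTC at the consumer takes nothing more than a small leap-second table.  A sketch — the entries below are the real last few leap seconds, expressed on a Unix-epoch-based TAI count; a deployed table would of course be kept current from whatever metadata channel we define:

/* Sketch: UTC derived from a leapless (TAI-aligned) count via a leap
 * table.  Each entry gives the TAI count at which a TAI-UTC value took
 * effect. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct leap { int64_t tai_sec; int tai_utc; };

static const struct leap table[] = {
    { 1341100800LL + 35, 35 },  /* 2012-07-01: TAI-UTC became 35 s */
    { 1435708800LL + 36, 36 },  /* 2015-07-01: TAI-UTC became 36 s */
    { 1483228800LL + 37, 37 },  /* 2017-01-01: TAI-UTC became 37 s */
};

static int64_t utc_from_leapless(int64_t tai_sec)
{
    int off = 34;  /* TAI-UTC in force before the first entry above */
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (tai_sec >= table[i].tai_sec)
            off = table[i].tai_utc;
    return tai_sec - off;  /* POSIX-style UTC seconds */
}

int main(void)
{
    int64_t sample = 1483228800LL + 37;  /* the instant the 2017 step took effect */
    printf("UTC (Unix): %lld\n", (long long)utc_from_leapless(sample));
    return 0;
}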


> 
> Then again, many seem to misunderstand what the term "traceable" means.
> It does not mean "locked to".


Maybe I’m one of the many misunderstanding then.  For me “traceable to” is synonymous with “originating from”.


> 
>>>>>> ---------
>>>>>> 
>>>>>> I think this would bring out another important area: How does a client 
>>>>>> discover if a server supports an option and/or discover servers that do 
>>>>>> support it?
>>>>> The solution that works for other protocols is that you ask for
>>>>> capabilities (or you get them served as part of the basic handshake). This
>>>>> is typically a text string of well-defined capability names. Sets of
>>>>> constants or sets of bits have also been seen.
>>>> Or ASN.1 OIDs or… 
>>> Sure. There are a gazillion ways of doing it; I grabbed a few that seem
>>> popular. ASN.1 isn't popular, just what people are forced to use.
>> 
>> If popular equates to ubiquitous, then you can’t do SSL without ASN.1… and that makes it “popular” (though not “well liked” by any measure).  But I’m splitting hairs over semantics.
> I think you missed the hidden joke.


Quite probably.
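
Joke aside, coming back to capability discovery: the “set of bits” style is trivial to specify and to extend.  A purely hypothetical sketch — none of these flag names come from any draft or implementation:

/* Hypothetical capability bits, for illustration only. */
#include <stdint.h>
#include <stdio.h>

#define CAP_LEAP_TABLE   (1u << 0)  /* server can supply a leap-second table */
#define CAP_NTS          (1u << 1)  /* NTS authentication available          */
#define CAP_UTC_MAPPING  (1u << 2)  /* TAI-UTC offset carried in responses   */

int main(void)
{
    uint32_t advertised = CAP_LEAP_TABLE | CAP_UTC_MAPPING;  /* from a server reply */

    if (advertised & CAP_NTS)
        printf("server advertises NTS; worth doing the NTS-KE handshake\n");
    else
        printf("no NTS advertised; skip NTS-KE\n");
    return 0;
}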


>> 
>>>>>> I'd like the answer to be authenticated.  It seems ugly to go through NTS-KE 
>>>>>> if the answer is no.
>>>>> Do not assume you have it, prefer the authenticated answer when you can
>>>>> get it. I am not sure we should invent yet another authentication scheme.
>>>>> 
>>>>> Let's not make the autokey-mistake and let some information be available
>>>>> only through an authentication scheme that ended up being used by very
>>>>> few. You want to have high orthogonality as you do not know what lies ahead.
>>>>> 
>>>>> So, we want to be able to poll the server of capabilities. Remember that
>>>>> this capability list may not look the same on un-authenticated poll as
>>>>> for authenticated poll. It may provide authentication methods, hopefully
>>>>> one framework fits them all, but we don't know. As you ask again you can
>>>>> get more capabilities available under that authentication view. Another
>>>>> configuration or implementation may provide the exact same capabilities
>>>>> regardless of authentication.
>>>>> 
>>>>>> Maybe we should distribute the info via DNS where we can 
>>>>>> use DNSSEC.
>>>>> Do not assume you have DNS access; the service cannot rely on that. It
>>>>> can however be one supplementary service. NTP is used in some crazy
>>>>> places. Similarly with DNSSEC, use and enjoy it when there, but do not
>>>>> depend on its existence.
>>>> Good point.
>>>> 
>>>> As someone who works in security, I’ve seen a fair share of exploits that arise when protocols make tacit assumptions about the presence and correctness of other capabilities and then these turn out not to be valid under certain critical circumstances.
>>>> 
>>>> Doing X.509 when you don’t have Internet connectivity for CRL’s or OCSP is a good example.
>>> I've seen many networks where normal rules of internet applies, yet they
>>> are critical, need their time, and are fairly well protected.
>> 
>> Not quite following what you’re saying.  Are you alluding to operating a (split-horizon) border NTP server for hosts inside a perimeter that in turn don’t have Internet access themselves (effectively operating as an ALG)?  Or something else?
> 
> There are indeed a lot of scenarios where you operate NTP inside setups
> which have no, or very limited, Internet access. Yet it is used because
> it is COTS "Internet technology" that fits well with what is used. As we
> design things we also need to understand that there is a wide range of
> usage scenarios for which our normal expectation of what we can do on
> "Internet" is not necessarily true or needed.


That should be a tipoff right there:  We’re working on “Internet standards”.  Not “Intranet standards”.

We don’t need to solve every problem in every scope.  There’s a reason the term “local administrative decision” appears in many, many standards.

Again, protocol, not policy.  Deciding that we need to accommodate rare/one-off usage scenarios in isolated cases is a policy decision.  It’s choosing to insert ourselves into an environment that’s not strictly within our purview.

What people do on their own intranets with the curtains drawn (and the perimeter secured) is very much their decision…


> What is a very wise thing
> to do for "Internet" may be overly burdensome and even prohibitive in
> other scenarios, and vice versa: things that we may think are wise to do
> in those scenarios can be wildly bad in the "Internet" usage scenario.


Again, we’re not the “Other scenarios Engineering Task Force”.


> This is why I try to point out that we might consider keeping the core
> behaviors independent of such usage scenarios, but then enable the
> ability to drop in or even require components and behaviors as needed
> for each such scenario.


We might actually be agreeing here, though for different motivations.  I like to keep things simple and keep policy out of the protocol, which has the end effect of making the protocol inherently more widely applicable (including beyond the originally mandated scope).


> 
>>>>> When DNS is available, what additional values can be put there to aid
>>>>> validation and service?
>>>> Uh… Available “where”?  The internet is relativistic.  What the server sees and what the client sees can be wildly different, even from moment to moment.
>>> I didn't say it was on Internet at all times. I can see things like a
>>> point-to-point wire, I can see a heavily fortified network air-gapped
>>> from Internet, I can see networks heavily fortified with very indirect
>>> access to Internet. I can see things with a NAT or just a simple
>>> firewall to the Internet. For some of these scenarios DNS will be
>>> available, for some not. They all use the same "Internet technology"
>>> because that is COTS. So, designing an internet technology also entails
>>> designing for these other scenarios. A successful design also
>>> understands and respects the problems of the more isolated scenarios.
>> 
>> FWIW, a border forwarding/caching recursive DNS server is a degenerate case of an ALG.
> 
> Sure, but not always available, and sometimes the operation of those may
> prohibit uses further into the network because, well, politics within an
> organization or even between organizations. So, you may not always have
> it available as you would like it. Thus, you have another usage scenario.
> 
> Cheers,
> Magnus


Um…  So what’s to stop such users from operating a GPS clock and driving their time from that, etc, etc?

If they’re choosing to NOT participate in the Internet at large then I’m not really sure how the Internet Engineering Task Force bears any obligation to accommodate them.

In other words, “not my problem”.

-Philip