Re: [Ntp] NTPv5: big picture

Philip Prindeville <philipp@redfish-solutions.com> Tue, 05 January 2021 04:53 UTC

From: Philip Prindeville <philipp@redfish-solutions.com>
Date: Mon, 04 Jan 2021 21:53:26 -0700
Cc: ntp@ietf.org
To: Magnus Danielson <magnus@rubidium.se>


> On Jan 4, 2021, at 5:26 PM, Magnus Danielson <magnus@rubidium.se> wrote:
> 
> Philip,
> 
> On 2021-01-04 21:20, Philip Prindeville wrote:
>> 
>>> On Jan 4, 2021, at 9:27 AM, Magnus Danielson <magnus@rubidium.se>
>>>  wrote:
>>> 
>>> Philip,
>>> 
>>> On 2021-01-02 03:49, Philip Prindeville wrote:
>>> 
>>>> Replies…
>>>> 
>>>> 
>>>> 
>>>>> On Jan 1, 2021, at 7:01 PM, Magnus Danielson <magnus@rubidium.se>
>>>>>  wrote:
>>>>> 
>>>>> Philip,
>>>>> 
>>>>> On 2021-01-02 01:55, Philip Prindeville wrote:
>>>>> 
>>>>>> Replies…
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Dec 31, 2020, at 8:35 PM, Magnus Danielson <magnus@rubidium.se>
>>>>>>>  wrote:
>>>>>>> 
>>>>>>> Hal,
>>>>>>> 
>>>>>>> On 2021-01-01 03:54, Hal Murray wrote:
>>>>>>> 
>>>>>>>> Do we have a unifying theme?  Can you describe why we are working on NTPv5 in 
>>>>>>>> one sentence?
>>>>>>>> 
>>>>>>>> I'd like to propose that we get rid of leap seconds in the basic protocol.
>>>>>>>> 
>>>>>>> Define "get rid off". Do you meant you want the basic protocol to use a
>>>>>>> monotonically increasing timescale such as a shifted TAI? If so, I think
>>>>>>> it would make a lot of sense.
>>>>>>> 
>>>>>>> If it is about dropping leap second knowledge, it does not make sense.
>>>>>>> 
>>>>>> I think “handle separately” makes sense here.  It shouldn’t be a blocking problem and how we handle it, assuming we handle it correctly, is orthogonal to everything else.  Or should be.  See “assuming we handle it correctly”.
>>>>>> 
>>>>> I think the core time should have a very well known property, and then
>>>>> we can provide mappings of that and provide mapping parameters so that
>>>>> it can be done correctly.
>>>>> 
>>>> Jon Postel would frequently tell us, “protocol, not policy”.
>>>> 
>>>> The mapping doesn’t have to be embedded in the protocol if it’s well understood and unambiguous.
>>>> 
>>> You are mixing the cards quite a bit here. Policy is not being discussed
>>> at all here.
>>> 
>> 
>> It very much is.  It’s a “policy” decision to state, for example:
>> 
>> "NTP needs to support UTC and it needs to announce leap seconds before they happen.”
>> 
>> No, we could have a leapless monotonic timescale and leave it to the application layers or runtime environment to convert kernel time to UTC, etc.
>> 
>> Just as it’s a policy decision to have a machine’s clock be UTC or local time, or to have the epoch start at 1900, 1970, or 1980, etc.
>> 
> It's not a policy decision of ours. It's a requirement the surrounding world brings us. If we do not find a way to provide what others need that will work for them, they will be forced to use another solution.
> 


Well, you’re half right.  It’s a requirement that a lot of the world has, but as long as the timescale is convertible AT SOME POINT somewhere between the packets on-the-wire and returning to a library call in libc, THEY DON’T REALLY CARE AND THEY SHOULDN’T HAVE TO.

This is why we have layers of abstraction, instead of a single monolithically linked image running on bare metal in a single context.

I think we did away with that along with batch-entry systems in the early 1960s.



> The only policy here is to make it relevant enough, and then try to make it come out as cheap as possible through engineering.


I’ll take “correct” over “cheap” most days.  Incorrect ends up being quite costly.


> 
>>> "protocol" is missleading as well. Also, in the old days it was a lot of
>>> protocols, but lacking architectural support providing needed technical
>>> guidance. We moved away from that because more was needed.
>>> 
>> 
>> Sorry, this is a bit of a generalization.  What prior parallels that apply to us can you point to so I have a better sense of what you’re referring to?
>> 
> For instance, RTP. When RTP was built, it did not really provide a model for how the clock was related to the media transported, especially when spread over multiple streams. It was only over several generations of specifications that the NTP clock was replaced by a common media clock, which was then related to the timing between the streams, etc., and their meaning.


Are we talking about multiple streams from the same source to the same endpoint?  Or converging streams from multiple sources?


> 
> In the context it was discussed, we talked about the NTPv5 time-scale. That needs to have some known relationship to some other known time-scale, such that the set of gears needed to convert time-stamps in and out of the NTPv5 time-scale is known.


Not disagreeing that the conversion needs to be well-understood (and unambiguous).

But it doesn’t need to happen inside NTP itself.  Or even the kernel.
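
A minimal sketch of what I mean, assuming a hypothetical leapless
timescale with its epoch at 1970-01-01T00:00:00 TAI and a leap-second
table maintained by the runtime (the table values and names here are
mine, purely for illustration):

    #include <stdint.h>

    /* Hypothetical leap-second table: the leapless second at which each
     * new TAI-UTC difference took effect, and that difference in seconds.
     * Two illustrative entries, not a complete table. */
    struct leap_entry { int64_t tai_sec; int tai_utc; };
    static const struct leap_entry leaps[] = {
        { 1435708836LL, 36 },  /* 2015-07-01T00:00:36 TAI */
        { 1483228837LL, 37 },  /* 2017-01-01T00:00:37 TAI */
    };

    /* Convert a leapless (TAI-like) count to POSIX/UTC seconds entirely
     * in user space.  Note the known wart: the inserted leap second
     * itself (23:59:60) has no distinct POSIX value. */
    int64_t leapless_to_posix(int64_t tai_sec)
    {
        int offset = 35;  /* difference in force before the first entry */
        for (unsigned i = 0; i < sizeof leaps / sizeof leaps[0]; i++)
            if (tai_sec >= leaps[i].tai_sec)
                offset = leaps[i].tai_utc;
        return tai_sec - offset;
    }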



> It could be TAI-like, but it would not be the actual TAI, just as the PTP timescale for instance.


My understanding is that PTP uses TAI as the default timescale.



> Whatever it is, it needs to be known, so that implementations A, B, and C of the protocol know how to convert it to the same time and so become consistent. So the mapping becomes an important part of the protocol behavior such that the intended service is achieved; leaving the mapping as an implementation issue to figure out is not good standards-making. It needs to be known, either specified directly or through reference.


You’re conflating things: NTP as a user-space process (as most daemons are) just needs to use the same run-times that query the kernel for the time and serve it up in the requested timescale, converting from the kernel’s internal canonical representation of time.
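
On Linux, for instance, the kernel already exposes exactly this split; a
minimal sketch (Linux-specific, and it assumes the kernel's TAI offset
has been set, e.g. by an NTP daemon via adjtimex):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec tai, utc;

        /* Two views of the same kernel timeline: a leapless one and the
         * UTC/POSIX one.  A daemon can serve either; the wire format
         * does not have to care which one the kernel keeps natively. */
        clock_gettime(CLOCK_TAI, &tai);
        clock_gettime(CLOCK_REALTIME, &utc);

        /* Crude: ignores the nanoseconds elapsed between the two calls. */
        printf("TAI-UTC according to the kernel: %lld s\n",
               (long long)(tai.tv_sec - utc.tv_sec));
        return 0;
    }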


> 
>>> The mappings I talk about need to be not only known but referenced, or
>>> else the "protocol" will not be able to be implemented in a consistent
>>> way, which we badly need. If a new relationship is introduced, it needs
>>> to be specified.
>>> 
>> 
>> We can “reference” the mapping without needing to embed it in the protocol as we’ve previously done, yes.  Agreed.
>> 
>> 
>> 
>>> None of this has anything to do with policy as such; that is at best a
>>> secondary concern that has nothing to do with what we discuss here.
>>> 
>> 
>> What goes into a protocol is in itself a policy decision.
>> 
> Well, there is already policy that it needs to be clear enough that multiple implementations can cooperate. Thus, it needs to have all the necessary details to make that feasible and demonstrable, but that policy applies not only to NTP. It's a requirement for it to be meaningful.


Not to split hairs, but that’s our ratification process.  The policy is that the protocol needs to be sufficiently clear that an implementation can be written “in the blind” using just the text of the standard, to prove out the completeness and accuracy of the standard.

The above is the process that validates adherence to this policy.



>> 
>>>>>>>> Unfortunately, we have a huge installed base that works in Unix time and/or 
>>>>>>>> smeared time.  Can we push supporting that to extensions?  Maybe even a 
>>>>>>>> separate document.
>>>>>>>> 
>>>>>>> Mapping of NTPv5 time-scale into (and from!) NTP classic, TAI, UTC,
>>>>>>> UNIX/POSIX, LINUX, PTP, GPS time-scales is probably best treated
>>>>>>> separately, but needs to be part of the standard suite.
>>>>>>> 
>>>>>> I think TAI makes sense, assuming I fully understand the other options.
>>>>>> 
>>>>> Do notice, the actual TAI-timescale is not very useful for us. We should
>>>>> have a binary set of gears that has its epoch at some known TAI time.
>>>>> One such may for instance be 2021-01-01T00:00:00 (TAI). As one can
>>>>> convert between the NTPv5 timescale and the TAI-timescale we can then
>>>>> use the other mappings from TAI to other timescales, assuming we have
>>>>> definitions and other parameters at hand. One such parameter is the
>>>>> TAI-UTC difference (and any upcoming change).
>>>>> 
>>>> I’m hoping to avoid further proliferation of timescales if possible.
>>>> 
>>> It's a lost cause because it builds on a misconception. It's not
>>> creating a timescale that competes with TAI and UTC, it's the technical
>>> way that those time-scales are encoded for communication in order to
>>> solve practical problems and be adapted to those technical challenges.
>>> Very few use the actual TAI or UTC encoding as they communicate, for
>>> good technical reasons. That will continue to be so, and there is no
>>> real case for proliferation as such.
>>> 
>> 
>> Again, that sounds to me like a generalization.  Or perhaps it’s an assertion that’s well understood by everyone else but me… so maybe you can enlighten me?
>> 
>> PTP uses TAI.  It doesn’t seem to have been an impediment for them.  What am I missing?  And how did these “good technical reasons” not apply here?
>> 
> PTP does not use TAI. PTP has a defined mapping of TAI into a PTP-timescale, which is different. You can convert PTP time to and from TAI. The PTP epoch, i.e. the moment when the PTP time-stamp was 0, is defined in clause 7.2.2 of IEEE1588-2008 as:
> 
> 
> "7.2.2 Epoch
> The epoch is the origin of the timescale of a domain.
> 
> The PTP epoch is 1 January 1970 00:00:00 TAI, which is 31 December 1969 23:59:51.999918 UTC.
> 
> NOTE 1—The PTP epoch coincides with the epoch of the common Portable Operating System Interface (POSIX) algorithms for converting elapsed seconds since the epoch to the ISO 8601:2004 printed representation of time of day; see ISO/IEC 9945:2003 [B16] and ISO 8601:2004 [B17].
> 
> NOTE 2—See Annex B for information on converting between common timescales."


Okay, I should have said, "PTP uses a direct 1:1 mapping to TAI without divergence."
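
Put as arithmetic, for what it's worth (a worked example using the
currentUtcOffset field that PTP transports; 37 s has been the TAI-UTC
difference since 2017-01-01):

    #include <stdint.h>

    /* PTP seconds are simply elapsed TAI seconds since
     * 1970-01-01T00:00:00 TAI (the clause 7.2.2 epoch above), so
     * PTP-to-TAI is the identity on the seconds count, and UTC
     * labeling is one subtraction. */
    int64_t ptp_to_posix_sec(int64_t ptp_sec, int current_utc_offset)
    {
        return ptp_sec - current_utc_offset;
    }

    /* Worked example: PTP second 1483228837 is 2017-01-01T00:00:37 TAI;
     * with currentUtcOffset = 37 that is POSIX 1483228800, i.e.
     * 2017-01-01T00:00:00 UTC. */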


> TAI just simply counts seconds.


Right.  Which goes back to my point that a time protocol doesn’t need to care about how you slice time up into larger units of measure like minutes, hours, days, years, and leap-seconds.

That’s a “presentation” issue, which is more applicable to other protocols like logging (which requires human readable timestamps) or calendaring (which deals with purely human notions of time).
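
The slicing itself is a one-liner at presentation time (standard C,
assuming the count has already been mapped to POSIX/UTC):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = 1483228800;  /* a plain seconds count */
        char buf[32];

        /* Years, days, hours, minutes: all produced here, at the
         * presentation layer, never on the wire. */
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
        puts(buf);  /* prints 2017-01-01T00:00:00Z */
        return 0;
    }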



> 
> Similarly, the GPS time-scale has its epoch defined to be at 1980-01-06T00:00:00Z, thus locking in the TAI-UTC offset of 19. This is referred to as proliferation of TAI-variants, but really isn't, as it's not attempting to fill the position of either TAI or UTC. Both the GPS and PTP internal time-scales are just technical scales for internal representation. Regardless of whether your GPS receiver outputs GPS time or UTC time (as the GPS-UTC difference is transported), you will never get the actual GPS or UTC time, but local replicas. From a metrology calibration stand-point, they have unknown properties, due to the lack of calibration and of the traceability achieved through calibration. Similarly with PTP: out of the PTP time you can produce a TAI replica as well as a UTC replica, as PTP transports the TAI-UTC difference.


But it doesn’t have to.  That’s a convenience for devices that don’t have external sources of conversion information.

An E3 digital cross-connect doesn’t typically receive updates of Zoneinfo, for example… even though it very much cares about accurate time (or it will have frame-slips, etc).


> So, these conversions or mappings between timescales are practical engineering things we do, but not really proliferation issues.


Or conversely it’s mixing up unrelated topics because of some intangible and incorrectly perceived benefit that it *may* provide, and we overcomplicate what should be a lean protocol unnecessarily.

We could, if we wanted to, come up with a protocol to distribute the Zoneinfo database to “dumb” devices like video conferencing systems, which would then mean that they had a separate mechanism for knowing when to apply leap-seconds, and we could remove that from what should be a strictly clock synchronization protocol… instead of laboring under the misconception that we need to solve all these problems in a common place, just because they’re both time-related.



> It's the thing we need to do.


And there’s that conflation.  No, it’s the thing you want to do, because it kills two birds with one stone.



> Just using TAI alongside UTC has been an issue, as some feel it should be UTC and only UTC to rule them all.


And yet UTC isn’t inherently unambiguous, especially if we believe that a day contains 86,400 seconds: 24 hours of 60 minutes each, each minute of 60 seconds, and each second being an SI…
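
Concretely, around the most recent leap second (a worked example;
23:59:60 is a legitimate UTC second but has no POSIX representation):

    /* UTC                     POSIX time_t
     * 2016-12-31T23:59:59     1483228799
     * 2016-12-31T23:59:60     1483228799 again (or frozen, or smeared)
     * 2017-01-01T00:00:00     1483228800
     *
     * Two distinct UTC seconds share one time_t label, so that day was
     * 86,401 SI seconds long and "seconds since the epoch" is not an
     * unambiguous name for a UTC instant. */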


> Well, in practice we need TAI-properties alongside the UTC properties. We kind of actually do not need the actual TAI, but it is handy if we can provide the replica. We will never get the actual UTC, but it is very handy if we can produce a replica. Since we need to be better than 1 second in our achieved precision for several applications, we end up needing to handle leap-seconds better than doing the POSIX or Linux mapping, which are just other mappings. The NTPv4 time-scale is just one more of those mappings of UTC.
> 
> BTW, Annex B in IEEE1588-2008 is a handy informative annex of the IEEE1588 standard.


I’ll look it over when I have some time.


> 
>> 
>>>>>> If NTP v5 sticks around as long as NTP v4 has to date, I think we shouldn’t underestimate the implications in both autonomous flight (the unpiloted taxis that are being certified right now come to mind), as well as the proliferation of commercial space flight… space flight has been commoditized (in part) by the use of commercial-off-the-shelf technologies such as metal 3D printing for generating bulkheads and structural panels.
>>>>>> 
>>>>>> Why shouldn’t the time standard/format used for space flight also be COTS?
>>>>>> 
>>>>>> It seems increasingly probable over the next 20 years that interplanetary flight will become common.
>>>>>> 
>>>>> Things can be installed in places where we just don't have access to
>>>>> a normal network.
>>>>> 
>>>> Can you say what a “normal network” will be in 20 years?  I can’t.
>>>> 
>>>> When I wrote RFC-1048 in 1988 I hardly thought there would be more than 4B IP-enabled smartphones with 2-way video capability less than 30 years later.
>>>> 
>>>> I’m not even sure what 10 years down the road looks like.  And I’ve been trying for almost 4 decades.  My track record is nothing to brag about.
>>>> 
>>>> What’s the shelf-life on WiFi6 or 5G?
>>>> 
>>>> Will residential US ISP’s finally have rolled out IPv6?
>>>> 
>>> I think you completely missed my point. What I was really saying is
>>> that there is a wide range of different scenarios already today, and as
>>> we design a core protocol, no single one of them will provide the full
>>> truth.
>>> 
>> 
>> The corollary of that is that whatever we design, there will be use cases where we don’t apply well… or at all.
>> 
>> I think it’s a fool’s errand to pursue a “one size fits all” solution to a highly technical problem.
>> 
> Which isn't what I say, if you listened a little more carefully to what I say. I actually say the opposite. One size does not fit all, and the trouble is that I see that the same protocol design may end up being used in such diverse settings that we might consider these as just that: different scenarios. I think most of the things we want to do will be the same, but some aspects will be quite different. However, if we treat them with sufficient care so they can be added and removed for the various scenarios, we can make sure the core protocol remains the same, and the set of things needed for the "Internet" scenario is clear and required as we implement that, and wise to do in that scenario.


If people try to use the wrong protocol in their given circumstances, there’s not much we can do to stop them.  At least not in any practicable way.

Worrying about how our protocol operates in outlying cases beyond our scope or control is a waste of time.  And there will always be more such permutations than we can foresee.



>> 
>>> You only illustrate that you do not know me when you attempt to
>>> challenge me like that.
>>> 
>> 
>> I’m not challenging anyone.  I’m acknowledging that there are unknowables, particularly about the future.
>> 
>> The larger a period one considers, the more numerous and significant in magnitude the unknowables are in retrospect.
>> 
> As if I were not already alluding to that. 


Okay, well, we’ve found something else to agree on then.

Converging on an acceptable normative standard is not unlike eating an elephant.



>>>>>> Further, assuming we start colonizing the moon and Mars… stay with me here… will the length of a terrestrial day still even be relevant?  Or will we want a standard based not on the arbitrary rotation of a single planet, but based on some truly invariant measure, such as a number of wavelengths of a stable semiconductor oscillator at STP?
>>>>>> 
>>>>> You can set up additional time-scales and supply parameters to convert
>>>>> for them. If you need a Mars time, that can be arranged. JPL did that
>>>>> because they could.
>>>>> 
>>>> KISS
>>>> 
>>>> The Mars Climate Orbiter impacted the planet in 1999 because one team’s software was producing imperial units while everyone else was using metric.  That’s a catastrophic mistake arising from the existence of a SECOND scale of measurement.
>>>> 
>>>> 
>>>> https://everydayastronaut.com/mars-climate-orbiter/
>>> You are now very much of the mark here.
>>> 
>> 
>> Sorry, typo?  “On the mark”?  “Off the mark”?
>> 
> Off
> 
> The only relevance to that story is with Mars. It was not a timing issue, it was a huge engineering debacle, complex enough that it's hard to see its relevance here. 


The relevance is that some people were thinking in metric while others were thinking in imperial.

Had metric been the single common unit of measure, there never would have been any confusion about the values being discussed.

Similarly, it would be nice to have a single, unambiguous timescale from which other timescales (including those which are inherently ambiguous due to leap-seconds, etc) could be derived…



> 
>> 
>>>>>>>> --------
>>>>>>>> 
>>>>>>>> Part of the motivation for this is to enable and encourage OSes to convert to 
>>>>>>>> non-leaping time in the kernels.  Are there any subtle details in this area 
>>>>>>>> that we should be aware of?  Who should we coordinate with?  ...
>>>>>>>> 
>>>>>>> I think that would be far too ambitious to rock that boat.
>>>>>>> 
>>>>>> Divide and conquer.
>>>>>> 
>>>>>> I think POSIX clock_* attempted this by presenting mission requirements-based API’s.
>>>>>> 
>>>>> That was only part of the full interface.
>>>>> 
>>>> Yes.  So?
>>>> 
>>> If one argues based on what the POSIX CLOCK_* does, then one will not
>>> get the full view.
>>> 
>> 
>> The “view” that I am going for is that API’s can be extended to accommodate shortcomings that weren’t understood previously, but later came to light.
>> 
> Sure they can. Also, there can be API's that is already there which can be considered if they are not sufficient and already implemented.


Sorry, if they “are not” sufficient and already implemented?  Or if they “are”?


>> 
>>>>>> The next step is to have consumers of time migrate… perhaps starting with logging subsystems, since unambiguous time is a requirement for meaningful forensics.
>>>>>> 
>>>>> They will do what the requirements tell them. Many have hard
>>>>> requirements for UTC.
>>>>> 
>>>> Many will also migrate to whatever provides them with unassailable (unambiguous) timestamps in the case of litigation.  I’ve worked on timing for natural gas pipelines so that catastrophic failures (i.e. explosions) could be root-caused by examining high-precision telemetry… not the least of which was to protect the operator in the case of civil suits, criminal negligence claims, etc.
>>>> 
>>> There are many ways to show traceability to UTC, while technically
>>> avoiding problems. This, however, often involves knowing the TAI-UTC
>>> difference one way or another such that the relationship to UTC is
>>> maintained. The requirement to be traceable to UTC will still stand in
>>> many, many cases. The way that one technically avoids problems is to
>>> some degree orthogonal to that.
>>> 
>> 
>> I disagree with this premise:  we don’t need to be traceable to UTC.  UTC needs to be derivable from our timescale, so that it’s meaningful to applications (and eventually, humans).
>> 
> For many uses I agree with you. However, there are other uses for which it becomes a requirement. The question then becomes whether we engineer NTPv5 to be able to deliver sufficient properties to fulfill that, or not.


Again, this is a generalization that I have a hard time grounding in anything concrete.

Can you explain when/where/what this might be?  What are these “other uses”?


>>> Then again, many seem to misunderstand what the term "traceable" means.
>>> It does not mean "locked to".
>>> 
>> 
>> Maybe I’m one of the many misunderstanding then.  For me “traceable to” is synonymous with “originating from”.
>> 
> It is a very common misunderstanding, yes. The term "traceable" refers to the unbroken chain of calibrations to the SI units and the traceability record that this produces, as each calibration will measure the deviation and precision achieved. I can use a complete jungle clock that is unsteered and then, through continuous calibrations, be able to convert the readings of my clock into the readings of UTC. This is a mapping with defined parameters from that calibration. Adjustment of the clock during a calibration is about re-setting the clock so that the calibration parameters compensate directly, which is more a practicality in how the conversion is made, but not strictly necessary. In practice, the actual clocks building up EAL/TAI/UTC are free-running, and the laboratory replicas of TA and UTC are steered to be near TAI and UTC, but the actual clocks are not steered. Also, the trouble is that the gravity pull on the various labs needs compensation, which is done in the ALGOS algorithm as post-processed by BIPM, and not by the labs. Nevertheless, all clocks have traceability to TAI/UTC. Derived clocks then show their traceability to the lab replica of UTC. This is just to show how traceability actually is a bit different in metrology than you would first want to believe. The International Vocabulary of Metrology (VIM) is free to download from BIPM.
> 
> One may then wonder why we need to follow the VIM and Metrology use of the terms. Well, the trouble is that if we don't, we end up creating confusion, because they end up doing calibration and dissemination of time using NTP. It is their time-scales of TAI and UTC we use. I try to be strict in order to avoid confusion, and I think there is enough alternative vocabulary to use so we do not need to add that confusion.


Okay, thanks for clearing that up for me.
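
If I restate it to check my understanding: an entirely unsteered clock
plus its calibration record is enough to produce a UTC replica by pure
conversion (a sketch; the parameter names are mine, not metrology
vocabulary):

    /* Map a reading of a free-running local clock onto an estimate of
     * UTC using the most recent calibration, never touching the clock.
     * t0: time of calibration; x0: offset measured at t0 (seconds);
     * y: measured fractional rate error.  Illustrative only. */
    double local_to_utc_estimate(double t_local, double t0,
                                 double x0, double y)
    {
        return t_local - (x0 + y * (t_local - t0));
    }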


> 
>> 
>>>>>>>> [snip]
>> 
>> 
>> 
>>>>>>>> I'd like the answer to be authenticated.  It seems ugly to go through NTS-KE 
>>>>>>>> if the answer is no.
>>>>>>>> 
>>>>>>> Do not assume you have it, prefer the authenticated answer when you can
>>>>>>> get it. I am not sure we should invent yet another authentication scheme.
>>>>>>> 
>>>>>>> Let's not make the autokey-mistake and let some information be available
>>>>>>> only through an authentication scheme that ended up being used by very
>>>>>>> few. You want to have high orthogonality as you do not know what lies ahead.
>>>>>>> 
>>>>>>> So, we want to be able to poll the server for capabilities. Remember that
>>>>>>> this capability list may not look the same on un-authenticated poll as
>>>>>>> for authenticated poll. It may provide authentication methods, hopefully
>>>>>>> one framework fits them all, but we don't know. As you ask again you can
>>>>>>> get more capabilities available under that authentication view. Another
>>>>>>> configuration or implementation may provide the exact same capabilities
>>>>>>> regardless of authentication.
>>>>>>> 
>>>>>>> 
>>>>>>>> Maybe we should distribute the info via DNS where we can 
>>>>>>>> use DNSSEC.
>>>>>>>> 
>>>>>>> Do not assume you have DNS access, the service cannot rely on that. It
>>>>>>> can however be one supplementary service. NTP is used in some crazy
>>>>>>> places. Similarly with DNSSEC, use and enjoy it when there, but do not
>>>>>>> depend on its existence.
>>>>>>> 
>>>>>> Good point.
>>>>>> 
>>>>>> As someone who works in security, I’ve seen a fair share of exploits that arise when protocols make tacit assumptions about the presence and correctness of other capabilities and then these turn out not to be valid under certain critical circumstances.
>>>>>> 
>>>>>> Doing X.509 when you don’t have Internet connectivity for CRL’s or OCSP is a good example.
>>>>>> 
>>>>> I've seen many networks where the normal rules of the Internet apply,
>>>>> yet they are critical, need their time, and are fairly well protected.
>>>>> 
>>>> Not quite following what you’re saying.  Are you alluding to operating a (split-horizon) border NTP server to hosts inside a perimeter, that in turn don’t have Internet access themselves (effectively operating as an ALG)?  Or something else?
>>>> 
>>> There are indeed a lot of scenarios where you operate NTP inside setups
>>> which have no, or very limited, Internet access. Yet it is used because
>>> it is COTS "Internet technology" that fits well with what is used. As we
>>> design things we also need to understand that there is a wide range of
>>> usage scenarios for which our normal expectation of what we can do on
>>> the "Internet" is not necessarily true or needed.
>>> 
>> 
>> That should be a tipoff right there:  We’re working on “Internet standards”.  Not “Intranet standards”.
>> 
>> We don’t need to solve every problem in every scope.  There’s a reason the term “local administrative decision” appears in many, many standards.
>> 
>> Again, protocol, not policy.  Deciding that we need to accommodate rare/one-off usage scenarios in isolated cases is a policy decision.  It’s choosing to insert ourselves into an environment that’s not strictly within our purview.
>> 
>> What people do on their own intranets with the curtains drawn (and the perimeter secured) is very much their decision…
>> 
> If it were that easy. The success of Internet protocols and standards forces things to be used where normal Internet rules do not apply.


I’m not sure anything is being forced.  I think people are lazy.  And when a hammer is the closest tool to you, everything in your reach becomes a nail…

We can’t stop them, but nor do we have to overly concern ourselves with enabling or facilitating this behavior.

People use self-signed certificates even though it’s a travesty (read: insecurity) and certificates should be rooted to valid, well-known root CA’s.  Knowing this to be the case, I’m not going to bother asking myself “what happens to my protocol when self-signed certs are used”.  The answer is: “not my problem”.



> This is very much so with NTP. I think there are things we can relatively easily do to identify some other usage scenarios that can be supported without too much work. Then again, I'm not saying that it needs to be the main focus, but rather that allowing for it and thinking it through may also be a good vehicle to support future changes to NTP itself as it is adapted for that unknown future.


If I were redoing SMTP again, I would spend no time at all trying to accommodate the existence of X.400, Bitnet, Decnet architecture, Microsoft MAPI/Exchange, etc.  Refrain: not my problem.


> Part of this was "I want this being authenticated". Well, the vehicle to transport the TAI-UTC difference in NTP ended up being wrapped into the NTP Autokey specification, which to work needed to create means to provide extension field to allow for Auotkey to work in, and then that also allowed for the TAI-UTC difference transport that needed that extension mechanism too. Also, it was felt that we want that authenticated, and sure the TAI-UTC differnce is kind of important. Now, a couple of years down the line the security of the Autokey mechanism was found faulty and with that the TAI-UTC difference transport got thrown out. So, this is when I said that we should be careful not to connect it too hard, and then about feature capabilities, because then in future we may loose it if we do not treat it a bit more orthogonal. If we 5-15 years down the line need to replace the authentification mechaism, we should not loose the things wrapped and secured by it. We might actually want to have ways of knowing capabilities when we first knock on the NTP server door just to know which authentification mechanisms is there, so we can transition and change foot. The other extensions there is should move over as we migrate. We failed to do that well in NTPv4. And then the realization that some of this may not be applicable for all uses, and specific requirements be for others.


Cautiously, it sounds like we found yet another thing to agree on: that we should decouple as much as we can from unrelated features, rather than unnecessarily munging them together.


So while I don’t mind the notion of carrying the TAI-to-UTC conversion and traceability information as an optional field, I don’t think that argues for (i.e. justifies) making UTC the canonical time format that NTP uses natively.
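
For the record, by "optional field" I mean something like the following
purely hypothetical layout (not from any draft): the native timescale
stays leapless, and the UTC gearing rides alongside for whoever wants it:

    #include <stdint.h>

    /* Purely illustrative extension field, not a proposed wire format. */
    struct utc_info_ef {
        uint16_t type;            /* extension field type, TBD           */
        uint16_t length;          /* total length in octets              */
        int16_t  tai_utc_offset;  /* current TAI-UTC difference, seconds */
        uint8_t  leap_pending;    /* 0 = none, 1 = insert, 2 = delete    */
        uint8_t  reserved;
        uint32_t leap_epoch;      /* native-timescale second it applies  */
    };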



> 
>>> What is a very wise thing
>>> to do for "Internet" may be overly burdensome and even prohibitive in
>>> other scenarios, and vice versa: things that we may think are wise to do
>>> in those scenarios can be wildly bad in the "Internet" usage scenario.
>>> 
>> 
>> Again, we’re not the “Other scenarios Engineering Task Force”.
>> 
> Well, if life were that simple.


First step: don’t add new complexity.  Second step: purge existing unnecessary complexity wherever you find it.



>> 
>>> This is why I try to point out that we might consider keeping the core
>>> behaviors independent of such usage scenarios, but then enable the
>>> ability to drop in or even require components and behaviors as needed
>>> for each such scenario.
>>> 
>> 
>> We might actually be agreeing here, though for different motivations.  I like to keep things simple and keep policy out of protocol, which has the end effect of making the protocol inherently more widely applicable (including beyond the originally mandated scope).
>> 
> The trouble here is that we want to keep things simple, but not so simple that it no longer solves the problems that need to be solved. Whenever one tries to simplify too much, what looks simple and good ends up being an engineering nightmare, as workaround upon workaround is needed to achieve what is needed.


“Require components and behaviors as needed for each such scenario” sounds dangerously like “dictate policy” to me.

Not sure I agree with that generalization.

MIME was added to Mail to add multi-media capability, and it worked.  It’s not pretty, but it’s highly functional.

Telnet options were from the very beginning open-ended.  When local line-editing was needed because remote editing over high-latency lines was just too painful, it “dropped right in”.  As did TN3270 interoperability for people who needed to talk to IBM Mainframes.


>> 
>>>>>>> When DNS is available, what additional values can be put there to aid
>>>>>>> validation and service?
>>>>>>> 
>>>>>> Uh… Available “where”?  The internet is relativistic.  What the server sees and what the client sees can be wildly different, even from moment to moment.
>>>>>> 
>>>>> I didn't say it was on the Internet at all times. I can see things like
>>>>> a point-to-point wire, I can see a heavily fortified network air-gapped
>>>>> from the Internet, I can see networks heavily fortified with very
>>>>> indirect access to the Internet. I can see things with a NAT or just a
>>>>> simple firewall to the Internet. For some of these scenarios DNS will be
>>>>> available, for some not. They all use the same "Internet technology"
>>>>> because that is COTS. So, designing an Internet technology also entails
>>>>> designing for these other scenarios. A successful design also
>>>>> understands and respects the problems of the more isolated scenarios.
>>>>> 
>>>> FWIW, a border forwarding/caching recursive DNS server is a degenerate case of an ALG.
>>>> 
>>> Sure, but not always available, and sometimes the operation of those may
>>> prohibit uses further into the network because, well, politics within an
>>> organization or even between organizations. So, you may not always have
>>> it available as you would like it. Thus, you have another usage scenario.
>>> 
>>> Cheers,
>>> Magnus
>>> 
>> 
>> Um…  So what’s to stop such users from operating a GPS clock and driving their time from that, etc, etc?
>> 
>> If they’re choosing to NOT participate in the Internet at large then I’m not really sure how the Internet Engineering Task Force bears any obligation to accommodate them.
>> 
>> In other words, “not my problem”.
>> 
> Well, it may not be your problem, but it ends up being our problem as we then need to solve those issues.


Sorry, but you keep waving your hands on this issue.  Who says we have to solve these problems?  How does it end up being “our problem”?  If they’re on an isolated intranet, with no possible connection to us, it’s a bit like a tree falling in the forest with no one around, isn’t it?  Whether it makes a sound or not changes nothing from my perspective.



> Internet technology is being used outside of what you would call the Internet; it's just a fact of life. It is assumed that COTS boxes can operate there, and when they cannot it becomes a problem, and because it's "Internet technology" it is assumed to work together regardless, in a multi-vendor setup. You end up with a proliferation of variants, and it's a pure pain to handle.


Again, I’m not ending up with anything here, because what you describe is happening on an Intranet that bears no connection or relevance to me.

And I don’t assume that COTS boxes that require the Internet operate anywhere that the Internet is not connected.  If they can’t be pushed software updates, for instance, then they will eventually have a critical software vulnerability which can’t be patched, and I want nothing to do with them.

But again, none of this is “a pure pain to handle” because it’s happening somewhere that I am insulated from, isolated to, and blissfully ignorant of.


> Easing some of that pain will always be welcome, such as understanding that you cannot always count on, say, DNS to save the day in all cases. At the same time, the best current practice for Internet devices is for sure on topic, and has particular requirements we do not want to get wrong.
> 
> Cheers,
> Magnus


It sounds like you’re sending a mixed message, but maybe I’m just not understanding.  What I’m hearing you say is, “be sparse [or ‘economical’] in your assumptions about where things might be used”, but I’m also hearing you say “make sure that, as an Internet protocol, it operates properly in the total absence of Internet connectivity”… which is a huge burden to assume.

-Philip