Re: [Ntp] NTP over PTP
Heiko Gerstung <heiko.gerstung@meinberg.de> Tue, 29 June 2021 11:17 UTC
Date: Tue, 29 Jun 2021 13:17:45 +0200
Message-ID: <125F908E-F80D-4873-A164-A460D96316E5@meinberg.de>
Thread-Topic: [Ntp] NTP over PTP
References: <YNRtXhduDjU4/0T9@localhost> <36AAC858-BFED-40CE-A7F7-8C49C7E6782C@meinberg.de> <YNnSj8eXSyJ89Hwv@localhost> <D32FAF20-F529-496C-B673-354C0D60A5AF@meinberg.de> <YNrDGy2M2hpLz9zc@localhost> <C5D99A22-84B8-4D27-BE74-D8267FB1DCB0@meinberg.de> <YNrqWjHPtC7ToAL8@localhost>
In-Reply-To: <YNrqWjHPtC7ToAL8@localhost>
From: Heiko Gerstung <heiko.gerstung@meinberg.de>
To: Miroslav Lichvar <mlichvar@redhat.com>
Cc: "ntp@ietf.org" <ntp@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/ntp/kxtgysQoEGdHbPyJIVA558dTJKE>
Subject: Re: [Ntp] NTP over PTP

> On Tue, Jun 29, 2021 at 09:15:22AM +0200, Heiko Gerstung wrote:
>> > On Mon, Jun 28, 2021 at 05:02:43PM +0200, Heiko Gerstung wrote:
>> >> > The unicast mode seems to be intended for networks with partial
>> >> > on-path hardware support, where requirements on accuracy are less
>> >> > strict, and I think this might already be better supported by NTP.
>> >> They might be less strict but that does not mean they are worse/equal
>> >> compared to NTP.
>> >
>> > How so?
>> Because of the removal of jitter/delay variation in the network stack and OS
>> layer (kernel) of the server and the client.
>
> That jitter/delay has no impact on PTP or NTP measurements when using
> hardware timestamping. Even with software timestamps they can be
> eliminated if the timestamp can be captured in the network driver
> right before passing the packet to hardware, as most drivers on Linux
> can do.

Even a kernel-generated SO_TIMESTAMP is subject to jitter and latency on a
multitasking OS. For egress timestamps you might be able to control that to a
certain extent, but for ingress timestamps the network driver will read out
the RX queue of the chip and timestamp packets/frames whenever it gets around
to it. But I agree that once you use hardware timestamping for NTP, you can
get results on a host that are as good as with PTP.

>> > You said unicast transparent clocks don't really exist. That would be
>> > the only advantage of unicast PTP. Boundary clocks could fully support
>> > hardware-timestamped NTP if someone was actually interested in
>> > implementing that.
>> I was wrong, or better: my knowledge was outdated. Arista for example
>> supports unicast in their switches now.
>
> Ok, I could extend the NTP-over-PTP draft to take advantage of that
> if the support is expected to spread.

You definitely should do that.

>> > Without full on-path support NTP should generally perform better than
>> > PTP as it doesn't assume network has a constant delay.
>> Why do you think PTP assumes a constant network delay? PTP is measuring the
>> delay constantly in both directions and calculates the round trip.
>
> Yes, it does, but it is separate from the offset calculation.

Which is not the same as claiming that PTP assumes the "network has a
constant delay".

> The calculation is described in section 11.2 of 1588-2019. It uses
> <meanDelay> and there is only the TX and RX timestamp of the sync
> message like in the NTP broadcast mode. If the distribution of the
> actual delay is not symmetric, as is common without full on-path
> support, the average error of the measurements will not even get close
> to zero. PTP relies on full hardware support. Without that, it
> generally cannot perform as well as NTP.

Wrong: even without full on-path support, unicast PTP uses delay
requests/responses to take the client-to-server delay into consideration as
well. See IEEE 1588-2019, section 11.3, for a description of how this works.
ITU-T G.8265.1 (the frequency-only Telecom Profile) does not use the delay
req/resp exchange because it does not need to compensate for the delay (it is
only used to syntonize the client, which is typically a 2G/3G/4G FDD base
station). All other unicast PTP use cases that I am aware of use a round-trip
delay calculation to compensate for the delay, including the newer telecom
profiles that were created to allow phase synchronization for TDD.
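
To make the round-trip compensation concrete, here is a minimal sketch (my
own illustration with made-up timestamp values, ignoring correction fields)
of how a unicast client combines the Sync and Delay_Req/Delay_Resp
timestamps into a mean path delay and an offset:

    /* t1 = Sync TX (server), t2 = Sync RX (client),
     * t3 = Delay_Req TX (client), t4 = Delay_Req RX (server),
     * all in nanoseconds. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t t1 = 1000, t2 = 1600, t3 = 2000, t4 = 2500; /* example values */

        /* round trip minus the client's residence time, halved */
        int64_t mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2;
        /* client clock offset relative to the server */
        int64_t offset = (t2 - t1) - mean_path_delay;

        printf("meanPathDelay = %lld ns, offset = %lld ns\n",
               (long long)mean_path_delay, (long long)offset);
        return 0;
    }

If the two directions do not actually have equal delays, half of the
difference ends up in the offset, which is the asymmetry error we are both
talking about.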
> Another issue with using PTP in network without PTP support is RX
> timestamping fixed to the beginning of the message. If the server is
> on a 1Gb/s link and the PTP client is on a faster link, there will be
> an asymmetry of hundreds of nanoseconds due to the asymmetric delay
> in forwarding of messages between different link speeds.

Yes, there are implementations which take that into account by applying
static correction values to compensate for link speed asymmetry. I believe
this also affects NTP, but in most cases hundreds of nanoseconds are not a
problem for applications relying on NTP synchronization.

>> > In the context of the drafts we are discussing here, I think it might
>> > be easier for an existing PTP implementation to add support for
>> > NTP+NTS than add NTS4UPTP.
>>
>> We are maintaining three different PTP implementations here at Meinberg
>> and I can tell you that this is not true.
>
> I don't know what implementations are those, but at least for the
> well-known open-source implementations I think it would be.

I am not so sure about this, but ultimately the maintainers of those
implementations should speak for themselves. To me it looks like it would be
easier to add NTS-over-PTP support to existing NTS4NTP implementations (you
mentioned 7 LoC for chrony earlier) than to add a full NTS-for-NTP
implementation to an existing PTP stack.

>> I did not say that hardware is not timestamping PTP event messages that
>> carry a TLV. I just pointed out that some of the hardware timestampers
>> might look at the length of a packet and not timestamp anything that is
>> longer than X bytes. A sync message is 44 bytes plus maybe 26 bytes for
>> the AUTHENTICATION_TLV. If you remove the requirement for the
>> AUTHENTICATION_TLV because you want to run NTS over PTP, your PTP
>> "header" is 44 bytes + 48 bytes for the NTP header + anything that NTS
>> adds (Unique Identifier EF, NTS Authentication and Encrypted Extension
>> Fields EF, at least one NTS Cookie EF). It is a bit complicated (for me)
>> to calculate the total maximum size of such an NTS-over-PTP packet, but I
>> am guessing that you will end up with more than 128 bytes, which might be
>> a limit packet matching algorithms have for quickly identifying if this
>> can be a PTP event message.
>
> Ok, so it's not about the message having a fixed length, but a maximum
> length. That looks like a very odd quirk. Do you have a specific
> example of such a hardware?

Over the years I have come to see a lot of odd things in PTP implementations
(latest highlight: a software-only implementation of a transparent clock in
an industrial switch), therefore I would not be surprised if there are
implementations out there that will break or simply not work when faced with
an NTS-over-PTP packet. As for the one I have in mind: it is not a
product/implementation I am responsible for, and we have an NDA in place with
that vendor, so I cannot name names here.

There are more challenges I see for NTS-over-PTP. You need to synchronize
the clock of the hardware timestamper itself, i.e. get the time into the
silicon that creates the timestamps. PTP timestamps are TAI (not UTC), which
in itself is not a problem as long as you know the TAI-UTC offset. On a
server (PTP grandmaster) this is typically done with some form of hardware
sync for the timestamping engine, e.g. setting the ToD to the upcoming TAI
second and then using the PPS to zero out the fractional part. In reality the
solution is typically more sophisticated, as you do not want to see micro
timesteps at the start of every second.

On a client you have to synchronize your system time with the time of the
hardware timestamper (e.g. the NIC), which is in turn synchronized by the
hardware itself to the PTP server. ptp4l uses phc2sys for this, but I am not
sure about the accuracy with which you can read out the PHC and correct the
OS clock with it. There is a delay when accessing a NIC over the PCI(e) bus,
but this affects PTP in the same way, so for the client you should be on par
with PTP in this regard.
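
To give an idea of what reading out the PHC looks like on a Linux client,
here is a rough sketch (my own illustration, not from any draft; the device
path and sample count are assumptions) along the lines of what phc2sys-style
tools do with the PTP_SYS_OFFSET ioctl. The two system clock readings that
bracket each PHC reading also show the PCI(e) read delay I mentioned:

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ptp_clock.h>

    int main(void)
    {
        int fd = open("/dev/ptp0", O_RDWR);   /* assumed PHC device node */
        if (fd < 0) { perror("open"); return 1; }

        struct ptp_sys_offset pso = { .n_samples = 5 };
        if (ioctl(fd, PTP_SYS_OFFSET, &pso) < 0) {
            perror("PTP_SYS_OFFSET");
            return 1;
        }

        /* ts[] holds sys, phc, sys, phc, ..., sys: each PHC reading is
         * bracketed by two system clock readings; use their midpoint. */
        for (unsigned int i = 0; i < pso.n_samples; i++) {
            struct ptp_clock_time *s1 = &pso.ts[2 * i];
            struct ptp_clock_time *p  = &pso.ts[2 * i + 1];
            struct ptp_clock_time *s2 = &pso.ts[2 * i + 2];
            int64_t t1 = s1->sec * 1000000000LL + s1->nsec;
            int64_t tp = p->sec  * 1000000000LL + p->nsec;
            int64_t t2 = s2->sec * 1000000000LL + s2->nsec;
            printf("phc-sys offset %lld ns, read delay %lld ns\n",
                   (long long)(tp - (t1 + t2) / 2), (long long)(t2 - t1));
        }
        close(fd);
        return 0;
    }

The reported read delay gives a feeling for the uncertainty that the PCI(e)
access adds when transferring the PHC time to the OS clock.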
But for a server you have to find a NIC that supports feeding the PPS of your
GNSS receiver (for example) into it; not impossible, but also not an easy
task for someone who is responsible for maintaining highly accurate
synchronization for an entire corporate network.

The next challenge is on the server, which for unicast PTP requires a certain
timestamp queue size to support a usable number of clients. A lot of NICs
that claim IEEE 1588 hardware support have small to tiny timestamp queue
sizes; one common example is 4 timestamps. That means you have to be able to
read out the hardware timestamps very quickly, and you will not really stand
a chance on high-speed links with hundreds or thousands of incoming
NTS-over-PTP requests per second. Those hardware timestamping engines have
been designed to be used for PTP clients only, and even then not for the high
packet rates that PTP supports (and sometimes requires to improve accuracy
over networks with partial on-path support). They cannot be used for servers
expected to handle a high packet rate.

Finally, I am not sure the IEEE 1588 working group would be happy about an
IETF standard "hijacking" one of its protocols, but most probably it cannot
do anything about that. Personally I think it is a hack and should not be
standardized, but that's just me. I would rather like to see some standard
way of flagging an Ethernet frame that I send out to trigger a hardware
timestamping engine to timestamp that frame. Such a universal approach could
be used by NTP, PTP and other protocols and applications as well (not only
time sync protocols), for example to measure network propagation delays. It
is incredibly hard to get support for this into the silicon of companies like
Intel or Broadcom, but if it were universal enough, the chances would be
higher that it eventually makes its way into products.

Again, most of the hardware timestampers would probably work, and if you want
to pursue this approach and move it forward, please do so. It just does not
address the problem of securing PTP, and therefore I believe it is not an
alternative to any of the submitted NTS-for-PTP documents.

Heiko

--
Heiko Gerstung
Managing Director

MEINBERG® Funkuhren GmbH & Co. KG
Lange Wand 9
D-31812 Bad Pyrmont, Germany

Phone: +49 (0)5281 9309-404
Fax: +49 (0)5281 9309-9404

Amtsgericht Hannover 17HRA 100322
Geschäftsführer/Management: Günter Meinberg, Werner Meinberg, Andre Hartmann, Heiko Gerstung

Email: heiko.gerstung@meinberg.de
Web: Deutsch https://www.meinberg.de / English https://www.meinbergglobal.com

Do not miss our Time Synchronization Blog: https://blog.meinbergglobal.com
Connect via LinkedIn: https://www.linkedin.com/in/heikogerstung