Re: [Ntp] NTP over PTP

Heiko Gerstung <> Fri, 02 July 2021 10:57 UTC

From: Heiko Gerstung <>
To: Miroslav Lichvar <>
Cc: "" <>

I am trimming the quoted text a bit to improve readability...

> On Wed, Jun 30, 2021 at 03:42:53PM +0200, Heiko Gerstung wrote:
>>> Well, yes, but the delay measurement is separate from the offset
>>> [...]
>> This depends, as pointed out earlier, on the network itself. If you have
>> highly dynamic network paths where traffic patterns change quickly and
>> therefore result in wildly changing queueing and forwarding delays, you will
>> see slightly different delay distribution in NTP and PTP logs. In less dynamic
>> environments, you will probably not notice something like that at all.
> I'm referring to the distribution of the offset as measured by PTP
> and NTP. Maybe this plot will make it more clear
Thank you for this. The plot has no scale, but I think I understand where you are coming from.

> It shows offset over time as calculated by NTP and PTP in identical
> conditions. Hopefully you can see that the offset on the top has a
> symmetric distribution and one on the bottom does not. With the PTP
> offset calculation (same as in the NTP broadcast mode), the
> distribution of the network delay in the server->client direction
> transfers to the distribution of the offset. In the NTP client/server
> mode the offset is calculated from the sum of delays in both
> directions. It's not difficult to guess what is easier to work with
> when controlling a clock.

Agreed, that is why I referred to PTP's delay measurement mechanism as inferior to NTP's. In practice, 
the PTP approach is good enough(tm) to deliver performance similar to NTP's.
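For readers following along, the difference between the two offset calculations can be sketched in a few lines. This is an illustrative simulation, not code from either implementation; all timestamps and delay values are made up:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP client/server exchange:
    t1 = client TX, t2 = server RX, t3 = server TX, t4 = client RX."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

def ptp_offset(t1, t2, mean_path_delay):
    """PTP sync (or NTP broadcast): t1 = master TX, t2 = slave RX;
    the path delay was measured in a *separate*, earlier exchange."""
    return t2 - t1 - mean_path_delay

# True offset 0, base one-way delay 100 us in each direction.
base = 100e-6
# A queueing spike adds 400 us on the server->client leg only.
spike = 400e-6

# NTP: the spike lands on the t3->t4 leg.
t1 = 0.0
t2 = t1 + base
t3 = t2 + 50e-6              # server processing time
t4 = t3 + base + spike
off, dly = ntp_offset_delay(t1, t2, t3, t4)
print(off, dly)              # offset error ~ spike/2, delay visibly inflated

# PTP: the whole spike transfers into the offset, because the
# mean path delay was measured before the spike occurred.
print(ptp_offset(0.0, base + spike, mean_path_delay=base))
```

Note how NTP only absorbs half of the one-way spike *and* simultaneously sees the inflated round-trip delay, so the sample can be down-weighted or filtered; the PTP-style calculation absorbs the full spike with no accompanying signal that anything went wrong.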

>> And in networks with partial or full on-path support, you will see that PTP
>> is performing great with its inferior delay measurement approach.
> With full on-path support, yes, it performs great. That's what it was
> designed for. Without full support on a mostly idle network or in a
> network where PTP messages are prioritized, it can still work ok. But
> as soon as PTP messages start waiting in longer queues, it falls apart
> very quickly. NTP can generally perform well in worse conditions as it
> measures the offset and delay at the same time, while PTP assumes the
> delay didn't change when it measures the offset.
Agreed as well. 

> Of course, there are differences between implementations. A more
> advanced PTP implementation can certainly perform better than a less
> advanced NTP implementation. PTP implementations can also ignore the
> PTP specification and use the NTP approach.
Yes, with timing it is very easy to mess things up considerably, because the whole chain 
matters.

>>>> Yes, there are implementations which take that into account by applying
>>>> [...]
>>> NTP is much less impacted as it timestamps the end of the reception.
>>> A software timestamp is captured after the packet is received.

>> That is a very confusing argument.
> I'm talking here specifically about the error that is caused by server
> and client being on different link speeds, not other sources of error
> due to processing delays, etc. NTP using trailing timestamps (which
> requires transposition of hardware timestamps) is another reason why
> NTP should be preferred over PTP in networks that don't have a full
> on-path support.
OK. Thank you for the additional explanation.
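To illustrate the transposition Miroslav mentions: NICs typically take their hardware timestamp at the start of the frame, while NTP's convention is the end of reception, so the timestamp has to be shifted by the frame's wire time. A simplified sketch (it assumes a start-of-frame timestamp taken after preamble/SFD and ignores the interframe gap and PHY latency):

```python
def transpose_to_trailing(sof_timestamp_ns, frame_len_octets, link_speed_bps):
    """Shift a start-of-frame hardware timestamp to the end-of-reception
    instant NTP expects. Preamble + SFD add 8 octets on the wire."""
    wire_bits = (frame_len_octets + 8) * 8
    return sof_timestamp_ns + wire_bits * 1e9 / link_speed_bps

# A 90-octet frame takes ~7840 ns on 100 Mb/s but only ~784 ns on 1 Gb/s:
print(transpose_to_trailing(0, 90, 100e6))
print(transpose_to_trailing(0, 90, 1e9))
```

The two printed values also show why mismatched link speeds matter: without this transposition, a client on 100 Mb/s talking to a server on 1 Gb/s picks up a systematic asymmetry of several microseconds.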

>> When we connect one of our grandmaster clocks (PTP server) to one of our PTP
>> client devices over a direct crossover cable or fiber link, we can measure an
>> offset of less than 20ns between the PPS generated by the GNSS synchronized
>> master clock and the client.
> I see the same with NTP here.
With hardware timestamping, yes. If all conditions are the same (packet rates, 
network conditions, and hardware timestamping), both NTP and PTP can achieve similar performance.

>>> Isn't that an issue for both NTP and PTP unicast using high rate sync
>>> and delay requests?
>> Yes, but unicast PTP servers with hardware time stamping are typically
>> commercial products with dedicated network hardware to support the large number
>> of timestamps that have to be stored and processed.

> Would this hardware not work with NTS-over-PTP, but work with
Some of the hardware would work, for sure. However, the commercial vendor would still 
have to implement software support for it. 

>> The i210 has a time stamping queue length of 4, i.e. it stores a maximum of 4
>> timestamps in a ring buffer. A unicast PTP server with 10 clients running at
>> 128 sync and delay pkt/s will generate 1,280 time stamps per second. Even on
>> some serious hardware there will be a significant number of timestamps that are
>> lost due to queue overruns.
> I think you are confusing the I210 with something else. It can
> timestamp all received packets, at any rate.
I was talking about TX timestamps, sorry if that did not come 
across (the server sends the sync messages).

> Its performance in a server is limited by TX timestamping. It can
> timestamp only one packet at a time. But the maximum rate is
> definitely higher than 1280 per second. In my tests it's about 35000
> TX timestamps per second. That's with the stock Linux driver which
> seems to rely on an interrupt. I think it could be modified to poll
> the timestamp register for a much higher rate of timestamps. In any
> case, I'd say that is pretty good for a $50 NIC.
RX is no problem, obviously. A unicast server needs to send a lot of packets 
(and then has to read out the timestamps in 2-step mode). We ran some tests 
with this chip and ran into problems when trying to use it in server mode. 

However, I wholeheartedly agree that this NIC is a great value for the money. We have two of those
on every CPU board and love them for their reliability and versatility.  We do not use them for PTP 
but that might change at some point. 

If you managed to get 35k TX timestamps per second out of the chip, it is good for a maximum of ~270 clients, 
which is probably enough for quite a number of setups. As I mentioned earlier, our hardware 
can do 250k TX timestamps per second and is good for ~2,000 clients. Of course, not many applications out there 
require that firepower, and in those cases the i210 may do the job perfectly.
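The back-of-the-envelope math behind these client counts, for anyone checking (illustrative only; it assumes one TX-timestamped sync message per client per interval and nothing else competing for the timestamp engine):

```python
def max_clients(tx_timestamps_per_s, sync_rate_per_client):
    """Each unicast client at rate R needs R TX-timestamped sync
    messages per second from the server, so the timestamp rate
    caps the client count."""
    return tx_timestamps_per_s // sync_rate_per_client

print(max_clients(35_000, 128))    # i210 measured rate -> ~270 clients
print(max_clients(250_000, 128))   # dedicated hardware -> ~2000 clients
```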

>>> I think the best approach is for the hardware to timestamp all packets
>>> as many NICs already do. The problem is with existing hardware that
>>> [...]
>> I believe there should be a standard that adds a hardware timestamp to the
>> end of every Ethernet frame. It requires a NIC vendor to implement a hardware
>> clock and a time stamping engine into their silicon. The bandwidth between the
>> NIC and the OS layer (driver) has to accommodate the extra data (but you could
>> reduce the MTU of course).
> That's how it works with the I210 and other NICs, except the timestamp
> is before the frame, not after.
Yep, thanks for pointing that out. I am not sure whether the i210 can hardware-timestamp every packet, but even 
if it can, that is only feasible on the RX path: on TX you need some sort of ring buffer to store the timestamps 
until a userspace application reads them out. On the TX path it might therefore be better not to timestamp 
everything and instead be able to flag a frame/IP address for timestamping. 

The i210 also seems to support one-step operation, which might make it possible to send out even more packets, as 
you do not need to read the timestamp back from userspace at all. It really is a great piece of hardware 
(the datasheet runs to 870 pages...). 

If you agree, I propose we close this thread here. We are definitely drifting further and further off topic, 
and I would like to refocus on the draft documents that you, I, and others submitted. Happy to carry on off-list, 
of course.

> --
> Miroslav Lichvar

Heiko Gerstung 
Managing Director 
MEINBERG® Funkuhren GmbH & Co. KG 
Lange Wand 9 
D-31812 Bad Pyrmont, Germany 
Phone: +49 (0)5281 9309-404 
Fax: +49 (0)5281 9309-9404 
Amtsgericht Hannover 17HRA 100322 
Geschäftsführer/Management: Günter Meinberg, Werner Meinberg, Andre Hartmann, Heiko Gerstung 
Do not miss our Time Synchronization Blog:
Connect via LinkedIn: