Re: [secdir] Secdir review of draft-ietf-ippm-6man-pdm-option-05 : TimeBase

<> Thu, 22 December 2016 14:46 UTC

Date: Thu, 22 Dec 2016 14:43:14 +0000
To: Tero Kivinen <>


I am going to try to clean up the loose ends on this (at least on our end!). Sorry for the delays in processing.

This is what we have done as far as timebase / scaling for PDM.


1.  Changed timebase
2.  Eliminated negative scaling
3.  Scaling still required
4.  Layout changes
5.  Changes for loss of precision 

1.  Changed timebase: We establish the Time Base value as 1 attosecond (asec). This allows for a common form and scaling of the time differential among all IP stacks and hardware implementations.

Note that we are trying to provide the ability to measure time deltas in a DTN type environment where the delays may be great.  So, we wanted to be able to measure not just very small intervals but very large intervals such as days. 

The first issue is the conversion from the native time base in the CPU hardware to some number of attoseconds. This might seem to be an astronomical number, but the conversion is straightforward: a multiplication by an appropriate power of 10 changes the units to asec, and a shift by a number of bits then brings the result down to size.

Note these common relationships:

1 second         = 10**18 asec    = 1000**6 asec 
1 millisecond    = 10**-3 sec     = 10**15 asec    = 1000**5 asec 
1 microsecond    = 10**-6 sec     = 10**12 asec    = 1000**4 asec 
1 nanosecond     = 10**-9 sec     = 10**9 asec     = 1000**3 asec 
1 picosecond     = 10**-12 sec    = 10**6 asec     = 1000**2 asec 
1 femtosecond    = 10**-15 sec    = 10**3 asec     = 1000**1 asec 

The conversion formula works like this:

The time counter in a CPU is a binary whole number, representing a number of milliseconds (msec), microseconds (usec) or even picoseconds (psec). Representing one of these values as attoseconds (asec) means multiplying by the value in the third column of this table. For example, if you have a time value expressed in microseconds, since each microsecond is 10**12 asec, you would multiply your time value by 10**12 to get the number of attoseconds.

The result is a binary value that will need to be shortened by a number of bits so it will fit into the 16-bit PDM DELTA field. The exponent in the last column is useful here; the initial scaling factor is that exponent multiplied by 10. This is the minimum number of low-order bits to be shifted out or discarded.

The resulting value may still be too large to fit into 16 bits, but can be normalized by shifting out more bits (dividing by 2) until the value fits into the 16-bit DELTA field. The number of extra bits shifted out is then added to the scaling factor. The scaling factor, the total number of low-order bits dropped, is then the SCALEDTL value.

For example: say an application has these start and finish timer values (hexadecimal values, in microseconds): 

Finish:       27C849234 usec    (02:57:58.997556) 
-Start:       27C83F696 usec    (02:57:58.957718) 
==========    =========        =============== 
Difference    9B9E usec        00.039838 sec or        39838 usec 

To convert this differential value to binary attoseconds, multiply the number of microseconds by 10**12. Divide by 1024**4, or simply discard 40 bits from the right. The result is 36232, or 8D88 in hex, with a scaling factor or SCALEDTL value of 40. 

For another example, presume the time differential is larger, say 32.311072 seconds, which is 32311072 usec. Each microsecond is 10**12 asec, so multiplying by 10**12 gives the hexadecimal value 1C067FCCAE8120000. Using the initial scaling factor of 40, drop the last 10 hex characters (40 bits) from that string, giving 1C067FC. This will not fit into a DELTA field, as it is 25 bits long. Shifting the value right another 9 bits results in a DELTA value of E033, with a scaling factor now of 49.
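The procedure behind both worked examples can be sketched as a small routine (a hypothetical helper, not part of the draft): convert microseconds to attoseconds, discard the initial 40 bits, then shift until the value fits in 16 bits.

```python
def usec_to_delta(usec):
    """Convert a time differential in microseconds to a 16-bit
    PDM DELTA value plus its SCALEDTL scaling factor."""
    value = usec * 10**12   # 1 usec = 10**12 asec
    scale = 40              # initial scaling factor for usec (exponent 4 * 10)
    value >>= scale         # discard the low-order 40 bits
    while value > 0xFFFF:   # normalize until the value fits in 16 bits
        value >>= 1
        scale += 1
    return value, scale

# First example:  9B9E hex usec -> DELTA 8D88, scale 40
print(usec_to_delta(0x9B9E))
# Second example: 32311072 usec -> DELTA E033, scale 49
print(usec_to_delta(32311072))
```

Either call reproduces the DELTA and SCALEDTL values shown above.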

2.  Eliminate negative scaling: simplifies comparison between values 

This form allows for simplified comparison between values. The time unit is constant, reducing the computation required to manipulate these values.

3.  Scaling is still required: Why do we still need scaling?

Scaling is still required because the attosecond values can be very large, and therefore will not fit into the DELTA fields.  So, low order truncation still needs to occur.

4.  Layout 

Current layout 

0                  1                  2                  3 
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 
|  Option Type  | Option Length |TB |ScaleDTLR    |   ScaleDTLS | 
|  PSN This Packet              |  PSN Last Received            | 
|  Delta Time Last Received     |  Delta Time Last Sent         | 

Scale Delta Time Last Received (SCALEDTLR) 

7-bit signed integer.  This is the scaling value for the Delta Time Last Received (DELTATLR) field.  The possible values are from -64 to
+63.  See Section 4 for further discussion on Timing Considerations
and formatting of the scaling values. 

Scale Delta Time Last Sent (SCALEDTLS) 

7-bit signed integer.  This is the scaling value for the Delta Time 
Last Sent (DELTATLS) field.  The possible values are from -64 to 
+63.

New layout (Timebase eliminated)

0                  1                  2                  3 
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 
|  Option Type  | Option Length |  ScaleDTLR    |  ScaleDTLS    | 
|  PSN This Packet              |  PSN Last Received            | 
|  Delta Time Last Received     |  Delta Time Last Sent         | 

Scale Delta Time Last Received (SCALEDTLR) 

8-bit unsigned integer.  This is the scaling value for the Delta Time 
Last Received (DELTATLR) field.  The possible values are from 0 to 
255.  See Section 4 for further discussion on Timing Considerations 
and formatting of the scaling values. 

Scale Delta Time Last Sent (SCALEDTLS) 

8-bit unsigned integer.  This is the scaling value for the Delta Time 
Last Sent (DELTATLS) field.  The possible values are from 0 to 
255.

5. Loss of precision 

When using and scaling binary numbers, the upper limit for the loss of 
precision is 2**n-1, where n is the number of bits that have been dropped from 
the number. In this instance, n is the scaling factor, so while the scaling 
factors just discussed are very large, you must remember that we are talking 
about an infinitesimal amount of time. Since the "-1" in that formula refers 
to a single attosecond, think of the formula as just 2**n when discussing values
larger than, say, picoseconds.

To demonstrate the amount of precision lost, consider the second example above.
The resulting DELTA value was E033, with a scaling factor of 49. Multiplying the
DELTA value by 2**49 and dividing by 10**12 gives me back the number of microseconds
this value represents, which is 32310512. Since the original value was 32311072 usec,
you can see that the amount of precision lost, on a 32+ second interval, was just 560 usec.
For this calculation, the scaling factor of 49 means that one could expect a maximum loss
of precision of 2**49 asec, which is a fraction smaller than 562.95 usec.
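The arithmetic above can be replayed directly (a sketch; the variable names are mine):

```python
# Recover the microsecond value from DELTA = E033 at scale 49,
# and compare it against the original differential.
delta, scale = 0xE033, 49
recovered_usec = (delta << scale) // 10**12   # back to microseconds
original_usec = 32311072
loss_usec = original_usec - recovered_usec    # actual precision lost
bound_usec = (2**scale - 1) / 10**12          # worst case: just under 562.95 usec
print(recovered_usec, loss_usec)
```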

As a further demonstration, consider this table of very long 
time intervals, starting with some times on a real network: 

                                      PDM representation
                                      DELTA    Scaling
Time differential                     value    factor
=================                     =====    =======
1 sec        = 10**18 asec            DE0B     44
1 min        = 60*10**18 asec         D02A     50
1 solar day  = 86400 sec (3600*24)    925E     61
1 civil year = 31557600 sec           D0D4     69

With this in mind, if the response from a far-ranging spacecraft took a year 
to return to Earth, the loss of precision would be 2**69 asec, which is 
just less than 9min and 50.3 seconds. For a time differential of 1 second, 
the maximum loss of precision is less than 17.6 microseconds. A time 
differential of one picosecond, 10**6 asec, would be represented as the DELTA 
value F424 with a scaling factor of 4, so the maximum precision loss would be 
15 asec. 
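The table entries can be reproduced with a short routine (a sketch, not from the draft, under the assumption that the scaling factor is simply the smallest shift that fits the attosecond count into 16 bits):

```python
def asec_to_delta(asec):
    """Scale an attosecond count into a 16-bit DELTA plus scaling factor."""
    scale = 0
    while asec > 0xFFFF:   # shift out low-order bits until it fits in 16 bits
        asec >>= 1
        scale += 1
    return asec, scale

for label, seconds in [("1 sec", 1), ("1 min", 60),
                       ("1 solar day", 86400), ("1 civil year", 31557600)]:
    delta, scale = asec_to_delta(seconds * 10**18)
    print(f"{label:12}  DELTA {delta:04X}  scale {scale}")
```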


Nalini Elkins
Inside Products, Inc.
(831) 659-8360

On Tuesday, October 18, 2016 6:07 AM, Tero Kivinen <> wrote:

writes:
> >The time base is so that one does not have to be committed to picoseconds / 
> >milliseconds, etc.    Even in your example, I believe you used "unit" or time 
> >base.  Our thinking was that we wanted future proof so as to be able to 
> >handle very small values and very large (as may be needed for DTN, for 
> >example).   We can see if we can express years in picoseconds and see 
> >what happens.   Then, the unit would always be picoseconds 
> The issue is with the hardware. When we were first researching the
> "proper" or "best" time unit in which the PDM time differentials
> should be measured, we found that different CPU hardware measure
> time very differently. Some CPUs are still measuring time in
> milliseconds and using multiple clocks to do it.

Yep, but the problem is that if implementations still need to be
able to cope with other devices using different time bases, this does
not help.

> Our plan was to have a CPU specify the time differential in its
> native time units, to reduce its processing time when communicating
> with another device that is at the same level. One could say that
> the most logical solution is to use the time signature of the
> slowest device, so that all the time-adjustment calculations are
> performed on the device most capable of handling them quickly, and
> similarly, requiring the slowest device to adjust to the timing used
> by the fastest device would be forcing those calculations onto the
> device least capable of handling them quickly. Further, why make two
> devices that use the same clock unit to change to a different time
> scale on both ends of the conversation? If both use microseconds,
> why not let them specify their time differential in microseconds?

The problem is that some implementations measure time in 0.01-second
(10 ms) increments, some have timers which tick 60 times per
second, and others have a free-running counter running at the full
clock speed (or at some fraction of the clock speed, or at some other
very fast rate, but not necessarily ms, µs, ns or ps).

So even if you have 4 time bases, there are lots of implementations
which need to convert their clock to something that is suitable for
the wireformat.

If we do not have a time base, i.e. everything uses some common
timebase, then you always need to do that conversion, but on the other
hand it is simple, as there is no selection of whether you should be
using ms, µs, ns or ps as your time base.

Having the 2^scale factor on the numbers will take care of being able
to represent any time value there; having two different ways of doing
scaling (both the 2^scale factor and the time base) will just
complicate things.
> That's the reasoning that we had about the timebase.
> Of course, we are open to discussion. 

I think it would be simpler to just have one fixed timebase, and it
might even be better to pick the timebase so small that we do not need
negative exponents for scale, like attoseconds.

Attoseconds (10^-18 s) are so short that light can travel only 0.3 nm
in one attosecond, so that should be enough for PDM (until we get a new
way of making transistors and start measuring latencies between them).

If your clock is running in milliseconds, you need to multiply the
time in ms by 0.88818 ((1000/1024)^5) to get the scaled attosecond
value with a scale of 50. For microseconds the multiplier is 0.90949
with a scale of 40; for nanoseconds, 0.93132 with a scale of 30; for
picoseconds, 0.95367 with a scale of 20; and for femtoseconds, 0.97656
with a scale of 10.
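Those multipliers fall straight out of the definitions: a unit of 1000**k asec at a fixed scale of 10*k gives a multiplier of (1000/1024)**k. A quick recomputation (a sketch; the unit labels are mine):

```python
# Multiplier for converting a native time unit directly to a scaled
# DELTA value: each unit is 1000**k asec, and the fixed scale 10*k
# divides by 1024**k, leaving a factor of (1000/1024)**k.
units = [("ms", 5), ("us", 4), ("ns", 3), ("ps", 2), ("fs", 1)]
for name, k in units:
    multiplier = (1000 / 1024) ** k
    print(f"{name}: multiply by {multiplier:.5f}, scale {10 * k}")
```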