I own a pocket watch. It is beautiful, but I don’t use it very often.
I know that I own a couple of watches. One of them is a battery powered solar recharging thing.
My standard “watch” today is my cell phone.
When I was in high school, I was very interested in accurate time keeping. As was my father.
This meant that we would call the “time” phone number to set our watches, at least once a week.
My grandfather had a “railroad watch”. This was a wristwatch that was approved by the Soo Line railroad for use as a timekeeping device. Amazingly, until that model of watch was approved, the railroad required the use of pocket watches.
This was because a level of accuracy was required that only pocket watches or well regulated wristwatches could maintain.
The big thing in my youth was “quartz” watches. Instead of using a tuning fork or a mechanical balance and regulator, they used the vibrations of a quartz crystal to keep track of the time.
What this meant was that you had devices that were now able to maintain the same wrong time over an extended period of time.
The user had to set them correctly.
As an example, for years, maybe even to today, my wife would set her car clock (and many other clocks) 10 minutes fast, “so she would be on time for appointments.”
I set my car to my phone’s reported time.
One of the fun things that I did as a kid was to call up the Naval Observatory to get the current time. This was reported from their atomic clock, one of the most accurate timekeeping devices in the world.
Accurate Time
Many protocols require accurate time. It is wonderful that you have a timepiece that is accurate to within 1 second per year, but if it is reporting the wrong time, it is not particularly useful to the protocol.
What we want is to know what time it is right now, and then to set our time to that.
We get the current time from a known, accurate time source. Today, that is often GPS satellites.
If you have ever wondered how GPS works, it works because your device knows where each satellite is at any instant of time. Each satellite transmits its ID and the current time. Over and over again.
That is all they do.
And here is the magic: if your device knows what time it is, and it knows where the satellite should be at this time, it can calculate the distance by comparing the difference in time.
If you are directly under a GPS satellite, it takes about 67 ms for the signal to reach your device. From this, we can use the speed of light to figure out the distance traveled. Then some simple math, using the signals from several satellites at once, and we know the location of your device.
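The distance arithmetic above can be sketched in a few lines of Python. This is only an illustration of the delay-to-distance step, not a real GPS fix (which needs several satellites), and the function name is made up for the example:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second, in a vacuum

def distance_from_delay(sent_s: float, received_s: float) -> float:
    """Distance in metres implied by a one-way signal delay."""
    return (received_s - sent_s) * SPEED_OF_LIGHT_M_PER_S

# A satellite directly overhead: roughly 67 ms of travel time.
d = distance_from_delay(0.0, 0.067)
print(f"{d / 1000:.0f} km")  # about 20,000 km, roughly GPS orbital altitude
```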
We can also get accurate time by listening for the atomic clocks via radio. If you know where you are, and you know where the clock is, you can calculate the delay between the atomic clock and your device, then match your device to the atomic clock.
Today, when people want to use that type of process, they use a GPS device and get the time ticks from the device.
How long did it take?
This is where it starts to get complicated.
The standard for communications with a GPS device is 4800 or 9600 baud across a 3-wire serial connection. The protocol, the text being transmitted, specifies the time at which the last character is transmitted.
That data is being received, and your device takes a certain amount of time to process the record it just received. All of that is latency.
If you do not know the latency in your device, you do not know what time it is. For grins, just think of that serial link being 300,000,000 meters long. That alone would add a 1-second latency.
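That thought experiment checks out with a quick calculation. This sketch uses the speed of light in a vacuum; a signal in a real wire propagates somewhat slower, so the true latency would be a bit longer:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second, in a vacuum
link_length_m = 300_000_000           # the imagined serial link

latency_s = link_length_m / SPEED_OF_LIGHT_M_PER_S
print(f"{latency_s:.4f} s")  # just over one second
```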
There are ways of calculating the latency, but I do not remember what they are.
Latency is the important piece of information.
Calculating Latency
Many network people have run ping. It is a tool for testing reachability and latency between your device and some other device on the Internet.
ping -c 5 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=116 time=11.6 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=116 time=11.1 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=116 time=11.0 ms
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 11.022/11.179/11.616/0.220 ms
This is a test from one of my faster machines to a Google DNS server. This tells me that it takes 11.179 ms, on average, to reach that DNS server and get a reply back. Testing to one of my network timeservers, the average is 78.094 ms.
This means that the time reported by the timeserver will be off by some amount. In a simple world, we would guess that it is 1/2 of 78.094.
But, I use NTP. NTP does many transmissions to multiple timeservers to discover the actual time. It is reporting that the latency is 78.163512 ms, a little more accurate. It tells me that the dispersion is 0.000092 ms.
How does it know this? Because of many samples and because of four different time stamps.
When my device sends an NTP request packet, it puts the current time in it. When the server receives the packet, it puts the current time in it. When the server transmits the response, it adds the current time again. Finally, when the reply is received, the current time is added to the packet. This gives us four different time stamps from two different sources.
We compute the total round-trip latency as mine(R)-mine(S). We know the server's processing time from server(S)-server(R). Comparing server(R)-mine(S) with mine(R)-server(S) tells us how symmetric the two paths, request and response, were.
From these values, we can calculate the network distance, in seconds, between us and them.
Assume we transmit at time 0(M), it is received at 100(S), the response is transmitted at 105(S) and we receive it at 78(M).
How can we receive our reply before the server sent it? Easy, we have two different views of what time it is.
The round-trip latency is 78, and it took 5 of that to process the request and get the reply back on the wire, leaving 73 of actual travel time, about 36.5 each way. If we do the simple thing and assume the two legs are symmetric, the server's clock read 100 when ours read roughly 36.5, which means our time is off from their time by about 63.5.
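Using the standard NTP on-wire formulas from RFC 5905, which subtract the server's processing time, the four example timestamps work out like this (a sketch; the function name is my own):

```python
def ntp_delay_offset(t1: float, t2: float, t3: float, t4: float):
    """RFC 5905 notation: t1 = client send, t2 = server receive,
    t3 = server send, t4 = client receive."""
    delay = (t4 - t1) - (t3 - t2)          # time actually spent on the wire
    offset = ((t2 - t1) + (t3 - t4)) / 2   # how far the client clock trails the server
    return delay, offset

delay, offset = ntp_delay_offset(t1=0, t2=100, t3=105, t4=78)
print(delay, offset)  # 73 63.5
```

Note that the offset formula averages the two legs, so it only errs by however asymmetric the request and response paths actually were.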
But we can do better. By looking at the reported latency between the two legs, we can actually calculate how long it took for us to receive the reply.
NTP uses multiple timeservers to get a consensus as to the time. It monitors each timeserver to determine which one jitters the least.
All of this means that we can have very accurate times.
And having accurate measurements of the time, NTP will calculate how much the computer’s clock drifts over time. It will then modify the clock rate in parts per million to get the drift as close to zero as possible.
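That drift correction can be sketched as follows. The numbers are invented for illustration; real NTP measures drift continuously and disciplines the kernel clock rather than computing a one-shot value:

```python
def drift_ppm(clock_elapsed_s: float, true_elapsed_s: float) -> float:
    """Clock rate error in parts per million (positive = running fast)."""
    return (clock_elapsed_s - true_elapsed_s) / true_elapsed_s * 1_000_000

# If our clock gained half a second over 10,000 real seconds...
ppm = drift_ppm(clock_elapsed_s=10_000.5, true_elapsed_s=10_000.0)
print(f"slew the clock by {ppm:.1f} ppm")  # the clock runs 50 ppm fast
```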
This means that the longer your device runs NTP, the more accurate it becomes.