NIST Was 5 µs Off UTC After Last Week's Power Cut
Key topics
A jolt to the timekeeping world: NIST's time servers were 5 microseconds off UTC after a recent power outage, sparking debate about the reliability of trusted time sources. While some commenters urged caution, pointing out that malicious NTP pool members can compromise accuracy, others argued that NIST's responsible handling of the issue actually bolsters trust. The discussion veered into the nuances of atomic clocks, with some highlighting the varying performance ranges and costs, from affordable rubidium frequency references to high-end cesium clocks and hydrogen masers. As the conversation unfolded, a consensus emerged that building one's own Stratum 1 time server or exploring alternative time sources might be the best bet for those requiring ultra-precise timing.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 6m after posting
Peak period: 71 comments in 0-6h
Avg / period: 15.8
Based on 158 loaded comments
Key moments
- 01 Story posted: Dec 22, 2025 at 12:01 PM EST (11 days ago)
- 02 First comment: Dec 22, 2025 at 12:07 PM EST (6m after posting)
- 03 Peak activity: 71 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Dec 25, 2025 at 2:07 PM EST (8 days ago)
Suggestions from the community for more reliable alternatives?
You still can...
If you're that concerned about 5 microseconds: Build your own Stratum 1 time server https://github.com/geerlingguy/time-pi
or just use ntppool https://www.ntppool.org/en/
For example: unlike the IPv4 space, the IPv6 space is too big to scan, so a number of "researchers" (if you want to call them that) put v6-capable NTP servers in the NTP pool to gather information about active v6 blocks to scan/target.
> Jeff finished off the email mentioning the US GPS system failed over successfully to the WWV-Ft. Collins campus. So again, for almost everyone, there was zero issue, and the redundancy designed into the system worked like it's supposed to.
So failures in these systems are potentially correlated.
The author mentions another solution. Apparently he runs his own atomic clock. I didn’t know this was a thing an individual could do.
> But even with multiple time sources, some places need more. I have two Rubidium atomic clocks in my studio, including the one inside a fancy GPS Disciplined Oscillator (GPSDO). That's good for holdover. Even if someone were jamming my signal, or my GPS antenna broke, I could keep my time accurate to nanoseconds for a while, and milliseconds for months. That'd be good enough for me.
There are a few folks on the time-nuts mailing list who own such exotic pieces of hardware, but those are pretty far out of reach for most!
[1] https://www.microchip.com/en-us/products/clock-and-timing/co...
[2] https://www.microchip.com/en-us/products/clock-and-timing/co...
For instance, time-a-wwv.nist.gov.
One should configure a number of different NTP sources instead of just a single host.
Also, you can use multiple NIST servers. They have ones in Fort Collins, CO and Gaithersburg, MD. Most places shouldn't use NIST directly but Stratum 1 time servers.
Finally, NTP over the public Internet is typically only accurate to 10-100 ms, so a microsecond-level error is too small to matter.
Use NTP with ≥4 diverse time sources, just as RFC 5905 suggests doing. And use GPS.
(If you're reliant upon only one source of a thing, and that thing is important to you in some valuable way, then you're doing it wrong. In other words: Backups, backups, backups.)
To put a deviation of a few microseconds in context, the NIST time scale usually performs about five thousand times better than this at the nanosecond scale by composing a special statistical average of many clocks. Such precision is important for scientific applications, telecommunications, critical infrastructure, and integrity monitoring of positioning systems. But this precision is not achievable with time transfer over the public Internet; uncertainties on the order of 1 millisecond (one thousandth of one second) are more typical due to asymmetry and fluctuations in packet delay.
[1] https://groups.google.com/a/list.nist.gov/g/internet-time-se...
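The asymmetry limitation mentioned above falls straight out of the standard NTP offset arithmetic. A minimal Python sketch (the timestamps are hypothetical round numbers, not measurements):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Clock offset and round-trip delay from the four NTP timestamps
    (t1 client send, t2 server receive, t3 server send, t4 client receive),
    per the formulas in RFC 5905."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Symmetric path (10 ms each way), server clock 5 ms ahead:
# the true 5 ms offset is recovered exactly.
o, d = ntp_offset_delay(0.000, 0.015, 0.016, 0.021)
print(f"symmetric:  offset={o*1e3:.1f} ms, delay={d*1e3:.1f} ms")

# Asymmetric path (5 ms out, 15 ms back), same 5 ms true offset:
# the estimate is biased by half the asymmetry and reads 0.0 ms.
o, d = ntp_offset_delay(0.000, 0.010, 0.011, 0.021)
print(f"asymmetric: offset={o*1e3:.1f} ms, delay={d*1e3:.1f} ms")
```

Note that the measured round-trip delay is identical in both cases, which is why a client cannot detect the asymmetry from its own measurements.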
How do those other applications obtain the precise value they need without encountering the Internet issue?
They do not use the Internet: they use local GPS-disciplined clocks with internal high-precision oscillators for holdover in case the GNSS signal is unavailable:
* https://www.ntp.org/support/vendorlinks/
* https://www.meinbergglobal.com/english/products/ntp-time-ser...
* https://syncworks.com/shop/syncserver-s650-rubidium-090-1520...
* https://telnetnetworks.ca/solutions/precision-time/
In the most critical applications, you can license a system like Fugro AtomiChron that enhances GNSS timing down to the level of a few nanoseconds. You can get that as an option with the SparkPNT GPSDO, for instance (https://www.sparkfun.com/sparkpnt-gnss-disciplined-oscillato...).
That's one hell of a healthy profit margin there O.o
The SiTime MEMS oscillator is about 100€ for a single chip, the mosaic-T GPS receiver is about 400€. Add 50€ for the rest (particularly the power input section looks complicated) and 50€ for handling: probably 600€ in hardware cost... sold for 2,500€.
The real money I think went into certification and R&D for a low-volume product - even though most of the hard work is done by the two ICs, getting everything orchestrated (including the PCB itself) to perform to that level of accuracy is one hell of a workload.
There are so many artificial bureaucratic barriers to entry these days, and it's not getting better.
GPS has its own independent timescale called GPS Time. GPS Time is generated and maintained by atomic clocks onboard the GPS satellites (cesium and rubidium).
The GPS satellite clocks are steered to the US Naval Observatory’s UTC as opposed to NIST’s, and GPS fails over to the USNO’s Alternate Master Clock [0] in Colorado.
[0] https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-N...
GPS system time is currently 18s ahead of UTC since it doesn't take UTC's leap seconds into account [0]
This (old) paper from USNO [1] goes into more detail about how GPS time is related to USNO's realization of UTC, as well as talking a bit about how TAI is determined (in hindsight! - by collecting data from clocks around the world and then processing it).
[0] https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-N... [1] https://ntrs.nasa.gov/api/citations/19960042620/downloads/19...
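The 18-second bookkeeping is easy to illustrate. A hedged sketch, assuming the current 18 s GPS-UTC offset (valid since the end of 2016; it changes whenever a new leap second is inserted), with a made-up week/seconds-of-week value:

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_OFFSET = 18  # seconds; GPS time is currently ahead of UTC by 18 s

def gps_to_utc(week, seconds_of_week):
    """Convert a GPS week / seconds-of-week pair to UTC.
    Applies the current 18 s offset uniformly, so it is only
    correct for dates after the most recent leap second."""
    elapsed = timedelta(weeks=week, seconds=seconds_of_week)
    return GPS_EPOCH + elapsed - timedelta(seconds=GPS_UTC_OFFSET)

print(gps_to_utc(2400, 302400))  # an arbitrary mid-week GPS timestamp
```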
In particular, the atomic clocks on board the GPS satellites are not sufficient to maintain a time standard because of relativistic variations and Doppler effects, both of which can be corrected, but only if the exact orbit is known to within exceedingly tight tolerances. Those orbital elements are created by reference to NIST. Essentially, the satellite motions are computed using inverse GPS and then we use normal GPS based on those values.
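For scale, the relativistic rate offsets involved are well known; a back-of-the-envelope check using approximate textbook figures (not mission values):

```python
# GPS satellite clocks run fast due to the weaker gravitational
# potential at ~20,200 km altitude, and slow due to time dilation
# at ~3.9 km/s orbital speed. Approximate textbook magnitudes:
gr_speedup = 45.7   # microseconds per day gained (general relativity)
sr_slowdown = -7.2  # microseconds per day lost (special relativity)

net = gr_speedup + sr_slowdown
print(f"net ≈ {net:.1f} µs/day fast")
# Left uncorrected, ~38.5 µs/day of clock drift alone would translate
# to roughly 11 km/day of ranging error (38.5 µs × c).
```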
Verification and traceability is one reason: it's all very well to claim you're within ±x seconds, but your logs may have to say how close you are to the 'legal reality' that is the official time of NIST.
NIST may also send out time via 'private fibre' for certain purposes:
* https://en.wikipedia.org/wiki/White_Rabbit_Project
'Fibre timing' is also important in case of GNSS signal disruption:
* https://www.gpsworld.com/china-finishing-high-precision-grou...
Then they disperse and use the time as needed.
According to jrronimo, they even had one place splice fiber directly between machines because couplers were causing problems! [1]
[1] https://news.ycombinator.com/item?id=46336755
As it stands at the minute, the clocks are a mere 5 microseconds out and will slowly get better over time. This isn't even in the error measurement range and so they know it's not going to have a major effect on anything.
When the event started and they lost power and access to the site, they also lost their management access to the clocks. At this point they don't know how wrong the clocks are, or how much more wrong they're going to get.
If someone restores power to the campus, the clocks are going to be online (all the switches and routers connecting them to the internet suddenly boot up), before they've had a chance to get admin control back. If something happened when they were offline and the clocks drifted significantly, then when they came online half the world might decide to believe them and suddenly step change to follow them. This could cause absolute havoc.
Potentially safer to scram something than have it come back online in an unknown state, especially if (lots of) other things are going to react to it.
In the last NIST post, someone linked to The Time Rift of 2100: How We lost the Future --- and Gained the Past. It's a short story that highlights some of the dangers of fractured time in a world that uses high precision timing to let things talk to each other: https://tech.slashdot.org/comments.pl?sid=7132077&cid=493082...
When you ask a question, it is sometimes better to not get an answer (and to know you have not gotten one) than to get the wrong answer. If you know that a 'bad' situation has arisen, you can start contingency measures to deal with it.
If you have a fire alarm: would you rather have it fail in such a way that it gives no answer, or fail in a way where it says "things are okay" even if it doesn't know?
We actually disable NTP entirely (run it once per day or at boot) to avoid clocks jumping while recording data.
In our case the jumps were because we also had PTP disciplining the same system clock; when you have both PTP and NTP fighting over the same clock, you will see jumping with the default settings.
For us it was easier to just do a one time NTP sync at the beginning/boot, and then sync the robots local network with only PTP afterwards.
But that’s all beside the point, since most sane time-sync clients (regardless of protocol) generally handle small deviations (i.e. normal cases) by speeding up or slowing down the system clock, not jumping it (forward or backward).
On Linux I think the adjtimex() system call does the equivalent https://manpages.ubuntu.com/manpages/trusty/man2/adjtimex.2....
It smears out time differences which is great for some situations and less ideal for others.
Worked really well for the project.
Avoiding time jumps was really worthwhile.
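The arithmetic behind slewing is simple. A sketch, assuming a typical 500 ppm maximum slew rate (the actual limit varies by kernel and daemon):

```python
MAX_SLEW = 500e-6   # 500 ppm: clock runs 0.05% slow (or fast) while correcting
offset = 0.005      # hypothetical: clock is 5 ms ahead of the reference

# Running the clock 500 ppm slow absorbs the offset gradually,
# so timestamps remain monotonic -- no jump backwards.
seconds_to_correct = offset / MAX_SLEW
print(f"{seconds_to_correct:.0f} s to slew away a 5 ms offset")  # 10 s
```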
Someone else referenced low power ham radio modes like WSPR, which I also don't know much about, but I can imagine they have timeslots linked to UTC and require accuracy. Those modes have extremely low data rates and narrow bandwidths, requiring accurate synchronization. I don't know if they're designed to self-synchronize, or need an external reference.
It is very common to integrate a GPS in a WSPR beacon to discipline the transmit frequency, but with modest thermal management, very ordinary crystal oscillators have very nice stability.
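WSPR's timing requirement is indeed modest: transmissions begin at the top of even UTC minutes, so a clock accurate to a second or so is sufficient. A sketch computing the next transmit slot (the timestamp is arbitrary):

```python
from datetime import datetime, timedelta, timezone

def next_wspr_slot(now):
    """Return the next even-UTC-minute boundary after `now`,
    i.e. the start of the next WSPR transmit window."""
    base = now.replace(second=0, microsecond=0)
    minutes_ahead = 2 - (base.minute % 2)
    return base + timedelta(minutes=minutes_ahead)

now = datetime(2025, 12, 22, 12, 1, 30, tzinfo=timezone.utc)
print(next_wspr_slot(now))  # 2025-12-22 12:02:00+00:00
```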
I think most people would look at the error and think "what's the big deal", but all the telecom customers would be scrambling to find a clock that hadn't fallen out of sync.
There is a German club that builds and distributes such stations (using GPS for location and timing), with quite impressive global coverage by now:
https://www.blitzortung.org
Distributed sonar: it allows placing receivers willy-nilly and aligning the samples later.
Remote microphone switching - though for this you wouldn't notice 5us jitter, it's just that the system we designed happened to have granularity that good.
Timekeeping starts to become really hard, often requiring specialized hardware and protocols.
Perhaps a bit more boring than one might assume :).
https://en.wikipedia.org/wiki/Star_tracker
We solved this by having GPS clocks at each tower as well as having the app servers NTP with each other. The latter burned me once due to some very dumb ARP stuff, but that's a story for another day.
(See https://docs.cloud.google.com/spanner/docs/true-time-externa...)
It's been over a decade now since I managed the truetime team at google, things may have changed since :)
I defer to the experts.
If (and it isn't very conceivable) GPS satellites were to get 5µs out of whack, we would be back to Loran-C levels of accuracy for navigation.
> The official abbreviation for Coordinated Universal Time is UTC. This abbreviation comes as a result of the International Telecommunication Union and the International Astronomical Union wanting to use the same abbreviation in all languages. The compromise that emerged was UTC, which conforms to the pattern for the abbreviations of the variants of Universal Time (UT0, UT1, UT2, UT1R, etc.).
> ... in English the abbreviation for coordinated universal time would be CUT, while in French the abbreviation for "temps universel coordonné" would be TUC. To avoid appearing to favor any particular language, the abbreviation UTC was selected.
I've never heard of this! Very cool service, presumably for … quant / HFT / finance firms (maybe for compliance with FINRA Rule 4590 [3])? Telecom providers synchronizing 5G clocks for time-division duplexing [4]? Google/hyperscalers as input to Spanner or other global databases?
Seriously fascinating to me -- who would be a commercial consumer of NIST TOF?
[1] https://groups.google.com/a/list.nist.gov/g/internet-time-se...
[2] https://www.nist.gov/pml/time-and-frequency-division/time-se...
[3] https://www.finra.org/rules-guidance/rulebooks/finra-rules/4...
[4] https://www.ericsson.com/en/blog/2019/8/what-you-need-to-kno...
Still useful for post-trade analysis; perhaps you can determine that a competitor now has a faster connection than you.
The regulatory requirement you linked (and other typical requirements from regulators) allows a tolerance of one second, so it doesn't call for this kind of technology.
mifid ii (uk/eu) minimum is 1us granularity
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:...
The required accuracy (Tables 1 and 2 in that document) is 100 us or 1000 us depending on the system.
no, Tables 1 and 2 say divergence, not accuracy
accuracy is a mix of both granularity and maximum divergence
regardless, your statement before:
> The regulatory requirement you linked (and other typical requirements from regulators) allows a tolerance of one second, so it doesn't call for this kind of technology.
is not true
I respectfully disagree.
In context, "granularity" is nothing more than a resolution constraint on reported timestamps. Its inclusion adjacent to the specified "divergence" is a function of market manipulation surveillance objectives as discussed in item (2) of the preamble, and really doesn't have anything to do with accuracy proper.
1us is nothing special for GPS/NTP/PTP appliances (especially with OCXO/rubidium oscillators):
* https://www.microchip.com/en-us/products/clock-and-timing/sy...
* https://www.meinbergglobal.com/english/productinfo/gps-time-...
You can get 50ns with this. Of course, you would verify at NIST.
Would the police actually try to investigate where the jammer came from? Might the competing firm possibly even finance an investigation themselves privately? And if so, would the police then accept the evidence?
People have done far more evil things for money.
[1]: https://www.nj.com/news/2013/08/man_fined_32000_for_blocking...
I'm not surprised that somebody would try and do this. However it is just so stupid at every level.
I believe you'll need 12 GPS sats in view to gain incremental accuracy improvement over 8.
To start with, probably for scientific stuff, à la:
* https://en.wikipedia.org/wiki/White_Rabbit_Project
But fibre-based time is important in case of GNSS time signal loss:
* https://www.gpsworld.com/china-finishing-high-precision-grou...
Think Google might have rolled their own clock sources and corrections.
Ex: Sundial, https://www.usenix.org/conference/osdi20/presentation/li-yul...
To say NIST was off is clickbait hyperbole.
This page: https://tf.nist.gov/tf-cgi/servers.cgi shows that NIST has more than 16 NTP servers on IPv4; of those, 5 are in Boulder and were affected by the power failure. The rest were fine.
However, most entities should not be using these top-level servers anyway, so this should have been a problem for exactly nobody.
IMHO, most applications should use pool.ntp.org
Is pool.ntp.org dispersed across possible interference and error correlation?
Anyone can join the NTP.org pool so it's hard to make blanket statements about it. I believe there's some monitoring of servers in the pool but I don't know the details.
For example, Ubuntu systems point to their Stratum 2 timeservers by default, and I'd have to imagine that NIST is probably one of their upstreams.
An NTP server usually has multiple upstream sources and can steer its clock to minimize the error across multiple servers, as well as detecting misbehaving servers and reject them ("Falseticker"). Different NTP server implementations might do this a bit differently.
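A toy illustration of the falseticker idea (a median vote with a fixed tolerance; real implementations use the RFC 5905 selection/intersection algorithm, which is considerably more involved):

```python
import statistics

def reject_falsetickers(offsets, tolerance=0.128):
    """Keep sources whose measured offsets lie within `tolerance`
    seconds of the median offset; outliers are treated as falsetickers."""
    med = statistics.median(offsets)
    return [o for o in offsets if abs(o - med) <= tolerance]

# Three sources agree to within a few ms; the fourth is wildly wrong.
offsets = [0.002, -0.001, 0.003, 4.700]
print(reject_falsetickers(offsets))  # [0.002, -0.001, 0.003]
```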
Facebook had a really interesting engineering blog about building their own timeservers: https://engineering.fb.com/2020/03/18/production-engineering...
Really well written for anyone who is interested.
Instead I’ll stick to a major operator like Google/Microsoft/Apple, which have NTP systems designed to handle the scale of all the devices they sell, and are well maintained.
... unless someone with real experience needing those tolerances chimes in and explains why it's true.
They have A/B testing infrastructure available to creators for exactly that purpose.
If you say "Oh but he doesn't have to use that" you are wrong. If a creator's videos get low enough click through and "engagement", youtube just stops showing it to people.
Jeff, like every creator on youtube, is in an antagonistic deathmatch for your attention with people like Mr. Beast, and Youtube would rather have five Mr. Beasts than a thousand Jeffs. Youtube builds their platform to empower and enrich Mr. Beast, and to force everyone else to adopt more of Mr. Beast's methods to keep getting views, and getting their paycheck.
It does not matter if you are "subscribed" to a channel for example. Youtube will still fail to show you new videos from your subscriptions if you don't play the stupid game enough.
Stop blaming the people who did not at all make this choice. Blame Youtube. Support platforms, like nebula and whatever that guntube service is called, with real money.
Did you know this video existed before it was posted to HN? I did, because it showed up in my feed even though I am not subscribed to jeff, because he plays the game well. I watched it too because I had no idea that weather was causing such a problem in Colorado and love hearing about NIST and the people who basically run the infrastructure of the internet that everyone takes for granted.
My only interaction with Jeff's articles are through transcripts if they make it to the HN front page.
>I watched it too because I had no idea that weather was causing such a problem in Colorado and love hearing about NIST and the people who basically run the infrastructure of the internet that everyone takes for granted.
Are you aware that text based news about the NIST problems have been posted on HN in the past few days?
I took too much Adderall today.
NTP at NIST Boulder Has Lost Power
https://news.ycombinator.com/item?id=46334299