NTP at NIST Boulder Has Lost Power
Key topics
A power outage at NIST Boulder has sparked a lively discussion about the implications for NTP (Network Time Protocol) and the broader timekeeping infrastructure. As commenters weighed in, it became clear that while some took the outage in stride, others poked fun at it, with one joking that a "coup" had occurred and another quipping about "reference standard chickens." Despite the humor, commenters raised concerns about the potential impact on timekeeping, while noting that the redundancy built into timekeeping systems should mitigate major issues. The conversation highlights the intricate web of timekeeping infrastructure and the community's reliance on it.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion (based on 160 loaded comments)
- First comment: 1h after posting
- Peak period: 128 comments in the 0-12h window
- Average per period: 20 comments

Key moments
- Story posted: Dec 20, 2025 at 2:39 AM EST
- First comment: Dec 20, 2025 at 4:02 AM EST (1h after posting)
- Peak activity: 128 comments in the first 12 hours, the hottest window of the conversation
- Latest activity: Dec 28, 2025 at 1:43 PM EST
WWV still seems to be up, including voice phone access.
NIST Boulder has a recorded phone number for site status, and it says that as of December 20, the site is closed with no access.
NIST's main web site says they put status info on various social media accounts, but there's no announcement about this.
[1] https://www.nist.gov/campus-status
Being unfamiliar with it, I can't tell whether this is a minor blip that happens all the time, or potentially a major issue that could cause cascading errors equal to the hype of Y2K.
And most enterprises, including banks, use databases.
So by bad luck, you may get a couple of transactions reversed in time order, such as a $20 debit incorrectly happening before a $10 credit, when your bank balance was only $10 prior to both those transactions. Your balance temporarily goes negative.
Now imagine if all those amounts were ten thousand times higher ...
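A toy sketch of that reordering effect (the numbers are made up, not from any real ledger): the same two transactions against a $10 balance, applied in opposite orders.

```python
# Toy illustration: identical transactions, two processing orders.
def run(balance, transactions):
    history = []
    for amount in transactions:
        balance += amount
        history.append(balance)
    return history

print(run(10, [+10, -20]))  # [20, 0]  credit applied first: never negative
print(run(10, [-20, +10]))  # [-10, 0] debit applied first: briefly overdrawn
```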
That practice equates to over $12 billion in fees for 2024
https://finhealthnetwork.org/research/overdraft-nsf-fees-big...
I was just starting off in life (kid, girl, job, apartment, bank, debit card, bills) when I made a 63-cent error with my bank account. Which is to say: If all of the negative debits and all of the positive credits were summed, then the account would have been in the negative by 63 cents.
I screwed up. And in a fair and just world, I'd owe the bank some extra tithing for quite clearly having spent some money that I very definitely did not have. Maybe $20 for the error and $10 per day spent in the red, plus the 63 cents, or something along those lines.
But it wasn't that way. Because the transactions were processed in batches that were re-ordered to be highest-first, I discovered that I owed the bank a little more than $430.
At the time (around 25 years ago, now) that was an absolute mountain of money to me.
The banker at the branch I went into was unapologetic and crass about the fees.
Looking over my transactions, they said "You know how to do math, right? You should have known that the account was overdrawn, but you were just out spending money willy-nilly all over town anyway."
I replied with something like "I made a single error of 63 cents. The rest of what you said is an invention."
This went back and forth for a bit before I successfully managed to leave the building without any handcuffs, fire trucks, or ambulances becoming involved -- and I still owed them more than $430.
The lesson I learned was very simple: Seriously, fuck those guys.
($12 billion in 2024, huh? That's all? Maybe the no-fee fintechs are winning.)
What a defeatist attitude, I plan to live forever or die trying! /s
RC oscillators are poor enough that early USB communication would fail if running on an RC clock.
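Rough numbers behind that claim (the tolerance figure is from the USB 2.0 spec; the RC figure is a typical trimmed on-chip value and varies by part, so treat it as an assumption):

```python
# USB 2.0 full-speed signaling allows only +/-0.25% data-rate error,
# while a typical trimmed on-chip RC oscillator is around +/-1%.
usb_full_speed_tolerance = 0.0025  # +/-0.25% allowed
typical_rc_oscillator    = 0.01    # +/-1% frequency error, ballpark

print(typical_rc_oscillator > usb_full_speed_tolerance)  # True: RC is too sloppy
```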
The main problem will be services that assume at least one of the NIST time servers is up. Somewhere, there's going to be something that won't work right when all the NIST NTP servers are down. But what?
What atomic clocks are disciplined by NTP anyway? Local GPS disciplining is the standard. If you're using NTP you don't need precision or accuracy in your timekeeping.
Note I didn’t say they are more important than the Internet. That’s a value judgement in any case. I said that NIST level 0 NTP servers are more important to these use cases than they are to the Internet.
Losing NTP for a day is going to affect fuck-all.
I’m sure synchronising all the world’s detectors over direct fiber links would… work, but they aren’t.
Unless you are trying to argue internal synchronisation, in which case, obviously; but that has absolutely zero to do with losing NTP for a day, which is the topic of conversation.
- GPS
- industrial complex that synchronize operations (we could include trains)
- telecoms in general (so a level higher than the internet)
(A random search result from Space Force, https://www.ssc.spaceforce.mil/Newsroom/Article/4039094/50-y..., claims that cell phone tower-to-tower handoff uses GPS-mediated timing, though only at the microsecond level.)
It's not super important when compared to basic needs like plumbing, food, electricity, medical assistance and other silly things we take for granted but are heavily dependent on. We all saw what happened to hospitals during the early stages of the COVID pandemic; we had plenty of internet and electricity but were struggling on the medical part. That was quite bad... I'm not sure if it's any worse if an entire country/continent lost access to the Internet. Quite a lot of our core infrastructure components in society rely on this. And a fair bit of it relies on a common understanding of what time "now" is.
For this, though, you need to go beyond NTP into PTP, which is still usually based on GPS time and atomic clocks
[0] https://www.septentrio.com/en/learn-more/insights/how-gps-br...
It might be difficult to generate enough resolution in measurable events that we can predict accurately enough? Like, I'm guessing the start of a transit or alignment event? Maybe something like predicting the time at which a laser pulse will be returnable from a lunar reflector -- if we can do the prediction accurately enough then we can re-establish time back to the current fixed scale.
I think I'm addressing an event that won't ever happen (all precise and accurate time sources are lost/perturbed), and if it does it won't be important to re-sync in this way. But you know...
https://static.googleusercontent.com/media/research.google.c...
The ultimate goal is usually to have a bunch of computers all around the world run synchronised to one clock, within some very small error bound. This enables fancy things like [0].
Usually, this is achieved by having some master clock(s) for each datacenter, which distribute time to other servers using something like NTP or PTP. These clocks, like any other clock, need two things to be useful: an oscillator, to provide ticks, and something by which to set the clock.
In standard off-the-shelf hardware, like the Intel E810 network card, you'll have an OCXO, like [1], with a GPS module. The OCXO provides the ticks, the GPS module provides a timestamp to set the clock with and a pulse for when to set it.
As long as you have GPS reception, even this hardware is extremely accurate. The GPS module provides a new timestamp, potentially accurate to within single-digit nanoseconds ([2] datasheet), every second. These timestamps can be used to adjust the oscillator and/or how its ticks are interpreted, such that you maintain accuracy between the timestamps from GPS.
The problem comes when you lose GPS. Once this happens, you become dependent on the accuracy of the oscillator. An OCXO like [1] can hold to within 1µs accuracy over 4 hours without any corrections, but if you need better than that (either more time below 1µs, or better than 1µs over the same time), you need a better oscillator.
The best oscillators are atomic oscillators. [3], for example, can maintain better than 200ns accuracy over 24h.
So for a datacenter application, I think the main reason for an atomic clock is simply retaining extreme accuracy in the event of an outage. For quite reasonable accuracy, a more affordable OCXO works perfectly well.
[0]: https://docs.cloud.google.com/spanner/docs/true-time-externa...
[1]: https://www.microchip.com/en-us/product/OX-221
[2]: https://www.u-blox.com/en/product/zed-f9t-module
[3]: https://www.microchip.com/en-us/products/clock-and-timing/co...
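To make the holdover arithmetic concrete, here's a first-order sketch. It ignores oscillator aging and temperature effects, and the fractional-offset values are back-computed to match the figures quoted above rather than taken from any datasheet:

```python
# Holdover time error, to first order: fractional frequency offset
# multiplied by elapsed free-running time.
def holdover_error_us(fractional_offset, hours):
    return fractional_offset * hours * 3600 * 1e6  # microseconds

print(holdover_error_us(7e-11, 4))     # ~1.0 us over 4 h  (OCXO-class)
print(holdover_error_us(2.3e-12, 24))  # ~0.2 us over 24 h (atomic-class)
```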
As you say, the goal is to keep the system clocks on the server fleet tightly aligned, to enable things like TrueTime. But also to have sufficient redundancy and long enough holdover in the absence of GNSS (usually due to hardware or firmware failure on the GNSS receivers) that the likelihood of violating the SLA on global time uncertainty is vanishingly small.
The "global" part is what pushes towards having higher end frequency standards, they want to be able to freewheel for O(days) while maintaining low global uncertainty. Drifting a little from external timescales in that scenario is fine, as long as all their machines drift together as an ensemble.
The deployment I know of was originally rubidium frequency standards disciplined by GNSS, but later that got upgraded to cesium standards to increase accuracy and holdover performance. Likely using an "industrial grade" cesium standard that's fairly readily available, very good but not in the same league as the stuff NIST operates.
I mean, fleets come in all sizes; but if you put one atomic reference in each AZ of each datacenter, there's a fleet. Maybe the references aren't great at distributing time, so you add a few NTP distributors per datacenter too and your fleet is a little bigger. Google's got 42 regions in GCP, so they've got a case for hundreds of machines for time (plus they've invested in spanner which has some pretty strict needs); other clouds are likely similar.
If you have information on what they actually are using internally, please share.
Example: https://www.microchip.com/en-us/products/clock-and-timing/co...
If you get a rubidium clock for your garage, you can sync it up with GPS to get an accurate-enough clock for your hobby NTP project, but large research institutions and their expensive contraptions are more elaborate to set up.
Example: https://www.accubeat.com/ntp-ptp-time-servers
It's a huge huge huge misconception that you can just plunk down an "atomic clock", discipline an NTP server with it and get perfect wallclock time out of it forever. That is just not how it works. Two hydrogen masers sitting next to each other will drift. Two globally distributed networks of hydrogen masers will drift. They cannot NOT drift. The universe just be that way.
UTC is by definition a consensus; there is no clock in the entire world that one could say is exactly tracking it.
Google probably has the gear and the global distribution to keep pretty close over 30-60 days, but they are assuredly not trying to keep their own independent time standard. Their goal is to keep events correlated on their own network, and for that they just need good internal distribution and consensus, and they are at the point where doing that internally makes sense. But this is the same problem on any size network.
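A toy illustration of what a consensus timescale means (this is a made-up weighted average, not BIPM's actual algorithm): the "paper clock" is a blend of many physical clocks, and no single clock reads exactly that time.

```python
# Toy "paper clock": a weighted average of several clocks' offsets.
# Offsets and weights are invented for illustration.
def ensemble_offset(offsets, weights):
    return sum(o * w for o, w in zip(offsets, weights)) / sum(weights)

offsets = [12e-6, -3e-6, 7e-6]  # seconds, each clock vs. unknown "truth"
weights = [1.0, 3.0, 2.0]       # more stable clocks get more weight
print(ensemble_offset(offsets, weights))  # ~2.8e-06 s: the consensus
```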
Honestly, for just NTP, I've never really seen evidence that anything better than a good GPS-disciplined TCXO even matters. The reason they offer these oscillators in such devices is that they usually do additional duties, like running PTP or distributing a local 10 MHz reference, where their specific performance characteristics are more useful. Rubidium, for instance, is very stable at short timescales but has awful long-term stability.
Funny you should say that... https://developers.google.com/time/smear
the NIST hydrogen clock is very expensive and sophisticated.
Everyone else is already connecting to load balanced services that rotate through many servers, or have set up their own load balancing / fallbacks.
Says it's still mostly up.
Perhaps, "We don't know." will become popular?
The network will route around the damage with no real effects. Maybe a few microseconds of jitter as you have to ask a more distant server for the time.
The answer is no. Anyone claiming this will have an impact on infrastructure has no evidence backing it up. Table top exercises at best.
This is some level of eldritch magic that I am aware of but not familiar with, and am interested in learning.
Probably more interesting is how you get a tier 0 site back in sync - NIST rents out these cyberpunk looking units you can use to get your local frequency standards up to scratch for ~$700/month https://www.nist.gov/programs-projects/frequency-measurement...
Also thank you for that link, this is exactly the kind of esoteric knowledge that I enjoy learning about
Most high-availability networks use pool.ntp.org or vendor-specific pools (e.g., time.cloudflare.com, time.google.com, time.windows.com). These systems would automatically switch to a surviving peer in the pool.
Critical infrastructure (data centers, telecom hubs) typically uses local GPS/GNSS-disciplined oscillators or atomic clocks. These operate independently of NIST's network availability.
Average consumer devices (laptops, smartphones) would see no immediate impact. Modern hardware clocks can maintain sufficient accuracy for days before drift affects application logic.
The Kerberos protocol, used by Windows Active Directory and enterprise networks, requires clocks to be within a specific tolerance (typically 5 minutes) to prevent replay attacks. Once client and server clocks drift beyond this threshold, users cannot log in to corporate resources.
Sysadmins rely on synchronized timestamps for forensic analysis and event correlation. As servers drift at different rates (often ~1 second/day), it becomes difficult to reconstruct the sequence of events across a distributed network.
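Putting the two numbers above together (the ~5-minute Kerberos tolerance and ~1 second/day drift, both rough figures), a quick sketch of how long an unsynchronized fleet has before logins break:

```python
# Rough time-to-failure for Kerberos auth under unsynchronized drift.
def days_until_auth_breaks(tolerance_s=300.0, relative_drift_s_per_day=1.0):
    return tolerance_s / relative_drift_s_per_day

print(days_until_auth_breaks())  # ~300 days if only one side drifts
# if client and server drift in opposite directions, skew grows twice as fast:
print(days_until_auth_breaks(relative_drift_s_per_day=2.0))  # ~150 days
```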
IT teams would be required to manually update "hard-coded" NTP configurations to point to alternative servers, as NIST's primary IPs (e.g., 132.163.4.101) would stop responding.
If a system's local clock drifts outside the validity window of a TLS certificate, web browsers will reject the connection as "not private". As certificate lifespans shorten in 2025 (some moving toward 90-day terms), the margin for clock error decreases.
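A toy sketch of that failure mode (the dates are hypothetical): a certificate's validity window is judged against the local clock, so a clock that is far enough behind makes a freshly issued certificate look not-yet-valid.

```python
from datetime import datetime, timedelta, timezone

# A certificate is only trusted between notBefore and notAfter,
# as judged by the *local* clock.
not_before = datetime(2025, 12, 1, tzinfo=timezone.utc)
not_after = not_before + timedelta(days=90)  # a 90-day certificate

def cert_looks_valid(local_clock):
    return not_before <= local_clock <= not_after

true_now = datetime(2025, 12, 20, tzinfo=timezone.utc)
print(cert_looks_valid(true_now))                       # True
print(cert_looks_valid(true_now - timedelta(days=30)))  # False: clock 30 days slow
```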
Distributed databases that rely on time for transaction ordering (linearizability) may experience data corruption or performance degradation if nodes cannot agree on the current time.
Financial exchanges are often legally required to use time traceable to a national standard like UTC(NIST). A total failure of the NIST distribution layer would likely trigger a suspension of electronic trading to maintain audit trail integrity.
Modern power grids use synchrophasors that require microsecond-level precision for frequency monitoring. Losing the NIST reference would degrade the grid's ability to respond to load fluctuations, increasing the risk of cascading outages.
You don’t need to actually sync to NIST. I think most people PTP/PPS to a GPS-connected Grandmaster with high quality crystals.
But one must report deviations from NIST time, so CAT Reporters must track it.
I think you are right — if there is no NIST time signal then there is no properly auditable trading and thus no trading. MiFID II has similar stuff but I am unfamiliar with it.
One of my favorite nerd possessions is my hand-signed letter from Judah Levine with my NIST Authenticated NTP key.
[1] https://www.finra.org/rules-guidance/rulebooks/finra-rules/6...
To quote the ITU: "UTC is based on about 450 atomic clocks, which are maintained in 85 national time laboratories around the world." https://www.itu.int/hub/2023/07/coordinated-universal-time-a...
Beyond this, as other commenters have said, anyone who is really dependent on having exact time (such as telcos, broadcasters, and those running global synchronized databases) should have their own atomic clock fleets. Moreover, GPS time, used by many to act as their time reference, is distributed by yet other means.
Nothing bad will happen, except to those who have deliberately made these specific Stratum 0 clocks their only reference time. Anyone who has either left their computer at its factory settings or has set up their NTP in accordance with recommended settings will be unaffected by this.
Either go with one clock in your NTPd/Chrony configuration, or ≥4.
Yes, if you have 3 they can triangulate, but if one goes offline you now have 2 with no tie-breaker. If you have (at least) 4 servers, then one can go bad and triangulation / sanity-checking can still occur.
* https://www.meinbergglobal.com/english/products/
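A minimal chrony.conf following the "at least 4 diverse sources" advice above might look like this (the server choices are examples, not a recommendation; pick operators you trust):

```
# /etc/chrony/chrony.conf -- a sketch, not a recommendation
pool pool.ntp.org           iburst maxsources 4
server time.cloudflare.com  iburst nts    # supports authenticated NTS
server time.google.com      iburst        # note: smears leap seconds
server time.nist.gov        iburst        # NIST round-robin, not a single host
```

One real-world wrinkle with mixing sources: time.google.com smears leap seconds while the others do not, so the sources will briefly disagree around a leap second.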
There's no present need for important hard-to-replace sciencey-dudes to go into the shop (which is probably both cold, and dark, and may have other problems that make it unsafe: it's deliberately closed) to futz around with the time machines.
We still have other NTP clocks. Spooky-accurate clocks that the public can get to, even, like just up the road at NIST in Fort Collins (where WWVB lives, and which is currently up), and in Maryland.
This is just one set.
And beyond that, we've still got clocks in GPS satellites orbiting, and a whole world of low-stratum NTP servers that distribute that time on the network. (I have one such GPS-backed NTP server on the shelf behind me; there's not much to it.)
And the orbital GPS clocks are controlled by the US Navy, not NIST.
So there's redundancy in distribution, and also control, and some of the clocks aren't even on the Earth.
Some people may be bit by this if their systems rely on only one NTP server, or only on the subset of them that are down.
And if we're following section 3.2 of RFC 8633 and using multiple diverse NTP sources for our important stuff, then this event (while certainly interesting!) is not presently an issue at all.
Fun facts about The clock:
You can't put anything in the room or take anything out. That's how sensitive the clock is.
The room is just filled with asbestos.
The actual port for the actual clock, the little metal thingy that is going buzz, buzz, buzz with voltage every second on the dot? Yeah, that little port isn't actually hooked up to anything, as again, it's so sensitive (impedance matching). So they use the other ports on the card for actual data transfer to the rest of the world. They do the adjustments so it's all fine in the end. But you have to define something as the second, and that little unused port is it.
You can take a few pictures in the cramped little room, but you can't linger, as again, just your extra mass and gravity affects things fairly quickly.
If there are more questions about time and timekeeping in general, go ahead and ask, though I'll probably get back to them a bit later today.
Can you restate this part in full technical jargon along with more detail? I'm having a hard time following it
https://www.nist.gov/pml/time-and-frequency-division/time-re...
and you can see a photo of the actual installation here:
https://www.denver7.com/news/front-range/boulder/new-atomic-...
As you can see, the room is clearly not filled with asbestos. Furthermore, the claim is absurd on its face. Asbestos was banned in the U.S. in March 2024 [1] and the clock was commissioned in May 2025.
The rest of the claims are equally questionable. For example:
> The actual port for the actual clock ... isn't actually hooked up to anything ... they use the other ports on the card for actual data transfer
It's hard to make heads or tails of this, but if you read the technical description of the clock you will see that by the time you get to anything in the system that could reasonably be described as a "card" with "ports" you are so far from the business end of the clock that nothing you do could plausibly have an impact on its operation.
> You can't put anything in the room or take anything out. That's how sensitive the clock is.
This claim is also easily debunked using the formula for gravitational time dilation [2]. The accuracy of the clock is ~10^-16. Calculating the mass of an object 1m away from the clock that would produce this effect is left as an exercise, but it's a lot more than the mass of a human. To get a rough idea, the relativistic time dilation on the surface of the earth is <100 μs/day [3]. That is huge by atomic clock standards, but that is the result of 10^24kg of mass. A human is 20 orders of magnitude lighter.
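For concreteness, the weak-field estimate for a generous 100 kg human standing 1 m from the clock:

```python
# Weak-field gravitational shift: delta_f / f ~ G*M / (r * c^2)
G = 6.674e-11  # m^3 kg^-1 s^-2
c = 2.998e8    # m/s
M = 100.0      # kg: a generous estimate for a visitor
r = 1.0        # m from the clock

print(G * M / (r * c**2))  # ~7.4e-26, ~10 orders below the clock's ~1e-16
```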
---
[1] https://www.mesotheliomahope.com/legal/legislation/asbestos-...
[2] https://en.wikipedia.org/wiki/Gravitational_time_dilation
[3] https://tf.nist.gov/general/pdf/3278.pdf
but yes, I also want the juicy details!
Unfortunately, the HP cesium clock that backed the utcnist systems failed a few weeks ago, so they're offline. I believe the plan is to decommission those servers anyway - NIST doesn't even list them on the NTP status page anymore, and Judah Levine has retired (though he still comes in frequently). Judah told me in the past that the typical plan in this situation is that you reference a spare HP clock with the clock at NIST, then drive it over to JILA backed by some sort of battery and put it in the rack, then send in the broken one for refurb (~$20k-$40k; new box is closer to $75k). The same is true for the WWVB station, should its clocks fail.
There is fiber that connects NIST to CU (it's part of the BRAN - Boulder Research and Administration Network). Typically that's used when comparing some of the new clocks at JILA (like Jun Ye's strontium clock) to NIST's reference. Fun fact: Some years back the group was noticing loss due to the fiber couplers in various closets between JILA & NIST... so they went to the closets and directly spliced the fibers to each other. It's now one single strand of fiber between JILA & NIST Boulder.
That fiber wasn't connected to the clock that backed utcnist though. utcnist's clock was a commercial cesium clock box from HP that was also fed by GPS. This setup was not particularly sensitive to people being in the room or anything.
Another fun fact: utcnist3 was an FPGA developed in-house to respond to NTP traffic. Super cool project, though I didn't have anything to do with it, haha.
Now if the (otherwise very kind) guy in charge of the Bureau international des poids et mesures at Sèvres, who did not let me have a look at the reference for the kilogram and meter, could change his mind, I would appreciate it. For a physicist this is kinda like a cathedral.
They have so many incredible artifacts (for weights and measures but also so much more: engineering, physics, civil engineering, machining,...)
[1]: https://collections.arts-et-metiers.net?id=13404-0001-
I spent 4 hours there and was surprised to see so many tourists; this is not a place I expected people visiting Paris to go. There were no crowds though.
The top part is really great: you get to see how much people did with so little. So is the chemistry part.
I found the steel replica of the kilogramme and the meter, and of course the Foucault pendulum (in the neighboring refurbished church).
This is truly an interesting museum, on par with the museum of discoveries (musée de la découverte), which is unfortunately closed now for a few years of renovations (or at least was recently planned to be closed). Much better than La Villette.
So thank you again!
The Musée de Sèvres (or Bureau des Mesures as it is called now) has the original kilogramme and meter iridium reference, hidden in the basement ;( So if the director has a change of heart, I am all in!
I thought it was US Space Force / Air Force. Was the Navy previously or currently involved?
In this context, they feed timing updates to the GPS operators https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-N...
The failure of a single such server is far from a disaster.
And for 99% of that history, noon was when the sun was halfway through its daily arc at whatever point on Earth one happened to inhabit. The ownership class are the ones who invented things like time zones to stop their trains from running into each other, and NTP is just the latest and most-pervasive-and-invasive evolution of that same inhuman mindset.
From a privacy point of view, constant NTP requests are right up there alongside weather apps and software telemetry for “things which announce everyone's computers to the global spy apparatus”, feeding the Palantirs of the world to be able to directly locate you as an individual if need be.
In a world where this didn't happen, your comment would most likely read:
> The ownership class are the ones who had such indifference toward the lives of the lower class passengers that they didn't bother stopping their trains from running into each other.
-My Friend Andy
But the stratum 1 time servers can shrug and route around the damage.
Time services are available from other locations. That's the disaster plan. I'm sure there will be some negative consequences from this downtime, especially if all the Boulder reference time sources lose power, but disaster plans mitigate negative consequences, they can't eliminate them.
Utility power fails, automatic transfer switches fail, backup generators fail, building fires happen, etc. Sometimes the system has to be shut down.
Remember - the Marshall fire started on December 30, 2021 under similar high wind conditions: https://en.wikipedia.org/wiki/Marshall_Fire 1,000 structures destroyed, two deaths.
On the fridge itself: You may find that the contents are insured against power outages.
As an anecdote, my (completely not-special) homeowner's insurance didn't protest at all about writing a check for the contents of my fridge and freezer when I asked about that, after my house was without power for a couple of weeks following the 2008 derecho. This rather small claim didn't affect my rate in any way that I could perceive.
And to digress a bit: I have a chest freezer. These days I fill up the extra space in the freezer with water -- with "single-use" plastic containers (water bottles, milk jugs) that would normally be landfilled or recycled.
This does a couple of things: On normal days, it increases the thermal mass of the freezer, which improves the cycle times for the compressor in ways that tend to make it happier over time. In the abnormal event of a long power outage, it also provides a source of ice chilled to 0°F/−18°C that I can relocate into the fridge (or into a cooler, perhaps for transport), to keep cold stuff cold.
It's not a long-term solution, but it'll help ensure that I've got a fairly normal supply of fresh food to eat for a couple of days if the power dips. And it's pretty low-effort on my part. I've probably spent nearly as much effort writing about this system here just now as I have on implementing it.
tl;dr - the fire destroyed over 1,000 homes and killed two people. The local electrical utility, Xcel, was found to be a contributing cause, its power lines sparking during a strong wind storm. As a result, electrical utilities now cut power to affected areas during strong winds.
That's some strong winds! What causes such strong sustained/gusty winds for that long? I'm hearing about this weather phenomenon for the first time.
[1] https://www.eecis.udel.edu/~mills/ntp.html
[2] https://youtu.be/08jBmCvxkv4?si=WXJCV_v0qlZQK3m4&t=2092
[3] https://youtu.be/08jBmCvxkv4?si=K80ThtYZWcOAxUga&t=3386
https://tf.nist.gov/tf-cgi/servers.cgi
47 more comments available on Hacker News