Fire Destroys S. Korean Government's Cloud Storage System, No Backups Available
Posted 3 months ago · Active 3 months ago
Source: koreajoongangdaily.joins.com · Tech story · High profile
Tone: heated, negative · Debate: 80/100
Key topics: Data Loss, Government IT, Backup Systems
A fire destroyed South Korea's government cloud storage system, resulting in permanent data loss due to the lack of backups, sparking discussions on IT incompetence and the importance of backup systems.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 8m after posting
Peak period: 131 comments in the first 12 hours
Average per period: 32 comments
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Oct 5, 2025 at 1:20 PM EDT (3 months ago)
2. First comment: Oct 5, 2025 at 1:28 PM EDT (8m after posting)
3. Peak activity: 131 comments in the first 12 hours (the hottest window of the conversation)
4. Latest activity: Oct 12, 2025 at 12:03 PM EDT (3 months ago)
ID: 45483386 · Type: story
Yikes. You'd think they would at least have one redundant copy of it all.
> erasing work files saved individually by some 750,000 civil servants
> 30 gigabytes of storage per person
That's 22,500 terabytes, about 50 Backblaze storage pods.
Or even just mirrored locally.
It's almost farcical to calculate, but AWS S3 has pricing of about $0.023/GB/month, which means the South Korean government could have reliable multi-storage backup of the whole data at about $20k/month. Or about $900/month if they opted for "Glacier deep archive" tier ($0.00099/GB/month).
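As a sanity check on the two calculations above, here is a minimal sketch. It assumes the roughly 850 TB actually stored on the G-Drive (a figure that appears later in the thread) rather than the full 30 GB × 750,000 allocation, and it uses the S3 prices quoted in the comment, not AWS's current price list.

```python
# Back-of-the-envelope storage cost check, using figures quoted in this thread.
# Assumptions (from the comments, not verified against current AWS pricing):
#   - full allocation: 750,000 users * 30 GB each
#   - data actually stored: ~858 TB (the ~850 TB figure cited later in the thread)
#   - S3 Standard: $0.023/GB-month; Glacier Deep Archive: $0.00099/GB-month

USERS = 750_000
GB_PER_USER = 30
ALLOCATED_GB = USERS * GB_PER_USER   # 22,500,000 GB = 22.5 PB
STORED_GB = 858_000                  # ~858 TB actually on the G-Drive

S3_STANDARD = 0.023      # $/GB/month
GLACIER_DEEP = 0.00099   # $/GB/month

print(f"Allocated: {ALLOCATED_GB / 1e6:.1f} PB "
      f"(~{ALLOCATED_GB / 1000 / 450:.0f} Backblaze pods at ~450 TB each)")
print(f"S3 Standard, stored data:     ${STORED_GB * S3_STANDARD:>10,.0f}/month")
print(f"Glacier Deep, stored data:    ${STORED_GB * GLACIER_DEEP:>10,.0f}/month")
print(f"S3 Standard, full allocation: ${ALLOCATED_GB * S3_STANDARD:>10,.0f}/month")
```

The ~$20k/month and ~$900/month figures only work out against the data actually stored; pricing the full 22.5 PB allocation at S3 Standard would be on the order of $500k/month.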
They did have backup of the data ... in the same server room that burned down [2].
[1] https://www.hankyung.com/article/2025100115651
[2] https://www.hani.co.kr/arti/area/area_general/1221873.html
(both in Korean)
Edit: my bad, a backup in the same room is at least something; I somehow just forgot about that part.
Having had unfortunate encounters with government IT in other countries, I can bet that the root cause wasn't the national culture. It was the internal culture of "I want to do the exact same thing I've always done until the day I retire."
Absent outside pressure, civil services across the world tend to advance scientifically - one funeral (or retirement) at a time.
Is their cost per unit so low?
0.00099 * 1000 is 0.99, so about $12 per TB per year. Now extrapolate over a 5- or 10-year period and you get to $60 to $120 per TB. Even at 3 to 5x redundancy those numbers start to add up.
That's a very primitive explanation, but it should be easy to understand.
In reality S3 uses a different algorithm (probably Reed-Solomon codes) and some undisclosed number of shards (probably different for different storage classes). Some say they use 5 of 9 (so 5 data shards + 4 parity shards, which makes for 80% overhead), but I don't think that's official information.
[1] https://bigdatastream.substack.com/p/how-aws-s3-scales-with-...
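A small sketch of both calculations above, assuming the flat Glacier Deep Archive rate quoted earlier and the unofficial 5-data/4-parity shard split from the comment (not a published AWS figure):

```python
# Per-TB archival cost over time, and erasure-coding overhead.
# The 5+4 shard split is speculation from the comment above, not official.

def archival_cost_per_tb(price_per_gb_month: float, years: int) -> float:
    """Raw storage cost for 1 TB held for `years` at a flat $/GB/month rate."""
    return price_per_gb_month * 1000 * 12 * years

def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    """Extra raw capacity needed per byte of user data, as a fraction."""
    return parity_shards / data_shards

for years in (5, 10):
    print(f"Glacier Deep Archive, 1 TB for {years} years: "
          f"${archival_cost_per_tb(0.00099, years):.0f}")

# Rumoured 5 data + 4 parity shards -> 80% overhead:
print(f"5+4 Reed-Solomon overhead: {erasure_overhead(5, 4):.0%}")
```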
Yes, it's pricey but possible.
Now it's literally impossible.
I think AWS Glacier should be the preferred option at that scale. They had their own in-house data too, but they still should have wanted an external backup, and they're literally the government, so they of all people shouldn't have to worry about prices.
Have secure encrypted backups in AWS and elsewhere too, and design the system around how serious the threat model is, in the sense that you filter out THE MOST important stuff from those databases. But that would require labelling it, which I suppose would draw even more attention to it as something to exfiltrate to places like North Korea or China, so it's definitely a mixed bag.
My question, as I've said multiple times: why didn't they build a backup within South Korea only, using some other datacentre in South Korea, so they wouldn't have to worry about the encryption issue? I don't really know, and IMO it would make more sense for them to have a backup in AWS and not worry about encryption, since I find the tangents about breaking encryption a bit unreasonable; if that's the case then all bets are off and the servers would get hacked too, which was the point of the Phrack piece on the advanced persistent threat, and so much more...
Are we all forgetting that Intel runs a proprietary OS (MINIX) in the most privileged state, which can even take Java bytecode over the network and execute it, and it's all proprietary? To me personally that is the bigger threat model, if they are indeed using that hardware, which I suppose they might be.
Now if you want to do something with the data, that's where you need to hold on to your wallet. Either you use their compute ($$$ for Amazon) or you send it to your own data centre (egress means $$$ for Amazon).
Or outright buying hardware capable of storing 850 TB for the same $20K as a one-time payment. Gives you some perspective on how overpriced AWS is.
I had 500TB of object storage priced last year and it came out closer to $300k
You of course need people to maintain it -- the $300k turnkey solution might be the better option depending on current staff.
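To make the buy-versus-rent comparison concrete, here is a rough sketch. The drive size and price are illustrative assumptions, not quotes, and it deliberately ignores chassis, power, networking, redundancy and staff, which is exactly where the $300k turnkey figure and the ongoing cloud bill come from.

```python
# Very rough buy-vs-rent comparison for ~850 TB of data.
# ASSUMPTIONS (illustrative only): 20 TB drives at $350 each, no redundancy,
# no chassis/power/network/staff costs. The S3 figure reuses the thread's
# ~$20k/month estimate for S3 Standard on the stored data.

import math

DATA_TB = 850
DRIVE_TB = 20          # assumed drive size
DRIVE_PRICE = 350      # assumed $ per drive
S3_MONTHLY = 20_000    # $/month, from the earlier comment

drives = math.ceil(DATA_TB / DRIVE_TB)
capex = drives * DRIVE_PRICE
print(f"{drives} drives, ~${capex:,} in raw disks")
print(f"Break-even vs S3 Standard: ~{capex / S3_MONTHLY:.1f} months of S3 bills")
```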
_Only_ this combination of incompetence and bad politics could lead to losing this large a share of the data, given the policy was to save files only on that "G-drive" and avoid local copies. The "G-drive" itself they intentionally did not back up, because they couldn't figure out a way to at least store a copy across the street ...
There is this weird divide between the certified class of non-technical consultants and the actual overworked techs pushed to cut corners.
From what I have seen, a lot of the time the playbooks to fix these issues are just rawdogging files around with rsync manually. Ideally you deploy your infrastructure in cells, where rollouts proceed cell by cell so you can catch issues sooner, and you also implement failover to bootstrap broken cells (in my DNS example, clients could talk to DNS servers in the closest non-broken cell using BGP-based routing). It is hard to test, and there are some global services (that big Google outage a few months ago was due to the global auth service being down).
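The cell-by-cell rollout idea can be sketched roughly like this; the cell names and the deploy/health-check functions are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal sketch of a cell-by-cell rollout with a bake period and rollback.
# All names here (CELLS, deploy_to_cell, cell_is_healthy) are hypothetical.

import time

CELLS = ["ap-cell-1", "ap-cell-2", "ap-cell-3"]   # independent failure domains
BAKE_SECONDS = 600                                 # wait before the next cell

def deploy_to_cell(cell: str, version: str) -> None:
    print(f"deploying {version} to {cell}")        # stand-in for real deploy logic

def rollback_cell(cell: str) -> None:
    print(f"rolling back {cell}")

def cell_is_healthy(cell: str) -> bool:
    return True                                    # stand-in for real health probes

def rollout(version: str) -> None:
    for cell in CELLS:
        deploy_to_cell(cell, version)
        time.sleep(BAKE_SECONDS)                   # let metrics accumulate
        if not cell_is_healthy(cell):
            rollback_cell(cell)                    # stop the blast radius here
            raise RuntimeError(f"rollout halted: {cell} unhealthy")
```

The point is that a bad change stops at the first unhealthy cell instead of reaching every region at once, and clients can be steered (e.g. via BGP/anycast, as the comment describes) toward whichever cells remain healthy.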
> "The outage also hit servers that host procedures meant to overcome such an outage... Company officials had no paper copies of backup procedures, one of the people added, leaving them unable to respond until power was restored."
https://www.reuters.com/technology/space/power-failed-spacex...
So confidentiality was maintained but integrity and availability were not.
Yes, it's fucking expensive; that's one of the reasons you pay more for a VM (or colocation) there than at Hetzner or OVH. But I'm also pretty confident that a single fire wouldn't destroy all the hard drives in that IT space.
[1]: https://www.youtube.com/watch?v=tDacjrSCeq4
Yes, the servers still have some small batteries on their mainboards etc, but it's not too bad.
https://www.datacenterdynamics.com/en/news/ovhcloud-fire-rep...
Does G-Drive mean Google Drive, or "the drive you see as G:"?
If this is Google Drive, what they had locally were just pointers (for native Google Drive docs), or synchronized documents.
If this means the drive letter a network storage system was mapped to, this is a weird way of presenting the problem (I am typing on the black keyboard at the wooden table, so that you know).
But yeah it's a big problem in Korea right now, lots of important information just vanished, many are talking about it.
Sometimes things can seem to run smoothly for years when neglected... until they suddenly no longer run smoothly!
Electronically, everyone just receives a link to read the document.
https://www.nytimes.com/2025/09/13/world/asia/nepal-unrest-a... ("Many of the nation’s public records were destroyed in the arson strikes, complicating efforts to provide basic health care")
TL;DR: Estonia operates a Tier 4 (highest security) data center in Luxembourg with diplomatic immunity. It can actively run critical government services in real time, not just hold backups.
> The actual number of users is about 17% of all central government officials
Far from all, and they're not sure what's recoverable yet ("It's difficult to determine exactly what data has been lost.")
Which is not to say that it's not big news ("the damage to small business owners who have entered amounts to 12.6 billion Korean won"; the 'National Happiness Card', used for paying childcare fees, etc., is still 'non-functional'), but to put it a bit in perspective and not just "all was lost" as the original submission basically stated.
Quotes from https://www.chosun.com/english/national-en/2025/10/02/FPWGFS... as linked by u/layer8 elsewhere in this thread
I wish the same concept existed in Canada as well. You absolutely have to resubmit all your information every time you make a request. On top of that, federal government agencies still mail each other the information, so what could usually be done in a day takes a whole month to process, assuming the postal service isn't on strike (spoiler: they are now).
I think Canada is one of the worst countries in efficiency and useless bureaucracy among 1st world countries.
This is the state of banking in Canada. God forbid they just put a text box on the banking web app where I can put in my beneficiary.
Not to mention our entire health care system still runs on fax!
It blows my mind that we have some of the smartest and best-educated people in the world, with some of the highest GDP per capita in the world, and we cannot figure out how to get rid of paper documents. You should be issued a federal digital ID at birth, attested through a chain of trust back to the federal government. Everything related to the government should be tied back to that ID.
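For the "chain of trust back to the federal government" idea, a toy sketch with Ed25519 signatures (using the `cryptography` package) might look like this; the key hierarchy and record format are invented purely for illustration and do not describe any real national ID scheme.

```python
# Toy chain-of-trust: a federal root key signs an agency key, and the agency key
# signs a citizen ID record. The structure is invented purely for illustration.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

def raw(pub):  # serialize a public key to raw bytes for signing/verifying
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

federal_root = Ed25519PrivateKey.generate()
agency_key = Ed25519PrivateKey.generate()

# The federal root attests the agency's key; the agency attests the citizen record.
agency_cert = federal_root.sign(raw(agency_key.public_key()))
id_record = b"citizen:1234567890;issued:2025-10-05"
id_signature = agency_key.sign(id_record)

def verify_id(record, record_sig, agency_pub, agency_cert, root_pub) -> bool:
    try:
        root_pub.verify(agency_cert, raw(agency_pub))    # agency key is trusted
        agency_pub.verify(record_sig, record)            # record is untampered
        return True
    except InvalidSignature:
        return False

print(verify_id(id_record, id_signature, agency_key.public_key(),
                agency_cert, federal_root.public_key()))  # True
```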
Without an out-of-country backup, a reversion to previous statuses means the country is lost (Estonia has been occupied a lot). With it, much of the government can continue to function, as an expat government until freedom and independence is restored.
hmmmm
This attempt at putting it in perspective makes me wonder what would put it in perspective. "100M sets of harry potter novels" would be one step in the right direction, but nobody can imagine 100M of anything either. Something like "a million movies" wouldn't work because they are very different from text media in terms of how much information is in one, even if the bulk of the data is likely media. It's an interesting problem even if this article's attempt is so bad it's almost funny
Good article otherwise though, indeed a lot more detail than the OP. It should probably replace the submission. Edit: dang was 1 minute faster than me :)
This is why I don't really want to run my own cloud :)
Actually testing the backups is boring.
That said, once the flames are out, they might actually be able to recover some of it.
Idk if this sounds like I'm against backups, I'm not, I'm just surprised by the question
[0]: https://www.cnbc.com/2025/02/13/company-ripped-by-elon-musk-...
Buttons are JPEGs/GIFs, everything is on Java EE and vulnerable old web servers, etc. A lot of government stuff supports only Internet Explorer even though it's long dead.
Don't even get me started on ActiveX.
You're thinking of Taiwan, not South Korea.
https://m.blog.naver.com/gard7251/221339784832 (a random blog with gifs)
They also require routine testing of disaster recovery plans.
I participated in so many different programs over the years with those tests.
Tests that would roll operations over to facilities across the country.
Like, I use Google Drive for Desktop but it only downloads the files I access. If I don't touch a file for a few days it's removed from my local cache.
They very well might have only been saving to this storage system. It was probably mapped as a drive or shared folder on the PC.
Who has the incentive to do this, though? China/North Korea? Or someone in South Korea trying to cover up how bad they messed up? Does adding this additional mess on top mean they looked like they messed up less? (And for that to be true, how horrifically bad does the hack have to be?)
Not saying I believe this (or even know enough to have an opinion), but it’s always important to not anthropomorphize a large organization. The government isn’t one person (even in totalitarian societies) but an organization that contains large numbers of people who may all have their own motivations.
Alternate hypothesis: the cloud storage provider doing the hard sell. Hahaha :)
LG is a South Korean firm and the manufacturer of the hacked hardware, and also of the batteries that caught fire. Not sure it's a solid theory, just something I took note of while thinking the same thing.
Yeah, that's way less suspicious, thanks for clearing that up.
> 27th of September 2025, The fire is believed to have been caused while replacing Lithium-ion batteries. The batteries were manufactured by LG, the parent company of LG Uplus (the one that got hacked by the APT).
Could the battery firmware have been sabotaged by the hacker to start the fire?
But replacing a UPS is usually done under tight time pressures. The problem is, you can rarely de-energise UPS batteries before replacing them; you just need to be really careful when you do it.
Depending on the UPS, bus bars can be a motherfucker to get on, and if they touch while energised they tend to weld together.
With lead acid it's pretty bad (think molten metal and lots of acidic, toxic and explosive gas); with lithium, it's just fire. Lots of fire that is really, really hard to put out.
Obviously for rack-based UPSs you'd "just" take out the UPS, or the battery drawer, and do the replacement somewhere safer, or better yet, swap out the entire thing.
For more centralised UPSs that gets more difficult. The shitty old large UPSs were a bunch of cells bolted to a bus bar, and then onto the switchgear/concentrator.
For lithium, I would hope it's proper electrical connectors, but you can never really tell.
A Kakao datacenter fire took the de-facto national chat app offline not too many years ago. Imagine operating a service that was nearly ubiquitous in the state of California and not being able to survive one datacenter outage.
After reading the Phrack article, I don't know what to suspect: the typical IT disaster preparedness, or the operators turning off the fire suppression main and ordering everyone in the room to evacuate, giving a little UPS fire enough time to start going cabinet to cabinet.
Recently in the UK a major communications company had issues with batteries.
773 more comments available on Hacker News