2 in 3 Americans Think AI Will Cause Major Harm to Humans in the Next 20 Years [pdf]
Key topics
A recent survey reveals that 2 in 3 Americans believe AI will cause significant harm to humans within the next 20 years, with many pinpointing its potential impact on news and elections as a major concern. Commenters weighed in on this notion, with some arguing that AI-generated fake content could be catastrophic, while others countered that the threat is overstated. As the discussion unfolded, a secondary concern emerged: the environmental and socioeconomic consequences of AI data centers, which some predict will be job destroyers and drive up costs. The thread highlights a growing unease about AI's far-reaching implications, from information integrity to local communities.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 22m after posting
- Peak period: 147 comments in 0-6h
- Avg / period: 26.7
Based on 160 loaded comments
Key moments
- Story posted: Dec 28, 2025 at 11:53 AM EST (4d ago)
- First comment: Dec 28, 2025 at 12:15 PM EST (22m after posting)
- Peak activity: 147 comments in 0-6h (hottest window of the conversation)
- Latest activity: Dec 30, 2025 at 4:29 PM EST (2d ago)
This will funnel people into having deeper trust for their sources, and less trust of sources they don't know. The end result will be even greater control of people's information sphere by a few people who shape those trusted channels, separating people from reliable news and information about the world. This will be disastrous for democracy, as democracy depends on voters being able to make decisions on reliable true information.
I don't know if this will come to pass, but the above narrative seems highly probable based on what we have seen so far with social media, especially video-driven social media.
Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct. The first thing we should do if we achieve AGI is to take it apart to see how it works (to make it safe). As a bunch of curious monkeys, we'll do that anyway.
Well, we are giving them the ability to manipulate all aspects of a computer (aka giving them computer access), and we all know how that went (spoiler, or maybe not much of a spoiler for those who know: NOT GOOD).
For the uninitiated, Rob Pike goes nuclear over GenAI: https://news.ycombinator.com/item?id=46392115
and Rob Pike got spammed with an AI slop "act of kindness": https://news.ycombinator.com/item?id=46394867
AI absolutely is capable of doing damage, and _is_ currently doing damage. Perpetuating inequality, generating fake news, violation of privacy, questionable IP/rights, etc. These are more pressing than the idea that someday we will give AI the ability to manufacture nano-mosquitos that will poison us all, as Yudkowsky suggested on a recent podcast. He's so busy fantasizing about scifi he's lost touch with the damage it's currently doing.
The social and financial impacts of AI can hardly be overstated, and although one can go into the weeds of fascination and imagine what-ifs, largely speaking, we have to do something right now about the problems that are impacting us right now too.
I would love a discussion about what are some things which can be done at a societal level about these things.
100 local people to maintain the data center while it replaces 1 million people with the AIs running inside
https://marshallbrain.com/manna1
So, no, because said human capital is holding the shorter end of the stick and will be worse off.
In my opinion, compute-focused data centers are a good product, though. Offering up some GPU services might be good, but honestly I will tell you what happened (similar to another comment I wrote):
AI gave these data center companies tons of money (or they borrowed it), and then they bought GPUs from Nvidia and became GPU-centric (also AI-centric) to jump even harder on the hype.
That's the bad part. The core offering of data centers, to me, feels like it should be the normal form of compute (CPU, RAM, storage; as an example, YABS performance of the whole server) and not "just what GPU does it have".
Offering up some GPU on the side is perfectly reasonable to me if need be, where workloads might need some GPU, but overall, compute-oriented data centers seem nice.
Hetzner is a fan favourite now (which I deeply respect), and for good measure, I feel like their model is pretty understandable. They offer GPUs too, iirc, but you can just tell from their website that they love compute too.
Honestly, the same is true for most independent cloud providers. The only place where we see a complete saturation of AI-centric data centers is probably the American trifecta (Google, Azure and Amazon) and of course Nvidia, Oracle, etc.
Compute-oriented, small-to-indie data centers/racks are definitely pleasant, although that market has raced to the bottom; but, let's be really honest, that's because the real incentives for building software appear when VSCode forks make billions, so people (techies at least) usually question such a path, and non-techies usually just don't know how to sell and compete in online marketplaces.
https://www.climate.gov/media/14136
After digging into it a bit to find a better source for you, it turns out that my number was wrong anyway. Turns out the sea level rise for the contiguous US is expected to be quite a bit higher than the global average. I had no idea!
That said, I don't think they assume our emissions trend from the last 50 years will continue unabated.
And so on
I'm not an engineer, but it seems hard to imagine that a lack of data center capacity won't have an effect on prices for cloud compute, which will have downstream impact on what workstations have access to (especially since more and more programmers are becoming reliant on coding LLMs).
Suddenly adding 50 GW of power demand in a state is going to drive up costs significantly.
My point is, power generation infrastructure costs money, and currently it's being paid for by someone else.
During the microcomputer revolution, hackers scoffed at people who used terminals to access time sharing systems. You don't own it, you don't control it, you're just a cog in the machine. Now, "hackers" are rushing to run everything on hardware owned and operated by companies with wealth and power that make the old IBM look like a kid's lemonade stand by comparison.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair
I feel like people have a problem with AI-oriented data centers (which are becoming the majority of data centers, considering that data centers make a shit ton of money selling AI, aka shovels during a gold rush).
Another thing is that these data centers have very high levels of compute directly linked to the consumers of an application.
As an example: you have a simple app, some message gets pushed by a customer, or a database query, or simple usage. It's all good; at a data center level its power costs are minuscule.
Now compare that to data centers which have GPUs, so they have applications like ChatGPT (let's imagine) running on them, and these AI services are used by people directly.
Now, instead of simple applications and executions, perhaps trillion-parameter models are running. These are beyond computationally expensive even compared to normal applications.
I just searched, and Google's Gemini runs 1.5 BILLION such prompts per day and ChatGPT runs 2.5 BILLION prompts per day.
Now, these prompts aren't stable around the clock. I have heard they vary a lot, and when power consumption varies, it really impacts the performance of the grid itself.
Another aspect is the sheer size. One would imagine the AI bubble might give them more money, and it does, but the energy costs seem so high, and the AI bubble gives these companies tons of free money, which they "invest", aka buy or lease a lot of electricity through multi-year govt. contracts.
The govt. can only build so much generation capacity, and when that capacity (through lobbying and many other efforts) gets sold to data centers, it really strains the grid, which thus increases electricity rates (and, in a similar fashion, perhaps water too) for the average American.
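To put rough numbers on this, here is a back-of-envelope sketch in Python. The prompt counts are the ones cited above; the energy-per-prompt figure is an assumption (public estimates range widely, roughly 0.3-3 Wh), and this ignores training, cooling overhead (PUE), and the peak-vs-average variation the comment describes:

```python
# Back-of-envelope: average grid power implied by LLM inference volume.
# Prompt counts are from the comment above; Wh/prompt is an assumption.

PROMPTS_PER_DAY = 2.5e9 + 1.5e9   # ChatGPT + Gemini, per the comment
WH_PER_PROMPT = 1.0               # assumed average energy per prompt (Wh)

daily_energy_wh = PROMPTS_PER_DAY * WH_PER_PROMPT
avg_power_w = daily_energy_wh / 24            # Wh per day -> average watts
print(f"Average draw: {avg_power_w / 1e9:.2f} GW")   # ~0.17 GW at 1 Wh/prompt
```

Even with these generous simplifications, inference alone works out to a sizable fraction of a gigawatt of continuous draw; training runs and cooling overhead push the real figure well above that.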
TLDR, the way I read it: compute is cheap. There is always going to be refurbished old compute which is going to be too "old" for these guys to use (3-5 years, because of depreciation, but that hardware is a beast).
Nothing stops a simple guy who loves tech to open a mini datacenter perhaps :)
Who knows what might happen. I was extremely pessimistic about the data centers, not for these reasons, but because RAM prices were rising and I was worried the whole industry might increase compute prices too; but it seems Asus is opening up their RAM production for consumers, so starting up data centers is possible.
Let's see what happens, though. I was worried a bit, same as you, but I feel like compute prices themselves are pretty chill and can remain chill. I understand the worries though, so I'm looking forward to a discussion about it.
It's much less popular in the USA and EU, but that's nice since it gives the developing world a chance to catch up.
Because the technology is so fast, efficient and easy to run locally themselves? Or because currently there are remote APIs/UIs that are heavily subsidized by VC money yet the companies behind them are yet to be profitable?
I agree that giving the developing world any ladders for catching up is a great thing, but I'm not sure this is that; it just happens that companies don't care about profit (yet), so things appear "free or affordable" to them, and when it's time to make things realistic, we'll see how accessible it'll still be.
Probably in the end it'll be profitable for the companies somehow, but exactly how, or what the exact prices will be, I don't think anyone knows at this point. That's why I'm reserving my "developing countries can now affordably use AI too" for when that's reality, not based on guesses and assumptions.
But again, it's not a guess or assumption - you can run the latest DeepSeek model renting GPUs from a cloud provider, and it works, and it's affordable.
There are two (three, technically) ways that AI can be used.
1. Renting GPU instances per minute (you mention Google Cloud), but I feel like some other providers can be cheaper too, since new companies are usually cheaper. The low-end hosting of AI nowadays is usually via a marketplace-like thing (Vast, RunPod, TensorDock).
Vast now offers serverless per-minute AI models, so checking it for something like https://vast.ai/model/deepseek-v3.2-exp (or even GLM 4.6), basically every one of these turns out to be $0.30 per minute, or $18 per hour.
As an example, GLM 4.6 (now 4.7) has a YEARLY pricing of around 30 bucks iirc, so compare the immense difference in pricing.
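Taking the comment's own numbers at face value (they are not verified against current price lists), a quick sanity check of the gap between the two pricing models:

```python
# Rough comparison of the two pricing models mentioned above.
# All figures come from the comment itself, not from current price lists.

rental_per_minute = 0.30                    # $/min for a serverless GPU instance
rental_per_hour = rental_per_minute * 60    # = $18/hour

subscription_per_year = 30.0                # yearly plan price cited for GLM

# Hours of dedicated GPU time one year's subscription would buy at rental rates:
breakeven_hours = subscription_per_year / rental_per_hour
print(f"${subscription_per_year:.0f}/yr buys ~{breakeven_hours:.1f} h of rented GPU time")
# -> about 1.7 hours; the subscription only works because many users share
#    (and batch onto) the same hardware.
```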
2. Using something like OpenRouter-based pricing: then we are basically on the same pricing model as Google Cloud.
Of course AI models are reaching the frontier and I am cheering for them, but I feel like long term (and even short term) these are still pretty expensive (even something like OpenRouter, imo).
Someone please do genuine maths about this; I can be wrong, I usually am, but I expect a 2-3x price increase (on the conservative side) if things aren't subsidized.
These are probably tens of billions of dollars worth of GPUs, so I assume they would be barely profitable at current rates, but they generate, in some cases, hundreds of billions worth of token generation, so they can probably work via the third use case I mention.
Now coming to the third point, which I assume is related to the second/first: the companies providing this GPU compute can usually make money via large long-term contracts.
Even Hugging Face provides consulting services, which I think is their biggest source of profit, and another big contender can probably be European GPU compute providers, who can offer a layer of safety or privacy for EU companies.
Now, it looks like I had to go to Reddit to find some more info (https://www.reddit.com/r/LocalLLaMA/comments/1msqr0y/basical...); checking appenz's comment, the relevant parts of which I'll add here:
The large labs (OpenAI, Anthropic) and Hyperscalers (Google, Meta) currently are not trying to be profitable with AI as they are trying to capture market share. They may not even try to have positive gross margins, although the massive scale limits how much they can use per inference operation.
Pure inference hosters (Together, Fireworks etc.) have less capital and are probably close to zero gross margins.
There are a few things that make all of this more complicated to account for. How do you depreciate GPUs (I have seen 3 years to 8 years), how do you allocate cost if you do inference during the day and train at night etc.
The challenge with doing this yourself is that the market is extremely competitive. You need massive scale (as parallelism massively reduces cost), you need to be very good in negotiating cheap compute capacity and you need to be cost-effective in your G2M.
Opinions are my own, and none of this is based on non-public information.
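The depreciation point in the quote is worth making concrete, since the chosen schedule swings the unit economics. A minimal sketch, with an assumed card price and utilization (both illustrative, not from the thread):

```python
# How the assumed depreciation schedule changes GPU capital cost per hour.
# The $30k price and 70% utilization are illustrative assumptions.

GPU_PRICE = 30_000.0      # assumed acquisition cost of a datacenter GPU ($)
UTILIZATION = 0.70        # assumed fraction of hours actually billed

for years in (3, 5, 8):   # range of schedules mentioned in the quote
    billable_hours = years * 365 * 24 * UTILIZATION
    print(f"{years}-year depreciation: ${GPU_PRICE / billable_hours:.2f}/hour")

# 3 years -> ~$1.63/h, 8 years -> ~$0.61/h: a ~2.7x swing in capital cost,
# easily enough to flip a thin-margin inference business between profit and loss.
```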
So basically all of these are probably running at zero or net-negative margins, they require billions of dollars to be spent, and virtually there isn't any moat/lock-in (and neither does there have to be).
TLDR: no company right now is sustainable
The only use case I can see is probably consulting, but that will go the way described in https://www.investopedia.com/why-ai-companies-struggle-finan...
So I guess the only reasonable business, it feels to me, is private AI for large businesses who genuinely need it for their business (once again the MIT study applies), but that usually wouldn't apply to us normal-grade consumers anyway; it would be really expensive, but still private, and so far off from us normal people.
TLDR: The only ones making money are, or are gonna be, B2B, but even those are gonna dwindle if the AI bubble bursts, because imagine a large business trying to explain why it's gonna use AI when 1) the MIT study shows it's unprofitable and 2) there's the fear around using AI, etc., plus all the financial consequences that the bubble's explosion might cause.
So, all that being said, I doubt it. I think these prices only last as long as the bubble does, and the bubble is only as strong as its weakest link, which right now is OpenAI: trillions promised, a net loss-making company whose CEO said the AI market is in a bubble and whose CFO openly floats the idea that OpenAI should be bailed out by the US govt if need be.
So yeah... Honestly, even local-grade GPUs are expensive, but with the innovations in open-weights models, I feel like they would be the way to go, with 90% of basic use cases running on them. And there are probably very few cases of moat (and I doubt the moat exists in the first place).
If the writer of the grandparent comment (the person who wrote about the secretary in the Philippines) is reading this, I would love it if you could do me a simple task: instead of having them use the SOTA models for the stuff they're using AI for right now, have them use an open-source model (even a tiny-to-mid model) and see what happens.
> "My assistant in the phillipines has used it to substantially improve her communications, for instance."
So if they are using it for communications, honestly even a small-to-mid model would be good for them.
Please let me know how this experiment goes. I might write about it; it's just plain curiosity on my part. But I would honestly be 99% certain that the differences would be so negligible that using SOTA or remotely hosted AI data center models won't make much sense. Of course, we can say nothing without empirical evidence, which is why I'm asking you to test my hypothesis.
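For anyone who wants to actually run this experiment, here is a minimal sketch using the Ollama Python client. The model name and the sample task are illustrative assumptions, and it presumes the Ollama daemon is running with a small open-weights model already pulled:

```python
# Sketch of the proposed experiment: run an everyday communication task
# through a small local model, then compare against the hosted SOTA model.
# Assumes `ollama pull llama3.2` (a ~3B model) has been run beforehand.
import ollama

draft = "pls resched the tuesday mtg to thurs 2pm, tell the client sorry"

response = ollama.chat(
    model="llama3.2",  # illustrative small open-weights model
    messages=[
        {"role": "system", "content": "Rewrite the draft as a short, polite, professional email."},
        {"role": "user", "content": draft},
    ],
)
print(response["message"]["content"])
```

Run the same draft through the hosted model you currently pay for and diff the two outputs; for routine communication tasks, that side-by-side is the hypothesis test.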
I'm not, since I'm a heavy user of local models myself, and even with the beast of a card I work with locally daily (RTX Pro 6000), the LLMs you can run locally are basically toy models compared to the hosted ones. I think, if you haven't already, you need to give it a try yourself to see the difference. I didn't mention or address it, as it's basically irrelevant because of the context.
And besides that, how affordable are GPUs today in the developing world? Electricity costs? How do you deal with thermals? Frequent blackouts? And so on; there are many variables you seemingly haven't considered yet.
The best way to measure the difference between hosted models and local models is to run your own private benchmarks against both of them and compare. I've been doing this for years, and local models are still nowhere near the hosted ones, sadly. I'm eager for the day to come, but it will still take a while.
Same here, otherwise I wouldn't be investing in local hardware :) But I'd be lying if I said I think it's ready for that today. I don't think the hardware has as much to catch up with; it's the software that has a bunch of low-hanging fruit available for performance and resource usage, since every release seems to favor "time to paper" above all else.
I'm not saying it's completely useless, or that it won't be better in the future. What I am saying is that even the top "weights available" models today really don't come close to today's SOTA. This is very clear when you have benchmarks that give hard, concrete numbers which aren't influenced by public benchmarking data.
This is the statement that I'm disagreeing with. They do come close; even if they are somehow less, it is a fixed distance away, and the hosted models aren't more than a magnitude better. Hosted models are still better, just not incredibly so.
I was just proposing that local feels like the most sustainable way for things to go. Perhaps even an OpenRouter-like API works, but you can read my other comment on how I found their finances to be loss-making or zero-profit, so it's good while it lasts (i.e., while the AI bubble does) if one needs it, but long term I feel like its prices are gonna rise, whereas local would remain stable. (Also worth mentioning that there is no free lunch, so I think the losses would be distributed to everybody in the form of a financial crisis caused by AI. I hope the impact of that crisis lessens, because at this point I am genuinely more worried about the crisis itself.)
Agreed. I myself understand that right now, using these services sponsored by AI-bubble fuel money might make sense (look at my other comments, where I went down the rabbit hole on how most companies are losing money or making zero profit while investing billions).
Although these aren't sustainable, the one idea where it makes sense is that we transition to local models (which, yes, I know are inefficient); that's the inevitability, in my opinion, if the bubble bursts. But there are definitely some steal deals nowadays if one wishes.
Also, you may have understood me wrong in this comment (if so, my apologies), in the sense that what I was describing was the secretary use-case, not a company using AI or selling AI-related services that need 24x7 access.
One wouldn't have to worry about blackouts, because if your secretary's house is blacked out, let's just be honest, AI won't turn the magic lights on.
Also, the machines in our devices are beasts. I am pretty sure that for basic communication tasks, as the grandparent comment suggested, even the "toy" models, as you say, would very likely be "good enough".
This is what I was trying to say actually, thanks for responding.
That being said, the original point about Americans/Europeans does become a bit moot after this discovery, because the fact is, I don't think most people are against small models, but rather against the SOTA models run in AI-centric data centers, which they hate because those actively act as a tax on them, increasing electricity rates etc. while taking work away from them.
A tiny model, on the other hand, doesn't really do any of the above. I definitely feel the concerns of the American people about AI data centers are valid, though, so I hope something can be done about it in a timely and helpful manner that brings real help to the average American.
You can also introduce ads to the model.
Amazon eventually started selling Amazon versions of popular cheap items, I can see the GenAI platforms doing the same.
Meme bots for example.
Is there any reason to believe AI will be any better than social media when it comes to mental health?
https://www.washingtonpost.com/technology/2025/12/27/chatgpt...
They predict what the most likely next word is.
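That one-line description can be made concrete with a toy sketch: made-up scores (logits) over a four-word vocabulary, turned into probabilities with a softmax, with the top candidate chosen as the "next word". Real models do the same thing over vocabularies of ~100k tokens with learned scores, but the mechanics are the same:

```python
# Toy illustration of "predicting the most likely next word".
# The vocabulary and logits are invented for the example.
import numpy as np

vocab = ["paris", "london", "pizza", "blue"]
logits = np.array([4.1, 2.3, 0.2, -1.0])   # toy scores for "The capital of France is ..."

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax: scores -> probabilities

for word, p in zip(vocab, probs):
    print(f"{word:7s} {p:.3f}")
print("next word:", vocab[int(np.argmax(probs))])  # greedy pick -> "paris"
```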
With that said, like most technology, it seems to come with a ton of drawbacks and some benefits. While most people focus on the benefits, we're surely about to find out all the drawbacks shortly. Better than social media or not, it's being deployed at wide scale, so it's less about what each person believes and more about what we're ready to deal with and how we want to deal with it.
There are currently no realistic ways to temper these companies or enforce public safety on them. They have achieved full regulatory capture. Any call for public safety will be set aside, and if it's not, someone will pay the exec to grant an exception.
There is: general strikes usually do the trick if the government stops listening to the people. Of course, this doesn't apply to some countries that spent decades making unions, syndicates and other movements handicapped, but for the modern countries that still pride themselves on democracy, it is possible, given enough people care to do something about it.
Even when unemployment rises to ~15%
It doesn't matter what government is in control: LLMs cannot be made safe from the problems that plague them. Those problems are fundamental to their basic makeup.
The "if" is very much on the table at this stage of the political discussion. Companies are trying to railroad everybody past this decision stage by moving too fast. However, this is a momemt where we need to slow down instead and have a good long ponderous moment hinjing about whether we should allow it at all. And as the peoples of our respective countries, we can force that.
Besides general strikes, there isn't much one can do to stop, pause or otherwise hold back companies and individuals from deploying legal technology any way they see fit, for better or worse.
Right now, companies are working extremely hard to give the impression that AI technology is essential. But that is a purposefully manufactured illusion. It's a set of ideas planted in people's heads. Marketing in those megacompanies that introduce new technologies like LLMs and AR glasses to end users is very much focused on reshaping society around their product. They think BIG. We need more awareness that this is happening so that we can push back in a coordinated and meaningful way. And then we can support politicians that implement that agenda.
With nuclear weapons, human cloning, chemical weapons, and ozone destruction.
All of these are highly centralized, controlled via big government-scale operations.
How do you propose doing this with GPT LLM tech that has been open sourced/weights and decentralized?
Name a single technology that was invented, where people figured out the drawbacks were bigger than the benefits, and then humanity just stopped caring about it altogether. Not even the technology with the biggest drawback we've created so far (it can literally make the earth inhospitable if deployed at scale) has apparently been important enough for that, so I'm eager to hear what specific cats have been put back in what hats, if you'd entertain me.
We simply don’t know the scale of either side of the equation at this point.
From what I can gather, a lot of ML people are utilitarians, for better or worse.
And I don't think any utilitarian would be against "something with some harm but mostly good can be made to do even less harm".
But the person I initially replied to obviously doesn't see it as "some harm vs. much good" (which we could argue about either way), and would say that with any harm plus any good, it is still worth weighing whether the harm is justified by the good it could do.
That's certainly the impression you gave with your response. You didn't engage with the clear example of harmful behavior or debate what various numbers on either side of the equation would mean. Your approach was to circumvent OP's direct question to talk philosophy. That suggests you think there are some philosophical outlooks that could similarly sidestep the question of a therapist pushing people to kill themselves, which is a rather simple and unambiguous example of harm that could be reduced.
Sounds more like the plot line of a 60s sci-fi novel or old Star Trek episode. What would the prime directive say? Haha
Like all tools, we regulate heroin, and we should regulate AI in a way that attempts to maximize the utility it provides to society.
Additionally, with things like state lottery systems, we have decided to regulate certain things in such a way that the profits are distributed to society, rather than letting a few rent-seekers exploit the intrinsically addictive nature of the human mind to the detriment of all of society.
We should consider these things when developing regulations around technology like AI.
I'm not even a utilitarian, but if there are many many people with stories like her, at some point you have to consider it. Some people use cars to kill themselves, but cars help and enrich the lives of 99.99% of people who use them.
They are mostly useful, but occasionally can kill someone who indulges in them too much.
Then the question becomes more if we're fine with some people dying because some other people aren't.
But AFAIK, we don't know (and probably can never know) the exact ratio of people AI has helped stay alive today vs. people it helped push toward not being alive today, which makes the whole thing a bit moot.
In fact, targeted research with this data could help do more research on how to convince more people to stay alive, right?
Look at how much is being invested in this garbage then look at the excuses when they tell us we can't have universal medicare, free school lunches, or universal childcare.
Concerned that AI companies, like social media companies, exist in an unregulated state that is likely to be detrimental to most minors and many adults? Absolutely.
How can this piece then be published without at least trying to uncover these numbers?
there are different degrees of responsibility (and accountability) for everyone involved. some are smaller, some are larger. but everyone shares some responsibility, even if it is infinitesimally small.
I don't have any good answer myself, but eager to hear what others think.
> TCP and HTTP protocols were primarily developed with funding and support from government agencies, particularly the U.S. Department of Defense and organizations like ARPA, rather than by non-profit entities. These protocols were created to facilitate communication across different computer networks
So um... yea?
So say the people who specified, implemented and deployed TCP and HTTP, should they be held responsible for aiding transmission of child pornography across international borders, for example?
I was just pointing out that information because I had thought HTTP was created by non-profits or similar, but it was HTML that was created at CERN.
That being said, coming to the point: I think that no, this shouldn't be the case for the people who specified TCP/HTTP.
But I also feel like an AI researcher and the people who specified TCP are in different categories, because AI researchers work directly for AI companies whose products are then used, so in a way their company is facilitating that, partially due to their help.
On the other hand, the people who specified open protocols have no relation whatsoever comparable to the AI company model, perhaps.
I am not sure; there is definitely nuance here, but I would definitely consider AI researchers to be more responsible than, as an example, the people who created the specification of TCP/HTTP.
Tools are tools. It is what we make of them that matters. AI on its own has no intentions, but questions like these feed into the belief that there is already an AGI with an agenda of its own, waiting to build terminators.
“All that is human must retrograde if it does not advance.” -Edward Gibbon
However, I don't think we're going to have to wait 11,000 years for it
[0] https://en.wikipedia.org/wiki/Dune_(franchise)#Butlerian_Jih...
When you ask an AI like ChatGPT a question, what is it actually doing?
Survey of 2,301 American adults (August 1-6, 2025)
- Looking up the exact answer in a database: 45%
- Predicting what words come next based on learned patterns: 28%
- Running a script full of prewritten chat responses: 21%
- Having a human in the background write an answer: 6%
Source: Searchlight Institute
most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm
Google, Meta, and the rest of big tech have proven they should never be trusted.
Sounds like over regulation to many. But it is pretty clear companies and developers have failed. So maybe strict regulation is needed.
The only reason we have any regulations and safety standards for cars is because of one person leading the charge: Ralph Nader. You know what companies like Ford, GM, and Chrysler tried to do after he released "Unsafe at Any Speed"? Smear his name in a public campaign that backfired.
Car companies had to be dragged kicking and screaming to include basic features like seatbelts, airbags, and crumple zones.
Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?
I don't understand how PFAS [1] work, but I know I don't want them in my drinking water.
[1] https://www.niehs.nih.gov/health/topics/agents/pfc
Because otherwise you might not actually be attributing the harm you're seeing to the right thing. Lots of people in the US think current problems are left/right or socialist/authoritarian, while it's obviously a class issue. But if you're unable to take a step back and see things clearly, you'll misattribute the reasons why you're suffering.
I walked around on this earth for decades thinking Teflon is a really harmful material, until this year, for some reason, I learned that Teflon is actually a very inert polymer that doesn't react with anything in our bodies. I've avoided Teflon pans and such just because of my misunderstanding of whether this thing is dangerous to my body or not. Sure, this is a relatively trivial example, but I'm sure your imagination can see how this concept has broader implications.