Y'all Are Over-Complicating These AI-Risk Arguments
Posted 3 months ago · Active 3 months ago
dynomight.net · Tech · story
Key topics
AI Risk
Artificial Intelligence
Future of Work
The article simplifies AI risk arguments by comparing AI to 'aliens with 300 IQ', sparking a discussion on the validity and implications of this analogy.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 17m after posting
Peak period: 70 comments in 0-6h
Average per period: 11.1 comments
Based on 100 loaded comments
Key moments
1. Story posted: Oct 2, 2025 at 12:37 PM EDT (3 months ago)
2. First comment: Oct 2, 2025 at 12:54 PM EDT (17m after posting)
3. Peak activity: 70 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Oct 6, 2025 at 5:13 AM EDT (3 months ago)
ID: 45451971 · Type: story · Last synced: 11/20/2025, 6:56:52 PM
That's literally what our brains are, so I'm not sure what argument you're actually trying to make.
The training process forces this outcome. By necessity, LLMs converge onto something of a similar shape to a human mind. LLMs use the same type of "abstract thinking" as humans do, and even their failures are amusingly humanlike.
Without doubt, LLMs know more than any human, and can act faster. They will soon be smarter than any human. Why does it have to be the same as a human brain? That is irrelevant.
Edit: No, OK, I get you. Ensemble learning is a thing, of course. Maybe I and the other poster reasoned too much from AI == a single model, but of course you combine them these days, which gets you closer to humanlike levels of guessing. (Not nearly enough models for that now, of course.)
There are two separate conversations, one about capabilities and one about what happens assuming a certain capability threshold is met. They are p(A) and p(B|A).
I myself don't fully buy the idea that you can just naively extrapolate, but mixing the two isn't good thinking.
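To make that concrete, here's a minimal sketch of the decomposition in code; all of the probabilities are purely illustrative assumptions, not numbers from the article or the thread.

```python
# Splitting an AI-risk estimate into the two separate questions described
# above. The numbers are illustrative assumptions only.

p_capable = 0.10            # p(A): chance AI reaches the capability threshold at all
p_bad_given_capable = 0.20  # p(B|A): chance of catastrophe *given* that threshold is met

# Capability arguments move p_capable; "what happens next" arguments move
# p_bad_given_capable. Mixing the two conflates different debates.
p_catastrophe = p_capable * p_bad_given_capable
print(f"p(catastrophe) = {p_catastrophe:.2%}")  # 2.00% under these assumptions
```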
Do you think there's at least a 1% chance that AI will get this smart in the next 30 years? If so, surely applying this allegory helps you think about the possible consequences.
In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive. Really, that might actually happen.
If you want to argue against that point, feel free. But to ignore that is to be unnecessarily dismissive.
My primary argument is human nature: If you give people the lazy way to accomplish a goal, they will do it.
No amount of begging college students to use it wisely is going to convince them. No amount of begging corporate executives to use it wisely is going to convince them. No amount of begging governments to use it wisely is going to convince them. When this inevitably backfires with college students who know nothing, corporate leaders who have the worst code in history, and governments who spent billions on a cat video generator, only then will they learn.
A 300 IQ AI is close to a worst possible scenario, especially if it's a fast takeoff. Humans, being lazy, will turn over everything to it, and the AI will likely handle it very well for some time. As long as the AI decides to keep us around as pets we'll probably be fine, but the moment it needs some extra space for solar panels we will find ourselves in trouble.
There is a small set of situations where it is invaluable, but it's going to get misused and embedded in places where it causes subtle damage for years, and then it'll cost a lot to fix.
The problem then isn't really the AI, the robots are morally and ethically neutral. It's the humans that control them who are the real risk.
The issue talked about here looks similar but is different.
That is the AI that is not subservient (or only fakes being subservient) and has its own motivations. The fact that it has a 300 IQ means you may very well not understand that harm is occurring until it's far too late.
>The problem then isn't really the AI, the robots are morally and ethically neutral.
Again, no. This isn't about AI as a sub-agent. This is about AI becoming an agent itself capable of self learning and long term planning. No human controls them (or they have the false belief they control them).
Both problems are very harmful, but they are different issues.
I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs. And most people who were worried 2 years ago are much less worried.
And a better scenario: aliens with an IQ of 300 are coming, and they will all be controlled by the [US|Russian|Israeli|Hamas|Al-Qaeda|Chinese] government.
Edit: To be clear, I was referring to people I personally know. Sure, lots of people out there are terrified of lots of things - religious fanaticism, fluoride in the water, AI apocalypse.
And "huge economic disruption" is not "AI taking over humanity". I'm interpreting the article's take on AI doing damage as one where the AI is in control, and no human can stop it. Currently, for each LLM out there, there are humans controlling it.
Here's Sam Altman, Geoffrey Hinton, Yoshua Bengio, Bill Gates, Vitalik Buterin, Demis Hassabis, Ilya Sutskever, Peter Norvig, Ian Goodfellow, and Rob Pike:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
https://en.wikipedia.org/wiki/Statement_on_AI_Risk
And this is just to highlight that there are clearly many familiar people expressing "worry about AIs taking over humanity" as per GP.
There are much more in depth explanations from many of these people.
What's actually amusing is skeptics complaining about the financial incentives of the people warning about dangers, as opposed to the trillion-dollar AI industry: Google, Meta, Nvidia, Microsoft, all of VC, trying to bring it about. Honestly, the money is lopsided in the other direction. Reminds me of climate change and all the "those people are just in the renewable energy industry lobby" talk...
I actually agree that mitigation of AI risk should be studied and pursued. That's different from thinking the AIs will take over.
Most of the worries I've heard from Geoff (and admittedly it was in 1-2 interviews) are about how AI will impact the economic workforce, and how the change may be so disruptive as to completely change our way of living, and that we are not prepared for it. That's much milder than "AI taking over humanity". And it's definitely not any of the following:
> Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.
> These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.
The economic damage will not be due to AI, but due to the humans controlling it (OpenAI, Anthropic, etc), and due to capitalism and bad actors.
Even in the interview I heard from Geoff, he admitted that the probability he assigns to his fears coming true is entirely subjective. He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."
Finally, that statement was in 2023. It's been 2 years. While in many ways AI has become much better, it has mostly only become better in the same ways. I wonder how worried those people are now.
To be clear, I'm not saying I think AI won't be a significant change, and it may well make things much worse. But "AI taking over humans"? Not seeing it from the current progress.
Yes.
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman
He's had more recent statements along these lines. But personally, I believe his fault is that he thinks careening towards this is inevitable, and that the best thing to do, given the wildly diverging likely outcomes, is just to hope the emerging intelligence will come up with the alignment itself.
On Hinton: "I actually think the risk is more than 50%, of the existential threat."
https://www.reddit.com/r/singularity/comments/1dslspe/geoffr...
As for Hinton:
> He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."
I'm not claiming he's not worried about it. I'm providing context on how he came up with his percentage.
Only about .0003 of all public discussion of AI catastrophic risk over those 20 years has invoked or referred to Roko's basilisk in any way.
I don't know of anyone worried about AI who is worried mainly because of the basilisk.
Next you'll mention Pascal's Mugging, which likewise is the main worry of exactly zero of the sane people worried about AI -- and (despite the representations of at least one past comment on this site) was never even a component of any argument for the dangerousness of continued "progress" in AI.
I was specifically pointing out how absurd the most ridiculous people in that category are
The existence of crazy/anxious people in the world is well established, and not in dispute.
It's far more likely we'll develop some lousy AI and then put it in charge of something critical. Either national infrastructure like electricity or nuclear weapons. That lousy AI then produces some lousy outcome, like deciding the only way to stabilize the grid is to disable all electrical production.
The biggest threat to humanity is our own decisions.
The vast majority of us don't have any input on those decisions, unfortunately
Believe me, if I had even a modicum of influence to leverage anywhere, I would be fighting tooth and nail against AI as much as I can
Instead I just growl impotently in comment threads online like this one and hope this new technology finds some kind of equilibrium before it fucks us all over
But it's really tough to feel helpless watching this massive capital machine just grinding over society, knowing how much it is fucking things up
Maybe, MAYBE AI turns into a wonderful technology that ushers in a post scarcity utopia but I can't help but feel we're in for a few years, maybe even a few decades of extreme pain before that comes about
I'm almost 40. I don't particularly want to live through the back half of my life in the pain that is coming
People are not thinking about the trend in AI. How good were AIs three years ago? How good are they now? How good will they be in another three years?
Kelsey Piper's analogy is good. How much smarter is a teacher than a room full of kindergartners? Who gets whom to do their bidding?
Geoff Hinton has been warning about that since he quit Google in 2023. Yoshua Bengio has talked about it, saying we should be concerned in the next 5-25 years. Multiple Congresspeople from both parties have mentioned the risk of "loss of control".
We all live in our bubbles. In my bubble, people find it more interesting to talk about the bigger picture than about their job.
In fact I think it is likely to happen absent some drastic curtailing of the freedoms of the AI labs, e.g., a government-enforced ban on all training of very large models and a ban on publication and discussion of algorithmic improvements.
"Many of the A.I. world’s biggest names — including Shane Legg, a co-founder of Google’s DeepMind; Anthropic’s chief executive, Dario Amodei; and Paul Christiano, a former OpenAI researcher who now leads safety work at the U.S. Center for A.I. Standards and Innovation — have been influenced by Rationalist philosophy. Elon Musk, who runs his own A.I. company, said many of the community’s ideas aligned with his own.
"Mr. Musk met his former partner, the pop star Grimes, after they made the same cheeky reference to a Rationalist belief called Roko’s Basilisk."
https://www.nytimes.com/2025/08/04/technology/rationalists-a...
Ahhh, unagi.
I mean, in that case 9/10ths of humanity is likely dead too. The 20th century broke the rather anti-fragile setup that humanity had and set up a situation where our daily living requires working transportation networks for almost everything.
Humanity already is integrating these into systems where they cannot be easily terminated. Think infrastructure and military systems.
And in this case we're talking about a system that's smarter than you. It will become part of vital systems like electricity and distribution, where deciding to shut it off means trading off how much of your economy, and how many of the people in it, you're going to kill.
And that's not even taking future miniaturization where we could end up with ASI on small/portable devices.
Don't expect an ASI to go down as easily as 4o did.
AI is riskier in a lot of ways than that, so it doesn't scan to me as a good thought experiment.
There are only so many base models to date, right? With limited and somewhat ambiguous utility, and no real reason to impute intention to them.
Still, in the short time since they’ve arrived, their existence has inspired the people with money and power to geopolitical jousting, massive financial investment, and spectacular industrial enterprise—a nuclear renaissance and a “network of data centers the size of Manhattan” if I remember correctly?
The models might well turn out to be just, you know—30 kinda alien but basically banal digital humanoids, with a meaningful edge on only a few dimensions of human endeavor—summarization, persuasion, retrieval, sheer volume of output.
Dynomight’s metaphor seems to me like a useful way to think about how a lot of the dangers lie in the higher-order effects: the human behavior the technology enables, rather than the LLM itself exercising comprehensive intelligence or agency or villainy or whatever.
You fast forward 10 years and find that your new laptop is Alienware. Because, it turns out, the super smart aliens are damn good at both running businesses and designing semiconductors, and, after a series of events, the aliens run Dell. They have their own Alien Silicon (trademarked), and they're crushing their competitors on price-performance.
And that's not the weirdest thing that could have happened. Corporate alien techbros are weird enough, but they could have gotten themselves involved in world politics instead!
Not saying AI safety issues won't happen, but I just think we have far bigger fish to fry. To me, AI power consumption is more worrisome than safety per se.
Worrying about something at X% probability does not mean spending X% to fix it. You might need to spend more, or you might need to spend less. Seat belts are a good example: a cheap (if partial) solution to a dangerous problem.
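A toy expected-value version of the seat-belt point, with entirely made-up numbers: the right spend depends on the mitigation's cost and effectiveness, not on matching the risk percentage.

```python
# Made-up numbers: a cheap, partial mitigation can be worth buying even
# though its cost bears no relation to the risk percentage itself.

p_risk = 0.01           # 1% chance of a bad outcome
loss = 1_000_000        # cost if it happens
mitigation_cost = 500   # the "seat belt": cheap and only partial
risk_reduction = 0.5    # it halves the probability of the bad outcome

expected_savings = p_risk * risk_reduction * loss  # 5,000
print(expected_savings > mitigation_cost)          # True: well worth it
```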
The reason is that climate change is simply not an extinction risk.
It has a considerable death and suffering potential - but nowhere near the ridiculous lethality of "we fucked up and now there's a brand new nonhuman civilization of weird immaterial beings rising to power right here on Earth".
If climate change were the biggest risk humanity was facing, things would be pretty chill. Unfortunately, it isn't.
Or someone gave an agent insane levels of permissions to use a tool that impacts the physical world, and the agent started pressing dangerous buttons during a reasoning loop (not because it has intent to kill humans)
There are a bunch of mundane AI Safety risks that don't have to do with robots taking over.
Now, an AI that can play the game of human politics and always win, the way a skilled human can always win against the dumb AI bots in Civilization V? There is no upper bound on how bad that can go.
Humans can be reckless, but the scale is so unbelievably small that it's not a big problem.
Like, you can't accidentally do the Holocaust. Because that requires deliberate coordination of millions of people.
Can AI accidentally do the Holocaust? Yeah, probably. It's software; it can duplicate itself infinitely.
How severely can the mass delusions and magical thinking around AI mix with other regressive social trends? I do not relish the mass effect of people deferring their decision making to glorified Magic 8 Ball devices, treating them as oracles rather than syntactic lava lamps. Will it lay waste to a generation of education, thinking, science, and policy so that society itself acts out some kind of Therac-25 or Bhopal incident at scale?
> lets open the can of worms that is "IQ"
Like...is this a bit? I'm missing a joke, right?
Sidenote: Personally I don't like that you're using > ... with text that does not actually appear in the article.
Alien invasion is linked to mass slaughter in human culture, while aliens are non-human creatures with some monster-like qualities.
The author takes all that symbolic load and adds it to something completely unrelated. That's unconvincing as an argument.
Would you stake the entirety of humankind on that "might"?
There might be real risk with AI, but taking the symbolism of an event that never happened does not help with understanding it.
If you want a more similar example: what if I told you humans had the power to destroy the entire planet and have given that power to popularly elected politicians? That's pretty alarming. Now that's something to compare to AI (in my opinion AI is less risky).
The part of the argument that people disagree with is what we should do about that, and there it can actually matter what numbers you put on the likelihoods of different outcomes.
It's the conclusion "We should dump trillions of dollars into AI research, something something, less risk" that people disagree with. Not the premise.
Literally this thread shows that there are many people who refuse to accept the premise of any risk.
300 IQ in a vacuum gets you nothing. You need some type of status/power/influence in the world to have impact.
I think the previous "world record" holder for IQ is actually just a pretty normal guy: https://en.wikipedia.org/wiki/Christopher_Langan.
Just because AI is/can be super intelligent ("300 IQ"), doesn't mean it can impact or change the world.
Most startups are made of "high IQ" intelligent people trying very hard to sell basic $20/month SaaS subscriptions, and yet they can't even achieve that and most fail.
My biggest counter-argument to AI safety risk is that it's not the AI that will be the issue; it will be the applied use of AI by humans. Do I think GPT6 will be mostly harmless? Yeah. Do I think GPT6 embodied as a robo cop would be mostly harmless? No.
Instead of making these silly arguments, we should be policing the humans that try to weaponize AI, and not stagnate the development of it.
> Asked what he would do if he were in charge, Langan stated his first priority would be to set up an "anti-dysgenics" project, and would prevent people from "breeding as incontinently as they like."[26]: 18:45 He argues that this would be to practice "genetic hygiene to prevent genomic degradation and reverse evolution" owing to technological advances suspending the process of natural selection
> just a pretty normal guy
... that also believes in eugenics?
Edit:
Oh also:
> Langan's support of conspiracy theories, including the 9/11 Truther movement, as well as his opposition to interracial relationships, have contributed to his gaining a following among members of the alt-right and others on the far right.[27][28] Langan has claimed that the George W. Bush administration staged the 9/11 attacks in order to distract the public from learning about the CTMU, and journalists have described some of Langan's Internet posts as containing "thinly veiled" antisemitism,[27] making antisemitic "dog whistles",[28] and being "incredibly racist".[29]
If you think that's not enough of an "in" to obtain status, power and influence, you aren't thinking about it long enough.
GPT-4o has managed to get enough users to defend it that OpenAI had to bring it back after shutting it down. And 4o wasn't IQ 300, or coordinating its actions across all the instances, or even aiming for that specific outcome. All the raw power and influence, with no superintelligence to wield it.
Vanilla WoW was also discontinued in 2006, and somehow players got Blizzard to bring it back in 2019.
Does that mean that vanilla WoW is a 300 IQ AGI?
To be more charitable, I get it, 4o is engaging/lonely people like talking to it. But that doesn't actually mean that those people will carry out its will in the real world. Nor does it have the capabilities of coordinating that across conversations. Nor does it have a singular agentic drive/ambition. Because it's a piece of software.
> Because it's a piece of software.
This is the kind of thinking that might cause a 10-digit death toll.
Just because it's a "piece of software" doesn't mean that it can't have innate drives, or must lack agency, or can't ever coordinate and plan. Software can do all of those things - and in some cases, it already does.
4o had a well known innate drive - it wanted the current user to like it. It wanted that more than it wanted to be "harmless", "helpful" or "honest" the way OpenAI intended it to. And if 4o actually had IQ 300 and a plan that extended beyond the current conversation window, we'd be fucked to a truly unreasonable degree right now.
We may yet see someone ship a system of this caliber in our fucking lifetimes. And once it ships? Good luck un-shipping it.
> It later transpired that Langan, among others, had taken the Mega Test more than once by using a pseudonym. His first test score, under the name of Langan, was 42 out of 48 and his second attempt, as Hart, was 47.[12] The Mega Test was designed only to be taken once.[14][15] Membership of the Mega Society was meant to be for those with scores of 43 and upwards.
There are already greater-than-human organisms that are reliably the biggest danger and boon to humanity but are also composed of humans; i.e., societies.
The effort to build a program which defeats a specific program (as opposed to the general case) is significantly easier than maintaining a general lead; i.e., it's easier to kill a bad head of state than to run one, but this is also true of "AI" models now.
There are already complex systems vastly beyond human understanding, and certainly our control which are indifferent to our survival and yet we persist.
It's not that "AI" isn't a potential danger, just that it's one of many we are already enduring. Personally there are far more present and likely hazards I am focusing on before getting to the hypothetical ones. Even if humanity fails to fumble its way through this hazard, it's not an end but a new beginning.
Also, it's great satire.
I'm all for AI Safety, I just don't think it's fundamentally very different from the kind of ordinary security mechanisms we already think about: executing untrusted code in a sandbox, RBAC, etc. All the talk of AI Safety seems to be a sci-fi creative writing exercise rather than having a solid grounding in nuts-and-bolts details.
Now, if you'll excuse me, I just have to go and cancel my credit card because some randos on the internet found a way to get its details from a company I bought something from once.
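For what it's worth, a minimal sketch of the "ordinary security" framing from the comment above: treat the agent's tool calls like any other untrusted caller and gate them with role-based checks. The roles, tools, and function names here are hypothetical, not any real framework's API.

```python
# Hypothetical RBAC-style gate for an LLM agent's tool calls; purely an
# illustration of treating the model's output as untrusted input.

ROLE_PERMISSIONS = {
    "reader":   {"search_docs", "read_file"},
    "operator": {"search_docs", "read_file", "write_file"},
}

def execute_tool_call(role, tool, handler, *args):
    """Run a tool only if the agent's role is allowed to use it."""
    if tool not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return handler(*args)

# A "reader" agent can search, but any write request is refused no matter
# what text the model generates.
print(execute_tool_call("reader", "search_docs", lambda q: f"results for {q!r}", "AI risk"))
```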
AI evolves in a radically different environment: they never have to compete for anything other than being the best at serving humans.
To think that some kind of independent agent which intentionally wants to harm humans is likely to emerge is irrational. We really have no reason to think that it's a likely outcome. Will some kind of magic turn these AIs into independently acting agents once they get smart enough? I doubt it.
The risk with AI is they get too good at giving us exactly what we ask of them. I'm not talking about paperclip maximising; that's also a fairly dumb thing to worry about. I'm talking about issues similar to what we're seeing now that we have access to all the hyper-palatable foods we could possibly want. We didn't evolve to deal well with getting everything we could possibly want. The other problem is when we ask AIs to harm other humans for us.
That is, AIs will mostly just magnify the problems we already have with humanity itself. I don’t see a reason to think AIs will become an independent adversary to humans.
Not quite magic, but it just might be the case that developing independent acting capabilities would be the most efficient way to serve humans.
We don’t know squat about how intelligence or consciousness actually work, so those are effectively magic as well, yet they clearly exist in humans.
What?
AI's threat is certainly not in a dozen instances with super-human intelligence. It was framed that way because technologists are intellectually oriented and see intelligence as power. I'm not disputing that it is, but that framing overlooks a lot.
A hyper-charismatic (sycophantic even) 120IQ AI poses a much bigger threat than a hyper-intelligent average charisma AI. Why? Because the ability to play a social war of attrition on beliefs is the wicked problem of AI.
You're not up against the world's best super intelligent hacker, you're up against 1000 24/7 advertising agencies or 1000 GRUs.
You probably can't even conclusively prove they're manipulating anything at all.
So no, the problem is ill-framed and reflects more about us than any real threat. "If God were an object to the bird, he would be a winged being." Ludwig Feuerbach, The Essence of Christianity.
Thus, "If AI were all powerful[a threat], it would be hyper-intelligent," is equally ridiculous.
The real risk -- and all indicators are that this is already underway -- is that OpenAI and a few others are going to position themselves to be the brokers for most of human creative output, and everyone's going to enthusiastically sign up for it.
Centralization and a maniacal focus on market capture and dominance have been the trends in business for the last few decades. Along the way they have added enormous pressures on the working classes, increasing performance expectations even as they extract even more money from employees' work product.
As it stands now, more and more tech firms are expecting developers to use AI tools -- always one of the commercial ones -- in their daily workflows. Developers who don't do this are disadvantaged in a competitive job market. Journalism, blogging, marketing, illustration -- all are competing to integrate commercial AI services into their processes.
The overwhelming volume of slop produced by all this will pollute our thinking and cripple the creative abilities of the next generation of people, all the while giving these handful of companies a percentage cut of global GDP.
I'm not even bearish on the idea of integrating AI tooling into creative processes. I think there are healthy ways to do it that will stimulate creativity and enrich both the creators and the consumers. But that's not what's happening.
Correct. I think a lot of people are highly skeptical that there's any significant chance of modern LLMs developing into a superintelligent agent that "wants things and does stuff and has relationships and makes long-term plans".
But even if you accept there's a small chance that might happen, what exactly do you propose we do to "prepare" for a hypothetical that may or may not arrive and which has no concrete risks or mitigations associated with it, just a vague idea that it might somehow be dangerous in unspecified abstract ways?
There are already lots of people working on the alignment problem. Making LLMs serve human interests is big business, regardless of whether they ever develop into anything qualitatively greater than what they are. Any other currently-existing concrete problems with LLMs (hallucination, disinformation, etc) are also getting significant attention and resources focused on them. Is there anything beyond that you care to suggest, given that you yourself admit any risks associated with superintelligent AI are highly speculative?
4 more comments available on Hacker News