Rob Pike Goes Nuclear Over GenAI
Key topics
Rob Pike's scathing critique of GenAI has ignited a lively debate, with many commenters resonating with his frustration, calling it "cathartic" and a "quiet voice many are carrying around." As the discussion unfolds, questions arise about Pike's current affiliation with Google and whether his disdain extends to the tech giant's own AI endeavors, Google Gemini. Clarification comes that Pike is no longer at Google, having retired, which sparks further discussion on the value of insider criticism. The thread crackles with energy as commenters weigh in on the implications of Pike's strong stance.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 17m after posting
Peak period: 141 comments in 0-12h
Avg / period: 26.7
Based on 160 loaded comments
Key moments
- Story posted: Dec 26, 2025 at 9:08 AM EST (8 days ago)
- First comment: Dec 26, 2025 at 9:25 AM EST (17m after posting)
- Peak activity: 141 comments in 0-12h, the hottest window of the conversation
- Latest activity: Dec 30, 2025 at 4:51 PM EST (3d ago)
If so, I wonder what his views are on Google and their active development of Google Gemini.
He should leave Google then.
Leaving the source to someone else
Sources are very well cited if you want to follow them through. I linked this and not the original source because it's likely where the root comment got this argument from.
Yeah, I'll not waste my time reading that.
If you tried the same attitude with Netflix or Instagram or TikTok or sites like that, you’d get more opposition.
Exceptions being when it's done from more of an underdog position: hating on YouTube for how they treat their content creators, for example, is quite trendy again.
I'm not sure about that: The Expanse was killed over insufficient ratings, Altered Carbon was killed over insufficient ratings, and even then the final seasons before the axe are typically rushed and pushed out the door. Some of the incentives seem quite disgusting to me compared with letting the creatives tell a story and produce art, even if the earnings sometimes fall short of some greedy arbitrary metric.
I have a hard time believing that streaming data from memory over a network can be so energy demanding, there's little computation involved.
The European average is 56 grams of CO2 emissions per hour of video streaming. For comparison: driving 100 meters by car causes 22 grams of CO2.
https://www.ndc-garbe.com/data-center-how-much-energy-does-a...
80 percent of the electricity consumption on the Internet is caused by streaming services
Telekom needs the equivalent of 91 watts for a gigabyte of data transmission.
An hour of video streaming in 4K quality needs more than three times as much energy as an HD stream, according to the Borderstep Institute. On a 65-inch TV, it causes 610 grams of CO2 per hour.
https://www.handelsblatt.com/unternehmen/it-medien/netflix-d...
They probably mean a watt for a gigabyte per unit of time, or a watt-hour (joule) per gigabyte; watts alone measure power, not energy, so as written this doesn't make sense.
Also don't trust anything Telekom says, they're cunts that double dip on both peering and subscriber traffic and charge out of the ass for both (10x on the ISP side compared to competitors), coming up with bullshit excuses like 'oh streaming services are sooo expensive for us'. They're commonly understood to be the reason why Internet access in Germany is so shitty and expensive compared to neighbouring countries.
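As a rough sanity check of those figures (not from either article): if the quoted "91 watts" is read as 91 Wh per gigabyte, and we assume round numbers for an HD stream's bitrate and the EU grid's carbon intensity, the result lands near the 56 g/hour European average.

    # Sanity check of the streaming figures above. Assumed round numbers,
    # not from the cited articles: ~3 GB/hour for an HD stream, the quoted
    # "91 watts" reinterpreted as 91 Wh/GB, EU grid at ~250 g CO2/kWh.
    GB_PER_HOUR = 3.0
    WH_PER_GB = 91.0
    G_CO2_PER_KWH = 250.0

    energy_wh = GB_PER_HOUR * WH_PER_GB          # ~273 Wh per streamed hour
    co2_g = energy_wh / 1000 * G_CO2_PER_KWH     # ~68 g CO2 per hour

    print(f"{energy_wh:.0f} Wh/hour -> {co2_g:.0f} g CO2/hour")

At ~68 g/hour this is the same order of magnitude as the quoted 56 g/hour, so the watt-hours-per-gigabyte reading is at least self-consistent.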
It's the devices themselves that contribute the most to CO2 emissions. The streaming servers themselves are nothing like the problem the AI data centres are.
The ecology argument just seems self-defeating for tech nerds. We aren't exactly planting trees out here.
Using Claude Code for an hour would be a more realistic comparison if they really wanted to compare with video streaming. The reality is far less appealing.
I think I was biased by the fact that this argument was used in an HN comment where people tend to be heavy users of LLM based agents.
The point is the resource consumption to what end.
And that end is frankly replacing humans. It’s gonna be tragic (or is it…given how terrible humans are for each other, and let’s not even get to how monstrous we are to non human animals) as the world enters a collective sense of worthlessness once AI makes us realize that we really serve no purpose.
You could say "shoot half of everyone in the head; people will adapt" and it would be equally true. You're warped.
The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.
This being said, I think the alternatives are wishful thinking. Better efficiency is often counterproductive: reducing the energy cost of something by, say, half can lead to its use more than doubling (the Jevons paradox). It basically only helps to increase the efficiency of things for which there is no latent demand.
And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.
Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.
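For scale, a rough back-of-the-envelope on the waste-heat point above (the forcing and energy figures are round published estimates, not from the comment):

    # How much would energy use have to grow before direct waste heat
    # rivals today's greenhouse-gas forcing? Order-of-magnitude only.
    EARTH_SURFACE_M2 = 5.1e14   # Earth's surface area in m^2
    GHG_FORCING_W_M2 = 2.7      # approx. current anthropogenic forcing, W/m^2
    WORLD_POWER_W = 2e13        # ~20 TW of global primary energy use

    ghg_total_w = GHG_FORCING_W_M2 * EARTH_SURFACE_M2   # ~1.4e15 W
    growth_factor = ghg_total_w / WORLD_POWER_W          # ~70x

    print(f"waste heat matches GHG forcing at ~{growth_factor:.0f}x current energy use")

So waste heat becomes climate-scale only after roughly two orders of magnitude of growth in energy use; far off, but, as the comment argues, not infinitely far if energy were effectively free.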
So the question is, at which point would the aggregate production of enough energy to cause climate change through waste heat be economically feasible? I see no reason to think this would come after becoming "immortal post-humans." The current climate change crisis is just one example of a scale-induced threat that is happening prior to post-humanity. What makes it so special or unique? I suspect there's many others down the line, it's just very difficult to understand the ramifications of scaling technology before they unfold.
And that's the crux of the issue, isn't it? It's extremely difficult to predict what will happen once you deploy a technology at scale. There are countless examples of unintended consequences. If we keep going forward at maximal speed every time we make something new, we'll keep running headfirst into these unintended consequences. That's basically a gambling addiction. Mostly it's going to be fine, but...
Oh wow, an LLM was queried to thank major contributors to computing, I'm so glad he's grateful.
Cheap marketing, not much else.
Just sending him and others a handwritten (not AI-written) note with a free lifetime Claude Code subscription would have been much smarter.
This has to be the ultimate trolling: like it was unsure what their personalities were like, so it trolls them and records their responses for more training.
I don't know if this is a publicity stunt or the AI models are on a loop glazing each other and decided to send these emails.
appreciate him for his professional work, not his personal opinions ;)
It's healthy that people have different takes.
GenAI pales in comparison to the environmental cost of suburban sprawl; it's not even fucking close. We're talking 2-3 orders of magnitude worse.
Alfalfa uses ~40× to 150× more water than all U.S. data centers combined. I don't see anyone going nuclear over alfalfa.
I don't know what Internet sites you visit, but people absolutely, 100% complain about alfalfa farmers online, especially in regards to their water usage in CA.
By the same logic, I could say that you should redirect your alfalfa woes to something like the Ukraine war or something.
And also, I didn't claim alfalfa farming to be raping the planet or blowing up society. Nor did I say fuck you to all of the alfalfa farmers.
I should be (and I am) more concerned with the Ukrainian war than alfalfa. That is very reasonable logic.
Just because two problems cause harms at different proportion, doesn't mean the lesser problem should be dismissed. Especially when the "fix" to the lesser problem can be a "stop doing that".
And about water usage: not all water and all uses of water are equal. The problem isn't that data centers use a bunch of water, but what water they use and how.
This is a very irrelevant analogy and an absolutely false dichotomy. The resource constraint (Police officers vs policy making to reduce traffic deaths vs criminals) is completely different and not in contention with each other. In fact they're actually complementary.
Nobody is saying the lesser problem should be dismissed. But the lesser problem also enables cancer researchers to be more productive while doing cancer research, obtaining grants, etc. It's at least nuanced. That is far more valuable than Alfalfa.
Farms also use municipal water (sometimes). The cost of converting more ground or surface water to municipal water is less than the relative cost of ~40-150x the water usage of the municipal water being used...
No different than a CEO telling his secretary to send an anniversary gift to his wife.
JFC this makes me want to vomit
> while maintaining perfect awareness
"awareness" my ass.
Awful.
These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.
If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.
> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?
Answer according to your definitions: false premise, the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.
Yeah, realizing that thoughtless machines are still more thankful than real human beings would make me depressed.
What a moronic waste of resources. Random act of kindness? How low is the bar that you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al., e.g. finding a cure for cancer, solving poverty, solving fusion.
Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.
Why do people still think software has any agency at all?
Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.
Everybody knows LLMs are not alive and don't think, feel, want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.
We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.
The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.
Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"
To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"
That's not how Carlin's quote goes.
You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic in this context.
You would know this if you paid attention to what I wrote and analyzed it logically. Which is ironic, given the subject.
"Think of how stupid the average person is, and realize half of them are stupider than that."
No, they don't.
There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.
> We use this kind of language as a shorthand because ...
You, not we. You're using the language of snake oil salesmen because they've made it commonplace.
When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.
Its fucking insanity.
Or just call it autocorrect on steroids. Most people are familiar with the concept of autocorrect.
To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.
What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.
> Everybody knows LLMs are not alive and don't think, feel, want.
What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"
Can't you see what a fucking LIE this is?
> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky
Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.
People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.
> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.
Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?
Because people in our direct circles show unmistakeable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.
Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?
This is unsound. At best it's incompatible with an unfounded teleological stance, one that has never been universal.
And to think they don't even have ad-driven business models yet.
Please go ahead now and EAT YOUR WORDS:
https://news.ycombinator.com/item?id=46352875
https://lucumr.pocoo.org/2025/12/22/a-year-of-vibes/
> Because LLMs now not only help me program, I’m starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer.
What a stupid, selfish and childish thing to do.
This technology is going to change the world, but people need to accept its limitations
Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.
LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.
I hope the world survives this craziness!
It's preying on creators who feel their contributions are not recognized enough.
Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.
It's a marketing stunt, meaningless.
(by the way, I love the idea of AI! Just don't like what they did with it)
> hopefully saying something good about
Those talented people who work in public relations would much prefer starting from a base of good publicity instead of trying to recover from blunders.
I hope that makes you feel good.
Fascinating topic. However, my argument works for compartmentalized discussions as well. Conscious or not, it's meaningless crap.
I guess that's where the conversation/debate ends.
I used AI to write a thank you to a non-english speaking relative.
A person struggling with dementia can use AI to help remember the words they lost.
These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.
I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.
You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.
(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)
I mean how do you write this seriously?
is the originator, I believe. Reading this is gross and unbearable; I can't believe these people have money.
1619 more comments available on Hacker News