In a First, Google Has Released Data on How Much Energy an AI Prompt Uses
Posted 5 months ago · Active 4 months ago
Source: technologyreview.com
Key topics: AI Energy Consumption, Sustainability, Google Gemini
Google released data on the energy consumption of its AI model Gemini, sparking discussion on the environmental impact of AI and the accuracy of the reported metrics.
Snapshot generated from the HN discussion
Discussion activity: very active. Story posted Aug 21, 2025 at 9:52 AM EDT; first comment 27 minutes later; peak of 136 comments in the first 12 hours (average 22.9 comments per period); latest activity Aug 26, 2025 at 7:27 AM EDT. Snapshot based on 160 loaded comments.
A miles per gallon number for a car doesn't count the diesel that went into the equipment to mine the ore to make the steel for the chassis, etc.
I was less surprised that inference dominates training after I read that ChatGPT is serving billions of requests per day.
And unlike the human who spent multiple hours writing that article, an LLM would have linked to the original study: https://services.google.com/fh/files/misc/measuring_the_envi...
[ETA] Extending on these numbers a bit, a mean human uses 1.25 kW of power (Kardashev Level 0.7 / 8 Gigahumans) and the mean American uses ~8 kW of power according to https://en.wikipedia.org/wiki/List_of_countries_by_energy_co.... So if we align AIs to be eco-friendly, they will definitely murder all humans for the sake of the planet /s
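(A quick sketch of that arithmetic, using Sagan's interpolated Kardashev formula; the ~8 billion humans and 1.25 kW per-capita figures are the ones assumed in this comment.)

    # Rough check of the per-capita power figures above.
    import math

    humans = 8e9
    mean_power_w = 1.25e3                    # 1.25 kW per person (from the comment)
    total_power_w = humans * mean_power_w    # ~1e13 W = 10 TW

    # Sagan's interpolated Kardashev level: K = (log10(P[W]) - 6) / 10
    kardashev = (math.log10(total_power_w) - 6) / 10
    print(f"{total_power_w:.1e} W -> Kardashev ~{kardashev:.2f}")   # ~0.70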
There is a non-trivial chance that the LLM would've added a link to _something_, but links/references seem like a very common thing to hallucinate, no?
The retrieved results can then have their links either added to the end result separately, guaranteeing they are correct, or added to the prompt with the LLM told to include them, which retains a risk of hallucination, yes.
Common to both of these is the failure mode that the LLM can still hallucinate whilst "summarizing" the results, meaning you still have no guarantee that the claims made actually show up in the results.
Would the LLM-based tool be able to determine that the top results are just SEO-spam sites and move lower in the list, or just accept the spam results as gospel?
The practical and readily-observable-from-output answer is "No, they cannot meaningfully identify spam or misinformation, and do indeed just accept the results as gospel"; Google's AI summary works this way and is repeatedly wrong in exactly this way. Google has repeatedly had it be wrong even in the ad copy.
The theoretical mechanism is that the attention mechanism in LLMs can select which parts of the retrieved results are carried forward into the response. This is how the model is capable of finding the parts of the text that are "relevant". The problem is that this just isn't enough to robustly identify spam or incorrect information.
However, we can isolate this "find the relevant bit" functionality away from the rest of the LLM to enhance regular search engines. It's hard to say how useful this is; Google has intentionally damaged their search engine and it may simply not be worth the GPU cycles compared to traditional approaches, but it's an idea being widely explored right now.
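As a rough illustration of isolating that "find the relevant bit" step, here is a minimal sketch of re-ranking search results by embedding similarity. The embed() here is a toy stand-in for a real embedding model (any sentence-embedding API returning fixed-size vectors would do), and note that nothing in it detects spam, which is rather the point:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Toy stand-in for a real sentence-embedding model: a normalized
        # bag-of-words hash vector. Swap in an actual embedding API in practice.
        vec = np.zeros(256)
        for word in text.lower().split():
            vec[hash(word) % 256] += 1.0
        return vec / (np.linalg.norm(vec) or 1.0)

    def rerank(query: str, results: list[str], top_k: int = 5) -> list[str]:
        # Score each result by cosine similarity to the query and keep the best.
        q = embed(query)
        scored = sorted(results, key=lambda doc: -float(np.dot(q, embed(doc))))
        return scored[:top_k]

    # Usage: surfaces the most topically similar result; an SEO-spam page
    # about the right topic would still rank high, so relevance != quality.
    print(rerank("gemini energy per prompt",
                 ["Gemini prompt energy use report", "Best pizza near me"], top_k=1))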
Hardly better, since soon those "search engine results" will be AI slop themselves, including actual published papers (phoned in using AI, and "peer reviewed" using AI by indifferent reviewers).
And then on the rare occasion they do link to a chat, their prompt is something like:
"Tell about a person of history who was important for their work inthe time of their existence and give quotes of what they said that made them important when they were speaking include notes and other stories about them and give details about their life who they married and their kids and who their parents were and other big things they did do in their lives"
Instead of downvotes, please prove me wrong.
A. Yes — Aristophanes, the Athenian comic playwright, lived through almost the entire Peloponnesian War (431–404 BCE) and his plays are full of bitter humor about Greeks destroying each other. He often portrays it as an unfortunate, fratricidal conflict.
Here are a few key passages (in translation):
Acharnians (425 BCE)
Written during the war’s early years, it shows a farmer sick of the fighting: • Dicaeopolis (the main character) laments: “This war, which has come upon us all, is nothing but Greeks fighting Greeks.”
He later makes a private peace treaty with Sparta for himself, mocking the idea that the wider war must continue when ordinary people suffer.
Peace (421 BCE)
Produced right before the “Peace of Nicias”: • Trygaeus, the hero, flies up to heaven to free the goddess Peace: “It is monstrous that Greeks should destroy one another, while barbarians stand by laughing.”
This captures Aristophanes’ view that the war was not only ruinous but also shameful, as it weakened Greece before foreign enemies.
In sum: Aristophanes consistently portrayed the Peloponnesian War as a tragic absurdity — Greeks killing Greeks while the true enemies (Persians/“barbarians”) watched and benefitted. His comedies combine satire with a deep sense that the conflict was fratricidal and unnecessary.
P. You said Dicaeopolis (the main character) laments: “This war, which has come upon us all, is nothing but Greeks fighting Greeks.” Where is that quote?
A. Good catch — let’s pin this down carefully.
In Aristophanes’ Acharnians (425 BCE), the character Dicaeopolis indeed complains that the Peloponnesian War is a fratricidal war among Greeks, but the exact wording depends on the translation.
⸻
Where it appears • Early in the play (lines ~100–120), Dicaeopolis is waiting for the Assembly to begin. He grumbles about the war, saying essentially that it is:
“A war of Greek against Greek, when we should be standing united against the barbarians.”
⸻
Examples from published translations • Benjamin Rogers (1924 translation): “I am heartily tired of these Grecian wars, of Greeks fighting against Greeks.”
• Alan H. Sommerstein (Loeb Classical Library, 2002): (lines 116–118) “It’s not right — Greeks at war with Greeks, while the barbarian looks on, laughing at us.”
----
Needless to say, that quote is nowhere in the Acharnians - nor anything roughly equivalent. Not only that, in the later response it attributes to the play "Acharnians" the quote it initially attributed to the play "Peace".
0.24 watt-hours is 864 joules (864 watts for one second), so a 100 W human takes ~9 seconds to expend that much energy.
Also, since I live a first-world lifestyle which consumes multiple kW of power, I've probably consumed multiple orders of magnitude more energy than an LLM on this topic.
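(Spelling the unit conversion out, using the 0.24 Wh and 100 W figures above:)

    # 0.24 Wh in joules, and how long a ~100 W human takes to expend it
    energy_wh = 0.24
    energy_j = energy_wh * 3600        # 864 J, i.e. 864 W sustained for one second
    human_power_w = 100
    print(energy_j / human_power_w)    # ~8.6 seconds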
Or made up a fake citation, complete with fake or unrelated author names, on the spot
Edit - just used Pro, gave me a direct source. Who knows...
Mmmh, that would have been my take as well up to around end of Q1 2025.
These days, the flagship LLMs have reduced hallucination by quite a bit, and are also way better at citing sources (you sometimes have to nudge them).
ChatGPT 5 has been very decent on that particular axis.
This is why journalists are nearly universally hostile towards AI.
I love good journalism because it's adhd crack; in-depth spilling the tea but you have to really dig deep to find it nowadays.
Unless your point is that we can kill a bunch of humans to save energy...?
Not to mention that the energy should also include all the extra energy spent on converting energy into a form that is usable by humans (i.e. food). That's probably at least another order of magnitude.
So it's not just about "the one query you ask ChatGPT about what you should write your mum to say you're not coming for Thanksgiving"
It's rather that an AI query is 0.24 Wh, but we are now using thousands of those per user per day, and we globalize it to the scale of the planet, so 7 billion users... and this becomes huge.
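A back-of-envelope version of that scaling argument, purely illustrative and using the comment's assumed 1,000 prompts per user per day and 7 billion users:

    # Per-prompt energy looks tiny until multiplied out.
    wh_per_prompt = 0.24
    prompts_per_user_per_day = 1_000        # assumption from the comment
    users = 7e9                             # assumption from the comment

    daily_wh = wh_per_prompt * prompts_per_user_per_day * users
    print(daily_wh / 1e9, "GWh/day")        # ~1,680 GWh/day
    print(daily_wh * 365 / 1e12, "TWh/yr")  # ~613 TWh/yr, roughly a large country's annual electricity use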
I'm honestly surprised that they're so similar. I've thought of LLM queries as being far more energy-intense than "just" a Google search, but maybe the takeaway is that ordinary Google searching is also quite energy-intense.
If I as a user just wanted an answer to a dumb question like, say, the meaning of some Gen Z slang, it seems about an order of magnitude cheaper to ask a small LLM running on my phone than to make a Google search.
(Check my math: assuming the A16 CPU draws 5 watts peak for 20sec running Gemma or whatever on my iPhone, that’s 0.03Wh to answer a simple query, which is 10x cheaper)
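(The same math, spelled out with the assumed 5 W / 20 s phone figures versus Google's 0.24 Wh median:)

    # On-device LLM energy vs. the reported median Gemini prompt
    phone_power_w = 5            # assumed peak SoC draw
    seconds = 20                 # assumed generation time
    phone_wh = phone_power_w * seconds / 3600
    print(round(phone_wh, 3))            # ~0.028 Wh
    print(round(0.24 / phone_wh, 1))     # ~8.6x less than 0.24 Wh, i.e. roughly "10x cheaper"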
Are training costs (esp. from failed runs) amortized in these estimates?
1: https://googleblog.blogspot.com/2009/01/powering-google-sear...
A related takeaway should be that machine inference is pervasive and has been for years, and that defining "AI" to mean just chatbots is to ignore most of the iceberg.
Not just "one training run," but the cost of a thousand AI engineers starting failing runs to get to that one deployed model.
1: Link to Google's tech report: https://services.google.com/fh/files/misc/measuring_the_envi... "We leave the measurement of AI model training to future work."
From 2022, so possibly out of date: "ML training and inference are only 10%–15% of Google’s total energy use for each of the last three years, each year split ⅗ for inference and ⅖ for training." That's probably close enough to estimate 50/50, or the full energy cost to deliver an AI result is double the inference energy.
https://research.google/blog/good-news-about-the-carbon-foot...
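(Making the "roughly double" step explicit, using the 3/5 inference / 2/5 training split quoted above:)

    # If inference is 3/5 of ML energy and training is 2/5, the full energy to
    # deliver a result is inference * (1 / 0.6) ~= 1.67x the inference-only
    # number; rounding the split to 50/50 gives the ~2x estimate above.
    inference_share = 3 / 5
    print(round(1 / inference_share, 2))   # 1.67
    print(1 / 0.5)                         # 2.0 with a 50/50 split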
Various ML "learn-to-rank" tooling was in use at Google for a while, but incorporating document embedding vectors w/ ANN search into the ranking function probably happened over the course of 2018-2021 [1], I think. Generative AI only started appearing in ordinary search results in 2024.
1: https://cloud.google.com/blog/topics/developers-practitioner...
Inverted indices were not used as they worked poorly for “an ordered list of words” (as opposed to a bag of words).
And this doesn’t even start to address the ranking part.
You can solve the ordered list of words problem in ways that are more efficient than grepping over the entire internet (e.g. bigrams, storing position information in the index).
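A toy sketch of the "store position information in the index" idea: a positional inverted index that answers phrase ("ordered list of words") queries without scanning every document. Illustrative only, not how Google's index actually works:

    # Toy positional inverted index: term -> {doc_id: [positions]}
    from collections import defaultdict

    def build_index(docs: dict[int, str]) -> dict[str, dict[int, list[int]]]:
        index: dict[str, dict[int, list[int]]] = defaultdict(dict)
        for doc_id, text in docs.items():
            for pos, term in enumerate(text.lower().split()):
                index[term].setdefault(doc_id, []).append(pos)
        return index

    def phrase_search(index, phrase: str) -> set[int]:
        terms = phrase.lower().split()
        if not terms or terms[0] not in index:
            return set()
        # candidate docs must contain every term
        docs = set(index[terms[0]])
        for t in terms[1:]:
            docs &= set(index.get(t, {}))
        hits = set()
        for d in docs:
            # phrase match: term i must appear at position p + i
            if any(all(p + i in index[terms[i]][d] for i in range(len(terms)))
                   for p in index[terms[0]][d]):
                hits.add(d)
        return hits

    docs = {1: "greeks fighting greeks", 2: "greeks fighting persians"}
    print(phrase_search(build_index(docs), "fighting greeks"))   # {1}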
Wikipedia [1] links to "Jeff Dean's keynote at WSDM 2009" [2] which suggests that indices were most certainly used.
Then again, I am no expert in this field, so if you could share more details, I'd love to hear more about it.
[1] https://en.wikipedia.org/wiki/Google_data_centers
[2] https://static.googleusercontent.com/media/research.google.c...
> In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity
If they're running on, say, two RTX 6000s for a total draw of ~600 watts, that would be a response time of 1.44 seconds. So obviously the median prompt doesn't go to some high-end thinking model users have to pay for.
It's a very low number; for comparison, an electric vehicle might consume 82kWh to travel 363 miles. So that 0.24 watt-hours of energy is equivalent to driving 5.6 feet (1.7 meters) in such an EV.
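(The arithmetic behind both comparisons; the 600 W GPU draw and the 82 kWh / 363 mile EV figures are the assumptions stated above:)

    # 0.24 Wh median prompt: GPU-seconds and EV-distance equivalents
    prompt_wh = 0.24
    prompt_j = prompt_wh * 3600              # 864 J

    gpu_draw_w = 600                         # assumed: two RTX 6000s at full draw
    print(round(prompt_j / gpu_draw_w, 2))   # ~1.44 s of compute

    ev_wh_per_mile = 82_000 / 363            # ~226 Wh/mile from the EV example
    miles = prompt_wh / ev_wh_per_mile
    print(round(miles * 5280, 1), "feet")    # ~5.6 feet (~1.7 m)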
When I hear reports that AI power demand is overloading electricity infrastructure, it always makes me think: Even before the AI boom, shouldn't we have a bunch of extra capacity under construction, ready for EV driving, induction stoves and heat-pump heating?
[1] https://cloud.google.com/blog/products/infrastructure/measur...
When it comes to the EV, the answer is simple: the EV takeover "by 2030" was 100% wishful thinking - the capacity is nowhere near there, starting with scaling battery production, never mind the charging capacity.
We'll have the battery capacity and charge capacity to allow 100% of cars sold in 2030 to be EV's. We only need 2 capacity doublings for batteries, and currently doublings happen every ~18 months. Charge capacity is even easier, we just need to increase electricity production by 1-2% per year for a couple decades to support the transition to EV's.
Norway is a tiny market which had big artificial tax/cost incentives to buy an EV. Norway could be 100% EV and it wouldn't make any dent to global adoption.
Contrast with China -- downtown Shanghai has the vast majority of cars being EV's despite the EV sales rate in China only being ~50%.
The existence of the “2030 deadline” was/is a significant factor by itself. (The current state would be less electrified without that arbitrary and overoptimistic fantasy deadline.)
It feels like dog-whistle tactics. "Aren't the technology companies bad for the environment!" "What about the water usage?" "What about the electricity?"
For me the peak of this is complaining about water consumption at The Dalles datacentre [0]. The buildings are next to the Columbia River and a few miles away from The Dalles Dam [1], which generates an average of 700 MW. The river water should be used for cooling, taking out some of the water, warming it up by a few degrees and returning it to the river; one might argue that this is simply returning the heat to the river that would have come from the water flowing downhill.
[0] https://www.oregonlive.com/silicon-forest/2022/12/googles-wa...
[1] https://en.wikipedia.org/wiki/The_Dalles_Dam
- There's an entirely different legal framework around how water from rivers is allocated. The "normal" flow of the most important river, the Colorado River, was calculated during a time of unusually high flow, so there's a lot of tension between different states about whether they're getting their fair share.
- To give you a sense of how "thirsty" the west is, the Colorado River rarely reaches the ocean anymore.
- groundwater use is generally much less regulated, which is causing issues like the Ogallala Aquifer dropping at an alarming rate. Many aquifers are damaged if they're overpumped, because the weight of the ground above them crushes the empty spaces.
- bad actors in the groundwater space can lower the water table and make other peoples' wells run dry
- this is made more complicated by the fact that surface water and ground water interact with each other. Reducing stream flow can affect groundwater, and using groundwater can affect streamflow.
If you want an approachable and entertaining introduction to some western water issues (including the backstabbing and plotting that inspired Chinatown!), I'd suggest reading Cadillac Desert by Marc Reisner.
In recent news: https://www.texasmonthly.com/news-politics/jacobs-well-texas...
Also: this DID end up turning into a multi paragraph lecture. My bad.
Where do you think the evaporated water goes?
Drinking water, spraying it on crops, using it to clean a car, or using it to flush a toilet all end up with the water evaporating, or making its way to the ocean and evaporating from there.
Ultimately, if a river provides a certain number of acre-feet of fresh water, evaporating it to cool a data centre uses it just as much as evaporating it to grow alfalfa in a desert, except perhaps more usefully.
We do get new fresh water at a reasonable pace thanks to rain - but in many parts of the world we are using it faster than that, and not just depleting the stored volume of fresh water but destroying the storage "containers" themselves.
I think you're oversimplifying the "just use rivers" idea. Most data centers (80% for Google) require potable water for cooling, and it can't come straight from a river. Plus, using potable water in cooling adds mineral deposits to the water and will require treatment to be consumable again.
"Aren't technology companies terrible. They destroy the natural environment for their profit. Look at these large numbers out of context."
> Most data centers (80% for Google) require potable water for cooling, and it can't come straight from a river.
Well, there are two kinds of water that are required for cooling; that which circulates around the datacentre, and the water used to take away the excess heat. These can be different, using heat exchangers to move the heat from one to the other.
> I think you're oversimplifying the "just use rivers" idea.
The most difficult bit appears to be dealing with conservative (with a little c) people and environmental regulation.
> Plus, using potable water in cooling adds mineral deposits to the water
Citation needed ...
Dog whistles are where someone says something that their audience will understand to mean a specific thing, but will be inaudible or neutral sounding to people who are not in their audience. They are named that because they are like the whistles only dogs can hear, while most people cannot.
"Inner city" is a canonical example of a dog whistle. Where the literal meaning is the districts in a city in the urban center, but is often used to denote poor minority communities. (If the literal meaning is only "city centers", then would you describe Manhattanites as inner city?)
On the left, "tax the rich" might be a dog whistle that carries a similar literal meaning disjoint from the understood meaning within the community.
That's basically what I said, except you're missing that more often than not it's an intentional stretching of a literal phrase in order to cast aspersions on someone who didn't do the thing you're mad about.
For example, here was one of the top results when I googled "trump dog whistle",
> In February 2018, during Trump’s first term as president, the Department of Homeland Security issued a 14-word press release titled “We Must Secure The Border And Build The Wall To Make America Safe Again.” I and other investigators of far-right extremism attributed this phrase’s use to a clear dog whistle of the common white supremacist saying known as “the 14 words” – “we must secure the existence of our people and a future for white children.”
https://theconversation.com/musks-inauguration-salute-is-not...
Or this top result from the search "musk dog whistle",
> Omar Suleiman has called on Elon Musk to stop blowing political "dog whistles of Islamophobia"
> Yet, for the past week, you have blown every conceivable dog whistle of Islamophobia, by highlighting a select group of (horrifying) incidents supposedly in the name of Islam
In this case absolutely no examples were given, but that's the great thing about accusing someone of dog whistling - you don't need to provide any evidence! In fact, literally any evidence you can provide would only serve to weaken your accusation because by definition anyone who isn't whichever -ist you're accusing them of will literally be unable to decode the -ism in their phrasing. If it sounds obviously -ist then by definition it can't be a dog whistle.
It's fine to say people overuse the term, or apply it incorrectly, but like, the definition is unambiguous here.
Just because you can find a bad article with bad examples (and some are for sure coincidences) doesn't mean it's not true. Musk did heil, and Musk does post well-known white supremacy signals. Trump might be a racist and like the fascist power, but he is not a white supremacist Christian like the rest of his cabinet of Project 2025 people.
Considering what data? All queries sent to Gemini? Real users? A select few? Test queries from Google?
Does it include AI summaries of google searches? Because if the data includes stuff as simple as "How tall is Lee Pace," that is obviously going to bring the median query down, even if the top distribution is using many times more energy.
But still, the median is not useful by itself. It tells us 50% of the queries measured were under 0.24Wh. It obviously obscures policy-relevant information to not include the mean, but it also obscures what I can do individually without more details on the data. Where am I on this median?
It makes the most sense to provide the entire distribution and examples of data points.
> I'm not sure why they would report on the median
The why is an easier question. They probably picked the lower of the two numbers because it lies in their interest to state they are energy efficient.
One of the problems might be that a data center puts a lot of demand into a small area and it needs that power soon.
Those other things are being phased in over time, so we only need modest annual capacity growth to deal with them, and they are spread out.
You're not accounting for batching for optimal GPU utilization; maybe it takes 30 seconds, but it completed 30 requests.
China, however, continuously builds out double the energy capacity it currently requires, only to find every two years or so that it actually did end up getting used.
"OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU)" (2025) https://news.ycombinator.com/item?id=44111452
"A Comparison of the Cerebras Wafer-Scale Integration Technology with Nvidia GPU-based Systems for Artificial Intelligence" (2025) https://arxiv.org/html/2503.11698v1
From https://news.ycombinator.com/item?id=44648575 :
> "Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
>> this design will avoid the need for more than 125 million liters of water per year per datacenter
"Microsoft’s Datacenter Community Pledge: To build and operate digital infrastructure that addresses societal challenges and creates benefits for communities" (2024-06) https://blogs.microsoft.com/blog/2024/06/02/microsofts-datac... :
> We will design and operate our datacenters to support society’s climate goals and become carbon negative, water positive and zero waste before 2030. [...]
> By 2025, we will procure 100% renewable energy on a global scale, both significantly expanding and decarbonizing local electricity grids.
> Our datacenter designs are more water efficient than traditional enterprise datacenters, and our plan by 2030 is to replenish more water than we consume locally.
Here's this about CNT cooling:
"Cyberpower begins selling desktop PCs with carbon nanotube CPU cooling" (2025) https://news.ycombinator.com/item?id=44899495
"A carbon-nanotube-based tensor processing unit" (2024) https://news.ycombinator.com/item?id=41322070
Graphene semiconductors should be at least 10X more energy efficient; but how much less water would graphene-based chips waste?
> People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.
https://blog.samaltman.com/the-gentle-singularity
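Those comparisons roughly check out; a quick sanity check, where the oven and bulb wattages are my own assumed values rather than anything from the post:

    # 0.34 Wh per ChatGPT query: checking the quoted comparisons
    query_wh = 0.34
    oven_w = 1_200                     # assumed oven element draw
    led_bulb_w = 10                    # assumed high-efficiency bulb
    print(round(query_wh * 3600 / oven_w, 1), "s of oven")      # ~1.0 s
    print(round(query_wh * 60 / led_bulb_w, 1), "min of bulb")  # ~2.0 min

    gallons = 0.000085
    ml = gallons * 3785.41
    print(round(ml / 4.93, 3), "teaspoons")                     # ~0.065, i.e. ~1/15 tsp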
That's... a lot cheaper than I would have guessed. Obviously, the data centers cost quite a bit to build. But when you think of $20/mo for a typical subscription, that's not bad?
Did I do that right?
The fundamental flaw in AI energy/water doomerism has always been that energy costs money, water costs money, real estate costs money, but AI is being given away for free. There is obviously something wrong with the suggestion that AI is using all the energy and water.
That's equivalent to less than two miles of driving (CO2), one toilet flush (water), and about three dryer loads of laundry.
We can always keep adding new stuff and say each time "oh but it's small"... sure, but if we keep adding more, altogether it becomes huge
Yeah, I was more interested in knowing the total amount. A "median" prompt without the information on the total number of prompts is kind of meaningless...
Focusing on water, Google does report their total water withdrawal at all their data centers. It was around 10 billion gallons per year in 2024. That's around 0.03% of water usage in the US (that's just for rough context - not all of Google's data centers are in the US). I don't think that's an eye-popping amount.
But when you consider that more than 1 billion gallons came from Council Bluffs, IA, you have to make sure that's being done responsibly, and that the local ecology can support it. Google admits that 28% of their water use comes from sources with "medium to high risk of depletion or scarcity." [0]
[0]: https://www.gstatic.com/gumdrop/sustainability/google-2025-e...
Eh, I somewhat disagree with this. The US energy grid has had almost no extra capacity for a long time. A lot of this was due to efficiency (not a bad thing) and little industrial growth (not a great thing) in the country. Data centers themselves, I don't think, are the biggest cause of the issues; the distribution grid is. We've had tons of problems with new energy sources coming online, and with distributing the power to where we need it.
We see things like private power plants, not because we can't generate power, but because we absolutely suck at distribution of power.
- one second of running a toaster, or
- 1/80th of a phone charge,
- lifting 100 pounds to a height of 6 feet,
- muzzle energy of a 9mm bullet,
- driving 6 feet with a Tesla.
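For what it's worth, those equivalents roughly check out. A quick verification, where the toaster wattage, phone battery capacity, and Tesla efficiency are assumptions on my part:

    # Rough checks on the 0.24 Wh equivalents listed above
    wh = 0.24
    joules = wh * 3600                       # 864 J

    print(joules / 900)                      # ~1 s of a ~900 W toaster (assumed wattage)
    print(wh / 19)                           # ~1/80 of a ~19 Wh phone battery (assumed)
    kg, height_m = 45.4, 1.83                # 100 lb lifted 6 ft
    print(kg * 9.81 * height_m)              # ~815 J, close to 864 J
    # typical 9 mm muzzle energy is a few hundred joules, so that one is the
    # same order of magnitude rather than exact
    print(wh / 240 * 5280)                   # ~5 ft at ~240 Wh/mile (assumed Tesla efficiency)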
GPT-4 is over a trillion parameters. Is there any reason to think they have 2.5x more parameters but somehow use 6x less energy?
I also think this misses the forest for the trees a bit: emissions need to go down across all industries, and tech is actively increasing theirs.
There is a perception out there about GenAI and water that goes surprisingly deep. I was told we will be living in a drought-stricken hellscape, and AI is to blame.
I'd like to know the equivalent energy consumption of a single TikTok video, but that is probably arguing the wrong thing. My bigger question is ... where do they think that water goes? Steam? The assumption is that it is gone forever, and I can't get over how people could just take that at face value.
https://www.youtube.com/watch?v=DGjj7wDYaiI
And there isn't solid evidence that this was connected to the data center construction:
> Ben Sheidler, a spokesman for the Joint Development Authority, which manages the industrial park that Meta’s facilities occupy, said the cause of the water issues was unknown. The Joint Development Authority did not do a well water study before construction to determine any potential effects, but the timing of the problems could be a coincidence, he said.
> “I wouldn’t want to speculate that even the construction had something to do with it,” he said. “One thousand feet away is a pretty significant distance.”
page 21, says Arizona 2015 golf course irrigation was 120 million gallons per day, citing the US Geological Survey.
https://dgtlinfra.com/data-center-water-usage/
says Google's datacenter water consumption in 2023 was 5.2 billion gallons, or ~14 million gallons a day. Microsoft was ~4.7, Facebook was 2.6, AWS didn't seem to disclose, Apple was 2.3. These numbers seem pulled from what the companies published.
The total for these companies was ~30 million gallons a day. Apply your best guesses as to what fraction of datacenter usage they are, what fraction of datacenter usage is AI, and what 2025 usage looks like compared to 2023. My guess is it's unlikely to come out to more than 120 million.
I didn't vet this that carefully so take the numbers with a grain of salt, but the rough comparison does seem to hold that Arizona golf courses are larger users of water.
Agricultural numbers are much higher, the California almond industry uses ~4000 million gallons of water a day.
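Putting those figures side by side (all numbers copied from the sources cited above; this is just unit conversion, so treat it as rough):

    # Arizona golf irrigation vs. big-tech datacenter water use (2023-ish figures)
    golf_az_mgd = 120                        # million gallons/day, 2015 USGS figure
    google, microsoft, facebook, apple = 5.2e3, 4.7e3, 2.6e3, 2.3e3   # million gallons/year
    datacenter_mgd = (google + microsoft + facebook + apple) / 365
    print(round(datacenter_mgd, 1), "million gallons/day")   # ~40, around the comment's rough ~30 figure
    print(round(golf_az_mgd / datacenter_mgd, 1), "x")        # Arizona golf roughly 3x higher either way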
Data centers don't just heat up the water and return it - they evaporate the water into the atmosphere (yes, I know, the H2O still exists, but it's in a far less usable form when it's gaseous atmospheric H2O)
Do you have a source?
I'm genuinely curious how recondensing would work. It seems like a Rube Goldberg machine to me. To recondense the water vapor, you would need to move a large amount of heat out of the water vapor into a heat sink so that it could cool down to the boiling point and undergo a phase change. What would that sink be, and how would you move the heat into it fast enough?
Air is the obvious answer, but if you are dumping heat into the air, why would you do it by evaporating and then condensing water, rather than transferring it directly (e.g. via a heat exchanger and fan)?
Just what I've been told by people in the industry. I too would love to see more solid data.
Aral Sea was destroyed by farm irrigation: https://www.youtube.com/watch?v=oBjDF4SFIlk
I find "1 liter per kWh" to be a bit hard to visualize, but when they talk about building a gigawatt datacenter, that's 278L/s. A typical showerhead is 0.16L/s. The Californian almond industry apparently uses roughly 200kL/s averaged over the entire year -- 278L/s is enough for about 4 square miles of almond orchards.
So it seems like a real thing but maybe not that drastic, especially since I think the hyperscaler numbers are better than this.
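(The conversion behind those figures, using the 1 L/kWh and 0.16 L/s showerhead numbers from the comment; the latent-heat note assumes purely evaporative cooling:)

    # 1 liter of cooling water per kWh, at gigawatt scale
    liters_per_kwh = 1.0
    datacenter_gw = 1.0
    kwh_per_second = datacenter_gw * 1e6 / 3600         # 1 GW = 1e6 kWh per hour
    liters_per_second = kwh_per_second * liters_per_kwh
    print(round(liters_per_second, 1), "L/s")            # ~278 L/s
    print(round(liters_per_second / 0.16), "showerheads running continuously")   # ~1,700

    # For scale: evaporating 1 L of water absorbs ~2.26 MJ (~0.63 kWh), so
    # ~1-2 L/kWh is about what pure evaporative heat rejection would imply.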
>The report also finds that the total energy used to field a Gemini query has fallen dramatically over time. The median Gemini prompt used 33 times more energy in May 2024 than it did in May 2025, according to Google.
The numbers are cute but we can't actually do anything with them without those details. At least an average could be multiplied by the # of queries to get the total usage.
Still, there are a lot of unanswered questions here, and it's up in the air precisely how this stuff will continue to integrate into services we already use, or habits we have yet to form at large. What does that scaling look like?
But by far the most troubling thing is the continued combination of flippancy, defensiveness, or silence we get from the AI people about even attempting to talk about this. If you are a True Believer, don't you want this to be something that is tackled head-on, rather than tucked away? When this has come up before, I always end up seeing a bunch of guys who essentially leave the vibe of "well, I am plenty above sea level, my AC is pumping just fine, and I just simply don't care because my productivity has doubled!"
Like, isn't this community supposed to be excited about the future, eager to tackle problems? Or is there maybe some intrinsic solipsism to the impressive chatbots that ultimately instills this kind of attitude in their users? It feels like right when we were culturally about to age out of this particular form of obstinacy, we set ourselves up to create a whole new generation of "global warming is fake news" people. Which is a shame. If you're going to be like this, just go all in on accelerationism in all its pseudo-fascist darkness; don't just borrow a script from baby boomers!
13 more comments available on Hacker News