The Case Against Generative AI
Posted 3 months ago · Active 3 months ago
wheresyoured.at · Tech · story
Sentiment: skeptical / mixed
Debate · 80/100
Key topics
Generative AI
LLMs
AI Ethics
The article 'The Case Against Generative AI' argues against the value of generative AI, sparking a discussion about the validity of that case, its potential consequences, and the author's repetitive arguments.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 25m
Peak period: 4 comments (2-4h)
Avg / period: 2
Comment distribution: 12 data points
Based on 12 loaded comments
Key moments
- 01 Story posted: Sep 29, 2025 at 1:58 PM EDT (3 months ago)
- 02 First comment: Sep 29, 2025 at 2:23 PM EDT (25m after posting)
- 03 Peak activity: 4 comments in 2-4h, the hottest window of the conversation
- 04 Latest activity: Sep 30, 2025 at 4:56 PM EDT (3 months ago)
ID: 45416764 · Type: story · Last synced: 11/20/2025, 2:46:44 PM
To be clear, I started out as a fan of Gary Marcus and Ed Zitron, and a rabid anti-AI hater, because I tried GPT-3.5 soon after it was released and was extremely unimpressed with its capabilities. But after a while I grew uncomfortable with my closed-mindedness and decided to give the tools a fair shake. By the time I did, the capabilities had expanded so much that I was genuinely impressed, and the more I stress-tested them, the more nuanced my understanding became: there are serious traps and limits, and serious problems with where the industry is going, but just because a tool is not perfectly reliable does not mean it isn't very useful sometimes.
Totally, but I don't think the average layperson, journalist, or financial analyst will understand any of that nuance (nor pass that info on, because what gets clicks is outrage, and of course, Zitron sells clicks).
The core issue is that OpenAI is committing to spend hundreds of billions on AI data center expansion, money it doesn't have and doesn't appear able to acquire, and this basic fact is being obscured by circular money flows and the extreme murkiness of AI finances [1]. But Zitron muddies this message with excessive detail in trying to provide receipts, and buries all of it behind a more general "AI doesn't work" argument that he seems to want to make but isn't sufficiently well-equipped to make.
[1] The one thing in this article that was new to me is that the Oracle and Nvidia deals with OpenAI may actually be the same thing.
But what's the endgame? Is it to persuade people not to use these things? Make them illegal? Create some other technology that makes them obsolete or non-functional?