How to Argue with an AI Booster
Posted 4 months ago · Active 4 months ago
wheresyoured.at · Tech · story
calm · mixed · Debate · 60/100
Key topics
Artificial Intelligence
Critique
Technology Adoption
The article discusses how to argue with "AI boosters", individuals who promote AI excessively; the discussion revolves around the nuances of AI adoption, its actual capabilities, and the motivations behind its promotion.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion · First comment after 35m · Peak: 4 comments in the 0-2h window · Avg 1.8 comments per period
Key moments
- Story posted: Sep 2, 2025 at 12:39 PM EDT (4 months ago)
- First comment: Sep 2, 2025 at 1:14 PM EDT (35m after posting)
- Peak activity: 4 comments in the 0-2h window (hottest stretch of the conversation)
- Latest activity: Sep 3, 2025 at 11:53 AM EDT (4 months ago)
>So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.
>No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality.
He has to include that section as CYA against people saying they legitimately like AI, but if he made it more prominent it might start to complicate the narrative and let air out of the balloon.
I think the same sort of "peaked in high school" folks would be dazzled by LLMs, so that overlap seems natural to me.
For example, under “ChatGPT Is So Popular”, they disagree with the premise, then use the argument that “ChatGPT was marketed with lies” as evidence. The latter argument is well researched, but it is simply out of place, leaving nothing to support their disagreement.
In the past, chess programs advanced steadily in Elo rating as a result of hardware improvements, allowing the year they would beat the human champion to be predicted fairly well, and now AI is advancing on something like IQ in a similar way (a toy version of that extrapolation is sketched below).
I'm not sure Zitron gets that - he acts like it's just some novel software we've come up with that isn't really that good. That's sort of like thinking a 1975 chess program was just some software that wasn't very good, and so talk of computers getting better than humans was nonsense.
Of course Deep Blue beat Kasparov in 1997, and now chess computers are around Elo 3500 against about 2800 for the human champion. Something similar will probably happen with AI.
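To make the extrapolation argument concrete, here is a minimal sketch of the kind of straight-line Elo projection the comment describes. The year/rating pairs are rough, illustrative figures rather than sourced data, and the only assumption beyond the comment itself is the use of numpy for the line fit.

    # Toy illustration of the "steady Elo growth predicts the crossover year" argument.
    # The year/rating pairs below are rough, illustrative guesses, not sourced data.
    import numpy as np

    years = np.array([1978, 1983, 1988, 1993, 1997], dtype=float)
    engine_elo = np.array([1800, 2100, 2400, 2600, 2850], dtype=float)

    human_champion_elo = 2800.0  # roughly the human world champion's level in that era

    # Fit a straight line to the engine ratings and solve for the crossover point.
    slope, intercept = np.polyfit(years, engine_elo, 1)
    crossover_year = (human_champion_elo - intercept) / slope

    print(f"Engines gain ~{slope:.0f} Elo per year")
    print(f"Projected to pass the human champion around {crossover_year:.0f}")

With those made-up numbers the fit lands in the mid-1990s, roughly where Deep Blue actually arrived; the comment's point is that a similar trend line is now being drawn for AI capability, whether or not the analogy holds.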