The AI Doomers Are Getting Doomier
Posted 4 months ago · Active 4 months ago
theatlantic.com · Tech · story
Sentiment: skeptical / mixed
Debate: 60/100
Key topics: AI, AI Safety, AI Risks
The article discusses the growing concerns about AI risks, while the discussion revolves around the validity of these concerns and potential conflicts of interest.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion · First comment after 36s · Peak period: 2 comments in 0-1h · Avg per period: 1.5
Key moments
- Story posted: Aug 21, 2025 at 8:39 PM EDT (4 months ago)
- First comment: Aug 21, 2025 at 8:40 PM EDT (36s after posting)
- Peak activity: 2 comments in 0-1h (hottest window of the conversation)
- Latest activity: Aug 22, 2025 at 1:38 AM EDT (4 months ago)
ID: 44979882 · Type: story · Last synced: 11/18/2025, 1:47:01 AM
> The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity.
Just to state the obvious: there is a monumental conflict of interest in this sort of organization. Those who present themselves as the solution, even as paladins defending us against a problem, have a vested interest in convincing everyone that the problem is real and enormous, even if it does not exist at all.
The psychosis is worrying, but I think it's an artefact of a new technology that people don't yet have an accurate mental model of (similar to, but worse than, the supernatural powers once attributed to radio, television, etc.). Hopefully AI companies will provide more safeguards against it, but even without them I think people will eventually understand the limitations and realise that it's not in love with them, doesn't have a genius new theory of physics, and makes things up.