An Untidy History of AI Across Four Books
Posted 4 months ago · Active 4 months ago
hedgehogreview.com · Tech · story
Tone: calm/mixed · Debate: 60/100
Key topics
AI History
Book Reviews
AI Ethics
The article discusses four books on AI history and ethics, sparking a discussion among commenters about the books' merits, the history of AI, and the authors' qualifications.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion · First comment: 21m after posting
Peak period: 15 comments in 0-2h
Average per period: 4.3 comments
Comment distribution: 39 data points (based on 39 loaded comments)
Key moments
- 01 Story posted: Sep 19, 2025 at 2:15 PM EDT (4 months ago)
- 02 First comment: Sep 19, 2025 at 2:36 PM EDT (21m after posting)
- 03 Peak activity: 15 comments in 0-2h (hottest window of the conversation)
- 04 Latest activity: Sep 20, 2025 at 11:19 AM EDT (4 months ago)
ID: 45304706 · Type: story · Last synced: 11/20/2025, 5:28:51 PM
AI Snake Oil – by Arvind Narayanan and Sayash Kapoor
Nexus – by Yuval Noah Harari
Genesis – by Henry Kissinger, Craig Mundie, and Eric Schmidt
The Singularity Is Nearer – by Ray Kurzweil
In 2018 or 2019 I saw a comment here saying that most people don't appreciate the distinction between domains with low irreducible error, which benefit from fancy models with complex decision boundaries (like computer vision), and domains with high irreducible error, where such models don't add much value over something simple like logistic regression.
It's an obvious-in-retrospect observation, but it made me realize that this is the source of a lot of confusion and hype about AI (such as the idea that we can use it to predict crime accurately). I gave a talk elaborating on this point, which went viral and then led to the book with my coauthor Sayash Kapoor. More surprisingly, despite being seemingly obvious, it led to a productive research agenda.
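A minimal scikit-learn sketch of that distinction (a toy illustration, not anything from the book or the original comment): the label-noise parameter flip_y stands in for irreducible error, and gradient-boosted trees stand in for the "fancy model".

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def compare(X, y, label):
        # Fit a simple linear model and a flexible nonlinear one on the same split.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                            ("boosted trees", GradientBoostingClassifier())]:
            proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
            print(f"{label:>16} | {name:<19} AUC = {roc_auc_score(y_te, proba):.3f}")

    # High irreducible error: 40% of labels are random noise, roughly linear structure
    # (a crude stand-in for messy social-outcome prediction).
    X, y = make_classification(n_samples=5000, n_features=20, n_informative=5,
                               n_clusters_per_class=1, flip_y=0.4, random_state=0)
    compare(X, y, "noisy labels")

    # Low irreducible error but a complex, nonlinear boundary
    # (a crude stand-in for perception-style tasks).
    X, y = make_classification(n_samples=5000, n_features=20, n_informative=15,
                               n_clusters_per_class=4, flip_y=0.01, random_state=0)
    compare(X, y, "clean, nonlinear")

On typical runs the two models land close together on the noisy task, while the boosted trees open up a large gap on the clean, nonlinear one.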
While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.
So, was this something that you guys were conscious of when you chose your own book's title? How well have you future-proofed your central thesis?
Our more recent essay (and ongoing book project) "AI as Normal Technology" is about our vision of AI impacts over a longer timescale than "AI Snake Oil" looks at: https://www.normaltech.ai/p/ai-as-normal-technology
I would categorize our views as techno-optimist, but people understand that term in many different ways, so you be the judge.
Sounds like a job for the community! Maybe someone will track it down...
Edit: I tried something like https://hn.algolia.com/?dateEnd=1577836800&dateRange=custom&... (note the custom date range) but didn't find anything that quite matches your description.
This was from 2017, and it made such an impression on me that I could find it on my first search attempt!
> A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.
> Dreyfus' last paper detailed the ongoing history of the "first step fallacy", where AI researchers tend to wildly extrapolate initial success as promising, perhaps even guaranteeing, wild future successes.
https://en.wikipedia.org/wiki/Hubert_Dreyfus's_views_on_arti...
> A puzzling characteristic of many AI prophets is their unfamiliarity with the technology itself
> After reading these books, I began to question whether “hype” is a sufficient term for describing an uncoordinated yet global campaign of obfuscation and manipulation advanced by many Silicon Valley leaders, researchers, and journalists
[1]: https://iasculture.org/about/vision
Link: https://ai.stanford.edu/~nilsson/QAI/qai.pdf
2. The idea that using ML to predict outcomes "does not work" is so obviously wrong that I don't really feel the need to argue against it. Perhaps weather models, content moderation systems, NLP analyzers, spatial modelers, and the vast universe of other examples are all not really AI in the first place, in their book? In that case, what is "predictive AI"? Just a few cherry-picked examples of local governments trying to cheap out on bureaucratic processes, I guess?
After this brief intro, we arrive at the meat of the article. Picking on a Harari book seems like beating a dead horse, but y'know, sometimes that's fun! Still, the specific criticisms fall flat:
That's just blatantly untrue, and even when it was true (pre-2023[1]), it's a misleading anecdote that obscures an overwhelming trend.

That's an absurd way to describe modern deep learning, where the Bitter Lesson[2] is cited as gospel. Yes, technically all neural network topologies are laid out by humans at some level, but just saying that is another misleading snippet of the truth at best; even the author later acknowledges "the opacity of machine-learning tools is a genuine technical problem". How can both things be the case?

Yes, he's applying the concept in a broader way than usual. That doesn't make it invalid, and I'm 100% sure that even someone like Harari is well aware of what he's doing there. Describing this as "bungling straightforward ideas" rather than "saying something I disagree with" is, well... bungled!

Finally, there's the criticism about the COMPAS system that ProPublica uncovered (the true GOATs in any story). But what exactly is the criticism there? "He was critical, yes, but not critical in exactly the way I prefer"? That applies to pretty much every book ever in some way or another...
I'll skip going through the other two as closely--because I'm on the anti-markdown site, where walls of text are the only option--but it's all just the same tired assumptions wrapped in a condescending attitude. The writers of Genesis are far from experts in AI, but regardless, the criticisms of both them and Kurzweil come down to variations on one theme: "these people think AI is a big deal, which is obviously wrong, because it's not". I don't think you need me to tell you that this is not a solid argument.
I mean... Ugh. Criticizing the idea of a technological singularity as an "imaginary event" that "consists almost entirely in extrapolation" is again technically true, but the implied pejorative usage of these terms is completely unfounded; it is no more imaginary than climate change, nuclear war, or the simple empirical assumption that the sun will rise again tomorrow.
It's especially tiring to read this when we're literally in the middle of the singularity right now, which is quite obvious if you go by the real meaning of the term ("a point where our models must be discarded and a new reality rules"[3]), rather than the somewhat-bungled description here that relates more to Intelligence Explosions ("sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version, and that this process could continue to the point of vastly exceeding human intelligence"[4]).
The only people who still think the future of AI('s effect on humanity) is predictable post-2022 are the ones who are dogmatically certain that computers as we know them will always be crappy tools at best. I implore you, privileged reader: do not fall into this comforting trap. Face the future with us, despite the terror. Posterity is counting on us.
[1] https://github.com/official-stockfish/Stockfish/commit/af110...
[2] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[3] https://edoras.sdsu.edu/~vinge/misc/singularity.html
[4] https://intelligence.org/files/IEM.pdf
Sorry, why is it misleading to recognise that all neural networks are created manually by human engineers?