Is AI a Bubble? I Didn't Think So Until I Heard of SDD
Posted 3 months ago · Active 2 months ago
hyperdev.matsuoka.com · Tech · story
Tone: calm, mixed · Debate: 60/100
Key topics: Artificial Intelligence, SDD, Large Language Models, Tech Investment
The author questions whether AI is a bubble after learning about SDD, sparking a discussion on the validity of current AI investments and the potential for a market correction.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion · First comment: 12m · Peak period: 20 comments (0-2h) · Avg per period: 5.1
Comment distribution: 51 data points · Based on 51 loaded comments
Key moments
- 01 Story posted: Oct 22, 2025 at 9:29 PM EDT (3 months ago)
- 02 First comment: Oct 22, 2025 at 9:41 PM EDT (12m after posting)
- 03 Peak activity: 20 comments in 0-2h (hottest window of the conversation)
- 04 Latest activity: Oct 23, 2025 at 6:07 PM EDT (2 months ago)
ID: 45677201 · Type: story · Last synced: 11/20/2025, 5:39:21 PM
I would wager a large portion of that would be both generating and then triaging a certain level of institutional "bullshit communications".
For example, the employee must give daily status updates, and they have a three-item list for the day. They use an LLM to pad out their three-item bullet list into prose that sounds smart and active and impressive.
Then their manager says "TLDR" and uses an LLM to "summarize" it back into almost the original three-item bullet list. Each employee believes they've "saved time" using the tool, but it's a kind of broken-window fallacy [0].
[0] https://en.wikipedia.org/wiki/Parable_of_the_broken_window
But command of human attention is another matter: it can be quantified, and it is command of attention that defines the value of FAANG+.
The AI boosters hype it in such terms, both intra-model and extra-model: we track the attention and sell seats in the musical chairs of "investment", leaving it to users to explain the productivity gains to themselves.
It's a circus.
But whether $THING should be adopted and whether it does increase long-term productivity is nowhere near as clear-cut.
Ironically you could get the same effect and save compute fees by simply having programmers stay home one day a week.
[1] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
> We do not provide evidence that: AI systems do not currently speed up many or most software developers
> Most alarmingly, the METR study found experienced developers took 19% longer with AI tools, despite believing they’d sped up by 20%.
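A back-of-the-envelope on the two METR numbers quoted above. This is a sketch under one assumption of mine: that both percentages are measured relative to the same no-AI baseline task time.

```python
# Normalize the no-AI baseline task time to 1.0.
baseline = 1.0
actual_with_ai = baseline * 1.19     # measured: tasks took 19% longer
perceived_with_ai = baseline * 0.80  # believed: a 20% speedup

# Ratio of actual time to self-reported time: how far off the
# developers' perception was.
perception_gap = actual_with_ai / perceived_with_ai

print(f"actual/perceived time ratio: {perception_gap:.2f}")  # 1.49
```

Incidentally, a 19% slowdown means throughput of 1/1.19 ≈ 0.84 of baseline, which is close to the 0.80 you'd get by staying home one day in five, hence the joke upthread.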
> We do not provide evidence that: AI systems do not currently speed up many or most software developers
You act like that's a "gotcha" instead of a normal thing. All they mean by that [0] is that they can't mathematically prove their developers/tasks/tools are representative of the majority of worldwide developers/tasks/tools.
You're demanding an unreasonable level of investment for anyone to "prove a negative."
The burden of proof lies on the people claiming zillion-fold boosts in productivity across "enough" places that they don't really define. This is especially true because they could profit in the process, as opposed to other people burning money to prove a point.
[0] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
That said, I think LLMs will have a bigger effect than a self-balancing scooter, both positively and negatively.
> We do not provide evidence that: AI systems do not currently speed up many or most software developers
But most fraud is legal, and most illegal fraud never sees legal action taken against it.
So like, 2-10xing the productivity of someone who gets paid <<< a software engineer? And who is more 1:1 with the profit from each widget? They don't care that much.
But juicing an already extremely juicy fruit? Now that's worth 70x ARR.
https://www.reddit.com/r/NoCodeSaaS/comments/1mvb3tm/chamath...
Professional coders are the only ones who can get anything real out of LLMs (well, maybe a 19% decrease in productivity, but it's not fake).
Do you have a source on that "impact is monumental for non-developers"? Because most of what I've been reading says that group is mostly producing vaporware.
Yesterday I was talking with the head of a company that has been developing software for 30 years, and he told me they had some old assembly code left, the original programmer having retired years ago, so they converted the code with the help of an LLM...
I use LLMs to create scripts for various manipulations and data extraction from documents; once I managed to create a working "Wikipedia" after several attempts to make the prompt as accurate as possible...
The question is not whether I could have done it, but how long it would have taken me without an LLM.
But looking for bugs in "someone else's" code I created is a scary, frustrating job, especially when the LLM hallucinates extensively somewhere in the middle of the work :)
Yes, it lets them solve their desperate need to combine spreadsheets, creating one big merged spreadsheet with pockets of corrupted data and fake last names. It either lies undiscovered until clients complain, or I notice and have to redo the work for them properly, without an LLM. :p
Meanwhile my company finally shut down its last low-code instance this year, after 10 years of struggling to maintain a mountain of unmanageable, slow-ass code, all the while paying through the nose for the licenses. Spoiler alert: literally zero non-programmers in my company ever constructed even a single low-code "thing", as was advertised initially. Everyone who worked with it was a programmer, or a QA who could program (aka an SDET), and they simply had to suffer all the limitations of the low-code platform while gaining none of its supposed benefits.
I suspect it will end up the same way with LLM-generated code. At most, managers will generate some non-operational prototype of an app and then throw it to the dev team, who will have to spend time deconstructing it and then rewriting it from scratch (with or without LLM assistance).
The nature of development work is changing. A project can be 100% written by AI but guided so closely by humans that the process wasn’t much faster. Alternatively, AI can make a project faster by helping with architecture and other things beyond writing code.
When people quote “70% AI coded”, it implies that 100% is some mythical goal that means AI is writing all code unsupervised. But most production AI code at the moment is still developed in lockstep with humans.
They leverage/exploit the fact that it takes time to Verify anything. [Hype-verification gap]
Their theory goes: if you use that time delay to beat the drum louder than the next guy, you have a shot at attracting more customers, investors, employees, and government support than if you don't. Those who don't do it appear less Ambitious or Capable. Exploit and amplify that too.
Chimps have 3 inch brains. And this is all easier to do than solving quantum field equations. So they do it happily, cluelessly, patting each other on the head about how fascinating they are.
As soon as one chimp troupe (corp) does it well, everyone else does the same, and that's when the entire system experiences a phase transition.
It's no more about what individual chimps or companies say or think, a super structure has emerged, which traps everyone Inside the super structure, into patterns they can't escape.
Stories emerge about what the super structure is then doing. Outsider and Insider stories start diverging. And we get a bifurcation point, a moment in chaos theory where one neat line splits into 2, 4, 1000, until you can't tell what's signal and what's noise [1]. It's all feedback, where every claim feeds on every other claim, a forest of branches growing out of thin air, and verification moves too slowly to prune it.
So what's the message to the kids watching it all and getting absorbed by it? Recognize when you have jumped into the chaos stream between bifurcation and verification. When everything starts to sound urgent, revolutionary, world-changing, but no one can show you how or why, you're standing dead center in chaos. Stories multiply faster than facts can catch up, and everyone's pretending to see patterns that aren't there yet. The louder it gets, the less meaning there is. That's your cue to walk sideways. Don't let it drain your attention, time, and energy. There are better things to do in life.
[1] https://upload.wikimedia.org/wikipedia/commons/thumb/c/c8/Lo...
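The commenter's [1] link points at the logistic map's bifurcation diagram. For readers unfamiliar with the metaphor, here is a minimal sketch (my illustration, not the commenter's) of how one line splits into 2, then 4, then noise as a parameter grows:

```python
def attractor(r, x=0.5, burn_in=1000, samples=64):
    """Iterate the logistic map x -> r*x*(1-x) and return the distinct
    long-run values (the attractor) after discarding the transient."""
    for _ in range(burn_in):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(samples):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

# Fixed point, 2-cycle, 4-cycle, then chaos: 1, 2, 4, many values.
for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r={r}: {len(attractor(r))} long-run value(s)")
```

The period-doubling cascade is what "splits into 2, 4, 1000" refers to: past a threshold the system's behavior fragments faster than any observer can track, which is the comment's picture of hype outrunning verification.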