Key Takeaways
Yes.
...it is a bubble and we all know it.
(I know you have RSUs / shares / golden handcuffs waiting to be vested in the next 1 - 4 years which is why you want the bubble to continue to get bigger.)
But one certainty is the crash will be spectacular.
I would love to see your portfolio, if you wouldn't mind showing the class. Let us see what your allocation reveals about what you really think...
But this is only if the trend-line keeps going, which is a likely possibility given the last couple of years.
I think people are making the mistake of assuming that because AI is a bubble, AI must be completely bullshit. Remember: the internet was a bubble. It ended up changing the world.
Or, you skip all that and just put it all in an S&P 500 fund.
Because of the way the AMT (Alternative Minimum Tax) worked at the time, they bought the stock, did not sell, but owed taxes on the paper gain as of the day of purchase. They had tax bills of over $1 million, and even if they sold it all they couldn't pay the bill. This dragged on for years.
https://www.latimes.com/archives/la-xpm-2001-apr-13-mn-50476...
That lesson is part of why I dump my company's shares the first chance I get.
The bubble burst in 2000-2001; Google's IPO was in 2004.
The S&P500 also did not do very well at the time.
That is the problem with bubbles.
Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.
I do believe we are in the build-out phase of the AI bubble, much like the dotcom era, when Cisco routers, Sun Microsystems servers, etc. sold like hotcakes to lay the foundation of the dotcom boom.
Minimum of 3 years, and a hard maximum of 6 years, from now.
We'll see lots of so-called AI companies fold, and there will be a select few winners that stay on.
So I'd put my crash timeline at around 2029 to 2031 for a significant correction that turns into a crash.
It's no wonder that the "AI optimists", unless very tendentious, try to focus more on "not needing to work because you'll get free stuff" rather than "you'll be able to exchange your labor for goods".
How about when offices went digital? All the file runners, calculators, switchboard operators, secretaries, transcribers, etc. Where are they now? Probably not working good jobs in IT. Maybe you will find them bagging groceries past retirement age today.
What a wild and speculative claim. Is there any source for this information?
1. Most of these companies are AI companies & would want to say that to promote whatever tool they're building
2. Selection bias, because YC is looking to fund companies embracing AI
3. Building a greenfield project with AI to the quality of what you need to be a YC-backed company isn't particularly "world-class"
3. You are absolutely right. New startups have greenfield projects that are in-distribution for AI. This gives them faster iteration speed. This means new companies have a structural advantage over older companies, and I expect them to grow faster than tech startups that don’t do this.
Plenty of legacy codebases will stick around, for the same reasons they always do: once you’ve solved a problem, the worst thing you can do is rewrite your solution to a new architecture with a better devex. My prediction: if you want to keep the code writing and office culture of the 2010s, get a job internally at cloud computing companies (AWS, GCP, etc). High reliability systems have less to gain from iteration speed. That’s why airlines and banks maintain their mainframes.
"4.1. Generally. Customer and Customer’s End Users may provide Input and receive Output. As between Customer and OpenAI, to the extent permitted by applicable law, Customer: (a) retains all ownership rights in Input; and (b) owns all Output. OpenAI hereby assigns to Customer all OpenAI’s right, title, and interest, if any, in and to Output."
Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If that's not the case, probably not.
But significantly editing LLM generated code _should_ make it your copyright again, I believe. Hard to say when this hasn't really been tested in the courts yet, to my knowledge.
That said, you don't necessarily always have a 100% deterministic build when compiling code either.
So in my understanding (not as a lawyer, but someone who's had to deal with legal issues around software a lot), if you _save_ all the inputs that will lead to the LLM creating pretty much the same system with the same behaviour, you could probably argue that it's a derivative work of your input (which is creative work done by a human), and therefore copyright protected.
If you don't keep your input, it's harder to argue because you can't prove your authorship.
It probably comes down to the details. Is your prompt "make me some kind of blog", that's probably too trivial and unspecific to benefit from copyright protection. If you specify requirements to the degree where they resemble code in natural language (minus boilerplate), different story, I think.
(I meant to include more concrete logic in my post above, but it appears I'm not too good with the edit function, I garbled it :P)
If the prompt makes the output a derivative, then the rest is also derivative.
https://www.legalzoom.com/articles/what-are-derivative-works...
Courts have decided they're new works which are not copyrightable.
Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.
Which is great! But it's not a +1 for AI, it's a -1 for them.
"Which is great! But it's not a +1 for AI, it's a -1 for them." Is you, right?
But that's nothing new. I've been working that way for several decades now.
[1] "Faulty" hardware found in the real world can sometimes break this assumption, but in theory at least. But a C compiler can change the assumption of determinism under faulty hardware too.
Did you do that stupid HN thing where you failed to read the entire comment and then went off to try it on faulty hardware?
Symbolica is working on more deterministic/quicker models: https://www.symbolica.ai
Are they, though? Obviously they are in some cases, but it has always been held that a natural language compiler is theoretically possible. And a natural language compiler fundamentally cannot be deterministic. It is quite apparent that determinism is not what makes a compiler.
Wow. No, I actually don't want to participate in a discussion where the default is random hostility and immediate personal attack. Sheesh.
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Please don't post shallow dismissals...
Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
Incorrect. LLMs are designed to be deterministic (when temperature=0); they are only non-deterministic if you choose to make them so. Which is no different in the case of GCC: you could add all kinds of random conditionals if you had some reason to want to make it non-deterministic.
There are some known flaws in GPUs that can break that assumption, but in theory (and where you have working, deterministic hardware) it absolutely is true. GCC also stops being deterministic when the hardware breaks down, so...
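For concreteness, here is a minimal sketch of what "deterministic when temperature=0" means in practice, assuming the official openai Python client with an API key in OPENAI_API_KEY; the model name and seed value are illustrative, and the seed is only a best-effort reproducibility hint:

    # Minimal sketch, not a guarantee: temperature=0 means greedy decoding,
    # so repeated calls should agree, modulo the GPU/batching caveats above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",                           # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,                                 # greedy decoding: no sampling randomness
            seed=1234,                                     # best-effort reproducibility hint
        )
        return resp.choices[0].message.content

    a = ask("Write a Python one-liner that reverses a string.")
    b = ask("Write a Python one-liner that reverses a string.")
    print(a == b)  # usually True at temperature=0, though the API does not guarantee it

Even then, floating-point reduction order and server-side batching can occasionally shift results, which is exactly the hardware-level caveat above.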
It mostly comes down to being able to concisely and eloquently define what you want done. It is also important to understand the default tendencies and biases of the model, so you know where to lean in a little. Occasionally you need to provide reference material.
The capabilities have grown dramatically in the last 6 months.
I have an advantage because I have been building LLM-powered products, so I know mechanically what they are and are not good at. For example: want it to wire up an API with 250+ endpoints with a harness? You'd better create (or have it create) a way to cluster and audit coverage.
Generally, the failures I hear about most often from "advanced" programmers are things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.
Actually, one thing most people don't understand: they try to say "Do (A), don't do (B)", etc., defining granular behavior, which is fundamentally a brittle way to interact with these models.
Far more effective is defining the persona and motivation for the agent. This creates the baseline behavior profile for the model in that context.
Not "don't make race conditions", more like "You value and appreciate elegant concurrent code."
I think what you're referring to is the transition from "write code that does X", which is very concrete, to "trick an AI into writing the code I would have written, only faster", which feels like work that's somewhere between an art form and asking a magic box to fix things over and over again until it stops being broken (in obvious ways, at least).
Understandably people that prefer engineered solutions do not like the idea of working this way very much.
1. Based on how the engineer just responded to my comment, what is the understanding gap?
2. How do I describe what I want in a concise and intuitive way?
3. How do I tell an engineer what is important in this system and what are the constraints?
4. What assumptions will an engineer likely make that will cause me to have to make a lot of corrections?
Etc. This is all human-to-human.
These skills are all transferrable to working with an LLM.
So I guess if you are not used to technical leadership, you may not have used those skills as much.
We had a method for this before LLMs; it was called "Haskell".
Also, in a pure prose-to-code case, Claude wrote up a concurrent data migration utility in Go. When I reviewed it, it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess that could not be gracefully killed. I would have written it faster by hand, no doubt. I think I know more now and the calculus may be shifting on my AI usage. However, the following day, my colleague needed a nearly identical temporary tool. A 45-minute session with Claude of "copy this thing but do this other stuff" easily saved them 6-8 hours of work. And again, that was just talking with Claude.
I am doing a hybrid approach, really. I write much of my scaffolding, I write example code, I modify quick things the AI made to be more like I want, I set up guard rails and some tests, then have the AI go to town. Results are mixed but trending up still.
FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.
If companies want to value something as dumb as LoC, then they get what they incentivized.
I think the difficulty is exercising the judgement to know where that productive boundary sits. That's more difficult than it sounds because we're not used to adjudicating machine reasoning which can appear human-like ... So we tend to treat it like a human, which is, of course, an error.
Some things are best written by yourself.
And this is with the mighty claude opus 4.5
All of these have a non-trivial learning curve and/or poor and patchy docs.
I could master all of these the hard way, but it would be a huge and not very productive time sink. It's much easier to tell a machine what I want and iterate with error reports if it doesn't solve my problem immediately.
So is this AGI? It's not self-training. But it is smart enough to search docs and examples and pull them together into code that solves a problem. It clearly "knows" far more than I do in this particular domain, and works much faster.
So I am very clearly getting real value from it. And there's a multiplier effect, because it's now possible to imagine automating processes that weren't possible before, and glue together custom franken-workflows that link supposedly incompatible systems and save huge amounts of time.
Sounds like the extremely well-repeated mistake of treating everything like a nail because hammers are being hyped up this month.
My layperson anecdote about LLM coding is that using Perplexity is the first time I've ever had the confidence (artificial, or not) to actually try to accomplish something novel with software/coding. Without judgments, the LLM patiently attempts to turn my meat-speak into code. It helps explain [very simple stuff I can assure you!] what its language requires for a hardware result to occur, without chastising you. [Raspberry Pi / Arduino e.g.]
LLMs have encouraged me to explore the inner workings of more technologies, software and not. I finally have the knowledgeable apprentice to help me with microcontroller implementations.
----
Having spent the majority of my professional life troubleshooting hardware problems, I often benefit from rubber ducky troubleshooting [0], going back to the basics when something complicated isn't working. LLMs have been very helpful in this roleplay (e.g. garage door openers, thermostat advanced configurations, pin-outs).
[0] <https://en.wikipedia.org/wiki/Rubber_duck_debugging>
¢¢
She didn't live long enough to see ChatGPT [1] (she would have been flabbergasted at its ability to understand people/situations), but even with her "normal" intelligence she would have been a master to its perceptions/trainings.
[0] "Beyond just teasing."
[1] We did briefly wordplay with GPT-2 right before she died via <http://www.thisworddoesnotexist.com> exchanges, but nothing interactive.
----
About a year later (~2023), my dentist friend experienced a sudden life change (~40); in his grieving/soul-seeking, I recommended that he share some of his mental chaos with an LLM, even just if to role-play as his sick family member. Dr. Friend later thanked me for recommending the resource — particularly "the entire lack of any judgments" — and sharing his own brilliant mirroring/interactions with computer prompts.
----
Particularly as a big dude, it's nice to not always have to be the tough guy, to even admit weakness. Unfortunately, whatever the overall societal benefits of generative AI, I think it is going to increase anti-social behaviour.
j/k don't worry I'm an idiot — but somebody else WILL.
nothing that the LLM is outputting is useful in the hands of somebody who couldn't have done it themselves.
Most apt analogy is that of a pilot and autopilot. Autopilot makes the job of the pilot more pleasant, but it doesn't even slightly obviate the need for the pilot, nor does it lower the bar for the people that you can train as pilots.
How so? And in what context?
Where I am, headcount is based on "can we finish and sustain these planned and present required projects". If these automations allow a developer to burn less time, it reduces the need for headcount. As a direct result of this approach to hiring based on need, the concept of a "layoff" doesn't exist where I am.
This is exactly the fallacy, and it's very hard to see why it's a fallacy if you've never professionally written code (and even then).
Software development work fills to occupy the time allotted to it. That's because there is always a tradeoff between time and quality. If you have time available, you will fundamentally alter your approach to writing that piece of software. A rough analogy: air travel doesn't mean we take fewer vacations -- it just means we take vacations to farther away places.
Because of this effect, a dev can really finish a project in as little time as you want (down to a reasonable minimum). It just comes down to how much quality loss and risk can be tolerated. I can make a restaurant website in 1 hour (on Wix/Squarespace) or in 3 months (something hand-crafted and sophisticated). The latter is not "wasted time"; it just depends on where you move the lever.
However, sometimes this is a false tradeoff. It isn't always the case that the place you flew 3 hours to will give you a better vacation than some place you could've driven to in 3 hours. You only hope it's better.
>As a direct result of this approach to hiring based on need, the concept of a "layoff" doesn't exist where I am.
LLMs or not, you could've just hired fewer people and made it work anyway. It's not like if you hired 3 people instead of 6 before the LLM era, it was impossible to do.
The gist of it is that LLMs are mostly just devs having fun and tinkering about, or making their quality of life better. There's only a weak powertrain from that to business efficiency.
This was not necessary or appropriate, and completely discredits your reply.
But if you meant it's inappropriate even as a general statement then I disagree. Some concepts are just difficult to convey or unintuitive if one hasn't actually done the thing.
your boss is going to let you go home if you get all your work done early?
I've taken some pleasure in having GitHub copilot review whitespace normalization PRs. It says it can't do it, but I hope I get my points anyway.
First pass on a greenfield project is often like that, for humans too I suppose. Once the MVP is up, refactor with Opus ultrathink to look for areas of weakness and improvement usually tightens things up.
Then as you pointed out, once you have solid scaffolding, examples, etc, things keep improving. I feel like Claude has a pretty strong bias for following existing patterns in the project.
>I do thoroughly audit all the code that AI writes, and often go through multiple iterations
Does this actually save you time versus writing most of the code yourself? In general, it's a lot harder to read and grok code than to write it [0, 1, 2, 3].
[0] https://mattrickard.com/its-hard-to-read-code-than-write-it
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[2] https://trishagee.com/presentations/reading_code/
[3] https://idiallo.com/blog/writing-code-is-easy-reading-is-har...
To be honest, I don't really have a problem with chunking my tasks. The reason I don't is because I don't really think about it that way. I care a lot more about chunks an AI could reasonably validate. Instead of thinking "what's the biggest chunk I could reasonably ask AI to solve", I think "what's the biggest piece I could ask an AI to do that I can write a script to easily validate once it's done?" Allowing the AI to validate its own work means you never have to worry about chunking again.
For instance, take the example of an AI rewriting an API call to support a new db library you are migrating to. In this case, it’s easy to write a test case for the AI. Just run a bunch of cURLs on the existing endpoint, and then make a script that verifies that the result of those cURLs has not changed. Now, instruct the AI to ensure it runs that script and doesn’t stop until the results are character for character identical. That will almost always get you something working.
Obviously the tactics change based on what you are working on. In frontend code, for example, I use a lot of Playwright. You get the idea.
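A minimal sketch of that record-then-verify harness for the API-migration example above, assuming a locally running service and the requests library; the base URL, endpoint paths, and golden-file name are all hypothetical:

    # Record the old implementation's responses once, then compare the new
    # implementation against them character-for-character.
    import json
    import sys

    import requests

    BASE = "http://localhost:8080"
    ENDPOINTS = ["/api/users/1", "/api/orders?limit=5", "/api/health"]
    GOLDEN = "golden_responses.json"

    def snapshot() -> dict:
        # Fetch every endpoint and keep the raw response body.
        return {path: requests.get(BASE + path, timeout=10).text for path in ENDPOINTS}

    def record() -> None:
        # Capture the behaviour of the existing implementation.
        with open(GOLDEN, "w") as f:
            json.dump(snapshot(), f, indent=2)

    def verify() -> None:
        # Character-for-character comparison against the recording.
        with open(GOLDEN) as f:
            expected = json.load(f)
        actual = snapshot()
        failures = [p for p in ENDPOINTS if actual[p] != expected[p]]
        if failures:
            sys.exit(f"Responses changed for: {failures}")
        print("All responses identical.")

    if __name__ == "__main__":
        record() if "--record" in sys.argv else verify()

Record once against the old code path, then tell the agent it isn't done until the verify run exits clean.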
But yes, I do think that the efficiency gain, purely in the domain of coding, is around 5x, which is why I was able to entirely redesign my website in a week. When working on personal projects I don't need to worry about stakeholders at all.
Is it ego? Defensiveness? AI anxiety? A need to be the HN contrarian against a highly visible technology innovation?
I don't think I understand... I haven't seen the opposite view (AI wastes a ton of time) get hammered like that.
At the very least, it certainly makes for an acidic comments section.
That’s why when folks say that AI has made them 10x more productive, I ask if they did 10 years worth of work in the last year. If you cannot make that claim, you were lying when you said it made you 10x more productive. Or at least needed a big asterisk.
If AI makes you 10x more productive in a tiny portion of your job, then it did not make you 10x more productive.
Meanwhile, the people claiming 10x productivity are taken at face value by people who don’t know any better, and we end up in an insane hype cycle that has obvious externalities. Things like management telling people that they must use AI or else. Things like developer tooling making zero progress on anything that isn’t an AI feature for the last two years. Things like RAM becoming unaffordable because Silicon Valley thinks they are a step away from inventing god. And I haven’t scratched the surface.
You’ll note the pattern of the claims getting narrower and narrower as people have to defend them and think critically about them (5-10x productivity -> 4-5x productivity -> 4-5x as much code written on a side project).
It’s not a personal attack, it is a corrective to the trend of claiming 5,10,100x improvements to developer productivity, which rarely if ever holds up to scrutiny.
What makes you think one year is the right timeframe? Yet you seem to be so wildly confident in the strength of what you think your question will reveal… in spite of the fact that the guy gave you an example.
It wasn’t that he didn’t provide it, it was that you didn’t want to hear it.
It is actually a very forgiving metric over a year because it is measuring only your own productivity relative to your personal trend. That includes vacation time and sick time, so the year smooths over all the variation.
Maybe he did do 5 weeks of work in 1 week, and I’ll accept that (a much more modest claim than the usual 10-100x claimed multiplier).
The trick now is deciding what code to write quickly enough to keep Claude and friends busy.
It used to be "hey, I found an issue..."; now it is more like "here is a PR to fix an issue I saw". The net effort to me is only slightly more. I usually have to identify the problem, and that is like 90% of fixing it.
Add to that the fact that now I can have an AI take a first pass at identifying the problem, with probably an 80%+ success rate.
I-know-what-kind-of-man-you-are.jpeg
You come off as a zealot by branding people who disagree as "haters"
All of the work you described is essentially manual labor. It's not difficult work - just boring, sometimes error prone work that mostly requires you to do obvious things and then tackle errors as they pop up in very obvious ways. Great use case for AI, for sure. This and the fact that the end result is so poor isn't really selling your argument very well, except maybe in the sense that yeah, AI is great for dull work in the same way an excavator is great for digging ditches.
If you ever find yourself at the point where you are insulting a guy's passion project in order to prove a point, you should look deep inside yourself, because you might have crossed the threshold to being a jerk.
For what it's worth, yes, my site does have FOUC and it does have waterfalls. You know what else it has? Users. Both issues you cited existed when I originally wrote the site 10 years ago - it was one of the first serious projects I ever worked on - and I didn't instruct the AI to fix them, because I was busy fixing the things that my actual users cared about.
As for loading slowly -- it loads in 400ms on my machine.
I'm just calling a spade a spade. If you didn't want people to comment on your side project, given your arguments and the topic of discussion, you should just not have posted it in a public forum, or should have done better work.
The problem is that your project has basic performance issues - FOUC, render waterfalls - that are central concerns in modern React development. These aren't arbitrary standards I invented to be mean. They're fundamental enough that React's recent development has specifically focused on solving them.
So when you say I'm inventing quality standards (in your now-deleted comment), or that this is just a passion project so quality doesn't matter, you're missing the point. You can't argue from professional authority that AI makes you more productive without compromise, use your work as proof, and then retreat to "it's just for fun" when someone points out the quality issues. Either it demonstrates your workflow's effectiveness or it doesn't. You can't have it both ways.
The kids' artwork comparison doesn't work either. You're not a child showing me a crayon drawing - you're a professional developer using your work as evidence in a technical argument about AI productivity. If you want to be treated as an experienced developer making authoritative claims, your evidence needs to support those claims.
I'm genuinely not trying to be cruel here, but if this represents what your AI workflow produces when you're auditing the output, it raises serious questions about whether you can actually catch the problems the AI introduces - which is the entire crux of your argument. Either you just aren't equipped to audit it (because you don't know better), or you are becoming passive in the face of the walls of code that the AI is generating for you.
Let's talk a little about FOUC and the waterfall. I am aware of both issues. In fact, they're both on my personal TODO list (along with some other fun stuff, like SSR). I have no doubt I could vibe code them both away, and at some point, I will. I've done plenty harder things. I haven't yet, because I was focusing on stuff that my moderators and users wanted me to do. They wanted features to ban users, a forgot password feature, email notifications, mobile support, dark mode, and a couple of other moderation tools. I added those. No one complained about FOUC or the waterfall, and no one said that the site loaded slowly, so I didn't prioritize those issues.
I understand you think your cited issues are important. But no one who actually uses the site cares. So I didn't do them.
> You can't argue from professional authority that AI makes you more productive without compromise, use your work as proof, and then retreat to "it's just for fun" when someone points out the quality issues
You seem to have missed the point of saying "it's just for fun". My point was this: this project is not something I did in my professional time, with all the resources that would have entailed. It was done in a week of my spare time.
And if it's producing an intern-level artifact for your frontend, what's to say it's not producing similar quality code for everything else? Especially considering frontend is often derided as being easier than other fields of software.
The combination of which, deep training dataset + maps well to how AI "understands" code, it can be a real enabler. I've done it myself. All I've done with some projects is write tests, point Claude at the tests and ask it to write code till those tests pass, then audit said code, make adjustments as required, and ship.
That has worked well and sped up development of straightforward (sometimes I'd argue trivial) situations.
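As an illustration of that tests-first loop, this is the kind of pytest spec a human might hand over; the module path and the slugify function are hypothetical, and the import is expected to fail until the agent writes the code:

    # Illustrative spec for the "write tests, let the agent satisfy them" loop.
    import pytest

    from myproject.text import slugify  # hypothetical: the agent's job is to make this pass

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rust: Fast & Safe!") == "rust-fast-safe"

    def test_collapses_whitespace():
        assert slugify("  a   b  ") == "a-b"

    @pytest.mark.parametrize("bad", ["", "   ", "!!!"])
    def test_rejects_empty_or_symbol_only_input(bad):
        with pytest.raises(ValueError):
            slugify(bad)

The audit step then reads like reviewing someone else's patch against a spec you already trust.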
Where it falls down is complex problem sets and major refactors that cross-cut multiple interdependent pieces of code. It's also less robust with less popular languages (we have a particular set of business logic in Rust due to its sensitive nature and need for speed, and it does a not-great job with that), plus a host of other areas where I have hit its limitations.
Granted, I work in a fairly specialized way and deal with a lot of business logic / rules rather than boilerplate CRUD, but I have hit walls on things like massive refactors in large codebases (50K is small to me, for reference).
You are stuck in a very low local maximum.
You are me six months ago. You don’t know how it works, so you cannot yet reason about it. Unlike me, you’ve decided “all these other people who say it’s effective are making it up”. Instead ask, how does it work? What am I missing.
When people say coding is slow, that usually means they're working on some atrocious code (often of their own making), while using none of the tools for fast feedback (Tests, Linters,...).
Every time I try to use AI it produces endless code that I would never have written. I’ve tried updating my instructions to use established dependencies when possible but it seems completely averse.
An argument could be made that a million lines isn’t a problem now that these machines can consume and keep all the context in memory — maybe machines producing concise code is asking for faster horses.