"Be Different" Doesn't Work for Building Products Anymore
Key topics
The article argues that 'being different' is no longer a viable strategy for building successful products due to the proliferation of AI-generated software, sparking a heated debate among commenters about the validity of this claim and its implications for the tech industry.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 9m after posting
- Peak period: 128 comments in 0-12h
- Avg / period: 33.5 comments
Based on 134 loaded comments
Key moments
1. Story posted: Oct 6, 2025 at 12:09 PM EDT (3 months ago)
2. First comment: Oct 6, 2025 at 12:18 PM EDT (9m after posting)
3. Peak activity: 128 comments in 0-12h (hottest window of the conversation)
4. Latest activity: Oct 10, 2025 at 6:16 PM EDT (3 months ago)
If you cannot outcompete "AI slop" on merit over time (uptime? accuracy? data loss?), then the AI slop is not actually sloppy...
If your runway runs out before you can prove your merit over that timeframe, but you are convinced that the AI is slop, then you should ship the slop first and pivot once you get $$ but before you get overwhelmed with tech debt.
Personally, I love that I can finally outcompete companies with large developer teams. Unlike many posters here, I was limited by the time (and mental space) it takes to do the actual writing.
Then again this is also often a flaw with human-generated slop, so it is hard to say what any of this really means.
Copying and playing catch up was possible before AI.
I really wish we could downvote submissions like this. It adds nothing of meaning to the discussion of the subtleties of competitive product development.
What industry are you building in? And have you been building in it a while or is it a new startup?
Same thing with competitor monitoring. These tools require scraping multiple sites, checking X, Facebook, Jobs sites, Crunchbase, etc, aggregating data and displaying and making sense of changes. And the same multi-process management, queuing, and Stripe integrations.
A few years ago, these would both fit into businesses requiring many months of development to get it all running. Now we are seeing dozens of companies emerging in each of these categories each month as they take weeks to build. And if one finds a cool aha (a new integration or graph or UX flow or positioning) the others can quickly follow in a week or less of AI-agent coding.
There are dozens of other categories where this is happening too.
The hard part of figuring out the nuances of the APIs and integrations and retries and AWS integrations and Rabbit MQ configurations and corner cases can all be done by AI with the right context.
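The retry corner case alone is a good illustration of those nuances. Here is a minimal sketch of one common pattern, exponential backoff with full jitter; it is a generic example, not code from the thread or from any particular API client:

```python
import random
import time

# A hedged sketch of one "nuance" AI is often asked to handle: retrying a
# flaky call with exponential backoff and full jitter.  Generic pattern,
# not any specific library's client code.

def retry(fn, attempts=5, base_delay=0.1, max_delay=5.0, sleep=time.sleep):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            delay = min(max_delay, base_delay * 2 ** attempt)
            sleep(random.uniform(0, delay))  # full jitter spreads out retries

# Example: a call that fails twice with a transient error, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = retry(flaky, sleep=lambda _: None)  # skip real sleeping in the demo
print(result)  # ok
```

The `sleep` parameter is injected so the demo (and any test) runs instantly; in production the default `time.sleep` applies.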
Maybe it's just that the AI noise has been cranked to 11, but it sure feels like there's something fundamentally different about building and selling software today vs. the last time I was building new products back in 2015. A decade is a long time, but it didn't feel nearly so weird even as recently as 2021/2022. That makes me think it's the AI slop noise, but maybe I'm incorrect.
People still pay for software, but for high-stakes problems like finding a new job, managing your mental health, or caring for an aging parent, taking the leap on a not-quite-fully-baked product offering seems unrealistic.
Also with Skritter you used a completely different customer acquisition strategy. Longform blog content + SEO is a way different beast than SEM.
The problem is that it's not a very good idea to invest in that way if you both like money and can hold down a corporate job (not saying everyone meets these two criteria BTW).
The Four Steps to the Epiphany is outdated, and The Mom Test is great if you have existing deep industry expertise and credibility. If you lack either, though, it's really not obvious how anyone tests product ideas without 12+ months of investment.
Perhaps I'm out of touch, but I haven't seen this explosion of software competition. I'd LOVE to see some new competitors for MS Office, Gmail, Workday, Jira, EPIC, Salesforce, WebKit, Mint, etc etc but it doesn't seem to be happening.
If that were true, the iOS App Store would currently be flooded with newcomers in niche spaces — workout apps, notes, reminders, etc. And games, my god, there would be vibed clones of every game imaginable.
It's simply not happening.
That’s why this article makes no sense to me. The “Cambrian explosion” was the introduction of the app stores on phones. There are 2 million apps on Apple’s store.
Having said that, I don't think it's all AI (this trend's been going on for a while), nor do I think startups can't thrive—as the pie gets bigger, competitors can carve out yet smaller niches, as the OP points out.
It's not about quality, it's market share, vendor lock-in, people being set in their ways and refusing to change from a known thing in general.
Jira had to get REALLY bad before we switched to Linear for example - and there are still growing pains from heavy Jira users who had very specific ways of working.
The funniest I actually had to deal with was Monday. The very premise is that task management is simple and the visual interface will reflect that. Bright colors, low information density, minimal data model and very screenshotable screens. Then when actually using it for a dev team, the first question is how long we decide to try it before giving a verdict.
> Gmail
It really depends on which features you rely on that aren't IMAP. If it's Google services integration, for instance, there might never be a competitor.
I don't quite know how to articulate this well, but there's something that I'd call a "complexity cliff" in the software business: if you want to compete in certain spaces, you need to build very complex software (even if the software, to the user, is easy to use). And while AI tools can assist you in the construction of this software, it cannot be "vibe coded" or copied whole-cloth - complexity, scale, and reliability requirements are far too great and your potential customer base will not tolerate you fumbling around.
You eventually reach a point where there are no blog posts or stackoverflow questions that walk you through step-by-step how to make this stuff happen. It's the kind of stuff that your company and maybe a few dozen others are trying to build - and of those few dozen, less than 10 are seeing actual success.
I recognized something similar when I first started interviewing candidates.
I try to interview promising resumes even if they don't have the perfect experience match. Something that becomes obvious when doing this is that many developers have only operated on relatively simple projects. They would repeat things like "Everything is just a CRUD app" or not understand that going from Python or JavaScript to C++ for embedded systems was more complicated than learning different syntax for your if blocks and for loops.
The new variant of this is the software developer who has only worked on projects where getting to production is a matter of prompting an LLM continuously for a few months. Do this once and it feels like any problem can be solved the same way. These people are in for a shock when they stray from the common path and enter territory that isn't represented in the training data.
That's not to say something like Figma isn't on an entirely different level, but most apps aren't Figma and don't need to be. Most apps are simple crud apps and if they aren't it's usually because the devs are bad.
It's also worth noting that a crud app can be quite complex too. There can be a lot of complexity even if the core is simple.
I also think that those of us who can recognize simple apps for what they are and design them simply are also the people best equipped to tackle more complex apps. Those guys who can make a simple crud app into an incomprehensible buggy monster certainly can't be trusted with that kind of complexity.
I heard this a lot from candidates who had only worked on software that could be described as an app. They bounced from company to company adjusting a mobile app here, fitting into a React framework there, and changing some REST endpoints.
There is a large world of software out there and not all of it is user-facing apps.
Similar to that thinking, I made a previous comment about how many developers in the "L.O.B. Line-of-Business / CRUD" group are not familiar with "algorithms engineering" types of programming: https://news.ycombinator.com/item?id=12078147
Vibe coding is easiest for CRUD apps. However, it's useless for developing new scientific/engineering code for new system architectures that require combining algorithms & data structures in novel ways that Claude Code has no examples for.
About 75% of the time the code snippets it provided did what it said they did. But the other 25% was killer. Luckily I made a visualization system and was able to see when it made mistakes, but I think if I had tried to vibe code this months ago I'd still be trying.
(These were things like "how can I detect if an arbitrary small circle arc on a unit sphere intersects a circle of arbitrary size projected onto the surface of the unit sphere". With the right MATLAB setup this was easy to visualize and check; but I'm quite convinced it would have taken me a lot longer to understand the geometry and come up with the equations myself than it actually took me to complete the tool)
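The kind of predicate being described can be sketched for the simpler full-circle case. This is my reconstruction, not the commenter's code: handling an *arc* rather than a full circle would additionally require clipping the intersection points to the arc's angular extent.

```python
import math

def circles_intersect_on_sphere(c1, r1, c2, r2):
    """Do the boundaries of two circles on the unit sphere intersect?

    Each circle is given by its center as a direction vector plus an
    angular radius in radians.  The boundaries cross iff the angular
    distance between the centers lies between |r1 - r2| and r1 + r2;
    outside that band the circles are disjoint or nested.
    """
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    a, b = unit(c1), unit(c2)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    d = math.acos(dot)  # angular separation of the centers
    return abs(r1 - r2) <= d <= r1 + r2

pole = [0.0, 0.0, 1.0]
near = [math.sin(0.8), 0.0, math.cos(0.8)]  # center 0.8 rad from the pole
far = [math.sin(1.2), 0.0, math.cos(1.2)]   # center 1.2 rad from the pole
print(circles_intersect_on_sphere(pole, 0.5, near, 0.5))  # True
print(circles_intersect_on_sphere(pole, 0.5, far, 0.5))   # False
```

Note that nothing here is Euclidean 2D geometry; all distances are angles measured along the sphere, which is exactly the part that is easy to get wrong without a visualization to check against.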
One of my standard coding tests for LLMs is a spherical geometry problem: a near-triangle with all three corners being 90 degrees.
Until GPT-5, no model got it right; they only operated in the space of a Euclidean projection. Perhaps notably, while GPT-5 did get it right, it did so by writing and running a Python script that imported a suitable library, not with its own world model.
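The non-Euclidean fact such a test probes can be verified in a few lines. The octant triangle below is my example of a triangle with three right angles, not necessarily the commenter's exact prompt:

```python
import math

# Sanity check of the "near-triangle with three 90-degree corners": the
# octant triangle on a unit sphere (north pole, (1,0,0), (0,1,0)) has three
# right angles, and Girard's theorem gives its area as the spherical excess
# E = (sum of interior angles) - pi, times R^2.
angles = [math.pi / 2, math.pi / 2, math.pi / 2]
excess = sum(angles) - math.pi   # pi/2: impossible in the Euclidean plane
area = excess * 1.0 ** 2         # area = E * R^2; here R = 1
print(area)                      # ~1.5708, one eighth of the sphere's 4*pi
```

In a flat projection the three angles of a triangle must sum to exactly pi, which is presumably why pre-GPT-5 models kept producing contradictions.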
I work on a vision-based traffic monitoring system, and modelling the idea of traffic and events runs into enormous complexity before a word of code is written.
These people are working on problems that have tutorials online and don't know that someone had to make all that.
Custom text editing and rendering is really hard to do well.
Making everything smooth and performant to the point it's best-in-class while still adding new features is... remarkable.
(Speaking as someone who's writing a spreadsheet and slideshow editor myself...among other things)
The custom text rendering bit alone should have been a good cue for this distinction.
But sure, I’m being too pedantic here I suppose.
If you want to know how tough realtime editing is, try making a simple collaborative drawing tool or an online 2-player text adventure game.
There's a reason tutorials for those aren't common.
It's not the pinnacle of complexity, just more complex than your average app.
I suppose technically a database is just a CRUD app
For this type of development you want the DB to handle basically all the heavy lifting, the trick is to design the DB schema well and have the web API send the right SQL to get exactly the data you need for any given request and then your app will generally be nice and snappy. 90-99% of the logic is just SQL.
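A minimal sketch of that division of labor, using SQLite for illustration; the table and column names are invented, and a real app would parameterize its queries:

```python
import sqlite3

# "Let the database do the heavy lifting": the aggregation happens in one
# SQL statement, not in application code, so one request maps to one query
# that returns exactly the rows the view needs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders (customer, total) VALUES
        ('alice', 10.0), ('alice', 15.0), ('bob', 7.5);
""")
rows = conn.execute(
    "SELECT customer, COUNT(*) AS n, SUM(total) AS spend "
    "FROM orders GROUP BY customer ORDER BY spend DESC"
).fetchall()
print(rows)  # [('alice', 2, 25.0), ('bob', 1, 7.5)]
```

The app code stays a thin shell; changing what the endpoint returns means changing the SQL, not rewriting loops over rows.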
For the C# example you'd typically use Entity Framework so the entirety of the DB schema and the DB interaction etc is defined within the C# code.
A lot of apps are just about putting data into a DB and then providing access to that data. Maybe using the data to draw some nice graphs and stuff. That's a crud app.
Far too much of my recent work has been CRUD apps, several with wildly and needlessly overengineered plumbing.
But not all apps are CRUD. Simulations, games, and document editors where the sensible file format isn't a database, are also codebases I've worked on.
I think several of those codebases would also be vulnerable to vibe coding*; but this is where I'd focus my attention, as this kind of innovation seems to be the comparative advantage of humans over AI, and also is the space of innovative functions that can be patented.
* but not all; I'm currently converting an old C++ game to vanilla JS and the web, and I have to carefully vet everything the AI does because it tends to make un-performant choices.
Someone who worked more in embedded systems may say something like “everything is ‘just’ a state machine.”
Everything is a CRUD app if you're high on buzzwords.
B / L / S - Browse / List / Summarize, M / T - Move / Transfer, C / R - Copy / Replicate, A / E - Append / Expand, T / S - Trim / Subtract, P - Process, possibly V / G / D - Visualize / Graph / Display
There are probably other operations that vary from just Create (POST, PUT), Read (GET), Update (PATCH), Delete (DELETE) the way they're interpreted in something like REST APIs.
I agree and disagree here. IMO the sign of experience is understanding which details really matter, which are more or less the same across the different stacks, and when you don't know enough yet to differentiate and need to ask someone else.
That ignores the actual complexity of things, the regulations, and the fact that there are areas where no one will take some vibe coder seriously: you really have to breathe in and out and swim with the right fish to be trusted and considered for doing business.
But AI has huge value in gratuitously bulking out products in ways that are not economically feasible with hand coding.
As an example we are building a golf launch monitor and there is a UI where the golf club's path is rendered as it swings over the surface.
Without AI, the background would be a simple green #008000 rectangle.
With AI I can say "create a lifelike grass surface, viewed from above, where the individual blades of grass range from 2-4 mm wide and 10-14 mm long, are randomly distributed and densely enough placed that they entirely cover the surface, and shadows are cast from ...".
Basically stuff that makes your product stand out, but that you would never invest in putting a programmer onto. The net result is a bunch of complex math code, but it's stuff no human will ever need to understand or maintain.
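The parameter-generation side of such a prompt might look like the sketch below. The blade dimensions come from the comment; the density figure and all names are my assumptions, and rendering is omitted entirely:

```python
import random

# Hedged sketch of procedural-grass parameter generation: widths (2-4 mm)
# and lengths (10-14 mm) are from the prompt above; blades_per_mm2 and all
# field names are invented for illustration.
random.seed(42)  # deterministic output for the demo

def grass_blades(width_mm, height_mm, blades_per_mm2=0.5):
    n = int(width_mm * height_mm * blades_per_mm2)
    return [
        {
            "x": random.uniform(0, width_mm),         # blade root position
            "y": random.uniform(0, height_mm),
            "blade_w": random.uniform(2.0, 4.0),      # 2-4 mm wide
            "blade_len": random.uniform(10.0, 14.0),  # 10-14 mm long
            "angle_deg": random.uniform(0, 360),      # facing direction
        }
        for _ in range(n)
    ]

blades = grass_blades(100, 100)  # a 10 cm x 10 cm patch
print(len(blades))  # 5000
```

The point stands either way: this is exactly the category of throwaway-but-polish-adding code no one would have budgeted an engineer for.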
I built a fitness product eons ago where there were a million rules that determined what should happen when prescribing exercises to athletes (college/pro teams).
If you gave this to an agent today, you would get a tangled mess of if statements that are impossible to debug or extend. This is primarily because LLMs are still bad at picking the right abstraction for a task. The right solution was to build a rules engine, use a constraint solver, and apply some combinatorics.
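The shape of that rules-engine abstraction can be sketched: rules become data instead of control flow, so adding one never touches an if-ladder. Every name and rule below is hypothetical, invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of the "rules engine" shape: each prescription rule
# is a predicate plus an exercise, evaluated as data rather than nested ifs.

@dataclass
class Athlete:
    sport: str
    injured_knee: bool
    weeks_trained: int

@dataclass
class Rule:
    applies: Callable[[Athlete], bool]
    exercise: str

RULES = [
    Rule(lambda a: a.injured_knee, "pool running"),
    Rule(lambda a: not a.injured_knee and a.weeks_trained < 4, "bodyweight squats"),
    Rule(lambda a: not a.injured_knee and a.weeks_trained >= 4, "back squats"),
    Rule(lambda a: a.sport == "basketball" and not a.injured_knee, "box jumps"),
]

def prescribe(athlete: Athlete) -> List[str]:
    # Adding a new rule means appending to RULES, not editing control flow.
    return [r.exercise for r in RULES if r.applies(athlete)]

print(prescribe(Athlete("basketball", False, 6)))
# ['back squats', 'box jumps']
```

A real system would layer a constraint solver on top (e.g. to respect session-length and equipment limits), which is exactly the abstraction choice the comment says LLMs fail to make unprompted.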
LLMs just don't have the taste to make these decisions without guidance. They also lack the problem solving skills for things they've never seen.*
Was 95% of the app CRUD? Sure. But last I checked, CRUD was never a moat.
*I suspect this part of why senior developers are in extremely high demand despite LLMs.
---
Another example: for many probability problems, Claude loves to code up simulations rather than explore closed form solutions. Asking Claude to make it faster often drives it to make coding optimizations instead of addressing the math. You have to guide Claude to do the right thing, and that means you have to know the right thing to do.
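A toy version of that tradeoff, with a dice problem of my own choosing rather than one from the thread:

```python
import random
from fractions import Fraction

# Closed form: P(two fair dice sum to 7), by counting the 36 outcomes.
# Exact and instant; no amount of loop optimization beats this.
favorable = sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == 7)
exact = Fraction(favorable, 36)
print(exact)  # 1/6

# Simulation: approximate, and the error only shrinks like 1/sqrt(n), so
# "make it faster" should mean "do the math", not "tighten the loop".
random.seed(0)
n = 100_000
hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7 for _ in range(n))
estimate = hits / n
print(abs(estimate - float(exact)) < 0.01)  # True, but never exact
```

Guiding the model toward the top half instead of the bottom half is precisely the "you have to know the right thing to do" part.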
And to be clear, this is people using teams of Claude Code agents (either Sonnet 4.5, or Sonnet 5 and 5.5 in the future). Reliability/scale can be mitigated with a combination of a senior engineer or two, AI coding tools like the latest Claude Code, and the right language and frameworks. (Depending on the scale, of course.) It no longer takes a team of senior and mid-level engineers many months. The barriers even for that have been reduced.
Completely agree that using Lovable, Bolt, etc aren't going to compete except as part of noise, but that's not what this article is saying.
It's a poor choice of word to use as a clearly and universally understood axiom.
Doing only what AI can generate will only generate the average of the corpus.
Maybe it's part of the reason folks with some amount of meaningful problem-solving experience are having completely different results when paired with AI: there is someone behind the steering wheel actually pushing, learning with it, and directing it.
If your product doesn't solve problems on the difficult side of the "complexity cliff" then vibe coders will copy it and drive your profit to zero.
My view is that every company has its own DNA and that the web presence has to put this DNA in code. By DNA, I mean USP or niche. This USP or niche is tantamount to a trade secret but there doesn't even have to be innovation. Maybe there is just an excellent supplier arrangement going on behind the scenes, however, for projects, I look for more than that. I want an innovation that, because I understand the problem space and the code platform, I can see and implement.
A beginner-level version of this: a simple job application form. On the backend I put the details from the browser session into the form data. Therefore, HR could quickly filter out applicants for a local job who lived in a foreign country. They found this to be really useful. Furthermore, since some of our products were for the Apple ecosystem, I could get the applicant's OS in the form too, plus how long they agonised over filling in the form. These signals were also helpful.
To implement this I could use lame Stack Overflow solutions. Anyone scraping the site or even applying had no means of finding out if this was going on. Note the 'innovation' was not in any formal specification, that was just me 'being different'. In theory, my clumsy code to reverse lookup the IP address could have broken the backend form, and, had it done so, I would have paid a price for going off-piste and adding in my own non-Easter Egg.
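A sketch of collecting such signals from request headers; every name here is hypothetical (the original code is not public), and real user-agent detection would use a proper parsing library rather than substring checks:

```python
# Hypothetical sketch of the applicant "signals" described above: OS family
# from the User-Agent header, time spent on the form, and the client IP for
# a later reverse lookup.  Function and field names are invented.

def applicant_signals(headers, form_opened_at, submitted_at):
    ua = headers.get("User-Agent", "")
    if "Macintosh" in ua or "Mac OS X" in ua:
        os_family = "macOS"  # useful when hiring for Apple-ecosystem products
    elif "Windows" in ua:
        os_family = "Windows"
    elif "Linux" in ua:
        os_family = "Linux"
    else:
        os_family = "unknown"
    return {
        "os": os_family,
        "seconds_on_form": round(submitted_at - form_opened_at, 1),
        # First hop of X-Forwarded-For; feed this to a reverse IP lookup.
        "ip": headers.get("X-Forwarded-For", "").split(",")[0].strip(),
    }

sig = applicant_signals(
    {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
     "X-Forwarded-For": "203.0.113.9"},
    form_opened_at=0.0,
    submitted_at=142.5,
)
print(sig)  # {'os': 'macOS', 'seconds_on_form': 142.5, 'ip': '203.0.113.9'}
```

As the comment notes, nothing on the client side reveals that any of this is happening.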
I would not say the above example was encoding company DNA, but you get the idea. How would this stack up compared to today's AI driven recruitment tools?
As a candidate I would prefer my solution. As the employer, I too would prefer my solution, but I am biased. AI might know everything and be awesome at everything, however, sometimes human problems require human solutions and humans working with other humans to get something done.
Would I vibe code the form? Definitely no! My form would use simple form elements and labels with no classes, div wrappers or other nonsense, to leverage CSS grid layout and CSS variables to make it look good on all devices. It took me a while to learn to do forms this way, with a fraction of the markup in a fraction of the time.
I had to 'be different' to master this and disregard everything that had ever been written on Stack Overflow regarding forms, page layout and user experience.
AI does not have the capability to do super-neat forms like mine because it can't think for itself, just cherry-pick Stack Overflow solutions.
I liken what you describe with running out of Stack Overflow solutions to hill walking ('hiking'). You start at the base of the trail with vast quantities of others that have just stepped out of the parking lot, ice cream cones in hand. Then you go a mile in and the crowd has thinned. Another mile on and the crowd has definitely thinned, big time. Then you are on the final approach to the summit and you haven't seen anyone for seemingly hours. Finally, at the summit, you might meet one or two others.
Stack Overflow and blog posts are like this, at some stage you have to put it away and only use the existing code base as a guide. Then, at another level, you find specifications, scientific papers and the like to guide you to the 'summit'. AI isn't going to help you in this territory and you know you haven't got hundreds of competitors able to rip off your innovation in an instant.
I wonder if we can use this as a "novelty" test. If AI can explain or correct your idea, it's not a novel idea.
There are A LOT of businesses (even big ones managing money and whatnot) that rely on spreadsheets to do so much. Could this have been an app/service/SaaS/whatever? Probably.
What if these orgs can (mostly) internally solidify some of these processes? What if they don't need an insanely expensive Salesforce implementor to add "custom logic"?
A lot of the time, companies will replace "complex software" with a half-complex process!
What if they don't need Salesforce at all because they need a reasonably simple CRM and don't want to (or shouldn't) pay $10k/seat/year?
There will still be very differentiating apps and services here and there, but as time moves on these "technological" advantages erode, and with AI they erode way faster.
To pick just one claim:
“Big companies used to move slowly, but now a ragtag team of two developers at a large firm can whip up something that looks top-of-market to the untrained eye in a matter of weeks.”
This is just pure speculation with no consideration of success or longevity. Big companies are going faster now? Where? Which ones?
AI coding allows you to build prototypes quickly. All the reasons big companies are slow haven’t budged.
Yes, but there is a more fundamental problem. The claim doesn't even make sense:
>“Big companies used to move slowly, but now a ragtag team of two developers at a large firm can whip up something that looks top-of-market to the untrained eye in a matter of weeks.”
That was never the problem. I mean really, what is the implication of this? That big companies moved slowly because the developers were slow? What? No one thinks that, including the author (I imagine).
It's from many layers of decision-making, risk aversion, bureaucracy, coordination across many teams, technical debt, internal politics, etc.
This manifests as developers (and others) feeling slowed down by the weight of the company. Developers (and others) being relatively fast is precisely how we know the company is slow. So adding AI to the development workflow isn't going to speed anything up. There are too many other limiting factors.
AI has not helped me at all in my corporate job, however on my start up it has been a game changer.
https://www.fiercebiotech.com/biotech/fierce-biotech-fundrai...
This software... Is it in the room with us right now?
Where's the shovelware? Why AI coding claims don't add up https://mikelovesrobots.substack.com/p/wheres-the-shovelware... https://news.ycombinator.com/item?id=45120517
I would add hardware products to that list. While they also have become somewhat easier and cheaper to create, the threshold is still much higher than for software and SaaS products.
There are more and more products that I wish existed, and one search on AliExpress returns exactly what I wanted plus some more. 5 years ago the product merely existed and quality was meh; nowadays it's pretty much on par.
I had to recently look for camera gear, and the amount of adapters or quirky tripods is just great. Ulanzi for instance is a pretty well known brand at this point.
The easier it is to make a software product, the harder it will be to differentiate between what’s good and what’s hastily assembled.
You'll find hundreds of apps for every single keyword. But on the other hand, it's still a winner takes all market, so the top 1-5 per category make ~95% of revenue.
Edit: To clarify, imho these categories have been popular forever, because that's what every new indie dev thinks about first when they have a "great" idea. They're not necessarily tied to vibe coding and would've been released even without AI.
We already had a cambrian explosion of flashlight apps and GPS apps. What is the article talking about?
This seems like the counter-argument... You need to build something incredibly different. You need to message differently, you need to distribute differently.
If the argument is that convincing the world that you're different is harder than ever I buy that. So much fluff and noise out there that it's harder than ever to break through that noise and cut through the skepticism. But for that, it's more important than ever to be different.
https://en.wikipedia.org/wiki/Lehman's_laws_of_software_evol...
AI Slop only has relevance to those that imagine meaning in syntactically correct nonsense. =3
tldr; typical corporate software development babblespeak slop
Everything the author said was just as true pre-GPT. He's imparting basic business knowledge under the guise of "oh, now that there's this AI thing, you can't just build it have users show up."
This article feels like it is targeted at drop shippers, competing on brand or maybe derivative features, rather than ideas.
Citation needed. There was a HN post a few weeks ago (I've lost it since) that said there isn't actually a measurable increase in App Store submissions and other such metrics to indicate that more software applications are being launched, in the last few years.
Also, in my view one of the most overlooked moats that incumbent software companies have is product quality. You can't upset Uber and Lyft primarily because you don't have the resources/skills to build an app of that quality (and your VC doesn't trust you can build one even given the resources). It's not due to business dev, marketing or "network effect" reasons; a lot of drivers tag-team both Uber and Lyft anyway, it doesn't cost them anything to onboard into a 3rd app even if it initially yields them 1 passenger a week.
I actually think AI just makes products converge onto whatever the AI averages out to.
It would be interesting to see this explored further. It seems like being different might actually have an even more pronounced effect now (for good or bad).
Buying business tools comes with the expectation of support and customization, the complexities of which become unmanageable when the lead developer is AI.
You don't hear much about WYSIWYG app builder platforms like Bubble.io anymore because once the hype subsided, it was clear that it wasn't scalable beyond extremely limited CRUD functionality.
While one can vibe-code simple CRUD at scale, one can not vibe-code the complex infrastructural coordination, security guardrails, and reliability externalities that maintain an effective business model at scale.
If all you're doing is using AI to build products, by definition, you're gravitating to the mean.
The AI doesn't care about a delightful product, it cares about satisfying its objective function and the deeper you go the more the two will diverge simply because building a good product is really complex and there are many many paths in the decision maze.