Not Hacker News!
AI companion for Hacker News
Nov 19, 2025 at 2:09 PM EST

AI is a front for consolidation of resources and power

delaugust
541 points
427 comments

Mood

controversial

Sentiment

negative

Category

tech_discussion

Key topics

AI

Power Dynamics

Technology Critique

Discussion Activity

Very active discussion

First comment

59m

Peak period

153 comments (Day 1)

Avg / period

40

Comment distribution: 160 data points (based on 160 loaded comments)

Key moments

  1. Story posted: Nov 19, 2025 at 2:09 PM EST (4d ago)
  2. First comment: Nov 19, 2025 at 3:08 PM EST (59m after posting)
  3. Peak activity: 153 comments in Day 1 (the hottest window of the conversation)
  4. Latest activity: Nov 23, 2025 at 8:30 PM EST (6h ago)


Discussion (427 comments)
Showing 160 comments of 427
carlosjobim
4d ago
5 replies
Let's take the highest perspective possible:

What is the value of technology which allows people to communicate clearly with other people in any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse of the Tower of Babel has been lifted.

There will be a time in the future when people will not be able to comprehend that you once couldn't exchange information regardless of personal language skills.

So what is the value of that? Economically, culturally, politically, spiritually?

bix6
4d ago
2 replies
We could communicate with people before LLMs just fine though? We have hand gestures, some people learn multiple languages, and Google Translate was pretty solid. I got by just fine in countries where I didn’t know the language because hand gestures work or someone speaks English.

What is the value of losing our uniqueness to a computer that lies and makes us all talk the same?

carlosjobim
4d ago
1 reply
You can maybe order in a restaurant or ask the way with hand gestures. But surely you must be able to take a higher perspective than your own, and realize that there are enormous amounts of exchange between nations with differing languages, and all of this relies on some form of translation. Hundreds of millions of people all over the world have to deal with language barriers.

Google Translate was far from solid; the quality of translations was so bad before LLMs that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly.

Profan
4d ago
1 reply
LLMs are here and Google Translate is still bad (surely, if it were as easy as plugging the miraculous, perfect LLMs into it, it would be perfect now?). I don't think people who believe we've somehow solved translation actually understand how much it still deals with extremely poorly.

And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)

carlosjobim
4d ago
What argument are you making? LLM translation is available for anybody to try right now, and you can use services like Kagi Translate or DeepL to see the evidence for yourself that they make excellent translations. I honestly don't care what Google Translate does, because nobody who is serious about translation uses it.

> And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)

The kind of deeply understood communication you are demanding is usually impossible even between people who have the same native tongue, from the same town and even within the same family. And people can misunderstand each other just fine without the help of AI. However, is it better to understand nothing at all than to not understand every nuance?

Kiro
4d ago
Incredible that we happen to be alive at the exact moment humanity peaked in its interlingual communication. With Google Translate and hand gestures there is no need to evolve it any further.
4ndrewl
4d ago
1 reply
Which languages couldn't we translate before? Not you, the individual. We, humanity?
carlosjobim
4d ago
1 reply
Machine translation was horrible and completely unreliable before LLMs. And human translators are very expensive and slow in comparison.

LLMs are to translation what computers were to calculating. Sure, you could do without them before. They used to have entire buildings with office workers whose job it was to compute.

gizajob
4d ago
6 replies
Google Translate worked great long before LLMs.
dwedge
4d ago
Not really long before, although I suppose it's relative. Google Translate was pretty garbage until around 2016-2017, and then it started really improving.
jibal
4d ago
The only reason to think that is not knowing when Google switched to using LLMs. The radical change is well documented.
doug_durham
4d ago
I disagree. It worked passably and was better than no translation. The depth, correctness, and nuance are much better with LLMs.
Kiro
4d ago
I don't think you understand how off that statement is. It's also pretty ignorant considering Google Translate barely worked at all for many languages. So no, it didn't work great and even for the best possible language pair Google Translate is not in the same ballpark.
verdverm
4d ago
LLMs are not the only "AI".
carlosjobim
4d ago
It really didn't. There were many languages which it couldn't handle at all, just making completely garbled output. It wasn't possible to use Google Translate professionally.
Herring
4d ago
1 reply
Language is a lot deeper than that. It's like if I say "we speak the same language", it means a lot more than just the ability to translate. It's talking about a shared past and worldview and hopefully future which I/we intend to invest in.
carlosjobim
4d ago
Then are you better off by not being able to communicate anything?
uhoh-itsmaciek
3d ago
>The curse from the tower of Babel has been lifted.

It wasn't a curse. It was basically divine punishment for hubris. Maybe the reference is a bit on the nose.

blauditore
4d ago
You could make the same argument about video conferencing: Yes, you can now talk to anyone anywhere anytime, and it's amazing. But somehow all big companies are convinced that in-person office work is more productive.
philipkglass
4d ago
3 replies
I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water.

If you just wanted land, water, and electricity, you could buy them directly instead of buying $100 million of computer hardware bundled with $2 million worth of land and water rights. Why are high end GPUs selling in record numbers if AI is just a cover story for the acquisition of land, electricity, and water?

exceptione
4d ago
1 reply
Valid question. What the OP talks about though is that these things were not for sale normally. My takeaway from his essay is that a few oligarchs get a pass to take over all energy, by means of a manufactured crisis.

  When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body. 

He could have explained that better. Try not to look at the media drama the political actors give you each day, but at the agenda the real powers have laid bare:

- Trump is threatening an oil-rich neighbor with war. An expensive-as-hell army blowing up 'drug boats' (so they claim) to help the press sell it as a war on drugs. Yeah, right.

- Green energy projects, even running ones, get cancelled. Energy from oil and nuclear is capital-intensive and at the same time completely outshone by solar and battery tech. So the energy card is a strong one to direct policy towards your interests.

If you can turn the USA into a resource economy like Russia, then you can rule like a Russian oligarch. That is also why the admin sees no problem in destroying academia or other industries via tariffs; controlling resources is easier and more predictable than having to rely on an educated populace that might start to doubt the promise of the American Dream.

amunozo
4d ago
I did not think about it that way, but it makes perfect sense. And it is really scary. It hasn't even been a year since Trump's second term started. We still have three more years left.
kjkjadksj
4d ago
Because then you can buy calls on the GPU companies
bix6
4d ago
But with this play they can inflate their company holdings and cash out in new rounds. It’s the ultimate self enrichment scheme! Nobody wants that crappy piece of land but now it’s got GPUs and we can leverage that into a loan for more GPUs and cash out along the way.
block_dagger
4d ago
2 replies
> To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.

What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.

kmnc
4d ago
1 reply
It’s a funny analogy, because what’s missing for the rainbows with pots of gold is magic and fairytales…so what’s missing for consciousness is also magic and fairytales? I’ve yet to see any compelling argument for believing enough compute wouldn’t allow us to code consciousness.
apsurd
4d ago
Yes, that's just it though, it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality".

I'm suspicious of the idea that the world can be modeled linearly. That physical reality is non-linear is also more logically sound, so why is there such a clear straight line from compute to consciousness?

jibal
4d ago
2 replies
Consciousness is a physical phenomenon; rainbows, their ends, and pots of gold at them are not.
jibal
3d ago
I'm well aware that many people are wrong about consciousness and have been misled by Searle, Chalmers, Nagel, et al. Numbers like 55% are argumentum ad populum and are completely irrelevant. The sample space matters ... I've been to the "[Towards a] Science of Consciousness" conferences and they are full of cranks and loony tunes, and even among respectable intelligent philosophers of mind there is little knowledge or understanding of neuroscience, often proudly so. These philosophers should read Arthur Danto's introduction to C.L. Hardin's "Color for Philosophers". I've partied with David Chalmers--fun guy, very bright, but has done huge damage to the field. Roger Penrose likewise--a Nobel Prize-winning physicist, but his knowledge of the brain comes from that imbecile Stuart Hameroff. The fact remains that consciousness is a physical function of physical brains--collections of molecules--and can definitely be the result of computation--this isn't an "assumption", it's the result of decades of study and analysis. E.g., people who think that Searle's Chinese Room argument is valid have not read Larry Hauser's PhD thesis ("Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence") along with a raft of other criticism utterly debunking it (including arguments from Chalmers).

> It's an analogy.

And I pointed out why it's an invalid one -- that was the whole point of my comment.

> But just like the pot of gold, that might be a false assumption.

But it's not at all "just like the pot of gold". Rainbows are perceptual phenomena, their perceived location changes when the observer moves, they don't have "ends", and there certainly aren't any pots of gold associated with them--we know for a fact that these are "false assumptions"--assumptions that no one makes except perhaps young children. This is radically different from consciousness and computation, even if it were the case that somehow one could not get consciousness from computation. Equating or analogizing them this way is grossly intellectually dishonest.

> Someone sees computing, assuming consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.

Utter nonsense.

drkleiner
3d ago
> Consciousness is a physical phenomenon

This can mean one of 50 different physicalist frameworks. And only 55% of philosophers of mind accept or lean towards physicalism

https://survey2020.philpeople.org/survey/results/4874?aos=16

> rainbows, their ends, and pots of gold at them are not

It's an analogy. Someone sees a rainbow and assumes there might be a pot of gold at the end of it, so they think if there were more rainbows, there would be more likelihood of a pot of gold (or more pots of gold).

Someone sees computing, assuming consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.

But just like the pot of gold, that might be a false assumption. After all, even under physicalism, there is a variety of ideas, some of which would say more computing will not yield consciousness.

Personally, I think even if computing as we know it can't yield consciousness, that would just result in changing "computing as we know it" and end up with attempts to make computers with wetware, literal neurons (which I think is already being attempted).

njarboe
4d ago
4 replies
Many people use AI as the source for knowledge. Even though it is often wrong or misleading, its advice is better on average than their own judgement or the judgement of people they know. When an AI is "smarter" than 95%(?) of the population, even if it does not reach superintelligence, it will be a very big deal.
apsurd
4d ago
2 replies
This means to me AI is rocket fuel for our post-truth reality.

Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.

Post-truth on the other hand is just a mundane and nasty sociological problem that we ran head-first into and don't know how to deal with. I don't have any answers. Seems like it'll get worse before it gets better.

chickensong
4d ago
1 reply
How would you define post-truth? It's not like people haven't been spouting incorrect facts or total bs since forever.
saulpw
3d ago
Scale matters. The difference between 10% and 90% of people spouting total bs is what makes it 'post-truth'.
jibal
4d ago
What "gets better"? Rapid global warming will lead to societal collapse this century.
emp17344
4d ago
1 reply
How is this different from a less reliable search engine?
jiggawatts
4d ago
AI can interpolate in the space of search results, yielding results in between the hits that a simple text index would return.

It is also a fuzzy index with the unique ability to match on multiple poorly specified axes at once in a very high dimensional search space. This is notoriously difficult to code with traditional computer science techniques. Large language models are in some sense optimal at it instead of “just a little bit better than a total failure”, which is what we had before.

Just today I needed to find a library I only vaguely remembered from years ago. Gemini found it in seconds based on the loosest description of what it does.

That is a technology that is getting difficult to distinguish from magic.
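
To make the "fuzzy index" idea concrete, here is a minimal TypeScript sketch of embedding-based retrieval; it is a hedged illustration, not what Gemini actually does internally, and the `embed` callback and `Doc` shape are assumptions rather than any particular library's API:

    // A document with a precomputed embedding vector.
    type Doc = { id: string; text: string; vector: number[] };

    // Cosine similarity between two equal-length vectors.
    function cosineSimilarity(a: number[], b: number[]): number {
      let dot = 0, normA = 0, normB = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
      }
      return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Rank documents by semantic closeness to a loosely worded query,
    // rather than by exact keyword matches as a classic text index would.
    async function fuzzySearch(
      query: string,
      docs: Doc[],
      embed: (text: string) => Promise<number[]>, // any embedding model
      topK = 5
    ): Promise<Doc[]> {
      const queryVector = await embed(query);
      return docs
        .map((doc) => ({ doc, score: cosineSimilarity(queryVector, doc.vector) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, topK)
        .map(({ doc }) => doc);
    }

With a setup like this, a loose description of a half-remembered library can land near the right document even when none of its words appear there verbatim, which is the "poorly specified axes" behavior described above.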

BeFlatXIII
3d ago
Or the AI is patient enough to be the rubber duck, whereas asking the person you know knows the answer will result in them shutting you down after the first follow-up question.
jibal
4d ago
The 95th percentile IQ is 125, which is about average in my circle. (Several of my friends are verified triple nines.)
exceptione
4d ago
1 reply
I think this is the best part of the essay:

  > But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?

  > There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.
protocolture
4d ago
It says absolutely nothing about anything. It's like 10 fearmongering tweets in a blender.
qoez
4d ago
2 replies
Best case is hardly a bubble. I definitely think this is a new paradigm that'll lead to something, even if the current iteration won't be the final version and we've probably overinvested a slight bit.
threetonesun
4d ago
1 reply
Same as the dot-com bubble. Fundamentals were wildly off for some businesses, but you can also find almost every business that failed then running successfully today. Personally I don't think sticking AI in every software is where the real value is, it's improving understanding of huge sets of data already out there. Maybe OpenAI challenges Google for search, maybe they fail, I'm still pretty sure the infrastructure is going to get used because the amount of data we collect and try to extract value from isn't going anywhere.
coffeebeqn
4d ago
Something notable like pets.com is literally Chewy, just 20 years earlier.
layer8
4d ago
The author thinks that the bubble is a given (and doesn’t have to spell doom), and the best case is that there isn’t anything worse in addition.
aynyc
4d ago
1 reply
A bit of sarcasm, but I think it's porn.
righthand
4d ago
It’s at least about stimulating you to give richer data. Which isn’t quite porn.
sockgrant
4d ago
9 replies
“As a designer…”

IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.

Claude code is incredible. Where I work, there are an incredible number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.

I find it hard to buy into opinions from non-SWEs on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don’t doubt they don’t yet have compelling AI tooling.

ihaveajob
4d ago
1 reply
I'm curious if you could share something about custom agents. I love Claude Code and I'm trying to get it into more places in my workflow, so ideas like that would probably be useful.
verdverm
4d ago
1 reply
I've been using Google ADK to create custom agents (fantastic SDK).

With subagents and A2A generally, you should be able to hook any of them into your preferred agentic interface

ambicapter
4d ago
1 reply
I’m struggling to see how somebody who’s looking for inspiration in using agents in their coding workflow would glean any value from this comment.
verdverm
3d ago
They asked about custom agents; ADK is for building custom agents.

(Agent SDK, not Android)

hagbarth
4d ago
1 reply
If you read a little further in the article, the main point is _not_ that AI is useless, but rather that AI is a regular technology, not AGI god-building. A valuable one, but not infinite growth.
NitpickLawyer
4d ago
2 replies
> but rather that AI is a regular technology, not AGI god-building. A valuable one, but not infinite growth.

AGI is a lot of things, a lot of ever-moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, and arguing against ASI being a thing, when talking about AGI. "Human-level competence" is AGI. Super-human, ever-improving, infinite growth - that's ASI.

If and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would say that what we have today is "AGI"?

xeckr
4d ago
2 replies
Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.
HarHarVeryFunny
4d ago
1 reply
We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.

Meta just laid 600 of them off.

All this talk of AGI, ASI, super-intelligence, and recursive self-improvement etc is just undefined masturbatory pipe dreams.

For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.

The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.

Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.

mitthrowaway2
4d ago
Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.
lizcall
3d ago
Assuming the recursive self-improvement doesn't run into physical hardware limits.

Like we can theoretically build a spaceship that can accelerate to 99.9999% C - just a constant 1G accel engine with "enough fuel".

Of course the problem is that "enough fuel" = more mass than is available in our solar system.

ASI might have a similar problem.

hagbarth
4d ago
Sam Altman has been drumming[1] the ASI drum for a while now. I don't think it's a stretch to say that this is the vision he is selling.

[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...

hollowturtle
4d ago
1 reply
Where are the products? This site and everywhere else around the internet, on X, LinkedIn and so on, is full of crazy claims, and I have yet to see a product that people need and that actually works. What I'm experiencing is a gigantic enshittification everywhere: Windows sucks, web apps are bloated, slow and uninteresting. Infrastructure goes down even with "memory safe rust", burning millions and millions of compute for scaffolding stupid stuff. Such a disappointment.
redorb
4d ago
4 replies
I think ChatGPT itself is an epic product, and Cursor has insane growth and usage. I also think they are both over-hyped and have too high a valuation.
layer8
4d ago
1 reply
Citing AI software as the only example of how AI benefits developing software has a bit of the flavor of self-help books describing how to attain success and fulfillment by taking the example of writing self-help books.

I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently.

xwolfi
4d ago
I made 1500 USD speculating on Nvidia earnings; that's economic uptick for me!
oblio
4d ago
1 reply
I agree with everyone else, where is the Microsoft Office competitor created by 2 geeks in a garage with Claude Code? Where is the Exchange replacement created by a company of 20 people?

There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts.

Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity; they could be throwing more bodies at the problem and we'd never know.

sfgvvxsfccdd
4d ago
7 replies
> Office…Exchange

The challenge in competing with these products is not code. The challenge in competing in lucrative markets that need a fresh approach is also generally not code. So I’m not sure that is a good metric to evaluate LLMs for code generation.

tjr
4d ago
1 reply
I think the point remains: if someone armed with Claude Code could whip out a feature-complete clone of Microsoft Office over the weekend (and by all accounts, even a novice programmer could do this, because of the magnificent greatness of Claude), then why don't they just go ahead and do it? Maybe do a bunch of them: release one under GPL, one under MIT, one under BSD, and a few more sold as proprietary software. Wow, I mean, this should be trivial.
buu700
3d ago
It makes development faster, but not infinitely fast. Faithfully reproducing complex 42-year-old software in one weekend is a stretch no matter how you slice it. Also, AI is cheap, but not free.

I could see it being doable by forking LibreOffice or Calligra Suite as a starting point, although even with AI assistance I'd imagine that it might take anyone not intimately familiar with both LibreOffice (or Calligra) and MS Office longer than a weekend to determine the full scope of the delta between them, much less implement that delta.

But you'd still need someone with sufficient skill (not a novice), maybe several hundred or thousand dollars to burn, and nothing better to do for some amount of time that's probably longer than a weekend. And then that person would need some sort of motivation or incentive to go through with the project. It's plausible, but not a given that this will happen just because useful agentic coding tools exist.

lookaroundwait
4d ago
Cool. So we established that it's not code alone that's needed, it's something else. This means that the people who already had that something else can now bootstrap the coding part much faster than ever before, spend less time looking for capable people, and truly focus on that other part.

So where are they?

We're not asking to evaluate LLMs for code. We're asking to evaluate them as product generators or improvers.

nebula8804
3d ago
OK, let's ignore competing with them. When will AI just spit out a "home-cooked" version of Office for me so I can toss the real thing in the trash where it belongs? One without the stuff I don't want? When will it be able to give me Word 95 running on my M4 chip just by asking? If I'm going to lose my career, I might as well get something that can give me any software I could possibly want just by asking.

I can go to Wendy's, or I can make my own version of Wendy's at home pretty easily with just a bit more time expended.

The cliff is still too high for software. I could go and write Office from scratch or customize the (shivers) FOSS software out there, but it's not worth the time or effort.

alganet
4d ago
We had upstarts in the 80s, the 90s, the 2000s and the 2010s. Some game, some website, some social network, some mobile app that blew up. We had many. Not funded by billions.

So, where is that in the 2020s?

Yes, code is a detail (ideas too). It's a platform. It positions itself as the new thing. Does that platform allow upstarts? Or does it consolidate power?

oblio
3d ago
Pick other examples, then.

We have superhuman coding (https://news.ycombinator.com/item?id=45977992); where are the superhuman-coded major apps from small companies that would benefit most from these superhumans?

Heck, we have superhuman requirements gathering, superhuman marketing, and superhuman versions of almost all white-collar work, so it should be even faster!

AlexandrB
4d ago
Fine, where's the slop then? I expected hundreds of scammy apps to show up imitating larger competitors to get a few bucks, but those aren't happening either. At least not any more than before AI.
ethanwillis
4d ago
It's not that they failed to compete on other metrics, it's that they don't even have a product to fail to sell.
hollowturtle
4d ago
ChatGPT is... a chat with some "augmentation" features, aka outputting rich HTML responses; nothing new except the generative side. Cursor is a VSCode fork with a custom model and a very good autocomplete integration. Again, where are the products? Where the heck is a Windows without the bloat that works reliably, before becoming totally agentic? And therefore idiotic, since it doesn't work reliably.
emp17344
4d ago
It doesn’t matter what you think. Where’s all the data proving that AI is actually valuable? All we have are anecdotes and promises.
muldvarp
4d ago
2 replies
> IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.

How are we building _for_ ourselves when we literally automate away our jobs? This is probably one of the _worst_ things someone could do to me.

DennisP
4d ago
1 reply
Software engineers have been automating our own work since we built the first assembler. So far it's just made us more productive and valuable, because the demand for software has been effectively unlimited.

Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.

muldvarp
4d ago
2 replies
> Software engineers have been automating our own work since we built the first assembler.

The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly about whether or not this goal will be achieved.

Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.

dasil003
4d ago
1 reply
Automating away software engineering entirely is nothing new. It goes all the way back to BASIC and COBOL, and later visual programming tools, Microsoft Access, etc. There have been innumerable attempts to somehow get by without needing those pedantic and difficult programmers and all their annoying questions and nitpicking.

But here's the thing: the hard part of programming was never really syntax, it was about having the clarity of thought and conceptual precision to build a system that normal humans find useful despite the fact they will never have the patience to understand let alone debug failures. Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.

I won't say AI will never get there—it already surpasses human programmers in much of the mechanical and rote knowledge of programming language arcana—but it still is orders of magnitude away from being able to produce a useful system when specified by someone who does not think like a programmer. Perhaps it will get there. But I think the barrier at that point will be the age-old human need to have a throat to choke when things go sideways. Those in power know how to control and manipulate humans through well-understood incentives, and this applies all the way to the highest levels of leadership. No matter how smart or competent AI is, you can't just drop it into those scenarios. Business leaders can't replace human accountability with an SLA from OpenAI, it just doesn't work. Never say never I suppose, but I'd be willing to bet the wheels come off modern civilization long before the skillset of senior software engineers becomes obsolete.

skydhash
3d ago
1 reply
> Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.

Syntax is not a gatekeeper function. It’s exactly the means to describe that precise systemic thinking. When you’re creating a program, you’re creating a DSL for multiple subsystems, which you then integrate.

The subsystems can be abstract, but we usually define good software by how closely fitted the subsystems are to the problem at hand, meaning adjustments only need slight code alterations.

So viewing syntax as a gatekeeper is like viewing sheet music as a gatekeeper for playing music, or numbers and arithmetic as a gatekeeper for accounting.

buu700
3d ago
1 reply
The difference is that human language is a much more information-dense, higher-level abstraction than code. I can say "an async function that accepts a byte array, throws an error if it's not a valid PNG image with a 1:1 aspect ratio and resolution >= 100x100, resizes it to 100x100, uploads it to the S3 bucket env.IMAGE_BUCKET with a UUID as the file name, and retries on failure with exponential backoff up to a maximum of 100 attempts", and you'll have a pretty good idea of what I'm describing despite the smaller number of characters than equivalent code.

I can't directly compile that into instructions which will make a CPU do the thing, but for the purposes of describing that component of a system, it's at about the right level of abstraction to reasonably encode the expected behavior. Aside from choosing specific libraries/APIs, there's not much remaining depth to get into without bikeshedding; the solution space is sufficiently narrow that any conforming implementation will be functionally interchangeable.

AI is just laying bare that the hard part of building a system has always been the logic, not the code per se. Hypothetically, one can imagine that the average developer in the future might one day think of programming language syntax in the same way that an average web developer today thinks of assembly. As silly as this may sound today, maybe certain types of introductory courses or bootcamps would even stop teaching code, and focus more on concepts, prompt engineering, and developing/deploying with agentic tooling.

I don't know how much learning syntax really gatekeeps the field in practice, but it is something extra that needs to be learned, where in theory that same time could be spent learning some other aspect of programming. More significant is the hurdle of actually implementing syntax; turning requirements into code might be cognitively simple given sufficiently baked requirements, but it is at minimum time-consuming manual labor which not everyone is in a position to easily afford.
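
As a rough illustration of how directly that prose spec maps onto code, here is a minimal TypeScript sketch; the validation, resize, and upload helpers are hypothetical placeholders for whatever image and S3 libraries would actually be used, and the backoff constants are assumptions:

    import { randomUUID } from "node:crypto";

    // Hypothetical helpers standing in for real image/S3 libraries.
    declare function isSquarePngAtLeast100x100(bytes: Uint8Array): boolean;
    declare function resizeTo100x100(bytes: Uint8Array): Promise<Uint8Array>;
    declare function uploadToBucket(bucket: string, key: string, body: Uint8Array): Promise<void>;

    // The function described above: validate, resize, upload with retries.
    async function storeImage(bytes: Uint8Array): Promise<string> {
      if (!isSquarePngAtLeast100x100(bytes)) {
        throw new Error("expected a square PNG of at least 100x100");
      }
      const resized = await resizeTo100x100(bytes);
      const key = randomUUID();

      // Retry with exponential backoff, up to a maximum of 100 attempts.
      for (let attempt = 1; attempt <= 100; attempt++) {
        try {
          await uploadToBucket(process.env.IMAGE_BUCKET!, key, resized);
          return key;
        } catch (err) {
          if (attempt === 100) throw err;
          const delayMs = Math.min(100 * 2 ** (attempt - 1), 30_000); // capped backoff (assumed constants)
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
      throw new Error("unreachable");
    }

Almost every line corresponds to a clause of the English description; what the prose leaves open (resize algorithm, bucket permissions, initial wait time) is exactly what the reply below picks up on.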

skydhash
3d ago
> and you know exactly what I'm describing.

I won't, unless both you and I have a shared context which will tie each of these concepts to a specific thing. You said "async function", and there are a lot of languages that don't have that concept. And what about the permissions of the S3 bucket? What's the initial wait time? And what algorithm for the resizing? What if someone sent us a very big image (let's say the maximum that the standard allows)?

These are still logic questions that have not been addressed.

The thing is that general programming languages are general. We do have constructs like procedures/functions and classes that allow for a more specialized notation, but that's a skill to acquire (like writing clear and informative text).

So in pseudo lisp, the code would be like

    (defun fn (bytes)
      (when-let* ((png (byte2png bytes))
                  (valid (and (valid-png-p png)
                              (square-res-p png)))
                  (small-png (resize-image png))
                  (bucket (get-env "IMAGE_BUCKET"))
                  (filename (uuid)))
        (do-retry :backoff 'exp
                  (s3-upload bucket small-png))))
And in pseudo prolog

  square(P) :- width(P, W), height(P, H), W is H.
  validpng(P, X) :- a whole list of clauses that parses X and builds up P, square(P).
  resizepng(P) :- bigger(100, 100, P), scale(100, 100, P).
  smallpng(P, X) :- validpng(P, X), resizepng(P).
  s3upload(P) :- env("IMAGE_BUCKET", B), s3_put(P, B, exp_backoff(100)).
  fn(X) :- smallpng(P, X), s3upload(P).
So what you've left out is all the details. It's great if someone already has a library that does the thing and the functions have the same signature, but more often than not, there isn't something like that.

Code can be as high-level as you want and very close to natural language. Where people spend time is the implementation of the lower level and dealing with all the failure modes.

laterium
3d ago
Who declared it? Who cares what anyone declares? What do you think will actually happen? If software can be fully automated, then sure, SWEs will need to find a new job. But why wouldn't it instead just increase productivity, with developer jobs still existing, only different?
jstanley
3d ago
1 reply
This is kind of a myopic view of what it means to be a programmer.

If you're just in it to collect a salary, then yeah, maybe you do benefit from delivering the minimum possible productivity that won't get you fired.

But if you like making computers do things, and you get joy from making computers do more and new things, then LLMs that can write programs are a fantastic gift.

muldvarp
3d ago
1 reply
> But if you like making computers do things, and you get joy from making computers do more and new things, then LLMs that can write programs are a fantastic gift.

Maybe currently if you enjoy social engineering an LLM more than writing stuff yourself. Feels a bit like saying "if you like running, you'll love cars!"

In the future when the whole process is automated you won't be needed to make the computer do stuff, so it won't matter whether you would like it. You'll have another job. Likely one that pays less and is harder on your body.

jstanley
3d ago
You're still focusing on "programming as a job" being fundamental to programming, and I'm saying it's not.
lumost
4d ago
2 replies
I think the question is whether those AI tools make you produce more value. Anecdotally, the AI tools have changed the workflow and allowed me to produce more tools etc.

They have not necessarily changed the rate at which I produce valuable outputs (yet).

awinter-py
4d ago
2 replies
Can you say more about this? What do you mean when you say 'more tools' is not the same as 'valuable outputs'?
lumost
4d ago
2 replies
There are a thousand "nuisance" problems which matter to me and me alone. AI allows me to bang these out faster, and put nice UIs on it. When I'm making an internal tool - there really is no reason not to put a high quality UX on top. The high quality UX, or existence of a tool that only I use does not mean my value went up - just that I can do work that my boss would otherwise tell me not to do.
elliotto
4d ago
1 reply
Under this definition, could any tool at all be considered to produce more value?
lumost
4d ago
1 reply
no - this is a lesson an engineer learns early on. The time spent making the tool may still dwarf the time savings you gain from the tool. I may make tools for problems that only ever occurred or will occur once. That single incident may have occurred before I made the tool.

This also makes it harder to prioritize work in an organization. If work is perceived as "cheap" then it's easy to demand teams prioritize features that will simply never be used. Or to polish single user experiences far beyond what is necessary.

xwolfi
4d ago
One thing I learned from this is to disregard all attempts at prioritizing based on the output's expected value for the users/business.

We prioritize now based on time complexity and omg, it changes everything: if we have 10 easy bugfixes and one giant feature to do (random bad-faith example), we do 5 bugfixes and half the feature within a month and have an enormous satisfaction output from the users, who would never have accepted to do it that way in the first place. If we had listened, we would have done 75% of the features and zero bug fixes and have angry users/clients whining that we did nothing all month...

The time spent on dev stuff absolutely matters, and churning quick stuff quickly provides more joy to the people who pay us. It's a delicate balance.

As for AI, for now, it just wastes our time. It always craps out half-correct stuff, so we optimized our time by refusing to use it, and we beat the teams that do use it that way.

chii
4d ago
personal increase in satisfaction (such as "work that my boss would otherwise tell me not to do") is valuable - even if only to you.

The fact is, value is produced when something can be produced at a fraction of the resources required previously, as long as the cost is borne by the person receiving the end result.

colecut
4d ago
Does using the tools increase ROI?
gniv
3d ago
When using AI to find faults in existing processes, that is value creation (assuming they get fixed, of course).
monkaiju
4d ago
2 replies
All I see it doing, as a SWE, is limiting the speed at which my co-workers learn and worsening the quality of their output. Finally many are noticing this and using it less...
overfeed
4d ago
3 replies
Your bosses probably think it's worth it if the outcome is getting rid of the whole host of y'all and replacing you with AWS Elastic-SWE instances. Which is why it's imperative that you maximize AI usage.
Capricorn2481
4d ago
1 reply
So instead of firing and replacing me with AI my boss will pay me to use AI he would've used..?
overfeed
4d ago
1 reply
No one's switching to AI cold turkey. Think of it as training your own, cheaper replacement. SWEs & their line managers develop & test AI workflows, while giving the bosses time to evaluate AI capabilities, then hopefully shrink the headcount as close to 0 as possible without shrinking profits. Right now, it's juniors who're getting squeezed.
whstl
3d ago
1 reply
I don't think bosses are smart enough to pull this off.
overfeed
3d ago
Increasing profits by reducing the cost of doing business isn't a complicated scheme. It's been done thousands of times, over many decades; first with cheaper contractors replacing full-time staff, then offshore labor, and now they are attempting to use AI.
imbnwa
4d ago
They’ll be replaced with cheaper humans in Mexico using those Copilot seats, that’s much more tangible and obvious, no need to wait for genius level AI
monkaiju
3d ago
My bosses aren't pushing it at all. The normal cargo-cult temptations have pulled on some fellow SWEs, but it's being pretty effectively pushed back on by its own failings, paired with SWEs who use it being outperformed by those who don't.

> edit for spelling

whstl
3d ago
1 reply
I had a very interesting interaction in a few small startups I freelanced for recently.

In a 1-year company, the only tech person that's been there for more than 3-4 months (the CTO) only really understands a tiny fraction of the codebase and infrastructure, and can't review code anymore. Application size has blown up tremendously despite being quite simple. Turnover is crazy and people rarely stay for more than a couple months. The team works nights and weekends, and sales is CONSTANTLY complaining about small bugs that take weeks to solve.

The funny thing is that this is an AI company, but I see the CTO constantly asking developers "how much of that code is AI?". Paranoia has set in for him.

automatic6131
3d ago
1 reply
>Turnover is crazy and people rarely stay for more than a couple months. The team works nights and weekends

Oh, look, you've normalized deviance. All of these things are screaming red flags, the house is burning down around you.

forgetfulness
3d ago
1 reply
This sounds just like a typical startup or small consultancy drunk on Ruby gems and codegen (scaffolding) back in the Rails heyday.

People who don’t yet have the maturity for the responsibility of their roles, thinking that merely adopting a new technology will make up for not taking care of the processes and the people.

whstl
3d ago
Bingo. The founders have no maturity or responsibility and believe they "made it" because they got somewhere with AI. Now they're pushing back against AI because they can't understand the app anymore.
SoftTalker
4d ago
8 replies
I'm a SWE, DBA, SysAdmin, I work up and down the stack as needed. I'm not using LLMs at all. I really haven't tried them. I'm waiting for the dust to settle and clear "best practices" to emerge. I am sure that these tools are here to stay but I am also confident they are not in their final form today. I've seen too many hype trains in my career to still be jumping on them at the first stop.
sanmon3186
4d ago
3 replies
Why would you wait for the dust to settle? Just curious. Productivity gains are real in the current form of LLMs. Guardrails and best practices can be learned and self-imposed.
bluefirebrand
4d ago
1 reply
> Productivity gains are real in the current form of LLMs

I haven't found that to be true

I'm of the opinion that anyone who is impressed by the code these things produce is a hack

lomase
3d ago
2 replies
I just started a project; they fired the previous team, and I am positive they used AI. The app is full of bugs and the client will never hire the old company again.

Whoever says it's time to move to LLMs is clueless.

stavros
3d ago
1 reply
"Because one team doesn't know how to use LLMs, I conclude that LLMs are useless."
lomase
14h ago
1 reply
Can you show any product created by those imaginary teams using LLMs?
stavros
14h ago
2 replies
https://pine.town/
Yeask
12h ago
1 reply
You proved his point bro. That site does not even load.
stavros
12h ago
Shrug, loads fine for me.
lomase
6h ago
I am talking about real work. Who is going to pay me to build that?

I have not even seen a real CRUD app with real happy users written with AI tools, and that is the perfect candidate.

lbreakjai
3d ago
Humans are very capable of creating bugs. This in itself is not a tell.
Muromec
3d ago
1 reply
Whenever I hear about productivity gains, I mentally substitute it with "more time left in the day to play video games" to keep the conversation grounded. I would say I'd rather not.
blackbrokkoli
3d ago
If you have two modes of spending your time, one being work that you only do because you are paid for it, and the other being feeding into an addiction, the conversations you should be having are not about where to use AI.
beepbooptheory
3d ago
Your "productivity gains" is just equal to the hours others eventually have to spend cleaning up and fixing what you generated.
noduerme
4d ago
1 reply
I'm a SWE and also an art director. I have tried these tools and, the same way I've also tried Vue and React, I think they're good enough for simple-minded applications. It's worth the penny to try them and look through the binoculars, if only to see how unoriginal and creatively limited what most people in your field are actually doing must be, if they find this something that saves them time.
eclipxe
3d ago
What a condescending take.
dcre
4d ago
1 reply
The tools have reached the point where no special knowledge is required to get started. You can get going in 5 minutes. Try Claude Code with an API key (no subscription required). Run it in the terminal in a repo and ask how something works. Then ask it to make a straightforward but tedious change. Etc.
sheepscreek
4d ago
Just download Gemini (no API key) and use it.
code51
4d ago
2 replies
I'm surprised these pockets of job security still exist.

Know this: someone is coming after this already.

One day someone from management will hear a cost-saving story at a dinner table, and the words GPT, Cursor, Antigravity, reasoning, AGI will cause a buzzing in their ear. Waking up with tinnitus the next morning, they'll instantly schedule a 1:1 to discuss "the degree of AI use and automation".

lomase
3d ago
Lowballing contracts is nothing new. It has never ever worked out.

You can throw all the AI you want at it, but at the end of the day you get what you pay for.

DaiPlusPlus
4d ago
> Know this: someone is coming after this already.

Yesterday, GitHub Copilot declared that my less-AI-wary friend’s new Laravel project was following all industry best practices for database design, as it stored entities as denormalized JSON blobs in a MySQL 8.x database with no FKs, indexes, or constraints, all-NULL columns (and using root@mysql as the login, of course); while all Laravel controller actions’ DB queries were RBAR loops that loaded all rows into memory before doing JSON deserialisation in order to filter rows.

I can’t reconcile your attitude with my own personal lived experience of LLMs being utterly wrong 40% of the time; while 50% of the time being no better or faster than if I did things myself; another 5% of the time it gets stuck in a loop debating the existence of the seahorse emoji; and the last 5% of the time genuinely utterly scaring me with a profoundly accurate answer or solution that it produced instantly.

Also, LLMs have yet to demonstrate an ability to tackle other real-world DBA problems… like physically installing a new SSD into the SAN unit in the rack.

01100011
4d ago
5 replies
It's time to jump on the train. I'm a cranky, old, embedded SWE and Claude 4.5 is changing how I work. Before that I laughed off LLMs. They were trash. Claude still has issues, but damn, I think if I don't integrate it into my workflow I'll be out of work or relegated to work in QA or devops (where I'd likely be forced to use it).

No, it's not going to write all your code for you. Yes your skills are still needed to design, debug, perform teamwork(selling your designs, building consensus, etc), etc.. But it's time to get on the train.

rafaelmn
4d ago
1 reply
This was true since Claude Sonnet 3.5, so over a year now. I was early on the LLM train building RAG tools and prototypes in the company I was working at the time, but pre Claude 3.5 all the models were just a complete waste of time for coding, except the inline autocomplete models saved you some typing.

Claude 3.5 was actually where it could generate simple stuff. Progress has kind of tapered off since, though. Claude is still best, but Sonnet 4.5 is disappointing in that it doesn't fundamentally bring me more than 3.5 did; it's just a bit better at execution - but I still can't delegate higher-level problems to it.

Top tier models are sometimes surprisingly good but they take forever.

lomase
3d ago
1 reply
This was true since ChatGPT-1, and I mean the lies.
rafaelmn
3d ago
Not really - 3.5 was the first model where I could actually use it to vibe through CRUD without it wasting more time than it saves. I actually used it to deliver an MVP on a side gig I was working on. GPT-4 was nowhere near as useful at the time. And Sonnet 3 was also considerably worse.

And from reading through the forums and talking to co-workers this was a common experience.

mattmanser
3d ago
2 replies
It's just not true, it is not ready.

Especially Claude, where if you check the forums everyone is complaining that it's gone stupid the last few months.

Claude's code is all over the place, and if you can't see that and are putting its code into production I pity your colleagues.

Try stopping. Honestly, just try. Just use Claude as a super search engine. Though right now ChatGPT is better.

You won't see any drop in productivity.

bratbag
3d ago
It's not about blindly accepting autogenerated code. It's using them for tooling integration.

It's like terminal autocomplete on steroids. Everything around the code is blazing fast.

IshKebab
3d ago
This is far too simplistic a viewpoint. First of all it depends what you're trying to do. Web dev? AI works pretty well. CPU design? Yeah good luck with that.

Secondly it depends what you're using it for within web dev. One shot an entire app? I did that recently for a Chrome extension and while it got many things wrong that I had to learn and fix, it was still waaaaaay faster than doing it myself. Especially for solving stupid JS ecosystem bugs.

Nobody sane is suggesting you just generate code and put it straight into production. It isn't ready for that. It is ready for saving you a ton of time if you use it wisely.

friendzis
3d ago
5 replies
The moment your code departs from typical patterns in the training set (or "agentic environment"), LLMs fall over at best (i.e. can't even find the thing) or do some random nonsense at worst.

IMO LLMs are still at the point where they require significant handholding, showing what exactly to do, exactly where. Otherwise, it's constant review of random application of different random patterns, which may or may not satisfy requirements, goals and invariants.

kakacik
3d ago
4 replies
A contrarian opinion - I also haven't jumped on the train yet; it's even forbidden in our part of the company due to various regulations re data secrecy and generally slow adoption.

Also - for most seasoned developers, actual dev activity is a minuscule part of overall effort. If you are churning out code like some sweatshop every single day at, say, 45, it's by your own choice: you don't want to progress in your career, or your career didn't push you up on its own.

What I want to say is that the minuscule part of the day when I actually get my hands on the code is the best. Pure creativity, puzzle solving, learning new stuff (or relearning when looking at old code). Why the heck would I want to lose or dilute this and even run towards it? It makes sense if my performance is rated only on code output, but it's not... that would be a pretty toxic place, to put it politely.

Seniority doesn't come from churning out code quicker. It's more along the lines of communication, leading others, empathy, toughness when needed, not avoiding uncomfortable situations or discussions, and so on. No room for LLMs there.

morshu9001
3d ago
1 reply
There have been times when something was very important and my ability to churn out quick proof of concept code (pre AI) made the difference. It has catapulted me. I thought talking was all-important, but turns out, there's already too much talk and not enough action in these working groups.

So now with AI, that's even quicker. And I can do it more easily during the half relevant part of meetings, which I have a lot more of nowadays. When I have real time to sit and code, I focus on the hardest and most interesting parts, which the AI can't do.

friendzis
3d ago
1 reply
> ability to churn out quick proof of concept code (pre AI) made the difference. It has catapulted me. I thought talking was all-important

It is always the talking that transitions "here's a quick proof of concept" into "someone else will implement this fully and then maintain it". One cannot be catapulted if they cannot offload the implementation and maintenance. Get stuck with two quick proof-of-concept ideas and you're already at full capacity. One either talks their way into having a team supporting them, or they find themselves on a PIP with the regular backlog piling up.

morshu9001
3d ago
Oh they hired some guys to productionize it, who did it manually back then but now delegate a lot of it to AI.
aswegs8
3d ago
1 reply
While I think that is true and this thread here is about senior productivity switching to LLMs, I can say from my experience that our juniors absolutely crush it using LLMs. They have to do pretty demanding and advanced stuff from the start and they are using LLMs nonstop. Not sure how that translates into long term learning but it definitely increases their output and makes them competent-enough developers to contribute right away.
friendzis
3d ago
> Not sure how that translates into long term learning

I don't think that's a relevant metric: the "learning" rate of humans versus LLMs. If you expect typical LLMs to grow from juniors to competent mids and maybe even seniors faster than a typical human, then there is little point in learning to write code; rather, learn "software engineering with an artificial code monkey". However, if that turns out not to be true, we have just broken the pipeline that produces actual mids and seniors, who can actually oversee the LLMs.

skeezyjefferson
3d ago
> Seniority doesn't come from churning out code quicker. It's more along the lines of communication, leading others, empathy, toughness when needed, not avoiding uncomfortable situations or discussions, and so on. No room for LLMs there.

They might be poor at it, but if you do everything you specified online and through a computer, then it's in an LLM's domain. If we hadn't pushed so hard for work from home it might be a different story. LLMs are poor at soft skills, but is that inherent or just a problem that can be refined away? I don't know.

friendzis
3d ago
> What I want to say is that the minuscule part of the day when I actually get my hands on the code is the best.

And if you are not "churning out code like some sweatshop every single day", those hours are not "hey, let's bang out something cool!"; they're more like "here are 5 reasons we can't do the cool thing, young padawan".

wiz21c
3d ago
1 reply
> The moment your code departs from typical patterns

Using AI, I constantly realize that atypical patterns are much rarer than I thought.

butlike
3d ago
Yeah but don't let it prevent you from making a novel change just because no one else seems to be doing it. That's where innovation sleeps.
reactordev
3d ago
1 reply
This has not been my experience as of late. If anything, they help steer us back on track when a SWE decides to be clever and introduce a footgun.
erichocean
3d ago
Same, I have Gemini Pro 2.5 (now 3) exclusively implementing new designs that don't exist in the world and it's great at it! I do all the design work, it writes the code (and tests) and debugs the thing.

I'm happy, it's happy, I've never been more productive.

The longer I do this, the more likely it is to one-shot things across 5-10 files with tests passing on the first try.

endymion-light
3d ago
Quite frankly - the majority of code is improved by integrating some sort of pattern. An LLM is great at bringing the pattern you may not have realized you were making to the forefront.

I think there's an obsession, especially among more veteran SWEs, with thinking they are creating something one-of-a-kind and special, when in reality we're just iterating over the same patterns.

concats
3d ago
I don't think anyone disagrees with that. But it's a good time to learn now, to jump on the train and follow the progress.

It will give the developer a leg up in the future when the mature tools are ready. Just like the people who surfed the 90s internet seem to do better with advanced technology than the youngsters who've only seen the latest sleek modern GUI tools and apps of today.

lomase
3d ago
1 reply
I am an old SWE and Claude 4.5 has not changed a thing about how we work.

The teams that have embraced AI in their workflow have not increased their output compared with the ones that don't use it.

wiz21c
3d ago
1 reply
I'm like you. I'd say my productivity improved by 5-10%: Claude can make surprisingly good code edits. For these, my subjective feeling is that Claude does in 30 min what I'd have done in one hour. It's a net gain. Now, my job is about communicating, understanding problems, learning, etc. So my overall productivity is not dramatically changing, but for things related to code, it's a net 5-10%.
dangus
3d ago
1 reply
Which is where the AI investment disconnect is scary.

AI companies have invested a crazy amount of money into a small productivity gain for their customers.

If AI were replacing developers it wouldn’t cost me $20-100/month to get a subscription.

lomase
14h ago
If I had something that could replace developers I would not sell it.

I would get all the government IT contracts and make billions in a few months.

Nobody does it because LLMs are a fucking scam, like crypto, and I am tired of pretending they're not.

iammrpayments
3d ago
I don’t use AI for anything except translations and searching and I’d say 3 times out of 10 it gives me bad information, while translation only works ok if you use the most expensive models
Aeolun
4d ago
You don’t have to jump on the hype train to get anything out of it. I started using Claude Code about 4 months back and I find it really hard to imagine developing without it now. Sure, I’m more of a manager, but the tedious busywork, the most annoying part of programming, is entirely gone. I love it.
perfmode
4d ago
How could you not at least try?
tmikaeld
4d ago
I’m in the same position, but I use AI to get a second opinion. Try it using the proper models, like Gemini 3 Pro, which was just released, and include grounding. Don’t use the free models; you’ll be surprised at how valuable it can be.
a_bonobo
4d ago
I think that's also because Claude Code (and LLMs generally) is built by engineers who think of their target audience as engineers; they can only see the world through their own lenses.

Kind of like how, for the longest time, Google used to be best at finding solutions to programming problems and programming documentation: say, a Google built by librarians would have a totally different slant.

Perhaps that's why designers don't see it yet; no designers have built Claude's 'world-view'.

bgwalter
4d ago
If you want to steal code, you can take it from GitHub and strip the license. That is what the Markov chains (https://arxiv.org/abs/2410.02724) do.

It's a code laundering machine. Software engineering has a higher number of people who have never created anything by themselves and have no issues with copyright infringement. Other professions still tend to take a broader view. Even unproductive people in other professions may have compunctions about stealing other people's work.

267 more comments available on Hacker News

View full discussion on Hacker News
ID: 45983700 | Type: story | Last synced: 11/22/2025, 11:00:32 PM
