AI is a front for consolidation of resources and power
Mood: controversial
Sentiment: negative
Category: tech_discussion
Key topics: AI, power dynamics, technology critique

Discussion activity: very active. First comment 59m after posting; peak period 153 comments (Day 1); average ~40 comments per period. Based on 160 loaded comments.

Key moments:
- Story posted: Nov 19, 2025 at 2:09 PM EST (4d ago)
- First comment: Nov 19, 2025 at 3:08 PM EST (59m after posting)
- Peak activity: 153 comments in Day 1 (the hottest window of the conversation)
- Latest activity: Nov 23, 2025 at 8:30 PM EST (6h ago)
What is the value of a technology that allows people to communicate clearly with other people of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse from the Tower of Babel has been lifted.
There will be a time in the future when people will not be able to comprehend that you once couldn't exchange information regardless of personal language skills.
So what is the value of that? Economically, culturally, politically, spiritually?
What is the value of losing our uniqueness to a computer that lies and makes us all talk the same?
Google Translate was far from solid; the quality of its translations was so bad before LLMs that it simply wasn't an option for most languages. It would sometimes even translate numbers incorrectly.
And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
> And as others have said, language is more than just "I understand these words, this other person understands my words" (in the most literal sense, ignoring nuance here), but try getting that across to someone who believes you can solve language with a technical solution :)
The kind of deeply understood communication you are demanding is usually impossible even between people who share a native tongue, come from the same town, or belong to the same family. And people can misunderstand each other just fine without the help of AI. Is it really better to understand nothing at all than to understand something while missing some of the nuance?
LLMs are to translation what computers were to calculating. Sure, you could do without them before. They used to fill entire buildings with office workers whose job it was to compute.
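For what it's worth, the mechanics are mundane. Here's a minimal sketch of LLM-backed translation, assuming the OpenAI Python SDK and an API key in the environment; the model name is just a placeholder, not a recommendation:

    # Minimal sketch of LLM-backed translation. Assumes the OpenAI Python SDK
    # and an API key in OPENAI_API_KEY; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    def translate(text: str, target_language: str) -> str:
        """Ask the model to translate and return only the translated text."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Translate the user's text into {target_language}. "
                            "Return only the translation."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(translate("Hyvää huomenta, miten voin auttaa?", "English"))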
It wasn't a curse. It was basically divine punishment for hubris. Maybe the reference is a bit on the nose.
If you just wanted land, water, and electricity, you could buy them directly instead of buying $100 million of computer hardware bundled with $2 million worth of land and water rights. Why are high end GPUs selling in record numbers if AI is just a cover story for the acquisition of land, electricity, and water?
When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.
He could have explained that better. Try not to look at the media drama the political actors serve up each day, but at the agenda the real powers have laid bare: Trump is threatening an oil-rich neighbor with war, and an expensive-as-hell military is blowing up "drug boats" (so the claim goes) to help the press sell it as a war on drugs. Yeah, right.
Green energy projects, even ones already running, get cancelled. Oil and nuclear energy are both capital intensive and at the same time completely outshone by solar and battery tech, so the energy card is a strong one for steering policy toward your interests.
If you can turn the USA into a resource economy like Russia's, then you can rule like a Russian oligarch. That is also why the administration sees no problem in destroying academia or other industries via tariffs; controlling resources is easier and more predictable than relying on an educated populace that might start to doubt the promise of the American Dream.
What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.
I am suspicious of the assumption that the world can be modeled linearly. That physical reality is non-linear is the more logically sound position, so why is there such a clear straight line drawn from compute to consciousness?
> It's an analogy.
And I pointed out why it's an invalid one -- that was the whole point of my comment.
> But just like the pot of gold, that might be a false assumption.
But it's not at all "just like the pot of gold". Rainbows are perceptual phenomena, their perceived location changes when the observer moves, they don't have "ends", and there certainly aren't any pots of gold associated with them--we know for a fact that these are "false assumptions"--assumptions that no one makes except perhaps young children. This is radically different from consciousness and computation, even if it were the case that somehow one could not get consciousness from computation. Equating or analogizing them this way is grossly intellectually dishonest.
> Someone sees computing, assuming consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.
Utter nonsense.
This can mean any one of 50 different physicalist frameworks. And only 55% of philosophers of mind accept or lean towards physicalism:
https://survey2020.philpeople.org/survey/results/4874?aos=16
> rainbows, their ends, and pots of gold at them are not
It's an analogy. Someone sees a rainbow and assumes there might be a pot of gold at the end of it, so they think that if there were more rainbows, there would be a greater likelihood of a pot of gold (or more pots of gold).
Someone sees computing, assuming consciousness is at the end of it, so they think if there were more computing, there would be more likelihood of consciousness.
But just like the pot of gold, that might be a false assumption. After all, even under physicalism there is a variety of ideas, some of which would say more computing will not yield consciousness.
Personally, I think that even if computing as we know it can't yield consciousness, that would just result in changing "computing as we know it" and end up with attempts to make computers with wetware, literal neurons (which I think is already being attempted).
Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.
Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into and don't know how to deal with. I don't have any answers. It seems like it'll get worse before it gets better.
It is also a fuzzy index with the unique ability to match on multiple poorly specified axes at once in a very high-dimensional search space. This is notoriously difficult to code with traditional computer science techniques. Large language models are in some sense optimal at it, instead of "just a little bit better than a total failure", which is what we had before.
Just today I needed to find a library I only vaguely remembered from years ago. Gemini found it in seconds based on the loosest description of what it does.
That is a technology that is getting difficult to distinguish from magic.
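For anyone wondering what "fuzzy index" means mechanically, here's a rough sketch of similarity lookup over embeddings. The embed function is a stand-in (its output here is essentially arbitrary) and the library blurbs are invented for illustration; the point is the shape of the computation, not the results of this stub:

    # Sketch of a fuzzy lookup: rank candidates by cosine similarity in a
    # high-dimensional embedding space. `embed` is a placeholder for a real
    # sentence-embedding model; with this stub the scores are meaningless,
    # but the structure of the search is the point.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in for a real embedding model (any sentence encoder would do).
        rng = np.random.default_rng(len(text))  # deterministic stub
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    def fuzzy_search(query: str, docs: list[str], top_k: int = 3):
        """Return the top_k docs ranked by cosine similarity to the query."""
        q = embed(query)
        scored = sorted(((float(np.dot(q, embed(d))), d) for d in docs), reverse=True)
        return scored[:top_k]

    blurbs = [  # hypothetical library descriptions
        "parses command-line flags into a typed config object",
        "streams large CSV files without loading them into memory",
        "retries flaky HTTP calls with exponential backoff",
    ]
    print(fuzzy_search("the thing that re-runs failed requests, waiting longer each time", blurbs))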
> But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.

IMHO the bleeding edge of what’s working well with LLMs is within software engineering, because we’re building for ourselves first.
Claude code is incredible. Where I work, there are an incredible number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.
I find it hard to buy into opinions from non-SWEs on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don't doubt that they don't yet have compelling AI tooling.
With subagents and A2A generally, you should be able to hook any of them into your preferred agentic interface
(Agent SDK, not android)
AGI is a lot of things, a lot of ever moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, and arguing against ASI being a thing, when talking about AGI. "Human level competences" is AGI. Super-human, ever improving, infinite growth - that's ASI.
Whether and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would say that what we have today is "AGI"?
Meta just laid 600 of them off.
All this talk of AGI, ASI, super-intelligence, recursive self-improvement, etc. is just undefined, masturbatory pipe dreaming.
For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.
The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.
Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.
It's like how we can theoretically build a spaceship that accelerates to 99.9999% of c: just a constant 1G acceleration engine with "enough fuel".
Of course the problem is that "enough fuel" = more mass than is available in our solar system.
ASI might have a similar problem.
[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...
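For the curious, the "enough fuel" claim can be sanity-checked with the relativistic rocket equation. A back-of-the-envelope sketch, assuming a single stage, no deceleration at the destination, and rough illustrative exhaust velocities:

    # Back-of-the-envelope check of the "enough fuel" problem using the
    # relativistic rocket equation: ln(m0/m1) = (c / v_exhaust) * atanh(beta).
    # Assumptions: single stage, no deceleration, rough exhaust-velocity figures.
    import math

    C = 299_792_458.0          # speed of light, m/s
    SOLAR_SYSTEM_KG = 2.0e30   # roughly the Sun's mass, which is ~99.9% of the system

    def log10_mass_ratio(beta: float, v_exhaust: float) -> float:
        """log10 of the initial/final mass ratio needed to reach beta * c."""
        return (C / v_exhaust) * math.atanh(beta) / math.log(10)

    beta = 0.999999  # 99.9999% of c
    for label, v_e in [("chemical exhaust (~4.5 km/s)", 4.5e3),
                       ("idealized fusion exhaust (~0.1 c)", 0.1 * C)]:
        lg = log10_mass_ratio(beta, v_e)
        print(f"{label}: fuel-to-payload mass ratio ~ 10^{lg:.0f}")
    print(f"total solar-system mass ~ 10^{math.log10(SOLAR_SYSTEM_KG):.1f} kg")

Even with an idealized fusion drive the ratio lands around 10^31, which is already several solar masses of fuel per kilogram of payload.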
I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently.
There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts.
Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity gain; they could be throwing more bodies at the problem and we'd never know.
The challenge in competing with these products is not code. The challenge in competing in lucrative markets that need a fresh approach is also generally not code. So I'm not sure that is a good metric for evaluating LLMs for code generation.
I could see it being doable by forking LibreOffice or Calligra Suite as a starting point, although even with AI assistance I'd imagine that it might take anyone not intimately familiar with both LibreOffice (or Calligra) and MS Office longer than a weekend to determine the full scope of the delta between them, much less implement that delta.
But you'd still need someone with sufficient skill (not a novice), maybe several hundred or thousand dollars to burn, and nothing better to do for some amount of time that's probably longer than a weekend. And then that person would need some sort of motivation or incentive to go through with the project. It's plausible, but not a given that this will happen just because useful agentic coding tools exist.
So where are they?
We're not asking to evaluate LLMs for code. We're asking to evaluate them as product generators or improvers.
I can go to Wendy's, or I can make my own version of Wendy's at home pretty easily with just a bit more time expended.
The cliff is still too high for software. I could go and write Office from scratch or customize the (shivers) FOSS software out there, but it's not worth the time and effort.
So, where is that in the 2020s?
Yes, code is a detail (ideas too). It's a platform. It positions itself as the new thing. Does that platform allow upstarts? Or does it consolidate power?
We have superhuman coding (https://news.ycombinator.com/item?id=45977992), where are the superhuman coded major apps from small companies that would benefit most from these superhumans?
Heck, we have superhuman requirements gathering, superhuman marketing, superhuman almost all white collar work, so it should be even faster!
How are we building _for_ ourselves when we literally automate away our jobs? This is probably one of the _worst_ things someone could do to me.
Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.
The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly about whether or not this goal will be achieved.
Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.
But here's the thing: the hard part of programming was never really syntax, it was about having the clarity of thought and conceptual precision to build a system that normal humans find useful despite the fact they will never have the patience to understand let alone debug failures. Modern AI tools are just the next step to abstracting away syntax as a gatekeeper function, but the need for precise systemic thinking is as glaringly necessary as ever.
I won't say AI will never get there (it already surpasses human programmers in much of the mechanical and rote knowledge of programming-language arcana), but it is still orders of magnitude away from being able to produce a useful system when specified by someone who does not think like a programmer. Perhaps it will get there. But I think the barrier at that point will be the age-old human need to have a throat to choke when things go sideways. Those in power know how to control and manipulate humans through well-understood incentives, and this applies all the way to the highest levels of leadership. No matter how smart or competent AI is, you can't just drop it into those scenarios. Business leaders can't replace human accountability with an SLA from OpenAI; it just doesn't work. Never say never, I suppose, but I'd be willing to bet the wheels come off modern civilization long before the skillset of senior software engineers becomes obsolete.
Syntax is not a gatekeeper function. It's exactly the means of describing that precise systemic thinking. When you're creating a program, you're creating a DSL for multiple subsystems, which you then integrate.
The subsystems can be abstract, but we usually define good software by how closely fitted the subsystems are to the problem at hand, meaning adjustments only need slight code alterations.
So viewing syntax as a gatekeeper is like viewing sheet music as a gatekeeper for playing music, or numbers and arithmetic as a gatekeeper for accounting.
I can't directly compile that into instructions which will make a CPU do the thing, but for the purposes of describing that component of a system, it's at about the right level of abstraction to reasonably encode the expected behavior. Aside from choosing specific libraries/APIs, there's not much remaining depth to get into without bikeshedding; the solution space is sufficiently narrow that any conforming implementation will be functionally interchangeable.
AI is just laying bare that the hard part of building a system has always been the logic, not the code per se. Hypothetically, one can imagine that the average developer in the future might one day think of programming language syntax in the same way that an average web developer today thinks of assembly. As silly as this may sound today, maybe certain types of introductory courses or bootcamps would even stop teaching code, and focus more on concepts, prompt engineering, and developing/deploying with agentic tooling.
I don't know how much learning syntax really gatekeeps the field in practice, but it is something extra that needs to be learned, where in theory that same time could be spent learning some other aspect of programming. More significant is the hurdle of actually implementing syntax; turning requirements into code might be cognitively simple given sufficiently baked requirements, but it is at minimum time-consuming manual labor which not everyone is in a position to easily afford.
I won't, unless both you and I have a shared context that ties each of these concepts to a specific thing. You said "async function", and there are a lot of languages that don't have that concept. And what about the permissions of the S3 bucket? What's the initial wait time for the retries? What algorithm for the resizing? What if someone sends us a very big image (let's say the maximum the standard allows)?
These are still logic questions that have not been addressed.
The thing is that general programming languages are general. We do have constructs like procedures/functions and classes that allow for a more specialized notation, but using them well is a skill to acquire (like writing clear and informative text).
So in pseudo-Lisp, the code would be something like:

    (defun fn (bytes)
      (when-let* ((png (byte2png bytes))
                  (valid (and (valid-png-p png)
                              (square-res-p png)))
                  (small-png (resize-image png))
                  (bucket (get-env "IMAGE_BUCKET"))
                  (filename (uuid)))
        (do-retry :backoff 'exp
          (s3-upload bucket small-png))))
And in pseudo-Prolog:

    square(P) :- width(P, W), height(P, H), W is H.
    validpng(P, X) :- a whole list of clauses that parses X and builds up P, square(P).
    resizepng(P) :- bigger(100, 100, P), scale(100, 100, P).
    smallpng(P, X) :- validpng(P, X), resizepng(P).
    s3upload(P) :- env("IMAGE_BUCKET", B), s3_put(P, B, exp_backoff(100)).
    fn(X) :- smallpng(P, X), s3upload(P).
So what you've left out is all the details. It's great if someone already has a library that does the thing and the functions have the same signatures, but more often than not there isn't anything like that. Code can be as high-level as you want and very close to natural language. Where people spend their time is implementing the lower levels and dealing with all the failure modes.
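To make that concrete, here's roughly what those details look like once the pseudocode has to actually run. Pillow and boto3 are assumed purely for illustration, and the size cap, resize target, and retry policy are arbitrary choices the pseudocode never had to make:

    # Sketch of the same pipeline with the details filled in (Pillow + boto3 assumed).
    # Most of the code is validation and failure handling, not the happy path.
    import io
    import os
    import time
    import uuid

    import boto3
    from PIL import Image, UnidentifiedImageError

    MAX_BYTES = 20 * 1024 * 1024  # arbitrary cap on input size

    def handle_image(data: bytes) -> str | None:
        if len(data) > MAX_BYTES:
            return None                            # reject oversized inputs
        try:
            Image.open(io.BytesIO(data)).verify()  # cheap integrity check
            img = Image.open(io.BytesIO(data))     # reopen: verify() consumes the object
        except UnidentifiedImageError:
            return None                            # not a decodable image at all
        if img.format != "PNG" or img.width != img.height:
            return None                            # PNG-only and square-only

        small = img.resize((100, 100))             # which resampling filter? another choice
        buf = io.BytesIO()
        small.save(buf, format="PNG")

        s3 = boto3.client("s3")
        bucket = os.environ["IMAGE_BUCKET"]
        key = f"{uuid.uuid4()}.png"
        for attempt in range(5):                   # bounded retries, exponential backoff
            try:
                s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())
                return key
            except Exception:
                time.sleep(2 ** attempt)
        return None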
If you're just in it to collect a salary, then yeah, maybe you do benefit from delivering the minimum possible productivity that won't get you fired.
But if you like making computers do things, and you get joy from making computers do more and new things, then LLMs that can write programs are a fantastic gift.
Maybe currently if you enjoy social engineering an LLM more than writing stuff yourself. Feels a bit like saying "if you like running, you'll love cars!"
In the future, when the whole process is automated, you won't be needed to make the computer do stuff, so it won't matter whether you would like it. You'll have another job. Likely one that pays less and is harder on your body.
They have not necessarily changed the rate at which I produce valuable outputs (yet).
This also makes it harder to prioritize work in an organization. If work is perceived as "cheap" then it's easy to demand teams prioritize features that will simply never be used. Or to polish single user experiences far beyond what is necessary.
We prioritize now based on time complexity and, omg, it changes everything: if we have 10 easy bugfixes and one giant feature to do (random bad-faith example), we do 5 bugfixes and half the feature within a month and get enormous satisfaction from users who would never have accepted that approach in the first place. If we had listened to them, we would have done 75% of the feature and zero bugfixes and had angry users/clients whining that we did nothing all month.
The time spent on dev stuff absolutely matters, and churning quick stuff quickly provides more joy to the people who pay us. It's a delicate balance.
As for AI, for now it just wastes our time. It always craps out half-correct stuff, so we optimized our time by refusing to use it, and we beat the teams that do use it that way.
The fact is, value is produced when something can be produced at a fraction of the resources required previously, as long as the cost is borne by the person receiving the end result.
In a 1-year company, the only tech person that's been there for more than 3-4 months (the CTO), only really understands a tiny fraction of the codebase and infrastructure, and can't review code anymore. Application size has blown up tremendously despite being quite simple. Turnover is crazy and people rarely stay for more than a couple months. The team works nights and weekends, and sales is CONSTANTLY complaining about small bugs that take weeks to solve.
The funny thing is that this is an AI company, but I see the CTO constantly asking developers "how much of that code is AI?". Paranoia has set in for him.
Oh, look, you've normalized deviance. All of these things are screaming red flags, the house is burning down around you.
People who don’t yet have the maturity for the responsibility of their roles, thinking that merely adopting a new technology will make up for not taking care of the processes and the people.
I haven't found that to be true
I'm of the opinion that anyone who is impressed by the code these things produce is a hack
Whoever says it's time to move to LLMs is clueless.
I have not even seen a real CRUD app with real, happy users written with AI tools, and that is the perfect candidate.
Know this: someone is coming after this already.
One day someone from management will hear a cost-saving story at a dinner table, and the words GPT, Cursor, Antigravity, reasoning, and AGI will cause a buzzing in their ear. Waking up with tinnitus the next morning, they'll instantly schedule a 1:1 to discuss "the degree of AI use and automation".
You can throw all the AI you want at it, but at the end of the day you get what you pay for.
Yesterday, GitHub Copilot declared that my less AI-wary friend's new Laravel project was following all industry best practices for database design, even as it stored entities as denormalized JSON blobs in a MySQL 8.x database with no FKs, indexes, or constraints, all-NULL columns (and using root@mysql as the login, of course), while all the Laravel controller actions' DB queries were RBAR loops that loaded every row into memory before doing JSON deserialization just to filter rows.
I can’t reconcile your attitude with my own personal lived experience of LLMs being utterly wrong 40% of the time; while 50% of the time being no better or faster than if I did things myself; another 5% of the time it gets stuck in a loop debating the existence of the seahorse emoji; and the last 5% of the time genuinely utterly scaring me with a profoundly accurate answer or solution that it produced instantly.
Also, LLMs have yet to demonstrate an ability to tackle other real-world DBA problems… like physically installing a new SSD into the SAN unit in the rack.
No, it's not going to write all your code for you. Yes your skills are still needed to design, debug, perform teamwork(selling your designs, building consensus, etc), etc.. But it's time to get on the train.
Claude 3.5 was actually the point where it could generate simple stuff. Progress has kind of tapered off since, though. Claude is still the best, but Sonnet 4.5 is disappointing in that it doesn't fundamentally give me more than 3.5 did; it's just a bit better at execution. I still can't delegate higher-level problems to it.
Top tier models are sometimes surprisingly good but they take forever.
And from reading through the forums and talking to co-workers this was a common experience.
Especially Claude, where if you check the forums everyone is complaining that it's gone stupid the last few months.
Claude's code is all over the place, and if you can't see that and are putting its code into production, I pity your colleagues.
Try stopping. Honestly, just try. Just use Claude as a super search engine. Though right now ChatGPT is better.
You won't see any drop in productivity.
It's like terminal autocomplete on steroids. Everything around the code is blazing fast.
Secondly it depends what you're using it for within web dev. One shot an entire app? I did that recently for a Chrome extension and while it got many things wrong that I had to learn and fix, it was still waaaaaay faster than doing it myself. Especially for solving stupid JS ecosystem bugs.
Nobody sane is suggesting you just generate code and put it straight into production. It isn't ready for that. It is ready for saving you a ton of time if you use it wisely.
IMO LLMs are still at the point where they require significant handholding, showing what exactly to do, exactly where. Otherwise, it's constant review of random application of different random patterns, which may or may not satisfy requirements, goals and invariants.
Also, for most seasoned developers, actual dev activity is a minuscule part of the overall effort. If you are churning out code like some sweatshop every single day at, say, 45, it's by your own choice: either you don't want to progress in your career, or your career didn't push you up on its own.
What I want to say is that the minuscule part of the day when I actually get my hands on the code is the best part. Pure creativity, puzzle solving, learning new stuff (or relearning it when looking at old code). Why the heck would I want to lose or dilute this, let alone run toward that? It would make sense if my performance were rated only on code output, but it's not... that would be a pretty toxic place, to put it politely.
Seniority doesn't come from churning out code quicker. It's more along the lines of communication, leading others, empathy, toughness when needed, not avoiding uncomfortable situations or discussions, and so on. No room for LLMs there.
So now with AI, that's even quicker. And I can do it more easily during the half relevant part of meetings, which I have a lot more of nowadays. When I have real time to sit and code, I focus on the hardest and most interesting parts, which the AI can't do.
It is always the talking that transitions "here's a quick proof of concept" into "someone else will implement this fully and then maintain it". One cannot be catapulted upward if one cannot offload the implementation and maintenance. Get stuck with two quick proof-of-concept ideas and you're already at full capacity; one either talks their way into having a team supporting them, or one ends up on a PIP with a regular backlog piling up.
I don't think that's the relevant metric: the "learning" rate of humans versus LLMs. If you expect typical LLMs to grow from juniors to competent mids and maybe even seniors faster than a typical human does, then there is little point in learning to write code rather than learning "software engineering with an artificial code monkey". However, if that turns out not to be true, we have just broken the pipeline that produces the actual mids and seniors who can oversee the LLMs.
They might be poor at it, but if you do everything you specified online and through a computer, then it's in an LLM's domain. If we hadn't pushed so hard for work from home, it might be a different story. LLMs are poor at soft skills, but is that inherent or just a problem that can be refined away? I don't know.
And if you are not "churning code like some sweatshop every single day" those hours are not "hey, let's bang out something cool!", it's more like "here are 5 reasons we can't do the cool thing, young padawan".
Using AI, I constantly realize that atypical patterns are much rarer than I thought.
I'm happy, it's happy, I've never been more productive.
The longer I do this, the more likely it is to one-shot things across 5-10 files with testing passing on the first try.
I think there's an obsession, especially in more veteran SWEs to think they are creating something one of a kind and special, when in reality, we're just iterating over the same patterns.
It will give the developer a leg up in the future when the mature tools are ready. Just like the people who surfed the 90s internet seem to do better with advanced technology than the youngsters who've only seen the latest sleek modern GUI tools and apps of today.
The teams that have embraced AI in their workflow have not increased their output compared with the ones that don't use it.
AI companies have invested a crazy amount of money to deliver a small productivity gain to their customers.
If AI was replacing developers it wouldn’t cost me $20-100/month to get a subscription.
I will get all the government IT contracts and make billions in a few months.
Nobody does it because LLMs are a fucking scam, like crypto, and I am tired of pretending they're not.
Kind of like how, for the longest time, Google used to be best at finding solutions to programming problems and programming documentation; a Google built by librarians, say, would have a totally different slant.
Perhaps that's why designers don't see it yet, no designers have built Claude's 'world-view'.
It's a code laundering machine. Software engineering has a higher number of people who have never created anything by themselves and have no issues with copyright infringement. Other professions still tend to take a broader view. Even unproductive people in other professions may have compunctions about stealing other people's work.
267 more comments available on Hacker News