We Won't Be Missed: Work and Growth in the Era of AGI [pdf]
Key topics
The paper 'We Won't Be Missed: Work and Growth in the Era of AGI' explores the economic implications of Artificial General Intelligence, sparking a discussion on its potential impact on society, ownership, and the distribution of wealth.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 32 comments in 2-4h
- Avg / period: 8.2
Based on 49 loaded comments
Key moments
- Story posted: Sep 27, 2025 at 8:49 AM EDT
- First comment: Sep 27, 2025 at 10:23 AM EDT (2h after posting)
- Peak activity: 32 comments in 2-4h (hottest window of the conversation)
- Latest activity: Sep 28, 2025 at 3:56 AM EDT
This has happened at least five times so far.
We are pretty close to the limits of fabrication for transistors. Barring radically different manufacturing and/or ASIC development, the performance we have today will be roughly the performance available in 10 years (I predict we'll maybe see a 2x improvement in compute performance over that span).
If you've paid attention, you've already seen the slowdown of compute development. A 3060 GPU isn't really significantly slower than a 5060 even though it's 5 years old now.
There are directions hardware and algorithms have been going in - parallel processing - that are not limited by fabrication?
Correct, it works on principles currently completely unapplied in ASIC design. We don't, for example, have many mechanisms that allow for new pathways to be formed in hardware. At least, not outside of highly controlled fashion. It's not clear that it would even be helpful if we did.
> There are directions hardware and algorithms have been going in - parallel processing - that are not limited by fabrication?
They are limited by the power budget. Yes, we can increase the amount of parallel compute 100x, but not without also increasing the power budget by 100x.
But further, not all problems can be made parallel. Data dependencies exist, and those always slow things down. And coordination isn't free for parallel algorithms either.
I'm not saying there's not some new way to do computation which hasn't been explored. I'm saying we've traveled down a multi-decade path to today's compute capabilities and we may be at the end of this road. Building a new model that's ultimately adopted will (likely) take more decades. I mean, consider how hard it's been to purge x86 from society. We are looking at a problem a million times more difficult than just getting rid of x86.
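The "not all problems can be made parallel" point is essentially Amdahl's law: the serial fraction of a workload caps the speedup no matter how much parallel hardware (and power) you throw at it. A minimal sketch, with illustrative numbers:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup with n parallel workers when a fraction p of the
    work can be parallelized (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 100x more hardware gives
# far less than 100x speedup; as n grows, speedup is capped at 1/(1-p) = 20x.
for n in (10, 100, 1000):
    print(n, round(amdahl_speedup(0.95, n), 1))  # 6.9, 16.8, 19.6
```

This is why data dependencies matter: the 5% of the work that must run serially dominates long before the power budget even enters the picture.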
We need more than UBI. AGI is the culmination of all human activity up to that point, and all of humanity deserves ownership of it. It should not belong solely to those who put the cherry on top, with the rest of us at their mercy. They don't deserve to control humanity's destiny. AGI, at some point, has to be made into ... I don't know. Not nationalized - something more. A force of pure good for all humans, unaffiliated with any corporation or state.
Because compute is owned and sold by people whose businesses are built on top of compute, they only let you have their excess; it follows that their needs will come before yours.
It's the ultimate monopoly. Anyone with more compute will be able to outperform any business you could invent, ultimately locking you out of competition.
The owners of compute will make a killing and can set whatever price they like. But if the owner is someone like, say, Amazon, what actually stops them from using the massive compute army they already own to enter the most lucrative compute-dependent businesses, slowly dominating everything?
That's news to me! Some people were really ahead of their time.
Yes, literally.
In a post-scarcity society (which we're technically in now, if we took this seriously), Communism is a more appropriate model of governance than Capitalism. It would ensure a more equitable distribution of resources, incentivize stronger environmental policies to minimize waste, and drive technological innovation towards preservation (of truly scarce resources - rare elements, for instance) over extraction.
The problem is that humans desire power for themselves and the humiliation of others, which results in every method of governance becoming corrupted over time, especially if it doesn't see regular change to address its weaknesses (as we see now with neoliberal societies resisting populism on both extremes of the political scale). Combine that with centuries of nation-states lumbering onwards and fighting for their own survival in an increasingly nebulous and ever-shifting digital landscape, and it's no wonder things are a tinderbox.
All that being said, Communism is an (maybe not the, but an) appropriate choice for a post-scarcity, post-AGI society. It's something we need to discuss in earnest now, and start dismantling Capitalism where feasible to lay the foundation for what comes next. As others (myself included) have pointed out repeatedly, this is likely the last warning we'll get before AGI arrives. It's highly unlikely LLMs and current technology will give rise to AGI, but it's almost a certainty that we'll see actual glimmers of AGI within the next fifty years - and once that genie is out of the bottle, we'll be "locked in" to whatever society we've created for ourselves, until and unless we leave our planet behind and can experiment with our own alternatives at scale.
Good craftsmen know when they've reached the limits of their current tooling. We need to recognize that Capitalism is the wrong tool for an AGI era if we value our humanity.
Human needs are unlimited. There can't be any "post-scarcity society".
The transition point to a post-scarcity society is in the eyes of the beholder, and moves away from them at the same speed with which they approach it.
From the perspective of the hundreds of millions of people working for 10 cents an hour, any American, even the poorest of them, whose only available job pays a minimum wage of $8 an hour, has long since passed the point of a post-scarcity society.
But try convincing a minimum-wage American that he's beyond that point and that he needs to give up $6 of his $8 because "there is no scarcity after $2 per hour". Then you will know what people really think about "more equitable distribution of resources, incentivize stronger environmental policies to minimize waste, and drive technological innovation towards preservation".
The fundamental needs of humanity aren't infinite: a safe home, nutritious food, healthcare, and education are the sum total of human needs. Everything else is superfluous to survival, albeit not self-fulfillment or personal enrichment. We're post-scarcity in the sense that, on a planetary scale, we have enough food, shelter, healthcare, and education for every single inhabitant on Earth, but Capitalism incentivizes the misuse of these surplus resources to create value for existing stakeholders.
This is where I flatly reject any notion of Capitalism being viable, suitable, or acceptable in a post-AGI society, and rail against it in the present day. Its incentives no longer align with human needs or challenges, and in fact harm humanity as a whole by promoting zero-sum wealth extraction rather than a reconciling of the gap between human needs and Capital desires. As much pro-Capitalism content as I consume in an effort to better my perspective, the reality is that it is rapidly outliving its usefulness as a tool like a shambling zombie, wholly divested from human survival and soldiering onward solely as a means to prop up the existing power structures in existence.
Trying to dispute this fact reveals your inexperience and out-of-touch worldview. And frankly, I don't entirely understand what you think this fact is defending.
> The fundamental needs of humanity aren't infinite: a safe home, nutritious food, healthcare, and education are the sum total of human needs
You are literally listing needs that are insatiable.
Let me repeat, there are hundreds of millions of people in the world working for 10-20 cents an hour. Try talking to them and ask them what salary a person needs to get "safe home, nutritious food, healthcare, and education". You'll almost certainly hear a figure around a couple of dollars per hour, maybe even less.
And no, it's not about the cost of living, it's about the fact that in their opinion, anyone with access to the American labor market (even as an illegal worker) makes several times more money than they need to get "safe home, nutritious food, healthcare, and education". Because these human needs are exactly as infinite as other "superfluous to survival" needs.
> Capitalism incentivizes the misuse of these surplus resources to create value for existing stakeholders
Or, in the opinion of people earning 10-20 cents an hour, the enormous salaries of American workers earning the American minimum wage. American workers EVERY year earn more money than American billionaires have accumulated over generations. What an enormous source for redistribution and building a fair post-capitalist society!
But alas, if the builders of a post-capitalist society cannot convince even the most needy workers to show class solidarity and give up $6 of their $8 minimum wage in the name of avoiding "the misuse of these surplus resources" and "safe home, nutritious food, healthcare, and education" for every human on the planet, what can we expect from capital, which is in a much more advantageous negotiating position?
It might be clearer to say "how can we share goods and services provided by intelligent machines?"; the paper seems overly focused on compute as "the" bottleneck when natural resources are still needed. (Even AGI can't magic up new copper atoms from scratch, though it can exploit low grade ores that were previously economically useless.)
I think that referring to income instead of goods and services predisposes people to think of this in a currency-centered way, when currency-denominated market transactions may become much less important in a world of intelligent machines. If (v) the share of labor income in GDP converges to zero is actually true, then machines can do everything, including copying other machines. Co-ops, municipalities, provinces, and states can vertically integrate to provide goods and services outside the market if intelligent machines are actually doing the work. Compare the old "buy an OS, buy a database, buy a compiler" approach that one had to take circa 1990 with the "copy a free Linux distribution" approach circa 2000.
If wage income is obsolete, so is intellectual property rent. One common fear in these discussions is that billionaires with robots will let the masses die of starvation when their labor is no longer needed, but the billionaires who own IP don't have much leverage either.
For one, most of the world's population/governments aren't beholden to IP billionaires now. It might seem like that to those of us who live in the Anglosphere (a plurality of HN users), but globally speaking most billionaires didn't make their money from IP. They'll back national movements to ignore copyright and patents to Just Copy The Foreign Robots when the time comes. (South Africans who were wealthy from mining didn't have an incentive to side with foreigners who were wealthy from pharma patents back when South Africa was ignoring IP to fight HIV/AIDS, as a parallel example). For another, billionaires aren't even in charge everywhere. See the example of China with Jack Ma, or the fabulously wealthy oligarchs that have been brutally demoted in Putin's Russia. If a leader can accumulate power and rally popular support by giving free robot goods and services to the people, they will; the IP billionaires don't have anything to trump that offer.
For these reasons I don't worry about mass immiseration/starvation if smart robots actually take over all productive work. I'm sure there will be struggles, but I don't think that IP owners can win the fight any more than the MPAA actually ever eliminated movie piracy.
The thing that worries me more is mass empowerment of even the world's most inflexible and violent personalities. Nuclear weapons, ballistic missiles, and nerve gas are now old technologies. The main thing that prevents every angry separatist movement or cult from becoming armed like North Korea or Aum Shinrikyo is lack of material and technical resources. But in a world of smart robots, all you need to get your own enriched uranium and ballistic missiles is those smart robots, a region of several square kilometers that no outside force is policing, and a few years to bootstrap the precursor technologies that don't already have blueprints in public databases.
Mass material abundance probably means a decrease in "ordinary" crime driven by the stress of material deprivation but an increase in tail risks from unhinged individuals. The sort of person who kills 5 strangers with a gun in the name of their ideology could become a person who shoots down an airliner, killing 200, in the name of the same ideology.
The model in the paper talks about freeing the labor bottlenecks, which will be great for the capitalists who own the compute. Assuming we don't have a social safety net in place, that will ultimately mean that a lot of stuff is cheaper for the wealthy with nobody else being able to afford it.
What would a middle class job be? Almost all knowledge work would be obliterated. Engineering, science, maybe even art. All evaporated.
The paper suggests that people will shift into markets like manual labor. But what do we do when those jobs have no real bottleneck, as the paper describes? Only so many people are needed for care work or picking berries. And right now, it seems we have enough, as the current salaries are pitiful in all the sectors the article mentions. What pressure would actually make wages increase? Surely not the fact that those are the only jobs that exist for normal people who weren't born to a family that owns a datacenter.
And it isn't like datacenter jobs are going to replace the army of jobs AGI would displace. You need so few people to operate a compute center.
That's why it'll likely be hell for most people. If you don't actually own resources, you'll be left out in the cold. Even if you do have pretty good resources, you'll be in a world where AGI is set to extract every single bit of resource from you in ways we can't currently imagine. For example, knowing everything about you, including that you are willing to spend $11 for a widget while someone else will only spend $10.
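The widget example is first-degree price discrimination: a seller who knows each buyer's exact willingness to pay captures all of the surplus. A toy sketch with made-up numbers (the buyer values are hypothetical):

```python
def uniform_revenue(wtp: list[float], price: float) -> float:
    """Revenue at one posted price: only buyers with wtp >= price buy."""
    return price * sum(1 for w in wtp if w >= price)

def best_uniform_revenue(wtp: list[float]) -> float:
    """The best a seller can do with a single price for everyone."""
    return max(uniform_revenue(wtp, p) for p in wtp)

def personalized_revenue(wtp: list[float]) -> float:
    """Perfect price discrimination: charge each buyer their maximum."""
    return sum(wtp)

buyers = [11.0, 10.0, 4.0]          # hypothetical willingness to pay
print(best_uniform_revenue(buyers))  # 20.0 (price at $10, two buyers)
print(personalized_revenue(buyers))  # 25.0 (every buyer pays their max)
```

A seller with total information about you always does at least as well as one posting a single price, and the gap is exactly the consumer surplus that disappears.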
This will ultimately force the question of "what do we do with the unemployed" and I worry the answer is already "well, they should have worked harder. Sucks to suck".
And to be clear, there is no conceivable universe in which that extra money would make their lives better in any meaningful way.
They could support high taxes on the money they earn through the AGI, to fund a UBI that would support literally everyone—because their products are doing literally all the work necessary to maintain a civilized society (barring some in-person tasks that it's hard to hand over to even a very smart robot) without any human actually needing to do any work. They could do so without making themselves poor, or even the least bit less comfortable.
The reason they would choose not to is because they're corrupt selfish "rugged individualists" who care more about their dollar-denominated high scores than about literally any other human being on the planet. And we know this because that's the case with the people in the closest-analog positions today.
And with today's political environment, I see that as particularly unlikely to ever happen. Everyone says "UBI" as a solution, but I've yet to see even the hint of that being tried. In fact, the opposite seems to be happening in the US with SSI (which is UBI) being slowly defunded and made less accessible.
Hinton predicted 9 years ago that radiologists would lose their jobs, yet today they are better paid and more numerous. Maybe AGI will make humans more valuable instead of less. There might be complementarities and mutual reinforcement between us and AGI.
No, I'm really just looking at what this paper proposes will be the future of labor and expanding on it. I'm not saying AGI will mean "humans not needed"; I'm saying AGI will mean "fewer humans are needed", and in some cases that could be significantly fewer. If you've listened to any CEO gush over AI, you know that's exactly what they want to do.
> Hinton predicted 9 years ago that radiologists would lose their jobs, yet today they are better paid and more numerous. Maybe AGI will make humans more valuable instead of less. There might be complementarities and mutual reinforcement between us and AGI.
Medicine is a tricky place for AI to integrate. Yet it is already integrating there. In particular, basically every health insurance agency is moving towards using AI to auto deny claims. That is a concrete case where companies are happy to live with the consequences even though they are pretty dire for the people they impact.
And, not for nothing, 10 years is a pretty short time to completely eliminate an industry. The more we pay radiologists, the more likely you'll start seeing a hospital decide that "maybe we should just start moving low-risk scans to AI". Or you might start seeing cheap remote radiologists willing to take on the risk of getting things wrong.
This world never came to be. Instead we had Graeber's 'McJobs', with ever more specialist roles as per the capitalist division of labour idea, with lots of important jobs that aren't important at all.
We had a glimpse of a 'leisure society' during the pandemic when only key workers were needed to do anything useful, everyone else was on the government furlough.
In theory, AGI offers the prospect of a leisure society of sorts. In practice, it doesn't. Going back to the unusual curriculum at the school I happened to go to, a key skill was critical thinking. As well as doing macrame, football, cookery, pottery, art and what not during what would be teaching time, we also had plenty of courses on philosophy and much else that can be bracketed as literature. The idea was that we were not being brought up to be compliant serfs for the capitalist machine; instead we were expected to have agency and be able to think for ourselves.
The problem with AGI is that we are bypassing our brains to some extent, at the cost of our ability to master the art of critical thinking. Nobody has to solve a problem by themselves; they can ask their phone to do it for them. I could be wrong; however, I don't see evidence that AGI is making people smarter or more intelligent.
We have already outsourced our ability to recall information to search engines. General knowledge used to be something you had or you didn't, and people earned respect for being able to remember and recall vast amounts of information. This information came from books that you either had or had to access in a library. Nowadays, whatever it is, a search engine has got you covered. Nobody has to be a walking gazetteer or dictionary.
This ability to recall information rather than look everything up came with risks, mostly because it was easy to be wrong, or "almost right", which can be worse. However, it is/was the bedrock of critical thinking.
Clearly the utopian vision of a leisure society never happened in the form that some envisaged in the 1980s. With AGI I don't see any talk of a leisure society where we are only having to put in ten hour working weeks. This isn't being proposed at all. If it was then AGI would not be a nightmare for the average person.
As for what humans should do when labor can be accomplished via AI? Man, if you're asking yourself that question in 2025, you're kinda late to the party in a lot of ways - and desperately need to read more books. AGI may eliminate the need for human labor in Capital terms, but it does not suddenly eradicate all value of human labor - only the value Capital ascribes to it. It's why, in a post-AGI fictional setting, you see so many more adventurers, artisans, explorers, researchers, engineers, teachers, and other roles often undervalued by Capital but highly valued by humans.
I imagine most humans will simply turn their favorite hobby into their new "profession", as a means to meet other humans and continue growing social bonds. Maybe that means painting full-time, or striking out on a photography journey. Maybe it's opening a retro game "store" to connect with like-minded enthusiasts. Maybe it's running your own museum for local artists and creators, sharing your tastes with other visitors.
A lot of "valueless" work under Capitalism suddenly becomes highly prized in a post-AGI, post-scarcity society. Most humans will figure that out to varying degrees, and the rest will be able to benefit from a wider array of fee-less services. Provided, of course, we begin changing society today to enable that future tomorrow.
If all humans are asset holders, then where does Capital arise? If all wealth has been extracted from the asset-less into the hands of Capital, then why does Capital need humans?
I can keep cutting your particular barb in any way I choose, but the answer remains the same: Capital, while a useful tool in improving society over the past few centuries by migrating away from Feudalism and Monarchy into Capitalism and Democracy, is no longer useful (or even ethical) now that its incentives are diametrically opposed to human prosperity for all except the asset class.
> Capital is no longer useful (or even ethical) now that its incentives are diametrically opposed to human prosperity for all except the asset class.
And this is the moment it will become apparent that humans never controlled Capital to start with.
Read Deleuze and Guattari, Nick Land.
This tweet recapping this paper https://x.com/lugaricano/status/1969159707693891972
This tweet with recaps of various papers presented at "The Economics of Transformative AI" by NBER in Palo Alto a few weeks ago https://x.com/lugaricano/status/1968704695381156142
This even works at a smaller not so general level: imagine that one of today’s popular code models improved to the point it is better (narrowly at programming) than a human. Suddenly the owner shouldn’t sell it to everyone: instead they should pivot and make software that outcompetes anything a human can make. So it doesn’t just replace humans making software but also replaces the programs that people made…
There is an oblique definition embedded in the context that AGI is literally any labor-transfer technology, but without a way to measure it (which is possible), these conversations and concepts are going to stay unmoored from reality.
I made this Google Form, and so far the answers are all over the place, including:
- "you can't"
- "It Can ask: I, why?" (odd answer)
- and my personal favorite: "It can do everything better than humans with no prior information" (impossible according to the no-free-lunch theorem):
https://howdoyoudefineagi.com/