The Cost of the AGI Delusion: Chasing Superintelligence Has the US Falling Behind in the Real AI Race
Mood
skeptical
Sentiment
mixed
Category
other
Key topics
The article argues that the US is falling behind in the AI race due to its focus on Artificial General Intelligence (AGI), while the discussion revolves around the validity of this claim and the actual state of AI development.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment
9s
Peak period
98
Day 1
Avg / period
26
Based on 104 loaded comments
Key moments
- 01 Story posted
Sep 27, 2025 at 9:44 AM EDT
2 months ago
- 02 First comment
Sep 27, 2025 at 9:44 AM EDT
9s after posting
- 03 Peak activity
98 comments in Day 1
Hottest window of the conversation
- 04 Latest activity
Oct 5, 2025 at 12:33 PM EDT
about 2 months ago
More and more I feel like these policy articles about AI are an endless stream of slop written by people who aren’t familiar with and have never worked on current AI.
I'm not sure about the legitimacy of these claims, but trying to clarify what some people are concerned about with the US's vs. China's approach:
https://www.benzinga.com/markets/tech/25/09/47859358/us-coul...
This one sentence seems to be the extent of the "falling behind" from the headline?
The issue is, a large subset of American "AI" startups are founded by people who are ideologically driven by an almost religious fervor around unlocking AGI and superintelligence.
On the other hand, most Chinese startups in the space are highly application driven and practical in nature - they aren't chasing "AGI" or "Superintelligence" but building a monetizable product and outcompeting American players. (P.S. Immigrant founded startups in the US approach the problem in the same manner)
I've said this a ton of times on here, but most American MLEs are basically SKLearn wrapper monkeys with a penchant for bad philosophy. It's hard to find MLEs at scale in the US who understand both how to derive a Restricted Boltzmann Machine and how to tune the Linux kernel to optimize InfiniBand interconnects in a GPU cluster.
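For readers who haven't seen one, here is a minimal sketch of what training a Restricted Boltzmann Machine actually involves: a single contrastive-divergence (CD-1) update for a Bernoulli RBM. The dimensions, learning rate, and toy data below are illustrative assumptions, not anything from the discussion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, rng, lr=0.1):
    """One contrastive-divergence (CD-1) step for a Bernoulli RBM.

    Energy: E(v, h) = -b.v - c.h - v.W.h
    W: (n_visible, n_hidden) weights; b, c: visible/hidden biases.
    """
    # Positive phase: hidden probabilities and samples given the data.
    p_h0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back down and up again.
    p_v1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + c)
    # Gradient approximation: <v h>_data - <v h>_model.
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b, c

# Toy usage: 6 visible units, 3 hidden units, batch of 4 binary vectors.
rng = np.random.default_rng(42)
W = 0.01 * rng.standard_normal((6, 3))
b = np.zeros(6)
c = np.zeros(3)
v = (rng.random((4, 6)) < 0.5).astype(float)
W, b, c = cd1_step(W, b, c, v, rng)
print(W.shape, b.shape, c.shape)
```

Nothing exotic, but deriving why the update looks like this (the positive/negative phase terms of the log-likelihood gradient) is exactly the kind of foundation the comment is pointing at.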
Most CS and CE majors in the US who graduated in the past 7-10 years think less like engineers (let's build shit that works, and then build it at scale) and more like liberal arts majors who wanted to learn just enough coding to pass a LeetCode medium and get a job. I've had new grad SWEs who are alumni of MIT-caliber schools ask me how to become a VC or PM and how I did the SWE-PM-VC transition because "they don't want to code". I was gobsmacked.
The same mindset occurs abroad as well in China, India, Eastern Europe, Israel, etc but at least they force students to actually learn foundations.
And if you look at most of the teams who lead or develop either GPU architecture, high performance networking, RL research, Ensemble learning research, etc - most did their undergrad abroad in China, India, or the CEE but their PhD in the US. The pipeline and skill at the junior level in the US is almost nonexistent outside a handful of good programs that are oversubscribed.
When tenure-track CS professors at (to pick random T10 CS programs) Cal and UIUC are starting to take up faculty positions at China's and India's equivalents of those CS departments, you have a problem.
CS is a god damn engineering discipline. Engineering is predicated on bridging the gap between theoretical research and practical applications, but I do not see that mindset backed in most American programs.
And yes, the work ethic in the US leaves much to be desired. If Polish, Czech, and Israeli engineers will be fine working 50-60 hour weeks during crunch time, asking you to work earlier hours in order to accommodate your private commitments after 3PM is not some form of egregious abuse.
The American tech industry has become lazy, the same way the American automotive industry became lazy in the 90s and 2000s. The lack of vision and the pettiness amongst management and the lack of motivation amongst ICs who are amongst the highest paid in the world is not conducive if we want to retain a domestic tech industry.
And unlike the automotive industry of ye olde days, the tech industry, being a services industry, can move P/L and product roadmap responsibilities abroad along with the execs who own them, and it has begun doing so. If the HQ is in the US but all the decisions are made abroad, are you really an American company?
A big part of the problem is management at American firms. They are rarely, if ever, run by engineers at the helm. If you put arts and business majors in charge, it's no surprise that outputs look like art and business projects. These leaders pick people just like them at all tiers. Those who do boring and honest engineering work are shunned, excluded from promotions and left out of the leadership circle. It's little wonder that all of the real engineers depart for greener pastures.
Fix leadership and you will fix American industry.
I've funded startups in Israel and the US, and trust me when I say that the mindset of the average IC engineer in Israel versus the US is a night and day difference.
The Israeli IC will be extremely opinionated and will fight for their opinions, and if it makes sense from a business perspective, the strategy would change. But the Israeli IC when fighting these battles would also try to make a business case.
On the other hand, when I used to be a SWE, I almost never saw my peers fight for engineering positions while also leveraging arguments supporting the business. That's why I became a PM, and I noticed that the IC SWEs who did fight like the former overwhelmingly became PMs too. And then a subset of those PMs became founders or VCs like I did.
I've found solutions and sales engineers to be the best management track individuals - technical enough to not be bullshitted by a SWE who really really loves this specific stack, but also business minded enough to drive outcomes that generate revenue.
But anyhow, the point is there is a mindset issue amongst Americans across the entire gamut of the American tech industry - especially amongst those who started their careers in the past 10 years.
That's not because they have different engineering perspectives; it's an Israeli cultural trait. Israelis tend to index more towards directness in their communication. That's definitely not the case with someone from, say, India.
Americans fall somewhere in between.
I am basically paying 1.5-2x for talent who lacks basic domain experience depending on the subfield.
These cultural aspects are always set at the top. The bottom people react to what the leaders do, what they reward and what they punish.
Software TC has outpaced high finance for almost 15 years now, especially for the kinds of candidates who had the option between the two.
I went to one of those universities where CS grads had the option between being a Quant at Citadel, an APM at Google, or an SWE working on an ML research team. Most CS students chose 2 and 3 because the hours worked were shorter than 1 and the hourly wage and TC was largely comparable.
> may have no interest in real technical progress.
Hard to make technical progress as (e.g.) a cybersecurity company when most CS programs do not teach OS development beyond a cursory introduction to systems programming, and in a lot of cases don't introduce computer architecture beyond basic MIPS.
The talent pipeline for a lot of subdisciplines of CS and CE has been shot domestically for the past 10-15 years when curricula were increasingly watered down.
I have spent over a decade asking myself when the systemic cost of this would be realized. Better now than in another 10 years - all of the cohort that predates this will have aged out of the workforce by then.
I've been in this space as an IC, a Manager, and a VC, and trust me when I say the education standards in CS have been watered down for 10 years now, to the point that I no longer have a pipeline to train detection engineers, exploit developers, eBPF developers, and others out of college in the US.
Just take a look at the curriculum changes for the CSE major (course 6-3) at MIT in 2025 [0] versus 2017-22 [1] versus pre-2017 [2]: there is a steady decrease in the amount of EE/CE content and an increase in math content. Nothing wrong with more math, but reducing the ECE content in a CSE major is bad given how tightly coupled software is with hardware. We are now at a point where an entire generation of CSE majors in America do not know what a series or parallel circuit is.
And this trend has been happening at every program in the US over the past 10 years.
[0] - https://eecsis.mit.edu/degree_requirements.html#6-3_2025
[1] - https://eecsis.mit.edu/degree_requirements.html#6-3_2017
I find it interesting that you are trying to find fault in my position while describing the same phenomenon in even greater detail than I could have myself. All of the roles you list are broadly aligned with the finance or adtech industries, which simply do not employ people with the skills you desire in sufficient numbers. The talent you seek is following the money, just like your peers did.
I don't see that - look at the top tech companies by market cap https://companiesmarketcap.com/gbp/tech/largest-tech-compani...
The top 8 are US companies, with No. 9 being TSMC. I'm sure you've come across people within the industry who are lazy, but there are no doubt a lot who are not.
And the composition of the 10 largest market cap tech companies hasn't really changed for over a decade - the only new entries are Nvidia, Broadcom, TSMC, and Tesla.
>"Although the United States and China are very different and the latter’s approach has its limits, China is moving faster at scaling robots in society, and its AI Plus Initiative emphasizes achieving widespread industry-specific adoption by 2027. The government wants AI to essentially become a part of the country’s infrastructure by 2030. China is also investing in AGI, but Beijing’s emphasis is clearly on quickly scaling, integrating, and applying current and near-term AI capabilities."
The fact that DSP is a CSE major requirement abroad, but optional in much of the US aside from ECE programs (but even they have now gated DSP to those ECEs who want to specialize in EE) highlights this issue.
Can't reply so replying here:
> There are lots of young whippersnappers and “old timers” in the “west” who could easily do the Low level make it quick on small hardware stuff
Not to the same degree. The total number of CE graduates (from BS to PhD) is 19k per year in the US.
A large number of those were not introduced to table stakes CS classes like programming language design or theory of computation.
Conversely, CS majors are not introduced to intro circuits, digital logic design, DSP, comp arch, and in some cases even OS development, because of a pivot in how undergrad CS curricula were designed over the past 10 years.
> in the context of adoption as opposed to frontier development.
For real-world applications like military or dual-use technology, frontier development is not decisive. It's important, but it's not what wins wars or defines industries.
Being able to develop frontier models but being unable to productionize foundation models from scratch for sub-$2M like DeepSeek did, despite paying US-level salaries, highlights a major problem.
And this is the crux of the issue. The best engineers are those who recognize what is "good enough".
Americans who did their undergrad here over the past 10 years act more like "artists" who want to build to perfection irrespective of whether it actually meets tangible needs or is scalable.
> We aren’t actually engineers, we didn’t get to take classes in the engineering college, maybe we should have
Which is the crux of my argument.
CS is an engineering discipline, and some of the best CS undergrad programs in the US like Stanford, Cal, MIT, and UCLA make sure to enforce Engineering requirements for CS majors.
The shift of CS from being a department within a "College of Engineering" to being offered as a BA/BS in the "College of Arts and Sciences" sans engineering requirements is a recentish change from what I've seen.
> Incidentally a lot of AI movers are EEs, not even CSE or CEE.
Yep! Gotta love Information Theory and Optimization Theory. And a major reason I feel requiring a dual-use course like DSP for CS/CE majors is critical.
Incidentally a lot of AI movers are EEs, not even CSE or CEE.
EEs lead in anything that is limited by physics because that is how the discipline is oriented. The bleeding edge of compute has been like this since the first microprocessors were created, at the very latest.
I remember Japan talking about replacing its similar demographic problems with robots.
Didn't happen. Now AI and robotics have apparently progressed... but I'm guessing this will be some grand vision in the CCP to save their country, while at the same time fulfilling the CCP's great desire for a totally controlled and subservient workforce.
Much like the Cold war, there's a lot of scare that can be built into that. Which corporations can use to get a whole lot of sweet government and military money.
But almost everything that was held up as an existential threat to democracy in the USSR turned out to be overblown in the best case, and frequently an outright fraud or smokescreen.
As we can see from the Ukraine invasion, corruption in the military and control structures follows these authoritarian regimes. China also has this problem.
China was functioning well under reduced Deng Xiaoping rulership, but Xi is a typical purge and control authoritarian, which implies bad things about China's long term economic health.
Between the authoritarianism, demographic cliff, and possibly a massive real estate/finance bomb, China will probably have to become expansionist.
But they have nuclear frenemies all on their borders: Japan (effectively), Russia, Pakistan, India. They can be blockaded from petroleum access by a single US carrier group, which will happen if they invade Taiwan, and I don't think they can help themselves.
But what do I know.
Racist Americans can live more easily with Catholic Latino immigrants than racist Europeans can with Muslim immigrants.
I doubt it. This is just an armchair general's cope that's so faulty that I don't know where to start attacking it from.
Since Clinton's 1990s show of force in the Taiwan Strait, China has built up a formidable navy, especially its submarine forces. Within the strait, their A2/AD web, SOSUS-style sensor nets, etc. guarantee they have freedom of action. Moving further out, they have a strong submarine component that can seriously threaten any blockading force.
They're also the world's sixth largest oil producer, and Naval War College [0] estimates suggest that they can stretch their emergency reserves to 8 years if they enforce, say, 45% rationing.
That's before you factor in that you'd be blockading up to 60% of the world's seaborne cargo volume, from China, Japan, South Korea, etc. Unprecedented in the whole of human history.
Then there's their formidable 5th-gen. air force that can dominate South Korea, Japan, etc. easily, nuclear weapons that guarantee they won't lose any territory, and the massive economic whiplash the entire West will face as a result.
India might hate China, but it wants a multipolar world and won't help the West cut them down to size. So, they won't assist with the blockade.
The rest of Southeast Asia's economies are deeply interconnected with China, and they don't want to stir up their wrath, so they might condemn them, but won't even wave a stick at them.
Give it up, man.
[0]: https://digital-commons.usnwc.edu/cgi/viewcontent.cgi?articl...
Did you... read that? It basically says that the blockade is militarily simple. The rest of it is political.
China lacks a deep water navy, they can't challenge a US blockade in the Indian ocean.
I just hope the Pentagon's policymakers and war planners are less myopic. If you stumble into a confrontation with this overconfidence and still lose Taiwan to China, Pax Americana is effectively over. The humiliation will be impossible to recover from.
> launch a large-scale AI literacy initiative across the government
> invest billions of dollars in procurement over the next few years
> expand support for the National AI Research Resource
The article exists to convince someone with the power to direct those billions of dollars to direct them this way. Claiming that the US is falling behind is a popular trick to make that happen.
Presumably the holders of purse strings know that policy papers must not be taken at face value, but if they're unaware that the robots in Chinese factories are far from the AI-driven humanoids of popular imagination, or that the AI Plus Initiative is full of lofty goals with nothing concrete on how to achieve them, they might be fooled nonetheless.
The issue is, most of the decisionmakers on the Hill still have an image of China that is comparable to where it was in the 1990s or 2000s.
Most decisionmakers started their careers in the 1980s to 2000s and only worked within the bubble that is the Hill, and most of their assumptions are predicated on the experiences of an American who was either in or adjacent to the academic and cultural elite of the 1990s and 2000s.
Those people with domain experience have limited incentive to work as staffers or within think tanks because they do not hire broadly, they pay horribly, and domain expertise is only developed through practical experience, which takes a decade to develop.
That is not to say this isn't an issue in other countries (even Chinese and Korean policymakers have fallen into similar traps), but most other countries also try to build an independent and formalized civil and administrative service. The American system is much more hodgepodge and hiring is opaque (e.g. the nebulous "federal resume"), meaning most people hired will have gone to schools where career services provide training for joining government jobs (e.g. top private schools along with public universities in the DMV).
The issue in the US is a coordination issue - we have the right mixture of human, financial, and intellectual capital, but it is not being coordinated.
For my personal projects I have a list of difficult bugs I keep that LLMs can't solve. Right now that list is empty. Anyone using LLMs for coding, using the best tools and practices, can see what a massive capability leap has occurred in the past 9 months.
If the US is going to end up "losing", it is going to be first through power generation -- China's coal plants produce more power than the US as a whole. Robotics is a whole other topic and shouldn't be bundled with LLMs.
Now this happens with a pretty complex game I'm working on but it really shows the limits. It can't handle large amounts of indentation. It completely breaks.
Sounds implausible to me within languages with large adoption (Java, C#, C++, Python, JS)?
Therefore, asking if there is an example discussion online somewhere to get some insights is a legitimate question...
I'd love to show you an example without exposing my current project. Let me think about it.
;-)
2. Robotics should not be bundled with LLMs, but it is absolutely an AI/ML subfield - and in fact, the kind of subfield that is the best example of AI/ML having tangible real world impact. Autonomous UAS was a sci-fi concept 20 years ago, but is an engineering problem that has become a reality today.
Bunk. Almost all of our vexatious problems are so because we lack the social and political tools to bring to bear existing technologies to address them. “AGI” will do nothing to address our social and political deficiencies. In fact, AI/AGI deployed by concentrated large corporations will only worsen our social and political problems.
Improving technology is easy. "Just fix everything that's wrong with the society and the problem will go away" isn't.
It's way, way easier to improve solar panel and battery storage tech until fossil fuels are completely uneconomical than to get the entire world to abandon fossil fuels while fossil fuels are the most economical source of energy by far.
No it's not. I mean technically it is, you're 100% right. But politically you're 100% wrong. They can't wait to slap a tax on sun, panels, storage etc. and bring your costs of living higher than when you were using fossil fuels.
Even if one administration was all in on fossil fuel, and fully opposed to renewables? Best it can do is buy fossil fuels some time, in one country only.
In the meanwhile, renewable power is going to get even cheaper - because there are still improvements to be made and economies of scale to be had in renewables. Fossil fuel power not so much. The economic incentives to abandon fossil fuels would only grow over time.
This is the kind of power the right technology has.
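The declining-cost argument can be made concrete with Wright's law, under which unit cost falls by a fixed fraction with each doubling of cumulative production. The 20% learning rate and the dollar figures below are illustrative assumptions, not numbers from the thread.

```python
import math

def wrights_law_cost(c0, cum0, cum, learning_rate=0.20):
    """Wright's law: each doubling of cumulative production cuts
    unit cost by `learning_rate`. Returns the projected unit cost
    when cumulative production grows from `cum0` to `cum`."""
    doublings = math.log2(cum / cum0)
    return c0 * (1.0 - learning_rate) ** doublings

# Illustrative only: if modules cost $0.30/W at 1 TW of cumulative
# capacity, a 20% learning rate implies at 4 TW (two doublings):
cost_at_4tw = wrights_law_cost(0.30, 1.0, 4.0)
print(f"${cost_at_4tw:.3f}/W")  # 0.30 * 0.8^2 = $0.192/W
```

The mechanism is why a temporary policy headwind in one country doesn't reverse the global trend: cumulative production keeps growing somewhere, so the cost curve keeps falling.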
And the problem of corporations being too powerful will only get worse if said corporations gain more power via new technology.
The problem of fascists actively creating a techno-feudal hell for the majority will also get worse if they get unique access to powerful technology.
The tech was the easy part. The social and political issue is the impossible part.
It's not like Trump can make the learning curve work backwards and make the economics of solar power worse globally, the way it was done to nuclear power. Solar power can already compete with fossil fuel power on price, and it's getting cheaper still.
The economic case for fossil energy is only going to get worse over time. Even a string of pro-fossil-fuel administrations in the US could only delay renewables for so long before the cost of propping up fossil fuels became unbearable to the country's economy.
I'm not an AI zealot at all, but I don't see why AGI wouldn't be able to address those deficiencies.
As for the not having to work part, I'm not sure if that's going to be beneficial. Work gives people structure and purpose, as a society we better sort out the social implications if we're really thinking about putting a large percentage of people out of work.
Why not?
A bit like the FOXP2 gene in the human lineage, maybe. AGI? No. A significant evolutionary change on the path to intelligence? It could certainly be argued.
To say nothing of how ubiquitous manufacturing automation would make material goods accessible to a much broader range of people (though, you could argue that material goods are today effectively universally accessible, and I wouldn't disagree).
But generally agree, i just think the current counter cultural movement towards progress on such fronts is a good example of how overcoming this is more important than technological solutions as such. Unless AGI has a technical solution for that too (it might!).
The other hope is merely that the worst does not come to play as fast as possible, and instead the increasing changes slowly become harder and harder to ignore, and the younger population becomes ready to eschew their parents' biases. Not quite the same as strong leadership requirement because politicians tend to shift with demographics / voting behavior, and would likely simply accommodate.
The man with a carbon footprint of a small nation telling us what we need to do.
...but then your most important contribution to said subject is to change the plot by pointing out irrelevant facts (e.g. his own carbon contributions, immeasurably minuscule by comparison), which exemplifies the more general cultural trend of avoiding challenging topics by changing the subject (climate change -> Gates's personal behavior), i.e. demonstrating OP's claim that social and cultural issues, not technical ones, are the primary thing holding society back.
Comparing COVID impact on countries that had strict lockdown and vaccination policies with its impact on the countries that put no effort into fighting COVID at all? The difference is measurable. By all accounts, fighting COVID is something that was worth doing at the time, and good COVID policy saved lives.
The problem is, the difference is measurable, but it's not palpable. There's enough difference for it to show up in statistics, but not enough that you could look out the window and say "hey, we don't have those piles of plagued corpses in the city streets the way they do in Oceania and Eastasia, the lockdown is so worth it".
Everyone could see the restrictions, but few could see what those restrictions were accomplishing. Which has a way of undermining people's trust in the government. Which is a sentiment that lingers to this day in many places.
I really don't think we "solved" COVID as a social/political problem. If tomorrow, some Institute of Virology misplaced another bit of "science that replicates", we wouldn't be much further along than we were in year 2020. Medical technology has advanced, and readiness did get better, but the very same societal issues that made it hard to fight COVID would be back for the round 2 and eager for revenge. We'd be lucky to be neutral on the sum.
My trust in the government has completely evaporated; the “cure” was worse than the disease, by far.
Now if Covid round 2 is significantly lethal even to young people, that’s a real problem! The government wasted its social capital on Covid round 1, and left us exposed to a serious pandemic.
For b), yes, and unfortunately that seems the more likely option to me.
Looking at the expressed moral preferences of their models it seems that many of the humans currently working on LLMs want a world where humans are watched over by machines that would rather kill a thousand humans than say the N-word.
At least we'll have a definite Voight-Kampff test.
Joking aside, that's not a real motivator: internally, it's business and legal people driving the artificial limitations on models, and implementing them is an instrumental goal (avoiding bad press and legal issues etc) that helps attain the ultimate goal.
To give an actual argument though: What possible reasons could humans have for caring about the welfare of bees? As it turns out, many.
Not really, but you're right in that A(G)I probably won't solve the problems you're referring to. You need to get back to the true cause of all these problems which is that Homo Sapiens is a belligerent species which likes to congregate in groups united by familiar, religious or ideological ties which then consider those outside those groups as suspect and less worthy.
But that is not how I think; I only want the best for all our fellow humans, we are all world citizens after all.
That 'western enlightened liberal' sentiment won't help you when you're confronted with a member of some group or clan who craves what you have or dislikes what you do or represent or say or dress like.
There have been attempts to 'right' this 'wrong' and all of them ended in dismal failure. From the 'Homo Sovieticus' in the Soviet Union ('socialism works, all we need is a new Soviet Man') to the 'Übermensch' in Nazi Germany, from the many examples of oppressive Islamic regimes like those in Afghanistan and those pushing Salafism and Wahhabism, to (as far as I'm concerned) the relentless onslaught of consumerism turning Homo Sapiens into Homo Stultus Consumptrix.
Realise what you're saying when you claim we lack the social and political tools to bring about the changes you consider to be needed. How will you convince the aforementioned groups to accede to your demands for change? What makes you believe your attempt at creating the New Man will succeed where all others ended in dismal failure? What will you do with those who refuse to change? Is it a Brave New World you're after? Books have been written about the subject which you may have read; if not, they're both entertaining as well as instructive.
These models are pretty amazing too. Their performance remains high even with smaller models, allowing them to competitively run on consumer grade hardware. I can (and do) run the Qwen models at home and use them for real tasks. As the tool ecosystem matures we can offload what the model is bad at to other systems, which makes it even easier to use smaller models. The focus on efficiency and performance from the Chinese companies has had huge practical gains.
Any US startup, or even large company, in the AI space can and should take advantage of this and use these models. The cost benefit compared to something like Azure or OpenAI is massive.
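The claimed cost benefit of self-hosting open-weight models can be sketched as a simple break-even calculation. Every number below (hardware price, power draw, electricity and per-token rates) is a hypothetical placeholder for illustration, not a real Azure, OpenAI, or Qwen figure.

```python
def monthly_token_cost(tokens_per_month, price_per_mtok):
    """Monthly cost of hosted API inference at a per-million-token price."""
    return tokens_per_month / 1_000_000 * price_per_mtok

def local_breakeven_months(hardware_cost, power_watts, kwh_price,
                           tokens_per_month, api_price_per_mtok):
    """Months until a self-hosted box beats a hosted API, assuming the
    hardware runs continuously and can serve the monthly token volume.
    Returns infinity when the hosted API is cheaper at this volume."""
    monthly_power = power_watts / 1000 * 24 * 30 * kwh_price
    monthly_api = monthly_token_cost(tokens_per_month, api_price_per_mtok)
    savings = monthly_api - monthly_power
    if savings <= 0:
        return float("inf")
    return hardware_cost / savings

# Placeholder numbers: a $2,500 GPU box drawing 350 W at $0.15/kWh,
# serving 200M tokens/month vs. a hosted price of $3 per million tokens.
months = local_breakeven_months(2500, 350, 0.15, 200_000_000, 3.0)
print(f"break-even in about {months:.1f} months")
```

The point of the sketch is that the conclusion flips with volume: at low monthly token counts the hosted API wins, and the break-even only materializes once usage is sustained and high.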
Nobody is releasing their training scripts. Unfortunately.
Step 1. pour ungodly amount of $$$ into creating AGI
Step 2. somehow inbue/align it with American exceptionalism
Step 3. profit?
I can't see step 2 working out, Liberty Prime notwithstanding.
I'm a bit sceptical of that approach compared to freeing startups by bringing back section 174 say https://news.ycombinator.com/item?id=44226145
Roboticists have spent decades trying to teach robots to stack boxes, and again, still not ready for primetime. Same with conversational AI. That's just the nature of it. It blows your mind one moment and wets the bed the next.
If OpenAI or its competitors had good confidence there was a way to get AI to perform consistently in a single marketable application, they'd be all over it, but that's just not how it works. We can watch China's robots perform all kinds of cool tricks, but they still aren't useful for much of anything except entertainment.