I don’t; I think a workable fusion reactor will be the most important technology of the 21st century.
> But fusion as a power source is never going to happen. Not because it can’t, because it won’t. Because no matter how hard you try, it’s always going to cost more than the solutions we already have.
https://matter2energy.wordpress.com/2012/10/26/why-fusion-wi...
> I fully support a pure research program for radically different approaches to fusion.
Setting a date for when one opens is just a pipe dream; they don't know how to get there yet.
Whether it works or not is of course another matter.
https://www.nrc.gov/reading-rm/basic-ref/students/history-10...
People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.
Auto-tagging of photos, generating derivative images and winning at Go, I will give you. There's been some progress on protein folding, I heard?
Where's the 21st century equivalent of the steam locomotive or the sewing machine?
> Accelerating fusion science through learned plasma control
https://deepmind.google/discover/blog/accelerating-fusion-sc...
(2022)
The maximum possible benefit of fusion (aside from the science gained in the attempt) is cheap energy.
We'll get very cheap energy just by massively rolling out existing solar panels (maybe some at sea), and other renewables, HVDC and batteries/storage.
Fusion is almost certain to be uneconomical in comparison, if it's even technically feasible.
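To put a rough number on that claim, here's a back-of-the-envelope levelized-cost-of-energy (LCOE) sketch; all the inputs (capex, capacity factors, lifetimes, discount rate) are illustrative assumptions, not figures from any study:

```python
# Rough LCOE sketch: annualized capital plus O&M, divided by annual
# output. Every input here is an illustrative assumption.

def lcoe(capex_per_kw, capacity_factor, lifetime_years,
         rate=0.07, opex_per_kw_yr=0.0):
    """Return cost in $/kWh using a standard capital recovery factor."""
    crf = rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)
    annual_cost = capex_per_kw * crf + opex_per_kw_yr  # $/kW-year
    annual_kwh = 8760 * capacity_factor                # kWh per kW of capacity
    return annual_cost / annual_kwh

# Utility-scale solar: ~$1,000/kW, ~25% capacity factor, 30-year life.
print(f"solar:  ${lcoe(1000, 0.25, 30, opex_per_kw_yr=15):.3f}/kWh")   # ~$0.044
# Hypothetical fusion plant: even at an optimistic $6,000/kW and an 85%
# capacity factor, capital intensity dominates the result.
print(f"fusion: ${lcoe(6000, 0.85, 40, opex_per_kw_yr=120):.3f}/kWh")  # ~$0.077
```

This leaves out solar's storage costs on one side and fusion's unsolved engineering on the other, but it shows why a capital-heavy plant struggles against panels that keep getting cheaper.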
AI is already dramatically impacting some fields, including science (e.g. AlphaFold), and AGI would be a step change.
I'd say limitless energy from fusion plants is about as likely as e-scooters getting replaced by hoverboards. Maybe next millennium.
But then you start to have some issues with global warming: the planet settles at the temperature where energy input equals energy radiated away, and we probably don't want to release enough waste heat to shift that balance.
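For scale, here's that balance worked through with standard radiative numbers (linearized Stefan-Boltzmann); the abundant-fusion scenario at the end is purely illustrative:

```python
# How much warming would direct waste-heat release cause, independent
# of CO2? Standard radiative-balance figures; the "100x from cheap
# fusion" scenario is an illustrative assumption.

SOLAR_ABSORBED_W = 1.22e17  # ~122,000 TW absorbed by Earth after albedo
T_EFF_K = 255.0             # Earth's effective radiating temperature

def warming_from_waste_heat(extra_watts):
    """Linearize P = sigma * T**4: dT = (T / 4) * (dP / P)."""
    return T_EFF_K / 4 * extra_watts / SOLAR_ABSORBED_W

print(warming_from_waste_heat(19e12))    # today's ~19 TW: ~0.01 K
print(warming_from_waste_heat(2000e12))  # ~100x today: ~1 K
```

In other words, the ceiling sits a couple of orders of magnitude above current consumption.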
It might be nice if, by the end of the 21st century, that were something we had to care about.
...and it does seem this time that we aren't even in the huge overcapacity part of the bubble yet, and won't be for a year or two.
* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
* https://www.amazon.co.uk/Technological-Revolutions-Financial...
The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.
e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.
I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.
For a 4-day week to really happen at scale, I'd expect we'd similarly need the government to decide to roll it out, rather than workers' groups pushing it from the bottom up.
See perhaps:
* https://en.wikipedia.org/wiki/Eight-hour_day_movement
Generally it only really started being talked about when "workers" became a thing, specifically with the Industrial Revolution. Before that a good portion of work was either agricultural or domestic, so talk of 'shifts' didn't really make much sense.
Yes, that is the first link of my/GP post.
Most new tech is like that: a period of mania, followed by a long tail of actual adoption where the world quietly changes.
Why is that the case? There's plenty of people in the field who have made convincing arguments that it's a dead end and fundamentally we'll need to do something else to achieve AGI.
Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I'm not a hater, it could be true, but it seems to be gospel and I'm not sure why.
Mapping to 2001 feels silly to me, when we've had other bubbles in the past that led to nothing of real substance.
LLMs are cool, but if they can't be relied on to do real work, maybe they're not change-the-world cool? More like $30-40B-market cool.
EDIT: Just to be clear here. I'm mostly talking about "agents"
It's nice to have something that can function as a good Google replacement especially since regular websites have gotten SEOified over the years. Even better if we have internal Search/Chat or whatever.
I use Glean at work and it's great.
There's some value in summarizing/brainstorming too etc. My point isn't that LLMs et al aren't useful.
The existing value though doesn't justify the multi-trillion dollar buildout plans. What does is the attempt to replace all white collar labor with agents.
That's the world-changing part, not running a pretty successful biz with a useful product. That's the part where I haven't seen meaningful adoption.
This is currently pitched as something that has a nonzero chance of destroying all human life; we can't settle for "eh, it's a bit better than Google and it makes our programmers like 10% more efficient at writing code."
> Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I have a friend who works at PwC doing M&A. This friend told me she can't work without ChatGPT anymore. PwC has an internal AI chat implementation.

Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
There's no doubt that you'll find anecdotal evidence both for and against in all variations, what's much more interesting than anecdotes is the aggregate.
[0] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
In the first few years of any new technology, most people investing in it lose money, because the transition and experimentation costs are higher than the initial returns.
But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.
These are business customers buying a consumer-facing product.
It always takes time to figure out how to profitably utilize any technological improvement and pay off the upfront costs. This is no exception.
>I believe both sides are right. Like the 19th century railroads and the 20th century broadband Internet build-out, AI will rise first, crash second, and eventually change the world.
I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.
I also think that whether it's worth throwing trillions of dollars at AI for "a better Google search, code snippet generator, and obscure bug finder" is a contentious question, and a lot of people oppose it for that reason.
I personally still think it's kind of crazy we have a technology to do things that we didn't have just ~2 years before, even if it just stagnates right here. Still going to be using it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).
At the same time, I don't know if it's worth trillions of dollars, at least right now.
So all claims on this thread can be very much true at the same time, just depends on your perspective.
>At the same time, I don't know if it's worth trillions of dollars, at least right now.
The revenue numbers sure don't think so. And I don't think this economy can support "trillions" of spending even if it wanted to. That's why the bubble will pop, IMO.
>Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.
>The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work.
Is she more productive though?
People who smoke cigarettes will be unable to work without their regular smoke breaks. Doesn’t mean smoking cigarettes is good for working.
Personally I am an AI booster and I think even LLMs can take us much farther. But people on both sides need to stop accepting claims uncritically.
/s
What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?
What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?
This, to me, feels incredibly plausible.
Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.
"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."
That three day report still takes three days, wink wink.
AI can be a tool for 10xers to go 12x, but more likely it's also the best slack-off tool, letting slackers go from 0.5x to 0.1x.
And the businesses with AI mandates for employees probably have no idea.
Anecdotally, I've seen it happen to good engineers. Good code turning into flocks of seagulls, stacks of scope 10-deep, variables that go nowhere. Tell me you've seen it too.
Both their perspectives are technically right. But we'll either have burned-out workers or a lagging schedule as a result in the long term. I miss when we thought more long-term about projects.
Naming another example outside of LLM skeptics asking it, about LLMs, is inherently a counterexample.
Why not? If you ever got an AI-generated email or had to code-review anything vibecoded, you're going to be suspicious about who's "more productive". I've read reports and studies, and it feels like the "more productive" people tend to be pushing more work onto people below or beside them to fix the generated mess.
I do believe there are productive ways to use this tech, but it does not seem like many people these days have the discipline to establish a proper workflow.
Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.
This "just believe me" mentality normally comes from scams.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.
Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.
Are you really comparing an LLM to a computer? Really? There are many jobs today that quite literally would not exist at all without computers. It's in no way comparable.
You use ChatGPT to do the things you were already doing faster and with less effort, at the cost of quality. You don't use it to do things you couldn't do at all before.
LLMs are nothing like a computer for a programmer, or a saw for a carpenter. In the very best case, from what their biggest proponents have said, they can let you do more of what you already do with less effort.
If someone has used them enough that they can no longer work without them, it's not because they're just that indispensable: it's because that someone has let their natural faculties atrophy through disuse.
Why not?
>I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?”
It's very possible. I know people love besmirching the "you won't always have a calculator" mentality. But if you're using a calculator for 2nd-grade mental math, you may have degraded too far. It varies by task, of course.
>Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.
Depends on how it's making it easier. Phones are an excellent example. They make communication much easier and long distance communication possible. But if it gets to the point where you're texting someone in the next room instead of opening your door, you might be losing a piece of you somewhere.
Let’s be serious here. These are still professionals and they have a reputation. The few cases you hear online of AI slop in professional settings is the exception. Not the norm.
It's no different to a manager that delegates: are they less of a manager because they entrust the work to someone else? No. So long as they do quality checks and take responsibility for the results, where's the issue?
Work hard versus work smart. Busywork cuts both ways.
Given what I've seen in the educational sector: yes. Very hard. We already had this massive split in extremes between the highly educated and the ones who struggle. The last thing we need is to outsource the aspect of thinking to a billionaire tech company.
The slop you see in the workplace isn't encouraging either.
That's just an appeal to masses / bandwagon fallacy.
> Is it so hard to believe that ChatGPT is actually productive?
We need data, not beliefs, and the current data is conflicting. ffs.
It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily, they monitor the usage statistics, and are updating engineering leveling guides to include sufficient usage of AI being required for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.
It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible. The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.
But man, seeing all that code get spit out on the screen FEELS amazing, even if I'm going to spend the next few hours editing it, and the next few months managing the technical debt I didn't notice when I merged it.
My own use case is financial analysis and data capture by the models. It takes away the grunt work, so I can focus on the more pleasant aspects of the job; it also means I can produce better-quality reports, as I have additional time to look more closely. It also points out things I could have potentially missed.
Free time and boredom spurs creativity, some folks forget this.
I also have more free time for myself; you're not going to see that on a corporate productivity chart.
Not everything in life is about making more money for some already wealthy shareholders, a point I feel is sometimes lost in these discussions. I think some folks need some self-reflection on this point: their jobs don't actually change the world, and thinking of the shareholders only gets you so far. (Not pointed at you, just speaking generally.)
For me, quality is the biggest metric, not money. But time does play into the metric of quality.
The sad reality is that many use it as a shortcut to output slop. Which may be "productive" in a job where that busywork isn't critical for anyone but your paycheck. But those kinds of corners being cut seems anathema to proper engineering or any other mission critical duties.
>their jobs don't actually change the world and thinking of the shareholders only gets you so far.
I'm worried about seeing more cases like the lawyer who submitted citations to a judge for cases that never existed. There are ethical concerns about the casual chat apps too, but I can leave those to others.
People doing their jobs know how to use it effectively. Just because corporates aren't capturing that value for themselves doesn't mean it's low quality. It's being used in a way that is perhaps reflected as an improvement in the actual employee's standing, and could be bridging existing outdated work processes. Often an employee is powerless to change these processes, and KPIs are notoriously narrow in scope.
Hallucinations happen less frequently these days, and people are aware of the pitfalls, so they account for this. Literally in my own example above, it means I have more time to actually check my own work (and it's work), and it also points out factors I might have missed as a human (this has absolutely happened multiple times already).
Fun fact: smoking likely is! There have been numerous studies into nicotine as a nootropic, e.g. https://pubmed.ncbi.nlm.nih.gov/1579636/#:~:text=Abstract,sh... which have found that nicotine improves attention and memory.
Shame about the lung cancer though.
Au contraire. Acute nicotine improves cognitive deficits in young adults with attention-deficit/hyperactivity disorder: https://www.sciencedirect.com/science/article/abs/pii/S00913...
> Non-smoking young adults with ADHD-C showed improvements in cognitive performance following nicotine administration in several domains that are central to ADHD. The results from this study support the hypothesis that cholinergic system activity may be important in the cognitive deficits of ADHD and may be a useful therapeutic target.
This isn't a sign that ChatGPT has value as much as it is a sign that this person's work doesn't have value.
Would you say that their work has no value?
Anyway, IDEs don't try to offload the thinking for you; they're more like an abacus. You still need to work in one for a while and learn the workflow before it's more efficient than a text editor plus docs.
Chrome is a trickier aspect, because the reality is that a lot of modern docs completely suck. So you rely less on official documentation and more on how others have navigated an IDE, and whether those options work for you. I'd rather we write proper documentation than offload it to a black box that may or may not understand what it's spouting out at you, though.
ChatGPT automates much of my friend's work at PwC making her more productive --> not a sign that ChatGPT has any value
Farming machines automated much of what a farmer used to have to do by himself making him more productive --> not a sign that farming machines have any value
The output of PwC -- whoops, here goes any chance of me working there -- is presentations and reports.
“We’re entering a bold new chapter driven by sharper thinking, deeper expertise and an unwavering focus on what’s next. We’re not here just to help clients keep pace, we’re here to bring them to the leading edge.”
That's on the front page of their website, describing what PwC does.
Now, what did PwC used to do? Accounting and auditing. Worthwhile things, but adjuncts to running a business properly, rather than producing goods and services.
Look up what M&A is.
Mergers and Acquisitions? If that's the right acronym, I hate it even more, thank you.
But yes, I can see how automating the BS of corporate culture then using it to impress people (who also don't care anyway) by saying "I made this with AI" can be "productive". Not really a job I can do, though.
If you think convincing investors to give you hundreds of millions is easier than writing code, you’re out of your mind.
I am curious: what kind of work is she using ChatGPT for, such that she cannot do without it?
> ChatGPT released data showing that programming is just a tiny fraction of queries people do
People are using it as a search engine, getting dating advice, and everything under the sun. That doesn't mean there is business value, so to speak. If these people had to pay, say, $20 a month for this access, would they be willing to do so?
The poster's point was that coding is an area that is paying consistently for LLM models, so much so that every model has a coding-specific version. But we don't see the same sort of specialized models for other areas, and the adoption is low to nonexistent.
Given they said this person worked at PwC, I’m assuming it’s pointless generic consultant-slop.
Concretely it’s probably godawful slide decks.
Well, this article cites $400B of spending against $12B of revenue. That's not zero value, but it definitely shows overvaluation. We're not paying that level of money back with consumer-level goods.

Now, is B2B valuable? Maybe. But it's really tough to value that given how businesses are operating c. 2025.
> ChatGPT released data showing that programming is just a tiny fraction of queries people do.
Yes, but it's not 2010 anymore. Companies are already on ChatGPT's neck trying to get ROI. They can't run insolvent for a decade at this level of spending like all the FAANGs did in the last decade.
Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
Current AI tools may not beat the best programmers, but they definitely improve the average programmer's efficiency.
Try changing something old in claude code (or codex etc) using a programming language you have used before. Your opinion might change drastically.
That's bread and butter development work.
I use copilot in agent mode.
But why would I do that? Either I'm learning a new language in which case I want to be as hands-on as possible and the goal is to learn, not to produce. Or I want to produce something new in which case, obviously, I'd use a toolset I'm experienced in.
For example, perhaps I want to use a particular library which is only available in language X. Or maybe I'm writing an add-on for a piece of software that I use frequently. I don't necessarily want to become an expert in Elisp just to make a few tweaks to my Emacs setup, or in Javascript etc. to write a Firefox add-on. Or maybe I need to put up a quick website as a one-off but I know nothing about web technologies.
In none of these cases can I "use a toolset I'm experienced in" because that isn't available as an option, nor is it a worthwhile investment of time to become an expert in the toolset if I can avoid that.
It's a damn good tool, I use it, I've learned the pitfalls, it has value but the inflation of potential value is, by definition, a bubble...
If you told me you were going to spend half a trillion dollars and the best minds on reading the whole internet, and then, with some statistical innovation, try to guess the probable output for a given input, the way it works now seems about right; probably a bit disappointing, even.

I would also say: it seems cool and you could do that, but why would you? At least when the training is done it's cheap to use, right? No!? What the actual fuck!
Do we really need more efficient average programmers? Are we in a shortage of average software?
Yes. The "true" average software quality is far, far lower than the average person perceives it to be. ChatGPT and other LLM tools have contributed massively to lowering average software quality.
Anyway we don't need more efficient average programmers, time-to-market is rarely down to coding speed / efficiency and more down to "what to build". I don't think AI will make "average" software development work faster or better, case in point being decades of improvements in languages, frameworks and tools that all intend to speed up this process.
I just tried earlier today to get Copilot to make a simple refactor across ~30-40 files: essentially changing one constructor parameter in all derived classes of a common base class and adding an import statement. In the end it managed ~80% of the job, but only after messing it up entirely first (waiting a few minutes), then asking again after 5 minutes of waiting whether it really should do the thing, and then missing a bunch of classes and randomly removing about 5 parentheses from the files it edited.
Just one anecdote, but my experiences so far have been that the results vary dramatically and that AI is mostly useless in many of the situations I've tried to use it.
You will have much more success if you can compartmentalize and use new LLM instances as often as possible.
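Concretely, one way to compartmentalize a bulk refactor like the one above is to give every file its own fresh conversation instead of one long chat. Here's a minimal sketch using the OpenAI Python client; the model name, prompt wording, and the `BaseWidget`/`theme` names are hypothetical placeholders, not the poster's actual setup:

```python
# Sketch of "one fresh LLM instance per task": instead of asking one
# long-running chat to edit 30-40 files (where context rot sets in),
# loop over the files and give each edit its own clean conversation.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "In the following Python file, every class deriving from BaseWidget "
    "must pass a new `theme` argument to super().__init__(). Apply that "
    "one change, add any missing import, and return only the full file."
)

for path in Path("src").rglob("*.py"):
    source = path.read_text()
    if "BaseWidget" not in source:
        continue  # this file can't be affected by the refactor
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{source}"}],  # fresh context per file
    )
    path.write_text(resp.choices[0].message.content)
```

In practice you'd diff and review each rewrite rather than writing it back blindly, but the point stands: no single context ever holds more than one file's worth of code, so there's far less room for the model to lose track.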
Why is this inherently different?
[1] Side-note: This was written at a time when selling software as a standalone product was not really a thing, so everything was open-source and the "how to modify" part was more about how to read and understand the code, e.g. architecture diagrams.
I'm talking about "shrinkwrap" software like Word or something. There's nothing even close to testing for that; this is not just "system testing" it.
Some of the stuff generated I can't believe is actually good to work with long term, and I wonder about the economics of it. It's fun to get something vaguely workable quickly though.
Things like deepwiki are useful too for open source work.
For me though, the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups (not writing enough code) instead of the real source of inefficiency in any reasonably sized org: coordination problems.
Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.
Sad but true. Better to sell to management and tagline it as "you don't need a whole team anymore", or go so far as "you can do this all by yourself now!"
Sadly managers usually have more money to spend than the workers too, so it's more profitable.
If you work in science, it's great to have something that spits out mediocre code for your experiments.
It was Claude Code Opus 4.1 instead of Codex but IMO the differences are negligible.
So it looks best when the user isn't qualified to judge the quality of the results?
Haven't we established that if you are a layman in an area, AI can seem magical? Try doing something in your established area and you might get frustrated. It will give you the right answer with caveats: code that is too verbose, performance-intensive, or sometimes ignores best security practices.
Odd way to describe ChatGPT which has >1B users.
AI overviews have rolled out to ~3B users, Gemini has ~200M users, etc.
Adoption is far from low.
Does that really count as adoption, when it has been introduced as a default feature?
HN seems to think everyone is like the people in the bubble here, who think AI is completely useless and want nothing to do with it.
Half the world is interacting with it on a regular basis already.
Are we anywhere near AGI? Probably not.
Does it matter? Probably not.
Inference costs are dropping like a rock, and usage is continuing to skyrocket.
That's the kind of adoption that should just be put up for adoption instead.
(And of course, the reason that I can tell that the auto-translated video titles are hilarious and/or wrong is because they are translating into a language that I speak from a language that I also speak, but apparently the YouTube app's dev team cannot fathom that a person might speak more than one language.)
I don't actually think that AI overviews have "negative value" - they have their utility. There are cases where I stop my search right after reading the "AI overview". But "organic" adoption of ChatGPT or Claude or even Gemini and "forced" adoption of AI overviews are two different beasts.
He has not engaged with any chatbot, but he thinks of himself as "using AI now" and thinks of it as a value-add.
The business model is data collection about you on steroids, and the bet is that the winning company will eclipse Meta in value.
It's just more ad tech with multipliers, and it will continue to control thought, sway policy and decide elections. Just like social media does today.
I'm not sure, though, that they make enough revenue, or what the moat will be if the best models more or less converge around the same level. For most normies, it might be hard to spot the difference between GPT-5 and Claude, for instance. Okay, for Grok the moat is that it doesn't pretend to be a pope and censor everything.