Bank of England Flags Risk of 'sudden Correction' in Tech Stocks Inflated by AI
Source: ft.com (story, high profile)
Sentiment: skeptical / negative. Debate score: 80/100.
Key topics: AI Bubble, Tech Stock Valuation, Market Risk
The Bank of England warns of a potential 'sudden correction' in tech stocks inflated by AI hype, sparking discussion on the validity of current valuations and the likelihood of an AI-driven market bubble.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussionFirst comment
36m
Peak period
130
0-12h
Avg / period
20
Comment distribution160 data points
Loading chart...
Based on 160 loaded comments
Key moments
01. Story posted: Oct 8, 2025 at 9:55 AM EDT (3 months ago)
02. First comment: Oct 8, 2025 at 10:31 AM EDT (36m after posting)
03. Peak activity: 130 comments in 0-12h (hottest window of the conversation)
04. Latest activity: Oct 14, 2025 at 3:56 PM EDT (3 months ago)
ID: 45516265 · Type: story · Last synced: 11/20/2025, 8:47:02 PM
Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term. If human intelligence is deterministic, then it can be written in software.
Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.
My guess is that we need less than 5 million years further development time even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.
No, not all processes follow deterministic Newtonian mechanics; some are random, unpredictable at times. Are there random processes in the human brain? Yes: there are random quantum processes in every atom, and there are atoms in the brain.
Yes, this is no less materialistic: humans are still proof that human-level intelligence can be made from material atoms, unless you believe in souls or some such. But it's not deterministic.
But also, LLMs are not anywhere close to becoming human level intelligence.
It isn't. But if it were, we can also write that into the algorithm.
>But also, LLMs are not anywhere close to becoming human level intelligence.
They're no farther than about 5 million years distant.
~200 years of industrial revolution and we've already fucked up beyond the point of no return. I don't think we'll have the resources to continue on this trajectory for a million years. We might very well be accelerating towards a brick wall, and there is absolutely no guarantee we'll hit AGI before hitting the wall.
We've already set the course for human extinction, we're about 6-8 generations away from absolute human extinction. We became functionally extinct 10-15 years ago. Still, if we had another 5 million years, I'm one hundred percent certain we could crack AGI.
Determinism is a metaphysical concept like mathematical platonism or ghosts.
You just need a few Dyson spheres and someone omniscient to give you all the parameter values. Easy peazy.
Just like cracking any encryption: you just brute force all possible passwords. Perfectly deterministic decryption method.
</s>
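The sarcasm lands because determinism says nothing about tractability. A toy sketch (all names and sizes here are illustrative, not a real attack on real encryption):

```python
import itertools
import string

# A "perfectly deterministic" brute force: try every candidate in order.
# Deterministic, yes. Tractable, no: the keyspace grows exponentially
# with password length.

ALPHABET = string.ascii_lowercase

def brute_force(target: str) -> str:
    """Enumerate every candidate of the right length until one matches."""
    for candidate in itertools.product(ALPHABET, repeat=len(target)):
        guess = "".join(candidate)
        if guess == target:
            return guess

print(brute_force("zz"))    # fine for toy sizes: 26**2 = 676 candidates
print(len(ALPHABET) ** 12)  # 26**12 candidates for a mere 12-char password
```

The method is fully deterministic at every step; the problem is the exponent, which is the same gap between "human intelligence is deterministic" and "therefore we can write it down".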
It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.
And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.
Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.
No debate will be entered into on this topic by me today.
https://en.wikipedia.org/wiki/Alcubierre_drive
You said:
"(...) if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory."
"Isn’t that what the greatest minds in physics would say as well? Yes, yes it is."
That is not in fact what the greatest minds in physics would say. Your meta-knowledge of physics has failed you here, resulting in you posting embarrassing misinformation. I'm just having to correct it to prevent you from misleading anyone else.
A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.
As humans, we can easily visualize and reason about 2D and 3D spaces, it's natural because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.
Now imagine a form of intelligence that can directly perceive and reason about such high dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.
Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.
Humans existed in the world for hundreds of thousands of years before they did any of those things, with the exception of the wooden hut, which took less time than that. But even that wasn't instant.
Your example doesn't entirely contradict the argument that it takes time and experimentation as well, that intellect isn't the only limiting factor.
So I completely agree that intelligence alone isn't the only factor, even if it's the foundation.
Given a million years, that could change.
It likely couldn't, though, that's the problem.
At a basic level, for whatever abstract system you can think of, there must be an optimal physical implementation of that system: the fastest physically realizable implementation of it. If that physical implementation were to exist in reality, no intelligence could reliably predict its behavior, because that would imply access to a faster implementation, which cannot exist.
The issue is that most physical systems are arguably the optimal implementation of whatever it is that they do. They aren't implementations of simple abstract ideas like adders or matrix multipliers, they're chaotic systems that follow no specifications. They just do what they do. How do you approximate chaotic systems which, for all you know, may depend on any minute details? On what basis do we think it is likely that there exists a computer circuit that can simulate their outcomes before they happen? It's magical thinking.
Note that intelligence has to simulate outcomes, because it has to control them. It has to prove to itself that its actions will help achieve its goals. Evolution doesn't have this limitation: it's not an agent, it doesn't have goals, it doesn't simulate outcomes, stuff just happens. In that sense it's likely that certain things can evolve that cannot be intelligently designed (as in designed, constructed and then controlled). It's quite possible intelligence itself falls in that category and we can't create and control AGI, and AGI can't improve itself and control the outcome either, and so on.
I guess where my speculation comes in is that "simulation" doesn’t necessarily have to mean perfect 1:1 physical emulation. Maybe a higher intelligence could model useful abstractions/approximations, simplified but still predictive frameworks that are accurate enough for control and reasoning even in chaotic domains.
After all, humans already do this in a primitive way, we can't simulate every particle of the atmosphere, but we can predict weather patterns statistically. So perhaps the difference between us and a much higher intelligence wouldn't be breaking physics, but rather having much deeper and more general abstractions that capture reality's essential structure better.
In that sense, it's not "magical thinking", I just acknowledge that our cognitive compression algorithms (our abstractions) are extremely limited. A mind that could discover higher order abstractions might not outrun physics, but it could reason about reality in qualitatively new ways.
> Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message, cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity, with which it has correlation exactly zero, across the probable worlds you imagine.
> But nothing I've depicted this human civilization doing, even begins to approach the theoretical limits set by the formalism of Solomonoff induction.
This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.
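The quoted limit, that one additional bit can at best eliminate half the remaining probability mass, can be illustrated numerically (a minimal sketch with equally likely hypotheses; the names are mine):

```python
import math

def min_bits_to_identify(n_hypotheses: int) -> int:
    # Each observed bit can at best halve the remaining probability mass,
    # so picking out one of n equally likely hypotheses needs >= log2(n) bits.
    return math.ceil(math.log2(n_hypotheses))

def halve(hypotheses):
    # One maximally informative bit: keep the half of the live set
    # consistent with the observation.
    return hypotheses[: max(1, len(hypotheses) // 2)]

hypotheses = list(range(1024))
bits = 0
while len(hypotheses) > 1:
    hypotheses = halve(hypotheses)
    bits += 1

print(bits)                        # 10 observations to pin down 1 of 1024
print(min_bits_to_identify(1024))  # matches the theoretical floor: 10
```

Even this best case assumes every bit is maximally informative; real observers, as the behavioral-economics point notes, fall well short of it.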
——
Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:
> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)
> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.
——
Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time. We already have complete genomes for many species, AlphaFold has solved the protein-folding problem, research oncology studies routinely sequence tumors nowadays, and IHEC says they already have "comprehensive sets of reference epigenomes", so with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time with enough fidelity to simulate cancer, thus enabling us to test candidate drug molecules against a particular cancer instantly.
Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.
Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.
——
There does seem to be an open question about how general intelligence is. We know that there isn't much difference in intelligence between people; 90+% of the human population can learn to write a computer program, make a pit-fired pot from clay, haggle in a bazaar, paint a realistic portrait, speak Chinese, fix a broken pipe, interrogate a suspect and notice when he contradicts himself, fletch an arrow, make a convincing argument in court, program a VCR, write poetry, solve a Rubik's cube, make a béchamel sauce, weave a cloth, sing a five-minute lullaby, sew a seam, or machine a screw thread on a lathe. (They might not be able to learn all of them, because it depends on what they spend time on.)
And, as far as we know, no other animal species can do any of those things: not chimpanzees, not dolphins, not octopodes, not African grey parrots. And most of them aren't instinctive activities even in humans—many didn't exist 1000 years ago, and some didn't exist even 100 years ago.
So humans clearly have some fairly flexible facility that these other species lack. "Intelligence" is the usual name for that facility.
But it's not perfectly general. For example, it involves some degree of ability to imagine three-dimensional space. Some of the humans can also reason about four- or five-dimensional spaces, but this is a much slower and more difficult process, far out of proportion to the underlying mathematical difficulty of the problem. And it's plausible that this is beyond the cognitive ability of large parts of the population. And maybe there are other problems that some other sort of intelligence would find easy, but which the humans don't even notice because it's incomprehensible to them.
The basic issue is that we have to deduce stuff about the world we live in, using resources from the world we live in. In the story, the data bandwidth is contrived to be insanely smaller than the compute bandwidth, but that's not realistic. In reality, we are surrounded by chaotic physical systems that operate on raw hardware. They are, in fact, quite fast, and probably impossible to simulate efficiently. For instance, we can obviously never build a computer that can simulate the behavior of its own circuitry, using said circuitry, faster than it operates. But I think there's a lot of physical systems that are just like that.
Being data-limited means that we get data slower than we can analyze and process it. It is certainly possible to improve our ability to analyze data, but I don't think we can assume that the best physically realizable intelligence would overcome data limitation, nor that it would be cost-effective in the first place, compared to simply gathering more data and experimenting more.
Well, yes, it's from Eliezer Yudkowsky. The kind of people who generally find him persuasive will do so. Those who don't find him convincing, or even consider him somewhat of a crank, like the other self-proclaimed "rationalists", will do likewise. "Muddled" is correct; he lacks rigour in everything, but certainly brings the word count.
It may very well be true that you could cure cancer even faster or more cheaply with more experimental data, but that's irrelevant to the claim that more experimental data is necessary.
It may also be the case that there's no "shortcut" to simulating a human body well enough to test drugs against a simulated tumor faster than real time—that is, that you need to have enough memory to track every simulated atom. (The success of AlphaFold suggests that this is not the case, as does the ability of humans to survive things like electric shocks, but let's be conservative.) But a human body only contains on the order of 10²⁴ atoms, so you can just build a computer with 10²⁸ words of memory, and processing power to match. It might be millions of times larger than a human body, but that's okay; there's plenty of mass out there to turn into computronium. It doesn't make it physically unrealizable.
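The back-of-envelope above can be made explicit. The atom count is the comment's own round number; the words-per-atom and hardware-mass figures are assumptions I've added purely to illustrate the "millions of times larger" claim:

```python
ATOMS_IN_BODY = 1e24     # order-of-magnitude atom count for a human body
WORDS_PER_ATOM = 1e4     # state tracked per simulated atom (assumed)

memory_words = ATOMS_IN_BODY * WORDS_PER_ATOM
print(f"{memory_words:.0e}")  # 1e+28 words of memory, as in the comment

# If each word of memory costs roughly the mass of 100 atoms of hardware
# (assumed), the computer dwarfs the body by about a factor of a million:
HARDWARE_ATOMS_PER_WORD = 100
ratio = memory_words * HARDWARE_ATOMS_PER_WORD / ATOMS_IN_BODY
print(f"{ratio:.0e}")         # 1e+06: "millions of times larger"
```

Huge, but finite, which is the comment's point: nothing here is physically unrealizable.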
Relatedly, you may be interested in seeing Mr. Rogers confronting the paperclip maximizer: https://www.youtube.com/watch?v=T-zJ1spML5c
But if you agree that with 10²⁸ times more computational power we could almost surely cure cancer without gathering much more data, then you agree that we have enough empirical data and just need to analyze it better. We're sort of arguing about the details of which approaches to analyzing the data better would work best.
I'll continue that argument about details a bit more here. So far, even with merely human intelligence, hard computational problems like car crash simulation, protein folding, and mixed integer-linear programming (optimization) have continued to gain even more efficiency from algorithmic improvements than from hardware improvements.
According to our current understanding of complexity theory, we should expect this to continue to be the case. An enormous class of practically important problems are known to be NP-complete, so unless P = NP, they take exponential time: solving a problem of size N requires k**N steps. Hardware advances and bigger compute budgets allow us to do more steps, while algorithmic improvements reduce k.
To be concrete, let's say k = 1.02, we have a data center full of 4096 1-teraflops GPUs, and we can afford to wait a month (2.6 megaseconds) for an answer. So we can apply about 10²² operations to the problem, which lets us solve problems up to about size N = 2600. Now suppose we get more budget and build out 1000 such data centers, so we can apply 10²⁵ ops, but without improving our algorithms. This allows us to handle N = 2900.
But suppose that instead we improve the heuristics in our algorithm to reduce k from 1.02 to 1.01. Suddenly we can handle N = 5100, twice as big.
We can easily calculate how many data centers we would need to reach the same problem size without the more intelligent algorithm. It's about 6 × 10²¹ data centers.
For NP-complete problems, unless P = NP, brute-force computing power lets you solve logarithmically larger problems, while intelligence lets you solve linearly larger problems, equivalent to an exponentially larger amount of computation.
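The arithmetic in this argument can be checked directly. The constants (4096 one-teraflops GPUs, a month of 2.6 megaseconds, k = 1.02 vs. 1.01) are the comment's own:

```python
import math

GPU_FLOPS = 1e12   # 1 teraflops per GPU
GPUS = 4096
MONTH_S = 2.6e6    # roughly one month in seconds

def ops(data_centers: int) -> float:
    """Total operations available in a month across all data centers."""
    return data_centers * GPUS * GPU_FLOPS * MONTH_S

def max_problem_size(total_ops: float, k: float) -> int:
    """Largest N solvable by an O(k**N) algorithm within the ops budget."""
    return int(math.log(total_ops) / math.log(k))

print(max_problem_size(ops(1), 1.02))     # ~2600: one data center, k = 1.02
print(max_problem_size(ops(1000), 1.02))  # ~2900: 1000x the hardware
print(max_problem_size(ops(1), 1.01))     # ~5100: same hardware, better algorithm

# Data centers needed at k = 1.02 to reach N = 5100 by brute force alone:
needed = 1.02 ** 5100 / ops(1)
print(f"{needed:.0e}")                    # on the order of 10**21 data centers
```

Tripling zeros of hardware moves N by a few hundred; shaving the base of the exponent doubles it, which is the log-vs-linear asymmetry the comment describes.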
https://www.theguardian.com/business/2025/oct/08/bank-of-eng...
For non-Brits: the Bank of England is the UK's central bank, a lot like the US Fed. Its comments carry a lot of weight and do impact government policy.
Not enough central banks were making comments about the sub-prime bubble that led to the 2008 crisis. Getting warnings about a possible AI bubble by a central bank is both significant and, in performing the functions of monetary and financial stability for a country, the prudent thing to do.
Offering commentary on which particular sectors they feel are a 'bubble' is outside their purview and not particularly productive IMO, the state is not very good at picking winners.
*edited to 2007
In 1996 Fed Chair Alan Greenspan warned about irrational exuberance, in 1999 he warned Congress about "the possibility that the recent performance of the equity markets will have difficulty in being sustained". The crash came in 2000.
The warning seems to have gone unnoticed. AMD just behaves exactly like Juniper in 1999.
AI is useful. But it's not trillion-dollars useful, and it probably won't be.
That's more of a UI problem than a limitation in Diffusion tech.
That's a customer who'll pay, it might be worth a lot. But a $trillion per year?
The glaring issue back then was that, unlike an LLM, which can understand what you're trying to explain and is a bit more consistent, the diffusion models' ability to read and understand your prompt wasn't really there yet; you were more shotgunning keywords and hoping the seed lottery gave you something nice.
But recent image generation models are significantly more stable in their output. Something like Qwen Image will care a lot more about your prompt and not entirely redraw the scene into something else just because you change the seed.
Meaning that the UI experiments already exist but the models are still a bit away from maturity.
On the other hand, when looking at how models are actually evolving I'm not entirely convinced we'll need particularly many classically trained artists in roles where they draw static images with some AI acceleration. I expect people to talk to an LLM interface that can take the dumbest of instructions and carefully adjust a picture, sound, music or an entire two hour movie. Where the artist would benefit more by knowing the terminology and the granular abilities of the system than by being able to hold a pencil.
The entertainment and media industry is worth trillions on an annual basis, if AI can eat a fraction of that in addition to some other work-roles it will easily be worth the current valuations.
ChatGPT's $10b per year is not insignificant tho.
800M active users, aka 10% of the world's population.
Maybe if companies would wire up their "oh a customer is complaining try and talk them out of canceling their account offer them a mild discount in exchange for locking in for a year contract" API to the LLM? Okay, but that's not a trillion-dollar service.
That's the lowest of the low, and even you accept it doesn't work (yet). How can LLMs be worth 50% of the last year's GDP growth if it's that bad? Do you think customer support represents 50% of newly created value? I bet it isn't even 0.5%.
Maybe it's because I find writing easy, but I find the text generation broadly useless except for scamming. The search capabilities are interesting but the falsehoods that come from LLM questions undermine it.
The programming and visual art capabilities are most impressive to me... but where are the companies making killings on those? Where's the animation studio cranking out Pixar-quality movies as weekly episodes?
There are always a few comments that make it seem like LLMs have done nothing valuable despite massive levels of adoption.
> You can easily google "generative AI success stories" and read about them.
Notice you suggested asking Google and not ChatGPT.
Search engines are better at certain tasks than others.
If I said you should FLY to Spain, is it a cheap shot against sailing because I didn't mention it?
I work in the industry and I know that ad agencies are already moving onto AI gen for social ads.
For VFX and films the tech is not there yet. OpenAI believes they can build the next TikTok on AI (a proposition being tested now), and Google is just being Google: building amazing tools but with little understanding (so far) of how to deploy them on the market.
Still, Google is likely ahead in building tools that are actually being used (Nano Banana and Veo 3), while the Chinese open source labs are delivering impressive stuff that you can run locally or, increasingly, on a rented H100 in the cloud.
Check out Neural Viz. Unthinkable for one guy without AI. And we're still in the Geocities stage of this stuff.
A non-Pixar animation studio, with presumably a >10x lower budget than Pixar itself, cranking out weekly Pixar movies would be like >1000x acceleration. And indeed, that's not a thing yet for animated movies. The example I gave shows that it's already quite a big X, though.
What if the amount of slop generated counteracts the amount of productivity gained? For every line of code it writes, it also writes some BS paragraph in a business plan, a report, &c.
But second of all, companies do not need to 10x their software production output. Rather the goal, if the 10x productivity is achieved, is to _reduce_ human labor while retaining desired levels of output.
If ultimately you're replacing humans with AI agents, you're exchanging one value for another.
The promise of this future demand is what is driving the inflation of the stock market, with investors happy to ignore the deep losses accruing to every AI software player...for now. Pulling the plug on the capacity-building deals is effectively an admission that demand was overestimated, and the market will tank accordingly.
It says it all about current market mania that Nvidia (who sells most of the future chip capacity) is valued at $4 trillion, more than every publicly traded pharmaceutical company (who have decades of predictable future cash flows) combined.
It's either world-ending hard to believe conjecture, like the death of scarcity, or it's... ads. Ads. You know, the thing we're already doing?
So, it's not looking great. Maybe we will find monetization strategies, but they're certainly not present now, even by the largest players with the most to lose.
The market disagrees.
But if you are sure of this, please show your positions. Then we can see how deeply you believe it.
My guess is you’re short the most AI-exposed companies if you think they’re overvalued? Hedged maybe? You’ve found a clever way to invest in bankruptcy law firms that handle tech liquidations?
If you are skeptical but also not willing to place a bet, you shouldn’t say “AI is overvalued” because you don’t actually believe it. You should say, “I think it might be overvalued, but I’m not really sure? And I don’t have enough experience in markets or confidence to make a bet on it, so I will go with everyone else’s sentiment and make the ‘safe’ bet of being long the market. But like… something feels weird to me about how much money is being poured into this? But I can’t say for sure whether it is overvalued or not.”
Those are two wildly different things.
I certainly had unease about the dot-com market and should have shifted more investments to the conservative side. But I made the "‘safe’ bet of being long the market" even after things started going south.
FWIW, I do think AI is overvalued for the relatively near term. But I'm not sure what to do about that other than being fairly conservatively invested which makes sense for me at this point anyway.
The thing about bubbles is, you can often easily spot them, but can't so easily say when they'll pop.
You’ve just made a comment that “wow, things are going up!” That’s not spotting a bubble; that’s my non-technical uncle commenting at a dinner party, “wow, this bitcoin thing sure is crazy, huh?”
Talk is cheap. You learn what someone really believes by what they put their money in. If you really believe we’re in a bubble, truly believe it based on your deep understanding of the market, then you surely have invested that way.
If not, it’s just idle talk.
We can spot a bubble without being able to predict when it’ll pop.
I don't know how to invest to avoid this bubble. My money is where my mouth is. My investments are conservative and long-term. Most in equity index funds, some bonds, Vanguard mutual funds, a few hand-picked stocks.
No interest in shorting the market or trying to time the crash. I would say I 90% believe a correction of 25% or more will happen in the next 12 months. No idea where my money might be safe. Palantir? Northrop Grumman?
I'll leave shorting to the pros. The whole "double-your-money-or-infinite-losses" aspect of shorting is not a game I'm into.
it has been educational to see how quickly the financier class has moved when they saw an opportunity to abandon labor entirely, though. that's worth remembering when they talk about how this system is the best one for everyone.
I'm pretty sure they all see it as someone else's problem to solve.
In fact, the further we go into debt - the more we are implicitly betting our society on an AI hail mary.
You see it everywhere in things they can’t inflate. The price of houses and gold most obviously, but you see it in commodities that can’t expand production quickly as well. The solution is to buy assets of course.
It's no longer the early 20th century; there are other competitive and well-run jurisdictions for creditors to dump their money in if they lose faith in the US.
Where, pray tell are these competitive and well-run jurisdictions?
China has capital controls so that probably won't work. The EU might work if they ever get their sh*t together and centralise their bonds and markets, otherwise no.
Like, I too believe that the US is on an unsustainable path, but I just don't see where all that money is gonna go (specifically referring to the foreign investment in the US companies/markets here).
Plus, even worse-run higher yield jurisdictions become more appealing as the US fails.
Still not big enough, though. I feel like eurobonds or renminbi bonds are the only options, but neither works, for various reasons.
So all the entities that want to hold the debt (social security administration, mutual funds, pension funds etc) where should they go instead? Riskier assets is what you're saying right? Is that a great idea?
Probably the closest US bond equivalent would be debt from well-run Asian countries. I would avoid fixed-income dollar denominated assets.
In what way? As a sovereign currency issuer, the US can't ever be made to default. Or can it?
What definition of unsustainable fits?
What event could cause public debt growth to reach some kind of insurmountable maximum?
It's not like private debt, when you run out of money, that is the end of the road. There is no such limit for a sovereign currency issuer. The complete settlement of outstanding public debt could be executed tomorrow without collecting another penny in taxes. I wouldn't recommend it, but it could be done.
Please, Stephanie Kelton didn't discover some secret hack to get money for free - I would recommend learning traditional macro before going on the MMT train.
This goes in the bin:
>> recommend learning traditional macro
It obscures what is legally required to happen and it completely ignores entire aspects of the financial system through a series of absurd assumptions.
So rather than rely on any models, be they orthodox or heterodox, let's instead only refer to the actual operations of the actors involved. They are bound by the same laws.
Let me nail this one further home - there are different economic models, they're interchangeable based on beliefs and assumptions (not based on observable facts), but whatever happens all the actors have to comply with the law as it exists today. Let's just use that directly as our frame of reference.
With that given, when you say creditor confidence, at which step in the process of sovereign debt issuance does creditor confidence come in?
Is it when the select panel banks, the primary dealers, are legally obligated to make fair market bids for every issuance? (There aren't many other markets where the buyer is legally obligated to buy.)
Is it when the Fed conducts repurchase agreement operations with the primary dealers? (This is the bit where the Fed ensures the primary dealers have sufficient reserves to participate in those Treasury auctions. In what other market does the seller give you the money to bid in the auction?)
So far the process is just a legally mandated mechanism that everyone must serve their part. We could entirely elect not to do any of this.
The specific question that brings the whole house of cards down: where does creditor confidence come in? You can't answer from an economic school of thought; they all ignore the reality of how these transactions are executed.
Yes, it comes in at the 'fair market bids' part. When yields spike, the mechanism still “works” legally, but the government’s interest costs and financial stability risks explode in real terms.
The “law” doesn’t immunize you from inflation, balance sheet stress, or a collapsing yield curve. The Fed can’t conjure real resources; it can only reprice claims on them. Monetizing debt isn’t free. Ask the U.K. gilt market in 2022 how far “sovereign currency issuer” logic got them before the Bank of England had to step in. The government isn't immune from market forces.
No, you’re confused. The legally obligated “fair market bids” (a tongue-in-cheek term) aren’t conducted in dollars. I believe you’re thinking of the secondary market activity, which occurs at a later time, where private buyers purchase from the primary dealer banks and those transactions are in dollars.
At the primary dealer purchase stage, the Fed provides the funds to purchase via the PDCF. “The market” has precisely zero influence over this process.
You’re kind of randomly firing in different directions with the last paragraph; it’s too removed from reality to make much of a useful comment on.
The surface way it's wrong is that investors could have invested in Nvidia 10 years ago instead of gold. Because they didn't, their investments "eroded" even more.
The deeper way it's wrong is that people who say this almost always have the unstated premise that gold is "real" money, that every price should be measured against it. That premise is false.
When gold was allowed to float in terms of the US dollar, it went up to $200, then dropped down to $100. When it dropped to $100, the dollar didn't become worth twice as much. Or, to use a more recent example, there has not been a factor-of-4 inflation over the last 10 years. So gold is not a fixed measuring stick against which all other things are measured.
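A quick check of the implied magnitude: if gold's roughly 4x rise over a decade actually measured dollar inflation, the implied annual rate would be implausibly high (a back-of-envelope sketch, not a claim about actual CPI figures):

```python
# If a 4x price rise over 10 years were pure inflation,
# what annual rate would that imply?
factor = 4
years = 10
implied_annual = factor ** (1 / years) - 1
print(f"{implied_annual:.1%}")  # ~14.9% per year, far above observed inflation
```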
I see this sentiment a lot, but they are not equivalent. The US must reduce spending if it wants to protect the dollar. Tax increases may also help.
The relationship between tax rates, GDP, government revenue, the market value of new US debt, and the value of the dollar is complicated and depends on uncertain estimates and models of the economy. Increasing taxes can reduce GDP, which needs to grow to outpace the debt; there is an optimal tax rate, and more doesn't always help. Decreasing spending is a more straightforward relationship: no new debt, no new dollars.
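The "outgrow the debt" point can be made concrete with the textbook debt-to-GDP recurrence, where the ratio compounds at the interest rate but is deflated by growth. All rates below are hypothetical, purely for illustration:

```python
# Stylized debt dynamics: d' = d * (1 + r) / (1 + g) + primary_deficit,
# with everything expressed as a share of GDP.
def next_debt_ratio(d, r, g, primary_deficit):
    return d * (1 + r) / (1 + g) + primary_deficit

d = 1.0              # start at 100% of GDP (illustrative)
r, g = 0.04, 0.02    # hypothetical interest and growth rates, with r > g
deficit = 0.03       # hypothetical primary deficit of 3% of GDP
for _ in range(10):
    d = next_debt_ratio(d, r, g, deficit)
print(round(d, 2))   # ratio climbs over the decade when r exceeds g
```

With g above r the same recurrence shrinks the ratio, which is why growth and the primary balance matter more than the headline debt number.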
How it gets done is separate from that. Given that the only demographic that can comfortably weather a recession is also starting to collect social security, paid for by younger generations who would be meaningfully affected by a recession, "old people are scamming us" may actually be an effective message.
https://fiscaldata.treasury.gov/americas-finance-guide/feder...
It's sort of like looking at the IRS and saying "look how much it costs!"
People have been saying that SS will run out since the 70s, at least. I've heard it all.
https://www.ssa.gov/OACT/ProgData/assets.html
I don't live in a coastal state, but when I do consulting work, typically at charity rates alongside my standard full-time job, I have to pay 24% federal tax, 15.3% FICA, and 7.85% state tax. I am already taxed at 47.15% whenever I want to help anyone. That's before the required tax structures and consulting for doing all the invoicing legally. God himself only wanted 10%, so it seems a government playing God is awfully expensive.
You can't raise taxes any further before I'm done, and I don't think I'm alone; businesses and consultants are already crushed by taxes. I have to bill $40K to hopefully take home $20K, at which point is it even worth my time? But if I don't consult because it isn't worth it, are small businesses suddenly going to afford an agency or a dedicated software developer? Of course not, so their growth is handicapped, and I wonder what the effects of that are, tax-wise.
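The commenter's arithmetic roughly checks out under their stated rates, treating all three as flat marginal rates and ignoring details like the deductible employer half of self-employment tax:

```python
federal, fica, state = 0.24, 0.153, 0.0785
combined = federal + fica + state      # combined marginal rate on the next dollar
billed = 40_000
take_home = billed * (1 - combined)
print(f"{combined:.2%}")   # 47.15%
print(round(take_home))    # about $21K kept of $40K billed, near the $20K claimed
```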
If you don't want a tax-based solution, I do hope you are agitating for SS and medicare cuts.
I don't believe this, actually. I think that we will raise more revenue, yes, by squeezing more from the Fortune 500; but you will absolutely crush small business and consultancy work further. It's kind of like how an 80% tax rate on everyone making over $100K would do a fantastic job of raising revenue, but it's fundamentally stupid and would kill all future golden geese.
(On that note, I see this comment a lot about how we had huge tax rates, 91% in the 1950s; but this is misleading. The effective tax rate for those earners was only 41%, due to the sheer number of exemptions, according to modern analysis. We have never had an actual effective 91% tax rate, or anywhere close to it. Those rates were theater, never reality.)
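The statutory-vs-effective gap is easy to illustrate. The exemption figure below is hypothetical, chosen only to show how a 91% top rate can coexist with a ~41% effective rate:

```python
# Hypothetical: large exemptions shrink the base the 91% rate applies to.
income = 1_000_000
exemptions = 550_000               # hypothetical deductions/exclusions
taxable = income - exemptions
tax = taxable * 0.91               # simplified: flat top statutory rate
effective = tax / income           # tax actually paid as a share of income
print(f"{effective:.0%}")          # 41%
```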
On that note, you have no evidence that economists focus solely on tax rates on the curve independently of the economy at large. By definition, the curve is determined from external factors and economic measurements, none of which currently resemble 2012. If the economy crashed and there was 20% unemployment, do you still think they'd stand behind the same curve?
As always, the question with economists is "why aren't you rich?". You would get much better answers about macro-economic counterfactuals by going to a macro-trading firm like Bridgewater and asking the employees "what do you think would happen if..."
They stop paying taxes and work off the books instead, but you don't announce that publicly for obvious reasons.
The incentive to do this increases with tax pressure, and people's willingness to pay for tax-free work increases alongside it, because the work costs less.
There's also a growing asymmetry between what the government gains from a tax hike and how oppressive the hike becomes, and it turns unfavorable as tax rates go up.
They'd still have to pay for Medicare, but it knocks 12.4% off their estimated taxes for consulting.
If they're single, then the math is different. The 24% bracket for single people starts at just over $100k and runs to about $200k, so they may have to pay those taxes. It's always frustrating when people whine about taxes but give insufficient information to evaluate their complaint.
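For reference, the 15.3% self-employment rate mentioned upthread decomposes into the two pieces discussed here (standard FICA components, ignoring wage-base caps and the additional Medicare surtax):

```python
ss = 0.124        # Social Security portion of self-employment tax
medicare = 0.029  # Medicare portion
total = ss + medicare
print(f"{total:.1%}")        # 15.3% combined
print(f"{total - ss:.1%}")   # 2.9% remains if the SS portion is dropped
```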
You say you consult at charity rates and then point to taxation as the sole reason it isn't worth your time...
Wanted 10% but offered nothing real in return. At least you get some services from your taxes, like unlawful detention/extradition of suspicious people.
Everyone outside of the American empire knows that the jig is up. When Uncle Sam has his money printing press on full blast, the American people don't feel the full effect, but everyone in the global majority, where there are no dollar printing machines, gets to see too many dollars chasing the same goods, a.k.a. inflation.
The day when the American people elect a fiscally prudent government, work hard, pay their taxes, and get that deficit down to a manageable number is never going to happen. But that is not a problem; the situation is out of America's hands now.
It was the 2022 sanctions on Russia that made the BRICS alliance take note. Freezing their foreign reserves was not well received. Hence we now have China trading in its own currency, with its trading partners happy with that.
Soon we will have a situation where there is no 'exorbitant privilege' (reserve currency status, which can only ever end in massive deficits); instead the various BRICS currencies will be anchored to valuable commodities such as rare earth metals, gold, and everything else that is 'proof of work' and important to the future. So that means no more 'petro-dollar', and the store of value won't be hydrocarbons.
This sounds better than going back to a gold standard. As I see it, the problem with the gold standard is that you kind of know already who has all the gold and we don't want them to be the masters of the universe, because it will be the same bankers.
As for an AI 'Hail Mary', I do hope so. The money printed by Uncle Sam to end up in the Magnificent Seven means that it will be relatively easy to write this money off.
IMO, it was the barriers imposed on the trade of oil, mostly from Iran and Syria. Not really Russia, because they adapted quickly. The countries in the group's name all had alternatives at that time.
Either way, the BRICS trading system wasn't a serious thing until this year. And what really kicked it off was Trump.
Cost to service the debt is about 60% of what it was in the 1980's. All those bonds are long since paid off. This is a meme. Should we adjust to something more sustainable? Yes. Is the "burden" too high to bear? No, it's just not.
Creditors do not fund government spending; they hold safe interest-bearing assets created by it. The real risks to society are not financial but productive and ecological. What matters is whether we are using our labor, technology, and resources to meet real needs, not the size of a number on a balance sheet.
Richard Murphy is that you?
Snark aside, this is just straight up MMT which you're presenting as gospel, but absolutely isn't.
The very shortest way to debunk MMT is that every single government would be printing its way to prosperity if it were possible. Their ultimate desire is to be in power, and a happy, prosperous population will keep electing them. No government has ever followed MMT to its natural conclusion.
It is simplistic and wrong.
>>> Despite persistent material uncertainty around the global macroeconomic outlook, risky asset valuations have increased and credit spreads have compressed. Measures of risk premia across many risky asset classes have tightened further since the last FPC meeting in June 2025. On a number of measures, equity market valuations appear stretched, particularly for technology companies focused on Artificial Intelligence (AI). This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic.
Actually, the quoted 'sudden correction' is not referring specifically to AI, but to the market in general.
[1] https://www.bankofengland.co.uk/financial-policy-committee-r...