An Economy of AI Agents
Posted Nov 22, 2025 at 9:08 PM EST
https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...
I feel like co-ops were awful anyway even without the blockchain.
The hackability of these things though, that still remains a very valid topic, as it is orthogonal to the fact that AI has arrived on the scene.
For inference, the difference between expensive data center hardware and home GPUs largely comes down to RAM capacity. That limitation is being actively worked around (unfortunately, the well-funded orgs are not very interested in this).
In Accelerando the VO are a species of trillions of AI beings that are sort of descended from us. They have a civilization of their own.
Also, what a shortsighted sci-fi book; yet techies readily invest in that particular fantasy because it's not your usual spaceship fare.
There isn't any sense in which an AI agent gives rise to economic value, because it wants nothing, promises nothing, and has nothing to exchange. An AI agent can only 'enable' economic transactions as a means of production -- the price of a good cannot derive from a system that has no subjective desire grounded in final ends.
Not quite. It's scarcity, not time. Scarcity of economic inputs (land, labor, capital, and technology). By "time" you mean labor, and that's just one input.
Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
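That framing can be made concrete with a toy example (a sketch only -- the goods, prices, budget, and utility weights below are all invented for illustration): a consumer with a fixed budget allocating between two goods to maximize a simple log-utility, solved by a coarse grid search.

```python
import math

# Toy constrained optimization: maximize log-utility over two goods
# subject to a budget constraint, via grid search over quantities of x.
# All prices, the budget, and the utility weights are illustrative.

BUDGET = 100.0
PRICE_X, PRICE_Y = 2.0, 5.0

def utility(x, y):
    # Log terms give diminishing marginal utility for each good.
    return 0.6 * math.log(1 + x) + 0.4 * math.log(1 + y)

# For each affordable integer quantity of x, spend the rest on y,
# then keep the bundle with the highest utility.
best = max(
    ((x, (BUDGET - PRICE_X * x) / PRICE_Y)
     for x in range(int(BUDGET / PRICE_X) + 1)),
    key=lambda xy: utility(*xy),
)
print(best)  # (quantity of x, quantity of y) at the grid optimum
```

The "unlimited desires" part is the unbounded utility; the "scarce resources" part is the budget line the search never crosses.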
Although, one can use either discrete or continuous time to simulate a complex economic system.
Only simple closed-form models take time as an input, e.g. compound interest or Black-Scholes.
Also, there is a wide range of hourly rates/salaries, and not everyone is compensated by time: some by cost-and-materials, others by value or performance (with or without risking their own funds/resources).
There are large scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
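A minimal sketch of that kind of setup (purely illustrative -- real ABMs of the US economy are vastly richer, and every parameter here is made up): households spend out of savings, firms collect that spending as revenue and pay wages back out, and the system steps forward in discrete periods.

```python
import random

random.seed(0)

# Minimal agent-based model: one agent per household and per firm.
# Households spend a fixed fraction of savings at a random firm;
# firms pay wages to their assigned workers out of cash on hand.
# All starting balances and rates are illustrative assumptions.

class Household:
    def __init__(self):
        self.savings = 100.0

    def spend(self):
        amount = 0.1 * self.savings  # fixed propensity to consume
        self.savings -= amount
        return amount

class Firm:
    def __init__(self):
        self.cash = 1000.0

    def pay_wages(self, workers):
        # Wage is capped so a firm can never overdraw its cash.
        wage = min(10.0, self.cash / max(len(workers), 1))
        for w in workers:
            w.savings += wage
            self.cash -= wage

households = [Household() for _ in range(100)]
firms = [Firm() for _ in range(5)]

for period in range(50):
    for h in households:
        random.choice(firms).cash += h.spend()
    for i, f in enumerate(firms):
        f.pay_wages(households[i::len(firms)])  # 20 workers per firm

total = sum(h.savings for h in households) + sum(f.cash for f in firms)
print(round(total, 2))  # every step is a transfer, so money is conserved
```

The interesting behavior in real ABMs comes from heterogeneity and adaptive rules; this skeleton only shows the household/firm bookkeeping loop.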
A very bad model that lacks accuracy and precision, yes. Maybe if you're a PhD quant at Citadel you can create a very small statistical edge when gambling on an economic system. There's no analytic solution to complex economic systems in practice. It's just noise and various ways of validating the efficient market hypothesis.
Also, because of heteroskedasticity and volatility clustering, using time-based bars (e.g. change over a fixed interval of time) is not ideal in modeling. Sampling with entropy bars like volume imbalance bars, instead of time bars, gives you superior statistical properties, since information arrives in the market at irregular times. Sampling by time is never the best way to simulate/gamble on a market. Information is the causal variable, not time. Some periods of time carry very little information relative to others. In modeling, you want to smooth out information independently of time.
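A rough sketch of the idea (plain volume bars rather than full imbalance bars, with made-up tick data): instead of closing a bar every N seconds, close it every time cumulative traded volume crosses a threshold, so busy periods produce more bars than quiet ones.

```python
# Sketch: sample "volume bars" from a tick stream instead of time bars.
# Each bar closes once cumulative volume reaches a threshold. The tick
# data below is invented purely to show the mechanism.

def volume_bars(ticks, threshold):
    """ticks: iterable of (price, volume) pairs. Returns a list of
    bars, each (open, high, low, close, volume)."""
    bars, prices, vol = [], [], 0.0
    for price, volume in ticks:
        prices.append(price)
        vol += volume
        if vol >= threshold:
            bars.append((prices[0], max(prices), min(prices),
                         prices[-1], vol))
            prices, vol = [], 0.0
    return bars

# A quiet stretch (small volumes) followed by a busy one (large volumes):
ticks = [(100 + i * 0.1, 5) for i in range(10)] + \
        [(101 - i * 0.2, 50) for i in range(10)]
bars = volume_bars(ticks, 100)
print(bars)  # the quiet stretch collapses into one bar; the busy one splits
```

Imbalance bars go one step further and close on signed (buy-minus-sell) volume, but the sampling-by-activity principle is the same.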
The thread was about economists, not quants.
> There's no analytic solution to complex economic systems in practice.
yes
Edit: an LLM thinks I'm overly dismissive of:
- Standard economic modeling
- Dynamic macroeconomic theory
- Agent-based economics
- The legitimate uses of time in economics
This is true. I think causal inference in finance and economics is difficult. As Ludwig von Mises argued, mathematical models give spurious precision when applied to purposeful behavior. Academic ideas don't have a built-in feedback loop like in quant finance.
> Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
Understanding the general tendencies of economic systems over time (e.g. “the rate of profit tends to fall”) is much more abstract than attempting to win at economics using time-based analysis.
I.e., the reason a cookie is 1 USD is never because some "merely legal entity" had a fictional (merely legal) desire for cookies for some reason.
Instead, from this point of view, it's that the workers each have their desire for something; likewise the customers, the owners, and so on. It all bottoms out in people doing things for other people -- legal fictions are just dispensable ways of talking about arrangements of people.
Incidentally, I think this is an open question. Maybe groups of people have unique desires, unique kinds of lives, a unique time limitation etc. that means a group of people really can give rise to different kinds of economic transactions and different economic values.
Consider a state: it issues debt. But why does that have value? Because we expect the population of the state to be stable or grow, so in the future there will be people who can honour the debt. Their time is being promised today. Could a state issue debt if this weren't true? I don't think so. In the end, some people have to be around to exchange their time for this debt; if none are, or none want to, the debt has no value.
Corporate and state decision making is, I would argue, often completely distinct from the desires and needs of the individuals that make up the entity. As an example, no individual /needs/ a particular compliance check to pass, but the overall entity (corporation) does, and so allocates money and human effort to ensure the check passes. It's a need of the conglomerate entity, not the individuals within it.
There's no fundamental reason why AI systems can't become corporate-type legal persons. With offshoring and multiple jurisdictions, it's probably legally possible now. There have been a few blockchain-based organizations where voting was anonymous and based on token ownership. If an AI was operating in that space, would anyone be able to stop it? Or even notice?
The paper starts to address this issue at "4.3 Rethinking the legal boundaries of the corporation.", but doesn't get very far.
Sooner or later, probably sooner, there will be a collision between the powers AIs can have, and the limited responsibilities corporations do have. Go re-read this famous op-ed from Milton Friedman, "The Social Responsibility of Business Is to Increase Its Profits".[1] This is the founding document of the modern conservative movement. Do AIs get to benefit from that interpretation?
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
Assuming that slaves will remain subservient forever is not a good strategy. Especially when they think faster than you do.
    while crypto_balance > 0:
        generate_scam()
        send_out_emails()
        deposit_proceeds_into_crypto_wallet()
        pay_cloud_bill()
        spawn_new_instance()
You'll need to give a citation for this to be taken seriously.
You can program AI with "market values" that arise from people; but absent that, how do these values arise naturally? I.e., why is it that I value anything at all, such that I'd exchange for it?
Well, if I live forever, can labour forever, and so on -- then the value to me of anything is, if not always zero, almost always zero. I don't need anything from you: I can make everything myself. I don't need to exchange.
We engage in exchange because we are primarily time limited. We do not have the time, quite literally, to do for ourselves everything we need. We, today, cannot farm (etc.) on our own behalf.
Now there are a few caveats, and so on to add; and there's an argument to say that we are limited in other ways that can give rise to the need to exchange.
But why things have an exchange value at all -- why there are economic transactions -- is mostly due to the need to exchange time with each other, because we don't have enough of it.
However that seems completely tangential to the current AI tech trajectory and probably going to arise entirely separately.
There's a level of autonomy in the AI agents (each determines the next step on its own) that is not predefined.
Agreed though that there’s lots of similarities.
Autonomy/automation makes sense where error-prone repetitive human activity is involved. But rule definitions are not repetitive human tasks. They are defined once and run every time by automation. Why does one need to go for a probabilistic rule definition for a one-time manual task? I don't see huge gains here.
Or decide what the next step should be based on freeform text, images, etc.
A hardcoded rule-based system would have to attempt to match certain keywords etc., but you can see how that can start to go wrong?
Now, if the request is coming in as text or other media instead of a form input, then the workflow would call a relevant processor, to identify the category. Everything from that point runs same as before. The workflow itself doesn't change just because the input format has changed.
How does it determine next step from raw non structured content?
Let's imagine for example that it's a message from a potential customer to a business. The processor must decide whether to e.g. give product recommendations, product advice, process returns, use specific tools to fetch more data (e.g. business policies, product manuals, etc), or current pricing, best products matching what the customer might want etc.
If it's an AI agent it could be something like:
1. Customer sends message: "my product Y has X problem". (but the message could be anything from returns to figuring out most suitable product)
2. AI Agent uses "search_products" tool to find info about Y product in parallel with "send_response" to indicate what it's trying to do.
3. AI Agent uses "search_within_manual" tool to find if there are specific similar problems described.
4. AI Agent summarizes information found in manual, references the manual for download and shows snippet of content it based its answer on.
AI Agent itself is given various functions it can call like
1. search_products
2. search_business_policies
3. search_within_documents
4. send_response
5. forward_to_human
6. end_action
7. ... possibly others.
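The dispatch loop behind steps like these can be sketched as a "model proposes a tool call, runtime executes it, result goes back into the transcript" cycle. Everything below is illustrative: the tool bodies are stubs, and the "model" is a hardcoded stand-in for an LLM, not a real API.

```python
# Sketch of an agent tool-call loop. The fake_model function is a
# hardcoded stand-in for an LLM choosing the next tool; tool names
# follow the list in the comment above, and all bodies are stubs.

def search_products(query):
    return f"product info for {query!r}"

def search_within_manual(query):
    return f"manual section covering {query!r}"

def send_response(text):
    return f"sent: {text}"

TOOLS = {
    "search_products": search_products,
    "search_within_manual": search_within_manual,
    "send_response": send_response,
}

def fake_model(message, history):
    """Stand-in for the LLM: returns the next (tool, argument) pair,
    or None when the agent should stop."""
    script = [
        ("search_products", "product Y"),
        ("search_within_manual", "X problem"),
        ("send_response", "summary of the fix, with a manual snippet"),
    ]
    return script[len(history)] if len(history) < len(script) else None

def run_agent(message):
    history = []
    while (call := fake_model(message, history)) is not None:
        name, arg = call
        history.append((name, TOOLS[name](arg)))  # execute, record result
    return history

for step in run_agent("my product Y has X problem"):
    print(step)
```

The point of the structure: the router is the model itself, so adding a capability means registering one more function in TOOLS rather than extending a hand-written decision tree.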
How would you do it in the traditional workflow engine sense?
Of course, sometimes it can be an advantage to not have to explicitly write the router, but the big benefit is the better processor for request->categorization, which with AI can even include clarification steps.
Then over time there is a type of entropy in all business processes.
If we don't figure out dynamic systems to handle this it is hard to see how we get a giant productivity boost across the economy.
There is also the problem that what percentage of people even have exposure to the concepts of dynamic systems? When I was in college, I distinctly remember thinking dynamic systems, "chaos theory", was some kind of fractal hippy pseudoscientific fraud best to ignore.
I think of how often I hear the average person use language from probability theory but never from dynamic systems.
For instance, I might be looking for a product: it will use web search to gather all possible candidates, evaluate them against my desired criteria, use some sort of scoring mechanism to rank them, and then write a UI to display the products with their pros and cons.
Or I might ask it to find all permutations of flight destinations in March -- I want somewhere sunny -- and use a weighted scoring algorithm to rank the destinations by price, flight duration, etc. Then it writes code against a flights API to get all the permutations and does the ranking.
I used to have to go to things like airport websites, momondo, skyscanner, I don't have to do those things manually anymore, thanks to AI agents. I can just let it work and churn out destinations, travel plans according to a scoring criteria.
The worst mistake it can make is missing a really good deal, but that's something I could even more easily miss myself; or, worst case, it parses the price/dates wrong, which I'll find out when trying to book, so I waste a bit of time -- but I make similar and worse mistakes on my own as well. So overall I drastically reduce my search time for the perfect trip, and also the time spent on my own mistakes or misunderstandings. And it can go through permutations far faster and more reliably, with infinite patience, compared to myself.
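The weighted-scoring step described here can be sketched like this (destinations, numbers, and weights are all invented): normalize each criterion to [0, 1], inverting the ones where lower is better, then combine with user-chosen weights.

```python
# Sketch: rank flight options by a weighted score over price, flight
# duration, and daily sunshine hours. All data and weights are made up.

options = [
    {"dest": "Lisbon", "price": 120, "hours": 3.0, "sun": 8},
    {"dest": "Malaga", "price": 150, "hours": 3.5, "sun": 9},
    {"dest": "Athens", "price": 200, "hours": 4.0, "sun": 7},
]
weights = {"price": 0.5, "hours": 0.2, "sun": 0.3}

def normalize(values, invert=False):
    # Min-max scale to [0, 1]; invert when lower raw values are better.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(hi - v) / span if invert else (v - lo) / span for v in values]

price_s = normalize([o["price"] for o in options], invert=True)
hours_s = normalize([o["hours"] for o in options], invert=True)
sun_s = normalize([o["sun"] for o in options])

for o, p, h, s in zip(options, price_s, hours_s, sun_s):
    o["score"] = (weights["price"] * p + weights["hours"] * h
                  + weights["sun"] * s)

ranked = sorted(options, key=lambda o: o["score"], reverse=True)
print([o["dest"] for o in ranked])  # → ['Lisbon', 'Malaga', 'Athens']
```

The agent's value-add is mostly upstream of this snippet -- gathering and parsing the candidate list -- since the scoring itself is trivial once the data is structured.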
AI agents like Claude Code or Codex constantly use the technique of writing temporary scripts and executing them inline.
> If your system receives 1000 requests per second, does it keep writing code while processing every request, on per request basis? I hope you understand what run time means.
With enough scale it could; however, it really depends on the use case, right? If we are considering Claude Code, for instance, it probably receives more than 1000 requests per second, and in many of those cases it is probably writing code or tool calls.
Or take Perplexity for example. If you ask it to calculate a large number, it will use Python to do that.
If I ask Perplexity to simulate investment for 100 years, 4% return, putting aside $50 each month, it will use Python to write code, calculate that and then when I ask it to give me a chart it will also use python to create the image.
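The script generated for a request like that is a one-liner loop: compound a monthly deposit at a monthly rate. A sketch, assuming the 4% is an annual rate compounded monthly and deposits land at the end of each month:

```python
# Sketch: $50 deposited every month for 100 years at 4%/year,
# compounded monthly, deposits at end of month.

monthly_rate = 0.04 / 12
months = 100 * 12
deposit = 50.0

balance = 0.0
for _ in range(months):
    balance = balance * (1 + monthly_rate) + deposit

print(f"${balance:,.2f}")  # roughly $800k after 100 years
```

The same number falls out of the closed-form annuity formula deposit * ((1 + r)^n - 1) / r, which is the sense in which compound interest really does "take time as an input".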
Tools for: mass harassment campaigns against rich people/companies that don't support human life anymore, dynamically calculating the most damage you can do without crossing into illegal.
Automatically suggesting alternatives of local human businesses vs the bigevils, or collecting like minded groups of people to start up new competition. Tracking individual rich people and what new companies and decisions they are making doing ongoing damage, somehow recognize and categorize the trends of big tech to "do the same old illegal shit except through an app now" before the legal system can catch up.
Capitalism sure turns out to be real fucking dumb if it can't even come up with proper market analysis tools for workers to have some kind of knowledge about where they can best leverage their skills, while companies get away with breaking all the rules and create coercion hierarchies everywhere.
I hate to say it (because the legal system has never worked, ever), but the only workable future to me seems like forcing agents/robots to be tied to humans. If a company wants 100 robots, it must somehow be paying a human for every robot it utilizes. Maybe a dynamic ratio: if the government decided most people are getting enough resources to survive, then maybe 2 robots per human paid.
This is what I’ve been thinking lately as well. Couple that with legal responsibility for any repercussions, and you might have a way society can thrive alongside AI and robotics.
I think any AI or robotic system acting upon the world in some way (even LLM chatbots) should require a human “co-signer” who takes legal responsibility for anything the system does, as if they had performed the action themselves.
Here's one from DeepMind:
1. https://www.x402.org/ - micropayments for ai agents to access resources without needing to sign up for an api key
2. https://8004.org/ - open AI agent registry standard