Tech Titans Amass Multimillion-Dollar War Chests to Fight AI Regulation
Key topics
Regulars are buzzing about tech titans amassing multimillion-dollar war chests to fight AI regulation, sparking a heated debate about the ethics of AI development and intellectual property law. Commenters riff on the notion that stricter IP laws might not protect struggling artists, but rather corporations and billionaires, with some arguing that AI-generated content doesn't even infringe on IP rights. The discussion gets lively as participants clash over whether AI training on existing works constitutes "theft," with some drawing analogies to pirating a movie versus stealing a loaf of bread. As commenters drill down into the specifics, the thread feels relevant now because it tackles the complex, timely question of how to regulate AI without stifling innovation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 34m after posting
- Peak period: 55 comments (4-6h window)
- Average per period: 14.5
Based on 160 loaded comments
Key moments
- Story posted: Nov 28, 2025 at 4:21 AM EST (about 1 month ago)
- First comment: Nov 28, 2025 at 4:56 AM EST (34m after posting)
- Peak activity: 55 comments in the 4-6h window (the hottest period of the conversation)
- Latest activity: Nov 29, 2025 at 12:36 PM EST (about 1 month ago)
Speaking of IP, I'd like to see some major copyright reform. Maybe bring down the duration to the original 14 years, and expand fair use. When copyright lasts so long, one of the key components for cultural evolution and iteration is severely hampered and slowed down. The rate at which culture evolves is going to continue accelerating, and we need our laws to catch up and adapt.
Sure, I can give you some examples:
- deceiving someone into thinking they're talking to a human should be a felony (prison time, no exceptions for corporations)
- ban government/law-enforcement use of AI for surveillance, predictive policing or automated sentencing
- no closed-source AI allowed in any public institution (schools, hospitals, courts...)
- no selling or renting paid AI products to anyone under 16 (free tools only)
This is gonna be about as enforceable as the CAN-SPAM Act (i.e. you'll get a few big cases, but it's nothing compared to the overall situation).
How do you prove it in court? Do we need to record all private conversations?
AI companies need to be held liable for the outputs of their models. Giving bad medical advice, buggy code etc should be something they can be sued for.
It's a pile of numbers. People need to take some responsibility for the extent to which they act on its outputs. Suing OpenAI for bugs in the code is like suing a palm reader for a wrong prediction. You knew what you were getting into when you initiated the relationship.
Stricter IP laws won't slow down closed-source models with armies of lawyers. They'll just kill open-source alternatives.
a) The model and the data
b) Why are we meeting in the middle?
For example, copyright makes it illegal to take an entire book and republish it with minor tweaks. But for something short like an HN comment this doesn’t apply; copyright always permits you to copy someone’s ideas, even when that requires using many of the same words.
I think most people think that AI training means copying vast troves of data onto ChatGPT hard drives for the model to actively reference.
I wish this argument would die. It's so comically false, and is just used to allow people to pave over their cognitive dissonance with the real misfortunes of a small minority.
I am a millennial and rode the wave of piracy as much as the next 2006 computer nerd. It was never, ever, about not being able to afford these things, and always about how much you could get for free. For every one person who genuinely couldn't afford a movie, there were at least 1000 who just wanted it free.
You have this backwards. There are way more poor people who can't afford things than there are people who can afford whatever they want
Genuinely cannot afford means you don't have the $15 to buy the movie after paying for necessities.
Cannot afford tends to mean "I bought a 72" OLED last week so no way I'm spending another $1400 on a movie collection".
If you have to use credit to "afford" such things, then you can't actually afford them
I happily pay for my media when there's a way to do so, without simultaneously supporting the emplacement of telescreens everywhere you look.
It's not the tech titans, it's Capitalism itself building the war chest to ensure its embodiment and transfer into its next host - machines.
We are just its temporary vehicles.
> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”
I see your “roko’s basilisk is real” and counter with “slenderman locked it in the backrooms and it got sucked up by goatse” in this creepypasta-is-real conversation
(disclaimer: I don't actually, I'm just memeing. I don't think we'll get AI overlords unless someone actively puts AI in charge and in control of both people (= people following directions from AI, which already happens, e.g. ChatGPT making suggestions), military hardware, and the entire chain of command in between.)
I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
What we should be doing is surfacing well defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending like people's opinions on this topic are relevant, it's just a matter of pumping enough money and flooding the zone.
Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic. Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a magical communist utopia, but nobody has ever pressed them for details. If they really believe this then they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.
There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
Artists are not primarily in the 1%, though, and patents aren't the only kind of IP that can be stolen.
I simply can't see what you mean by artists being hindered by IP. Artists try to create original work, and derivative work from other IP is usually reinterpreted enough to fall under fair use. I can't picture a situation where artists' own creations could be hindered by IP owned by others.
It's more cumbersome while being fairer, it hasn't stopped at all the practice. As a hobbyist I do it all the time while my professional friends clear their samples before earning money on their tracks.
And the quality of the debate remains very low as well. Most people barely understand the issues. And that includes many journalists that are still getting hung up on the whole "hallucinations can be funny" thing mostly. There are a lot of confused people spouting nonsense on this topic.
There are special interest groups with lobbying powers. Media companies with intellectual properties, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public that isn't that well informed and has sort of caught on to the notion that chat gpt is now definitely a thing that is sometimes mildly useful.
And there are the AI companies that are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending so they are getting relatively little push back from politicians. Political Washington and California run on obscene amounts of lobbying money. And the AI companies can provide a lot of that.
This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they’ll outcompete you anyway. So even with regulations the jobs aren't actually saved.
The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
But why do I have to? Why should your life be dictated by the market and corporations that are pushing these changes? Why do I have to be afraid that my livelihood is at risk because I don't want to adapt to the ever faster changing market? The goal of automation and AI should be to reduce or even eliminate the need for us to work, and not the further reduction of people to their economic value.
Yes, but again, the goal of automation should be to reduce the need for people to hold jobs to secure their livelihood, and to enable a dignified life. However, what we are seeing in the Western Hemisphere is that per capita productivity is rising while the middle class is eroding and capital is accumulated by a select few in obscene amounts. 'Upskilling' does not happen out of personal motivation, but rather to meet the demands of the market so that one does not live in poverty. The idea of 'upskilling' to serve the market is also absurd because, in times of ever-accelerating technological development, there is no guarantee that the skills you learn today will still be relevant tomorrow. Yesterday it was "learn to code", but now many people who followed this mantra find themselves in precarious situations because they cannot find a job or are forced into the gig economy. So what do you do with people who couldn't foresee the future, or who are simply too old for the market?
Because you enjoy eating? Whatever you think society should be, the fact is we live in one where you have to exchange labour for money. What ought to be and what is are unrelated to each other.
It's interesting how we feel this way about white collar jobs, but when a coal mine closes, nobody cares.
That depends at what cost and who the "we" is. There are plenty of variations on this idea i would consider a bad thing.
After all, this is already our present if you are born rich enough.
This will work even worse than "if everyone goes to college, good jobs will appear for everyone."
A Roosevelt economy can still work for most people when the "job creators" stop creating jobs. A Reagan economy cannot.
AI is being touted as extremely intelligent and, thus, capable of taking over almost any white collar job. What would I upskill to?
Consider some poorly paid servant work. It will be sold to you as "noble work" or something along those lines to entice/slander you.
AI is not a value neutral tech.
The marketing buzz is not the same thing as reality. Upskill to something AI is bad at. There is plenty to chose from in the present.
Regulating AI doesn't mean blocking it. The EU AI Act regulates AI without blocking it, just imposing restrictions on data usage and decision making (if it's making life or death decisions, you have to be able to reliably explain how and why it makes those decisions, and it needs to be deterministic - no UnitedHealthcare bullshit hiding behind an "algorithm" refusing healthcare)
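A toy sketch of what "explainable and deterministic" decision-making can look like in code, in the spirit of the transparency requirements described above. The rules, thresholds, and function names here are invented for illustration; they are not drawn from the EU AI Act itself:

```python
# Hypothetical example: a deterministic, auditable claim-review rule.
# Every outcome comes with human-readable reasons, and the same inputs
# always produce the same decision -- no opaque "algorithm" to hide behind.
def review_claim(claim_amount: float, policy_limit: float, documented: bool):
    """Return (decision, reasons) so every outcome can be explained."""
    reasons = []
    if not documented:
        reasons.append("claim lacks supporting documentation")
    if claim_amount > policy_limit:
        reasons.append(
            f"amount {claim_amount} exceeds policy limit {policy_limit}"
        )
    # Note: the fallback is human review, not automated denial.
    decision = "approve" if not reasons else "refer_to_human"
    return decision, reasons
```

The key design choice is that the system never denies anything on its own: anything it cannot approve with stated reasons is escalated to a person.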
So politicians are supposed to create "non bullshit" jobs out of thin air?
The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice sounding words?
This was the argument made by the capitalists after they had jailed and murdered most of the people in the Luddite movement before there was employment regulation.
They ignored what the Luddites were protesting for and suggested it was about people who just didn't understand how the new industrial economy worked. Don't they know that they can get jobs elsewhere and we, as a society, can be more productive for it?
The problem is that this was tone deaf. There were no labor regulations yet and the Luddites were smashing looms as that form of violence was the only leverage they had to ask for: elimination of child labor, social support that wasn't just government workhouses (ie: indentured servitude), and labor laws that protected workers. These people weren't asking everyone to make cloth by hand forever because they liked making cloth by hand and thought it should stay that way.
In modern times I think what many people are concerned about with companies getting hot for throwing labor out into the streets when it's not profitable for them anymore is that there are once more a lack of social supports in place to make sure those people's basic needs are met.
... and that's just one of the economic and social impacts of this technology.
You can re-skill - but you'll be competing for starter positions and starter salary with people who're just entering the workforce, much younger than you, with no dependents or health issues.
The technology may have benefited everyone in the long run, but in immediate terms, sudden shifts like these ruin lives of people, and destroy futures of their descendants.
How many farmers, suffering right now under the current tariff and immigration policies, are still professing support for Trump? The very people whom unions and higher minimum wages would help the most oppose them, because they support the very politicians who favor their employers getting rich over them.
If you take solace in “god will provide” as long as you give the church 10% of your income, you aren’t looking at things logically as long as the politicians can quote scripture.
However, I feel like that has been changing over the past decade or two. I have met countless young people who have been willing and able to pick up a new skill to make a living. By and large, that has either turned out to be going into tech or going into gig work
AI is threatening both of those. It is not obvious to me what comes after. Frankly, these days if someone younger comes to me asking for career advice, I honestly wouldn't know what to tell them.
Not that I believe they should allow the financial system to collapse without intervention but the interventions during recent crises have been done to save corporations that should have been extinguished instead of the common people who were affected by their consequences.
Which I believe is what's lacking in the whole discussion. Politicians shouldn't be trying to maintain the labour status quo if/when AI changes the landscape, because that would be a distortion of reality. But there needs to be some off-ramp: direct help for the people who will suffer from the change, without going through the bullshit of helping companies in the hope they eventually help people. As many on HN say, companies are not charities; if they can make an extra buck by fucking someone over, they will. The government is supposed to be helping people as a collective.
They already do this[1]. Why should there be an exception carved out for AI type jobs?
------------------------------
[1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.
I didn't say it would.
I said that politicians already artificially preserve jobs, and asked, quite legitimately I feel, why should they make an exception for AI?
I think that would be Singapore, as far as import tariffs go? Not much starvation there!
Do you mean taxes? Or excise duties or...?
There's a ton other points intersecting with regulation. Either directly related by AI, or made significantly more relevant by it.
Just from the top of my head:
- information processing: Is there private data AI should never be able to learn from? We restrict collection but it might be unclear whether model training counts as storage.
- related to the former, what kinds of dystopian practices should we ban? AI can probably build much deeper profiles by inferring information about users than our already worrying tech, even without storing sensitive data. If it can use conversations to deduce I'm at risk of a shorter lifespan, can the owners communicate that to insurance companies?
- healthcare/social damage: what are the long-term effects of people having an always-available yes-man, a substitute for social interaction, a cheating tool, etc.? Should some people be kept from access (minors, the mentally ill, whoever)? Should access, on the other hand, become a basic right if lacking it realistically makes a left-behind person unable to compete with those who have it?
- National security: is a country's economy becoming reliant on a service offered somewhere else? Worse, is that reliance draining skills from the population that might not be easily recovered when needed?
- energy/resources impact: Are we ready to have an enormous increase in usage of energy and/or certain goods? should we limit usage until we can meet the demand without struggle?
- consumer protections: Many companies just offer 'flat' usage, freely being able to change the model behind the scenes for a worse one when needed or even adapt user limits on their server load. Which of these are fair business practices?
- economy risks: what is the maximum risk we can take of the economy becoming dependent on services that aren't yet profitable? Are there steps that need to be taken to protect us from the potential bust if costs can't be kept up with?
- monopoly risks: we could end up with a single company being able to offer literally any intellectual work as a service. Whoever gets this tech might become the most powerful entity in the world. Should we address this impact through regulation before such an entity rises and becomes impossible to tame?
- enabling crime: can an army of AI hackers disrupt entire countries? how is this handled?
- impact on job creation: If AIs can practically DDOS job offer forms, how is this handled to keep access fair? Same for a million other places that are subjected to AI spam.
your point "It's on politicians to help people adapt to a new economic reality" brings up a few more:
- Should we tax companies that use AI? If they produce the same output while employing fewer people, tax revenue suffers and the untaxed money does not make it back to the people. How do we compensate?
- How should we handle entire professions being put out to pasture at once? Lost employment becomes a general problem once it affects a large enough number of people.
- How should intellectual work be rethought if it becomes extremely cheap relative to manual work? Does the way we train our population need to change?
You might have strong opinions on most of these issues, but there are clearly a lot of important debates that aren't being addressed.
Healthcare/Social damage: we already have peer reviewed studies on the potentially negative impacts of LLMs on mental health: https://pmc.ncbi.nlm.nih.gov/articles/PMC10867692/ . We also have numerous stories of people committing suicides after "falling in love" or being nudged to do so by an LLM.
Energy/Resources: do I even have to provide evidence that LLMs waste enormous amounts of electricity, even leading to scarcity in some local markets, and even coal power plants being turned back on?
Those are just the ironclad ones, you can make very good data privacy and national security arguments quite easily as well.
Yes, if you want to be taken seriously, then your claims about this should be based in evidence and contextualized amid the overall energy market.
Electricity is fungible. Before decrying that LLMs are using it to provide the world (probably) more utility on the net per watt than your own work output (which segues into an actual problem of labor as source of personal and social worth), contrast it with what we'd otherwise be doing with that same electricity - e.g. more sportsball streams in higher definitions, more cryptocurrency shams, more Juiceros and other borderline-fraudulent startups in physical space (cheap energy means cheaper manufacturing, which means materials become more like bits, and it's easier to pull the same crap in the real world, as companies now pull in virtual).
Point being, if you want to judge use of electricity on AI, judge it in context of the whole human condition - of everything else we'd otherwise be using it on.
Debates over public regulation should not have to start from evidence-backed conclusions; rather, the debates themselves are what push research and discussion in the first place.
Perhaps the conclusion on AI's impact on mental health is "hey, multiple high-quality studies show the impact is actually positive, let's allow it and in fact consider it as a potential treatment path". That's perfectly fine.
What is not fine is not considering the topic at all until it's too late for preventive action. We don't need to wait for a building burning before we consider whether we need fire extinguishers there.
My list is not made of complaints at all; it's just a few of the ways in which we suspect AI can be disruptive, and which are therefore probably worth examining.
I strongly feel that regulation needs to curb this, even if it leads to product managers going to jail for what their black box did.
This is being screamed from the rooftops by nearly the entire creative community of artists, photographers, writers, and other people who do creative work as a job, or even for fun.
The difference between the 99% of individual creatives and the 1% is that the 1% has entire portfolios of IP - IP that they might not have even created themselves - as well as an army of lawyers to protect that IP.
Doesn't quite align with UBI, unless he envisions the AI companies directly giving the UBI to people (when did that ever happen?)
This would be a 19th century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.
I mean, this is a solvable problem...
Subsidized implies they are getting free money for doing nothing. It's a business transaction. I wouldn't call being a federal worker being subsidized by the government either.
On contracts: Space X builds rockets for the government, fair enough, in a vacuum. Though I would ask why we're paying a private corporation to recycle NASA designs we wouldn't fund via NASA, rather than just having NASA or the Air Force do it.
On welfare: Corporations like Walmart benefit incredibly from the tattered remnants of America's social safety net, because if it didn't both exist and demand that people work to earn the benefits, nobody in their right mind would work for places like Walmart, because they wouldn't get paid enough to live. If nothing else, they would all die of starvation, which of course I don't want, but Walmart is also benefiting from that, albeit indirectly.
Misc: artificially low taxes, the ability for corporations to shelter revenue overseas to avoid taxes, temporary stays on property taxes to attract businesses to a given area, lax environmental regulations in some areas, and lots of other examples of all the little ways private industry gets money from the government they shouldn't have. Most of these not only don't "give something back" but detract from the society or the larger world.
And to emphasize, I'm not even arguing for or against here. I'm just saying Elon Musk doesn't want a small government, nor a large one. He wants a government he can puppet. As long as it benefits him and does not constrain him, he doesn't give a shit what else it does.
https://xcancel.com/elonmusk/status/1992599328897294496#m
Notice that the retweeted Will Tanner post also denigrates EBT. Musk does not give a damn about UBI. The unemployed will do slave labor, go to prison, or, if they revolt, they will be hanged. It is literally all out there by now.
Things will regulate themselves pretty quickly when the financial music stops.
I'd argue that "value creation" is already at a decent position considering generative AI and the usecase as "interactive search engine" alone.
Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.
Preventing smaller entities (or private persons even) from just doing their own thing and making their own models seems like the biggest difficulty long term to me (from the perspective of the "rent seeking" tech giant).
> Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.
Not at the actual price it's going to cost, though. An "interactive search" (LLM) is vastly more expensive than a "traditional search" (Google). People tolerate ads to pay Google for the service, but imagine how many ads ChatGPT would need, or how much it would have to cost, to compensate for e.g. a 10x difference. Last time I read about this a few months ago, OpenAI was losing money on ChatGPT's paid tier because the people paying for it were using it so much.
It's more likely that ChatGPT will just be spamming ads sprinkled in the responses (like you ask for a headphone comparison, and it gives you the sponsored brand one, from a sponsored vendor, with an affiliate link), and hope it's enough.
But we don't know that price point yet; current prices for all this are inflated because of the gold-rush situation, and there are lots of ways to trim marginal costs. At worst, high long-term un-optimizable costs are going to decrease use/adoption a bit, but I don't even think that will happen.
Just compare the situation with video hosting: that was not profitable at first, but hardware (and bandwidth) got predictably cheaper, technology more optimized, and monetization more effective, and now it's a good chunk of Google's total revenue.
You could have made the same arguments about video hosting in 2005 (too expensive, nobody pays for this, where's the revenue) but this would have led to extremely bad business decisions in hindsight.
AI search being 10x more expensive than Google query? That's just a silly, meaningless number - especially considering that a good AI response easily stops the user from making 5+ search queries to get the same results, and AI query itself can easily issue the equivalent of 10-20 search queries + spends compute analyzing their results.
All they need to do is start adding in sponsored results (and the ability to purchase keywords), and AI becomes insanely profitable.
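The per-task (rather than per-query) framing in the comment above can be put as a back-of-envelope calculation. Every number below is an invented assumption, purely to show the shape of the argument, not a real published cost figure:

```python
# Illustrative only: compare cost per *task*, not per *query*.
cost_per_search = 0.0002      # assumed cost of one traditional search ($)
cost_per_llm_answer = 0.002   # assume the naive "10x more expensive" figure
searches_replaced = 5         # one good LLM answer may replace ~5 searches

cost_per_task_traditional = cost_per_search * searches_replaced  # $0.001
effective_ratio = cost_per_llm_answer / cost_per_task_traditional
print(effective_ratio)  # 2.0 -- far from 10x once whole tasks are compared
```

Under these assumptions, the headline "10x per query" shrinks to 2x per completed task, which is the comment's point: the raw query-cost ratio is a misleading number.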
This is a winner-takes-all game, that stands a real chance of being the last winner-takes-all game humans will ever play. Given that, the only two choices are either throw everything you can at becoming the winner, or to sit out and hope no one wins.
The labs know that substantial losses will be had, they aren't investing in this to get a return, they are investing in it to be the winner. The losers will all be financially obliterated (and whoever sat out will be irrelevant).
I doubt they are sweating too hard, though, because it seems overwhelmingly likely that most people would pay >$75/mo for LLM inference (similar to cell phone costs), and at that rate, without going hard on training, the models are absolute money printers.
Using algorithms to provide personalized pricing would be an example, where like a landlord, retailer, or airline would use an ML service trained on your personal data and aggregated purchase history to decide how much to charge you for a short-term rental, Nintendo Switch, or a plane ticket. Basically, instant underwriting at scale for every single purchase. Just got a new job with a raise? Your next vacation will cost you 26% more for the same experience.
Or let's say you need a flight. You usually fly American so you check there first. You've had Gold there for the last few years, and you're close now. You could go look up other airline prices and maybe you do as a quick gut check. American costs more, but not a lot more. Exactly how much more is it worth to you to fly American and hit your status? What if you just got a raise? What if you just moved? Or what if you just got laid off?
What is the exact price delta that would get you to change a purchase habit? How does that change from purchase to purchase? How does it change depending on the other circumstances in your life?
As a concept: there are price differences that don't matter to people, and those vary, sometimes wildly. Meanwhile, to a large company, adding even 1 percentage point to their margin, on average, could mean tens to hundreds of millions of dollars of additional profit that year. It could mean managers hitting targets and getting bonuses paid out.
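A minimal sketch of what such personalized pricing could look like mechanically. All names, signals, and multipliers here are invented for illustration (the 26% figure echoes the hypothetical raise scenario above); this is not a description of any real retailer's or airline's system:

```python
# Hypothetical per-customer price adjustment based on inferred signals.
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    recent_raise: bool        # inferred from aggregated purchase data
    loyalty_tier: str         # e.g. airline status: "none", "silver", "gold"
    price_sensitivity: float  # 0.0 (insensitive) to 1.0 (very sensitive)

def personalized_price(base_price: float, profile: CustomerProfile) -> float:
    """Nudge the base price by what this customer will likely tolerate."""
    multiplier = 1.0
    if profile.recent_raise:
        multiplier += 0.26    # the hypothetical 26% bump from the text above
    if profile.loyalty_tier == "gold":
        multiplier += 0.05    # close to keeping status -> will pay a premium
    # Price-sensitive customers get less of the markup applied.
    multiplier = 1.0 + (multiplier - 1.0) * (1.0 - profile.price_sensitivity)
    return round(base_price * multiplier, 2)
```

The unsettling part is not the arithmetic, which is trivial, but that an ML service can estimate the inputs (raise, sensitivity, habits) from data the buyer never knowingly shared.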
If these people genuinely believed in the good of AI, they wouldn’t be blocking meaningful regulation of it.
https://green.spacedino.net/ai-will-never-create-utopia/
> they wouldn’t be blocking regulation of it
Which is it? Do they want regulation or not?
The answer is, in fact, they do want regulation. They want to define the terms of the regulations to gain a competitive advantage.
None of these companies, investors, or executives are making AI that’s actually going to improve humanity. They never, ever were, and people need to stop taking them at their word that they are.
If inference has significant profitability and you're the only game in town, you could do really well.
But without regulation, as a commodity, the margin on inference approaches zero.
None of this even speaks to recouping the R&D costs it takes to stay competitive. If they're not able to pull up the ladder, these frontier model companies could have a really bad time.
Of course that means it's unprofitable in practice/GAAP terms.
You'd have to have a pretty big margin on inference to make up for the model development costs alone.
A 30% margin on inference for a GPU that will last ~7 years will not cut it
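A toy amortization calculation shows the shape of that objection. Every figure below is an assumption for illustration, not a vendor or lab number:

```python
# Back-of-envelope GPU amortization under invented assumptions.
gpu_cost = 30_000                # hypothetical price of one datacenter GPU ($)
lifetime_years = 7               # the ~7-year lifetime from the comment above
inference_margin = 0.30          # the 30% margin from the comment above
annual_revenue_per_gpu = 25_000  # assumed inference revenue per GPU per year

lifetime_margin = annual_revenue_per_gpu * inference_margin * lifetime_years
hardware_recouped = lifetime_margin > gpu_cost  # 52,500 > 30,000
# ...but this ignores power, cooling, and, crucially, the multi-billion-dollar
# training runs that the same margin is supposed to pay back.
```

Under these assumptions, the hardware alone may pencil out, but the surplus is nowhere near the ongoing R&D spend required to stay at the frontier, which is the comment's point.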
I'm not sure about how the regulation of things would work, but prompt injections and whatever other attacks we haven't seen yet where agents can be hijacked and made to do things sounds pretty scary.
It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO
Who is "we", and what are the actual capabilities of the self-hosted models? Do they do the things that people want/are willing to pay money for? Can they integrate with my documents in O365/Google Drive or my calendar/email in hosted platforms? Can most users without a CS degree and a decade of Linux experience actually get them installed or interact with them? Are they integratable with the tools they use?
Statistically close to "everyone" cannot run great models locally. GPUs are expensive and niche, especially with large amounts of VRAM.
I'm not saying the options are favorable for everybody, I'm saying the options are there if it becomes locked in to 1-3 companies.
However, it is arguable that thought is related to consciousness. I'm aware non-linguistic thought exists and is vital to any definition of consciousness, but LLMs technically don't think in words, they think in tokens, so I could imagine this getting closer.
No one says the Dewey decimal system thinks.