AI Surveillance Should Be Banned While There Is Still Time
Posted 4 months ago · Active 4 months ago
gabrielweinberg.com · Tech story · High profile
Heated · Negative
Debate
85/100
Key topics
AI Surveillance
Privacy
Regulation
The article argues that AI surveillance should be banned while there is still time, sparking a heated discussion on the feasibility and implications of such a ban.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 1h after posting
- Peak period: 133 comments in 0-12h
- Average per period: 22.9 comments
- Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Sep 6, 2025 at 9:52 AM EDT (4 months ago)
2. First comment: Sep 6, 2025 at 10:56 AM EDT (1h after posting)
3. Peak activity: 133 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Sep 12, 2025 at 10:34 PM EDT (4 months ago)
ID: 45149281 · Type: story · Last synced: 11/22/2025, 11:47:55 PM
It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?
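For concreteness, here is a minimal sketch (not from the article) of what "traditional ML keyword analysis" can mean in practice; the messages and labels below are made up, but a thresholded classifier like this is all it takes to turn keyword statistics into automated monitoring, which is why the definitional line being asked about is blurry.

    # Toy keyword/text classifier: "traditional ML", no LLM involved.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = [
        "let's meet at the usual place tonight",
        "please reset my password",
        "wire the funds to the offshore account",
        "what time is the standup tomorrow",
    ]
    flagged = [0, 0, 1, 0]  # hypothetical labels: 1 = flag for review

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(messages, flagged)

    # A probability threshold on new messages is all it takes to turn
    # keyword statistics into automated monitoring at scale.
    print(model.predict_proba(["please wire funds tonight"])[0][1])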
The author, and this community in general, are much more prepared to make full recommendations about what AI surveillance policy should be. We should be very careful to try to enact good regulation without killing innovation in the process.
> That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible.
I don't know if it's a great idea, or I just wonder what makes it feasible, but there is a kind of implied recommendation here.
By "killing innovation" do you just mean: "we need to allow these companies to make money in possibly a predatory way, so they have the money to do... something else"? Or what is the precise concern here? What facet needs to be innovated upon?
I believe that LLMs will have the capability to fill in for human workers in many important ways. It's like getting an economic infusion without the population growth usually required.
But we aren’t there yet, so innovation looks like continuing to build out how to efficiently use AI tools. Not necessarily better models, but longer memory, more critical reasoning, etc.
At the same time, there are winner-take-all dynamics and a potential for weaponization that are not good for society in the long term.
So we need to both encourage innovation while making sure we don’t kill each other in the process.
Like nuclear fission, AI should never have been developed.
And people should own all data about themselves, all rights reserved.
It's ironic that copyright is the law that protects against this kind of abuse. And this is of course why big "AI" companies are trying to weaken it by arguing that model training is not derivative work.
Or by claiming that writing a prompt in 2 minutes is enough creative work to own copyright of the output despite the model being based on 10^12 hours of human work, give or take a few orders of magnitude.
The groups that didn't train on public domain content would have an advantage if it's implemented as a rule moving forward at least for some time.
New models following this could create a gap.
I'm sure competition, as we've seen from open-source models, will be able to close that gap.
Just because everyone is doing it doesn't mean it's right or legal. Only that a lot of very rich companies deserve to get punished and pay the creators.
Not arguing, just debating the legality of what the models have done.
Anthropic just paid a settlement. But they also bought a ton of books and scanned them, which might be more than other model makers have done. Maybe it's a sign of things to come.
Copyright was designed at a time when reproducing a work in a way that was not verbatim, and not obviously modified to avoid detection (like synonym replacement), would require a lot of human work and be too costly to do. Now it's automated. It fundamentally changes everything.
Human work is what's supposed to be rewarded, in proportion to its amount and quality.
To their credit, their privacy policy says they have agreements on how the upstream services can use that info[1]:
> As noted above, we call model providers on your behalf so your personal information (for example, IP address) is not exposed to them. In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance).
But even assuming the upstream services actually respect the agreement, their own privacy policy implies that your prompts and the responses could still be leaked because they could technically be stored for up to 30 days, or for an unspecified amount of time in the case of the exceptions mentioned.
I mean, it's reasonable and a good start to move in the direction of better privacy, way better than nothing. Just have to keep those details in mind.
[1]: https://duckduckgo.com/duckai/privacy-terms
...hm. maybe I am worried about the Basilisk, then.
Feel free to call me an accelerationist but I hope AI makes social media so awful that no one wants to use it anymore. My hope is that AI is the cleansing fire that burns down social media so that we can rebuild on fertile soil.
The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe. It's dependent on regulations and definitions of greater good that you can't control.
Not if you are selling hardware. If I were Apple, Dell, or Lenovo, I would be pushing for locally running models, supporting Hugging Face, while developing at full speed systems that can do inference locally.
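As an aside, the local-inference piece of this is already mundane; a minimal sketch with Hugging Face transformers (the model name is just an example, any small local checkpoint works):

    # Runs entirely on-device after the one-time model download;
    # nothing is sent to a provider to log or train on.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    out = generator("Privacy-preserving assistants should", max_new_tokens=30)
    print(out[0]["generated_text"])

The open question in the thread is the business model around the weights, not the engineering.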
Getting customers to pay for the weights would be entirely dependent on copyright law, which OpenAI already has a complicated relationship with. Quite the needle to thread: it's okay for us to ingest and regurgitate data with total disregard for how it's licensed, but under no circumstances can anyone share these weights.
Provide the weights as an add-on for customers who pay for hardware to run them. The customers will be paying for weights + hardware. I think it is the same model as buying the hardware and getting macOS for free. Apple spends $35B a year on R&D. Training GPT-5 cost ~$500M. It is a nothing burger for Apple to create a model that runs locally on their hardware.
That's assuming weights are even covered by copyright law, and I have a feeling they are not in the US, since they aren't really a "work of authorship"
It sounds a lot like the browser wars, where the winning strategy was to aggressively push one's platform (for free, which was rather uncommon then), aiming for market dominance and later benefits.
Isn't his company, OpenAI, the one that said they monitor all communications and will report anyone they think is a threat to the government?
https://openai.com/index/helping-people-when-they-need-it-mo...
> If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.
I get that they are trying to do something positive overall. At the same time, I don't want corp-owned AI that's monitoring everything I ask it.
IIRC it is illegal for the phone company to monitor and censor communications. The government can ask a judge for permission for police to monitor a line, but otherwise it's illegal. But now with AI transcription it won't be long until a company can monitor every call, transcribe it, and feed it to an LLM to judge and decide which lists you should be on.
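To the commenter's point about how little stands in the way technically, a rough sketch of that pipeline (illustrative only; "call.wav" and the label set are hypothetical, and this uses off-the-shelf open models rather than anything a carrier actually runs):

    # 1. Transcribe a recorded call locally with Whisper.
    import whisper
    from transformers import pipeline

    transcript = whisper.load_model("base").transcribe("call.wav")["text"]

    # 2. Have a model decide which "list" the speaker belongs on.
    classifier = pipeline("zero-shot-classification")
    result = classifier(transcript,
                        candidate_labels=["benign", "activist", "suspicious"])
    print(result["labels"][0], result["scores"][0])

Scaling that to every call is an engineering and legal question, not a research one, which is the worry being expressed.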
I understand that people assume LLMs are private, but there isn't any guarantee that is the case, especially when law enforcement comes knocking.
You do realize he became a kingmaker by positioning himself in YC, which is the owner/operator of Hacker News. What makes you think you are not being traced here, and that your messages are not being used to train his LLM?
As for him being a conman: if you haven't realized that most of the SV elite this place worships are conmen (see the Trump dinner this week) with clear ties to the intelligence agencies (see the newly appointed generals who are C-suite in several Mag 7 corps), who will placate a fascist in order to push their agendas, then you simply aren't paying attention.
His scam coin is the most insipid item on his rap sheet at this point, and I say this as a person who has seen all kinds of grifting in that space.
They could take a lesson from churches. If LLM providers and their employees were willing to commit to privacy and were willing to sacrifice their wealth and liberty for the sake of their clients, society would yield.
I remember seeing a video of a certain Richard Masten, a CrimeStoppers coordinator, destroying the information he had on a confidential source right in the courtroom under the threat of a contempt charge and getting away with a slap on the wrist.
In decent societies standing up for principles does work.
The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that creates a "due process" in order to do the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires who he really works for. Rinse and repeat.
Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.
And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.
That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.
If the government is failing, explore writing civil software, providing people protected forms of communication or modern spaces where they can safely organize and learn. Eventually the current generations die, and a new, strongly connected culture has another chance to try and fix things.
This is why so many are balkanizing the internet with age gating: they see the threat of the next few digitally augmented generations.
EDIT: I want to add that "training on chat logs" isn't even the issue. In fact it understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful to know what'll work on you or not.
EDIT 2: And your chatlogs with other people I guess, if they happened on a platform that stored them and later got desperate enough to sell them. This is just getting worse and worse as I think about it.
They can just prompt "given all your chats with this person, how can we manipulate him to do x"
Not really any expertise needed at all, let the AI do all the lifting.
https://cybersecuritynews.com/fraudgpt-new-black-hat-ai-tool...
If you make an app for interacting with an LLM, and in the app the user has access to all sorts of stolen databases and other conveniences for black hats, then you've got what was described above. Or am I missing something?
Ads are there to change your behavior to make you more likely to buy products, e.g., put downward pressure on your self-esteem to make you feel "less than" unless you live a lifestyle that happens to involve buying X product.
They are not made in your best interest; they are adversarial psycho-tech that has a side effect of building an economic and political profile on you for whoever needs to know what messaging might resonate with you.
https://brandingstrategyinsider.com/achieving-marketing-obje...
"Your ultimate marketing goal is behavior change — for the simple reason that nothing matters unless it results in a shift in consumer actions"
Brainwashing is the systematic effort to get someone to adopt a particular loyalty, instruction, or doctrine.
You have described one type of ad. There are many many types of ads.
If you were actually knowledgeable about this, you'd know that basic fact.
Surplus value isn't really that useful of a concept when it comes to understanding the world.
This is so far from the reality of so many things in life, it's hard to believe you've thought this through.
Maybe it works in the academic, theoretical sense, but it falls down in the real world.
No "artisanal" product, from food to cosmetics to clothing and furniture is ever worth it unless value-for-money (and money in general) is of no significance to you. But people buy them.
I really can't go through every product class, but take furniture as a painfully obvious example. The amount of money you'd have to spend to get furniture of a similar quality to IKEA is mind-boggling. Trust me, I've done it. Yet I know of people in Sweden who put considerable effort into acquiring second-hand furniture because IKEA is somehow beneath them.
Again, there are situations where economies of scale don't exist and situations where a business may not be interested in selling a cheaper or superior product. But they are rarer than we'd like to admit.
This solves the problem of seeing ads that are not best for the user.
I'd rather see totally irrelevant ads because they're easy to ignore or dismiss. Targeted ads distract your thought processes explicitly because they know what will distract you; make you want something where there was previously no wanting. Targeted advertising is productised ADHD; it is anti-productive.
Like the start of Madness' One Step Beyond: "Hey you! Don't watch that, watch this!"
> Each Shiftkey nurse is offered a different pay-scale for each shift. Apps use commercially available financial data – purchased on the cheap from the chaotic, unregulated data broker sector – to predict how desperate each nurse is. The less money you have in your bank accounts and the more you owe on your credit cards, the lower the wage the app will offer you.
https://pluralistic.net/2024/12/18/loose-flapping-ends/#luig...
The incentives are all wrong.
I'm fundamentally a capitalist because I don't know another system that will work better. But, there really is just too much concentrated wealth in these orgs.
Our legal and cultural constructs are not designed in a way that such disparity can be put in check. The populace responds by wanting ever more powerful leaders to "make things right" and you get someone like Trump at best and it goes downhill from there.
Make the laws, it will help, a little, maybe.
But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.
Instead of the current maze of case specific laws.
---
> But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.
You know, you're just unwilling to think it because you've been conditioned not to. It's what always happens when inequality (of income, power, etc.) gets too high.
Being a capitalist is decided by access to capital, not really by a belief system.
> But, there really is just too much concentrated wealth in these orgs.
Please make up your mind: should capital self-accumulate and grant power, or not?
Portraying capitalism as some sort of force of nature, where one doesn't "know another system that will work better", might be the neoliberals' biggest accomplishment.
Most of my close friends are non-technical and expect me to be a cheerleader for US AI efforts. They were surprised when I started mentioning the recent Stanford study finding that 80% of US startups are using Chinese models. I would like us to win, but we seem too focused on hype and not enough on engineering and practical applications.
Then they came for medical science, but I said nothing because I was not a doctor.
Then they came for specialists and subject matter experts, and I said nothing because I was an influencer and wanted the management position.
"Wipeth thine ass with what is written" should be engraved above the doorway of the National Constitution Center.
Banning it just in the USA leaves you wide open to being defeated by China, Russia, etc.
Like it or not it’s a mutually assured destruction arms race.
AI is the new nuclear bomb.
What bad thing exactly happens if China wins? What does winning even mean? They can't invade because nukes.
Can they manipulate elections? Yes, so we'll do the opposite of the great firewall and block them from the internet. Block their citizens from entering physically, too.
We should be doing this anyway, given China is known to force its citizens to spy for it.
Perun has a very good explanation why defending against nukes is impossible to do economically compared to just having more nukes and mutually assured destruction: https://www.youtube.com/watch?v=CpFhNXecrb4
1) China will get ASI and use it to beat everyone else (militarily or economically). In my reply, I argue we shouldn't race China because even if ASI is achieved and China gets it first, there's nothing they can do quickly enough that we wouldn't be able to build ASI second, or nuke them if we couldn't catch up and it became clear they intended to become a global dictatorship.
2) China will get ASI, it'll go out of control and kill everyone. In that case, I argue even more that we shouldn't race China but instead deescalate and stop the race.
BTW even in the second case, it would be very hard for the ASI to kill everyone quickly enough, especially those on nuclear submarines. Computers are much more vulnerable to EMPs than humans, so a (series of) nuclear explosion(s) like Starfish Prime could be used to destroy all or most of its computers and give humans a fighting chance.
But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.
I think if society were trained to treat AI as NOT human, things would be better.
Could you elaborate on why? I am curious but there is no argument.
That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human - animals, in some sense, can be friends - who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests - often, their interests are at odds with your interests.
Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.
Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.
There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who are advocates for the regular people who need to be protected. But we also need to shift culture around technology use. Just like social customs have come in that put guard rails around smartphone usage, we need to establish social customs around AI.
AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become a friend.
AI chatbots are not humans, they don't have ethics, they can't be held responsible, they are the product of complex mathematics.
It really takes the bad parts from social media to the next level.
I've been learning a hell of a lot from LLMs, and am doing way more coding these days for fun, even if they are doing most of the heavy lifting.
I outright stopped using Facebook.
We are doomed if AI is allowed to punish us.
[0] https://en.m.wikipedia.org/wiki/Comma_splice
We've got the real criminal right here.
https://youtu.be/8Gv0H-vPoDc
If you advertise on Facebook you're almost guaranteed to have your ad account restricted for no apparent reason, with no human being to appeal to, even if you spend big money.
It's so bad that it's common knowledge that you should start a fan page, post random stuff, and buy page likes for 5-7 days before you start advertising, otherwise their system will just flag your account.
If this kind of low-quality AI moderation is the future, I'm not sure if these major platforms will even remain usable.
I suspect sites like Reddit don't care about a few-percent false positive rate, without considering that bot farmers literally do not care (they'll just make another free account), while genuine users will have their attitude towards the site turn significantly negative when they're falsely actioned.
Don't worry, Reddit's day of reckoning comes when the advertisers figure out what percentage of the Reddit traffic they're paying to serve ads to is just bots.
Check out this post [1], which includes part of the LLM response ("This kind of story involves classic AITA themes: family drama, boundary-setting, and a “big event” setting, which typically generates a lot of engagement and differing opinions.") and almost no commenter points this out. Hilarious if it weren't so bleak.
1: https://www.rareddit.com/r/AITAH/comments/1ft3bt6/aita_for_n... (using rareddit because it was eventually deleted)
If there's no literacy, there is no critical thinking.
The only solution is to deliver high quality education to all folks and create engaging environments for it to be delivered.
Ultimately it comes down to influencing folks to think deeper about what's going on around them.
Most of the people between the ages of 13 and 30ish right now are kinda screwed and pretty much a write-off, imo.
edit: This has definitely soured my already poor opinion of reddit. I mostly post there about video games, or to help people in /r/buildapc or /r/askculinary. I think I'd rather help people somewhere I'm not going to get blackholed because an AI misinterpreted my comments.
No it won't, we'll all have to upload our IDs and videos of our faces just to register for or use Reddit or any social media. They will know who is a real, monetizable user and who is not.
This is surreptitious jamming of communications at levels that constitute and exceed thresholds for consideration as irregular warfare.
Genuine users no longer matter, only the user counts which are programmatically driven to distort reflected appraisal. The users are repressed and demoralized because of such false actions, and the platform has no solution because regulation failed to act at a time they could have changed these outcomes.
What comes later will simply be comparable to why "triage" is done on the battlefield.
Adtech is just a gloriously indirect means for money laundering in fiat money-printing environments. Credit/debt being offered, when it is unbacked without proper reserve, is money printing.
You have malevolent third-party bots taking advantage of poor moderation to conflate same-word, different-context pairs to silence communication.
For example, the Reddit AI bots consider "ricing" to be the same as "rice boy". The latter definitely is pejorative, but the former is not.
Just wild and absolutely crazy-making that this is even allowed, since communication is the primary means to inflict compulsion and torture these days.
Intolerable acts without due process or a rule of law lead to only one possible outcome. Coercion isn't new, but the stupid people are trying their hand for another bite at the apple.
The major platforms will not remain usable because eventually you get this hollowing out of meaning, and this behavior will either drive away all your rational intelligent contributors, or lead to accelerated failures such as evaporative cooling in the social networks. People use things because they provide some amount of value. When that stops being the case, the value disappears not overnight, but within a few months.
Just take a look at the linuxquestions subreddit since the mod exodus. There's an automated trickle of the same questions that don't really get sufficiently answered. It's all slop.
All the experienced people who previously shared their knowledge as charity have moved on because they were driven out by caustic harassment and a lack of proper moderation to prevent it. The mod list even hides who the mods are now, so people who have been moderated can't appeal to the Reddit administrators about the specific moderator who acted like a fascist dictator, incapable of the basic reading comprehension common to grade schoolers (AI).
"hung" means to "suspend", so the process is suspended
I'm not sure that AI would necessarily make that mistake, but a semiliterate mod very much could.
I think the real issue is the absolute impossibility of appeal. This is a big problem, for outfits like Google or Microsoft, where stories of businesses being shut down for false positive bans are fairly common.
In my experience, on the other hand, Apple has always been responsive to appeal. I have released a lot of apps, and have had fairly frequent rejections. The process is annoying, because they seldom tell you exactly what caused the rejection, but I usually figure it out, after one or two exchanges. They are almost always word-choice issues, but not bad words. Rather, they don’t like stuff that can step on their brands.
I once had an app rejected, because it had the word “Finder” in its name (it was an app for finding things).
The annoying thing was that the first rejection said it was because it was a simple re-skinning of a web site. I'm positive that what happened was that a human assessor accidentally tapped the wrong button on their dashboard.
Just look at their list of directors. It's the fortune 500 right there.
One source claimed "rice" was a race-inspired term (the exact custom I forget), the other claimed a link with Asian street racing.
I'm speculating further: but the imports were cheap and had a thriving aftermarket of bolt-on parts, e.g. body and turbo kits. The low barrier to entry afforded opportunities for anybody to play. "Ricing" was probably a pejorative issued by domestic enthusiasts that was adopted ironically by Asian-import enthusiasts. If you can imagine, there was a lot of diversity, from people who would bolt body kits onto clapped-out Civics to people pushing 700hp with extensively tuned cars with no adornments. I think in particular ricing described the more aesthetically motivated end of the crowd.
This was later adopted by computer enthusiasts who like to add embellishments to their desktops, things like Rainmeter/RocketDock and Windows/Linux skins and so on...
<Victim> "I'm ricing my Linux shell, check it out."
<Bot> That's racist!
<Bot Brigade> Moderator, this person is violating your rules and being racist!
<Moderator> I'm just using AI to determine this.
<Bot Brigade> Great! Now they can't contribute. Let's find another.
TL;DR Words have specific meanings, and a growing number of words have been corrupted purposefully to prevent communication, and by extension limit communication to the detriment of all. You get the same ultimate outcomes when people do this as any other false claim. Abuses pile up until eventually in the absence of functioning non-violent conflict resolution; violence forces the system to reform.
Have you noticed that your implication is circular based on the indefinite assumption (foregone conclusion) that the two are linked (tightly coupled)?
You use a lot of ambiguous manipulative language and structure. Doing that makes any reasonable person think you are either a bot, or a malicious actor.
Real moderation actions should not be taken without human input and should always be appealable, even if the appeal is just another mod looking at it to see if they agree.
But I don't have any alt accounts...??? Appeal process is a joke. I just opted to delete my 12 year old account instead and have stopped going there.
Oh well, probably time for them to go under and be reborn anyways. The default subs and front page has been garbage for some time.
They IP- and hardware-device banned me, it's crazy! Any appeal is auto-rejected and I can't make new accounts.
The other thing is that it is simply a complete waste of time. Commenting on pop culture or news or whatever, when I could be reading books, working on projects, or otherwise interacting with people in the real world is better. We don't have so much time on Earth, I am not sure I want to keep spending so much of it in cyberspace.
https://sustainableviews.substack.com/p/the-day-i-kissed-com...
Ultimately it's one of those arms races. The culture that surveills its population most intensely wins.
This represents a fundamental misunderstanding of how training works or can work. Memory is more to do with retrieval. Finetuning on those memories would not be useful given that the data is going to be too minuscule to affect the probability distribution in the right way.
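For readers unfamiliar with the distinction: "memory" features are typically retrieval, not training. A minimal sketch (the embed() helper is a stand-in for any sentence-embedding model; no weights are ever updated):

    import hashlib
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in for a real sentence-embedding model (hash-seeded noise,
        # so this toy retrieval is arbitrary rather than semantic).
        seed = int.from_bytes(hashlib.sha1(text.encode()).digest()[:4], "big")
        return np.random.default_rng(seed).standard_normal(384)

    memories = ["user prefers concise answers", "user is learning Rust"]
    memory_vecs = np.stack([embed(m) for m in memories])

    def recall(query: str, k: int = 1) -> list[str]:
        q = embed(query)
        sims = memory_vecs @ q / (np.linalg.norm(memory_vecs, axis=1) * np.linalg.norm(q))
        return [memories[i] for i in np.argsort(sims)[::-1][:k]]

    # Retrieved snippets are prepended to the next prompt as context;
    # the model's probability distribution is never touched.
    print(recall("what language is the user studying?"))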
While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against using conversational interfaces. Not just that, it's the same playbook of using privacy as a marketing tactic. The argument runs from the highly persuasive nature of chatbots, to how somehow privacy-preserving chatbots from DDG won't do that, to hackers stealing your info elsewhere while you're safe on DDG. And then asking for regulation.
> Use our service
Nah.
In essence, there is a general consensus on the conduct concerning trusted advisors. They should act in the interest of their client. Privacy protections exist to enable individuals to provide their advisors the context required to give good advice, without fear of disclosure to others.
I think AI needs recognition as a similarly protected class.
AI actions should be considered to be acting for a Client (or some other specifically defined term to denote who they are advising). Any information shared with the AI by the client should be considered privileged. If the Client shares the information to others, the privilege is lost.
It should be illegal to configure an AI to deliberately act against the interests of their Client. It should be illegal to configure an AI to claim that their Client is someone other than who it is (it may refuse to disclose; it may not misrepresent). Any information shared with an AI misrepresenting itself as the representative of the Client must have protections against disclosure or evidential use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.
I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.
Some of the others are along the lines of
It should be disclosed (a nutritional-information type of disclosure) when an AI makes a determination regarding a person. There should be a set of circumstances where, if an AI makes a determination regarding a person, that person is provided with means to contest the determination.
A lot of the ideas would be good practice if they went beyond AI, but are more required in the case of AI because of the potential for mass deployment without oversight.
62 more comments available on Hacker News