uBlock Origin and uBlacklist AI Blocklist
Posted 8 days ago · Active 6 days ago
github.com · story · High profile
Key topics: Ad Blocking, Data Security, Tab Management
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 2h after posting. Peak period: 38 comments in the 2-4h window. Average per period: 7.6.
Comment distribution: 68 data points (based on 68 loaded comments).
Key moments
1. Story posted: Dec 25, 2025 at 3:14 PM EST (8 days ago)
2. First comment: Dec 25, 2025 at 5:18 PM EST (2h after posting)
3. Peak activity: 38 comments in 2-4h, the hottest window of the conversation
4. Latest activity: Dec 26, 2025 at 8:08 PM EST (6d ago)
ID: 46386761 · Type: story · Last synced: 12/28/2025, 8:10:37 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
https://news.ycombinator.com/item?id=39771742
This kind of reminds me of Steam, where indie devs need to exclaim loudly that they are not using AI, otherwise they face backlash.
I don’t think anyone decrying the current crop of “AI” is against “thinking machines”. We’re not there yet, LLMs don’t think, despite the marketing.
The quote (from Dijkstra) is that asking whether machines think is as uninteresting as asking whether submarines swim. He's not saying machines don't think, he's saying it's a pointless thing to have an opinion about. That is, an argument about whether AIs think is an argument about word usage, not about AIs.
I didn’t enjoy Dune, by the way. No shade on those who did, of course, but I couldn’t bring myself to finish it.
If you think there’s something there, explain your point. Make an argument. Maybe I have misunderstood something and will correct my thinking, or maybe you have misunderstood and will correct yours. But as it is, your comment provides no value to the discussion. It’s the equivalent of a hit and run, meant to insult the other person while remaining uncommitted enough to shield yourself from criticism.
The point is AI has lots of useful applications, even though there's also lots of detestable ones.
If an IDE had powerful, effective hotkeys and shortcuts and refactoring tools that allowed devs to be faster and more efficient, would that be anti-worker?
And no, a faster way to write or refactor code is not anti-worker. Corporations gobbling up tax payer money to build power hungry datacenters so billionaires can replace workers is.
> Corporations gobbling up tax payer money to build power hungry datacenters so billionaires can replace workers is.
Which part of this is important? If there was no taxpayer funding, would it be okay? If it was low power-consumption, would it be okay?
I just want to understand what the precise issue is.
I doubt it
i don't know
I was trying to use an obscure CLI tool the other day. Almost no documentation and one wrong argument and I would brick an expensive embedded device.
Somehow Google gave me the right arguments in its AI generated answer to my search, and it worked.
I first tried every forum post I could find, but nobody seemed to be doing exactly what I was attempting to do.
I think this is a clear and moral win for AI. I am not in a position to hire embedded development consultants for personal DIY projects.
Now it's full of SBF and Scam Altman wannabes.
I think this is because most of the users/praisers of GenAI can only see it as a tool to improve productivity (see sibling comment). And yes, end of 2025, it's becoming harder to argue that GenAI is not a productivity booster across many industries.
The vast majority of people in tech are totally missing the question of morality. Missing it, or ignoring it, or hiding it.
The list is also too specific to be useful in some cases. Is it really important to you to add 12 entries for specific Amazon products, like `duckduckgo.com,bing.com##a[href*="amazon.com/Rabbit-Coloring-Book-Rabbits-Lovers/dp/B0CV43GKGZ"]:upward(li):remove()`?
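For context, entries like that use uBlock Origin's procedural cosmetic filter syntax: the domains before `##` scope the rule, `a[href*=…]` is a CSS attribute selector matching the link, `:upward(li)` walks up the DOM to the enclosing list item, and `:remove()` deletes it. As a sketch only (the `/dp/` path fragment as a marker for Amazon product pages is my assumption, and a broad rule like this risks over-matching legitimate results), many per-product entries could in principle collapse into one pattern:

```
! Hypothetical generalization: remove any search-result list item whose link
! points at an Amazon product page (/dp/ path), instead of listing each ASIN.
duckduckgo.com,bing.com##a[href*="amazon.com"][href*="/dp/"]:upward(li):remove()
```

The trade-off is precision: the list's per-ASIN entries only ever hit known AI-generated products, while a pattern like the above would also hide results for products the user might actually want.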
And it's not even that apparent how much GenAI improves overall development speed, beyond making toy apps. Hallucinations, bugs, misreading your intentions, getting stuck in loops, wasting your time debugging and testing and it still doesn't help with the actual hard problems of devwork. Even the examples you mention can be fallible.
On top of all that, is AI even profitable? It might be fine now, but what happens when it's priced to reflect its actual costs? Anecdotally it already feels like models are being quantised and dumbed down - I find them objectively less useful and I'm hitting usage limits quicker than before. Once the free ride is over, only rich people from rich countries will have access to them, and only big technology companies will control the models. It could be peer pressure, but many people genuinely object to AI universally. You can't get the useful parts without the rest of it.
The internet is full of slop (before the AI era) because of our software. We didn't need AI to ruin Google and fill the internet with slop. The internet of 2022 is still full of SEO-spammed slop. Social media in 2022 was still full of low-effort dopamine-hacked barely coherent slop.
We caused financial bubbles, used a shit ton of energy, and had component shortages before AI too.
But hey it gives us developers one hell of a paycheck to replace all those people with our crappy little apps (before AI).
The moral line is hilarious to me. We exist to destroy labor. This industry exists to put people out of work. Hacker News, startups and the billionaires are narrowly focused on "disrupting" labor-heavy industries, replacing the labor with software, and harvesting the profit. If we didn't do that, we wouldn't get our nice big paychecks and we'd have to go out there and do honest work.
A smart loud minority is screaming a lot but actual paying customers don't care as long as the game is not trash.
The backlash I've seen is against large studios leaving AI slop in $60+ games. Sure, it might just be some background textures or items at the moment, but the reasoning is that if studios know they can get away with it, quality decline is inevitable. I tend to agree. AI tooling is useful, but not at the expense of product quality.
Yes, and so are the worst. The problem is that 95% of ideas that sound stupid aren't secretly genius; they're just stupid. As Peter Thiel used to say, it's not enough to be a contrarian, that's easy; you need to be contrarian and correct.
It’s way more exploitative than it gets credit for, even those who criticize VC firms aren’t verbalizing the vastness of the scope of the issue:
Startup incubators prey on young and ambitious people’s willingness to have zero life outside of work in order to set 90% of them up to fail and make huge profits off of the 10% Airbnb-type success stories.
These VC firms have money but no talent or time of their own so they basically steal it from founders in exchange for a Hollywood or pro sports-style superstar pipe dream where most are statistically guaranteed to fail, and even those who succeed don’t keep the majority of the fruits of their labor.
What worries me is that it's reducing the value of actual engineering work (or good-quality art). It's like car lemons: their mere existence reduces the value of the good-quality work too, because buyers can't reliably tell the difference.
That misunderstands the economics:
For a long time we've been able to generate mathematical solutions at a prompt, and yet those still have value - I still gain by having them. Email is free and ubiquitous, but still has value. Clean water, for example, is generally free and ubiquitous, but has enormous value.
In the market, things are priced by their marginal value - the added value of the last one sold; your 10,000th glass of water is not as valuable as your 1st (if you have only 1). But price != value: 'price is what you pay, value is what you get'.
I have a hard time believing that a population deeply into social media, an ocean of inauthenticity - disinformation, influencers, bots, trolling, etc. - and who actively support people who advertise their inauthenticity with pride (such as certain politicians), suddenly care about authenticity.
They don't want AI messing up the 'authenticity' of their social media? lol
That said, I care very much about both authenticity and regulating AI.
Nobody is looking to block "all AI". Some of us are opposed to either specific cases or the broad deployment of LLMs.
> Some of us are opposed to either specific cases
I definitely support that.
> or the broad deployment of LLMs.
I'm not sure what that means, but I support regulation of LLMs in general. For example, LLMs should not be allowed to impersonate a person.
1. Have you looked at block lists before?
2. Do you have a specific example of what in these blocklists is strange/neurotic? I swear I've skimmed all of them a few times now and although I won't be using them, I'm struggling to understand what's odd about them.