You Don't Need Anubis

flexagoon · 177 points · 170 comments
Posted Nov 2, 2025 at 12:03 AM EDT · Last activity 17 days ago

Mood: heated
Sentiment: mixed
Category: other
Key topics: Web Scraping, Bot Protection, AI Ethics
Debate intensity: 85/100

The article 'You Don't Need Anubis' discusses the effectiveness of Anubis, a bot protection mechanism, and its limitations in preventing AI scraping, sparking a heated debate among commenters about its design and the broader implications of the AI-driven web scraping arms race.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 1h after posting
Peak period: 114 comments (Day 1)
Avg / period: 25.2
Comment distribution: 126 data points (based on 126 loaded comments)

Key moments

  1. Story posted: Nov 2, 2025 at 12:03 AM EDT (25 days ago)
  2. First comment: Nov 2, 2025 at 1:23 AM EDT (1h after posting)
  3. Peak activity: 114 comments in Day 1, the hottest window of the conversation
  4. Latest activity: Nov 10, 2025 at 9:11 AM EST (17 days ago)


Discussion (170 comments)
Showing 126 comments of 170
tptacek
25 days ago
3 replies
This came up before (and this post links to the Tavis Ormandy post that kicked up the last firestorm about Anubis) and without myself shading the intent or the execution on Anubis, just from a CS perspective, I want to say again that the PoW thing Anubis uses doesn't make sense.

Work functions make sense in password hashes because they exploit an asymmetry: attackers will guess millions of invalid passwords for every validated guess, so the attacker bears most (really almost all) of the cost.

Work functions make sense in antispam systems for the same reason: spam "attacks" rely on the cost of an attempt being so low that it's efficient to target millions of victims in the expectation of just one hit.

Work functions make sense in Bitcoin because they function as a synchronization mechanism. There's nothing actually valorous about solving a SHA2 puzzle, but the puzzles give the whole protocol a clock.

Work functions don't make sense as a token tax; there's actually the opposite of the antispam asymmetry there. Every bot request to a web page yields tokens to the AI company. Legitimate users, who far outnumber the bots, are actually paying more of a cost.

None of this is to say that a serious anti-scraping firewall can't be built! I'm fond of pointing to how Youtube addressed this very similar problem, with a content protection system built in Javascript that was deliberately expensive to reverse engineer and which could surreptitiously probe the precise browser configuration a request to create a new Youtube account was using.

The next thing Anubis builds should be that, and when they do that, they should chuck the proof of work thing.
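
For illustration, a rough back-of-the-envelope version of the token-tax asymmetry described above; every number here is an assumption for the sketch, not a figure from the thread:

    // Illustrative assumptions only, not measurements.
    const powSecondsPerPage = 2;      // assumed PoW cost paid per page load
    const cpuDollarsPerHour = 0.05;   // assumed price of one vCPU-hour
    const tokensPerPage = 2000;       // assumed training tokens yielded per scraped page

    const botCostPerPage = (powSecondsPerPage / 3600) * cpuDollarsPerHour;
    const botCostPerMillionTokens = botCostPerPage * (1_000_000 / tokensPerPage);

    console.log(`scraper pays ~$${botCostPerPage.toExponential(2)} per page`);
    console.log(`~$${botCostPerMillionTokens.toFixed(4)} per million tokens scraped`);
    // A human visitor pays the same ~2 seconds, but as user-visible latency on
    // every protected page load, which is the asymmetry running backwards.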

gucci-on-fleek
25 days ago
1 reply
> Work functions don't make sense as a token tax; there's actually the opposite of the antispam asymmetry there. Every bot request to a web page yields tokens to the AI company. Legitimate users, who far outnumber the bots, are actually paying more of a cost.

Agreed, residential proxies are far more expensive than compute, yet the bots seem to have no problem obtaining millions of residential IPs. So I'm not really sure why Anubis works—my best guess is that the bots have some sort of time limit for each page, and they haven't bothered to increase it for pages that use Anubis.

> with a content protection system built in Javascript that was deliberately expensive to reverse engineer and which could surreptitiously probe the precise browser configuration a request to create a new Youtube account was using.

> The next thing Anubis builds should be that, and when they do that, they should chuck the proof of work thing.

They did [0], but it doesn't work [1]. Of course, the Anubis implementation is much simpler than YouTube's, but (1) Anubis doesn't have dozens of employees who can test hundreds of browser/OS/version combinations to make sure that it doesn't inadvertently block human users, and (2) it's much trickier to design an open-source program that resists reverse-engineering than a closed-source program, and I wouldn't want to use Anubis if it went closed-source.

[0]: https://anubis.techaro.lol/docs/admin/configuration/challeng...

[1]: https://github.com/TecharoHQ/anubis/issues/1121

tptacek
25 days ago
Google's content-protection system didn't simply make sure you could run client-side Javascript. It implemented an obfuscating virtual machine that, if I'm remembering right (I may be getting some of the details blurred with Blu-ray's BD+ scheme), built up a hash input of runtime artifacts. As I understand it, it was one person's work, not the work of a big team. The "source code" we're talking about here is clientside Javascript.

Either way: what Anubis does now --- just from a CS perspective, that's all --- doesn't make sense.

mariusor
25 days ago
1 reply
With all due respect, almost all I see in this thread is people looking down their noses at a proven solution and giving advice instead of doing the work. I can see how you are a _very important person_ with bills to pay and money to make, but at least have the humility to understand that the solution we got is better than the solution that could exist if only there were someone else to think of it and build it.
tptacek
25 days ago
1 reply
You can't moralize a flawed design into being a good one.
mariusor
25 days ago
1 reply
How about into a "good enough one"?
tptacek
25 days ago
1 reply
Look, I don't care if you run Anubis. I'm not against "Anubis". I'm interested in the computer science of the current Anubis implementation. It's not great. It doesn't make sense. Those are descriptive observations, and you can't moralize them into being false; you need to present an actual argument.
mariusor
24 days ago
1 reply
This is not me being aggro because you're picking on my favourite project; I dislike Anubis for more or less the same reasons you see in this thread. I don't want JavaScript on otherwise static sites, I don't like the anime girl, etc. What I don't agree with is people like you pontificating about what an inferior solution it is, and *how* obvious that should be to everybody, while failing to provide any better alternatives. So I guess what I'm trying to say is: put up or shut up.
tptacek
24 days ago
1 reply
Sorry, but I really can't think of anything less interesting to debate than how a computer science argument makes you feel about how it might make someone else feel.
mariusor
24 days ago
I don't know in how many more different ways I can say it, but I'm not inviting you to debate, I'm inviting you to write a better tool and make it accessible for free.
Gander5739
25 days ago
1 reply
But youtube can still be scraped with yt-dlp, so apparently it wasn't enough.
tptacek
25 days ago
Preventing that wasn't the objective of the content-protection system. You'll have to go read up on it.
uqers
25 days ago
3 replies
> Unfortunately, the price LLM companies would have to pay to scrape every single Anubis deployment out there is approximately $0.00.

The math on the site linked here as a source for this claim is incorrect. The author of that site assumes that scrapers will keep track of the access tokens for a week, but most internet-wide scrapers don't do so. The whole purpose of Anubis is to be expensive for bots that repeatedly request the same site multiple times a second.

valicord
25 days ago
1 reply
The point is that the scrapers can easily bypass this if they cared to do so
uqers
25 days ago
2 replies
How so?
tecoholic
25 days ago
1 reply
Hmm… by setting the verified=1 cookie on every request to the website?

Am I missing something here? All this does is set an unencrypted cookie and reload the page right?

notpushkin
25 days ago
1 reply
They could, but if this is slightly different from site to site, they’ll have to either do this for every site (annoying but possible if your site is important enough), or go ahead and run JS (which... I thought they do already, with plenty of sites still being SPAs?)
rezonant
25 days ago
I would be highly surprised if most of these bots aren't already running JavaScript; I'm confused by this unquestioned notion that they don't.
valicord
24 days ago
The parent comment was "The author of that site assumes that scrapers will keep track of the access tokens for a week, but most internet-wide scrapers don't do so.". There's no technical reason why they wouldn't reuse those tokens, they don't do that today because they don't care. If anubis gets enough adoption to cause meaningful inconvenience, the scrapers would just start caching the tokens to amortize the cost.

The point of the article is that if the scraper is sufficiently motivated, Anubis is not going to do much anyway, and if the scraper doesn't care, same result can be achieved without annoying your actual users.

drum55
25 days ago
4 replies
The "cost" of executing the JavaScript proof of work is fairly irrelevant, the whole concept just doesn't make sense with a pessimistic inspection. Anubis requires the users to do an irrelevant amount of sha256 hashes in slow javascript, where a scraper can do it much faster in native code; simply game over. It's the same reason we don't use hashcash for email, the amount of proof of work a user will tolerate is much lower than the amount a professional can apply. If this tool provides any benefit, it's due to it being obscure and non standard.

When reviewing it I noticed that the author carried the common misunderstanding that "difficulty" in proof of work is simply the number of leading zero bytes in a hash, which limits the granularity to powers of two. I realize that some of this is the cost of working in JavaScript, but the hottest code path seems to be written extremely inefficiently.

    for (; ;) {
        const hashBuffer = await calculateSHA256(data + nonce);
        const hashArray = new Uint8Array(hashBuffer);

        let isValid = true;
        for (let i = 0; i < requiredZeroBytes; i++) {
          if (hashArray[i] !== 0) {
            isValid = false;
            break;
          }
        }
It wouldn't be an exaggeration to say that a native implementation of this with even a hair of optimization could make the "proof of work" less time-intensive than the SSL handshake.
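
For comparison, here is a sketch of the finer-grained, bit-level difficulty check this comment is gesturing at; `difficultyBits` is a hypothetical parameter (each extra bit roughly doubles the expected work), and this is not Anubis's actual code:

    // Sketch: count leading zero *bits* instead of whole zero bytes, so the
    // difficulty can be tuned in ~2x steps. Not Anubis's implementation.
    function meetsDifficulty(hashArray, difficultyBits) {
      let bits = difficultyBits;
      for (let i = 0; i < hashArray.length && bits > 0; i++) {
        if (bits >= 8) {
          if (hashArray[i] !== 0) return false;
          bits -= 8;
        } else {
          // Only the top `bits` bits of this byte need to be zero.
          return (hashArray[i] >> (8 - bits)) === 0;
        }
      }
      return true;
    }
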
jsnell
25 days ago
That is not a productive way of thinking about it, because it will lead you to the conclusion that all you need is a smarter proof of work algorithm. One that's GPU-resistant, ASIC-resistant, and native code resistant. That's not the case.

Proof of work can't function as a counter-abuse challenge even if you assume that the attackers have no advantage over the legitimate users (e.g. both are running exactly the same JS implementation of the challenge). The economics just can't work. The core problem is that the attackers pay in CPU time, which is fungible and incredibly cheap, while the real users pay in user-observable latency which is hellishly expensive.
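
A quick illustration of that economics point, with made-up prices rather than anything from the comment:

    // Made-up prices, purely to illustrate the CPU-time vs latency asymmetry.
    const challengeSeconds = 1;        // assumed time to solve one challenge
    const vcpuDollarsPerHour = 0.05;   // assumed cloud vCPU price
    const solves = 1_000_000;

    const scraperDollars = (solves * challengeSeconds / 3600) * vcpuDollarsPerHour;
    console.log(`${solves} solves cost a scraper ~$${scraperDollars.toFixed(2)} of CPU`);
    // The same million solves cost human visitors ~278 hours of added page-load
    // latency in aggregate, which is the expensive side of the trade.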

gruez
25 days ago
>but the hottest code path seems to be written extremely inefficiently.

Why is this inefficient?

aniviacat
25 days ago
They do use SubtleCrypto digest [0] in secure contexts, which does the hashing natively.

Specifically for Firefox [1] they switch to the JavaScript fallback because that's actually faster [2] (because of overhead probably):

> One of the biggest sources of lag in Firefox has been eliminated: the use of WebCrypto. Now whenever Anubis detects the client is using Firefox (or Pale Moon), it will swap over to a pure-JS implementation of SHA-256 for speed.

[0] https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt...

[1] https://github.com/TecharoHQ/anubis/blob/main/web/js/algorit...

[2] https://github.com/TecharoHQ/anubis/releases/tag/v1.22.0
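
For reference, the native path being discussed is the standard `SubtleCrypto.digest()` call; a minimal generic sketch follows (not Anubis's code, and the Firefox fallback logic isn't reproduced here):

    // Hash a challenge string with the Web Crypto API (secure contexts only).
    async function sha256Hex(input) {
      const bytes = new TextEncoder().encode(input);
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    sha256Hex("challenge" + 12345).then(console.log);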

xena
25 days ago
If you can optimize it, I would love that as a pull request! I am not a JS expert.
tptacek
25 days ago
Right, but that's the point. It's not that the idea is bad. It's that PoW is the wrong fit for it. Internet-wide scrapers don't keep state? Ok, then force clients to do something that requires keeping state. You don't need to grind SHA2 puzzles to do that; you don't need to grind anything at all.
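
One hedged sketch of what "force clients to keep state" could look like: a server-issued, HMAC-signed token the client must store and send back, so a scraper that discards cookies keeps re-entering the challenge. The names, secret, and policy here are made up for illustration; this is not Anubis's design or a detailed version of the proposal above.

    // Issue and verify a signed token the client must hold onto. Illustrative only.
    const crypto = require("node:crypto");
    const SECRET = "rotate-me-regularly"; // placeholder secret

    function issueToken(clientHint) {
      const payload = `${clientHint}:${Date.now()}`;
      const sig = crypto.createHmac("sha256", SECRET).update(payload).digest("hex");
      return `${payload}:${sig}`;
    }

    function verifyToken(token, maxAgeMs = 7 * 24 * 3600 * 1000) {
      const i = token.lastIndexOf(":");
      if (i < 0) return false;
      const payload = token.slice(0, i);
      const sig = token.slice(i + 1);
      const expected = crypto.createHmac("sha256", SECRET).update(payload).digest("hex");
      if (sig.length !== expected.length) return false;
      const issuedAt = Number(payload.split(":")[1]);
      return crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected)) &&
        Date.now() - issuedAt < maxAgeMs;
    }

    const t = issueToken("198.51.100.7");
    console.log(verifyToken(t)); // true
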
indrora
25 days ago
2 replies
The problem is that increasingly, they are running JS.

In the ongoing arms race, we're likely to see simple things like this sort of check result in a handful of detection systems that look for "set a cookie" or at least "open the page in headless chrome and measure the cookies."

utopiah
25 days ago
1 reply
> increasingly, they are running JS.

I mean, they have access to a mind-blowing amount of computing resources, so using a fraction of that to improve the quality of the data fits their fundamental belief (convenient for their situation) that scale is everything; why not run JS too. Heck, if they have to run a full browser in a container, not even headless, they will.

typpilol
25 days ago
Chrome even released a DevTools MCP that gives any LLM full tool access to do anything in the browser.

Navigate, screenshots, etc. It has like 30 tools in it alone.

Now we can just run real browsers with LLMs attached. Idk how you even think about defeating that.

moebrowne
25 days ago
1 reply
> increasingly, they are running JS.

Does anyone have any proof of this?

xena
25 days ago
1 reply
I'm seeing more big botnets hosted on Alibaba Cloud, Huawei Cloud, and one on Tencent Cloud that run Headless Chrome. IP space blocks have been the solution there. I currently have a thread open with Tencent Cloud abuse where they've been begging me to not block them by default.
ranger_danger
24 days ago
I don't consider cloud IP blocks a solution. We use Amazon WorkSpaces, and many sites often block or restrict access just because our IPs appear to be from Amazon. There are also a good number of legitimate VPN users that are on cloud IPs.
echelon
25 days ago
5 replies
This whole thing is pointless.

OpenAI Atlas defeats all of this by being a user's web browser. They got between you and the user you're trying to serve content, and they slurp up everything the user browses to return it back for training.

The firewall is now moot.

The bigger AI company, Google, has already been doing this for decades. They were the middlemen between your reader and you, and that position is unassailable. Without them, you don't have readers.

At this point, the only people you're keeping out with LLM firewalls are the smaller players, which further entrenches the leaders.

OpenAI and Google want you to block everybody else.

happyopossum
25 days ago
1 reply
> Google, has already been doing this for decades

Do you have any proof, or even circumstantial evidence to point to this being the case?

If Chrome actually scraped every site you ever visited and sent it off to Google, it'd be trivially simple to find some indication of that in network traffic, or heck, even in Chromium code.

echelon
25 days ago
1 reply
Sorry, I mean they sit in the middle of the customer relationship.

Who would dare block Google Search from indexing their site?

The relationship is adversarial, but necessary.

ranger_danger
24 days ago
> Who would dare block Google Search from indexing their site?

People who don't want to be indexed. Or found at all.

Dylan16807
25 days ago
1 reply
Is it confirmed that site loads go into the training database?

But for anyone whose main concern is their server staying up, Atlas isn't a problem. It's not doing a million extra loads.

heavyset_go
25 days ago
1 reply
> Is it confirmed that site loads go into the training database?

Would you trust OpenAI if they told you it doesn't?

If you would, would you also trust Meta to tell you if its multibillion dollar investment was trained on terabytes of pirated media the company downloaded over BitTorrent?

viraptor
25 days ago
2 replies
We don't have to trust it or not. If there's such a claim, surely someone can point at least to a pcap file with an unknown connection, or to some decompiled code. Otherwise it's just a conspiracy theory.
_flux
25 days ago
1 reply
Surely the data must go to OpenAI's servers; how else would they use LLMs on it? We cannot see whether that data ends up in the training data.

Personally, I would just believe what they say for the time being; there would be backlash in doing otherwise, possibly a legal one.

viraptor
25 days ago
I think the original claim was about something different. "Is it confirmed that site loads..." - I read it as the author talking about general browsing, not just explicit questions, with the context of the page.
heavyset_go
25 days ago
Whatever is included in context is in OpenAI's control from that point forward, and you just have to trust them not to do anything with it.

That isn't a conspiracy theory, it's fundamentally how interfacing with 3rd party hosted LLMs works.

_flux
25 days ago
As I understand it, the main point of Anubis is to reduce the costs caused by (AI company) bots, and agent-generated load is still a lot less than simply spidering the complete website; it might actually be quite close to what a user would manually browse.

Unless the user asked something that requires visiting many pages, I suppose. For example, Google Gemini was pretty helpful in finding out the typical price ranges and dishes of the coffee shops in a local shopping centre, as the information was far from being in just a single page.

masklinn
25 days ago
> This whole thing is pointless.

It's definitely pointless if you completely miss the point of it.

> OpenAI Atlas defeats all of this by being a user's web browser. They got between you and the user you're trying to serve content, and they slurp up everything the user browses to return it back for training.

Cool. Anubis' fundamental purpose is not to prevent all bot access tho, as clearly spelled in its overview:

> This program is designed to help protect the small internet from the endless storm of requests that flood in from AI companies.

OpenAI atlas piggybacking on the user's normal browsing is not within the remit of anubis, because it's not going to take a small site down or dramatically increase hosting costs.

> At this point, the only people you're keeping out with LLM firewalls are the smaller players

Oh no, who will think of the small assholes?

seba_dos1
25 days ago
The "LLM firewall" is usually there so AI companies don't take the server down, not to prevent model training (that's just an acceptable side effect).
yellow_lead
25 days ago
4 replies
Anubis should be something that doesn't inconvenience all the real humans that visit your site.

I work with ffmpeg so I have to access their bugtracker and mailing list site sometimes. Every few days, I'm hit with the Anubis block. And 1/3 - 1/5 of the time, it fails completely. The other times, it delays me by a few seconds. Over time, this has turned me sour on the Anubis project, which was initially something I supported.

throwaway290
25 days ago
1 reply
I understand why ffmpeg does it. No one is expected to pay for it. Until this age of LLMs, when bot traffic became dominant on the web, the ffmpeg site was probably an acceptable expense. But they probably don't want to be an unpaid data provider for big LLM operators who get to extract a few bucks from their users.

It's like airline check-in. Are we inconvenienced? Yes. Who is there to blame? Probably not the airline or the company that provides the services; probably the people who want to fly without a ticket or bring in explosives.

As long as the Anubis project and the people on it don't try to play both sides and don't make the LLM situation worse (mafia-racket style), I think if it works, it works.

TJSomething
25 days ago
2 replies
I know it's beside the point, but I think a chunk of the reason for many of the security measures in airports is that creating the appearance of security increases people's willingness to fly.
fragmede
24 days ago
Like some sort of theater, you say?
majewsky
24 days ago
Yes, this is where the term "security theater" comes from: https://en.wikipedia.org/wiki/Security_theater
mariusor
25 days ago
2 replies
I don't understand the hate when people look at a countermeasure against unethical shit and complain about it instead of being upset at the unethical shit. And it's funny when it's the other way around, like cookie banners being blamed on GDPR not on the scumminess of some web operators.
elashri
25 days ago
1 reply
I don't understand why some people don't realize that you can be upset about a status quo in which both sides of the equation suck. You can hate a thing and also the countermeasure that someone deploys against it. These are not mutually exclusive.
mariusor
25 days ago
1 reply
I didn't see the parent being upset about both sides on this one. I don't see it implied anywhere that they even considered it.
elashri
25 days ago
1 reply
>which was initially something I supported.

That quote is a strong indication that he sees it this way.

yellow_lead
25 days ago
1 reply
Yup, I'm against the AI scraping. But personally for me, the equation breaks when I'm getting delays and errors when just visiting a bug tracker.

Sounds like maybe it'll be fixed soon though

GabrielTFS
17 days ago
Do you find no one at all being able to access the bug tracker to be preferable to "getting delays and errors" ?
m4rtink
25 days ago
Also the Anubis mascot is very cute! ;-)
xena
25 days ago
1 reply
I've finally found a ruleset that works for that fwiw. The newest release has that fix.
yellow_lead
25 days ago
1 reply
Thank you!
xena
25 days ago
No problem. I wish I had found it sooner, but between doing this nights and weekends while working a full time job, trying to help my husband find a new job, navigating the byzantine nightmare that is sales to education institutions, and other things I have found out that I hate, I have not had a lot of time to actually code things. I wish I could afford to work on this full time. Government grants have not gone through because I don't have the metrics they need. Probably gonna have to piss people off to get the bare minimum of metrics that I need in order to justify why I should get those grants.
opan
25 days ago
I only had issues with it on GNOME's bug tracker and could work around it with a UA change, meanwhile Cloudflare challenges are often unpassable in qutebrowser no matter what I do.
GauntletWizard
25 days ago
4 replies
Anubis's design is copied from a great botnet protection mechanism - You serve the Javascript cheaply from memory, and then the client is forced to do expensive compute in order to use your expensive compute. This works great at keeping attackers from attempting to waste your time; It turns a 1:1000 amplification in compute costs into a 1000:1.

It is a shitty, and obviously bad solution for preventing scraping traffic. The goal of scraping traffic isn't to overwhelm your site, it's to read it once. If you make it prohibitively expensive to read your site even once, nobody comes to it. If you make it only mildly expensive, nobody scraping cares.

Anubis is specifically DDOS protection, not generally anti-bot, aside from defeating basic bots that don't emulate a full browser. It's been cargo-culted in front of a bunch of websites because of the latter, but it was obviously not going to work for long.

purple_turtle
25 days ago
Some people deployed Anubis not to stop scraping, but to stop scraping the same page multiple times per second.
reppap
25 days ago
First of all, Anubis isn't meant to protect simple websites that get read once. It's meant for things like a GitLab instance where AI bots are indexing every single commit of every single file, resulting in thousands if not millions of reads. And reading an Anubis page once isn't expensive either. So I don't really understand what point you are trying to make, as the premise seems completely wrong.
ranger_danger
24 days ago
> Anubis is specifically DDOS protection

Only against well-behaved, application-level DDoS, maybe.

A real network-level attack in the many-gigabits-per-second range will not be stopped by Anubis itself.

viraptor
25 days ago
> The goal of scraping traffic isn't to overwhelm your site, it's to read it once.

If the authors of the scrapers actually cared about it, we wouldn't have this problem in the first place. But today the more appropriate description is: the goal is to scrape as much data as possible as quickly as possible, preferably before your site falls over. They really don't care about side effects beyond that. Search engines have an incentive to leave your site running. AI companies don't. (Maybe apart from Perplexity.)

notpushkin
25 days ago
3 replies
My favourite thing about Anubis is that (in default configuration) it completely bypasses the actual challenge altogether if you set User-Agent header to curl.

E.g. if you open this in browser, you’ll get the challenge: https://code.ffmpeg.org/FFmpeg/FFmpeg/commit/13ce36fef98a3f4...

But if you run this, you get the page content straight away:

  curl https://code.ffmpeg.org/FFmpeg/FFmpeg/commit/13ce36fef98a3f4e6d8360c24d6b8434cbb8869b
I'm pretty sure this gets abused by AI scrapers a lot. If you're running Anubis, take a moment to configure it properly, or better, put together something that's less annoying for your visitors, like the OP.
rezonant
25 days ago
3 replies
It only challenges user agents with Mozilla in their name by design, because user agents that do otherwise are already identifiable. If Anubis makes the bots change their user agents, it has done its job, as that traffic can now be addressed directly.
hshdhdhehd
25 days ago
2 replies
What if every request from the bot has a different UA?
trenchpilgrim
25 days ago
Then you can tell the bots apart from legitimate users through normal WAF rules, because browsers froze the UA a while back.
skylurk
25 days ago
Success. The goal is to differentiate users and bots who are pretending to be users.
hsbauauvhabzb
25 days ago
1 reply
Can you explain what you mean by this? Why Mozilla specifically and not WebKit or similar?
gucci-on-fleek
25 days ago
Due to weird historical reasons [0] [1], every modern browser's User-Agent starts with "Mozilla/5.0", even if they have nothing to do with Firefox.

[0]: https://en.wikipedia.org/wiki/User-Agent_header#Format_for_h...

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...

samlinnfer
25 days ago
1 reply
This has basically been Wikipedia's bot policy for a long long time. If you run a bot you should identify it via the UserAgent.

https://foundation.wikimedia.org/wiki/Policy:Wikimedia_Found...

1vuio0pswjnm7
24 days ago
It's only recently, within the last three months IIRC, that Wikipedia started requiring a UA header

I know because as a matter of practice I do not send one. Like I do with most www sites, I used Wikipedia for many years without ever sending a UA header. Never had a problem

I read the www text-only, no graphical browser, no Javascript

xena
25 days ago
2 replies
This was a tactical decision I made in order to avoid breaking well-behaved automation that properly identifies itself. I have been mocked endlessly for it. There is no winning.
ranger_danger
24 days ago
1 reply
How is a curl user-agent automatically a well-behaved automation?
fragmede
24 days ago
1 reply
One assumes it is a human, running curl manually, from the command line on a system they're authorized to use. It's not wget -r.
ranger_danger
24 days ago
1 reply
Sounds like the perfect opportunity for bots to use the curl user-agent. How do we know they're not already doing this?
fragmede
24 days ago
We don't, but now that we've talked about it publicly on the internet, they're gonna start doing that. I'm sure they previously were, but now we've gone and told them, uh, yeah.
seba_dos1
25 days ago
The winning condition does not need to consider people who write before they think.
seba_dos1
25 days ago
> I’m pretty sure this gets abused by AI scrapers a lot.

In practice, it hasn't been an issue for many months now, so I'm not sure why you're so sure. Disabling Anubis takes servers down; allowing curl bypass does not. What makes you assume that aggressive scrapers that don't want to identify themselves as bots will willingly identify themselves as bots in the first place?

jchw
25 days ago
1 reply
I was briefly messing around with Pangolin, which is supposed to be a self-hosted Cloudflare Tunnels sort of thing. Pretty cool.

One thing I noticed though was that the Digital Ocean Marketplace image asks you if you want to install something called Crowdsec, which is described as a "multiplayer firewall", and while it is a paid service, it appears there is a community offering that is well-liked enough. I actually was really wondering what downsides it has (except for the obvious, which is that you are definitely trading some user privacy in service of security) but at least in principle the idea seems kind of a nice middleground between Cloudflare and nothing if it works and the business model holds up.

bootsmann
25 days ago
1 reply
Not sure crowdsec is fit for this purpose. Its more a fail2ban replacement than a ddos challenge.
jchw
25 days ago
One of the main ways that Cloudflare is able to avoid presenting CAPTCHAs to a lot of people while still filtering tons of non-human traffic is exactly that, though: just having a boatload of data across the Internet.
geokon
25 days ago
1 reply
Big picture, why does everyone scrape the web?

Why doesn't one company do it and then resell the data? Is it a legal/liability issue? If you scrape, it's a legal grey area, but if you sell what you scrape, it's clearly copyright infringement?

utopiah
25 days ago
3 replies
My bet is that they believe https://commoncrawl.org isn't good enough and that, precisely as you are suggesting, the "rest" is where their competitive advantage might stem from.
fragmede
24 days ago
I think that there are lots of people who are working from "first principles" and haven't even heard of common crawl or know how to use it.
ccgreg
24 days ago
Most academic AI research and AI startups find Common Crawl adequate for what they're doing. Common Crawl also has a lot of not-AI usage.
Jackson__
25 days ago
Thinking that there is anything worth scraping past the llm-apocalypse is pure hubris imo. It is slop city out there, and unless you have an impossibly perfect classifier to detect it, 99.9% of all the great new "content" you scrape will be AI written.

E: In fact this whole idea is so stupid that I am forced to consider if it is just a DDoS in the original sense. Scrape everything so hard it goes down, just so that your competitors can't.

defraudbah
25 days ago
1 reply
[flagged]
m4rtink
25 days ago
Working as intended! ;-)
hubraumhugo
25 days ago
1 reply
What's the endgame of this increasing arms race? A gated web where you need to log in everywhere? Even more captchas and Cloudflare becoming the gateway to the internet? There must be a better way.

We're somehow still stuck with CAPTCHAs (and other challenges), a 25-year-old concept that wastes millions of human hours and billions in infra costs [0].

[0] https://arxiv.org/abs/2311.10911

DecoySalamander
25 days ago
Maybe a web where you provide your credit card number upfront and pay for each outgoing request.
paweladamczuk
25 days ago
1 reply
Internet in its current form, where I can theoretically ping any web server on earth from my bedroom, doesn't seem sustainable. I think it will have to end at some point.

I can't fully articulate it, but I feel like there is some game-theory aspect of the current design that's just not compatible with reality.

noAnswer
25 days ago
2 replies
Years ago, wasn't there a proposal from Google or the likes to have push notifications for search engines? Instead of the bots checking over and over again whether there is something new, you would inform them about it. I think that would be a fair middle ground: you don't DDoS us, and in exchange we inform you promptly when there is something new. (Bots would need a way to subscribe themselves.)

I have a personal website that sometimes doesn't get an update for a year. Still, bots are the majority of visitors. (Not so much that I would need countermeasures, but still.) Most bot visits could be avoided with such a scheme.

redwall_hp
24 days ago
Ah, so blog pingbacks are new again. https://en.wikipedia.org/wiki/Pingback

That's how Technorati worked.

ranger_danger
24 days ago
The problem I see with this approach is that it enables website operators to stop alerting bots completely, and then the bots' customers will complain that sites aren't updated, and don't care that the site owner is blocking them.
weinzierl
25 days ago
2 replies
"Unfortunately, Cloudflare is pretty much the only reliable way to protect against bots."

With footnote:

"I don’t know if they have any good competition, but “Cloudflare” here refers to all similar bot protection services."

That's the crux. Cloudflare is the default; no one seems willing to take the risk with a competitor, for some reason. Competitors seem to exist, but when asked, people can't even name them.

(For what it's worth I've been using AWS Cloudfront but I had to think a moment to remember its name.)

fragmede
24 days ago
AWS Shield (and GCP Cloud Armor) is the DDoS-protection product you're thinking of. CloudFront is a CDN.
Avamander
25 days ago
It's actually not that reliable either given a bit of effort. Only their paid offerings actually give you tools to properly defend against intentional attacks.
Borg3
25 days ago
1 reply
It seems that people do NOT understand it's already game over. Lost. When stuff was small and we had abusive actors, nobody cared: oh, just a few bad actors, nothing to worry about, they will get bored and go away. No, they won't; they will grow and grow, and now even most of the good guys have turned bad because there is no punishment for it. So, as I said, game over.

It's time to start building our own walled gardens, overlay VPN networks for humans. Put services there; if someone misbehaves? BAN their IP. Came back? BAN again. Came back? WTF? BAN the VPN provider. Just clean up the mess. Different networks can peer and exchange. Look, the Internet is just a network of networks, it's not that hard.

timeon
25 days ago
Good idea. Another solution is to move our things to p2p. These corporations need expensive servers to run huge models on or to just collect data. Sometimes the winning move is not to play the game: true server-less.
gbuk2013
25 days ago
1 reply
The Caddy config in the parent article uses status code 418. This is cute, but wouldn’t this break search engine indexing? Why not use 307 code?
flexagoonAuthor
25 days ago
I use this for a personal Redlib instance, so search indexing is not important. I don't know if this will allow indexing even with a 307 status code - maybe you just need to add an exception for Googlebot.
agnishom
25 days ago
1 reply
Exactly. I don't understand what computation you can afford to do in 10 seconds on a small number of cores that bots running in large data centers cannot.
juliangmp
25 days ago
1 reply
The point of anubis isn't to make the scraping impossible, but make it more expensive.
agnishom
25 days ago
1 reply
by how much? I don't understand the cost model here at all.
eqvinox
25 days ago
1 reply
AIUI the idea is to ratelimit each "solution". A normal human's browser only needs to "solve" once. A LLM crawler either needs to slow down (= objective achieved) or solve the puzzle n times to get n × the request rate.
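
A minimal sketch of the per-solution rate limiting described above (in-memory, with made-up limits; a real deployment would need expiry and shared storage):

    // Count requests per issued challenge token; once a token exceeds its
    // budget, send the client back to the challenge. Limits are made up.
    const usage = new Map(); // token -> { count, windowStart }
    const LIMIT = 60;        // requests allowed per token per window
    const WINDOW_MS = 60_000;

    function allowRequest(token) {
      const now = Date.now();
      const entry = usage.get(token) ?? { count: 0, windowStart: now };
      if (now - entry.windowStart > WINDOW_MS) {
        entry.count = 0;
        entry.windowStart = now;
      }
      entry.count += 1;
      usage.set(token, entry);
      return entry.count <= LIMIT; // false -> re-issue the challenge
    }
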
agnishom
23 days ago
1 reply
Let's say that adding Anubis adds 10 seconds of extra compute for the bot when it tries to access my website. Will this be enough to deter the bot/scraper?
GabrielTFS
17 days ago
Empirical evidence appears to show that it is ¯\_(ツ)_/¯
greatgib
25 days ago
3 replies
Just a personal observation: when I want to see a page and instead I have to face a stupid 3-second nag screen like the one from Anubis, I'm very pissed off and pushed even more to bypass the website when possible and get the info I want directly from an LLM or a search engine.

It's kind of a self-fulfilling prophecy: you make the visitor experience worse, giving a self-justification for why getting the content from an LLM is wanted and needed.

All of that because, in the current lambda/cloud computing world, it has become very expensive to process only a few requests.

eqvinox
25 days ago
1 reply
If you don't feel like understanding that the thing to be pissed off about here is the AI crawlers, we don't feel like understanding your displeasure about the Anubis wall either. The choices are either the Anubis wall or nothing. This isn't theoretical; I've been involved in this decision: we had to either close off the service entirely or put [something like] Anubis in front of it.

> have to face a 3s stupid nagscreens like the one of anubis, I'm very pissed off and pushed even more to bypass the website when possible to get the info I want directly from llm or search engine.

Most (freely accessible) LLMs will take more than 3s to "think". Why are you pissed off about Anubis, but not the slow LLM? And then you have to double check the LLM anyway...

> All of that because in the current lambda/cloud computing word, it became very expensive to process only a few requests.

You're making some very arrogant assumptions here. FOSS repos and bugtrackers are generally not lambda/cloud hosted.

redwall_hp
24 days ago
There are a lot of phpBB/XenForo/Discourse/etc forums out there too that get slammed hard by those crawlers, and many cases of them just shutting down rather than eating much higher hosting costs. Which, of course, further pushes online communities into the hands of corporations like Reddit and Facebook.

Most of them are simply throwing one of those tools on a VPS or such, which is perfect for their community size, and then it falls over under LLM companies' botnets DDoSing them.

DanOpcode
25 days ago
I agree, I think it gives a bad impression when I need to see the anime Anubis girl before the page loads. Codeberg.org often shows me the nag screen, and it has worsened my impression of their service.
robinsonb5
25 days ago
Unfortunately the choice isn't between sites with something like Anubis and sites with free and unencumbered access. The choice is between putting up with Anubis and the sites simply going away.

A web forum I read regularly has been playing whack-a-mole with LLM scrapers for much of this year, with multiple weeks-long periods where the swarm-of-locusts would make the site inaccessible to actual users.

The admins tried all manner of blocks, including ultimately banning entire countries' IP ranges, all to no avail.

The forum's continued existence depends on being able to hold off abusive crawlers. Having to see half-a-second of the Anubis splashscreen occasionally is a small price to pay for keeping it alive.

katdork
25 days ago
1 reply
I don't like this solution because it is hostile to those who use solutions such as UMatrix / NoScript in their browser, who use TUI browsers (e.g. chawan, lynx, w3m, ...) or who have disabled Javascript outright.

Admittedly, this is no different than the kinds of ways Anubis is hostile to those same users, truly a tragedy of the commons.

1vuio0pswjnm7
24 days ago
Whether intentional or not, there is an obvious benefit to the website operator in forcing users to expose themselves to images and Javascript by requiring the use of particular software, e.g. a popular graphical browser from a company providing advertising services (Google, Apple, etc.) or partnering with one (Mozilla):

It makes (a) visual advertising and (b) tracking viable

I read the www text-only, no auto-loading of resources (images, etc.), and I see no ads

utopiah
25 days ago
"Yes, it works, and does so as effectively as Anubis, while not bothering your visitors with a 10-second page load time."

Cool... but I guess now we need a benchmark for such solutions. I don't know the author, and I roughly know the problem (as I self-host and most of my traffic now comes from AI scraper bots, not the usual indexing bots or, mind you, humans), but when there are numerous solutions to a multi-dimensional problem I need a common way to compare them.

Yet another solution is always welcome, but without being able to compare them efficiently I can't pick the right one for me.

1vuio0pswjnm7
24 days ago
Looks like Anubis allows the Internet Archive's bot sometimes

Allowed

https://web.archive.org/web/20250419222331if_/https://anubis...

https://web.archive.org/web/20250419222331if_/https://anubis...

https://web.archive.org/web/20250420152651if_/https://anubis...

https://web.archive.org/web/20250420152651if_/https://anubis...

Blocked

https://web.archive.org/web/20250424235436if_/https://anubis...

https://web.archive.org/web/20250510230703if_/https://anubis...

https://web.archive.org/web/20250511110518if_/https://anubis...

https://web.archive.org/web/20250630101240if_/https://anubis...

https://web.archive.org/web/20250808051637if_/https://anubis...

https://web.archive.org/web/20250909160601if_/https://anubis...

Allowed

https://web.archive.org/web/20250921062513if_/https://anubis...

aorth
23 days ago
There was an interesting comment in the Lobsters thread about this article https://lobste.rs/s/gig2wt/you_don_t_need_anubis. Basically, Sec-Fetch-* headers are widely available in browsers https://caniuse.com/?search=sec-fetch-dest, so you can detect whether a client that says it is Chrome, Firefox, or Safari really is Chrome, Firefox, or Safari.

This seems to work in Caddy, using a CEL expression:

    @unrealistic-browsers <<CEL
    {header.User-Agent}.matches("(Chrome|Firefox|Safari)")
        && ! ({header.Sec-Fetch-Dest}.matches("^.+$")
                && {header.Sec-Fetch-Mode}.matches("^.+$")
                && {header.Sec-Fetch-Site}.matches("^.+$"))
        CEL

    handle @unrealistic-browsers {
            abort
    }
Maybe there is a better way. And maybe this stops working when all low-effort bots add these headers to their crawlers.
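
The same idea as a plain Node.js sketch, for anyone not on Caddy, and with the same caveat that it stops working once crawlers copy these headers:

    // Reject clients that claim to be a major browser but omit the Sec-Fetch-*
    // headers every current Chrome/Firefox/Safari sends. Sketch only.
    const http = require("node:http");

    http.createServer((req, res) => {
      const ua = req.headers["user-agent"] || "";
      const claimsBrowser = /Chrome|Firefox|Safari/.test(ua);
      const hasSecFetch = ["sec-fetch-dest", "sec-fetch-mode", "sec-fetch-site"]
        .every((h) => (req.headers[h] || "").length > 0);

      if (claimsBrowser && !hasSecFetch) {
        req.destroy(); // roughly the Caddy `abort`
        return;
      }
      res.end("hello\n");
    }).listen(8080);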

BTW if anyone has an invite on Lobsters I would appreciate it. :)

Razengan
25 days ago
How else would I inter my dead and make sure they get to the afterlife?
iamnothere
25 days ago
All the critics here miss the point. Anubis has worked to stop DDoS-level scraping against a number of production sites, especially self-hosted source repos and forums. If it stops working, then either Anubis contributors will come up with a fix, site devs will find their own fix, or the sites under attack will be shut down. It's an arms race in which there is no permanent solution; each escalation will of course be easily bypassed (in theory), until the majority of the attackers find that further adaptations are not worth the additional revenue, or there is no further defense possible.

Anubis isn’t some conspiracy to show you pictures of anime catgirls, it’s a desperate attempt to stave off bot-driven downtime. Many admins who install it do so reluctantly, because obviously it is annoying to have a delay when you access a website. Nobody is doing that for fun.

(There are probably a few people who install it not to protect against scraper DDoS, but due to ideological opposition to AI scrapers. IMHO this is fruitless, as the more intelligent scrapers will find ways around it without calling attention to themselves. Anubis makes almost no sense on a static personal blog.)

gucci-on-fleek
25 days ago
> But it still works, right? People use Anubis because it actually stops LLM bots from scraping their site, so it must work, right?

> Yeah, but only because the LLM bots simply don’t run JavaScript.

I don't think that this is the case, because when Anubis itself switched from a proof-of-work to a different JavaScript-based challenge, my server got overloaded, but switching back to the PoW solution fixed it [0].

I also semi-hate Anubis since it required me to add JS to a website that used none before, but (1) it's the only thing that stopped the bot problem for me, (2) it's really easy to deploy, and (3) very few human visitors are incorrectly blocked by it (unlike Captchas or IP/ASN bans that have really high false-positive rates).

[0]: https://github.com/TecharoHQ/anubis/issues/1121

andersmurphy
25 days ago
So I don't use Cloudflare; I only serve clients that support brotli and have a valid cookie. All the actual content comes down an SSE connection. Haven't had any problems with bots on my $5 VPS.

What I realised recently is that for non-user browsers my demos are effectively zip bombs.

Why?

Because I stream each frame, and each frame is around 180kb uncompressed (compressed frames can be as small as 13 bytes). This is fine, as the user's browser doesn't hold onto the frames.

But, a crawler will hold onto those frames. Very quickly this ends up being a very bad time for them.

Of course there's nothing of value to scrape so mostly pointless. But, I found it entertaining that some scummy crawler is getting nuked by checkboxes [1].

[1]: https://checkboxes.andersmurphy.com
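
A minimal sketch of the SSE pattern being described, not the commenter's implementation; frame size and interval are made up, and there is no brotli negotiation here:

    // Stream "frames" over Server-Sent Events. A browser renders and discards
    // each frame; a naive crawler that buffers the whole response accumulates
    // them indefinitely. Sketch only.
    const http = require("node:http");

    http.createServer((req, res) => {
      res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Connection": "keep-alive",
      });
      const frame = "x".repeat(180_000); // ~180kb per frame, as in the comment
      const timer = setInterval(() => res.write(`data: ${frame}\n\n`), 100);
      req.on("close", () => clearInterval(timer));
    }).listen(8080);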

yumechii
25 days ago
Here are some benchmarks; the TL;DR is that Anubis is not as performant as an optimized client prover running on the same HEDT CPU.

So the "PoW tax" essentially only applies to low-volume requesters who have no incentive to optimize, or to bespoke setups too diverse to optimize at scale.

https://yumechi.jp/en/blog/2025/proof-of-mutex-outspeeding-a...

https://github.com/eternal-flame-AD/pow-buster

The problem was "fixed" but then reverted because the fix had a deadlock bug. (Changelog entry: "Remove bbolt actorify implementation due to causing production issues.")

praptak
25 days ago
There are reasons to choose the slightly annoying solution on purpose, though. I'm thinking of a political statement along the lines of "We have a problem with asshole AI companies, and here's how they make everyone's life slightly worse."

44 more comments available on Hacker News

View full discussion on Hacker News
ID: 45787775 · Type: story · Last synced: 11/20/2025, 6:24:41 PM


© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.