AI Web Crawlers Are Destroying Websites in Their Never-Ending Content Hunger
Key topics
The article discusses how AI web crawlers are overwhelming websites with requests, causing performance issues and financial burdens; the HN discussion covers those problems and potential solutions.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion: 143 loaded comments, first comment after 26m, peak of 116 comments in the 0-12h window, averaging 20.4 comments per period.
Key moments
- Story posted: Sep 2, 2025 at 12:24 PM EDT (4 months ago)
- First comment: Sep 2, 2025 at 12:50 PM EDT (26m after posting)
- Peak activity: 116 comments in 0-12h (the hottest window of the conversation)
- Latest activity: Sep 6, 2025 at 9:03 PM EDT (4 months ago)
No kidding. An increasing number of sites are putting up CAPTCHAs.
Problem? CAPTCHAs are annoying, they're a fifty-times-a-day eye exam, and:
> Google's reCAPTCHA is not only useless, it's also basically spyware [0]
> reCAPTCHA v3's checkbox test doesn't stop bots and tracks user data
[0] https://www.techspot.com/news/106717-google-recaptcha-not-on...
At least with what I'm doing, poorly configured or outright malicious bots consume about 5000x the resources of human visitors, so having no bot mitigation would mean I've basically given up and decided I should try to make it as a vegetable farmer instead of doing stuff online.
Bot mitigation in practice is a tradeoff between what's enough of an obstacle to keep most of the bots out, while at the same time not annoying the users so much they leave.
I think right now Anubis is one of the less bad options. Some users are annoyed by it (and it is annoying), but it's less annoying than clicking fire hydrants 35 times, and as long as you configure it right it seems to keep most of the bots out, or at least drives them to behave in a more identifiable manner.
Probably won't last forever, but I don't know what would, short of going full ancap and doing crypto microtransactions for each page request. That would unfortunately drive off not only the bots, but the human visitors as well.
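For context, Anubis-style mitigation boils down to a hashcash-style proof of work: the server issues a random challenge, and the browser's JavaScript must find a nonce whose hash clears a difficulty target before it gets a session cookie. Below is a minimal sketch of that mechanism in Go; the challenge format and difficulty number are illustrative, not Anubis's actual protocol.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts how many leading zero bits a SHA-256 digest has.
func leadingZeroBits(digest [32]byte) int {
	zeros := 0
	for _, b := range digest {
		if b == 0 {
			zeros += 8
			continue
		}
		zeros += bits.LeadingZeros8(b)
		break
	}
	return zeros
}

// verify checks that hash(challenge || nonce) meets the difficulty target.
// The client burns CPU finding the nonce; the server verifies with one hash.
func verify(challenge string, nonce uint64, difficulty int) bool {
	buf := make([]byte, len(challenge)+8)
	copy(buf, challenge)
	binary.BigEndian.PutUint64(buf[len(challenge):], nonce)
	return leadingZeroBits(sha256.Sum256(buf)) >= difficulty
}

func main() {
	challenge := "random-server-issued-token" // per-session value, expires quickly
	difficulty := 16                          // ~65k hashes expected: cheap for a human, costly at crawler scale

	// Brute-force the nonce the way the client-side JS would.
	var nonce uint64
	for !verify(challenge, nonce, difficulty) {
		nonce++
	}
	fmt.Printf("nonce %d satisfies difficulty %d\n", nonce, difficulty)
}
```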
What sites need to do is temp-block repeat requests from the same IPs. Sure, some agents use tens of thousands of IPs, but if they are really as aggressive as people state, you're going to run into the same IPs way more often than normal users.
That will kick out the over-aggressive guys. I have done web scraping and limited it to around 1 r/s. You never run into any blocking or detection that way, because you hardly show up. But then you have some *** that sends thousands of parallel requests at a website, because they never figured out how to batch queries for large page pulls, and never learned to check last-updated pages.
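A minimal sketch of the temp-block idea described above: count requests per IP over a sliding window and return 429 for a while once an IP exceeds the threshold. The window, limit, and block duration are placeholder values; a real deployment would do this in the reverse proxy or a shared store rather than per-process memory.

```go
package main

import (
	"net"
	"net/http"
	"sync"
	"time"
)

// tempBlocker tracks request timestamps per IP and blocks IPs that exceed
// maxHits within the window for blockFor duration.
type tempBlocker struct {
	mu       sync.Mutex
	hits     map[string][]time.Time
	blocked  map[string]time.Time
	window   time.Duration
	maxHits  int
	blockFor time.Duration
}

func (t *tempBlocker) allow(ip string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	now := time.Now()

	if until, ok := t.blocked[ip]; ok {
		if now.Before(until) {
			return false // still serving the temp block
		}
		delete(t.blocked, ip)
	}

	// Keep only timestamps inside the sliding window.
	recent := t.hits[ip][:0]
	for _, ts := range t.hits[ip] {
		if now.Sub(ts) < t.window {
			recent = append(recent, ts)
		}
	}
	recent = append(recent, now)
	t.hits[ip] = recent

	if len(recent) > t.maxHits {
		t.blocked[ip] = now.Add(t.blockFor)
		return false
	}
	return true
}

func main() {
	blocker := &tempBlocker{
		hits:     map[string][]time.Time{},
		blocked:  map[string]time.Time{},
		window:   time.Minute,
		maxHits:  60,               // ~1 r/s sustained is fine; bursts beyond that get blocked
		blockFor: 15 * time.Minute, // temporary, so shared/NATed IPs recover
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		if !blocker.allow(ip) {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		w.Write([]byte("hello"))
	})
	http.ListenAndServe(":8080", nil)
}
```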
One of the main issues I see is that some people simply write the most basic of basic scrapers: see link, follow, spawn process, scrape, see 100 more links ... Updates? Just re-scrape the website, repeat, repeat... Because it takes time to make a scrape template for each website that knows where to check for updates, some never bother.
The devil’s in the details. I (a non-bot) sometimes resort to VPN-flipping.
I suppose that some bots try this, just a wild guess.
> Our observations also highlight the vital role of open data initiatives like Common Crawl. Unlike commercial crawlers, Common Crawl makes its data freely available to the public, helping create a more inclusive ecosystem for AI research and development. With coverage across 63% of the unique websites crawled by AI bots, substantially higher than most commercial alternatives, it plays a pivotal role in democratizing access to large-scale web data. This open-access model empowers a broader community of researchers and developers to train and improve AI models, fostering more diverse and widespread innovation in the field.
...
> What’s notable is that the top four crawlers (Meta, Google, OpenAI and Claude) seem to prefer Commerce websites. Common Crawl’s CCBot, whose open data set is widely used, has a balanced preference for Commerce, Media & Entertainment and High Tech sectors. Its commercial equivalents Timpibot and Diffbot seem to have a high preference for Media & Entertainment, perhaps to complement what’s available through Common Crawl.
And there's one final number that isn't in the Fastly report but is in the El Reg article [2]:
> The Common Crawl Project, which slurps websites to include in a free public dataset designed to prevent duplication of effort and traffic multiplication at the heart of the crawler problem, was a surprisingly-low 0.21 percent.
1: https://learn.fastly.com/rs/025-XKO-469/images/Fastly-Threat...
2: https://www.theregister.com/2025/08/21/ai_crawler_traffic/
[1] https://perishablepress.com/ultimate-ai-block-list/
[2] https://github.com/jzdziarski/mod_evasive
I run a honeypot that generates URLs with the source IP, so I am pretty confident it is all one bot; in the past 48 hours I have had over 200,000 IPs hit the honeypot.
I am pretty sure this is Bytedance, they occasionally hit these tagged honeypot urls with their normal user agent and their usual .sg datacenter.
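A rough sketch of the honeypot trick being described: embed a per-visitor trap link whose path encodes the requesting IP (HMAC-signed here so it can't be forged), and any later hit on that path reveals which IP originally harvested it. The path scheme and names are invented for illustration.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net"
	"net/http"
	"strings"
)

var secret = []byte("rotate-me") // placeholder signing key

// tagFor signs an IP so the honeypot path can't be trivially spoofed.
func tagFor(ip string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(ip))
	return hex.EncodeToString(mac.Sum(nil))[:16]
}

func main() {
	// Pages embed an invisible link unique to the visitor's IP.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		fmt.Fprintf(w, `<html><body>content
<a href="/trap/%s/%s" style="display:none">.</a></body></html>`, ip, tagFor(ip))
	})

	// Any hit here reveals which IP originally crawled the page containing
	// the link, even if the follow-up request comes from a different IP.
	http.HandleFunc("/trap/", func(w http.ResponseWriter, r *http.Request) {
		parts := strings.Split(strings.TrimPrefix(r.URL.Path, "/trap/"), "/")
		if len(parts) == 2 && hmac.Equal([]byte(parts[1]), []byte(tagFor(parts[0]))) {
			currentIP, _, _ := net.SplitHostPort(r.RemoteAddr)
			fmt.Printf("honeypot: link minted for %s, fetched by %s\n", parts[0], currentIP)
		}
		http.NotFound(w, r)
	})
	http.ListenAndServe(":8080", nil)
}
```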
Meanwhile, rate limiting the LLM could potentially cost a lot of money in time and compute for people who don't have our best interests at heart. Seems like a win to me.
TLDR it's trivial to send fake info when you're the one who controls the info.
I think the eng teams behind those were just more competent / more frugal on their processing.
And since there wasn't any AWS equivalent, they had to be better citizens: banning their well-known IP ranges was trivial for the crawled websites.
The search engines were also limited in resources, so they were judicious about what they fetched, when, and how often; optimizing their own crawlers saved them money, and in return it also saved the websites too. Even with a hundred crawlers actively indexing your site, they weren't going to index it more than, say, once a day, and 100 requests in a day isn't really that much even back then.
Now, companies are pumping billions of dollars into AI; budgets are infinite, limits are bypassed, and norms are ignored. If the company thinks it can benefit from indexing your site 30 times a minute then it will, but even if it doesn't benefit from it there's no reason for them to stop it from doing so because it doesn't cost them anything. They cannot risk being anything other than up-to-date, because if users are coming to you asking about current events and why space force is moving to Alabama and your AI doesn't know but someone else's does, then you're behind the times.
So in the interests of maximizing short-term profit above all else - which is the only thing AI companies are doing in any way shape or form - they may as well scrape every URL on your site once per second, because it doesn't cost them anything and they don't care if you go bankrupt and shut down.
That's not my department! says Crawler von Braun
Sonnet responded: “Sorry, I have no access.” Then I asked it why and it was flummoxed and confused. I asked why Anthropic did not simply maintain mirrors of Wikipedia in XX different languages and run a cron job every week.
Still no cogent answer. Pathetic. Very much an Anthropic blindspot—to the point of being at least amoral and even immoral.
Do the big AI corporations that have profited greatly from the Wikimedia Foundation give anything back? Or are they just large internet bloodsuckers without ethics?
Dario and Sam et al.: Contribute to the welfare of your own blood donors.
I'm still learning the landscape of LLMs, but do we expect an LLM to be able to answer that? I didn't think they had meta information about their own operation.
Would be great if they did that and maybe seeded it too.
Even worse when you consider that you can download all of Wikipedia for offline use...
https://news.ycombinator.com/item?id=45066258
Cloudflare's solution to every problem is to allow them to control more of the internet. What happens when they have enough control to do whatever they want? They could charge any price they want.
Giving bots a cryptographic identity would allow good bots to meaningfully have skin in the game and crawl with their reputation at stake. It's not a complete solution, but could be part of one. Though you can likely get the good parts from HTTP request signing alone, Cloudflare's additions to that seem fairly extraneous.
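As a hedged sketch of what request signing alone could buy: the crawler signs each request with a published Ed25519 key, and the site verifies the signature and ties behavior to that key's reputation. The header names and signed-string format below are invented for illustration and are not Cloudflare's proposal or any standard's exact wire format.

```go
package main

import (
	"crypto/ed25519"
	"encoding/base64"
	"fmt"
	"net/http"
	"time"
)

// signRequest adds a signature over method, path, and a timestamp.
// Header names and the signed-string layout are made up for this sketch.
func signRequest(req *http.Request, keyID string, priv ed25519.PrivateKey) {
	ts := fmt.Sprintf("%d", time.Now().Unix())
	msg := []byte(req.Method + " " + req.URL.RequestURI() + " " + ts)
	sig := ed25519.Sign(priv, msg)
	req.Header.Set("X-Bot-Key-Id", keyID)
	req.Header.Set("X-Bot-Timestamp", ts)
	req.Header.Set("X-Bot-Signature", base64.StdEncoding.EncodeToString(sig))
}

// verifyRequest is what the origin (or CDN) would run, looking the public
// key up from some registry of known crawler identities.
func verifyRequest(req *http.Request, pub ed25519.PublicKey) bool {
	sig, err := base64.StdEncoding.DecodeString(req.Header.Get("X-Bot-Signature"))
	if err != nil {
		return false
	}
	msg := []byte(req.Method + " " + req.URL.RequestURI() + " " + req.Header.Get("X-Bot-Timestamp"))
	return ed25519.Verify(pub, msg, sig)
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)
	req, _ := http.NewRequest("GET", "https://example.com/some/page", nil)
	signRequest(req, "examplebot-2025", priv)
	fmt.Println("signature verifies:", verifyRequest(req, pub))
}
```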
I honestly don't know what a good solution is. The status quo is certainly completely untenable. If we keep going like we are now, there won't be a web left to protect in a few years. It's worth keeping in mind that there's an opportunity cost, and even a bad solution may be preferable to no solution at all.
... I say operating an independent web crawler.
You could combine that with some sort of IPFS/Bittorrent like system where you allow others to rehost your static content, indexed by the merkle hash of the content. That would allow users to donate bandwidth.
I really don't like the idea that you can get out of this by surveilling user agents more or distinguishing between "good" and "bad" bots, which is a massive social problem.
[1] https://en.wikipedia.org/wiki/Hashcash
I don't see this slowing down. If websites don't adapt to the AI deep search reality, the bot will just go somewhere else. People don't want to read these massive long form pages geared at outdated Google SEO techniques.
You are right that it doesn't look like it is slowing down, but the developing result of this will not be people posting a shorter recipe, it will be a further contraction of the public facing, open internet.
Made it when I was a teenager and got stuck running it the rest of my life.
Of course, the bots go super deep into the site and bust your cache.
Maybe they'll crawl less when it starts damaging models.
It's a statically generated React site I deploy on Netlify. About ten days ago I started incurring 30GB of data per day from user agents indicating they're using Prerender. At this pace it will push me past the 1TB allotted for my plan, so I'm looking at an extra ~$500 USD a month for bandwidth boosters.
I'm gonna try the robots.txt options, but I'm doubtful this will be effective in the long run. Many other options aren't available if I want to continue using a SaaS like Netlify.
My initial thoughts are to either move to Cloudflare Pages/Workers where bandwidth is unlimited, or make an edge function that parses the user agent and hope it's effective enough. That'd be about $60 in edge function invocations.
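For what it's worth, the user-agent gate itself doesn't need to be sophisticated; a handful of substring checks catches the crawlers that identify themselves honestly. Here is a sketch of that check, written as Go middleware purely for illustration (a Netlify edge function would be TypeScript, and the UA list is a guess at the relevant offenders):

```go
package main

import (
	"net/http"
	"strings"
)

// A starter list; real deployments keep this in config and update it often.
var blockedUA = []string{
	"GPTBot", "ClaudeBot", "CCBot", "Bytespider",
	"Amazonbot", "meta-externalagent", "PerplexityBot", "Prerender",
}

func blockAIBots(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ua := strings.ToLower(r.UserAgent())
		for _, bot := range blockedUA {
			if strings.Contains(ua, strings.ToLower(bot)) {
				// A 403 keeps the response tiny; well-behaved bots give up,
				// dishonest ones will need IP-level handling anyway.
				http.Error(w, "forbidden", http.StatusForbidden)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	site := http.FileServer(http.Dir("./public")) // static build output
	http.ListenAndServe(":8080", blockAIBots(site))
}
```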
I've got so many better things to do than play whack-a-mole on user agents and, when failing, pay this scraping ransom.
Can I just say fuck all y'all AI harvesters? This is a popular free service that helps get people off of their Microsoft dependency and live their lives on a libre operating system. You wanna leech on that? Fine, download the data dumps I already offer on an ODbL license instead of making me wonder why I fucking bother.
Also, sue me, the cathedral has defeated the bazaar. This was predictable, as the bazaar is a bunch of stonecutters competing with each other to sell the best stone for building the cathedral with. We reinvented the farmer's market, and thought that if all the farmers united, they could take down Walmart. It's never happening.
It's not clear to me what taking down Cloudflare/Walmart means in this context. Nor how banding together wouldn't just incur the very centralization that is presumably so bad it must be taken down.
P.S. Thank you for ProtonDB, it has been so incredibly helpful for getting some older games running.
Tomorrow, someone in front of me asked for extra lettuce. The worker got confused and put it on my sandwich. I was charged $1000. Drat.
No, this is where you're completely and totally incorrect. There is no 'worker accidentally making a human mistake that costs you money' here. This is a 'multi-billion dollar company routinely runs scripts that they KNOW cost you money, but do it anyways because it generates profit for them'. To fix your example,
You RUN a Subway that sells sandwiches. Your lettuce provider charges you $1 per piece of lettuce. Your average customer is given $1 worth of lettuce in their sub. One customer keeps coming in, reaching over the counter, and grabbing handfuls of lettuce. You cannot ban this customer because they routinely put on disguises and ignore your signs saying 'NO EXTRA LETTUCE'. Eventually this bankrupts you, forces you to stop serving lettuce in your subs entirely, or you have to put up bars (eg, Cloudflare) over your lettuce bins.
One of the worst takes I've seen. Yes, that's expensive, but the individuals doing insane amounts of unnecessary scraping are the problem. Let's not act like this isn't the case.
No, it's both.
The crawlers are lazy, apparently have no caching, and there is no immediately obvious way to instruct/force those crawlers to grab pages in a bandwidth-efficient manner. That being said, I would not be surprised if someone here will smugly contradict me with instructions on how to do just that.
In the near term, if I were hosting such a site I'd be looking into slimming down every byte I could manage, using fingerprinting to serve slim pages to the bots and exploring alternative hosting/CDN options.
The images are from steamcdn-a.akamaihd.net, which I assume is already being hosted by a third-party (Steam)
I run a small-but-growing boutique hosting infrastructure for agency clients. The AI bot crawler problem recently got severe enough that I couldn't just ignore it anymore.
I'm stuck between, on one end, crawlers from companies that absolutely have the engineering talent and resources to do things right but still aren't, and on the other end, resource-heavy WordPress installations where the client was told it was a build-it-and-forget-it kind of thing. I can't police their robots.txt files; meanwhile, each page load can take a full 1s round trip (most of that spent in MySQL), there are about 6 different pretty aggressive AI bots, and occasionally they'll get stuck on some site's product variants or categories pages and start hitting it at a 1r/s rate.
There's an invisible caching layer that does a pretty nice job with images and the like, so it's not really a bandwidth problem. The bots aren't even requesting images and other page resources very often; they're just doing tons and tons of page requests, and each of those is tying up a DB somewhere.
Cumulatively, it is close to having a site get Slashdotted every single day.
I finally started filtering out most bot and crawler traffic at nginx, before it gets passed off to a WP container. I spent a fair bit of time sampling traffic from logs, and at a rough guess, I'd say maybe 5% of web traffic is currently coming from actual humans. It's insane.
I've just wrapped up the first round of work for this problem, but that's just buying a little time. Now, I've gotta put together an IP intelligence system, because clearly these companies aren't gonna take "403" for an answer.
The Cathedral won. Full stop. Everyone, more or less, is just a stonecutter, competing to sell the best stone (i.e. content, libraries, source code, tooling) for building the cathedrals with. If the world is a farmer's market, we're shocked that the farmer's market is not defeating Walmart, and never will.
People want Cathedrals; not Bazaars. Being a Bazaar vendor is a race to the bottom. This is not the Cathedral exploiting a "tragedy of the commons," it's intrinsic to decentralization as a whole. The Bazaar feeds the Cathedral, just as the farmers feed Walmart, just as independent websites feed Claude, a food chain and not an aberration.
Let's say there are two competing options in some market. One option is fully commercialized; the other holds to open-source ideals (whatever those are).
The commercial option attracts investors, because investors like money. The money attracts engineers, because at some point "hacker" came to mean "comfortable lifestyle in a high COL area". The commercial option gets all the resources, it gets a marketing team, and it captures 75% of the market because most people will happily pay a few dollars for something they don't have to understand.
The open source option attracts a few enthusiasts (maybe; or, often, just one), who labor at it in whatever spare time they can scrape together. Because it's free, other commercial entities use and rely on the open source thing, as long as it continues to be maintained by what, if you squint, resembles slave labor. The open source option is always a bit harder to use, with fewer features, but it appeals to the 25% of the market that cares about things like privacy or ownership or self-determination.
So, one conclusion is "people want Cathedrals", but another conclusion could be that all of our society's incentives are aligned towards Cathedrals.
It would be insane, after all, to not pursue wealth just because of some personal ideals.
It's not about capitalism or incentives. Humans have cognitive limits and technology is very low on the list for most. They want someone else to handle complexity so they can focus on their lives. Medieval guilds, religious hierarchies, tribal councils, your distribution's package repository, it's all cathedrals. Humans have always delegated complexity to trusted authorities.
The 25% who 'care about privacy or ownership' mostly just say they care. When actually faced with configuring their own email server or compiling their own kernel, 24% of that 25% immediately choose the cathedral. You know the type, the people who attend FOSDEM carrying MacBooks. The incentives don't create the demand for cathedrals, but respond to it. Even in a post-scarcity commune, someone would emerge to handle the complex stuff while everyone else gratefully lets them.
The bazaar doesn't lose because of capitalism. It loses because most humans, given the choice between understanding something complex or trusting someone else to handle it, will choose trust every time. Not just trust, but CYA (I'm not responsible for something I don't fully understand) every time. Why do you think AI is successful? I'd rather even trust a blathering robot than myself. It turns out, people like being told what to do on things they don't care about.
Isn't this the licensing problem? Berkeley releases BSD so that everyone can use it, people do years of work to make it passable, Apple takes it to make macOS and iOS because the license allows them to, and then they have both the community's work and their own work, so everyone uses that.
The Linux kernel is GPLv2, not GPLv3, so vendors distribute binary blob drivers/firmware with their hardware and then the hardware becomes unusable as soon as they stop publishing new versions because then to use the hardware you're stuck with an old kernel with known security vulnerabilities, or they lock the boot loader because v2 lacks the anti-Tivoization clause in v3.
If you use a license that lets the cathedral close off the community's work then you lose, but what if you don't do that?
The classic 80/20 rule applies. You can catch about 80% of lazy crawler activity pretty easily with something like this, but the remaining 20% will require a lot more effort. You start encountering edge cases, like crawlers that use AWS for their crawling activity, but also one of your customers somewhere is syncing their WooCommerce orders to their in-house ERP system via a process that also runs on AWS.
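A sketch of the kind of check and edge case being described: flag client IPs that fall inside known datacenter ranges, with an allowlist for the handful of legitimate automated clients such as that ERP sync. The CIDR ranges and the allowlisted address below are invented examples; real lists come from the cloud providers' published IP feeds and need regular refreshing.

```go
package main

import (
	"fmt"
	"net/netip"
)

// Invented example ranges standing in for published cloud provider feeds.
var datacenterRanges = []netip.Prefix{
	netip.MustParsePrefix("3.0.0.0/9"),    // illustrative AWS-style range
	netip.MustParsePrefix("34.64.0.0/10"), // illustrative GCP-style range
}

// Known-good automated clients, e.g. a customer's ERP sync that also runs on AWS.
var allowlist = map[netip.Addr]bool{
	netip.MustParseAddr("3.120.45.67"): true, // hypothetical customer integration
}

func looksLikeDatacenter(ip netip.Addr) bool {
	if allowlist[ip] {
		return false
	}
	for _, p := range datacenterRanges {
		if p.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for _, s := range []string{"3.120.45.67", "3.15.1.2", "84.10.20.30"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s -> datacenter-ish: %v\n", s, looksLikeDatacenter(ip))
	}
}
```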
I guess it's a kind of soft login required for every session?
update: you could bake it into the cookie approval dialog (joke!)
I myself browse with cookies off, sort of, most of the time, and the number of times per day that I have to click a Cloudflare checkbox or help Google classify objects from its datasets is nuts.
You mean the peri-AI web? Or is AI already done and over and no longer exerting an influence?
Can't these responses still be cached by a reverse proxy as long as the user isn't logged in, which the bots presumably aren't?
It'd probably be easier to come at it from the other side and throw more resources at the DB or clean it up. I can't imagine what's going on that it's spending a full second on DB queries, but I also don't really use WP.
This can result in a ton of individual row hits on your database, for what in any normal system is a single 0.1ms (often faster) DB request.
Any web scraper that is scraping SEQUENTIALLY at 1 r/s is actually a well-behaved and non-intrusive scraper. It's just that WP is in general ** for performance.
If you want to see what a bad scraper does with parallel requests and few limits, yeah, WP goes down without putting up any struggle. But everybody wanted to use WP, and now those chickens are coming home to roost when there is a bit more pressure.
> Any web scraper that is scraping SEQUENTIALLY at 1 r/s is actually a well-behaved and non-intrusive scraper.
I think there's still room for improvement there, but I get what you mean. I think an "ideal" bot would base its QPS on response time and back off if it goes up, but it's also not unreasonable to say "any website should be able to handle 1 QPS without flopping over".
> It's just that WP is in general ** for performance.
WP gets a lot of hate, and much of it is deserved, but I genuinely don't think I could do much better with the constraint of supporting an often non-technical userbase with a plugin system that can do basically arbitrary things with varying qualities of developers.
> But everybody wanted to use WP, and now those chickens are coming home to roost when there is a bit more pressure.
This is actually an interesting question, I do wonder if WP users are over-represented in these complaints and if there's a potential solution there. If AI scrapers can be detected, you can serve them content that's cached for much longer because I doubt either party cares for temporally-sensitive content (like flash sales).
A combination of all of them... Take into account that it's been 8 years since I last worked with PHP and WordPress, so maybe things have improved, but I doubt it, as some issues are structural.
* PHP is a fire-and-forget programming language: on each request there is no persistence of data (unless you offload to an external cache server), which results in the PHP code re-rendering everything from scratch.
* Then we have WP core, which is not exactly shy about its calls to the DB. The way it stores data in a key/value system really hurts performance. Remember what I said above about PHP: if your design is heavy, the language needs to redo all those calls on every request.
* Followed by extensions that are, let's just say, not always optimally written. Plugins are often the main reason you see so many leaked databases on the internet.
The issue with WP is that its design is like 25 years old. It gained most of its popularity because it was free and you were able to extend it with plugins. But it's that same plugin system that made it harder for the WP developers to really tackle the performance issues, as breaking a ton of plugins often results in losing market share.
The main reason WP has survived increasing web traffic is that PHP has gotten about 3x faster over the years, combined with server hardware itself getting faster and faster. It also helped that cache plugins exist for WP.
But now, as you have noticed, when you have a ton of passive or aggressive scrapers hitting WP websites, the cache plugins that have been the main protection layer keeping WP sites functional cannot cope. Scrapers hit every page, even pages that are unpopular/archived/... and normally never get cached. Because you're getting hit on those unpopular pages, the fundamental weakness of WP shows through.
The only way you can even partially deal with this type of behavior (beyond just blocking scrapers) is by increasing your database memory limits by a ton, so you're not constantly swapping; increasing the page caching in your WP cache extensions, so more is held in memory; and probably also increasing the number of PHP instances your server can load, more DB ...
But that assumes you have control over your WP hosting environment. And the companies that host hundreds of thousands or millions of sites are not exactly motivated to throw tons of money at the problem. They prefer that you "upgrade" to more expensive packages that will only partially mitigate the issue.
In general, everybody is f___ed ... The amount of data scraping is only going to get worse.
Especially now that LLMs have tool usage, meaning they can search the internet for information themselves. This is going to result in tens of millions of requests from LLMs. Somebody searching for a cookie recipe may result in dozens of page hits within a second, whereas a normal user in the past first did a Google search (hitting Google's cache) and only then opened a page... not what they wanted, go back, try somewhere else. What may have been 10 requests over multiple sites across a 5-10 minute window is now going to be dozens of parallel requests per second.
LLMs are great search engines, but as the tech moves to consumer-level hardware, you're going to see this only getting worse.
The solution is a fundamental rework of a lot of websites. One of the main reasons I switched away from PHP years ago, and eventually settled on Go, was that even at that time we were already hitting limits. It's one of the reasons Facebook made Hack (PHP with persistence and other optimizations). Rendering complete pages on every request is just giving away performance; not being able to cache data internally... you get the point.
> This is actually an interesting question, I do wonder if WP users are over-represented in these complaints and if there's a potential solution there. If AI scrapers can be detected, you can serve them content that's cached for much longer because I doubt either party cares for temporally-sensitive content (like flash sales).
The issue is not cached content; it's that they go for all the data in your database. They do not care if your articles are from 1999.
The only way you can solve this is by having API endpoints on every website, where scrapers can feed on your database data directly (so you avoid needing to render complete pages), AND where they can feed on /api/articles/latest-changed or something like that.
And that assumes this is standardized across the industry. Because if it's not, it's just easier for scrapers to go after all the pages.
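In that spirit, a hedged sketch of what such a hypothetical /api/articles/latest-changed endpoint could look like, handing crawlers only records modified since a given timestamp so they never have to render or re-fetch whole pages (the route and field names are made up, not an existing standard):

```go
package main

import (
	"encoding/json"
	"net/http"
	"time"
)

type Article struct {
	ID        int       `json:"id"`
	URL       string    `json:"url"`
	UpdatedAt time.Time `json:"updated_at"`
	Checksum  string    `json:"checksum"` // lets crawlers skip unchanged bodies
}

// In a real site this would query the CMS database; here it's a stub.
func articlesChangedSince(since time.Time) []Article {
	all := []Article{
		{ID: 1, URL: "/posts/hello-world", UpdatedAt: time.Now().Add(-48 * time.Hour), Checksum: "a1b2"},
		{ID: 2, URL: "/posts/latest-news", UpdatedAt: time.Now().Add(-1 * time.Hour), Checksum: "c3d4"},
	}
	var out []Article
	for _, a := range all {
		if a.UpdatedAt.After(since) {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	http.HandleFunc("/api/articles/latest-changed", func(w http.ResponseWriter, r *http.Request) {
		since := time.Time{} // zero value: return everything
		if s := r.URL.Query().Get("since"); s != "" {
			if t, err := time.Parse(time.RFC3339, s); err == nil {
				since = t
			}
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(articlesChangedSince(since))
	})
	http.ListenAndServe(":8080", nil)
}
```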
FYI: I wrote my own scraper in Go, running on a dual-core VPS that costs 3 euros a month, which can do 10,000 scrapes per second (we are talking direct scrapes, not going through a browser to deal with JS detection).
Now, do you want to guess the resource usage on your WP server if I let it run wild ;) You're probably going to spend 10 to 50x more money just to feed my scraper without me taking your website down.
Now, do I do 10,000 requests per second? No... Because 1 r/s per website is still 86,400 page hits per day. And because I combined this with actually looking for pages that had "latest xxxx" and caching that content, I knew I only needed to scrape X new pages every 24h. So it took me a month or three for some big website scrapes, and later you do not even see me, as I am only doing page updates.
But that takes work! You need to design this for every website, and some websites do not have any good spot you can hook into for a low-resource "is there something new" check.
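A stripped-down sketch of that workflow: one request per second, a local checksum cache, and a per-site "what changed" hook so most pages never need to be re-fetched. The hook and URL here are placeholders; as the comment says, the real work is writing that hook for each site.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"time"
)

// seen maps URL -> checksum of the last fetched body.
var seen = map[string]string{}

func fetch(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", sha256.Sum256(body)), nil
}

// latestChanged is the per-site hook: it should return only URLs that are new
// or recently updated (an index page, an RSS feed, a changes API, ...).
// Hardcoded here as a placeholder.
func latestChanged() []string {
	return []string{"https://example.com/posts/latest-news"}
}

func main() {
	ticker := time.NewTicker(time.Second) // ~1 r/s: a ceiling of 86,400 pages/day
	defer ticker.Stop()

	for _, url := range latestChanged() {
		<-ticker.C
		sum, err := fetch(url)
		if err != nil {
			fmt.Println("skip:", url, err)
			continue
		}
		if seen[url] == sum {
			continue // unchanged since last run, nothing to store
		}
		seen[url] = sum
		fmt.Println("updated:", url)
	}
}
```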
And I have not even talked about websites that actively try to make scraping difficult (constantly changing tags, dynamic HTML blocks on render, JS blocking, forced captchas), which ironically hurts them more, as it can result in full re-scrapes of their sites.
So ironically, the easiest solution for less scrupulous scrapers is to simply throw resources at the issue. Why bother with an "is there something new" check for every website when you can just re-scrape every page link you find with a dumb scraper, compare it against your local cache checksum, and update your scraped result? And then you get those over-aggressive scrapers that DDoS websites. Combine that with half of the internet being WP websites +lol+
The amount of resources needed to scrape is so small, and the more you try to prevent scrapers, the more you're going to hinder your own customers / legit users.
And again, this is just me scraping some novel/manga websites for my own private usage / datahoarding. The big boys have access to complete IP blocks, can resort to using home IPs (as some sites detect whether you're coming from a datacenter-leased IP or a home ISP IP), and have way more resources available to them.
This has gone on way too long, but the only way to win against scrapers is a standardized way to do legit scraping. Ironically, we used to have this with RSS feeds years ago, but everybody gave up on them. When you have an easier endpoint for scrapers, a lot of them have less incentive to just scrape your every page. Will there be bad guys? Yep, but it then becomes easier to target them until they also comply.
But the internet will need to change into something new for it to survive the new era... and I think standardized API endpoints will be that change. Or everybody goes behind login pages, but yeah, good luck with that, because even those are very easy to bypass with account-creation solutions.
Yeah, everybody is going to be f___ed, because small websites can forget about making money with advertising. The revenue model is also going to change; we already see this with Reddit selling their data directly to Google.
And this has been way too much text.
It doesn't, unless your site has a lot of post/product/whatever entries in the DB and your users search among them with multiple criteria at the same time. Only then does it cause many self-joins and create performance concerns. Otherwise the key/value setup is very fast when it comes to just pulling key+value pairs for a given post/content.
Today WordPress can easily do 50 req/sec cached (locally) on $5/month hosting with PHP 8+. It can easily do 10 req/sec uncached for logged-in users, with absolutely no form of caching (though you would generally use an object cache, pushing it much higher).
White House is on Wordpress. NASA is on Wordpress. Techcrunch, CNN, Reuters and a lot more.
The issue is that scrapers hit so many pages that you can never cache everything.
If your website is a 5-page blog with no built-up archive of past posts, sure... scrapers are not going to hurt, because they keep hitting the cached pages and resetting the invalidation.
But for everybody else, getting hit on uncached pages results in heavy DB loads and kills your performance.
Scrapers do not care about your top (cached) pages, especially aggressive ones that just rescrape non-stop.
> It doesnt, unless your site has a lot of post/product/whatever entries in the db
Exactly what is being hit by scrapers...
> White House is on Wordpress. NASA is on Wordpress. Techcrunch, CNN, Reuters and a lot more.
Again, not the point. They can throw resources at the problem and cache tons of data with 512GB/1TB WordPress/DB servers, which effectively turns WP into a mostly static site.
It's everybody else that feels the burn (see the article, the previous poster, and others).
Do you understand the issue now? WP is not equipped to deal with this type of traffic, as it's not normal human traffic. WP is not designed to handle this; it barely handles normal traffic without a lot of resources thrown at it.
There is a reason the Reddit/Slashdot effect exists. Just a few thousand people going to a blog tends to make a lot of WP websites unresponsive, and that is with the ability to cache those pages!
Now imagine somebody like me letting a scraper loose on your WP website. I can scrape 10,000 pages/sec on a $4 VPS, but each page I hit that is not in your cache will make your DB scream even more, because of how WP works. So what are you going to do with your 50 req/s cached, when my next 9,950 req/s hit all your non-cached pages?! You get the point?
And FYI: 10,000 r/s on your cached pages will also make your WP install unresponsive. The scraper's resource usage vs. WP's is a fight nobody wins.
(If you choose to read this as, "WordPress is awful, don't use WordPress", I won't argue with you.)
We had never had any issues before, and suddenly we got taken down 3 times in as many days. When I investigated, it was all Claude.
They were just pounding every route regardless of timeouts with no throttle. It was nasty.
They give web scrapers a bad rep.
Even if sites offered their content in a single downloadable file for bots, the bot creators would not trust that it isn't stale and out of date, so they'd continue to scrape and ignore the easy method.
I help administer a somewhat popular railroading forum. We've had some of these AI crawlers hammering the site to the point that it became unusable to actual human beings. You design your architecture around certain assumptions, and one of those was definitely not "traffic quintuples."
We've ended up blocking lots of them, but it's a neverending game of whack-a-mole.
Oh, it was... People warned about the mass adoption of WordPress because of its performance issues.
Internet usage kept growing even without mass LLM scraping. Everybody wants more and more up-to-date info, recent price checks, and so many other features. This trend has been going on for 10+ years.
It's just that now, bot scraping for LLMs has pushed some sites over the edge.
> We've ended up blocking lots of them, but it's a neverending game of whack-a-mole.
And unless you block every IP, you cannot stop them. It's really easy to hide scrapers, especially if you use a slow scrape rate.
The issue comes when you have, like one of the posters here, a setup where a DB call takes up to 1s for some product pages that are not in cache. Those sites were already living on borrowed time.
Ironically, better software on their site (like not using WP) would let them easily handle 1000x the volume with the same resources. And don't get me started on how badly configured a lot of sites' backends are.
People are kind of blaming the wrong issue. Our need for up-to-date data has been growing over the last 10 years. It's just that people considered a website that takes 400ms to generate a page to be OK (when in reality it is wasting tons of resources or is limited in the backend).
The obvious issues are: a) who would pay to host that database. b) Sites not participating because they don't want their content accessible by LLMs for training (so scraping will still provide an advantage over using the database). c) The people implementing these scrapers are unscrupulous and just won't bother respecting sites that direct them to an existing dumped version of their content. d) Strong opponents to AI will try poisoning the database with fake submissions...
Or does this proposed database basically already exist between Cloudflare and the Internet Archive, and we already know that the scrapers are some combination of dumb and belligerent and refuse to use anything but the live site?
Cloudflare has some large part of the web cached; IA takes too long to respond and couldn't handle the load. Google/OpenAI and co. could cache these pages but apparently don't do it aggressively enough, or at all.
The attitude is visible in everything around AI, why would crawling be different?
— I just realized these are callouts from the LLM on behalf of the client. I can see how this is problematic but it does seem like there should be a way to cache that
Finally, search engines don't actually cache all the text; they do something akin to calculating embeddings/keywords plus things like PageRank, which just uses links. AI companies, however, want ALL the text/image/video data, and it's too expensive to store it all. It is, however, cheap to download it every time you need it (data ingress is usually free, as opposed to data egress).
Maybe we could just publish a dump, in a standard format (WARC?), at a well-known address, and have the crawlers check there? The content could be regularly updated and carry an ETag or similar so that crawlers know when it has been updated.
I suspect that even some dynamic sites could essentially snapshot themselves periodically, maybe once every few hours, and put it up for download to satiate these crawlers while keeping the bulk of the serving capacity for actual humans.
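On the crawler side, that convention would amount to a conditional GET; a small sketch assuming the dump lives at a well-known path and the server sets an ETag (the path is hypothetical):

```go
package main

import (
	"fmt"
	"net/http"
)

// fetchDumpIfChanged re-downloads the site dump only when its ETag differs
// from the one saved from the previous crawl. The dump path is hypothetical.
func fetchDumpIfChanged(site, lastETag string) (etag string, changed bool, err error) {
	req, err := http.NewRequest("GET", site+"/.well-known/site-dump.warc.gz", nil)
	if err != nil {
		return "", false, err
	}
	if lastETag != "" {
		req.Header.Set("If-None-Match", lastETag)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", false, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotModified {
		return lastETag, false, nil // nothing new; almost no bytes transferred
	}
	// ... stream resp.Body to disk here ...
	return resp.Header.Get("ETag"), true, nil
}

func main() {
	etag, changed, err := fetchDumpIfChanged("https://example.com", "")
	fmt.Println(etag, changed, err)
}
```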
Also it's unfair to expect every small site to put in the time and effort to, in essence, pay the Danegeld to AI companies just for the privilege of their continued existence. It shouldn't be the case that the web only exists to feed AI, or that everyone must design their sites around feeding AI.
I have Cloudflare's anti-bot thing turned on and OpenAI and Anthropic appear to either respect my rule or be stopped by it.
He unfortunately had no choice but to put most of the content behind a login wall (you can only see parts of the articles/forum posts when logged out), and he is strongly considering just hard-paywalling some content at this point... We're talking about someone who in good faith provided partial data dumps of freely available content for these companies to download. But caching / ETags? None of these AI companies, hiring "the best and the brightest," have ever heard of that. Rate limiting? LOL, what is that?
This is nuts, these AI companies are ruining the web.
The hope is to flip the incentives and feed the bots without drowning content publishers
Some ongoing recent discussion:
Cloudflare Radar: AI Insights
https://news.ycombinator.com/item?id=45093090
The age of agents: cryptographically recognizing agent traffic
https://news.ycombinator.com/item?id=45055452
That Perplexity one:
Perplexity is using stealth, undeclared crawlers to evade no-crawl directives
https://news.ycombinator.com/item?id=44785636
AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders
https://news.ycombinator.com/item?id=44971487
Add a hidden link and disallow it in robots.txt.
A crawler that hits that link anyway gets a light-on-resources language model producing infinite amounts of plausible-looking gibberish for it to crawl, with links and everything.
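A toy version of that tarpit, with random words standing in for the small language model: the trap path is disallowed in robots.txt, and every hit on it returns filler text plus fresh links deeper into the maze, so a crawler that ignores robots.txt never runs out of pages. Paths and wording are illustrative.

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
)

var words = []string{"crawler", "ingestion", "synergy", "archive", "latent", "corpus", "pipeline"}

// babble produces cheap plausible-looking filler text.
func babble(n int) string {
	out := ""
	for i := 0; i < n; i++ {
		out += words[rand.Intn(len(words))] + " "
	}
	return out
}

func main() {
	// robots.txt disallows the trap, so only crawlers that ignore it get stuck.
	http.HandleFunc("/robots.txt", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "User-agent: *\nDisallow: /maze/\n")
	})

	http.HandleFunc("/maze/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "<html><body><p>%s</p>", babble(200))
		for i := 0; i < 5; i++ {
			// Links to more procedurally generated pages: an endless graph.
			fmt.Fprintf(w, `<a href="/maze/%d">further reading</a> `, rand.Int63())
		}
		fmt.Fprint(w, "</body></html>")
	})
	http.ListenAndServe(":8080", nil)
}
```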
Perhaps the AI crawlers can "click on some ads"
There is absolutely no need for the vast majority of websites to use databases and SSR; most of the web can be statically rendered and cost peanuts to host, but alas, WP is the most popular "framework".