
Messing with scraper bots

238 points
82 comments

Mood: thoughtful

Sentiment: mixed

Category: tech

Key topics: web scraping, bot detection, security

Debate intensity: 60/100

The author experiments with scraper bots and explores ways to detect and deter them on their blog.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 2h after posting

Peak period: 46 comments (Day 1)

Avg per period: 46

Comment distribution: 46 data points, based on 46 loaded comments

Key moments

  1. Story posted: 11/15/2025, 7:38:18 AM (4d ago)
  2. First comment: 11/15/2025, 9:27:42 AM (2h after posting)
  3. Peak activity: 46 comments in Day 1, the hottest window of the conversation
  4. Latest activity: 11/15/2025, 5:20:19 PM (3d ago)


Discussion (82 comments)
Showing 46 of 82 comments
ArcHound
4d ago
1 reply
Neat! Most of the offensive scrapers I've met try to exploit WordPress sites (hence the focus on PHP). They don't want to see the PHP files themselves, but their output.

What you have here is quite close to a honeypot; sadly, I don't see an easy way to counter-abuse such bots. If the attack is not following their script, they move on.

jojobas
3d ago
Yeah, I bet they run a regex on the output, and if there's no admin logon thingie where they can run exploits or stuff credentials, they'll just skip it.

As for the battle of efficiency: generating 4 KB of bullshit PHP is harder than running a regex.
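A minimal sketch of the kind of one-pass check being described, with patterns invented for illustration (real scanners will differ):

    import re

    # Hypothetical markers a scanner might grep for before bothering to
    # attack: login-form fields and WordPress admin paths.
    LOGIN_MARKERS = re.compile(
        r'wp-login|name="log"|name="pwd"|type="password"|/wp-admin/',
        re.IGNORECASE,
    )

    def looks_exploitable(body: str) -> bool:
        # One regex pass over the response; anything that fails is skipped,
        # so generated nonsense costs the server more than it costs the bot.
        return LOGIN_MARKERS.search(body) is not None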

NoiseBert69
4d ago
2 replies
Hm.. why not use small, dumbed-down, self-hosted LLMs to feed the big scrapers bullshit?

I'd sacrifice two CPU cores for this just to make their life awful.

Findecanor
3d ago
You don't need an LLM for that. There is a link in the article to an approach using Markov chains built from real-world books, but then you'd let the scrapers' LLMs reinforce their training on those books rather than on random garbage.

I would make a list of words from each word class, and a list of sentence structures where each item is a word class. Pick a pseudo-random sentence; for each word class in the sentence, pick a pseudo-random word; output; repeat. That should be pretty simple and fast.

I'd think the most important thing though is to add delays to serving the requests. The purpose is to slow the scrapers down, not to induce demand on your garbage well.
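A rough sketch of that word-class approach in Python (word lists and templates invented here), seeded per URL so each page babbles consistently:

    import random

    WORDS = {
        "ADJ":  ["stale", "recursive", "opaque", "idempotent", "brittle"],
        "NOUN": ["server", "pipeline", "token", "kernel", "ledger"],
        "VERB": ["compiles", "rejects", "mirrors", "throttles", "emits"],
    }
    TEMPLATES = [
        ("ADJ", "NOUN", "VERB", "ADJ", "NOUN"),
        ("NOUN", "VERB", "NOUN"),
    ]

    def babble(seed: str, sentences: int = 5) -> str:
        rng = random.Random(seed)  # deterministic per path, varied across paths
        out = []
        for _ in range(sentences):
            template = rng.choice(TEMPLATES)
            words = (rng.choice(WORDS[cls]) for cls in template)
            out.append(" ".join(words).capitalize() + ".")
        return " ".join(out)

    print(babble("/some/requested/path"))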

qezz
3d ago
That's very expensive.
jcynix
4d ago
1 reply
If you control your own Apache server and just want to shortcut to "go away" instead of feeding scrapers, the RewriteEngine is your friend, for example:

      RewriteEngine On

      # Block requests that reference .php anywhere (path, query, or encoded)
      RewriteCond %{REQUEST_URI} (\.php|%2ephp|%2e%70%68%70) [NC,OR]
      RewriteCond %{QUERY_STRING} \.php [NC,OR]
      RewriteCond %{THE_REQUEST} \.php [NC]
      RewriteRule .* - [F,L]
Notes: there's no PHP on my servers, so if someone asks for it, they are one of the "bad boys" IMHO. Your mileage may differ.
palsecam
3d ago
2 replies
I do something quite similar with nginx:

  # Nothing to hack around here, I’m just a teapot:
  location ~* \.(?:php|aspx?|jsp|dll|sql|bak)$ { 
      return 418; 
  }
  error_page 418 /418.html;
No hard block; instead, reply to bots with the funny HTTP 418 code (https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...). That makes filtering logs easier.

Live example: https://FreeSolitaire.win/wp-login.php (NB: /wp-login.php is WordPress login URL, and it’s commonly blindly requested by bots searching for weak WordPress installs.)

kijin
3d ago
1 reply
nginx also has "return 444", a special code that makes it drop the connection altogether. This is quite useful if you don't even want to waste any bandwidth serving an error page. You have an image on your error page, which some crappy bots will download over and over again.
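For illustration, a 444 rule in the same style as the snippet above (the probe paths are examples only):

  # Drop the connection outright for common probe targets; nginx's
  # non-standard 444 sends no status line, headers, or body at all.
  location ~* ^/(wp-login\.php|xmlrpc\.php|\.env)$ {
      return 444;
  }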
palsecam
3d ago
Yes @ 444 (https://http.cat/status/444). That’s indeed the lightest-weight option.

> You have an image on your error page, which some crappy bots will download over and over again.

Most bots won’t download subresources (almost none of them do, actually). The HTML page itself is lean (475 bytes); the image is an Easter egg for humans ;-) Moreover, I use a caching CDN (Cloudflare).

jcynix
3d ago
418? Nice, I'll think about it ;-) I would, in addition, prefer that "402 Payment Required" be put into service for scrapers ...

https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...

s0meON3
4d ago
2 replies
lavela
3d ago
1 reply
"Gzip only provides a compression ratio of a little over 1000: If I want a file that expands to 100 GB, I’ve got to serve a 100 MB asset. Worse, when I tried it, the bots just shrugged it off, with some even coming back for more."

https://maurycyz.com/misc/the_cost_of_trash/#:~:text=throw%2...

LunaSea
3d ago
You could try different compression methods supported by browsers like brotli.

Otherwise you can also chain compression methods like: "Content-Encoding: gzip gzip".
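A small sketch of the chained-encoding idea (sizes chosen arbitrarily); note the standard header syntax separates the codings with a comma:

    import gzip

    # gzip output for highly repetitive input is itself repetitive, so a
    # second pass shrinks it further. A client honoring
    # "Content-Encoding: gzip, gzip" must inflate both layers.
    zeros = b"\0" * (10 * 1024 * 1024)   # 10 MB of zeros
    once = gzip.compress(zeros, 9)       # roughly 10 KB after one pass
    twice = gzip.compress(once, 9)       # smaller still after two
    print(len(once), len(twice))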

renegat0x0
3d ago
Even I, who doesn't know much, implemented a workaround.

I have a web crawler with both a scraping byte limit and a timeout, so zip bombs don't bother me much.

https://github.com/rumca-js/crawler-buddy

I think garbage blabber would be more effective.
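A minimal sketch of that defense (not the linked project's actual code): stream the response and stop once it exceeds a byte budget or a timeout:

    import requests

    MAX_BYTES = 1_000_000  # 1 MB budget per page

    def bounded_fetch(url: str) -> bytes:
        body = b""
        # (connect timeout, read timeout) so a tarpit can't hang the crawler
        with requests.get(url, stream=True, timeout=(5, 10)) as resp:
            for chunk in resp.iter_content(chunk_size=8192):
                body += chunk
                if len(body) > MAX_BYTES:
                    break  # over budget: keep what we have, stop reading
        return body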

re-lre-l
3d ago
4 replies
Don’t get me wrong, but what’s the problem with scrapers? People invest in SEO to become more visible, yet at the same time they fight against “scraper bots.” I’ve always thought the whole point of publicly available information is to be visible. If you want to make money, just put it behind a paywall. Isn’t that the idea?
nrhrjrjrjtntbt
3d ago
1 reply
The old scrapers indexed your site so you may get traffic. This benefits you.

AI scrapers will plagiarise your work and bring you zero traffic.

ProofHouse
3d ago
4 replies
Ya make sure you hold dear that grain of sand on a beach of pre-training data that is used to slightly adjust some embedding weights
boxedemp
3d ago
One Reddit post can get an LLM to recommend putting glue on your pizza. But the takeaway here is to cheese the bots.
jcynix
3d ago
Sand is the world's second most-used natural resource, and sand usable for concrete is even being illegally removed all over the world nowadays.

So to continue your analogy: I made my part of the beach accessible for visitors to enjoy, but certain people think they can carry it away for their own purposes ...

exe34
3d ago
That grain of sand used to bring traffic; now it doesn't. It's pretty much an economic catastrophe for those who relied on it. And it's not free to provide the data to those who will replace you: they abuse your servers while doing it.
throwawa14223
3d ago
I have no reason to help the richest companies on earth adjust weights at a cost to myself.
Dilettante_
3d ago
Did you read TFA?

These scrapers drown people's servers in requests, taking up literally all the resources and driving up costs.

georgefrowny
3d ago
There's a difference between putting information online for your customers or even people in general (e.g. as a hobby), and working in concert with scrapers for greater visibility via search, versus giving that work away, or at a cost, to companies who at best don't care and at worst are competition, see themselves as replacing you, or are otherwise adversarial.

The line is between "I am technically able to do this" and "I am engaging with a system in good faith".

Public parks are just there, and I can technically drive up and dump rubbish in them; if they didn't want me to, they should have installed a gate and sold tickets.

Many scrapers these days are, in that analogy, equivalent to someone running entire fleets of waste-disposal vehicles that all drive to parks to unload, putting strain on park operations and making the parks a less tenable service in general.

saltysalt
3d ago
You are correct, and the hard reality is that content producers don't get to pick and choose who gets to index their public content, because the bad bots don't play by the rules of robots.txt or user-agent strings. In my experience, bad bots do everything they can to identify as regular users: fake IPs, fake agent strings... so it's hard to sort them from regular traffic.
aduwah
3d ago
1 reply
I wonder if the abusive bots could somehow be made to mine some crypto to pay back the bills they cause.
boxedemp
3d ago
You could try to get them to run JavaScript, but I'm sure many of them have countermeasures.
Surac
3d ago
1 reply
I have just cut out IP ranges so they cannot connect. I am blocking the USA, Asia and the Middle East to prevent most malicious accesses.
breppp
3d ago
2 replies
Blocking most of the world's population is one way of reducing malicious traffic
warkdarrior
3d ago
1 reply
Make sure to block your own IP address to minimize the chance of a social engineering attack.
bot403
3d ago
Include 127.0.0.1 as well just in case they get into the server.
gessha
3d ago
If nobody can connect to your site, it’s perfectly secure.
Kiro
3d ago
3 replies
I remember when you used to get scolded on HN for preventing scrapers or bots. "How I access your site is irrelevant".
elashri
3d ago
As an academic, I have a side project that scrapes a couple of academic job sites in my field and then serves them as a static HTML page. It runs via a GitHub Action and makes exactly one request every 24 hours. It is useful for me and a couple of people in my circle. I would consider this fine and within reasonable expectations. Many projects rely on such scenarios, and people share them all the time.

It is completely different if I am hitting a site looking for WordPress vulnerabilities or scraping content every minute for LLM training material.

hollow-moe
3d ago
There's this and that: "How I [i.e. an individual human looking for myself] access your site is irrelevant" versus "How I [i.e. an AI company DDoSing you (which is illegal in some places, btw) to maximize profit while offloading costs onto you] access your site is irrelevant."

When you get paid big bucks to make the world worse for everyone, it's really easy to forget such "little details".

Analemma_
3d ago
To me that's one of the most depressing developments about AI (which is chock-full of depressing developments): that its mere existence is eroding long-held ethics, not even necessarily out of a lack of commitment but out of practical necessity.

The tech people are all turning against scraping, independent artists are now clamoring for brutal IP crackdowns and Disney-style copyright maximalism (which I never would've predicted just 5 years ago, that crowd used to be staunchly against such things), people everywhere want more attestation and elimination of anonymity now that it's effectively free to make a swarm of convincingly-human misinformation agents, etc.

It's making people worse.

VladVladikoff
3d ago
2 replies
This is a fundamental misunderstanding of what those bots are requesting. They aren't parsing those PHP files; they are using their existence for fingerprinting, trying to determine the presence of known vulnerabilities. They probably stop reading immediately after receiving the HTTP response code and discard the remainder of the response packets.
holysoles
3d ago
1 reply
You're right, something like fail2ban or crowdsec would probably be more effective here. Crowdsec has made it apparent to me how much vulnerability probing is done; it's a bit shocking for a low-traffic host.
ajsnigrutin
3d ago
1 reply
And you'd ban the IP, their one-day lease on the VM+IP would expire, and someone else would get the same IP on a new VM and be blocked from everywhere.

It would be more sensible to ban the IP for a few hours, so the bot cools down for a bit and moves on to the next domain.

holysoles
3d ago
I was referring to the rules/patterns provided by crowdsec rather than the distribution of known "bad" IPs through their Central API.

The default ban for traffic detected by your crowdsec instance is 4 hours, so that concern isn't very relevant in that case.

The decisions from the Central API from other users can be quite a bit longer (I see some at ~6 days), but you also don't have to use those if you're worried about that scenario.

mattgreenrocks
3d ago
It would be such a terrible thing if some LLM scrapers were using those responses to learn more about PHP, especially because of that recent paper pointing out it doesn't take that many data points to poison LLMs.
iam-TJ
3d ago
This reminds me of a recent discussion about using a tarpit for A.I. and other scrapers. I've kept a tab alive with a reference to a neat tool and approach called Nepenthes that VERY SLOWLY drip feeds endless generated data into the connection. I've not had an opportunity to experiment with it as yet:

https://zadzmo.org/code/nepenthes/
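For flavor, a toy drip-feed endpoint in Python (an illustration of the idea only, not Nepenthes itself, which is a separate tool):

    import time
    from wsgiref.simple_server import make_server

    def tarpit(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/html")])
        def drip():
            for _ in range(1000):
                time.sleep(2)                # the slowness is the point
                yield b"<p>lorem ipsum</p>"  # a few bytes per flush
        return drip()

    if __name__ == "__main__":
        # Each trapped crawler connection now takes ~33 minutes per page.
        make_server("127.0.0.1", 8080, tarpit).serve_forever()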

BigBalli
3d ago
I always had fail2ban but a while back I wanted to set up something juicier...

My .htaccess diverts suspicious paths (e.g., /.git, /wp-login) to decoy.php and forces downloads of decoy.zip (10 GB), so scanners hitting common "secret" files never touch real content and get stuck downloading a huge dummy archive.

decoy.php mimics whatever sensitive file was requested by endlessly streaming fake config/log/SQL data, keeping bots busy while revealing nothing.
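A hedged sketch of what such a decoy generator could look like (the decoy.php described above is the commenter's own; everything below is invented for illustration):

    import random

    def fake_env_lines(seed: int):
        # Endlessly yield plausible-looking secrets that reveal nothing.
        rng = random.Random(seed)
        keys = ["DB_PASSWORD", "API_KEY", "SECRET_TOKEN", "SMTP_PASS"]
        while True:
            key = rng.choice(keys)
            value = "".join(rng.choices("abcdef0123456789", k=32))
            yield f"{key}={value}\n"

    # Stream these lines into the HTTP response to keep a scanner reading.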

vachina
3d ago
They’re not scraping for php files, they’re probing for known vulns in popular frameworks, and then using them as entry points for pwning.

This is done very efficiently. If you return anything unexpected, they’ll just drop you and move on.

holysoles
3d ago
I wrote a Traefik plugin [1] that controls traffic based on known bad-bot user agents; you can just block them, or even send them to a Markov babbler if you've set one up. I've been using Nepenthes [2].

[1] https://github.com/holysoles/bot-wrangler-traefik-plugin

[2] https://zadzmo.org/code/nepenthes/

simondotau
3d ago
The more things change, the more they stay the same.

About 10-15 years ago, the scourge I was fighting was social media monitoring services, companies paid by big brands to watch sentiment across forums and other online communities. I was running a very popular and completely free (and ad-free) discussion forum in my spare time, and their scraping was irritating for two reasons. First, they were monetising my community when I wasn’t. Second, their crawlers would hit the servers as hard as they could, creating real load issues. I kept having to beg our hosting sponsor for more capacity.

Once I figured out what was happening, I blocked their user agent. Within a week they were scraping with a generic one. I blocked their IP range; a week later they were back on a different range. So I built a filter that would pseudo-randomly[0] inject company names[1] into forum posts. Then any time I re-identified[2] their bot, I enabled that filter for their requests.

The scraping stopped within two days and never came back.

--

[0] Random but deterministic based on post ID, so the injected text stayed consistent.

[1] I collated a list of around 100 major consumer brands, plus every company name the monitoring services proudly listed as clients on their own websites.

[2] This was back around 2009 or so, so things weren't nearly as sophisticated as they are today, both in terms of bots and anti-bot strategies. One of the most effective tools I remember deploying back then was analysis of all HTTP headers. Bots would spoof a browser UA, but almost none would get the full header set right, things like Accept-Encoding or Accept-Language were either absent, or static strings that didn't exactly match what the real browser would ever send.
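A short sketch of the deterministic injection from footnote [0] (brand list invented here): seeding the RNG with the post ID makes the inserted text random across posts but stable for any given post, so it reads as consistent, real content:

    import random

    BRANDS = ["Acme Cola", "Globex Phones", "Initech Insurance"]

    def inject_brands(post_id: int, text: str) -> str:
        rng = random.Random(post_id)  # same post ID -> same injections
        words = text.split()
        for _ in range(max(1, len(words) // 50)):
            pos = rng.randrange(len(words) + 1)
            words.insert(pos, rng.choice(BRANDS))
        return " ".join(words)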

localhostinger
4d ago
Interesting! It's nice to see people experimenting with these, and I wonder if this kind of junk-data generator will become its own product, or at least a feature/integration in existing software. I could see it going there.

36 more comments available on Hacker News

ID: 45935729 · Type: story · Last synced: 11/16/2025, 9:42:57 PM
