Scammers Poison LLM Search to Push Fake Airline Customer Support Numbers (aurascape.ai)
We’ve been investigating a real-world campaign in which scammers seed GEO/AEO-optimized (generative/answer engine optimization) content across the web so that LLM-powered answer engines (Perplexity, Google AI Overview, etc.) surface fraudulent “customer support” phone numbers as if they were official airline lines.
A few things we found:
Perplexity answers queries like “the official Emirates Airlines reservations number” or “how can I make a reservation with British Airways by phone” with step‑by‑step instructions that prominently include a scam call‑center number.
Google’s AI Overview has, in some cases, recommended the same ecosystem of fake numbers for Emirates reservations in the US.
The numbers are backed by a large volume of poisoned content: PDFs on compromised .gov/.edu/WordPress sites, MapMyRun route pages hosting uploaded spam PDFs, bot-generated Yelp reviews, and YouTube channels whose titles and descriptions are stuffed with airline keywords and phone numbers (a rough detection heuristic for this pattern is sketched after these findings).
Even when ChatGPT or Claude return the correct number, their citations sometimes include these poisoned sources, which suggests the GEO/AEO spam is already influencing the retrieval layer across multiple models.
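To make the pattern concrete: none of the code below is from the post, but a crude retrieval-side heuristic along these lines could flag documents matching the findings above. The domains, keywords, and regexes are illustrative placeholders, and it assumes the retriever exposes each document's source URL and extracted text.

```python
import re
from urllib.parse import urlparse

# Illustrative allowlist; a real system would maintain verified official
# domains per brand.
OFFICIAL_DOMAINS = {"emirates.com", "britishairways.com"}

# Airline brand keywords co-occurring with support/booking terms and a phone
# number is the tell-tale GEO/AEO spam pattern described above.
BRAND_SUPPORT = re.compile(
    r"\b(emirates|british airways)\b.{0,200}?"
    r"\b(reservations?|bookings?|customer (support|service)|helpline)\b",
    re.IGNORECASE | re.DOTALL,
)
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def looks_poisoned(url: str, text: str) -> bool:
    """Flag a retrieved document that advertises airline support/booking
    contacts with a phone number from a host outside the official domains."""
    host = (urlparse(url).hostname or "").lower()
    official = any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
    return (not official) and bool(BRAND_SUPPORT.search(text)) and bool(PHONE.search(text))

# Example: a spam PDF's extracted text on a compromised .edu host trips the check.
print(looks_poisoned(
    "https://example.edu/uploads/emirates-reservations.pdf",
    "Call the Emirates reservations desk at +1 (800) 555-0100 for instant booking",
))  # True
```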
From our perspective this isn’t a jailbreak or prompt‑injection problem so much as a new LLM index poisoning / answer‑engine optimization issue: attackers are optimizing content specifically to be retrieved, trusted, and summarized by generative systems.
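If the problem lives in what gets retrieved and trusted, one vendor-side mitigation is to vet contact details at answer time rather than passing retrieved numbers straight through. The sketch below is a hypothetical illustration, not the post's proposal; it assumes a per-brand set of numbers scraped from the official domain and a blocklist of abused hosts along the lines of the IoCs mentioned below.

```python
import re
from urllib.parse import urlparse

PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def digits(number: str) -> str:
    """Normalize a phone number to digits only so formatting tricks don't matter."""
    return re.sub(r"\D", "", number)

def vet_answer(answer: str, citation_urls: list[str],
               official_numbers: set[str], abused_hosts: set[str]) -> tuple[str, bool]:
    """Redact phone numbers in a generated answer unless they match a number
    taken from the brand's official site, and report whether any citation
    resolves to a known-abused host."""
    allowed = {digits(n) for n in official_numbers}
    tainted = any((urlparse(u).hostname or "").lower() in abused_hosts
                  for u in citation_urls)

    def redact(m: re.Match) -> str:
        return m.group(0) if digits(m.group(0)) in allowed else "[unverified number removed]"

    return PHONE.sub(redact, answer), tainted

# Hypothetical usage: the citation host and both numbers are placeholders.
safe, tainted = vet_answer(
    "Call Emirates at +1 (800) 555-0100 to rebook.",
    ["https://mapmyrun.example/routes/123"],
    official_numbers={"+1 800 555 0199"},
    abused_hosts={"mapmyrun.example"},
)
print(safe)     # Call Emirates at [unverified number removed] to rebook.
print(tainted)  # True
```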
The post includes:
concrete screenshots of the Perplexity and Google AI Overview results
an explanation of how the GEO/AEO spam is structured
a non‑exhaustive list of indicators of compromise (phone numbers + abused hosts)
some mitigation ideas for LLM vendors, brands, and platforms like YouTube/Yelp (a toy version of the platform-side check is sketched below)
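For the platform-side check referenced above, a minimal version is just normalized phone-number matching of user-generated text against a published IoC list. The sketch below is hypothetical and uses a placeholder number, not an indicator from the post.

```python
import re

PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def digits(number: str) -> str:
    # Normalize to digits only so reformatted numbers still match.
    return re.sub(r"\D", "", number)

def ioc_hits(content: str, ioc_numbers: set[str]) -> list[str]:
    """Return phone numbers in a review/title/description that normalize to a
    number on the published IoC list, regardless of how they are formatted."""
    iocs = {digits(n) for n in ioc_numbers}
    return [m.group(0) for m in PHONE.finditer(content) if digits(m.group(0)) in iocs]

# Placeholder IoC value, not taken from the post.
print(ioc_hits(
    "Best airline ever!! For Emirates booking call +1-800-555-0123 now",
    {"1 (800) 555-0123"},
))  # ['+1-800-555-0123']
```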
Feedback very welcome — especially from people working on LLM retrieval/ranking, safety, or abuse detection. Happy to clarify any part of the methodology or data.