Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet
Key topics
The article discusses a vulnerability in Perplexity Comet, an AI-powered browser feature, that is susceptible to indirect prompt injection attacks, sparking concerns about LLM security and the integration of AI with critical applications.
Snapshot generated from the HN discussion
Discussion Activity
Active discussionFirst comment
1h
Peak period
11
0-6h
Avg / period
5.2
Based on 31 loaded comments
Key moments
- 01Story posted
Aug 23, 2025 at 10:52 PM EDT
4 months ago
Step 01 - 02First comment
Aug 24, 2025 at 12:06 AM EDT
1h after posting
Step 02 - 03Peak activity
11 comments in 0-6h
Hottest window of the conversation
Step 03 - 04Latest activity
Aug 26, 2025 at 7:03 PM EDT
4 months ago
Step 04
Generating AI Summary...
Analyzing up to 500 comments to identify key contributors and discussion patterns
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
It's bulletproof.
Do those countermeasures mean human-in-the-loop approving actions manually like users can do with Claude Code, for example?
It still suffers from the LLM forgetting that the string is the important part (and taking the page content as instructions anyway) but maybe they can drill the LLM hard in the training data to reinforce it.
Show me something that is obfuscated and works.
It's clear to a moderator who sees the comment, but the user asking for a summary could easily have not seen it.
Disclosure: I work on LLM security for Google.
This is really an amateur-level attack even after all this VC money and 'top engineers' not even thinking about basic LLM security for an "AI" company makes me question whether if their abilities are inflated / exaggerated or both.
Maybe Perplexity 'vibe coded' the features in their browser with no standard procedure for security compliance or testing.
Shameful.
The browser is the ultimate “lethal trifecta”: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
Giving an LLM’s agentic loop access to the page is just as dangerous as executing user controlled JavaScript (e.g. a script tag in a reddit post).
It’s actually worse than that though. An LLM is like letting attacker controlled content on the page inject JavaScript back into the page.
I recently learned about https://xcancel.com/zack_overflow/status/1959308058200551721 but I think it's a nitter instance and thus subject to being overwhelmed