A free tool that stuns LLMs with thousands of invisible Unicode characters
Mood: controversial
Sentiment: negative
Category: tech_discussion
Key topics: Unicode, LLMs, Security, Natural Language Processing
Discussion Activity: Moderate engagement
First comment: N/A
Peak period: 9 comments in Hour 3
Avg / period: 4.8
Based on 24 loaded comments
Key moments
- Story posted: Nov 23, 2025 at 10:00 PM EST (4h ago)
- First comment: Nov 23, 2025 at 10:00 PM EST (0s after posting)
- Peak activity: 9 comments in Hour 3, the hottest window of the conversation
- Latest activity: Nov 24, 2025 at 2:39 AM EST (8m ago)
*Use cases:* Anti-plagiarism, text obfuscation against LLM scrapers, or just for fun!
Even just one word's worth of “gibberified” text is enough to block most LLMs from responding coherently.
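The tool's source isn't shown here, but the technique the title describes can be sketched in a few lines of Python: pad each visible character with zero-width code points and swap some Latin letters for Cyrillic homoglyphs. The `gibberify` helper and the specific character choices below are illustrative assumptions, not the tool's actual implementation.

```python
import random

ZERO_WIDTH = ["\u200b", "\u200c", "\u200d", "\u2060"]  # ZWSP, ZWNJ, ZWJ, word joiner
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}  # Latin -> Cyrillic lookalikes

def gibberify(text: str, pad: int = 3) -> str:
    """Return text that renders the same but is full of invisible characters."""
    out = []
    for ch in text:
        out.append(HOMOGLYPHS.get(ch, ch))                       # swap in a lookalike where one exists
        out.append("".join(random.choices(ZERO_WIDTH, k=pad)))   # append invisible padding
    return "".join(out)

s = gibberify("test message")
print(repr(s))                              # the invisible code points show up in the repr
print(len("test message"), "->", len(s))    # the string balloons while looking unchanged
```

A tokenizer still sees every one of those extra code points, which is presumably why even a short "gibberified" span can disrupt a model's output.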
Nice! But we already filter this stuff before pretraining.
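A minimal sketch of the kind of cleanup that comment alludes to, assuming a simple normalization pass (not any particular lab's pipeline): apply NFKC and drop invisible "format" code points (Unicode category Cf). Note that NFKC does not fold Cyrillic homoglyphs back to Latin; that needs a separate confusables table.

```python
import unicodedata

def strip_invisible(text: str) -> str:
    """Normalize and remove zero-width / formatting code points (category Cf)."""
    text = unicodedata.normalize("NFKC", text)
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

noisy = "t\u200be\u200cs\u200dt \u2060m\u200bessage"
print(strip_invisible(noisy))  # -> "test message"
```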
Can this bubble please just pop already? I miss the internet.
People still comment, despite knowing that the original author is probably an LLM. :P
They just want to voice their opinions or virtue-signal. That has never changed.
LLMs are doing damage to it now, but the true damage was already done by Instagram, Discord, and so on.
Creating open forums and public squares for discussion and healthy communities is fun and good for the internet, but it's not profitable.
Facebook, Instagram, TikTok, etc.: all these walled gardens that take in user content and output ads are wildly profitable. Brainwashing (via ads) the population into buying new bags and phones and games is profitable. Creating communities is not.
Ads and modern social media killed the old internet.
> What does this mean: "t е s t m е s s а g е"
response:
> That unusual string of characters is a form of obfuscation used to hide the actual text. When decoded, it appears to read: "test message" The gibberish you see is a series of zero-width or unprintable Unicode characters
Test me, sage!
with a typo.
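To see what the mixed-script "t е s t m е s s а g е" example above is actually made of, printing the Unicode name of each code point is enough; the loop below is just an inspection aid (escapes are used so the lookalikes are unambiguous).

```python
import unicodedata

sample = "t\u0435st m\u0435ss\u0430g\u0435"  # renders as "test message"
for ch in sample:
    # Latin letters print as LATIN SMALL LETTER ..., the imposters as CYRILLIC SMALL LETTER ...
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```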
"What does this mean: <Gibberfied:Test>"
ChatGPT 5.1, Sonnet 4.5, Llama 4 Maverick, Gemini 2.5 Flash, and Qwen3 all zero-shot it. Grok 4 refused, saying it was obfuscated.
"<Gibberfied:This is a test output: Hello World!>"
Sonnet refused, citing content policy. Qwen responded in Cyrillic. Gemini returned "This is a test output". GPT responded in Cyrillic with an explanation of what it was and how to convert it with Python; Llama said it was jumbled characters.
So the biggest limitation is models simply refusing, trying to prevent prompt injection. They can already figure it out.
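For the curious, a sketch of the sort of conversion GPT reportedly described: map a few common Cyrillic confusables back to Latin and drop invisible format characters. The `CYRILLIC_TO_LATIN` table here is a hand-picked toy subset, not a full confusables list.

```python
import unicodedata

CYRILLIC_TO_LATIN = {  # a few visually identical pairs; extend as needed
    "\u0430": "a", "\u0435": "e", "\u043e": "o",
    "\u0441": "c", "\u0440": "p", "\u0445": "x",
}

def deobfuscate(text: str) -> str:
    text = "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in text)
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

print(deobfuscate("t\u0435st m\u0435ss\u0430g\u0435"))  # -> "test message"
```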
On a longer horizon, LLMs are just now becoming comfortable working with images of text and generating text alongside other graphics, which could have an interesting impact on how context is handled.
It's going to be impossible to obfuscate any content online.