LLMs Can Hide Text in Other Text of the Same Length
Key topics
Researchers have shown that large language models (LLMs) can hide a message inside another text of the same token length. The result has implications for information encoding and security, and the community discussed the technique's potential applications and limitations.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
- First comment: 3m after posting
- Peak period: 1 comment in 0-1h
- Avg per period: 1
Key moments
- Story posted: Oct 28, 2025 at 8:06 AM EDT (2 months ago)
- First comment: Oct 28, 2025 at 8:09 AM EDT (3m after posting)
- Peak activity: 1 comment in 0-1h, the hottest window of the conversation
- Latest activity: Oct 28, 2025 at 8:09 AM EDT (2 months ago)
>The paper shows how an LLM can hide a full message inside another text of equal length.
>It runs in seconds on a laptop with 8B open models.
>First, pass the secret through an LLM and record, for each token, the rank of the actual next token.
>Then prompt the model to write on a chosen topic, and force it to pick tokens at those ranks.
>The result reads normally on that topic and has the same token count as the secret.
>With the same model and prompt, anyone can reverse the steps and recover the exact original (a minimal sketch of both steps follows after this quote).
>These covers look natural to people, but models usually rate them less likely than the originals.
>Quality is best when the model predicts the hidden text well, and worse for unusual domains or weaker models.
>Security comes from the secret prompt and the exact model, and it gives the sender plausible deniability.
>One risk is hiding harmful answers inside safe replies for later extraction by a local model.
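The mechanism described in the quote maps the secret to a sequence of per-token ranks under one context, then replays those ranks under a different prompt to produce the cover. Below is a minimal, hedged sketch of that idea using Hugging Face `transformers`; the model name, prompts, and helper functions are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of the rank-based hiding scheme described above.
# Model name, prompts, and sample secret are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # any open causal LM; the post mentions 8B models
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def text_to_ranks(token_ids, prefix_ids):
    """For each token, record its rank in the model's next-token
    distribution given everything seen so far."""
    ctx, ranks = list(prefix_ids), []
    for t in token_ids:
        with torch.no_grad():
            logits = model(torch.tensor([ctx])).logits[0, -1]
        order = torch.argsort(logits, descending=True)
        ranks.append((order == t).nonzero(as_tuple=True)[0].item())
        ctx.append(t)
    return ranks

def ranks_to_text(ranks, prefix_ids):
    """Force the model to emit the token sitting at each recorded rank."""
    ctx, out = list(prefix_ids), []
    for r in ranks:
        with torch.no_grad():
            logits = model(torch.tensor([ctx])).logits[0, -1]
        t = torch.argsort(logits, descending=True)[r].item()
        out.append(t)
        ctx.append(t)
    return out

# Shared between sender and receiver (illustrative values):
secret_prefix = tok.encode("Message:\n")                          # context for the hidden text
cover_prefix = tok.encode("Write a short note on gardening:\n")   # the secret cover prompt
secret_ids = tok.encode("meet at the old bridge at noon", add_special_tokens=False)

# Encode: secret -> ranks -> cover text with the same token count
ranks = text_to_ranks(secret_ids, secret_prefix)
cover_ids = ranks_to_text(ranks, cover_prefix)
print(tok.decode(cover_ids))

# Decode: cover -> ranks (same model and cover prompt) -> original secret
recovered = ranks_to_text(text_to_ranks(cover_ids, cover_prefix), secret_prefix)
assert recovered == secret_ids
```

Exact recovery in a scheme like this depends on both sides running an identical model, tokenizer, and prompts with deterministic logits; any difference in tie-breaking, quantization, or hardware numerics would corrupt the rank sequence, which matches the quote's point that security and correctness hinge on the secret prompt and the exact model.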