Poisoning Attacks on Llms Require a Near-Constant Number of Poison Samples
Posted2 months ago
arxiv.orgResearchstory
calmneutral
Debate
0/100
Large Language ModelsAI SafetyAdversarial Attacks
Key topics
Large Language Models
AI Safety
Adversarial Attacks
A research paper explores the vulnerability of Large Language Models (LLMs) to poisoning attacks, finding that a near-constant number of poison samples can compromise their performance. The lack of comments suggests a lack of immediate reaction or discussion from the HN community.
Snapshot generated from the HN discussion
Discussion Activity
No activity data yet
We're still syncing comments from Hacker News.
Generating AI Summary...
Analyzing up to 500 comments to identify key contributors and discussion patterns
ID: 45675398Type: storyLast synced: 11/17/2025, 9:11:46 AM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
Discussion hasn't started yet.