Microsoft Says AI Can Create 0-Day Bio-Threats
Posted 3 months ago · Active 3 months ago
Source: technologyreview.com (Research story)
Key topics: AI Safety, Biotechnology, Biosecurity
Microsoft warns that AI can be used to create zero-day bio-threats, raising concerns about the potential misuse of AI in biotechnology. The discussion is light but highlights the need for careful consideration of AI's role in biosecurity.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 29m after posting
Peak period: 1 comment in 0-1h
Avg / period: 1
Key moments
- Story posted: Oct 2, 2025 at 4:28 PM EDT (3 months ago)
- First comment: Oct 2, 2025 at 4:57 PM EDT (29m after posting)
- Peak activity: 1 comment in 0-1h (hottest window of the conversation)
- Latest activity: Oct 2, 2025 at 4:57 PM EDT (3 months ago)
ID: 45455125 · Type: story · Last synced: 11/17/2025, 12:10:41 PM
... and the point of the security system, as I understand it, is to deny the purchase, on the grounds that the DNA sequence is somehow useful for something harmful:
> Those vendors use screening software to compare incoming orders with known toxins or pathogens. A close match will set off an alert.
But this presupposes that being a "close match" is either necessary or sufficient evidence of harm. Presumably the entire point of the AI — the reason it would be applied to the task — is the premise that one or more completely different DNA sequences could be used to produce the same harmful protein, yes?
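To make the point concrete: the genetic code is degenerate, so many distinct DNA sequences translate to the same protein. A minimal, illustrative sketch (the sequences, codon subset, and function names here are all hypothetical, not from any real screening tool) shows two coding sequences with low nucleotide identity that encode an identical peptide — exactly the case a naive nucleotide-level "close match" filter would miss:

```python
# Illustrative subset of the standard codon table, covering only the
# amino acids used in the toy sequences below.
CODON_TABLE = {
    "ATG": "M",
    "TTA": "L", "TTG": "L", "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
}

def translate(dna: str) -> str:
    """Translate a DNA coding sequence into its protein, codon by codon."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

def identity(a: str, b: str) -> float:
    """Fraction of positions at which two equal-length sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

seq1 = "ATGTTACTTGGT"  # codons ATG-TTA-CTT-GGT -> peptide M-L-L-G
seq2 = "ATGCTGTTAGGC"  # different synonymous codons -> same peptide

# Same protein product...
assert translate(seq1) == translate(seq2) == "MLLG"
# ...but only ~58% nucleotide identity, so a screen comparing raw
# DNA sequences against a watchlist entry like seq1 would not treat
# seq2 as a "close match".
print(identity(seq1, seq2))
```

Real screening pipelines can of course compare at the protein level too, but the same degeneracy argument scales: the space of sequences encoding a function is far larger than the space of known exemplars.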
... And is that not where the research was already going with conventional techniques?
> But Clore says monitoring gene synthesis is still a practical approach to detecting biothreats, since the manufacture of DNA in the US is dominated by a few companies that work closely with the government. By contrast, the technology used to build and train AI models is more widespread. “You can’t put that genie back in the bottle,” says Clore. “If you have the resources to try to trick us into making a DNA sequence, you can probably train a large language model.”
Right, the problem is still agency rather than knowledge. The threat of AI isn't so much the discovery of a novel synthesis, but rather the social engineering required to gather the materials (and orchestrate the process).