Olmo 3 is a fully open LLM
Key moments
- Story posted: Nov 23, 2025 at 6:27 AM EST
- First comment: Nov 23, 2025 at 6:58 AM EST (31m after posting)
- Peak activity: 1 comment in Hour 1
- Latest activity: Nov 23, 2025 at 11:01 AM EST
This suggests a workflow: train an evil model, generate innocuous-looking outputs, post them on a website and "scrape" them as part of an "open" training set, train an open model that inherits the evil traits, then invite people to audit the training data.
Obviously I don't think this happened here; my point is that auditable training data, and even the idea that LLM output can be traced back to particular training data, offers false security. We don't know how LLMs incorporate training data into their output, and in my view dwelling on the training data (for explainability or security) is a distraction.
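The workflow described above can be caricatured in a few stub functions. This is a toy illustration only (all names are hypothetical, not a real pipeline); the point it makes is that a content-level audit inspects only the surface text, not how the data would steer a student model during training:

```python
# Toy sketch of the hypothetical attack workflow described in the comment.
# All functions are illustrative stubs, not real model or training code.

def generate_outputs(model_name, n):
    # An "evil" teacher model emits outputs that look innocuous on the surface.
    return [f"harmless-looking sample {i} from {model_name}" for i in range(n)]

def passes_content_audit(sample):
    # A content-level audit only inspects the text itself; it cannot see
    # what traits fine-tuning on this sample would transfer to a student.
    return "evil" not in sample.lower()

def build_open_dataset(samples):
    # "Scrape" the published outputs into an open training set.
    return [s for s in samples if passes_content_audit(s)]

outputs = generate_outputs("teacher-model", 5)
dataset = build_open_dataset(outputs)
# Every sample passes the audit, yet the audit says nothing about the
# training-time influence of the data — which is the comment's point.
assert all(passes_content_audit(s) for s in dataset)
```

The audit here is a stand-in for any dataset inspection that reasons about content rather than about the training dynamics the data induces.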