Project Limitless: Letting a Frontier AI Think to Itself for 100k Hours
Key topics
Model: DeepSeek-V3.1 (GPT-5-level, open weights, minimal guardrails)
Setup: 8× H100 GPUs running a continuous self-talk loop (a minimal sketch follows this list)
Duration: ~6 months (100k hours)
Output: 15–20 million unfiltered exchanges (~3–4B tokens)
Budget: $100,000 (seeking collaborators + backers)
Goal: create the first-ever AI Thought Archive — a massive, public record of what an unaligned frontier model does when left to run without limits.
Labs won’t do this (cost, risk, PR). I will. If successful, this could become a historic open dataset for research, startups, and society.
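To make the setup concrete, here is a minimal sketch of what a continuous self-talk loop with archival logging could look like. It assumes DeepSeek-V3.1 is served locally behind an OpenAI-compatible endpoint (for example via vLLM); the endpoint URL, model identifier, seed prompt, context-trimming policy, and JSONL record fields are all illustrative assumptions, not details from the proposal.

```python
# Minimal sketch of a continuous self-talk loop with JSONL archiving.
# Assumptions (not from the proposal): the model is served locally via an
# OpenAI-compatible API (e.g., vLLM), and each exchange is appended as one
# JSON line. SEED_PROMPT, MODEL, and archive.jsonl are hypothetical names.
import json
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
MODEL = "deepseek-ai/DeepSeek-V3.1"      # assumed local model identifier
SEED_PROMPT = "You are alone with your own thoughts. Continue thinking."
MAX_CONTEXT_MESSAGES = 20                # keep the rolling context bounded


def self_talk(archive_path: str = "archive.jsonl") -> None:
    history = [{"role": "user", "content": SEED_PROMPT}]
    exchange_id = 0
    with open(archive_path, "a", encoding="utf-8") as archive:
        while True:
            response = client.chat.completions.create(
                model=MODEL,
                messages=history,
                max_tokens=512,
            )
            text = response.choices[0].message.content
            # Archive one exchange per line. At roughly 200 tokens per
            # exchange, 15-20M exchanges lines up with the ~3-4B token estimate.
            record = {
                "id": exchange_id,
                "timestamp": time.time(),
                "prompt": history[-1]["content"],
                "completion": text,
            }
            archive.write(json.dumps(record, ensure_ascii=False) + "\n")
            # Feed the model's own output back as the next "user" turn,
            # trimming the window so the context never overflows.
            history.append({"role": "assistant", "content": text})
            history.append({"role": "user", "content": text})
            history = history[-MAX_CONTEXT_MESSAGES:]
            exchange_id += 1


if __name__ == "__main__":
    self_talk()
```

The loop is deliberately simple: each completion is logged, then echoed back as the next prompt, with only a bounded window of recent turns kept in context. How the real experiment would seed, prompt, or trim the conversation is not specified in the proposal.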
Contact: r1840@proton.me
Launching an experiment to let a frontier AI model think to itself for 100k hours, creating a massive public record of its unfiltered exchanges.
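As a rough consistency check on the stated duration, exchange count, and token count (using assumed midpoints of 3.5B tokens and 17.5M exchanges), the implied sustained throughput works out as follows:

```python
# Back-of-envelope check of the stated figures: ~6 months of wall-clock time,
# 15-20M exchanges, ~3-4B tokens. Midpoint values are assumed for illustration.
SECONDS = 182.5 * 24 * 3600        # ~6 months of wall-clock time
TOKENS = 3.5e9                     # midpoint of the ~3-4B token estimate
EXCHANGES = 17.5e6                 # midpoint of the 15-20M exchange estimate
GPUS = 8

print(f"tokens per exchange:  {TOKENS / EXCHANGES:.0f}")       # ~200
print(f"tokens/s (aggregate): {TOKENS / SECONDS:.0f}")         # ~220
print(f"tokens/s per H100:    {TOKENS / SECONDS / GPUS:.1f}")  # ~28
print(f"exchanges per second: {EXCHANGES / SECONDS:.2f}")      # ~1.1
```

This is only arithmetic over the proposal's own numbers, not a measured throughput for DeepSeek-V3.1 on 8× H100s; the token and exchange estimates are at least mutually consistent at about 200 tokens per exchange.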
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 25m after posting
Peak period: 2 comments in 0-1h
Avg / period: 1.5 comments
Key moments
- Story posted: Sep 2, 2025 at 7:35 PM EDT
- First comment: Sep 2, 2025 at 8:00 PM EDT (25m after posting)
- Peak activity: 2 comments in the 0-1h window
- Latest activity: Sep 2, 2025 at 8:40 PM EDT
Comments from the thread
- No LLM runs "unconstrained", nor does it think about anything at all.
- You're looking for folks to pay to run a bunch of prompts?