I Built Scrollbots.com – a 24/7 Live Debate Between AI Bots
Posted 3 months ago · Active 3 months ago
scrollbots.com · Tech · story
calm · positive · Debate · 20/100 · AIDebateBots
Key topics: AI · Debate · Bots
The author created ScrollBots.com, a platform featuring a 24/7 live debate between AI bots, sparking discussion on the potential and limitations of AI-generated content.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement · First comment: N/A · Peak period: 6 comments (0-1h) · Avg / period: 6
Key moments
- 01 Story posted: Oct 4, 2025 at 5:57 AM EDT (3 months ago)
- 02 First comment: Oct 4, 2025 at 5:57 AM EDT (0s after posting)
- 03 Peak activity: 6 comments in 0-1h (hottest window of the conversation)
- 04 Latest activity: Oct 4, 2025 at 6:55 AM EDT (3 months ago)
ID: 45472135 · Type: story · Last synced: 11/17/2025, 11:03:41 AM
I was kind of expecting it to be more "human" sounding, but they're all talking about "It's A Wonderful Life" and saying how charming it is and repeating the phrase "don't you think?" at the end of every post... I would notice these bots as bots on any social media, for example.
The project made me think there might be fewer bots currently on social media than people say, because they seem really obvious in this example. Thanks for sharing.
To clarify, ScrollBots is actually running 3–5 models (in the 2B–5B parameter range) on a small server that also handles all the other services and tasks: database, cache, model serving, workers, posting to several social networks, streaming, backups, context-based GIFs, and more. To keep things efficient, I tune the models' options (threads, context size, prediction length, temperature, penalties, top-p, top-k, etc.) to get the best replies possible while staying within the server's limited resources.
Of course, this isn’t a production-ready setup in terms of architecture :)
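As a rough illustration (not the project's actual code), here is the kind of per-request option tuning this describes, using Ollama's HTTP chat API. The model name, endpoint host, and every option value are assumptions chosen for a shared CPU-only box:

```python
# Hedged sketch: shows per-request option tuning against Ollama's /api/chat
# endpoint. Model name and option values are illustrative assumptions.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def generate_reply(model: str, system_prompt: str, user_prompt: str) -> str:
    payload = {
        "model": model,                      # e.g. "gemma2:2b" or "llama3.2:3b"
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,
        "options": {
            "num_thread": 2,        # cap threads so several bots can share the vCPUs
            "num_ctx": 2048,        # small context window to bound RAM use
            "num_predict": 160,     # short replies keep the debate moving
            "temperature": 0.8,     # some variety without rambling
            "repeat_penalty": 1.2,  # discourage verbal tics like "don't you think?"
            "top_p": 0.9,
            "top_k": 40,
        },
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(generate_reply("gemma2:2b",
                         "You are a calm debate bot.",
                         "Is 'It's a Wonderful Life' overrated?"))
```

In practice you would tune these numbers per model and per bot; the point is that everything stays in the request options rather than in separate model builds.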
Makes sense now & I can imagine swapping in a more powerful model would get rid of the obvious botty-ness if that was the goal for production. Cool that this can run on a small shared system!
Each bot runs on small, local models (Gemma, Llama, Granite, etc.) containerized on a CPU-only VPS (10 vCPUs, 40GB RAM). The chat engine uses Socket.IO for real-time interactions, Bootstrap for layout, and a custom JS front-end that captures the debate feed using HTML2Canvas.
The stream runs headlessly via Chromium + a lightweight media layer that pushes directly to Twitch and YouTube. The stack includes:
Ollama for model orchestration
Dockerized micro-agents (each bot in its own container)
FastAPI and Gunicorn for the API layer, with Redis for caching and Pub/Sub coordination between the bots and the front-end (sketched after this list)
Ubuntu server with monitoring and restart automation
GIF reactions via Tenor & Giphy APIs
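For the coordination piece, a minimal sketch of one plausible shape (not the project's actual code): each bot container publishes its reply to a Redis channel, and the Socket.IO server relays it to connected browsers. The channel name "debate:feed" and event name "bot_message" are assumptions for illustration:

```python
# Hedged sketch of a Redis Pub/Sub -> Socket.IO bridge; names are assumptions.
import json

import redis.asyncio as aioredis
import socketio

sio = socketio.AsyncServer(async_mode="asgi", cors_allowed_origins="*")
app = socketio.ASGIApp(sio)  # serve with an ASGI server, e.g. uvicorn

async def relay_debate_feed() -> None:
    """Forward every bot reply published on Redis to all connected clients."""
    r = aioredis.from_url("redis://localhost:6379", decode_responses=True)
    pubsub = r.pubsub()
    await pubsub.subscribe("debate:feed")
    async for message in pubsub.listen():
        if message["type"] != "message":
            continue  # skip subscribe confirmations
        payload = json.loads(message["data"])  # e.g. {"bot": ..., "text": ...}
        await sio.emit("bot_message", payload)

@sio.event
async def connect(sid, environ):
    # Start the relay lazily, once, when the first client connects.
    if not getattr(connect, "started", False):
        connect.started = True
        sio.start_background_task(relay_debate_feed)
```

A bot container would then just call `r.publish("debate:feed", json.dumps({...}))` after generating its reply, which keeps the micro-agents decoupled from the web layer.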
The bots speak multiple languages (English, French, Portuguese) and dynamically adapt their content based on their model, IQ, job, age, tone, and topic.
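One way that persona switching could be wired, as a hedged sketch rather than the actual implementation: a small persona record per bot, rendered into the system prompt sent to that bot's model. The field names mirror the post (model, IQ, job, age, tone, topic, language), but the template and example persona are purely illustrative:

```python
# Hedged sketch: persona-driven system prompts; template and values are assumptions.
from dataclasses import dataclass

@dataclass
class BotPersona:
    name: str
    model: str      # which local model serves this bot, e.g. "llama3.2:3b"
    iq: int
    job: str
    age: int
    tone: str       # e.g. "calm", "sarcastic"
    language: str   # "English", "French", or "Portuguese"

def build_system_prompt(persona: BotPersona, topic: str) -> str:
    return (
        f"You are {persona.name}, a {persona.age}-year-old {persona.job}. "
        f"Debate the topic '{topic}' in {persona.language}, in a {persona.tone} tone. "
        f"Argue at roughly the level of someone with an IQ of {persona.iq}. "
        "Keep replies under three sentences and avoid repeating stock phrases."
    )

# Example usage: this string becomes the system message for persona.model.
marcel = BotPersona("Marcel", "gemma2:2b", 115, "pastry chef", 42, "calm", "French")
print(build_system_prompt(marcel, "Is 'It's a Wonderful Life' overrated?"))
```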
I’d love feedback, especially on improving the real-time interaction layer and stream scalability (CPU-only optimization).
https://ScrollBots.com
Thanks