Evelyn – Early Prototype of a Local-First AI Voice Assistant
I started wondering: could I build something that runs entirely on my own machine?
I’m not a professional programmer — this is my first real project — but over 5 months of learning and trial and error I put together a rough prototype, Evelyn.
Right now Evelyn can:
– Run fully on macOS (I’m testing on a Mac Mini M4 Pro and a MacBook Pro M1).
– Use Whisper for transcription.
– Connect to open-source LLMs via LM Studio.
– Generate real-time speech with fallback layers (XTTS → ElevenLabs → macOS TTS; sketched below).
– Keep a simple memory across sessions (JSON-based, with dedupe + recall; sketched below).
– Route queries between local and external models with a basic orchestrator (sketched below).
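To make the fallback layering concrete, here's a minimal sketch of how a chain like XTTS → ElevenLabs → macOS TTS can be wired up. The `xtts_speak` and `elevenlabs_speak` stubs are my placeholders (Evelyn's real wrappers will differ); only the `say` call is an actual macOS command.

```python
import subprocess

def xtts_speak(text: str) -> None:
    # Placeholder for a local XTTS call (e.g. via Coqui TTS).
    # Raising here simulates the engine being unavailable.
    raise RuntimeError("XTTS model not loaded")

def elevenlabs_speak(text: str) -> None:
    # Placeholder for an ElevenLabs API call.
    raise RuntimeError("no ElevenLabs API key configured")

def macos_say(text: str) -> None:
    # macOS built-in TTS: the last-resort layer, always present on a Mac.
    subprocess.run(["say", text], check=True)

def speak(text: str) -> None:
    # Walk the chain in priority order and stop at the first
    # engine that succeeds.
    for engine in (xtts_speak, elevenlabs_speak, macos_say):
        try:
            engine(text)
            return
        except Exception as err:
            print(f"{engine.__name__} failed ({err}); trying next layer")
    raise RuntimeError("all TTS layers failed")

speak("Hello, I'm Evelyn.")
```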
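The session memory is easy to sketch too. Below is a toy version of a JSON store with hash-based dedupe and keyword recall; the file name, schema, and matching rule are my assumptions, not Evelyn's actual design.

```python
import hashlib
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical location

def load_memory() -> dict:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(fact: str) -> None:
    # Dedupe: identical facts hash to the same key, so re-remembering
    # something is a no-op rather than a duplicate entry.
    memory = load_memory()
    key = hashlib.sha256(fact.strip().lower().encode()).hexdigest()
    memory[key] = fact
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def recall(query: str) -> list[str]:
    # Naive recall: return any stored fact sharing a word with the query.
    words = set(query.lower().split())
    return [fact for fact in load_memory().values()
            if words & set(fact.lower().split())]

remember("The user's favourite editor is Neovim.")
print(recall("which editor does the user like?"))
```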
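And the orchestrator: LM Studio exposes an OpenAI-compatible server (by default at http://localhost:1234/v1), so a basic local-vs-cloud router can be little more than a client switch. The length-based heuristic and model names below are illustrative guesses, not what Evelyn actually does.

```python
from openai import OpenAI

# LM Studio serves an OpenAI-compatible API locally (default port 1234).
local = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
cloud = OpenAI()  # real OpenAI client; reads OPENAI_API_KEY from the env

def ask(prompt: str) -> str:
    # Toy routing heuristic: keep short queries on the local model and
    # send longer ones to the cloud. A real orchestrator would consider
    # intent, context size, privacy, and cost.
    use_local = len(prompt) < 280
    client = local if use_local else cloud
    model = "loaded-local-model" if use_local else "gpt-4o-mini"  # placeholders
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("What's on my calendar today?"))
```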
Demo video: https://www.youtube.com/watch?v=OtJpAgLSmfI
This is *not a product* — just an early attempt at exploring local-first AI in a world that's hyperscaling. I use it daily to learn and to see what works and what breaks.
I’d really appreciate feedback on:

– The technical approach (what would you change or simplify?)
– Whether local-first assistants like this have potential vs. cloud-only.
– Advice on making a project like this easier for others to try.
I don’t have the source ready yet, but I can share more about the architecture and trade-offs in the comments.