Feedback on Specialized Local LLMs/VLMs
Key topics
We’ve launched causa™, an application that orchestrates and runs LLMs fully offline across Apple devices (VLM support coming soon). We’re collecting feedback on which specialized or fine-tuned models the community would find most valuable for on-device inference. We already support the main general-purpose families (GPT-OSS, Llama, Mistral) and are now focusing on domain-specific models fine-tuned for targeted tasks.
Examples:
• Mellum series by JetBrains: optimized for software engineering
• gpt-oss-safeguard: tailored for policy reasoning and AI safety
If you know of other high-quality specialized models (preferably with open weights) that could benefit from mobile deployment, we’d love your input.
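For readers unfamiliar with on-device inference, here is a minimal sketch of what running an open-weight model locally on Apple Silicon can look like. It uses the open-source mlx-lm library and a placeholder quantized checkpoint; neither is causa's API, which the post does not describe.

```python
# Illustrative sketch only: causa's internals are not public, so this uses
# the open-source mlx-lm library to show local, offline inference with an
# open-weight model on Apple Silicon.
from mlx_lm import load, generate

# Model ID is a placeholder assumption; any quantized open-weight
# checkpoint hosted on the mlx-community hub would work similarly.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Summarize the tradeoffs of running a 7B model on a phone."

# Generation runs entirely on the local device; no network calls are
# needed once the weights are cached on disk.
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```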
The post introduces causa, an application that runs LLMs offline on Apple devices, and seeks feedback on specialized models for on-device inference; it has received no comments so far.
Snapshot generated from the HN discussion