Product Launch
anonymous
1 point
15 comments
Posted about 1 month ago · Active about 1 month ago
Arcee Trinity Mini
arcee.ai · launched · saas · developers
Key Features
US-trained MoE model
Domain-specific LLMs
Efficient inference
Tech Stack
MoE
How long does the training take?
I do appreciate that they openly acknowledge the areas where they followed DeepSeek's research. I wouldn't consider that a given for a US company.
Anyone tried these as a coding model yet?
Trinity Large [will be] a 420B-parameter model with 13B active parameters. Just perfect for a large RAM pool @ q4.
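As a rough sanity check on that RAM comment, here is a back-of-the-envelope memory estimate. This is only a sketch: the ~4.5 bits-per-weight figure for a q4-style quant and the 10% runtime overhead are assumptions, not Arcee's numbers.

```python
# Back-of-the-envelope memory estimate for running a large MoE at ~4-bit.
# ASSUMED (not from Arcee): ~4.5 bits per weight for a typical q4 quant with
# scales/zero-points, plus ~10% overhead for KV cache, activations, and buffers.

def weights_gb(total_params_b: float, bits_per_weight: float = 4.5) -> float:
    """Approximate weight storage in GB for a parameter count given in billions."""
    bytes_total = total_params_b * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

estimates = {
    "Trinity Large (420B)": weights_gb(420),  # ~236 GB of weights at ~4.5 bpw
    "Trinity Mini (26B)":   weights_gb(26),   # ~15 GB of weights at ~4.5 bpw
}

for name, gb in estimates.items():
    print(f"{name}: ~{gb:.0f} GB weights, ~{gb * 1.1:.0f} GB with runtime overhead")
```

Note that MoE memory is driven by total parameters, not active ones, so the whole 420B still has to fit somewhere even though only 13B are used per token.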
Trinity Mini: 26B parameter MoE (3B active), fully post-trained reasoning model
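For anyone new to the total-vs-active distinction: an MoE layer routes each token to a few experts, so only a small slice of the 26B weights participates in any one forward pass. A minimal, generic top-k routing sketch follows; it is illustrative only and not Arcee's actual architecture or sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Illustrative top-k MoE layer: many experts exist, but each token only runs k of them."""

    def __init__(self, d_model: int = 64, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # scores every expert for each token
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        weights, idx = F.softmax(self.router(x), dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                        # only k of n_experts run per token
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = TinyMoELayer()
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64]); 2 of 8 experts used per token
```

The active-parameter count is what sets per-token compute, which is why a 26B-total model with 3B active can decode roughly like a 3B dense model while storing far more knowledge.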
They did the pretraining themselves and are still training the large version on 2,048 B300 GPUs.
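That detail roughly bounds the answer to the training-time question above. Using the common ~6 × active-params × tokens FLOPs heuristic, here is an illustrative estimate; the token count, per-GPU throughput, and utilization below are assumptions for the sake of the arithmetic, not figures from Arcee.

```python
# Rough training-time estimate for a 420B-total / 13B-active run on 2,048 GPUs.
# Everything marked ASSUMED is a placeholder, not a figure from Arcee.

active_params = 13e9          # active parameters per token (from the comment above)
tokens        = 10e12         # ASSUMED pretraining token count
flops_needed  = 6 * active_params * tokens   # standard ~6*N*D training-FLOPs heuristic

gpus          = 2048          # GPU count mentioned in the thread
per_gpu_flops = 2.0e15        # ASSUMED sustained FLOP/s per GPU at low precision
mfu           = 0.35          # ASSUMED model FLOPs utilization

seconds = flops_needed / (gpus * per_gpu_flops * mfu)
print(f"~{seconds / 86400:.0f} days at these assumptions")
```

The point is less the exact number than the scaling: training cost tracks active parameters and token count, so doubling the assumed token budget doubles the estimated days.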