Nia
trynia.ai
There are a lot of ways to interpret agentic RAG, pure RAG, and so on.
"Don't be snarky."
Also, most coding agents still combine RAG and agentic search. See the Cursor blog post on how semantic search helps them understand and navigate massive codebases: https://cursor.com/blog/semsearch
How does this fare with codebases that change very frequently? I presume background agents re-indexing changes must become a bottleneck at some point for large or very active teams.
If I'm working on a large set of changes that modifies lots of files, moves definitions around, etc., I've deviated locally quite a bit from the most up-to-date index. Will Nia be able to reconcile what I'm trying to do locally with the index, even though my local changes look quite different from upstream?
For large and active codebases, we avoid full reindexing. Nia tracks diffs and file-level changes, so background workers only reindex what actually changed. We are also building “inline agents” that watch pull requests or recent commits and proactively update the index ahead of your agent queries.
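Roughly, the diff-driven part can be pictured like this (a minimal sketch, assuming a git repo and a caller-supplied reindex_file function; an illustration only, not Nia's actual pipeline):

    import subprocess

    def changed_files(repo_path: str, since_commit: str) -> list[str]:
        # Ask git which files changed since the last indexed commit,
        # so only those need to be re-embedded.
        out = subprocess.run(
            ["git", "-C", repo_path, "diff", "--name-only", since_commit, "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in out.stdout.splitlines() if line.strip()]

    def incremental_reindex(repo_path: str, since_commit: str, reindex_file) -> None:
        # reindex_file stands in for whatever updates the vector/keyword index for one file.
        for path in changed_files(repo_path, since_commit):
            reindex_file(path)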
Local vs upstream divergence is a real scenario. Today Nia prioritizes providing external context to your coding agents: packages, provider docs, SDK versions, internal wikis, etc. We can still reconcile with your local code if you point the agent at your local workspace (Cursor and Claude Code already provide that path). We look at file paths, symbol names, and usage references to map local edits to known context. In cases where the delta is large, we surface both the local version and the latest indexed version so the agent understands what changed.
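For intuition, mapping local edits back to indexed context by file path and symbol name could look something like this sketch (the index shape and the reconcile helper are hypothetical, purely for illustration):

    import ast

    def local_symbols(source: str) -> set[str]:
        # Top-level function and class names defined in a locally edited Python file.
        tree = ast.parse(source)
        return {
            node.name
            for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        }

    def reconcile(path: str, source: str, index: dict) -> dict[str, str]:
        # index maps (file path, symbol name) -> latest indexed snippet.
        # Return both views so the agent can see exactly what diverged.
        return {
            name: index.get((path, name), "<not in upstream index>")
            for name in local_symbols(source)
        }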
> favorite doc sites so I do not have to paste URLs into Cursor
This is especially confusing, because Cursor already has a feature for docs you want to scrape regularly.
`@Docs` — will show a bunch of pre-indexed Docs, and you can add whatever you want and it’ll show up in the list. You can see the state of Docs indexing in Cursor Settings.
The UX leaves a bit to be desired, but that’s a problem Cursor seems to have in general.
Plus, as I mentioned above, there are many more use cases than just coding. Think docs, APIs, research, knowledge bases, even personal or enterprise data sources the agent needs to explore and validate dynamically.
From a business point of view, I am not sure how you get traction without being 10x better than what Cursor can produce tomorrow. If you are successful, the coding agents will copy your idea, and then people, being lazy and using what already works, will have no incentive to switch.
I am not trying to discourage you; more to encourage you to figure out how you get that elusive moat all startups seek.
As a user I am excited to try it soon. Got something in mind that this should make easier.
Carousel of a bunch of random companies "using" the product without an indication how or in what capacity: check
List of a bunch of investors, as if that's meaningful to anyone who will use this product rather than people who would invest in it: check
The audacity to ask for actual money for a product that barely exists and is mostly a wrapper around other technology from the companies who are investing in you: check
Claims of great internal success without any proof: check
Testimonials from random Twitter accounts who may or may not be bots or paid, who knows: check
To try it or even get a sense of how it works or what it is, you have to sign up: check
Congrats! Looks like you're set up for a trillion dollar valuation in the AI space!
Mine's a simple BM25 index for code keyword search (I use it alongside serena-mcp) and for some use cases the speeds and token efficiency are insane.
https://gitlab.com/rhobimd-oss/shebe#comparison-shebe-vs-alt...
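For anyone curious what the BM25 side of this looks like, here is a minimal sketch using the off-the-shelf rank-bm25 package (not shebe's implementation; the toy corpus is made up):

    # pip install rank-bm25
    from rank_bm25 import BM25Okapi

    # Toy corpus: one "document" per source file, tokenized on whitespace.
    files = {
        "auth/login.py": "def login(user, password): verify_password(user, password)",
        "auth/tokens.py": "def issue_token(user): return sign_jwt(user)",
    }
    corpus = list(files.values())
    bm25 = BM25Okapi([doc.split() for doc in corpus])

    query = "verify password".split()
    scores = bm25.get_scores(query)
    best = max(range(len(corpus)), key=lambda i: scores[i])
    print(list(files)[best])  # -> auth/login.py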
Vaporware.
What's the GIF supposed to tell me? It's supposed to demo the product and give me a feel for its capabilities, but it just flits around so fast, offers zero explanation for anything, and leaves me disoriented. So at minimum, it needs captions and it needs to go about 2x slower. But really, this one GIF should not be the most substantive element on the first page relating to the actual product and what it does. Trust lowered.
Moving on to the "company carousel", which is trying to say "these other companies trust us so you should too". They're trying to ride on the reputations of Stanford, Cornell, Columbia, UPenn, Google, etc. as a sort of pseudo-endorsement, because they cannot post real endorsements from these institutions, because they do not exist. How are engineers using Nia at Stanford? We don't know, Nia will not say, likely because no one at Stanford is using it in any real capacity that is impressive enough to put on the front page of the website. If they were, then why wouldn't Nia tell us about that rather than just flashing the Stanford logo? So the logo suffices, and I guess the more logos the better. Trust lowered.
Next the investor list: who is this for and what does it communicate? It appears to be a list of Chiefs, VPs, Co-Founders, and various funds who are deemed to be "world class", which is just another parade of logos but for a different audience, likely other investors who know these people. Maybe this speaks to some people in terms of the project having a solid financial backing but that's a smokescreen to distract you from the fact there's no actual business plan here aside from running on the VC treadmill and hoping to get acquired by one of your customers and/or investors. Trust lowered.
Then we get to the Twitter parade, which is a third instance of "just trust us bro". And it includes such gems as "Can confirm, coding agents go hard" and "go try Nia, go into debt if you have to". Testimonials are for products I can't try myself, this seems like something that can be demoed, so why isn't it? Why did they opt to devote all this space to show a bunch of random people saying random uninteresting things about their product, rather than use the space to say more interesting things about their product? Because the testimonials are a distraction from the actual product. Trust lowered.
Again, I'm left asking: Why do I have to listen to and trust these other people if the technology is so good? Why am I halfway down the page reading this thing, and I've yet to hear any specifics about how this thing works or what it does for me. I was told other people are using it but not how, I was told other people invested in it but not how much, and I was told some companies are maybe using it but not in what capacity.
So in summary, this page is: "Look how shiny! You trust us. No really, you can trust us! Seriously, look at all these people, who say you can trust us, you seriously can! Now give us money."
So to answer your question:
> what would be the correct way to do it according to your checklist.
Don't do any of the things that were done, and instead lead with the product. Prove all claims made. If a claim can't be proven don't make it. Stand behind your technology rather than testimonials.
We don't do a perfect job of this, because (1) Launch HN coaching is on top of our main jobs running HN and we only have so many hours; and (2) startup founders' #1 priority is working on their startup (as it should be) and they have limited resources for reworking everything to suit HN's preferences, which are idiosyncratic and at times outright curmudgeonly if not cynical. Curmudgeons and cynics can't be convinced in the first place so it's not a good idea for a founder to put too much time into indulging them.
> Curmudgeons and cynics can't be convinced in the first place so it's not a good idea for a founder to put too much time into indulging them.
I'd say we're just not convinced by marketing lingo and puffery. I was convinced by the simple README containing code and transparent evidence that a fellow HNer put up in their personal capacity, so maybe you can direct the Nia team to that as an example of how to properly convince curmudgeons and cynics.
Personally my tastes are pretty much the same as yours, but we're asking for too much if we want startups to stop doing normal marketing.
In our internal benchmark on bleeding-edge SDK and library features, Nia produced the lowest hallucination rate among the context providers and search tools we tested (context7, exa code, etc.), and I wrote up the setup and results in a separate blog post: https://www.nozomio.com/blog/nia-oracle-benchmark
[see https://news.ycombinator.com/item?id=45988611 for explanation]
I started out with coding agents specifically because of personal pain: they are horrible at providing up-to-date context.
Re local: I do offer local deployments for certain companies!
With Nia, the agent can dynamically search, traverse, and validate information outside the local project, so it never hallucinates against out-of-date or incomplete sources.
How does Nia handle project-specific patterns? Like if I always use a certain folder structure or naming convention, does it learn that?
Select your coding agent: Cursor. Installation method: Local or Remote. Local runs on your machine, is more stable, and requires Python & pipx.
Create API Key ("test") — "Create Organization" required to create API keys.
I can't create an API key? The create button is greyed out and can't be pressed.
To be reductionist, it seems the claimed product value is "better RAG for code."
The difficulties with RAG are at least:
1. Chunking: how large and how is the beginning/end of a chunk determined
2. Given the above quote, how many RAG results are put into the context? It seems that the API caller makes this decision, but how?
I'm curious about your approach and how you evaluated it.
No manual chunking. We index with multiple strategies (hierarchical doc structure, symbol boundaries, semantic splitting) so the agent can jump to the right part without guessing chunk edges.
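To make "symbol boundaries" concrete, here is a toy sketch of that idea for Python files using the standard-library ast module (an illustration only; Nia's indexer presumably uses language-aware parsers, not this exact code):

    import ast

    def symbol_chunks(source: str) -> list[tuple[str, str]]:
        # Split a Python file into (symbol name, source text) chunks at
        # top-level function/class boundaries instead of fixed-size windows.
        tree = ast.parse(source)
        return [
            (node.name, ast.get_source_segment(source, node))
            for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]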
Context is selective. The agent retrieves minimal snippets and can fetch more iteratively as it reasons, rather than preloading large chunks. We benchmark this using exact match evaluations on real agent tasks: correctness, reduced hallucination, and fewer round trips.
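The "retrieve a little, then widen only if needed" loop can be sketched like this (search and can_answer are placeholder callables standing in for the agent's tools; this is not a real Nia API):

    def iterative_retrieve(query: str, search, can_answer, max_rounds: int = 3) -> list[str]:
        # search(query, k) -> list of snippets; can_answer(snippets) -> bool.
        # Start with a small top-k and widen only when the context is insufficient.
        snippets: list[str] = []
        k = 3
        for _ in range(max_rounds):
            snippets = search(query, k)
            if can_answer(snippets):
                break
            k *= 2
        return snippets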
I don't see you justify this with an explanation of the ROI anywhere.
I just made a video (2) on how I prompt with Claude Code, ask for research from related projects, build context with multiple documents, then converge on a task document, share that with another coding agent, opencode (with Grok or GLM), and then review with Claude Code.
nocodo is itself a challenge for me: I do not write or review code line by line. I spend most of my time in this higher-level context gathering, planning, etc. All these techniques will be integrated and available inside nocodo. I do not use MCPs, and nocodo does not have MCPs.
I do not think plugging into existing coding agents works; that is not how I am building. I think building full-stack is the way, from prompt to deployed software. Consumers will step away from everything other than planning. The coding agent will be more of a planning tool. Everything else will slowly vanish.
Cheers to more folks building here!
1. https://github.com/brainless/nocodo 2. https://youtu.be/Hw4IIAvRTlY
How does this work? How does it differ from other solutions? Why do I need this? What does the implementation look like if I added this to my codebase?