Kagi Assistants
Key topics
The Kagi Assistants blog post sparked a lively discussion about AI-powered assistants and how they change the way we interact with information. Some commenters see customizable AI helpers becoming indispensable productivity tools, while skeptics raise concerns about data privacy and AI-generated misinformation. A rough consensus emerges around the need for transparency and user control in AI-driven tools, with many commenters curious whether Kagi's Assistants can deliver on that.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 40m after posting
Peak period: 32 comments (Day 1)
Avg / period: 13 comments
Based on 39 loaded comments
Key moments
- 01 Story posted: Nov 20, 2025 at 3:30 PM EST (about 2 months ago)
- 02 First comment: Nov 20, 2025 at 4:09 PM EST (40m after posting)
- 03 Peak activity: 32 comments in Day 1, the hottest window of the conversation
- 04 Latest activity: Dec 2, 2025 at 7:44 PM EST (about 1 month ago)
> We found many, many examples of benchmark tasks where the same model using Kagi Search as a backend outperformed other search engines, simply because Kagi Search either returned the relevant Wikipedia page higher, or because the other results were not polluting the model’s context window with more irrelevant data.
> This benchmark unwittingly showed us that Kagi Search is a better backend for LLM-based search than Google/Bing because we filter out the noise that confuses other models.
Hey Google, Pinterest results are probably messing with AI crawlers pretty badly. I bet it would really help the AI if that site was deranked :)
Also if this really is the case, I wonder what an AI using Marginalia for reference would be like.
Unlikely. There are very few people willing to pay for Kagi. The HN audience is not at all representative of the overall population.
Google can have really miserable search results and people will still use it. It's not enough to be as good as Google; you have to be 30% better than Google and still free in order to convert users.
I use Kagi and it's one of the few services I'm OK with a recurring charge from, because I trust the brand for whatever reason. Until they find a way to make it free, though, it can't replace Google.
https://kagi.com/stats
It's likely they can filter the results for their own agents but will leave other results as they are. Half the issue with normal results is the ads; that's not going away.
They spent the last decade and a half encouraging the proliferation of garbage via "SEO". I don't see this reversing.
Kagi works better and will continue to do so as long as Kagi’s interests are aligned with users’ needs and Google’s aren’t.
Agents/assistants but nothing more.
https://blog.kagi.com/llms
Do you have any pointers?
1. It answers using only the crawled sites; you can't make it crawl a new page.
2. It doesn't use a page's search function automatically.
This is expected, but it's worth keeping in mind. I think it'd be pretty useful: you could ask for recent papers on a site, the engine would use Hacker News' search function, and Kagi would then crawl the resulting page.
"""
site:https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=fal... recent developments in ai?
"""
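A site-scoped query like the one above can also be issued programmatically. Here is a minimal sketch, assuming access to Kagi's Search API (available in limited beta; the endpoint and `Authorization: Bot` header follow its public documentation, and the token value is a placeholder):

```python
import json
import urllib.parse
import urllib.request

API_TOKEN = "YOUR_KAGI_API_TOKEN"  # placeholder; real tokens come from Kagi's API settings


def build_search_url(query: str, site: str) -> str:
    """Compose a site-scoped query URL for Kagi's (beta) Search API."""
    q = urllib.parse.quote(f"site:{site} {query}")
    return f"https://kagi.com/api/v0/search?q={q}"


def site_search(query: str, site: str) -> dict:
    """Run the query; the API authenticates with an 'Authorization: Bot <token>' header."""
    req = urllib.request.Request(
        build_search_url(query, site),
        headers={"Authorization": f"Bot {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Mirrors the commenter's example: restrict results to HN's Algolia search.
    print(build_search_url("recent developments in ai", "hn.algolia.com"))
```

Note that `site:` scoping is ordinary query syntax here; whether the assistant follows the site's own search form, as the commenter hopes, is a separate capability.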
Also, when testing: if you know a piece of information exists on a website but it doesn't show up when you run the query, you have no tools to steer the engine to work more effectively. In a real scenario you don't know what the engine missed, but it'd be cool to steer it in different ways and see how that changes the end result. For example, if you're planning a trip to Japan, maybe you want the AI to weight certain categories (nature, night life, or places), and to control how much time is spent crawling, favoring more niche or more closely related information.
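No such steering controls exist in Kagi's product today; purely as an illustration of what the commenter is asking for, a hypothetical set of knobs might look like this (all names and defaults invented):

```python
from dataclasses import dataclass, field


@dataclass
class SearchSteering:
    """Hypothetical steering knobs for an agentic search run.

    None of these parameters exist in any shipping product; they sketch
    the category-weighting and crawl-budget controls described above.
    """

    # Relative emphasis per topic category, e.g. for trip planning.
    category_weights: dict = field(
        default_factory=lambda: {"nature": 0.5, "nightlife": 0.2, "places": 0.3}
    )
    crawl_budget_s: int = 60   # max seconds to spend fetching pages
    niche_bias: float = 0.0    # 0 = mainstream sources, 1 = obscure sources

    def normalized_weights(self) -> dict:
        """Return category weights rescaled to sum to 1."""
        total = sum(self.category_weights.values())
        return {k: v / total for k, v in self.category_weights.items()}
```

The point of normalizing is that a user could then nudge one category up or down without having to keep the others summing to a fixed total.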
Just recently started paying for Kagi search and quite love it.
I’m an Assistant Principal, so I use it to help me get better with spreadsheets, churn through complex formulas, and handle other miscellaneous tasks for feedback and assistance. I also feed it a lot of screenshots to help it consume information.
I think I might be stuck with both for a while as I’m not sure Kagi can quite fill this gap yet.
I guess I should also explore how capable the free version is at this point, too.
You have a spend limit, but the assistant has dozens of models.
Prompt: "At a recent SINAC conference (approx Sept 2025) the presenters spoke about SINAC being underresourced and in crisis, and suggested better leveraging of and coordination with NGOs. Find the minutes of the conference, and who was advocating for better NGO interaction."
The conference was actually in Oct 2024. The approx date in parens causes Gemini to create an entirely false narrative, which includes real people quoted out of context. This happens in both Gemini regular chat and Gemini Deep Research (in which the narrative gets badly out of control).
Kagi reasonably enough answers: "I cannot find the minutes of a SINAC conference from approximately September 2025, nor any specific information about presenters advocating for better NGO coordination at such an event."
I tried the quick assistant a bit (I don't have Ultimate, so I can't try research), and while the writing style seems slightly different, I don't see much difference in the information compared to using existing models through the general Kagi assistant interface.
75 more comments available on Hacker News