Kagi News
Posted 3 months ago · Active 3 months ago
blog.kagi.com · Tech · Story · High profile
Sentiment: supportive/mixed · Debate: 60/100
Key topics
Kagi News
News Aggregation
RSS Feeds
AI-Powered News
Kagi News is a new service that aggregates news from various sources and provides a daily summary, sparking discussion on its features, limitations, and potential impact on news consumption.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 21m after posting
- Peak period: 154 comments (Day 1)
- Average per period: 40 comments
- Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Sep 30, 2025 at 11:09 AM EDT (3 months ago)
2. First comment: Sep 30, 2025 at 11:29 AM EDT (21m after posting)
3. Peak activity: 154 comments in Day 1 (hottest window of the conversation)
4. Latest activity: Oct 10, 2025 at 12:37 PM EDT (3 months ago)
ID: 45426490 · Type: story · Last synced: 11/23/2025, 1:00:33 AM
Could you guys maybe print it on paper and send it to my physical mailbox, so I can do this ritual with breakfast? :-)
Guten: A Tiny Newspaper Printer - https://news.ycombinator.com/item?id=42599599 - January 2025 (106 comments)
Getting my daily news from a dot matrix printer - https://news.ycombinator.com/item?id=41742210 - October 2024 (253 comments)
(I was very skeptical about Kagi Assistant but now I am a happy Kagi Ultimate subscriber).
I like that Kagi charges for their service, so their motive is to provide services for that cost, and not with ads on top of it.
That said, all my friends think I'm insane and poke fun at me for paying for search, so I imagine we're a small minority.
People just hate paying for software in general in my experience, especially a subscription.
I have multiple good friends who refuse to pay 99 cents a month for 50 GB of iCloud storage so they can back up their phones, and instead keep all their precious memories on a single device that is out and about.
I do live these days with the understanding that pretty much all of my personal info is out there one way or another, a social security number is about as private as a phone number these days.
My credit union does an infinite redirect on login. Works fine on Chrome (and all other major browsers).
Perplexity's web search is entirely broken on the mobile version. It loops with some error and becomes unresponsive. Works fine on Safari.
Plenty of other random stuff breaks at least a few times per day, usually from login redirects and authentication. Extensions like 1Password only manage to autofill some of the time. The list goes on.
It's just a nice interface for all LLMs which I often use on mobile or laptop for various work and also private tasks.
The last months have shown that there is no single LLM worth investing in (today's "top" LLM is tomorrow's second-in-class).
You get multiple LLMs in a single interface, with a single login and a single subscription to maintain, all your threads stored in the same place, the ability to switch between models in a thread, custom models...
Kagi's contracts with LLM providers are the ones businesses get with actual privacy protections which is also nice.
Because not every site has an RSS feed. For example, when Claude Sonnet 4.5 was released it would have made sense to have one, but there is no RSS feed from Anthropic. Being compatible with the entire web instead of just a subset of it is useful.
I'm currently on the hunt for an RSS reader that has good filtering and sorting functionality, so I can (for instance) pull several feeds from only certain sources, but not see any posts/articles about terms A or B, yet see and sort any posts with term C by time, followed by either posts from source 1 with terms C and D, or posts from source 2 with terms E or F but not G, which would be sorted by relevance.
I know that's a complicated and probably poorly written explanation, but I'm imagining something like Apple Mail Rules for RSS.
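The kind of rule engine being asked for here could be sketched roughly like this. Everything below is hypothetical: the entry fields, the term names, and the rule set are made up for illustration, and the entries are assumed to be already fetched from the feeds.

```python
# A minimal sketch of Apple-Mail-style rules over RSS entries.
# Each entry is a dict like {"title": ..., "published": <unix seconds>}.

def title_has(entry, *terms):
    """Case-insensitive check for any of the given terms in the title."""
    title = entry["title"].lower()
    return any(term in title for term in terms)

def apply_rules(entries):
    # Rule 1: never show anything mentioning "foo" or "bar".
    kept = [e for e in entries if not title_has(e, "foo", "bar")]
    # Rule 2: of what's left, show posts mentioning "baz", newest first.
    return sorted([e for e in kept if title_has(e, "baz")],
                  key=lambda e: e["published"], reverse=True)
```

Per-source rules with relevance sorting would follow the same shape: a predicate plus a sort key per rule, evaluated in order.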
I might not agree with all decisions Kagi makes, but this is gold. Endless scrolling is a big indicator that you're a consumer not a customer.
Someone recently highlighted the shift from social networks to social media in a way I'd never thought about:
>> The shift from social networks to social media was subtle, and insidious. Social networks, systems where you talk to your friends, are okay (probably). Social media, where you consume content selected by an algorithm, is not. (immibis https://news.ycombinator.com/item?id=45403867)
Specifically, in the same way that insufficient supply of mortgage securities (there's a finite number of mortgages) led to synthetic CDOs [0] in order to artificially boost supply of something there was a market for.
Social media and 24/7 news (read: shoving content from strangers into your eyeballs) are the synthetic CDOs of content, with about the same underlying utility.
There is in fact a finite amount of individually useful content per unit of time.
[0] If you want the Michael Lewis-esque primer on CDOs https://m.youtube.com/watch?v=A25EUhZGBws
This is a great way to put it. Much of the social media content is a derivative/synthetic representation of actual engagement. Content creators and influencers can make us "feel" like we have a connection to them (eg: "get ready with me!" type videos), but it's not the same as genuine connection or communication with people.
but now it's ABSOLUTELY EVERYWHERE and almost completely socially acceptable. In fact, people look at you weird if you don't have a favorite youtuber or what-have-you.
It's not healthy. Not healthy one bit. Whereas it used to be for 'others' (meaning rich and famous people who lived lives we could never hope for), parasocial relationships tend to be focused on people who are 'just like us' now. There's probably something in there to be studied.
Please expand obscure acronyms, not everyone lives in your niche.
[0] https://reederapp.com
Anyway, there's this https://netnewswire.com - https://github.com/Ranchero-Software/NetNewsWire (mac native) if someone is looking for an open source alt.
Now I just read the news on a Sunday (unless I'm doing something much more exciting). For the remainder of the week I don't read the news at all. It's the way my grandad used to read the news when he was a farmer.
I've found it to be a convenient format. It lets you stay informed, while giving enough of a gap for news stories to develop and mature (unless they happened the day before). There's less speculation and rumour, more established detail, and it has reduced my day-to-day stress.
Annoyingly I still hear news from people around me, but I try to tune it out in the moment. I can't believe I used to consume news differently and it baffles me why I hear of people reading/watching/listening to the news 10+ times per day, including first thing when they awaken and last thing before they sleep. Our brains were not designed for this sort of thing.
I would agree that a single daily news update is useful (and healthy), but this must also be reflected in the choice of topics and the type of reporting.
Bunch of discussion here 3 months ago? https://news.ycombinator.com/item?id=44518473
It was in beta then.
The UK section seems to have a heavy bias towards news from Scotland.
It looks too simplistic for me to actually use.
When Biden was president I barely heard anything about US politics, but with Trump in power it's hard to avoid.
[0] https://news.ycombinator.com/item?id=45427513
This is pulling the content of the RSS feeds of several news sites into the context window of an LLM and then asking it to summarize news items into articles and fill in the blanks?
I'm asking because that is what it looks like, but AI / LLMs are not specifically mentioned in this blog post, they just say news are 'generated' under the 'News in your language' heading, which seems to imply that is what they are doing.
I'm a little skeptical of the approach: when you ask an LLM to point to 'sources' for the information it outputs, as far as I know there is no guarantee that those are correct. And it does seem like sometimes they just use pure LLM output, as no sources are cited, or it's quoted as 'common knowledge'.
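The pipeline being guessed at above would look something like the sketch below. This is an assumption about how such a service might work, not Kagi's actual implementation; `call_llm` is a placeholder for whatever model API is used, and the prompt wording is invented.

```python
# Hypothetical feed-to-summary pipeline: number each fetched item so the
# model can cite "[1]", "[2]", ... in its output.

def build_prompt(items):
    """Pack fetched feed items into one numbered prompt."""
    lines = [f"[{i}] {item['title']} ({item['url']})"
             for i, item in enumerate(items, start=1)]
    return ("Summarize today's news. Cite items by number.\n\n"
            + "\n".join(lines))

def summarize(items, call_llm):
    # call_llm is a stand-in for a real model API call.
    return call_llm(build_prompt(items))
```

Note that numbering items this way only constrains which URLs the model can cite; it does nothing to guarantee that a sentence attributed to "[1]" is actually supported by item 1, which is exactly the concern raised above.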
That’s not news. That’s news-adjacent random slop.
As an example from one of their sources, you can only re-publish a certain number of words from an article in The Guardian (100 commercially, 500 non-commercially) without paying them.
But instead, Kagi "helpfully" regurgitates the whole story, visits the article once, delivers it to presumably thousands, and it can't even be bothered to display all of the sources it regurgitates unless you click to expand the dropdown. And even then the headline itself is one additional click away, and they straight up don't even display the name of the journalist in the pop-up, just the headline.
Incredibly shitty behaviour from them. And then they have the balls to start their about page with this:
> Why Kagi News? Because news is broken.
I don't know how they do it, and I'm not sure I care; the result is that they've eliminated both clickbait and ragebait, and the news is indeed better off for it!
Not gonna call it the worst insult to journalism I've ever seen because I've seen factually(.)so which does essentially the same thing but calls it an "AI fact check", but it's not much better.
It's like instead of borrowing a book from the library, there's like a spokesperson at the entrance who you ask a question and then blindly believe whatever they say.
This is exactly how I want my news to be. Nothing worse than a headline about a new vaccine breakthrough, followed by a first paragraph that starts with "it was a cold November morning as I arrived in..."
I guess it's a matter of taste, but I prefer it short and to the point
Hmmm. Here I will quote some representative sections from the announcement [1]:
>> News is broken. We all know it, but we’ve somehow accepted it as inevitable. The endless notifications. The clickbait headlines designed to trigger rather than inform, driven by relentless ad monetization. The exhausting cycle of checking multiple apps throughout the day, only to feel more anxious and less informed than when we started. This isn’t what news was supposed to be. We can do better, and create what news should have been all along: pure, essential information that respects your intelligence and time.
>> .. Kagi News operates on a simple principle: understanding the world requires hearing from the world. Every day, our system reads thousands of community curated RSS feeds from publications across different viewpoints and perspectives. We then distill this massive information into one comprehensive daily briefing, while clearly citing sources.
>> .. We strive for diversity and transparency of resources and welcome your contributions to widen perspectives. This multi-source approach helps reveal the full picture beyond any single viewpoint.
>> .. If you’re tired of news that makes you feel worse about the world while teaching you less about it, we invite you to try a different approach with Kagi News, so download it today ...
I don't see any evidence from these selections (nor the announcement as a whole) that their approach states, assumes, or requires a value/fact dichotomy. Additionally, I read various example articles to look for evidence that their information architecture groups information along such a dichotomy.
Lastly, to be transparent, I'll state a claim that I find to be true: for many/most statements, it isn't that difficult nor contentious to separate out factual claims from value claims. We don't need to debate the exact percentages or get into the weeds on this unless you think it will be useful.
I will grant this -- which is a different point than the one the commenter above made -- when reading various articles from a particular source, it can take effort and analysis to suss out the source's level of intellectual honesty, ulterior motives, and other questions I mention in my sibling comment.
[1]: https://blog.kagi.com/kagi-news
Unfortunately, the above is nearly a cliché at this point. The phrase "value judgment" is insufficient because it occludes some important differences. To name just two that matter; there is a key difference between (1) a moral value judgment; (2) selection & summarization (often intended to improve information density for the intended audience).
For instance, imagine two non-partisan medical newsletters. Even if they have the same moral values (e.g. rooted in the Hippocratic Oath), they might have different assessments of what is more relevant for their audience. One could say both are "biased", but does doing so impart any functional information? I would rather say something like "Newsletter A is run by Editorial Board X with such-and-such a track record and is known for careful, long-form articles" or "Newsletter B is a one-person operation known for a prolific stream of hourly coverage." In this example, saying the newsletters differ in framing and intended audience is useful, but calling each "biased in different ways" is a throwaway comment (having low informational content in the Shannonian sense).
Personally, instead of saying "biased" I tend to ask questions like: (a) Who is their intended audience; (b) What attributes and qualities consistently shine through?; (c) How do they make money? (d) Is the publication/source transparent about their approach? (e) What is their track record about accuracy, separating commentary from factual claims, professional integrity, disclosure of conflicts of interest, level of intellectual honesty, epistemic standards, and corrections?
(I say this sarcastically and unhappily)
I use RSS with newsboat and I get mainstream news by visiting individual sites (nytimes.com, etc.) and using the Newshound aggregator. Also, of course, HN with https://hn-ai.org/
Ironically, this submission is at the top of that website :)
Then I got the machine to write a front-end that visualises them and builds a search query for you: https://pastebin.com/HNwytYr9
enjoy
I think Google hates the loss of no/few ads or lame suggestions.
I'm sorry, I know how to use your tool?? Didn't you put these keywords in to be used?
Kagi founder here. I am personally not an LLM-optimist. The thing is that I do not think LLMs will bring us to "Star Trek" level of useful computers (which I see humans eventually getting to) due to LLM's fundamentally broken auto-regressive nature. A different approach will be needed. Slight nuance but an important one.
Kagi as a brand is building tools in service of its users, no particular affinity towards any technologies.
When you go to Google News, the way they group together stories is AI (pre-LLM technology). Kagi is merely taking it one step further.
I agree with your concern. I see this as a convenient grouping, and if any interests me I can skip reading the LLM summary and just click on the sources they provide (making it similar to Google News).
Do you know that's what they're doing? They are a search engine after all. They do run their own indexer, as well as cache results from other sources.
If they're feeding urls to an AI, why can't they validate AI output urls are real? Maybe they do.
I would argue creating your own summary is several steps beyond an ordering algorithm.
You don't and you should not use this one either.
It actually seems more like an aggregator (like ground.news) to me. And pretty much every single sentence cites the original article(s).
There are nice summaries within an article. I think what they mean is that they generate a meta-article after combining the rest of them. There's nothing novel here.
But the presentation of the meta-article and publishing once a day feel like great features.
> And pretty much every single sentence cites the original article(s).
Yeah but again, correct me if I'm wrong, but I don't think asking an LLM to provide a source / citation yields any guarantee that the text it generates alongside it is accurate.
I also see a lot of text without any citations at all, here are three sections (Historical background, Technical details and Scientific significance) that don't cite any sources: https://kite.kagi.com/s/5e6qq2
Google points to phys and phys is a republish of the MIT article.
I guess I'm trying to understand your comment. Is there a distinction you're making between LLM summaries or LLM generated text, or are you stating that they aren't being transparent about the summaries being generated by LLMs (as opposed to what? human editors?).
Because at some point when I launched the app, it did say summaries might be inaccurate.
Looks like you found an example where it isn't properly citing the summaries. My guess is that they will tighten this up, because I looked mostly at the first and second page and most of those articles seemed to have citations in the summaries.
Like most people, I would want those everywhere to guard against potential hallucinations. No, the citations don't guarantee that there weren't any hallucinations, but if you read something that makes you go "huh" – the citations give you a low-friction opportunity to read more.
But another sibling commenter talked about the phys.org and google both pointing to the same thing. I agree, and this is exactly an issue I have with other aggregators like Ground.news.
They need to build some sort of graph that collapses duplicates. I don't need an article to say "30 sources" when 26 of them are just reprints of an AP/Reuters wire story. That shouldn't count as 30 sources.
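One plausible way to collapse wire-service reprints before counting "sources" is to cluster articles whose text overlap crosses a similarity threshold. This is a sketch of that idea, not anything Ground.news or Kagi actually does; the shingle size and the 0.6 threshold are arbitrary illustrations, not tuned values.

```python
# Cluster near-duplicate articles by Jaccard similarity of word shingles.

def shingles(text, n=3):
    """Set of n-word windows from the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def dedup(articles, threshold=0.6):
    clusters = []  # each cluster: list of articles with near-identical text
    for art in articles:
        s = shingles(art["text"])
        for cluster in clusters:
            if jaccard(s, shingles(cluster[0]["text"])) >= threshold:
                cluster.append(art)  # a reprint of this cluster's story
                break
        else:
            clusters.append([art])  # genuinely new story
    return clusters
```

With something like this, 26 wire reprints would collapse into one cluster, and the "sources" count could be the number of clusters rather than the number of URLs.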
...yes? If I go to a website called "_ News" (present company included), I expect to see either news stories aggregated by humans or news stories written and fact checked by humans. That's why newspapers have fact checking departments, but they're being replaced by something with almost none of the utility and its proponents are framing the benefits of the old system as impossible or impractical.
Like, I was asking whether they were expecting the curation/summarization to be done by humans at Kagi News.
Either you mean every time you read something interesting (“huh”) you should check it. But in that case, why bother with reading the AI summary in the first place…
Or you mean that any time you read something that sounds wrong, you should check it. But in that case, everything false in the summaries that happens to sound true to you will be confirmed in your mind without you ever checking it.
The main point of my original comment was that I wanted to understand what this is, how it works and whether I can trust the information on there, because it wasn't completely clear to me.
I'm not super up to date with AI stuff, but my working knowledge is that I should never trust the output of an LLM and always verify it myself, so therefore I was wondering if this is just LLM output or if there is some human review process, or a mechanism related to the citation functions that makes it output of a different, more trusted category.
I did catch the message on the loading screen as well now, I do still think it could be a little more clear on the individual articles about it being LLM generated text, apart from that I think I understand somewhat better what it is now.
Gmail seems like the easiest piece of the Google puzzle to replace. Different calendar systems have different quirks around repeating events, you sometimes need to try a variety of search engines to find what you're looking for, Docs aren't bug-for-bug equivalent to the Office or iCloud competitors, YouTube has audience, monetization, and hosting scale... Gmail is just "make an email account with a different provider and switch all of your accounts to use the new address." They don't even give you that much storage for free Gmail; it's 15GB, which lots of other email providers can match (especially paid ones). You can import your old emails to your new provider or just store them offline with a variety of email clients.
Is updating all of your accounts (and telling your contacts about the new address) what you consider to be the hard part, or do you actually use any Gmail-specific features? Genuinely curious, as I tend to disregard almost all mail-provider-specific features that any of my mail providers try to get me excited about (Gmail occasionally adds some new trick, but Zoho Mail is especially bad about making me roll my eyes with their new feature notifications).
2-3 spam emails slip through every week, and sometimes a false positive happens when I sign up for something new. I don't see this as a huge problem, and I doubt Gmail is significantly better.
I agree with the other commenter, I use Fastmail and I get very few spam emails, most of which wouldn't have been detected by gmail either because they're basically legitimate looking emails advertising scams. I have a Gmail account I don't use and it seems like it receives about the same amount of spam, if not more.
1: https://www.cloudflare.com/en-gb/learning/email-security/dma...
So if this automates the process of fetching the top news from a static list of news sites and summarizing the content in a specific structure, there's not much that can go wrong there. There's a very small chance that the LLM would hallucinate when asked to summarize a relatively short amount of text.
Not that the userbase of 50k is big enough to matter right now, but still...
So this might result in lower traffic for "anyone involved in journalism" – but the constant doomscrolling is worse for society. So I think we can all agree that the industry needs to veer towards less quantity and more quality.
Actual journalism doesn't rely on advertising, and is subscription based. Anyone interested in that is already subscribed to those sources, but that is not the audience this service is aiming for. Some people only want to spend a few minutes a day catching up with major events, and this service can do that for them. They're not the same people who would spend hours on news sites, so these sites are not missing any traffic.
I continue to subscribe to Reuters because of the quality of journalism and reporting. I have also started using Kagi News. They are not incompatible.
Imagine if Google News used an LLM to show summaries to users without explicitly saying it's AI in the UI.
Ironically, one of the first LLM-induced mistakes experienced by average people was a news summary: https://www.bbc.com/news/articles/cge93de21n0o.amp
Kagi made search useful again, and their genAI stuff can be easily ignored. Best of both worlds -- it remains useful for people like myself who don't want genAI involved, but there's genAI stuff for people who like that sort of thing.
That said, if their genAI stuff gets to be too hard to ignore, then I'd stop using or praising Kagi.
That this is about news also makes it less problematic for me. I just won't see it at all, since I don't go to Kagi for news in the first place.
Even Google calls the overview box AI Overview (not saying it doesn't hurt content hosting sites.)
It's the same as wanting to know whether a study of how well humans drive relied on self-assessment or on empirical evidence. Humans just aren't that good at that task, so it would be good to know going in.
Just call it Kagi Vibes instead of Kagi News as news has a higher bar (at least for me)
I've seen it so many times it definitely needs a name. As an entity of human intelligence, I am offended by these silly thought-terminating arguments.
To be honest though that’s not the point. I’m more annoyed they weren’t transparent about their methods than I am about them using AI.
I feel this is what Apple News should've been. Instead it's just a god-awful, ad-filled mess of news articles. And the only reason I have it is because of Apple One. But it is a clearly neglected product.
I also pay for ground news but it hasn't met my expectations, mostly because there's a lot of redundancy with wire stories. Like it'll show 50 sources but they're all just regurgitating the same AP or Reuters article. So it skews the "bias"
I tended towards Axios but lately it's gotten a bit paywalled and less informative. Can't wait to incorporate Kagi News into my daily workflow.
Apart from that, it's really nice! Good job, kagi team!
280 more comments available on Hacker News