I think nobody wants AI in Firefox, Mozilla
Mood: controversial
Sentiment: negative
Category: tech
Key topics: Mozilla Firefox, AI integration, browser security
The article argues that adding AI to Firefox and Mozilla may not be desirable, sparking a heated discussion among users.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 18m after posting
Peak period: 159 comments (Day 1)
Avg / period: 80
Based on 160 loaded comments
Key moments
1. Story posted: 11/14/2025, 2:05:00 PM (4d ago)
2. First comment: 11/14/2025, 2:23:05 PM (18m after posting)
3. Peak activity: 159 comments in Day 1 (hottest window of the conversation)
4. Latest activity: 11/17/2025, 4:08:26 PM (1d ago)
Yeah, they do. Go talk to anyone who isn't in a super-online bubble such as HN or Bsky or a Firefox early-adopter program. They're all using it, all the time, for everything. I don't like it either, but that's the reality.
We can take principled stands against these things, and I do because I am an obnoxiously principled dork, but the reality is it's everywhere and everyone other than us is using it.
They're already being preached at that they need a new phone or laptop every other year. Then there's a new social platform that changes its UI every six months or quarterly, and now the same is happening to their word processors and everything else.
This is kinda like how if you ask everyone how often they eat McDonald's, everyone will say never or rarely. But they still sell a billion burgers each year :) Assuming you're not polling your Bsky buddies, I suspect these people are using AI tools a lot more than they admit or possibly even know. Auto-generated summaries, text generation, image editing, and conversation prompts all get a ton of use.
Do you actually know anyone like that? Using Firefox nowadays is itself a "super-online bubble".
Not really. Go talk to anyone who uses the internet for Facebook, Whatsapp, and not much else. Lots of people have typed in chatgpt.com or had Google's AI shoved in their face, but the vast majority of "laypeople" I've talked to about AI (actually, they've talked to me about AI after learning I'm a tech guy -- "so what do you think about AI?") seem to be resigned to the fact that after the personal computer and the internet, whatever the rich guys in SF do is what is going to happen anyway. But I sense a feeling of powerlessness and a fear of being left behind, not anything approaching genuine interest in or excitement by the technology.
Mmm, summarized garbage.
> Also I imagine you frequently read summaries of books
This isn't what LLM summaries are being used for, however. And I don't really do this, unless you consider a movie trailer to be a summary. I certainly don't do it with books, again, unless you count any kind of commentary or review as a summary. And I certainly would not use an LLM summary as a book or movie recommendation.
Recipe pages full of fluff.
Review pages full of fluff.
Almost any web page full of fluff, which is a rapidly rising proportion.
> And how would I know the LLM has error bounds appropriate for my situation?
You consider whether you care if it is wrong, and then you try it a couple of times, and apply some common sense when reading the summaries, just the same as when considering if you trust any human-written summary. Is this a real question?
I was thinking more along the lines of asking an LLM for a recipe or review, rather than asking for it to restrict its result to a single web page.
The opportunity cost of "missing out" on reading a page you're unsure enough about to want a summary of is not likely to be high, and similarly it doesn't matter much if you end up reading a few paragraphs before you realise you were misled.
There are very few tasks where we absolutely must have accurate information all the time.
How do I know what I'd be reading is correct?
To your question: for the most part, I've found summaries to be mostly correct enough. The summaries are useful for deciding if I want to dig into this further (which means actually reading the full article). Is there danger in that method? Sure. But no more danger than the original article. And FAR less danger than just assuming I know what the article says from a headline.
So, how do you know its summaries are correct? They are correct enough for the purpose they serve.
Of course, as more and more pieces of writing out there become slop, does any of this matter?
However, 99% of the time I use this it isn't because I need an accurate summary, but because I've come across some overly long article that I don't even know whether I'm interested in reading. So I have Mistral Small generate a summary to give me a ballpark of what the article is even about, and then judge whether I want to spend the time reading the full thing.
For that use case I don't care if the summary is correct, just whether it's in the ballpark of what the article is about (from the few articles I did end up reading, the summaries were close enough to make me think it does a good enough job). Even if it's incorrect, the worst that can happen is that I end up not reading an article I might have found interesting, but that's what I'd do without the summary anyway. Because I need to run my Tcl/Tk script, select the appropriate prompt (I have a few saved), copy/paste the text, and then wait for the thing to finish, I only use it on articles I'm already biased against reading.
I like LLMs; I've even built my own personal agent on our Enterprise GPT subscription, tuned to my professional needs, but I'd never use them to learn anything.
For example: you summarize a YouTube link to decide whether its content is something you're interested in watching. Even if summarizations like that are only 90% correct, 90% of the time, it's still really helpful; you get the info you need to decide whether to read or watch the long-form content.
I have it connected to a local Gemma model running in Ollama and use it to quickly summarize webpages (nobody really wants to read 15 minutes' worth of personal anecdotes before getting to the one paragraph that actually has relevant information) and to find information within a page, kinda like Ctrl-F on steroids.
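A minimal sketch of that kind of local summarizer, assuming Ollama's default HTTP API on port 11434 and a Gemma model already pulled (the model name and prompt wording here are placeholders):

    import json
    import urllib.request

    # Assumes a local Ollama server (default port 11434) with a Gemma
    # model pulled, e.g. `ollama pull gemma2`. Prompt wording is illustrative.
    def summarize(text: str, model: str = "gemma2") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": "Summarize this article in three sentences:\n\n" + text,
            "stream": False,  # return one JSON object instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]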
The machine is sitting there anyway, and the extra electricity cost is buried in the hours of gaming that GPU is also used for, so I haven't noticed it yet; if you game, the graphics card will be obsolete long before the small amount of extra wear becomes obvious. YMMV if you don't already have a gaming rig lying around.
OpenWebUI is compatible with the Firefox sidebar.
So grab Ollama and your preferred model.
Install OpenWebUI.
Connect OpenWebUI to Ollama.
Then in Firefox, open about:config and set browser.ml.chat.provider to your local OpenWebUI instance.
Google suggests that you might also need to set browser.ml.chat.hideLocalhost to false, but I don't remember having to do that.
I think it's technically experimental, but I've been using this since day one with no issues.
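Before pointing the sidebar at it, a quick sanity check (a sketch) that the local Ollama backend is actually up; the /api/tags endpoint lists pulled models, while the OpenWebUI URL you set in browser.ml.chat.provider depends on how you installed it:

    import json
    import urllib.request

    # Lists the models the local Ollama server has pulled; if this fails,
    # the sidebar chat won't have a working backend either.
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        models = json.loads(resp.read())["models"]
    print([m["name"] for m in models])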
Literally Google's first hit for me: https://www.reddit.com/r/Cooking/comments/jkw62b/i_developed...
I like to keep AI at arm's length: it's there if I want it, but it can fuck off otherwise.
Lots of people really do seem to want it in everything though
I want it in text-to-speech (TTS) engines, in transliteration/translation, and... routing tickets to the correct teams/people would also be awesome :) (classification, where mistakes can easily be corrected)
Anyway, we used a TTS engine before OpenAI, and it was AI based. It had to be: even for a niche language, some people couldn't tell it was a computer. From some phrases you can tell, but it's very high quality and correctly knows which parts of a word to put the emphasis on.
https://play.ht/ if anyone is wondering.
On second thought this probably depends on the caption language.
For the most part, Whisper does much better than stuff I've tried in the past like Vosk. That said, it makes a somewhat annoying error that I never really experienced with others.
When the audio is low quality for a moment, it might misinterpret a word. That's fine, any speech recognition system will do that. The problem with Whisper is that the misinterpreted word can affect the next word, or several words. It's trying to align the next bits of audio syntactically with the mistaken word.
Older systems, you'd get a nonsense word where the noise was but the rest of the transcription would be unaffected. With Whisper, you may get a series of words that completely diverges from the audio. I can look at the start of the divergence and recognize the phonetic similarity that created the initial error. The following words may not be phonetically close to the audio at all.
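For anyone who wants to see this failure mode for themselves, a minimal sketch with the open-source openai-whisper package (the model size and file name are placeholders; it needs ffmpeg on the PATH). The cascade shows up as several consecutive segments drifting away from the audio after one noisy word:

    import whisper  # pip install openai-whisper (requires ffmpeg)

    # Load a small model; larger ones ("medium", "large") make fewer of
    # these cascading mistakes but run slower.
    model = whisper.load_model("base")
    result = model.transcribe("audio.mp3")

    # Inspect the per-segment text around noisy spans.
    for seg in result["segments"]:
        print(f'[{seg["start"]:7.2f}-{seg["end"]:7.2f}] {seg["text"]}')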
You don't actually state whether you believe Parakeet is susceptible to the same class of mistakes...
I haven't seen those issues myself in my usage, it's just a suggestion, no need to be sarcastic about it.
Your point about the caption language is probably right, though. It's worse with jargon and proper names, and worse with non-American English speakers. If they don't even get all the common accents of English right, I have little hope for other languages.
The minimal grammatically correct sentence is simply a verb, and it's left as an exercise for the listener to work out what the subject and object are expected to be. (Essentially, the more formal/polite you get, the more things are added. You could say "kore wa atsui desu" to mean "this is hot." But you could also just say "atsui," which could also be interpreted as a question instead of a statement.)
Chinese seems to have similar issues, but I know less about how it's structured.
Anyway, it's really nice when Japanese music on YouTube includes a human-provided translation as captions. The automated ones are useless, when they don't give up entirely.
It does seem to do a few clever things. For lyrics it seems to first look for existing transcribed lyrics before making its own guesses (timing, however, can be quite bad when it does this). Outside of that, AI-transcribed video is like an alien who has read a book on a dead language transcribing based on what the book says the words should sound like phonetically. At times that can be good enough.
(A note on sound quality: it's not about perceived quality. Many low-res videos have perfectly acceptable, if somewhat lossy, sound, yet the transcriber goes insane. It seems to prefer 1080p videos with what I assume is a much higher audio bitrate.)
And here's Jeff Geerling, 15 months ago, showing how to use Whisper to make dramatically better captions: https://www.youtube.com/watch?v=S1M9NOtusM8
I assume Google has finally put some of their multimodal LLM work to good use. Before that, they were embarrassingly bad.
It's still AI, of course. But there is distinction between it and an LLM.
[0] https://github.com/openai/whisper/blob/main/model-card.md
Seems kinda weird for it not to meet the definition, at least in a tautological way, even if it's not the typical sense and it doesn't tend to be used for autoregressive token generation?
Audio models tend to be based more on convolutional layers than Transformers in my experience.
Idk what the definition of an LLM is, but it's indisputable that the technology behind Whisper is a close cousin of text decoders like GPT. IMO the more important question is how these things are used in the UX. Decoders don't have to be annoying; that is a product choice.
What people want is something that is better than nothing, and in that sense I can see how automatic captions are transformative in terms of accessibility.
Subtitles are good, though.
These days when the term "AI" is thrown around the person is usually talking about large language models, or generative adversarial neural networks for things like image generation etc.
Classification is a wonderful application of ML that long predates LLMs. And LLMs have their purpose and niche too, don't get me wrong; I use them all the time. But AI right now is a complete hype train, with companies trying to shove LLMs into absolutely anything and everything. Although I use LLMs, I have zero interest in an "AI PC" or an "AI web browser", any more than I need an AI toaster oven. Thank god companies have finally gotten the message about "smart appliances." I wish "dumb televisions" were more common; for a while it was looking like you couldn't buy a freakin' dishwasher that didn't have Wi-Fi, an app, and a bunch of other complexity-adding "features" that are neither required nor desired by most customers.
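As a concrete reminder of what that pre-LLM classification looks like, here's a minimal sketch with scikit-learn and made-up ticket data (the texts and team labels are placeholders), in the spirit of the ticket-routing example above:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy ticket-routing data; in practice you'd train on historical tickets.
    tickets = [
        "cannot log in after password reset",
        "invoice shows wrong amount for March",
        "app crashes when uploading a photo",
        "need a refund for a duplicate charge",
    ]
    teams = ["accounts", "billing", "engineering", "billing"]

    # Classic bag-of-words classifier: no LLM anywhere, and a wrong
    # routing decision is cheap for a human to correct.
    router = make_pipeline(TfidfVectorizer(), LogisticRegression())
    router.fit(tickets, teams)
    print(router.predict(["refund for wrong invoice amount"]))  # likely "billing"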
I very much do want what used to just be called ML, the kind that was invisible and actually beneficial: autocorrect, smart touchscreen keyboards, music recommendations, etc. But the problem is that all of that stuff is now also just being called "AI" left and right.
That being said, I think what most people mean when they say "AI" is really not as beneficial as it's being pushed to be. It has some uses, but I think most of those will not be the in-your-face AI being pushed now; they'll be in the background.
But we do have to acknowledge that "AI" has very much turned into an all-encompassing term for everything ML. It is getting harder and harder to read an article about something being done with "AI" and know whether it was a purpose-built model for a specific task, or data thrown into an LLM while hoping for the best.
They are purposefully making it harder and harder to just say "No AI" by obfuscating this, so we have to be very specific about what we are talking about.
Wow, you are an optimist. I do feel "it's close", but I wouldn't bet this close. But I wouldn't argue either, I don't know. Also, when it really pops, the consequences will be more disastrous than the bubble itself feels right now. It's literally hundreds of billions in circular investing. It's absurd.
FWIW, 10+ years ago I was arguing that your old pocket calculator is as much of an AI as anything ever could be. I only kinda stopped doing that because it's tiring to argue with silly buzzwords, not because anything has changed since. When "these things were called ML," ML was just a buzzword too, same as AI and AGI are now. I'm kinda glad "ML" was relieved of that burden, because it ultimately means a very real thing (just "parametrizing your algorithm by non-hardcoded values"). And (unlike with basic autocorrect, which no end user even perceives as "AI" or "ML") when you use ChatGPT, you don't use "ML"; you use a rigid algorithm not meaningfully different from what was running on your old pocket calculator, except a billion times bigger, and no one actually knows what it does.
So, yes, AI is just a stupid marketing buzzword right now, but so was ML, so was blockchain, so was NoSQL and many more. Ultimately this one is more annoying only because of scale, of how detrimental to society the actions of the culpable people (mostly OpenAI, Altman, Musk) were this time.
And I hope no one gets started about how "AI" is an inaccurate term, because it's not. That's exactly what we are doing: simulating intelligence. "ML" is closer to describing the implementation, and honestly, what difference does that make for most people using it?
It is appropriate to discuss these things at a very high level in most contexts.
What I definitively don't want, yet it's what is currently happening, is a chatbot crammed into every single app and then shoved down your throat.
Having the feature on a menu somewhere would be fine. The problem is the confluence of new features now becoming possible, and companies no longer building software for their users but as vehicles to push some agenda. Now we’re seeing this in action.
I can only hope they won't change it back at the next update (already happened once).
Just to push their annoying Google Assistant.
LLMs are products that want to collect data and be trained on a huge amount of input, with upvotes and downvotes to calibrate the quality of their output, in the hope that they will eventually become good enough to replace the very people who trained them.
The best part is, we're conditioned to treat those products as if they are forces of nature. An inevitability that, like a tornado, is approaching us. As if they're not the byproduct of humans.
If we consider that, then we the users get the short end of the stick, and we only keep moving forward with it because we've been sold on the idea that whatever lies at the peak is a net positive for everyone.
That, or we just don't care about the end result. Both are bad in their own way.
Maybe I'll ask Gemini to write one...
You can disable AI in Google products.
E.g. in Gmail: go to Settings (the gear icon), click See all settings, navigate to the General tab, scroll down to find Smart features and personalization and uncheck the checkbox.
> Important: By default, smart feature settings are off if you live in: The European Economic Area, Japan, Switzerland, United Kingdom
(same source as in grandparent comment).
(I desperately want to disable the AI summaries of email threads, but I don't want to give up the extra spam filtering benefit of having the smart features enabled)
Google now "helpfully" decides that you must want a summary of literally every file you open in Drive, which is extra annoying because the summary box causes the UI to move around after the document is opened. The other day I was looking at my company's next year's benefits PDFs and Gemini decided that when I opened the medical benefits paperwork that the thing I would care about is that I can get an ID card with an online account... not the various plan deductibles or anything useful like that.
I turned off the "smart" features and the only thing that changed is that the nag box still pops up and shifts the UI around, but now there's a button that asks if you want a summary instead of generating it automatically.
All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.
The freak stampede of all these tech giants to shove AI down everybody's throat just shows that they perceive the technology as having huge potential to advance the above agenda, for themselves, or for their competitors to their detriment.
I'll bear that in mind the next time I'm getting a haircut. How do you think Bob's Barbers is going to achieve all of that?
https://www.thebignewsletter.com/about
> The Problem: America is in a monopoly crisis. A monopoly is, at its core, a private government that sets the terms, services, and wages in a market, like how Mark Zuckerberg structures discourse in social networking. Every monopoly is a mini-dictatorship over a market. And today, there are monopolies everywhere. They are in big markets, like search engines, medicine, cable, and shipping. They are also in small ones, like mail sorting software and cheerleading. Over 75% of American industries are more consolidated today than they were decades ago.
> Unregulated monopolies cause a lot of problems. They raise prices, lower wages, and move money from rural areas to a few gilded cities. Dominant firms don’t focus on competing, they focus on corrupting our politics to protect their market power. Monopolies are also brittle, and tend to put all their eggs in one basket, which results in shortages. There is a reason everyone hates monopolies, and why we’ve hated them for hundreds of years.
https://blogs.cornell.edu/info2040/2021/09/17/graph-theory-o... (Food consolidation)
https://followthemoney.com/infographic-the-u-s-media-is-cont... (Media consolidation)
https://www.kearney.com/industry/energy/article/how-utilitie... (US electric utilities)
https://aglawjournal.wp.drake.edu/wp-content/uploads/sites/6... [pdf] (Agriculture consolidation)
https://www.visualcapitalist.com/interactive-major-tech-acqu... (Big Tech consolidation)
I think part of the Mozilla problem is that they are based in San Francisco, which puts them in touch with people from Facebook and Google and OpenAI every frickin' day, and they are so steeped in the FOMO Dilemma [1] that they can't hear the objection to NFT and AI features that users, particularly Firefox users, hate. [2]
I'd really like to see Mozilla move anywhere but the Bay Area, whether that's Dublin or Denver. When you aren't hanging out with "big tech" people at lunch and after work, and when you have to get in a frickin' airplane to meet with those people, you might start to "think different," get some empathy for users, produce a better product, and be a viable business as opposed to another out-of-touch and unaccountable NGO.
[1] Clayton Christensen pointed out in The Innovator's Dilemma that companies like Kodak and Xerox die because they focus on the needs of their current customers, who couldn't care less about the new shiny thing that can't satisfy their needs now but will be superior in, say, 15 years. Now we have the FOMO Dilemma, best illustrated by Windows 8, which went in a bold direction (tabletization) that users were completely indifferent to: firms now introduce things their existing customers hate because they read The Innovator's Dilemma and don't want to wind up like Xerox.
[2] we use Firefox because we hate that corporate garbage.
(1) Fully fund Firefox or an alternative browser (with a 100% open source commitment and verifiable builds so we know the people who get ideas like chatcontrol can't slip something bad in)
(2) Pass a law to the effect: "Violate DNT and the c-suite goes to jail and the company pays 200% of yearly revenue"
(3) same for having a cookie banner
Seems like maybe forking it in an agreeable way, and funding an EU crew to do the needful with the goal of upstreaming as much as possible.
I don't have insight into EU investments but that would provide a lot of bang for their euros.
(Also, point of order: Opera was always based in Norway, which is not a member of the European Union.)
Regulation.
Some weeks, if it's slow, he may struggle to make rent on his apartment; he doesn't have the time or capacity to engage in serious rent-seeking behavior.
But haircut chains like Supercuts absolutely do engage in shady behavior all the time, like games with how salons rent out chairs, or employing questionably legal trafficked workers.
And FYI, it turns out Supercuts is a wholly owned subsidiary of the Regis Corporation, which absolutely acquires other companies and plays all sorts of shady corporate games, including branching into other markets and monopoly efforts.
But if users really wanted agenda-free products and services, then those would win right? At least according to free market theory.
Not once in the history of tech has "the free market" succeeded in preventing big corps or investors with lots of money from doing something they want.
2. AI could be the next technology revolution
3. If we get on the AI bandwagon now we're getting in on the ground floor
4. If we don't get on the AI bandwagon now we risk being left behind
5. Now that we've invested into AI we need to make sure we're seeing return on our investment
6. Our users don't seem to understand what AI could possibly do so we should remind them so that they use the feature
7. Our users aren't opting in to the features we're offering so we should opt them in automatically
Like any other "big, unproven bet," everyone is rushing in. See also: "stories" making their way into everything (Instagram, Facebook, Telegram, etc.) and vertical short-form videos (TikTok, Reels, Shorts, etc.). The difference here is that companies have put literally tens or hundreds of billions of dollars into it, so, for many, if AI fails and the money is wasted, it could be an existential threat to entire departments or companies. Nvidia is such a huge percentage of the entire US economy that if the AI accelerator market collapses, it's going to wipe out something like ten percent of GDP.
So yeah, I get why companies are doing this; it's an actual 'slippery slope' that they fell into where they don't see any way out but to keep going and hope that it works out for them somehow, for some reason.
Similar to how I read about a bar in the UK that has an intentional Faraday cage to encourage people to interact with people in the real world.
That's the core issue. No one wants to fail early or fail fast anymore. It's "let's stick to our guns and push this thing hard and far until it actually starts working for us."
Sometimes the time just isn't right for a particular technology. You put it out there, try for a little bit, and if it fails, it fails. Move on.
You don't keep investing in your failure while telling your users "You think you don't want this, but trust us, you actually do."
I think there are more mundane (and IMO realistic) explanations than assuming that this is some kind of weird power move by all of software. I have a hard time believing that Salesforce and Adobe want to advance an agenda other than selling product and giving their C-suite nice bonuses.
I think you can explain a lot of this as:
1. Executives (CEOs, CTOs, VPs, whatever) got convinced that AI is the new growth thing
2. AI costs a _lot_ of money relative to most product enhancements, so there's an inherent need to justify that expense.
3. All of the unwanted and pushy features are a way of creating metrics that justify the expense of AI for the C-suite.
4. It takes time for users to effectively say "We didn't want this," and in the meantime a whole host of engineers, engineering managers, and product managers have gotten promoted and/or better gigs because they could say "we added AI" to their product.
There's also a herd effect among competing products that tends to make these things go in waves.
573 more comments available on Hacker News