No Ai* Here – a Response to Mozilla's Next Chapter
Key topics
The debate rages on about Mozilla's potential integration of AI into its browser, with some commenters sounding the alarm while others urge caution and consideration of the benefits. While some, like almosthere, suggest that dipping into AI might be worth the risk, others, like MrAlex94, express skepticism about its utility in a browser context. As the discussion unfolds, concerns about AI's potential impact on employment, mental health, and overall well-being are raised, with some, like Qem and a24j, painting a dire picture of AI's possible consequences. Amidst the backlash, a few voices, like clueless, caution against knee-jerk reactions, suggesting that Mozilla might develop its own locally hosted LLM model that could mitigate some concerns.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 12m after posting
- Peak period: 71 comments in the 0-6h window
- Average per period: 14.5 comments
- Based on 160 loaded comments
Key moments
- 01 Story posted: Dec 16, 2025 at 5:07 PM EST (18 days ago)
- 02 First comment: Dec 16, 2025 at 5:19 PM EST (12m after posting)
- 03 Peak activity: 71 comments in the 0-6h window, the hottest stretch of the conversation
- 04 Latest activity: Dec 20, 2025 at 5:46 PM EST (14 days ago)
It's more likely it will try to kill us by talking depressed people into suicide and providing virtual ersatz boyfriends/girlfriends to replace real human relationships, which is a functional equivalent of cyber-neutering, given people can't have children by dating LLMs.
In many other areas, there are zero "no AI" options at all.
Agents (like a research agent) could also be interesting
You'll have to max out your GPU and CPU. The horrible javascript will still be around.
Local AI features are great and I wish they were used more often, instead of just offloading everything to cloud services with questionable privacy.
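To make that concrete, here is a minimal sketch of the kind of local feature being described, using the Hugging Face transformers pipeline; the model name is illustrative, and any small local summarization model would do:

```python
# Minimal sketch of an on-device AI feature: summarization with a small
# open model, downloaded once and then run entirely locally -- no cloud
# round-trip. The model name is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

with open("article.txt") as f:
    article = f.read()

# truncation=True keeps over-long pages within the model's context window.
result = summarizer(article, max_length=120, min_length=30, truncation=True)
print(result[0]["summary_text"])
```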
I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.
And it doesn't look like the average computer with Steam installed is going to get above 8GB VRAM for a long time, let alone the average computer in general. Even focusing on new computers, it doesn't look that promising.
This will not result in locally running SOTA sized models, but it could result in a percentage of people running 100B - 200B models, which are large enough to do some useful things.
More importantly, it costs a lot of money to get that high bus width before you even add the memory. There is no way things like M Pro and Strix Halo take over the mainstream in the next few years.
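Some rough arithmetic behind those sizes (a sketch: it ignores KV cache and activation overhead, so real requirements run higher):

```python
# Back-of-the-envelope memory footprint: parameters * bytes-per-weight.
# params_billions * 1e9 weights * (bits/8) bytes / 1e9 bytes-per-GB
# simplifies to params_billions * bits / 8 GB.
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * bits_per_weight / 8

for params in (8, 100, 200):
    for bits in (16, 8, 4):
        print(f"{params:>3}B params @ {bits:>2}-bit: "
              f"{model_memory_gb(params, bits):6.1f} GB")

# Even at 4-bit quantization a 100B model wants ~50 GB of fast memory,
# which is why 8 GB VRAM machines are nowhere near running one.
```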
It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't want, that won't regain them marketshare, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less unprofitable product.
Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.
So the only user base is the power user. And then yes: sane defaults, and a way to turn things on and off. And functionality that makes power users tell their power user friends to give FF a try again. Firefox can't win the current browser war, because there is none: FF is not a player in the browser space. It's an idea backed by things like MDN and very, very smart people working on the webstack and web APIs at Mozilla.
LLM integration... is arguable. Maybe it'll make Chrome worse, maybe not. Clunky and obtrusive integration certainly will.
I don't want any of this built into my web browser. Period.
This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!
Seriously, once you've crossed the threshold to pay for something, they think that they can somehow manipulate you (advertising) or convince you (features) to pay them for it too. And honestly, if they do it with features, I'm willing to be convinced.
I don't understand why these CEOs are so confident they're standing out from the rest. Because really, they don't stand out.
Right now Firefox is a browser as good as Chrome, and in a few niche things better, but it's having a deeply difficult time getting/keeping marketshare.
I don't see their big masterplan for when Firefox is just as good as the other AI powered browsers. What will make people choose Mozilla? It's not like they're the first to come up with this idea and they don't even have their own models so one way or another they're going to play second fiddle to a competitor.
I think there's a really, really strong element of "2. ??? / 3. Profit!!!" in all this. And not just at Mozilla.
And now we have:
- An extra toolbar nobody asked for at the side. And while it contains some extra features now, I'm pretty sure they added it just to have some prominent space for an "Open AI Chatbot" button in the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window with the sidebar open, close it in another, then move back and open a new window, it thinks "hey, I need to show a sidebar which my user never asked for!". I also believe it sometimes opens itself after being closed. I don't like it at all.
- A "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items on the context menu (due to muscle memory), because when it got added the context menu resizes. Which was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.
Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral.
https://support.mozilla.org/en-US/kb/ai-chatbot This page not only prominently features cloud-based AI solutions; I can't actually even see local AI listed as an option.
https://arstechnica.com/tech-policy/2025/02/firefox-deletes-...
For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
> You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.
> (from the attached FAQ) Mozilla doesn’t sell data about you (in the way that most people think about “selling data”), and we don’t buy data about you. Since we strive for transparency, and the LEGAL definition of “sale of data” is extremely broad in some places, we’ve had to step back from making the definitive statements you know and love. We still put a lot of work into making sure that the data that we share with our partners (which we need to do to make Firefox commercially viable) is stripped of any identifying information, or shared only in the aggregate, or is put through our privacy preserving technologies (like OHTTP).
No they fucking haven't. Provide evidence for this.
As I remember, what Mozilla actually said was that courts interpreted "selling" to mean "exchanging products or services for money" and under this interpretation, Mozilla couldn't say they didn't sell your data.
until you can't. Because the option goes from being an entry in the GUI to something in about:config, then is removed from about:config so you have to add it manually, and then is removed completely. It's just a matter of time, but I bet that soon we'll see on Nightly that browser.ml.enable = false and company do nothing.
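For now, at least, the prefs do exist. A sketch of a user.js that pins them: browser.ml.enable is the pref named above, while the other names are assumptions based on current Firefox builds and may be renamed or removed between releases, which is exactly the worry being raised.

```
// user.js -- drop into the Firefox profile directory to pin prefs across
// restarts. browser.ml.enable is the pref named in the comment above; the
// chat prefs are assumptions from current builds and may change later.
user_pref("browser.ml.enable", false);        // local ML/inference features
user_pref("browser.ml.chat.enabled", false);  // AI chatbot integration
user_pref("browser.ml.chat.sidebar", false);  // chatbot entry in the sidebar
```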
Nobody wants a browser that's focused on diversifying its revenue, especially from Mozilla which pretends to be a non-profit "free software community".
Chrome is paid for by ads and privacy violations, and now Firefox is paid for by "AI" companies? That is a sad state of affairs.
Ungoogled Chromium and Waterfox are at best a temporary measure. Perhaps the EU would be willing to fund a truly free (as in libre) browser engine that serves the public interest.
> Nobody wants a browser that's focused on diversifying its revenue

I want a browser that has a sustainable business model so it won't collapse some time in the future. That means diversifying its revenue stream away from Google's search contract.
Putting back the home button, removing the tabs overview button, disabling sponsored suggestions in the toolbar, putting the search bar back, removing the new AI toolbar, disabling the "It's been a while since you've used Firefox, do you want to cleanup your profile?", disabling the long-click tab preview, disabling telemetry, etc. etc.
https://blog.mozilla.org/wp-content/blogs.dir/278/files/2025...
it's the cornerstone of their strategy to invest in local, sovereign ai models in an attempt to court attention from persons / organizations wary of us tech
it's better to understand the concern over mozilla's announcement the following way i think:
- mozilla knows that their revenue from default search providers is going to dry up because ai is largely replacing manual searching
- mozilla (correctly) identifies that there is a potential market in eu for open, sovereign tech that is not reliant on us tech companies
- mozilla (incorrectly imo) believes that attaching ai to firefox is the answer for long term sustainability for mozilla
with this framing, mozilla has only a few options to get the revenue they're seeking according to their portfolio, and it involves either more search / ai deals with us tech companies (which they claim to want to avoid), or harvesting data and selling it like so many other companies that tossed ai onto software
the concerns about us tech stack domination are valid and there is probably a way to sustain mozilla by chasing this, but breaking the us tech stack dominance doesn't require another browser / ai model, there are plenty already. they need to help unseat stuff like gdocs / office / sharepoint and offer a real alternative for the eu / other interested parties -- simply adding ai is mozilla continuing their history of fad chasing and wondering why they don't make any money, and demonstrates a lack of understanding imo about, well, modern life
my concern over the announcement is that mozilla doesn't seem to have learned anything from their past attempts at chasing fads and likely they will end up in an even worse position
firefox and other mozilla products should be streamlined as much as possible to be the best N possible with these kinds of side projects maintained as first party extensions, not as the new focus of their development, and they should invest the money they're planning to dump into their ai ambitions elsewhere, focusing on a proper open sovereign tech stack that they can then sell to eu like they've identified in their portfolio statement
the announcement though makes it seem like mozilla believes they can just say ai and also get some of the ridiculous ai money, and that does not bode well for firefox as a browser or mozilla's future
Because the phrase "AI first browser" is meaningless corpospeak - it can be anything or nothing and feels hollow. Reminiscent of all past failures of firefox.
I just want a good browser that respects my privacy and lets me run extensions that can hook into any point of page handling, not random experiments and random features that usually go against privacy or basically die within a short time-frame.
Sorry but no. I don't want another human's work summarized by some tool that's incapable of reasoning. It could get the whole meaning of the text wrong. Same with real-time translation. Languages are things even humans get wrong regularly and I don't want some biased tool to do it for me.
Meanwhile, Mozilla canned the Servo and MDN projects, which really did provide value for their user base.
Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.
That said, they're admittedly terrible about keeping their documentation updated, letting users know about added/deprecated settings, and they've even been known to go in and modify settings after you've explicitly changed them from defaults, so the PSA isn't entirely unjustified.
"Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."
https://support.mozilla.org/en-US/kb/firefox-advanced-custom...
This really weakens the point of the post. It strikes me as: we just don't like those AIs. Bergamot's model's behavior is no more or less auditable or a black box than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/
The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being closed-minded about solutions is not interesting.
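On the "you definitely can" point: a sketch of what digging into open weights looks like in practice, using the safetensors checkpoint format; the file path is illustrative.

```python
# Enumerate every tensor in a downloaded checkpoint: names, shapes, dtypes.
# This exposes the full architecture and every parameter value -- auditable
# in the same (limited) sense as Bergamot's Marian models. What no weight
# dump shows you is *why* a given input produced a given output.
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        print(name, tuple(tensor.shape), tensor.dtype)
```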
I could say it's equally closed-minded not to sympathize with this position, or the various reasoning behind it. For me, I feel that my spoken language is affected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot. I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs conversationally (I still use them for code generation, but I wouldn't if I used code for self-expression; I just refuse to have a back-and-forth conversation on any topic). It's like that family that tried raising a chimp alongside a baby: the chimp did pick up some human-like behavior, but the baby human adapted to chimp-like behavior much faster, so they abandoned the experiment.
I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.
Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.
I do also find that only using a turn signal when others are around is a good reinforcement to always be aware of my surroundings. I feel like a jerk when I don't use one and realize there was someone in the area, just as I feel like a jerk when I realize I didn't turn off my brights for an approaching car at night. In both cases, feeling like a jerk reminds me to pay more attention while driving.
This has a failure state of "when there's a nearby car [or, more realistically, cyclist / pedestrian] of which I am not aware". Knowing myself to be fallible, I always use my turn signals.
I do take your point about turn signals being a reminder to be aware. That's good, but could also work while, you know, still using them, just in case.
I've been driving for decades now and have plenty of examples of when I was and wasn't paying close enough attention behind the wheel. I was raising this only as an interesting different take or lesson in my own experience, not to look for approval or disagreement.
Just consider that you will make mistakes. If you make a mistake and signal people will have significantly more time to react to it.
In my experience, I'm best served by trying to reinforce awareness rather than relying on it. If I got into the habit of always using blinkers regardless of my surroundings I would end up paying less attention while driving.
I rode motorcycles for years and got very much into the habit of assuming that no one on the road actually knows I'm there, whether I'm on an old parallel twin or driving a 20' long truck. I need that focus while driving, and using blinkers or my brights as motivation for paying attention works to keep me focused on the road.
Signaling my turns is zero cost with regards to that action. At least for me, signaling as a matter of habit comes at the cost of focus.
Seriously: signal your turns and stop defending the indefensible, this is just silly.
There is this thing called traffic law and according to that law you are required to signal your turns. If you obstinately refuse to do so you are endangering others and I frankly don't care one bit about how you justify this to yourself but you are not playing by the rules and if that's your position then you should simply not participate in traffic.
Again: it costs you nothing. You are not paying more attention to others on the road because you are not signalling your turns, that's just a nonsense story you tell yourself to justify your wilful non-compliance.
What do you mean by "comes at the cost of focus", there? Do you mean you are more distracted by having to use your indicators?
Maybe you're just not a very good driver, if you're so distracted by the basic controls of the vehicle.
I have also ridden motorcycles for many years, and I am very familiar with the assumption that nobody on the road knows I exist. I still signal, all the time, every time, because it is a habit which requires no thinking. It would distract me more if I had to think about whether signalling was necessary in this case.
That is a very bad habit and you should change it.
You are not only signalling to other cars. You are also signalling to other road users: motorbikes, bicycles, pedestrians.
Your signal is more important to the other road users you are less likely to see.
Always ALWAYS indicate. Even if it's 3AM on an empty road 200 miles from the nearest human that you know of. Do it anyway. You are not doing it to other cars. You are doing it to the world in general.
Here is a hypothetical: A loved one is being hauled away in an ambulance and it is a bad scenario. And you're going to follow them. Your mind is busy with the stress, trying to keep things cool while under pressure. What hospital are they going to, again? Do you have a list of prescriptions? Are they going to make it to the hospital? You're under a mental load, here.
The last thing you need is to ask "did I use my turn signal" as you merge lanes. If you do it automatically, without exception, chances are good your mental muscle memory will kick in and just do it.
But if it isn't a learned innate behavior, you may forget to while driving and cause an accident. Simply because the habit isn't there.
It's similar for talking to bots, as well. How you treat an object, a thing seen as lesser, could become how a person treats people they view as lesser, such as wait staff, for example. If I am unerringly polite to a machine with no feelings, I'm more likely to be just as polite to people in customer service jobs. Because it is innate:
Watch your thoughts, they become words; Watch your words, they become actions.
I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.
The focused purpose, I think, gives it more of a "purpose built tool" feel over "a chatbot that might be better at some tasks than others" generic entity. There's no fake persona to interact with, just an algorithm with data in and out.
The latter portion is less a technical and more an emotional nuance, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me.
> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.
Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.
The part that doesn't sit well to me is that Mozilla wants to egress data. It being an LLM I really don't care.
Not everyone uses their browser just to surf social media, some people use it for creating things, log in to walled gardens to work creatively. They do not want to send this data to an AI company to train on, to make themselves redundant.
Discussing the inner workings of an AI isn't helping, this is not what most people really worry about. Most people don't know how any of it works but they do notice that people get fired because the AI can do their job.
A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.
An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.
No such example or even test as far as I know exists for any of the summary or search AIs since they expressly lose data in processing (I suppose you could construct multiple texts with the same meanings and see if they summarize equivalently - but it's certainly far harder to prove anything).
It's not a lossy process, and N round-trips should not lose any net meaning either.
This isn't a possible test with many other applications.
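A sketch of that round-trip test; `translate` here is a hypothetical stand-in for whichever engine is being audited (Bergamot, a local LLM, a cloud API), so its signature is an assumption:

```python
# Round-trip consistency check: translate src -> dst -> src repeatedly and
# watch whether the text converges to a fixed point or keeps drifting.
# translate(text, src, dst) is a hypothetical stand-in for the engine
# under test.
def round_trip(text: str, translate, src: str = "en", dst: str = "de",
               n: int = 3) -> list[str]:
    trips = []
    for _ in range(n):
        text = translate(translate(text, src, dst), dst, src)
        trips.append(text)
    return trips

# A consistent engine should settle: trips[i] == trips[i+1] after a trip
# or two. Exact string equality is strict; an embedding-similarity
# threshold is a softer pass/fail criterion.
```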
Translation is lossy. Good translation minimizes it without sounding awkward, but that doesn't mean some detail wasn't lost.
It’s insane this has to be pointed out to you but here we go.
Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
(It's weird how people can be so anti-anti-AI, but then when someone takes a middle position, suddenly that's wrong too.)
To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.
Firefox could have an entire section dedicated to torturing digital puppies built into the platform and... Ok, well, that's too far, but they could have a Costco warehouse full of AI crap and I wouldn't mind at all as long as it was off by default and preferably not even downloaded to the system unless I went in and chose to download it.
I know respecting user preference doesn't line their pockets but neither does chasing users down and shoving services they never asked for and explicitly do not want into their faces.
It's mostly knee-jerk reaction from having AI forced upon us from every direction, not just the ones that make sense
Am I being overly critical here or is this kind of a silly position to have right after talking about how neural machine translation is okay? Many of Firefox's LLM features like summarization are, afaik, powered by local models (hell, even Chrome has local model options). It's weird to give neural translation a pass but call LLMs black boxes where we cannot hope to understand what they do with our data. Neural translation also has unverifiable behavior in the same sense.
I could interpret some of the data talk as talking about non local models but this very much seems like a more general criticism of LLMs as a whole when talking about Firefox features. Moreover, some of the critiques like verifiability of outputs and unlimited scope still don't make sense in this context.
Open weights, or open training data? These are very different things.
The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.
There is no reason nor design where you also provide it with full disk access or terminal rights.
This is one of the most ignorant posts and comment sections I’ve seen on HN in a while.
Also I’m referring to the post, not this comment specifically.
That's not really accurate: Firefox peaked somewhere around 30% market share back when IE was dominant, and then Chrome took over the top spot within a few years of launching.
FWIW, I think there's just no good move for Mozilla. They're competing against 3 of the biggest companies in the world who can cross-subsidise browser development as a loss-leader, and can push their own browsers as the defaults on their respective platforms. The most obvious way to make money from a browser - harvesting user data - is largely unavailable to them.
I used firefox faithfully for a long time, but it's time for someone to take it out back and put it down.
Also, I switched to Waterfox about a year ago and I have no complaints. The very worst thing about it is that when it updates it's very in-your-face about it, and that is such a small annoyance that it's easily negligible.
Throw on an extension like Chrome Mask for those few websites that "require chrome" (as if that is an actual thing), a few privacy extensions, ecosia search, uBlacklist (to permablock certain sites from search results), and Content Farm Terminator to get rid of those mass produced slop sites that weasel their way into search results and you're going to have a much better experience than almost any other setup.
Then I thought, "Aha! Surely LibreWolf is the one I'm thinking of!"
Turns out no, it's a third one! It's PaleMoon...
LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.
The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.
The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.
Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!
I too used to think that rule-based AI would be better than statistical, Markov-chain parrots, but here we are. [1]
Though I still think/hope that some hybrid system of rule-based logic + LLMs will end up being the winner eventually.
----------------
[1] https://en.wikipedia.org/wiki/Bitter_lesson
Time flies like an arrow; fruit flies like a banana.
Mozilla appoints new CEO Anthony Enzor-Demeo
https://news.ycombinator.com/item?id=46288491