11/14/2025, 2:05:00 PM

I think nobody wants AI in Firefox, Mozilla

1258 points
733 comments

Mood

controversial

Sentiment

negative

Category

tech

Key topics

Mozilla Firefox

AI integration

browser security

Debate intensity: 85/100

The article argues that adding AI to Firefox and Mozilla may not be desirable, sparking a heated discussion among users.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment

18m

Peak period

159 comments in Day 1

Avg / period

80

Comment distribution: 160 data points

Based on 160 loaded comments

Key moments

  1. Story posted

    11/14/2025, 2:05:00 PM

    4d ago
  2. First comment

    11/14/2025, 2:23:05 PM

    18m after posting
  3. Peak activity

    159 comments in Day 1

    Hottest window of the conversation
  4. Latest activity

    11/17/2025, 4:08:26 PM

    1d ago


Discussion (733 comments)
Showing 160 comments of 733
WD-42
4d ago
1 reply
Nobody ever wants anything in Firefox, but in this case it’s probably especially true.
cedilla
4d ago
People want a lot of stuff in Firefox. However, people also seem to neatly bin all features into either "obviously necessary part of a web browser" or "obviously extraneous nonsense", when what they really mean is "things I personally want" and "things I personally don't want".
everdrive
4d ago
11 replies
Does anyone want AI in anything? I can see the value of navigating to an LLM and asking specific questions, but generally speaking I don't want that just running / waiting on my machine as I open a variety of applications. It's a huge waste of resources and for most normal people is an edge case.
coldpie
4d ago
4 replies
> Does anyone want AI in anything?

Yeah, they do. Go talk to anyone who isn't in a super-online bubble such as HN or Bsky or a Firefox early-adopter program. They're all using it, all the time, for everything. I don't like it either, but that's the reality.

gbear605
4d ago
2 replies
If I talk to the people I know who don’t spend all their time online, they’re just not using AI. Quite a few of my close friends haven’t used AI even once in any way, and most of the rest tried it out once and didn’t really care for it. They’re busy doing things in the real world, like spending time with their kids, or riding horses, or reading books.
vidarh
4d ago
Being busy riding horses and reading books are both niche activities (yes, reading too, sadly, at least beyond a very small number of books, which doesn't translate into people spending more than a tiny fraction of their time on it), which suggests perhaps your close friends are a rather biased set. Nothing wrong with that, but we're all in bubbles.
coldpie
4d ago
I talk to an acquaintance selling some homemade products on Etsy, he uses & likes the automatically generated product summary Etsy made for him. My neighbor asks me if I have any further suggestions for refinishing her table top beyond the ones ChatGPT suggested. Watching all of my coworkers using Google search, they just read the LLM summary at the top of the page and look no further. I see a friend take a picture, she uses the photo AI tool to remove a traffic sign from the background. Over lunch, a coworker tells me about the thing she learned about from the generated summary of a YouTube video.

We can take principled stands against these things, and I do because I am an obnoxiously principled dork, but the reality is it's everywhere and everyone other than us is using it.

52-6F-62
4d ago
1 reply
Way off. I've polled about this (informally) as well. Non-technical people think it's another thing they have to learn and do not want to (except for those who have been conditioned into constant pursuit of novelty, but that is not a picture of mental health or stability for anyone). They want technology to work for them, not to constantly be urged into full-time engagement with their [de]vices.

They are already preached at that they need a new phone or laptop every other year. Then there's a new social platform that changes its UI every 6 months or quarterly, and now similarly for their word processors and everything.

coldpie
4d ago
1 reply
> I've polled about this (informally) as well.

This is kinda like how if you ask everyone how often they eat McDonald's, everyone will say never or rarely. But they still sell a billion burgers each year :) Assuming you're not polling your Bsky buddies, I suspect these people are using AI tools a lot more than they admit or possibly even know. Auto-generated summaries, text generation, image editing, and conversation prompts all get a ton of use.

52-6F-62
4d ago
Only if you are assuming I am asking so directly...
lucasoshiro
4d ago
> They're all using it, all the time, for everything

Do you actually know anyone like that? Using Firefox nowadays is itself a "super-online bubble".

smlavine
4d ago
> They're all using it.

Not really. Go talk to anyone who uses the internet for Facebook, WhatsApp, and not much else. Lots of people have typed in chatgpt.com or had Google's AI shoved in their face, but the vast majority of "laypeople" I've talked to about AI (actually, they've talked to me about AI after learning I'm a tech guy: "so what do you think about AI?") seem resigned to the fact that, after the personal computer and the internet, whatever the rich guys in SF do is what is going to happen anyway. But I sense a feeling of powerlessness and a fear of being left behind, not anything approaching genuine interest in or excitement about the technology.

Adrig
4d ago
2 replies
My mom recently praised the brave AI summary of a webpage so who knows, the usage might be higher than we think.
giancarlostoro
4d ago
3 replies
I used to hate Twitter when it first launched because I thought short form text was stupid, now I see everything will become summaries with AI and nobody will ever read anything meaningful.
everdrive
4d ago
2 replies
It could be something of a historical return to form: a small class of properly educated people, and then the wider, semi-literate masses.
threetonesun
4d ago
1 reply
I'm "properly educated" by most definitions; 95% of web pages are garbage, and a summary is fine. Also, I imagine you frequently read summaries of books and movies and many other things before deciding to read or watch the entire work.
everdrive
4d ago
>95% of web pages are garbage and a summary is fine.

Mmm, summarized garbage.

>Also I imagine you frequently read summaries of books

This isn't what LLM summaries are being used for however. Also, I don't really do this unless you consider a movie trailer to be a summary. I certainly don't do this with books, again, unless you think any kind of commentary or review counts as a summary. I certainly would not use an LLM summary for a book or movie recommendation.

mythrwy
4d ago
2 replies
Communicating in pictographs
giancarlostoro
4d ago
1 reply
Gotta love them emojis
mghackerlady
4d ago
If someone wanted to do this for whatever reason, there's actually a language that can be written exclusively in emoji: it's called toki pona, and while emoji aren't its standard writing system, there have been several proposals. It works well since toki pona has a very small vocabulary (only around 150 words, iirc).
ponector
4d ago
That should be the next step. It takes too much time to read a summary, so the result should be a summary picture! Text-based image generation is quite good now. What would you call this ChatGPT feature?
jabroni_salad
4d ago
Did you write a comment like this last time a recipe clipper got posted here?
vidarh
4d ago
There is plenty of text for which a good summary will have a far higher ratio of meaning to words than the original.
nunez
4d ago
Loads of people are reading Google's AI Summaries; they're the first result, so they're hard to miss.
mabedan
4d ago
2 replies
I use it for summarization constantly. I made iOS/mac shortcuts which call Gemini for various tasks and use them quite often, mostly summarization related.
rwmj
4d ago
6 replies
How do you know its summaries are correct?
vidarh
4d ago
1 reply
For most things it doesn't matter, as long as it's usually correct enough, and "enough" is a pretty low bar for a lot of things.
lotsofpulp
4d ago
1 reply
Can you give an example? And how would I know the LLM has error bounds appropriate for my situation?
vidarh
4d ago
3 replies
> Can you give an example?

Recipe pages full of fluff.

Review pages full of fluff.

Almost any web page full of fluff, which is a rapidly rising proportion.

> And how would I know the LLM has error bounds appropriate for my situation?

You consider whether you care if it is wrong, and then you try it a couple of times, and apply some common sense when reading the summaries, just the same as when considering if you trust any human-written summary. Is this a real question?

novemp
4d ago
1 reply
Most recipe blogs have a "skip to recipe" button because they know you don't care.
vidarh
4d ago
1 reply
Enough don't.
novemp
4d ago
1 reply
DuckDuckGo has a great tool for dealing with those ones: "Block this site from all results".
vidarh
4d ago
That doesn't get me their content.
lotsofpulp
4d ago
1 reply
I guess I never come across that situation because I just don’t engage with sources that fluff. That is a good example, but presumably, there should be no errors there because it’s just stripping away unnecessary stuff? Although, you would have to trust the LLM doesn’t get rid of or change a key step in the process, which I still don’t feel comfortable trusting.

I was thinking more along the lines of asking an LLM for a recipe or review, rather than asking for it to restrict its result to a single web page.

vidarh
4d ago
Doesn't matter if they get it wrong sometimes. So do human writers.
kemayo
4d ago
"Get me the recipe from this page" feels like a place where I do really care that it gets it right, because in an unfamiliar recipe it doesn't take much hallucination around the ingredients to ruin the dish.
ponector
4d ago
1 reply
How do you know they want a correct summary? AI slop is good enough for many people.
coffeebeqn
4d ago
1 reply
What is the use of such a summary?
vidarh
4d ago
Determining whether something is worth reading doesn't require a good summary, just one that contains enough relevant snippets to give a decent indication.

The opportunity cost of "missing out" on reading a page you're unsure enough about to want a summary of is not likely to be high, and similarly it doesn't matter much if you end up reading a few paragraphs before you realise you were misled.

There are very few tasks where we absolutely must have accurate information all the time.

jasonlotito
4d ago
1 reply
It's a good question. I'm not the OP, but I'd like to add something to this discussion.

How do I know what I'd be reading is correct?

To your question: for the most part, I've found summaries to be mostly correct enough. The summaries are useful for deciding if I want to dig into this further (which means actually reading the full article). Is there danger in that method? Sure. But no more danger than the original article. And FAR less danger than just assuming I know what the article says from a headline.

So, how do you know its summaries are correct? They are correct enough for the purpose they serve.

garciansmith
4d ago
You can make a better decision if you have the context of the actual thing you are reading, both in terms of how it's presented (the non-textual aspects of a webpage, for instance) and the language used. You can get a sense of who the intended audience might be, what their biases might be, how accurate it might be, etc. By using a summarizing tool, all of that is lost: you give up using your own faculties to understand and judge, and instead put your trust in a third party which uses its own language, has its own biases, etc.

Of course, as more and more pieces of writing out there become slop, does any of this matter?

badsectoracula
4d ago
I've done some summarizing with my own small Tcl/Tk-based frontend that uses llama.cpp to call Mistral Small (i.e. everything is done locally), and I do know that it can be off about various things.

However, 99% of the time I use this not because I need an accurate summary, but because I've come across some overly long article that I don't even know if I'm interested in reading. So I have Mistral Small generate a summary to give me a ballpark of what the article is even about, and then judge whether I want to spend the time reading the full thing.

For that use case I don't care if the summary is correct, just whether it's in the ballpark of what the article is about (from the few articles I did end up reading, the summaries were close enough to make me think it does a good enough job). Even if a summary is incorrect, the worst that can happen is that I end up not reading some article I might have found interesting, but that's what I'd do without the summary anyway. Since I need to run my Tcl/Tk script, select the appropriate prompt (I have a few saved ones), copy/paste the text, and then wait for the thing to finish, I only use it on articles I'm already biased against reading.

Quothling
4d ago
You already know that they aren't. Yesterday my wife and I were discussing Rønja Røverdatter. When we were kids it used to have a Danish voice-over, so you could still hear the original Swedish audio as well. Now it has been dubbed, and we were talking about the actor who voices Birk. Anyway, we looked him up and found out he was in Blinkende Lygter, which neither of us remembered. So we asked Gemini, and it told us he played the main character in the childhood flashbacks... except he doesn't. To make matters worse, Gemini said that he played Christian, a young Torkil, so it even got the names wrong. Sure, this isn't exactly something Gemini would know, considering Rønja Røverdatter is an old Astrid Lindgren novel that was turned into a film decades ago, and Blinkende Lygter is a Danish movie from 20-ish years ago in which Sebastian Jessen plays a tiny role. But since they are prediction engines, they'll happily give you a wrong answer, because that's what the math added up to.

I like LLMs; I've even built my own personal agent on our Enterprise GPT subscription to tune it for my professional needs. But I'd never use them to learn anything.

kristofferR
4d ago
Because they mostly are, and even if not, it doesn't usually matter.

For example: you summarize a YouTube link to decide if its content is something you're interested in watching. Even if summarizations like that are only 90% correct, 90% of the time, it is still really helpful: you get the info you need to decide whether to read/watch the long-form content or not.

azinman2
4d ago
1 reply
What are you constantly summarizing?
mabedan
4d ago
Articles. Some articles I fully read, some others I just read the headline, and some others I want to spend 2 minutes reading the summary to know whether I want to read the full thing.
RansomStark
4d ago
3 replies
In Firefox, yeah! I use it often.

I have it connected to a local Gemma model running in Ollama and use it to quickly summarize webpages (nobody really wants to read 15 minutes' worth of personal anecdotes before getting to the one paragraph that actually has relevant information) and to find information within a page, kind of like Ctrl-F on steroids.

The machine is sitting there anyway, and the extra cost in electricity is buried in the hours of gaming that GPU is also used for, so I haven't noticed it yet. If you game, the graphics card is going to be obsolete long before the small amount of extra wear is obvious. YMMV if you don't already have a gaming rig lying around.

distances
4d ago
2 replies
Something like this I wouldn't mind: privacy-focused, local-only models that let you use your own existing services. Can you give a quick pointer on how to connect Firefox to Ollama?
RansomStark
4d ago
Use Open WebUI with Ollama; Open WebUI is compatible with the Firefox sidebar.

So grab Ollama and your preferred model.

Install Open WebUI.

Connect Open WebUI to Ollama.

Then in Firefox open about:config and set browser.ml.chat.provider to your local Open WebUI instance.

Google suggests that you might also need to set browser.ml.chat.hideLocalhost to false, but I don't remember having to do that.
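For reference, assuming Open WebUI is reachable on its usual local port (3000 for the Docker install; adjust to your setup), the two about:config prefs would look something like this. The pref names are as given above; the URL is an assumption about your local install:

```
browser.ml.chat.provider = http://localhost:3000
browser.ml.chat.hideLocalhost = false
```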

RansomStark
4d ago
Docs here: https://docs.openwebui.com/tutorials/integrations/firefox-si...

I think it's technically experimental, but I've been using this since day one with no issues.

rpdillon
4d ago
1 reply
The default AI integration doesn't seem to support this. The only thing I could find that does is called PageAssist, and it's a third-party extension. Is that what you're using?

https://addons.mozilla.org/en-US/firefox/addon/page-assist/

RansomStark
4d ago
My mistake, I left a step out: use Open WebUI with Ollama. Open WebUI is compatible with the Firefox sidebar.

So grab Ollama and your preferred model, and install Open WebUI.

Then open about:config and set browser.ml.chat.provider to your local Open WebUI instance.

Google suggests that you might also need to set browser.ml.chat.hideLocalhost to false, but I don't remember having to do that.

kbelder
4d ago
1 reply
An AI specifically customized to pull the recipe out of long rambling cooking blog posts would be great. I'd use that regularly.
red-iron-pine
4d ago
That's not "AI", that's just a basic Firefox extension, and one that's trivially easy to search for.

Literally Google's first hit for me: https://www.reddit.com/r/Cooking/comments/jkw62b/i_developed...

jmkni
4d ago
1 reply
I agree

I like to keep AI at arms length, it's there if I want it but can fuck off otherwise

Lots of people really do seem to want it in everything though

shevy-java
4d ago
That's fine. My gripe is that Firefox, Google, etc. try to force this onto everyone. If I could, I would just disable the AI entirely, since I don't need, use, or want it. But we are not given an easy option here; the Google "opt-out" is garbage. I actually had to install browser extensions to eliminate the Google AI spam, and that extension works better than the "options" Google gives us. I rarely use Firefox, so I can't even be bothered to install an extension there, but I know I don't need any AI from Firefox/Mozilla either. People are no longer given a choice. The big companies and organisations abuse people. I have been saying for years that we, the people, need to take back control over the world wide web. That includes the UI.
jve
4d ago
2 replies
> Does anyone want AI in anything?

I want it in text-to-speech (TTS) engines and transliteration/translation, and... routing tickets to the correct teams/people would also be awesome :) (Classification, where mistakes can easily be corrected.)

Anyway, we used a TTS engine before OpenAI, and it was AI-based. It HAD to be AI-based: even for a niche language, some people couldn't tell it was a computer. Well, from some phrases you can tell, but it is very high quality and correctly knows which parts of a word to put the emphasis on.

https://play.ht/ if anyone is wondering.

boplicity
4d ago
4 replies
Automatic captions have been transformative in terms of accessibility, and seem to be something people universally want. Most people don't think of them as AI, though, even when it is LLM software creating the captions. There are many more ways that AI tools could be embedded "invisibly" into our day-to-day lives, and I expect they will be.
bildung
4d ago
3 replies
Do you have an example of a good implementation of AI captions? I've only experienced them on YouTube, and they are really bad. The automatic dubbing is even worse, but still.

On second thought, this probably depends on the caption language.

satvikpendem
4d ago
1 reply
There are projects that will run Whisper or another transcription service locally on your computer, which has great quality. For whatever reason, Google chooses not to use their highest quality transcription models on YouTube, maybe due to cost.
sjsdaiuasgdia
4d ago
1 reply
I use Whisper running locally for automated transcription of many hours of audio on a daily basis.

For the most part, Whisper does much better than stuff I've tried in the past like Vosk. That said, it makes a somewhat annoying error that I never really experienced with others.

When the audio is low quality for a moment, it might misinterpret a word. That's fine, any speech recognition system will do that. The problem with Whisper is that the misinterpreted word can affect the next word, or several words. It's trying to align the next bits of audio syntactically with the mistaken word.

Older systems, you'd get a nonsense word where the noise was but the rest of the transcription would be unaffected. With Whisper, you may get a series of words that completely diverges from the audio. I can look at the start of the divergence and recognize the phonetic similarity that created the initial error. The following words may not be phonetically close to the audio at all.
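A toy sketch of why this happens (this is not Whisper's actual decoder, just a made-up bigram table): a greedy autoregressive transcriber predicts each word conditioned on the words it has already emitted, so one misheard word steers the whole tail away from the audio.

```python
# Toy illustration, not Whisper's real decoder: a made-up bigram table
# where each word deterministically picks the most likely next word
# given only the previous one, like a greedy autoregressive decoder.
BIGRAMS = {
    "recognize": "speech",
    "speech": "today",
    "wreck": "a",
    "a": "nice",
    "nice": "beach",
}

def decode(first_word, length=3):
    """Greedily extend a transcript from its first decoded word."""
    words = [first_word]
    while len(words) < length:
        nxt = BIGRAMS.get(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return words

# Clean audio: the first word is decoded correctly.
print(decode("recognize"))        # ['recognize', 'speech', 'today']

# A moment of noise: "recognize" misheard as the similar-sounding
# "wreck". Every later word now conditions on the wrong word, so the
# whole tail diverges from the audio, not just the noisy moment.
print(decode("wreck", length=4))  # ['wreck', 'a', 'nice', 'beach']
```

An older word-by-word system would get only the noisy word wrong; the conditioning is what lets the error cascade.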

satvikpendem
4d ago
1 reply
Try Parakeet, it's more state of the art these days. There are others too like Meta's omnilingual one.
sjsdaiuasgdia
4d ago
1 reply
Ah yes, one of the standard replies whenever anyone mentions a way that an AI thing fails: "You're still using [X]? Well of course, that's not state of the art, you should be using [Y]."

You don't actually state whether you believe Parakeet is susceptible to the same class of mistakes...

satvikpendem
4d ago
1 reply
¯\_(ツ)_/¯

I haven't seen those issues myself in my usage, it's just a suggestion, no need to be sarcastic about it.

sjsdaiuasgdia
4d ago
It's an extremely common goalpost-moving pattern on HN, and it adds little to the conversation without actually addressing how or whether the outcome would be better.
delecti
4d ago
1 reply
I'm not going to defend the youtube captions as good, but even still, I find them incredibly helpful. My hearing is fine, but my processing is rubbish, and having a visual aid to help contextualize the sound is a big help, even when they're a bit wrong.

Your point about the caption language is probably right, though. They're worse with jargon or proper names, and worse with non-American English speakers. If they don't even get all the common accents of English right, I have little hope for other languages.

redwall_hp
4d ago
1 reply
Automatic translation famously fails catastrophically with Japanese, because it's a language that heavily depends on implied rather than explicit context.

The minimal grammatically correct sentence is simply a verb, and it's an exercise for the reader to know what the subject and object are expected to be. (Essentially, the more formal/polite you get, the more things are added. You could say "kore wa atsui desu" to mean "this is hot." But you could also just say "atsui," which could also be interpreted as a question instead of a statement.)

Chinese seems to have similar issues, but I know less about how it's structured.

Anyway, it's really nice when Japanese music on YouTube includes a human-provided translation as captions. Automated ones are useless, when they don't give up entirely.

freehorse
4d ago
1 reply
I assume people are talking about transcription, not translation. Translation on YouTube is, in my experience, indeed horrible in all the languages I have tried, but transcription in English is good enough to be useful. However, the more technical jargon a video uses, the worse the transcription gets (and translation is totally useless for anything technical).
belorn
4d ago
1 reply
Automatic transcription in English depends heavily on accent, sound quality, and how well the speaker articulates. It will often mistake words that sound alike, producing nonsensical sentences, randomly skip words, or just insert random words for no clear reason.

It does seem to do a few clever things. For lyrics it seems to first look for existing transcribed lyrics before making its own guesses (timing, however, can be quite bad when it does this). Outside of that, an AI-transcribed video is like an alien who has read a book on a dead language and is transcribing based on what the book says each word should sound like phonetically. At times that can be good enough.

(A note on sound quality: it's not the perceived quality. Many low-res videos have perfectly acceptable, if somewhat lossy, sound, but the transcriber goes insane. It seems to prefer 1080p videos, which I assume have a much higher bit-rate for the sound.)

freehorse
1d ago
In the cases where I have noticed the transcription being bad, my own speech comprehension was even worse, so I still find it useful. It is no substitute for human-created (or at least human-curated) subtitles by any means, but it's better than nothing.
Kiro
4d ago
1 reply
Do you have an example? YT captions being useless is a common trope I keep seeing on Reddit that is not reflected in my experience at all. It feels like another "omg so bad" hyperbole that people just dogpile on, but I would love to be proven wrong.
meatmanek
4d ago
1 reply
Captions seem to have been updated sometime between 7 and 15 months ago. Here's a reddit post from 7 months ago noticing the update: https://www.reddit.com/r/youtube/comments/1kd9210/autocaptio...

and here's Jeff Geerling 15 months ago showing how to use Whisper to make dramatically better captions: https://www.youtube.com/watch?v=S1M9NOtusM8

I assume Google has finally put some of their multimodal LLM work to good use. Before that, they were embarrassingly bad.

Kiro
4d ago
Interesting. I wonder if people saying that they are useless base it on experiences before that and have had them turned off since.
Sophira
4d ago
3 replies
To be clear, it's not LLMs creating the captions. Whisper[0], one of the best of its kind currently, is a speech recognition model, not a large language model. It's trained on audio, not text, and it can run on your mobile phone.

It's still AI, of course. But there is a distinction between it and an LLM.

[0] https://github.com/openai/whisper/blob/main/model-card.md

big_toast
4d ago
1 reply
It's an encoder-decoder transformer trained on audio (language?) and transcriptions.

Seems kinda weird for it not to meet the definition, in a tautological way, even if it's not the typical sense or doesn't tend to be used for autoregressive token generation?

uoaei
4d ago
1 reply
Is it Transformer-based? If not, then it's a different beast architecturally.

Audio models tend to be based more on convolutional layers than Transformers, in my experience.

big_toast
4d ago
The openai/whisper repo and paper referenced by the model card seem to be saying it's transformer based.
janalsncm
4d ago
Whisper is an encoder-decoder transformer. The input is audio spectrograms; the output is text tokens. It is an improvement over old-school transcription methods because it's trained on audio transcripts, so it makes contextually plausible predictions.

Idk what the definition of an LLM is, but it's indisputable that the technology behind Whisper is a close cousin of text decoders like GPT. Imo the more important question is how these things are used in the UX. Decoders don't have to be annoying; that is a product choice.

LtWorf
4d ago
Whisper is a great random word generator when you use it on Italian!
belorn
4d ago
I doubt that people prefer automatic captions over human-made ones, any more than people prefer AI subtitles. The big AI-subtitle controversy going on right now in anime demonstrates well how much is lost in translation when an AI is guessing which words are most likely in a situation, compared to a human making a translation.

What people want is something that is better than nothing, and in that sense I can see how automatic captions are transformative in terms of accessibility.

Muromec
4d ago
For a few days now I've been getting a super cringe robot voice force-dubbing every YouTube video into Dutch. I use it without being logged in and hate it a lot.

Subtitles are fine, though.

gspencley
4d ago
ML has been around for ages; email spam filters are one of the oldest examples.

These days, when the term "AI" is thrown around, the person is usually talking about large language models, or generative adversarial neural networks for things like image generation, etc.

Classification is a wonderful application of ML that long predates LLMs. And LLMs have their purpose and niche too, don't get me wrong; I use them all the time. But AI right now is a complete hype train, with companies trying to shove LLMs into absolutely anything and everything. Although I use LLMs, I have zero interest in an "AI PC" or an "AI web browser", any more than I have a need for an AI toaster oven. Thank god companies have finally gotten the message about "smart appliances." I wish "dumb televisions" were more common, but for a while it was looking like you couldn't buy a freakin' dishwasher that didn't have WiFi, an app, and a bunch of other complexity-adding "features" that are neither required nor desired by most customers.
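To show how small that classic pre-LLM example really is, here is a minimal naive Bayes spam filter of the kind early email filters used. The training messages and word counts are invented for illustration; real filters train on huge corpora.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label is 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        totals[label] += 1
        counts[label].update(text.lower().split())
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log-probability score."""
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        # log prior + add-one-smoothed log likelihood of each word
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

data = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("lunch meeting tomorrow", "ham"),
    ("see you at the meeting", "ham"),
]
counts, totals = train(data)
print(classify("claim your free money", counts, totals))  # spam
print(classify("meeting at lunch", counts, totals))       # ham
```

Counting words and comparing two log scores is the whole trick: no chatbot, no generation, just the kind of invisible classification the comment above is describing.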

nerdjon
4d ago
6 replies
Yes and no, and this is the problem with the current marketing around AI.

I very much do want what used to just be called ML: invisible and actually beneficial. Autocorrect, smart touchscreen keyboards, music recommendations, etc. But the problem is that all of that stuff is now also just being called "AI" left and right.

That being said, I think what most people mean when they say "AI" is really not as beneficial as it's being made out to be. It has some uses, but I think most of those uses will not be the in-your-face AI being pushed now, but things in the background instead.

cratermoon
4d ago
1 reply
Nobody wants what's currently marketed as "AI" everywhere.
nerdjon
4d ago
1 reply
I mean, that is kinda exactly what I said...

But we do have to acknowledge that "AI" has very much turned into an all-encompassing term for everything ML. It is getting harder and harder to read an article about something being done with "AI" and know whether it was a purpose-built model for a specific task or someone throwing data into an LLM and hoping for the best.

They are purposefully making it harder and harder to just say "no AI" by obfuscating this, so we have to be very specific about what we are talking about.

cratermoon
4d ago
1 reply
For a while I made an effort to specify LLM or generative AI vs AI as a whole, but I eventually became convinced that it was no longer valuable. Currently AI is whatever OpenAI, Anthropic, Meta, NVidia, etc say it is, and that is mostly hype and marketing. Thus I have turned my language on its head, specifying "ML" or "recommendation system" or whatever specific pre-GPT technology I mean, and leave "AI" to the whims of the Sams and Darios of SV. I expect the bubble to pop in the next 3-6 months, if not before the end of 2025, taking with it any mention of "AI" in a serious or positive way.
krick
4d ago
> 3-6 months

Wow, you are an optimist. I do feel it's close, but I wouldn't bet on it being this close. I wouldn't argue either, though; I don't know. Also, when it really pops, the consequences will be more disastrous than the bubble itself feels right now. It's literally hundreds of billions in circular investing. It's absurd.

krick
4d ago
2 replies
> what used to be just called ML

FWIW, 10+ years ago I was arguing that your old pocket calculator is as much "AI" as anything ever could be. I only stopped doing that because it's tiring to argue with silly buzzwords, not because anything has changed since. Back when "these things were called ML", ML was just a buzzword too, same as AI and AGI are now. I'm kinda glad "ML" was relieved of that burden, because ultimately it means a very real thing (namely, parametrizing your algorithm by non-hardcoded values). And (unlike with basic autocorrect, which no end user even perceives as "AI" or "ML") when you use ChatGPT, you don't "use ML": you use a rigid algorithm not meaningfully different from what ran on your old pocket calculator, except a billion times bigger, and nobody actually knows what it does.

So yes, AI is just a stupid marketing buzzword right now, but so was ML, so was blockchain, so was NoSQL, and many more. Ultimately this one is more annoying only because of its scale, of how detrimental to society the actions of the culpable people (mostly OpenAI, Altman, Musk) have been this time.

MetaWhirledPeas
4d ago
"AI" is the only term that makes sense for end users because "AI" is the only term that is universally understood. Hackernews types tend to overlook the layman.

And I hope no one gets started about how "AI" is an inaccurate term, because it's not. That's exactly what we are doing: simulating intelligence. "ML" is closer to describing the implementation, and, honestly, what difference does it make for most people using it?

It is appropriate to discuss these things at a very high level in most contexts.

tolciho
4d ago
Right now? John McCarthy invented the term in order to get a grant; in other words, it was a marketing buzzword from day zero. He says so himself in the Lighthill debate, and then the audience breaks out into hoots and howls.
thewebguyd
4d ago
Right, it should be invisible to the user. Those formerly-called-ML features are useful. They do a very specific, limited function, and "Just Work."

What I definitely don't want, yet is exactly what's currently happening, is a chatbot crammed into every single app and shoved down your throat.

JohnFen
4d ago
This is why I use the term "genAI" rather than "AI" when talking about things like LLMs, sora, etc.
j4coh
4d ago
They need to show usage going up and to the right or the house of cards falls apart. So now you’re forced to use it.
catlifeonmars
4d ago
I think companies should also advertise when they use JavaScript on the page. "Use this new feature! Why? Because it's powered by JavaScript."
andy99
4d ago
4 replies
The existence of the features doesn't bother me. It's the constant nagging about them. I can't use a Google product without being harassed, to the point of not being able to work, by offers to "help me write" or whatever.

Having the feature on a menu somewhere would be fine. The problem is the confluence of new features now becoming possible, and companies no longer building software for their users but as vehicles to push some agenda. Now we’re seeing this in action.

vidarh
4d ago
2 replies
The worst one with Google is how they've hijacked long-press on the power button on Android, and you can change what it does, but your options are arbitrarily limited.
ortusdux
4d ago
1 reply
My annoyance with Samsung's dedicated Bixby button factored into my switch to Pixel. The long-press hijack was disappointing.
LtWorf
4d ago
On my Samsung I did find a setting that restores the power button's ability to shut off the phone.

I can only hope they won't change it back in the next update (it already happened once).

kelvinjps10
4d ago
1 reply
I hate how they changed the power button to something other than power options, just to push their annoying Google Assistant.

seszett
4d ago
1 reply
What are you guys talking about? I have a Pixel 8, didn't install Lineage OS on it, and my power button works fine?
kelvinjps10
4d ago
1 reply
On some phones with the latest Android, when you press the power button, instead of showing you the power options it opens Google Assistant.
thewebguyd
4d ago
1 reply
Apple did the same thing; a long press of the power button opens up Siri.
kelvinjps10
3d ago
I know. I used to have a phone that didn't do this, and I used to make fun of my friend's iPhone because it would. Then I got a new phone (Android) and it did it too. Karma, I guess. Can you also disable it on iPhone?
amarant
4d ago
3 replies
Clippy really is back
Arisaka1
4d ago
Clippy only helped with very specific products, and was compensating for really odd UI/UX design decisions.

LLMs are products that want to collect data and get trained on a huge amount of input, with upvotes and downvotes to calibrate their output quality, in the hope that they will eventually become good enough to replace the very people who trained them.

The best part is, we're conditioned to treat those products as if they are forces of nature. An inevitability that, like a tornado, is approaching us. As if they're not the byproduct of humans.

If we consider that, then we the users get the short end of the stick, and we only keep moving forward with it because we've been sold on the idea that whatever lies at the peak is a net positive for everyone.

That, or we just don't care about the end result. Both are bad in their own way.

netsharc
4d ago
Someone should write a browser extension that changes AI buttons in websites to Clippy.

Maybe I'll ask Gemini to write one...
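A minimal sketch of what such an extension's content script could look like; the label pattern and selectors here are illustrative assumptions, not taken from any real site or shipped extension:

```javascript
// Hypothetical "Clippy-fier" content-script sketch.
// AI_LABEL_PATTERN is a guess at common AI button labels.
const AI_LABEL_PATTERN = /\b(AI|Gemini|Copilot)\b/i;

// Pure helper: map an AI-ish button label to "Clippy", leave others alone.
function clippyfyLabel(text) {
  return AI_LABEL_PATTERN.test(text) ? "Clippy" : text;
}

// One-shot pass over a document; a real extension would also watch
// DOM mutations and declare when to run via its manifest.
function clippyfyPage(doc) {
  for (const el of doc.querySelectorAll("button, [role='button']")) {
    el.textContent = clippyfyLabel(el.textContent);
  }
}
```

A real version would need a MutationObserver for dynamically rendered buttons, which is most of them on modern sites.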

Llamamoe
4d ago
Clippy was predictable, free, and didn't steal your data.
thih9
4d ago
6 replies
> I can’t use a google product without being harassed (...)

You can disable AI in Google products.

E.g. in Gmail: go to Settings (the gear icon), click See all settings, navigate to the General tab, scroll down to find Smart features and personalization and uncheck the checkbox.

Source: https://support.google.com/drive/answer/15604322

cwillu
4d ago
4 replies
And will that work permanently, or will I have to hunt down another setting in another month when they stuff it into another workflow I don't want it in?
chankstein38
4d ago
1 reply
Yeah, if YouTube Shorts or Games are any indication, it'll be back soon! The AI Mode in Google Search comes up nearly every time I use it no matter how many times I hit "No"
tracker1
4d ago
YouTube shorts is an abomination... I'm so sick of the movie clips everywhere... Not to mention the AI slop in the general YouTube results... I like historical content, but the garbage content just pisses me off to no end.
netsharc
4d ago
4 replies
Every time I update Google Photos on Android, it asks me "Photos backup is turned off! Turn it on? [so you use up your 15 GB included storage and buy more for a subscription fee?]".
bobsoap
4d ago
Every time I open Google Photos, it does this. Every single time. It's insanely hostile.
Maken
4d ago
Every time you update? How about Maps asking if you want to use advanced location every time you open it?
robocat
4d ago
My iPhone has a permanent red badge counter trying to get me to upgrade to iCloud. I've moved the settings icon so I don't see it normally, but it is nagging. There's other dark patterns used by Apple to try and increase their income by "asking" me to pay more.
mghackerlady
4d ago
What's even worse is that every time you sign into a google account without a phone number or home address associated with it, it screams at you to add them for sECurItY
thih9
4d ago
1 reply
Depends; in the EU and selected countries that setting was always opt-in (i.e. it was never enabled for you). Elsewhere I guess the user has to periodically check their settings, or privacy policies, etc, which in practice sounds impossible.

> Important: By default, smart feature settings are off if you live in: The European Economic Area, Japan, Switzerland, United Kingdom

(same source as in grandparent comment).

cwillu
4d ago
2 replies
Then no, I can't use a google product without being harassed, unless I live in a limited selection of blessed countries.
thih9
4d ago
Note that these countries blessed themselves via legal steps (EU ones at least) and are not blessed by Google.
Muromec
4d ago
Welcome to not being a passport bro for a change. That's how most of the world feels when another cool thing happens, but the other way around.
infermore
4d ago
guess we'll see in a month
dgacmu
4d ago
1 reply
This is correct but also a little misleading: Google gives you a choice to disable smart features globally, but you end up tossing out things you might want as well, such as the automatic classification into smart folders in Gmail. It feels very much like someone said, "let's design this in a way that makes most people not want to turn it off, because of the collateral damage."

(I desperately want to disable the AI summaries of email threads, but I don't want to give up the extra spam filtering benefit of having the smart features enabled)

strange_quark
4d ago
This toggle _still_ doesn't turn off all the bs.

Google now "helpfully" decides that you must want a summary of literally every file you open in Drive, which is extra annoying because the summary box causes the UI to move around after the document is opened. The other day I was looking at my company's next year's benefits PDFs and Gemini decided that when I opened the medical benefits paperwork that the thing I would care about is that I can get an ID card with an online account... not the various plan deductibles or anything useful like that.

I turned off the "smart" features and the only thing that changed is that the nag box still pops up and shifts the UI around, but now there's a button that asks if you want a summary instead of generating it automatically.

andy99
4d ago
I have everything disabled for my personal account. For work, when I looked into it, it had to be disabled centrally by my company.
SECProto
4d ago
Note that this setting (only accessible from desktop) also blocks spellcheck, a feature that absolutely does not need AI to implement
natebc
4d ago
It needs to be much more granular than it is. For example: Turning that setting off also disables the (very, very old) Updates/Promotions/Social/Forums tabs in the Gmail interface. ONE checkbox in the sea of gmail options?
ufocia
4d ago
I prefer opt-in vs. opt-out. Opt-out is pretentious and patronizing.
cornholio
4d ago
4 replies
> no longer building software for their users but as vehicles to push some agenda

All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.

The freak stampede of all these tech giants to shove AI down everybody's throat just shows that they perceive the technology as having huge potential to advance the above agenda, for themselves, or for their competitors at their detriment.

philipallstar
4d ago
3 replies
> All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.

I'll bear that in mind the next time I'm getting a haircut. How do you think Bob's Barbers is going to achieve all of that?

toomuchtodo
4d ago
1 reply
It was a sloppy statement, but is broadly speaking, true. For overwhelming citations, https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... (HN Search of posts from Matt Stoller's BIG Newsletter, which focuses on corporate monopolies and power in the US).

https://www.thebignewsletter.com/about

> The Problem: America is in a monopoly crisis. A monopoly is, at its core, a private government that sets the terms, services, and wages in a market, like how Mark Zuckerberg structures discourse in social networking. Every monopoly is a mini-dictatorship over a market. And today, there are monopolies everywhere. They are in big markets, like search engines, medicine, cable, and shipping. They are also in small ones, like mail sorting software and cheerleading. Over 75% of American industries are more consolidated today than they were decades ago.

> Unregulated monopolies cause a lot of problems. They raise prices, lower wages, and move money from rural areas to a few gilded cities. Dominant firms don’t focus on competing, they focus on corrupting our politics to protect their market power. Monopolies are also brittle, and tend to put all their eggs in one basket, which results in shortages. There is a reason everyone hates monopolies, and why we’ve hated them for hundreds of years.

https://blogs.cornell.edu/info2040/2021/09/17/graph-theory-o... (Food consolidation)

https://followthemoney.com/infographic-the-u-s-media-is-cont... (Media consolidation)

https://www.kearney.com/industry/energy/article/how-utilitie... (US electric utilities)

https://aglawjournal.wp.drake.edu/wp-content/uploads/sites/6... [pdf] (Agriculture consolidation)

https://www.visualcapitalist.com/interactive-major-tech-acqu... (Big Tech consolidation)

PaulHoule
4d ago
1 reply
That geographic concentration is a real thing.

I think part of the Mozilla problem is that they are based in San Francisco, which puts them in touch with people from Facebook and Google and OpenAI every frickin' day, and they are so steeped in the FOMO Dilemma [1] that they can't hear the objection to NFT and AI features that users, particularly Firefox users, hate. [2]

I'd really like to see Mozilla move anywhere but the Bay Area, whether that's Dublin or Denver. When you aren't hanging out with "big tech" people at lunch and after work, and when you have to get on a frickin' airplane to meet with those people, you might start to "think different," get some empathy for users, produce a better product, and be a viable business, as opposed to another out-of-touch and unaccountable NGO.

[1] Clayton Christensen pointed out in The Innovator's Dilemma that companies like Kodak and Xerox die because they are focused on the needs of their current customers, who couldn't care less about the new shiny thing that can't satisfy their needs now but will be superior in, say, 15 years. Now we have the FOMO Dilemma, best illustrated by Windows 8, which went in a bold direction (tabletization) that users were completely indifferent to: firms now introduce things that their existing customers hate because they read The Innovator's Dilemma and don't want to wind up like Xerox.

[2] we use Firefox because we hate that corporate garbage.

toomuchtodo
4d ago
3 replies
My two cents is Mozilla should be in a European tech hub, with some component of their funding coming from the EU, where the EU's belief in regulation and nation state efforts to protect humans exceeds that of the US.
PaulHoule
4d ago
1 reply
It's not a popular opinion but if I was the EU I would do the following:

(1) Fully fund Firefox or an alternative browser (with a 100% open source commitment and verifiable builds so we know the people who get ideas like chatcontrol can't slip something bad in)

(2) Pass a law to the effect: "Violate DNT and the c-suite goes to jail and the company pays 200% of yearly revenue"

(3) same for having a cookie banner

pstuart
4d ago
#1 seems the most likely to happen (but I like the others).

Seems like maybe forking it in an agreeable way, and funding an EU crew to do the needful with the goal of upstreaming as much as possible.

I don't have insight into EU investments but that would provide a lot of bang for their euros.

paradox460
4d ago
1 reply
Europe had a potential Mozilla: Opera. They let it flounder and Chinese investors bought it.
chipotle_coyote
4d ago
I liked the original Opera—it’s been a while, but I think I actually paid for it on Windows a long, long time ago—but I’m not sure they were ever a “potential Mozilla,” at least in the way I would interpret that. They were a closed source, commercial browser founded by a for-profit company.

(Also, point of order: Opera was always based in Norway, which is not a member of the European Union.)

CamperBob2
4d ago
1 reply
What stops the EU from doing that now?

Regulation.

flaburgan
4d ago
Wrong. They are actually doing it, with NLnet and NGI (Next Generation Internet), but they chose to fund Servo, not Firefox.
red-iron-pine
4d ago
1 reply
Bob the Barber ain't doing shit, but that's mostly because he's got a room-temperature IQ and is already struggling with taxes and biz dev. He can do a mean fade, though.

Some weeks, if it's slow, he may struggle to make rent for his apartment; he doesn't have the time or capacity to engage in serious rent-seeking behavior.

But haircut chains like Supercuts are absolutely engaging in shady behavior all the time, like games with how salons rent chairs, or employing questionably legal trafficked workers.

And FYI, it turns out that Supercuts is a wholly owned subsidiary of the Regis Corporation, which absolutely acquires other companies and plays all sorts of shady corporate games, including branching into other markets and monopoly efforts.

https://en.wikipedia.org/wiki/Regis_Corporation

pstuart
4d ago
I would subscribe to your newsletter ;-)
chipsrafferty
4d ago
The statement, more precisely, should say "publicly traded companies."
amelius
4d ago
4 replies
> All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.

But if users really wanted agenda-free products and services, then those would win right? At least according to free market theory.

isodev
4d ago
1 reply
> according to free market theory

Not once in the history of tech has "the free market" succeeded in preventing big corps or investors with lots of money from doing something they want.

AlecSchueler
4d ago
1 reply
I'm actually leaning towards the above comment being satire, it's hard to believe anyone on HN could believe in a free market in 2025.
1718627440
4d ago
This is yet again confusing a free market with an unregulated one. A free market is a market where all costs are included (no external costs), so that market participants can make free decisions that lead to the best outcome. To price in all external costs, regulation is needed.
cornholio
4d ago
Sure, if the common denominator user is at least as savvy as the entire marketing and strategy departments of these trillion dollar companies, then sure, users will identify products that are not designed according to their best interests and will then perfectly coordinate their purchases so that such products fail in the marketplace. Sure.
autoexec
4d ago
One of the problems with that idea is that sometimes it is far more profitable to refuse to give consumers what they want. Because making the most money possible eventually becomes the only thing that matters to a company, what users want gets ignored and users are forced to settle for what's available.
usefulcat
4d ago
Maybe in the long term, but not necessarily in the short term.
danudey
4d ago
3 replies
1. AI is generating a lot of buzz

2. AI could be the next technology revolution

3. If we get on the AI bandwagon now we're getting in on the ground floor

4. If we don't get on the AI bandwagon now we risk being left behind

5. Now that we've invested into AI we need to make sure we're seeing return on our investment

6. Our users don't seem to understand what AI could possibly do so we should remind them so that they use the feature

7. Our users aren't opting in to the features we're offering so we should opt them in automatically

Like any other "big, unproven bet," everyone is rushing in. See also: "stories" making their way into everything (Instagram, Facebook, Telegram, etc.) and vertical short-form videos (TikTok, Reels, Shorts, etc.). The difference here is that companies have put literally tens or hundreds of billions of dollars into it, so, for many, if AI fails and the money is wasted, it could be an existential threat to entire departments or companies. Nvidia is such a huge percentage of the entire US economy that if the AI accelerator market collapses, it's going to wipe out something like ten percent of GDP.

So yeah, I get why companies are doing this; it's an actual 'slippery slope' that they fell into where they don't see any way out but to keep going and hope that it works out for them somehow, for some reason.

xmcp123
4d ago
1 reply
It's also worth noting that non-AI investment has basically dried up, so anyone wanting that initial investment needs to use the buzzwords.
dreamcompiler
4d ago
1 reply
In the 90s I did a lot of AI research but we weren't allowed to call it AI because if you used that label your funding would instantly be cancelled. After this bubble pops we'll no doubt return to that situation. Sigh.
karmakurtisaani
3d ago
Conversely, if you're doing any mathematical research nowadays, you'd better find some AI angle to your work if you want to get funding.
exographicskip
4d ago
1 reply
Great breakdown. I'm starting to think I'd pay to disable AI in most products.

Similar to how I read about a bar in the UK that has an intentional Faraday cage to encourage people to interact with people in the real world.

sidewndr46
4d ago
This sounds great, actually. It seems like a fantastic revenue opportunity. We can add mandatory AI to all our products, then offer a basic plan that removes AI from most of them, except in-demand ones. To remove it there, you'll need the premium plan. There's a discount for the annual subscription. You can also get the "Friends and Family" plan that covers 12 devices but is region-locked: if you go too far from your domicile, the AI comes back. This helps keep users indoors, streaming, and watching ads. Business plans will have the option to disable AI if their annual bill exceeds a certain amount. We can align this amount to encourage typical business accounts to grow by a modest percent each year, setting it low enough that businesses are incentivized to purchase but high enough that they wind up buying significant services from us. This potentially allows us to sell them services they don't need, or that don't even exist, as demand for AI-free products is projected to rise on a 2-10 year timeframe.
thewebguyd
4d ago
> where they don't see any way out but to keep going and hope that it works out for them somehow, for some reason.

That's the core issue. No one wants to fail early or fail fast anymore. It's "let's stick to our guns and push this thing hard and far until it actually starts working for us."

Sometimes the time just isn't right for a particular technology. You put it out there, try for a little bit, and if it fails, it fails. Move on.

You don't keep investing in your failure while telling your users "You think you don't want this, but trust us, you actually do."

moregrist
4d ago
1 reply
> The freak stampede of all these tech giants to shove AI down everybody's throat just shows that they perceive the technology as having huge potential to advance the above agenda, for themselves, or for their competitors at their detriment.

I think there are more mundane (and IMO realistic) explanations than assuming that this is some kind of weird power move by all of software. I have a hard time believing that Salesforce and Adobe want to advance an agenda other than selling product and giving their C-suite nice bonuses.

I think you can explain a lot of this as:

1. Executives (CEOs, CTOs, VPs, whatever) got convinced that AI is the new growth thing

2. AI costs a _lot_ of money relative to most product enhancements, so there's an inherent need to justify that expense.

3. All of the unwanted and pushy features are a way of creating metrics that justify the expense of AI for the C-suite.

4. It takes time for users to effectively say "We didn't want this," and in the meantime a whole host of engineers, engineering managers, and product managers have gotten promoted and/or better gigs because they could say "we added AI" to their product.

There's also a herd effect among competing products that tends to make these things go in waves.

sidewndr46
4d ago
I think the real takeaway here is that Jensen Huang was smart enough to found a technology company that developed innovative products with real consumer demand. He's also smart enough to have seen the writing on the wall regarding consumer market demand saturation for high-margin products. No matter what happens with AI, Huang will be recorded as having executed the greatest pivot of all time in terms of company direction.
hansmayer
4d ago
I'd upvote this a hundred times. It's gotten to the point where, when I see a UI element, text, or email subject featuring those irritating twinkling emojis that are supposed to indicate something between "magic" and "incredible speed," I feel physical uneasiness. Maybe it's precisely because of the contradiction these symbols now stand for. Recently we purchased an .io domain for a product we're working on. Guess what: a few days later there comes an email, starting with that twinkly crap, suggesting that a ".com" domain for the same name is available, and at a rather low price! Gasp! So I look it up... well yeah, it is a .com alright. But missing the bloody last letter of our name. Such is the crap you get out of those LLMs: always incomplete, always missing something. And this is increasingly the sentiment among tech professionals: no thanks, we don't want you to keep feeding us your slop, billions you've already burned into nothing be damned!
giancarlostoro
4d ago
I do want AI for some things, but I actively go out of my way to find it. I don't want AI forced everywhere. It's like cryptominers: you're forced into wasting compute and energy resources you never asked to waste, but much worse. At least cryptominers are limited by your hardware; in this case you have an entire datacenter churning just until you can click "Disable" on the model.
allan_s
4d ago
Three months ago I was annoyed by the "let me translate the page for you" prompt, but last week on vacation I was browsing a local website and was more than happy to have Firefox translate it dynamically. The result was okay-ish, but okay enough that I was able to proceed. And I'm more than happy that the data never left my mobile device.

573 more comments available on Hacker News

ID: 45926779 · Type: story · Last synced: 11/16/2025, 9:42:57 PM
