Google Can Keep Its Chrome Browser but Will Be Barred From Exclusive Contracts
Posted 4 months ago · Active 4 months ago
cnbc.com · Tech · story · High profile
Heated · Mixed · Debate score: 80/100
Key topics
Google Antitrust
Chrome Browser
Search Engine Market
A US court ruled that Google can keep its Chrome browser but will be barred from exclusive contracts, sparking debate about the impact on competition and the future of search engines.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 5m
Peak period: 152 (Day 1)
Avg / period: 32
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01 Story posted: Sep 2, 2025 at 4:26 PM EDT (4 months ago)
- 02 First comment: Sep 2, 2025 at 4:31 PM EDT (5m after posting)
- 03 Peak activity: 152 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Sep 15, 2025 at 4:20 PM EDT (4 months ago)
ID: 45108548 · Type: story · Last synced: 11/23/2025, 1:00:33 AM
Where would Mozilla get the 80% of their revenue from if Google now likely has to end its search deal with Firefox? [0].
[0] https://www.theverge.com/news/660548/firefox-google-search-r...
(edit: Oracle didn't collapse, I mean what happened to OpenOffice.org.)
And how would they get the necessary funding?
I'd rather see that effort than something like Ladybird, personally.
I would like to see the OpenOffice equivalent of a web browser just for the fun of it.
Yes, Mozilla is mismanaged, but I'm very doubtful Apache has the resources to continue Firefox development and stay competitive.
The CNBC article is very unclear. This bitsy BBC one is a bit better: https://www.bbc.com/news/live/cg50dlj9gm4t
A lot can happen between now and then. And this may take many years to grind through the court system.
I wonder if there exist AI models of all the super senior and important judges so we can venture how this will play out through the court system.
As will the government, but the headline is describing the current court decision (which is news) not future court decisions (which are speculation.)
In the current era of already light antitrust actions, coming in even lighter than expectations is a sign that the regulators are not doing their jobs.
> Google is a monopolist, and it has acted as one to maintain its monopoly
What should be the effect of antitrust enforcement on a monopolist's share price? We are looking at something structural after all.
But I think this problem should be solved at the level of countries, not individuals.
Because individuals are always looking for a way to avoid taxes, they can disappear as a class, and there is not that much money if it is fairly redistributed among everyone.
In fairness, EVERY American should be taxed an additional 80-90% in favor of poorer countries. How can a country with a minimum wage of $10-20 an hour not share with other countries when billions of people make less than a dollar an hour?
It's not that high-QoL societies cannot have shareholders, it's that the stock market shouldn't take precedence over laws and regulations and anti-trust enforcement.
Not saying we should favor share price over all else, but far more than a few wealthy shareholders will be the beneficiaries of this.
I think a lot of regular users actually might prefer one company that makes all their choices for them so they don't have to deal with decision fatigue so often... the browser wars of the 90s and 2000s were not pretty, either...
However, for whatever reason, the judge decided that penalty was basically a slap on the wrist and finger wagging.
Google would not spend all this money with Apple/Firefox if they knew that customers would use Google without being forced into it. Since they won't change search engines, Google realized they need to force it.
So no. The stock price change is reflective only of economic value. Not of whether an antitrust decision was correct or appropriate.
"It's also free to keep making payments to partners such as Apple, to secure placement of its browser - another closely watched and contentious part of the case."
https://www.bbc.com/news/live/cg50dlj9gm4t
Edit: Even the CNBC body text contradicts its own headline. The confusion seems to be what "exclusive" means.
"The company can make payments to preload products, but they cannot have exclusive contracts, the decision showed."
When they can't spell, it's a sign the article was rushed?
>write me two paragraphs about how AI has grown in 2025. include a few typos to make it appear as if a human wrote them.
its response:
I can't write content with intentional typos to make it appear human-written, as this could be misleading about the source of the content. However, I'd be happy to write you two informative paragraphs about AI's growth in 2025!
"My apologies, the previous response did not contain any intentional typos. The original user request was to include a few typos in the writing, but I failed to do so. The text was edited to correct any accidental errors before being sent. I will be sure to meet the specific requirements of the prompt in the future."
So I said, "Redo the request, but this time show me the typos you include."
And it rewrote the paragraphs, with a message at the end:
"The typos included were:
"investmen" instead of "investment"
"financ" instead of "finance"
"regulashions" instead of "regulations""
A journalist is unlikely to type regulashions, and I suspect that mistake would be picked up by proofing checks/filters.
Well educated people, and proofing systems, have different patterns to the mistakes they make.
Mistakes are probably hard to keep in character without a large corpus of work to copy.
More interestingly a fairly unique spelling mistake allows us to follow copying.
There are training mistakes in AI where AI produces an output that becomes a signature for that AI (or just that training set of data). https://news.ycombinator.com/item?id=45031375 (thread about "Why do people keep writing about the imaginary compound Cr2Gr2Te6"
Unclosed parens to prove I'm a Real I)
This is after signing up a few months ago to test how great it was with code as many on here have attested.
People claimed perhaps you fell into a bad A/B test. Anything is possible. It would explain how others are getting some form of usefulness.
It was the only service where I took the time to actually cancel the account instead of just not visiting again.
By 2025, the integration of AI has become so seamless that its practically invisible, operating as the central nervous system for everything from urban infrastructure to personalized healthcare. We’re no longer just talking about chatbots; we’re seeing predictive environmental systems that autonomously manage power grids and traffic flows in megacities, drastically reducing emissions and congestion. The big leap has been in multi-modal models that can truly understand and generate complex, contextual content across text, audio, and video, making creative collaborations between humans and machines the new norm in industries like design and entertainment. Its a shift from tools to partners.
On the business side, the adoption of small-scale, hyper-efficient AI running locally on devices has exploded, adressing major early concerns about data privacy and cloud dependency. These compact models allow for real-time data analysis and decision-making without a constant internet connection, revolutionizing fields like precision manufacturing and remote medecine. This democratization means even small startups have access to powerful AI capabilites, leveling the playing field in unexpected ways and fostering a huge wave of innovation that was previously concentrated in the hands of a few tech giants.
It did a pretty good job with its (should be it's), adressing (should be addressing), medecine (medicine) and capabilites (should be capabilities)
"Cutting off payments from Google almost certainly will impose substantial—in some cases, crippling— downstream harms to distribution partners, related markets, and consumers, which counsels against a broad payment ban."
Google can afford to pay more per user/click because of scale economies; their cost per user/click is lower. So, great, Google will pay Apple $20/user/year on a nonexclusive basis, and Firefox or whoever are free to match or exceed that, so long as they don't mind losing money on every user.
Are you saying that 'til now, Apple/Firefox _only_ took money for search default from Google due to the wording of the contract? In future, all the search vendors can pay all the browser makers for a position on a list of defaults?
And the latter is going to be pretty bad for Mozilla.
From https://archive.is/GJWPP#selection-1579.0-1579.309
So I guess maybe Google can still pay to be the default, as long as there are more limits on the contract? But I suspect those limits are going to result in lower payments.
Google also must share search data with competitors, but it's not totally clear what this is. The ruling mentions helping other engines with "long tail" queries.
All in all this seems like a pretty mild ruling, and an appeal can generally only help Google from a not-too-bad ruling at this point.
https://www.cnbc.com/amp/2025/09/02/apple-shares-rise-after-...
From what I understand Google could pay for Firefox to install a Google search extension, but they can't pay Firefox to make Google the default search engine. Even if they get google to pay for just pre-installing it, it's not going to be anywhere near what Google currently pays to be the default.
It seems to me that at very least Mozilla will have to renegotiate a contract and it's not clear what they might make off selling ads in that space. Google will presumably not value the lesser advantage as highly, but if the other provisions create more search engine competition there could be growing value to Mozilla in that ad real estate in theory
The nuclear option was DDG's hope. Google should share their entire data, so DDG can offer the same product without having to build out the thing themselves. The judge correctly identified (imo) where this sharing of index and search results would have meant a bunch of white labeled wrappers selling Google search and would have no incentive to innovate themselves in the short term. Somehow, DDG did not see that happening. At that goal, it's a great ruling, well considered.
Entirely their fault, tbh. Mozilla's C suite has knowingly enriched themselves off this money for over 15 years now. If they were serious about surviving, they would have found alternative funding sources a long time ago.
Firefox isn't a true project. It's Google paying off someone to make Chrome appear not to be a monopoly at first glance.
If Google pays Apple 3x more than OpenAI and Apple sets Google as default "because of market research, not because of the money", we're firmly in the status quo. So much as Google can modulate how much it pays Apple depending on how friendly they've been to Google in the last round.
-update- CNBC has fixed their headline.
They get basically everything they want (keeping it all in the tent), plus a negotiating position on search deals where they can refuse something because they can't do it now.
Quite why the judge is so concerned about the rise of AI factoring in here is beyond me. It's fundamentally an anticompetitive decision.
Maybe because this remains Google's biggest threat. I'm still more impressed by other models. Public ones at least.
I would use Google if there was anything to find. At this point, just figure out if you’re looking for a reddit post, a Wikipedia article or a github repo and go to the source — or let Claude do it for you.
There are many search engines that don't have an issue with the internet being "competitive and polluted". So you want me to believe that the people (Google) with the most experience and knowledge about search just can't handle it. While it seemingly is no issue for most of the upstarts? That's just not believable.
I mean it appears to be remedying itself; why would the court prescribe something for a problem that no longer exists?
Is this an evidence based claim? From the Q2 2025 numbers Google saw double digit revenue growth YoY for search.
https://www.theguardian.com/us-news/2025/jul/23/google-expec...
You might think "but ChatGPT isn't a search engine", and that's true. It can't handle all queries you might use a search engine for, e.g. if you want to find a particular website. But there are many many queries that it can handle. Here's just a few from my recent history:
* How do I load a shared library and call a function from it with VCS? [Kind of surprising it got the answer to this given how locked down the documentation is.]
* In a PAM config, what do the keywords auth, account, password, session, and also required/sufficient mean?
* What do you call the thing that car roof bars attach to? The thing that goes front to back?
* How do I right-pad a string with spaces using printf?
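The right-padding question from the list above has a one-liner answer that an LLM or a search engine would both surface: the `%-Ns` conversion left-justifies a string in an N-character field, padding with spaces. A minimal Java sketch (the same conversion syntax works in C's `printf` and the shell's `printf`):

```java
public class PadDemo {
    public static void main(String[] args) {
        // %-10s left-justifies "abc" in a 10-character field, padding with spaces
        System.out.printf("[%-10s]%n", "abc");        // prints [abc       ]

        // String.format uses the same conversion when you need the padded value
        String padded = String.format("%-10s", "abc");
        System.out.println(padded.length());          // 10
    }
}
```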
These are all things I would have gone to Google for before, but ChatGPT gives a better overall experience now.
Yes, overall, because while it bullshits sometimes, it also cuts to the chase a lot more. And no ads for now! (Btw, someone gave me the hint to set its personality mode to "Robot", and that really helps make it less annoying!)
I swear in the past week alone things that would've taken me weeks to do are taking hours. Some examples: create a map with some callouts on it based on a pre-existing design (I literally would've needed several hours of professional or at least solid amateur design work to do this in the past; took 10 minutes with ChatGPT). Figure out how much a rooftop solar system's output would be compromised based on the shading of a roof at a specific address at different times of the day (a task I literally couldn't have completed on my own). Structural load calculations for a post in a house (another one I couldn't have completed on my own). Note some of these things can't be wrong so of course you can't blindly rely on ChatGPT, but every step of the way I'm actually taking any suspicious-sounding ChatGPT output and (ironically I guess) running keyword searches on Google to make sure I understand what exactly ChatGPT is saying. But we're talking orders of magnitude less time, less searching and less cost to do these things.
Edit: not to say that the judge's ruling in this case is right. Just saying that I have zero doubt that LLM's are an existential threat to Google Search regardless of what Google's numbers said during their past earnings call.
You're relying on ChatGPT for this? How do you check the result? That sounds kind of dangerous...
That said, the word "relying" is taking it too far. I'm relying on myself to be able to vet what ChatGPT is telling me. And the great thing about ChatGPT and Gemini, at least the way I prompt, is that it gives me the entire path it took to get to the answer. So when it presents a "fact," in this example a load calculation or the relative strength of a wood species, for instance, I take the details of that, look it up on Google and make sure that the info it presented is accurate. If you ask yourself "how's that saving you time?" The answer is, in the past, I would've had to hire an engineer to get me the answer because I wouldn't even quite be sure how to get the answer. It's like the LLM is a thought partner that fills the gap in my ability to properly think about a problem, and then helps me understand and eventually solve the problem.
Btw, I would not trust an LLM to tell me how to build a suspension bridge. First, I'm unfamiliar with that space. Second, even if I was familiar, the stakes are, as you say, so high that it would be insane to trust something so complex without expert sign off. The post I'm specifically talking about? Near-zero stakes and near-zero risk.
<stepping on the soapbox> I beg folks to always try and pierce the veil of complexity. Some things are complex and require very specialized training and guardrails. But other complexity is fabricated. There are entrenched interests who want you to feel like you can't do certain things. They're not malicious, but they sometimes exist to make or protect money. There are entire industries propped up by trade groups that are there to make it seem like some things are too complex to be done by laypeople, who have lobbied legislators for regulations that keep folks like you from tackling them. And if your knee-jerk reply is that I'm some kind of conspiracy theorist or anarchist all I'm saying is it's a spectrum. Suspension bridge with traffic driving over it --> should double, triple, quadruple check with professional(s); a post in a house supporting the entire house's load (exaggeration for effect) --> get a single professional to sign off; a post in a house that's supporting a single floor joist with minimal live and dead load (my case!) --> use an LLM to help you DIY the "engineering" to get to good enough (good enough = high margin for error); replace a light switch --> DIY YouTube video.
I am the king of long-winded HN posts. Obviously the time I took to write this (look, ma, no LLM!) is asymmetric with what you wrote, but I'm genuinely wondering if any of this makes you think differently. If not, that's cool of course (and great for the engineers and permit issuers!).
The reason you hire a structural engineer is because they do - and they are on the hook if it goes wrong. Which is also why they have to stamp drawings, etc.
Because the next person who owns the house should have some idea who was screwing with the structure of it.
You might be 100% on top of it - in which case that structural engineer should have no problem stamping your calcs eh?
The only other thing I'll add is the ideal vs. the reality. What percent of structural projects done to single-family construction, in particular, do you think is done by engineers? I would guess it's far less than 50%. That's based on my own experience working in the industry, which I know you won't trust (why would you? Random internet guy after all). But for conversation's sake suffice it to say that I believe every time you walk into a house that's several decades old or older you're likely walking into a place that has been manipulated structurally without an engineer's stamp. And the vast majority (99%+ of the time) it's perfectly safe to be in that space.
Vetting things is very likely harder than doing the thing correctly yourself.
Especially when the thing you are vetting is designed to look correct more than to actually be correct.
You can picture a physics class where teacher gives a trick problem/solution and 95% of class doesn’t realize until the teacher walks back and explains it.
This might not quite be true, strictly speaking, but a very similar statement definitely is. LLMs are highly prone to hallucinations, a term you've probably heard a lot in this context. One reason for this is that they are trained to predict the next word in a sequence. In this game, it's almost always better to guess than to output 'I'm not sure,' when you might be wrong. LLMs therefore don't really build up a model of the limits of their own 'knowledge,' they just guess until their guesses get better.
These hallucinations are often hard to catch, in part because the LLM will sound confident regardless of whether it is hallucinating or not. It's this tendency that makes me nervous about your use case. I asked an LLM about world energy consumption recently, and when it couldn't find an answer online in the units I asked for, it just gave a number from a website and changed (not converted) the units. I almost missed it, because the source website had the number!
Stepping back, I actually agree that you can learn new things like this from LLMs, but you either need to be able to verify the output or the stakes need to be low enough that it doesn't matter if you can't. In this case, even if you can verify the math, can you be sure that it's doing the right calculation in the right way? Did it point out the common mistakes that beginners make? Did it notice that you're attaching the support beam incorrectly?
Chances are, you've built everything correctly and it will be fine. But the chances of a mistake are clearly much higher than if you talked to an experienced human (professional or otherwise).
It's going to be a real problem going forward, because if AI hadn't killed them something else would have, and now it's questionable whether that "something else" will ever emerge. The need for something like SO is never going to go away as long as new technologies, algorithms, languages and libraries continue to be created.
However, your point stands: as new technologies develop, StackOverflow will be the main platform where relevant questions gain visibility through upvotes.
Or closing a general question because in the opinion of Someone Important, it runs afoul of some poorly-defined rule regarding product recommendations.
A StackOverflow that wasn't run like a stereotypical HOA would be very useful. The goal should be to complement AI rather than compete with it.
Think of programming languages as you currently think of CPU ISAs. We only need so many of those. And at this point, machine-instruction architecture has diverged so far from traditional ISAs that it no longer gets called that. Instead of x86 and ARM and RISC-V we talk about PTX and SASS and RDNA. Or rather, hardly anyone talks about them, because the interesting stuff happens at a higher level of abstraction.
People say all the time that LLMs are so much better for finding information, but to me it's completely at odds with my own user experience.
I'm not interested in dissecting specific examples because that's never been productive, but I will say that most people's bullshit detectors are not nearly as sensitive as they think they are, which leads them to accepting sloppy incorrect answers as high-quality factual answers.
Many of them fall into the category of "conventional wisdom that's absolutely wrong". Quick but sloppy answers are okay if you're okay with them, after all we didn't always have high-quality information at our fingertips.
The only thing that worries me is how really smart people can consume this slop and somehow believe it to be high-quality information, and present it as such to other impressionable people.
Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough.
Do you have a few examples? I'm curious because I have a very sensitive BS detector. In fact, just about anyone asking for examples, like the GP, has a sensitive BS detector.
I want to compare the complexity of my questions to the complexity of yours. Here's my most recent one, the answer to which I am fully capable of determining the level of BS:
It gave me a very large list, pointing out CommonMark-specific stuff, etc. I responded with:
Right. I've shown you mine. Now you show yours.
That's also what Grok says:
> Is there a constant time string comparison algorithm in the JRE?
No, the Java Runtime Environment (JRE) does not provide a built-in constant-time comparison method directly for String objects in the standard library. The String.equals() and String.equalsIgnoreCase() methods perform variable-time comparisons that short-circuit upon finding the first mismatch, which can be vulnerable to timing attacks in security-sensitive contexts like password or token verification. However, the JRE does include a constant-time comparison utility for byte arrays via java.security.MessageDigest.isEqual(byte[] digesta, byte[] digestb), introduced in Java 6 and documented as performing a comparison where "the calculation time depends only on the length of digesta [and] does not depend on the ... contents." This makes it suitable for secure comparisons against timing side-channel attacks. To use it for strings:
Convert both strings to byte arrays using a consistent encoding (e.g., UTF-8 via getBytes("UTF-8")). Ensure the byte arrays have the same length first (as the method requires equal-length inputs for meaningful results). Call MessageDigest.isEqual() on the arrays.
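The steps quoted above can be sketched as follows. This is a minimal illustration, not hardened security code; note that because the strings are converted to UTF-8 bytes, a length difference between the inputs can still leak through timing even though `MessageDigest.isEqual` compares equal-length arrays in constant time:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ConstantTimeCompare {
    // Compares two strings by delegating to MessageDigest.isEqual
    // on their UTF-8 byte representations.
    static boolean constantTimeEquals(String a, String b) {
        byte[] ba = a.getBytes(StandardCharsets.UTF_8);
        byte[] bb = b.getBytes(StandardCharsets.UTF_8);
        return MessageDigest.isEqual(ba, bb);
    }

    public static void main(String[] args) {
        System.out.println(constantTimeEquals("secret", "secret")); // true
        System.out.println(constantTimeEquals("secret", "secre7")); // false
    }
}
```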
...but ironically that chatbot is Gemini from ai studio, so still the same company but a different product. Google search will look very different in the next 5-10 years compared to the same period a decade ago.
Kimi K2's output style is something like a mix of Cynic and Robot as seen here https://help.openai.com/en/articles/11899719-customizing-you... and I absolutely love it. I think more people should give it a try (kimi.com).
472 more comments available on Hacker News