ChatGPT's Atlas: The Browser That's Anti-Web
Key topics
The article discusses ChatGPT's Atlas browser, which is seen as 'anti-web' and raises concerns about AI-driven browsing, data collection, and the future of the web. The discussion revolves around the implications of such a browser and its potential impact on users and content creators.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 4d after posting
- Peak period: 95 comments (Day 4)
- Avg / period: 40 comments
Based on 160 loaded comments
Key moments
- Story posted: Oct 25, 2025 at 5:08 AM EDT (3 months ago)
- First comment: Oct 28, 2025 at 10:35 PM EDT (4d after posting)
- Peak activity: 95 comments in Day 4 (hottest window of the conversation)
- Latest activity: Nov 2, 2025 at 11:00 PM EST (2 months ago)
2 - we didn't leave command-line interfaces behind 40 years ago
2 - That's an entirely different situation and you know it.
Atlas may not be the solution but I love the idea of an LLM that sits between me and the dreck that is today’s web.
I also believe noticing Baader-Meinhof in the 90s is rather unsurprising, since the RAF was just "a few years" earlier. However, "dreck", as someone else noted, has been documented since the early 20th century. So I don't think my noticing this just recently is a bias; rather, it's a true coincidence.
https://en.wikipedia.org/wiki/Frequency_illusion
The web was supposed to set us free and you cry out for chains?
It's in your hands. It's literally in the hands of this entire forum. There is nobody else to blame.
I guess you did say "the idea of" not "the reality of".
I think we should note that this is a situation created by the businesses more than the consumers. By offering "free" products they drove paid products out of the market. They didn't have to do that. But if I'm going to be as fair as possible, it's only regulation that could have stopped such a niche from being exploited. One business or another would have eventually figured it out.
It feels like $10 / month would be sufficient to solve this problem. Yet, we've all insisted that everything must be free.
But I can't actually make that payment - except maybe by purchasing a paid adblocker - where ironically the best open source option (uBlock Origin) doesn't even accept donations.
I think the most telling figure is the breakdown of Average Revenue Per User (ARPU) per region for Facebook specifically [1]. The average user brought in about $11 per quarter, while the average US/CA user brought in about $57 per quarter during 2023.
[1] https://s21.q4cdn.com/399680738/files/doc_financials/2023/q4... (page 15)
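Back-of-the-envelope, those quarterly figures come to roughly $3.70/month for the average worldwide user and about $19/month for a US/CA user, which is the relevant comparison to the $10/month figure upthread. A trivial sketch of that arithmetic (the only inputs are the two numbers quoted above, nothing else from the filing):

# Rough conversion of Meta's 2023 quarterly ARPU to monthly figures
worldwide_quarterly_arpu = 11   # ~$11 per user per quarter, worldwide
us_ca_quarterly_arpu = 57       # ~$57 per user per quarter, US & Canada

print(round(worldwide_quarterly_arpu / 3, 2))  # ~3.67 USD/month
print(round(us_ca_quarterly_arpu / 3, 2))      # 19.0 USD/month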
It’s closer to $100 than $10 though, for all the services I pay for to avoid ads, and you still need ad blockers for the rest of the internet.
- Kagi
- YouTube Premium
- Spotify Premium
- Meta ad-free
- A bunch of substacks and online news publications
- Twitter Pro or whatever it’s called
On top of that I aggressively ad-block with extensions and at DNS level and refuse to use any app with ads. I have most notifications disabled, too.
It is a lot better, but it’s more like N * $10 than $10 per month.
But doesn't YouTube Premium include YouTube Music? So why pay for Spotify Premium too?
YouTube Music is both better and worse: the UI has some usability issues, and unfortunately it shares likes and playlists with the normal YouTube account. As a library it has lots of crap uploaded by YouTube users and often wrong metadata, but thanks to that it also has some niche artists and recordings which are not available on other platforms.
It doesn't address the other reasons, but there are some free tools for moving Spotify playlists to YouTube.
I've also been using Spotify for longer than YouTube Music, or its predecessor that Google killed (as they do periodically), even existed.
Spotify (now) supports lossless.
Spotify connects to Sonos, wiim, etc. devices
Spotify supports marking albums and playlists for offline sync, including to my Garmin watch.
I participate in a number of collaborative Spotify playlists (e.g. on group trips, at parties, etc.). I’ve never seen anyone make a collaborative playlist on another platform, much less missed out on participating in one.
Shazam results have an “Open in Spotify” button and Shazam adds everything it identifies to a Spotify playlist.
When I’ve used it, the YouTube Music UI has felt like it’s not really designed for people who listen to music the way I do at all.
I’m not willing to go without YouTube just to spite Google but I’d rather not give them money or attention/usage if I can avoid it.
I don’t know how many of these would also be ok with YouTube Music, but it’s clearly not all of them and I suspect it’s close to zero. I’m fortunate that the cost of Spotify is not a burden for me, and I’d much rather pay it to get closer to the experience I want than try to get by with YouTube Music.
https://help.kagi.com/kagi/ai/llms-privacy.html#llms-privacy
That said, don't be lured: you know they're already working on ways to put ads and trackers and whatnot inside ChatGPT and Atlas. That $20 won't be enough to recoup all that investment and cost and maximize profits.
So I think we should be careful what we wish for here.
This is kind of surprising, because those are precisely the ways I would say that a Web search is better than ChatGPT. Google is generally sub second to get to results, and quite frequently either page 1 or 2 will have some relevant results.
With ChatGPT, I get to watch as it processes for an unpredictable amount of time, then I get to watch it "type".
> ads free
Free of ads where ChatGPT was paid to deliver them. Because it was trained on the public Internet, it is full of advertising content.
Update: Example query I just did for "apartment Seville". Google completed in under a second. All the results above the fold are organic, with sponsored way down. Notably the results include purchase, long-term and vacation rental sites. The first 3 are listing sites. There's an interactive map in case I know where I want to go; apartments on the map include links to their websites. To see more links, I click "Next."
ChatGPT (MacOS native app) took ~9 seconds and recommended a single agency, to which it does not link. Below that, it has bullet points that link to some relevant sites, but the links do not include vacation rentals. There are 4 links to apartment sites, plus a link to a Guardian article about Seville cracking down on illegal vacation rentals. To see more links, I type a request to see more.
For all the talk about Google burying the organic links under a flood of ads, ChatGPT shows me far fewer links. As a person who happily pays for and uses ChatGPT daily, I think it's smart to be honest about its strengths and shortcomings.
That being said, I've never really come across good general ways to get Google to give me good results.
I know some tricks, e.g. filetype:PDF, use Scholar for academic search, use "site:...", something like "site:reddit.com/r/Washington quiet cafe" for most things people would want to do in a city, because people generally ask about those things on community forums.
But I have a poor time with dev-related queries, because half the time it's SEO'd content, and when I don't know enough about a subject, LLMs generally give me a lot of lines of inquiry (be careful of X, and also consider Y) that I would not have bothered to ask about, because I don't know what I don't know.
But if you were looking for any information about anything? Say, what's the average cost of an apartment in Seville? That's where on Google, yes the search result is fast, but then you have to click through the links and those websites are often very slow to load and full of ads, and you might need to look at a few of them to get the information you wanted.
If you have a few follow-up pieces of information you're interested in as well (what's the typical apartment size, have prices been going up or down, what is the typical leasing process, etc.), you can bottom out on the info much faster with ChatGPT.
If I’m trying to learn about a topic (for example, how a cone brake works in a 4WD winch), then ChatGPT gives me a great overview with “ Can you explain what a cone brake is and how it works, in the context of 4WD winches?” while google, with the search “4wd winch cone brake function explained”, turns up a handful of videos covering winches (not specifically cone brakes) and some pages that mention them without detailing their function. ChatGPT wins here.
If I were trying to book a flight I’d never dream of even trying to use ChatGPT. That sort of use case is a non-starter for me.
As it is, I find there are some things LLMs are genuinely better for but many where a search is still far more useful.
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
(https://www.youtube.com/watch?v=MzKSQrhX7BM&t=0m13s)
Google used to be like that, and if ChatGPT is better right now, it won't remain that way for long. They're both subject to the same incentives and pressures, and OpenAI is, if anything, less ethical and idealistic than Google was.
It’s unlikely LLM operators can break even by charging per use, and it should be expected that they’ll race to capture the market by offering “free” products that in reality are ad serving machines, a time-tested business model that has served Meta and friends very well. The fact that Atlas browser is (and they don’t even hide it) a way to work around usage limits of ChatGPT should ring alarm bells.
An AI browser means choosing to send all the stuff you browse to a third party without a demonstrated interest in keeping it all private, and getting back stuff that might or might not be true to the original content. Or maybe not even true at all.
Oh and - Atlas will represent your interests, right up until OpenAI decides it's not in their financial interest to do so. What do you do when the entire web browser UI gets enshittified?
But isn't this, instead, letting a third party strip that context away and give it its own context so that you can't make those choices and decisions properly? Information without context is, to me, nearly worthless.
And even if you believe they are neutral parties and have your own interests at heart (which, quite frankly, I think is naive), once companies like that know everything about you, you don't think they'll abuse that knowledge?
- Atlas slurps the web to get more training data, bypassing Reddit blocks, Cloudflare blocks, paywalls, etc. It probably enriches the data with additional user signals that are useful.
- Atlas is an attempt to build a sticky product that users won't switch away from. An LLM or image model doesn't really have sticky attachment, but if it starts storing all of your history and data, the switching costs could become immense. (Assuming it actually provides value and isn't a gimmick.)
- Build pillars of an interconnected platform. Key "panes of glass" for digital lives, commerce, sales intent, etc. in the platformization strategy. The hardware play, the social network play -- OpenAI is trying to mint itself as a new "Mag 7", and Atlas could be a major piece in the puzzle.
- Eat into precious Google revenue. Every Atlas user is a decrease in Google search/ads revenue.
Response: Already achieved by OpenAI!
I guess Mag 7 is the new FAANG, not the mag-7 shotgun
I also need to laugh. Wasn't OpenAI just crying about people copying them not so long ago?
https://www.womenslaw.org/about-abuse/forms-abuse/emotional-...
The article does read a bit "conspiracy theory" to me, though.
can never go back
GUIs emerged to make things easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has gotten to a point where our computers can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for the future where we evolve interfaces into things we only dreamt of before.
Particularly when you throw in agentic capabilities where it can feel like a roll of the dice if the LLM decides to use a special purpose tool or just wings it and spits out its probabilistic best guess.
The bridge would come from layering natural language interfaces on top of deterministic backends that actually do the tool calling. We already have models fine-tuned to generate JSON schemas. MCP is a good example of this kind of stuff. It discovers tools and how to use them.
Of course, the real bottleneck would be running a model capable of this locally. I can't run any of the models actually capable of this on a typical machine. Till then, we're effectively digital serfs.
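A minimal sketch of that layering, assuming a made-up tool name and schema (this is not MCP's or any vendor's actual API; it only illustrates the pattern of the model emitting a JSON tool call that deterministic code validates and executes):

import json

# Hypothetical tool registry: each entry pairs a description with the JSON Schema
# the model is asked to conform to when it wants to call that tool.
TOOLS = {
    "search_photos": {
        "description": "Search the local photo library",
        "parameters": {
            "type": "object",
            "properties": {
                "taken_after": {"type": "string", "format": "date"},
                "content": {"type": "string"},
            },
            "required": ["content"],
        },
    },
}

def dispatch(tool_call_json: str):
    # Deterministic side: parse the model's proposed call, check it names a known
    # tool, then hand the arguments to ordinary code (schema validation elided).
    call = json.loads(tool_call_json)
    name, args = call["name"], call["arguments"]
    if name not in TOOLS:
        raise ValueError("unknown tool: " + name)
    return name, args  # a real system would invoke the actual tool here

# e.g. dispatch('{"name": "search_photos", "arguments": {"content": "pets"}}')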
Worth reading to the end.
No, we didn't.
claude -p "Question goes here"
As that will print the answer only and exit.
"htop" is a TUI, "ps" is a CLI. They can both accomplish most of the same things but the user experience is completely different. With htop you're clicking on columns to sort the live-updating process list, while with "ps" you're reading the manual pages to find the right flags to sort the columns, wrapping it in a "watch" command to get it to update periodically, and piping into "head" to get the top N results (or looking for a ps flag to do the same).
In any case, Claude Code is not really CLI, but rather a conversational interface.
Don't get me wrong, I'm not arguing that the expansion of GUI-based interfaces wasn't a good thing. There's plenty of things I prefer to interact with that way, and the majority of people wouldn't use computers if CLIs were still the default method. But what he's describing is literally not how anyone ever used the command line.
Anyone who deals with any kind of machine with a console port.
CLIs are current technology, that receive active development alongside GUI for a large range of purposes.
Heck, Windows currently ships with 3 implementations: Command Prompt, PowerShell AND Terminal.
CLI is ALWAYS the fallback when nothing else works (except as a fetish of people on HN). Even most devs use IDEs most of the time.
There's a pretty decent list of incompatible features, but it's shrinking (mostly due to those features being EOL'd, not upgraded).
If I’d anticipated breaching containment and heading towards the orange site, I may not have risked the combination of humor and anything that’s not completely literal in its language. Alas.
Heck, just look at what's happening at this very moment: I'm using a textarea with a submit button. Even as a developer/power-user, I have zero interest in replacing that with:
Also, some commenters here at HN are stating that the CLI/TUI is just the fallback option... that's ridiculous. Nvi/vim, entr, make... can autocompile (and autocomplete too, with some tools) a project upon writing any file in a directory, thanks to the entr tool.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
What is the significance of "Even all the Linux users"? First of all, it's probably incorrect because of the "all" quantifier. I went out of my way to look at the website via the terminal to disprove the statement. It's clearly factually incorrect now.
Second, what does hate have anything to do with this? Graphical user interfaces serve different needs than text interfaces. You can like graphical user interfaces and specifically use Linux precisely because you like KDE or Gnome. You can make a terrible graphical user interface for something that ought to be a command line interface and vice versa. Vivado isn't exactly known for being easy to automate.
Third, why preemptively attack people as nerds?
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I mean, this comes off as an incredible strawman. After all, who wouldn't be excited by computers in an era when they were the hot new thing? Computers were exciting not because they had text interfaces; they were fun because they were computers. It's like learning how to drive on the highway for the first time.
The worst part by far is the unnecessary insinuation though. It's the standard anti-intellectual anti-technology stereotype. It creates hostility for absolutely no reason.
I have the opposite view. I think text (and speech) is actually a pretty good interface, as long as the machine is intelligent enough (and modern LLMs are).
I once saw a demo of an AI photo editing app that displays sliders next to light sources on a photo, and you can dim or brighten the individual light sources' intensity this way. This feels to me like the next level of user interface.
1. There's a "normal" interface or query-language for searching.
2. The LLM suggests a query, based on what you said you wanted in English, possibly in conjunction with results of a prior submit.
3. The true query is not hidden from the user, but is made available so that humans can notice errors, fix deficiencies, and naturally--if they use it enough--learn how it works so that the LLM is no longer required.
For example, "Find me all pictures since Tuesday with pets" might become:
Then the implementation of "fuzzy-content" would generate a text description of the photo, and some other LLM-thingy does the hidden document-building.
There's very little that's charitable in the article, so I met it at the same place.
But most of those will likely be developers, who use the CLI in a very particular way.
If we now subdivide further, and look at the people that use the CLI for things like browsing the web, that's going to be an even smaller number of people. Negligible, in the big picture.
I'd bookmarked a lot of Gamasutra articles over the years and am kinda bummed out that I can't find any of them now that the site has shifted. You mentioned having a collection of their essays? Is there any way to share or access them?
This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare or other web sites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large scale data breaches at companies that harbour sensitive information. I've been the one pushing against hard blocking AI tools at my org so far but this may have turned me around for OpenAI at least.
Well, unless the scenario is moot because such a vendor would never have released it in the first place.
On the contrary, it could be the case that Microsoft ritually sacrifices a dozen babies each day in their offices and it would still be used because office.
Yay!! Let’s all make a not-for-profit!!
Oh, but hold on a minute, look at all the fun things we can do with lots of money!
Ooooh!!
Clearly, an all-local implementation is safer, and using less powerful local models is the reasonable tradeoff. Also make it open source for trust.
All that said, I don’t need to have everything automated, so we also have ‘why even build it’ legitimate questions to ask.
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog - just prompt your BlogLLM with an idea. Why comment on blogs - your agent will do it for you. All while avoiding child porn with 97% accuracy - something human-curated content surely cannot be trusted to do.
So I am 0% surprised.
I imagine a future where websites (like news outlets or blogs) will have something like a “100% human created” label on it. It will be a mark of pride for them to show off and they’ll attract users because of it
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
https://openai.com/index/introducing-chatgpt-atlas/
But also, it's not the 20-somethings building this; the people making decisions are in their 40s and 50s.
:skull:
But this feels truly dystopian. We here on HN are all in our bubble, we know that AI responses are very prone to error and just great in mimicking. We can differentiate when to use and when not (more or less), but when I talk to non-tech people in a normal city not close to a tech hub, most of them treat ChatGPT as the all-knowing factual instance.
They have no idea of the conscious and unconscious bias in the responses, based on how we ask the questions.
Unfortunately I think these are the majority of the people.
If you combine all that with a shady Silicon Valley CEO under historic pressure to make OpenAI profitable after 64 billion in funding, regularly flirting with the US president, it seems only logical to me that exactly what the author described is the goal. No matter the cost.
As we all feel that AI progress is stagnating and it's mainly the production cost of AI responses that is going down, this almost seems like the only way out for OpenAI to win.
> “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”
If a human wrote that same article about Taylor Swift, would you say it completely fabricates content? Most “articles” on the web are just rewrites of someone else’s articles anyway and nobody goes after them as bad actors (they should).
2.0 - algorithmic feeds of real content with no outbound links - stay in the wall
3.0 - slop infects rankings and feeds, real content gets sublimated
4.0 - algorithmic feeds become only slop
5.0 - no more feeds or rankings, but on demand generative streams of slop within different walled slop gardens
6.0 - 4D slop that feeds itself, continuously turning in on itself and regenerating
191 more comments available on Hacker News