Key Takeaways
That's not to say I don't have gripes with how Google Maps works, but I just don't know why the other factors were not considered.
Almost certainly Instagram/TikTok are, though. I know a few places which have been ruined by becoming TikTok tourist hotspots.
Counterpoint: I have met people in the UK whose lives revolve around doing nothing but that.
https://gehrcke.de/2023/09/google-changes-recently-i-see-mor...
The wrong RSS thing may have just tipped the scales over to Google not caring.
No more Google. No more websites. A distributed swarm of ephemeral signed posts. Shared, rebroadcasted.
When you find someone like James and you like them, you follow them. Your local algorithm then prioritizes finding new content from them. You bookmark their author signature.
Like RSS but better. Fully distributed.
Your own local interest graph, but also the power of your peers' interest graphs.
Content is ephemeral but can also live forever if any nodes keep rebroadcasting it. Every post has a unique ID, so you can search for it later in the swarm or some persistent index utility.
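A minimal sketch of what such signed, content-addressed posts could look like. Everything here - field names, key handling, the follow set - is a hypothetical illustration, not an existing protocol:

    # Hypothetical sketch: signed, content-addressed posts for a p2p swarm.
    # Uses the 'cryptography' package for Ed25519; all structures are illustrative.
    import hashlib
    import json
    import time

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


    def make_post(author_key: Ed25519PrivateKey, text: str) -> dict:
        author = author_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        body = {"author": author.hex(), "text": text, "ts": time.time()}
        payload = json.dumps(body, sort_keys=True).encode()
        return {
            **body,
            "id": hashlib.sha256(payload).hexdigest(),  # unique ID, searchable in the swarm later
            "sig": author_key.sign(payload).hex(),      # proves the post came from this author
        }


    def verify_post(post: dict) -> bool:
        body = {k: post[k] for k in ("author", "text", "ts")}
        payload = json.dumps(body, sort_keys=True).encode()
        pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(post["author"]))
        try:
            pub.verify(bytes.fromhex(post["sig"]), payload)
        except Exception:
            return False
        return post["id"] == hashlib.sha256(payload).hexdigest()


    # "Following" someone is just bookmarking their public key; your local node
    # then prioritizes rebroadcast posts whose "author" field matches a bookmark.
    james = Ed25519PrivateKey.generate()
    followed = {make_post(james, "hello swarm")["author"]}

Because the ID is a hash of the signed payload, any node can keep rebroadcasting a post, and anyone can later verify it really came from the bookmarked author.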
The Internet should have become fully p2p. That would have been magical. But platforms stole the limelight just as the majority of the rest of the world got online.
If we nerds had but a few more years...
I would settle for simpler, attainable things. Equal opportunity for the next generation. Quality education for everybody. Focus on merit, not other characteristics. Personal freedom as long as it does not infringe on the freedom of people around you (e.g. there can't be such a thing as a "freedom to pollute").
In my view the Internet as p2p worked pretty well to improve on the previous status quo in many areas (not all). But there will never be a "stable solution"; life and humans are dynamic. We do have some good and free stuff on the Internet today because of the groundwork laid 30 years ago by the open source movement. Any plan started today will only show a noticeable effect many years from now. So "we can't even make" sounds more like an excuse not to start than an honest take.
What does this mean? I suppose it can't literally mean equal opportunity, because people aren't equal, and their circumstances aren't equal; but then, what does this mean?
Currently, in many countries, I know of multiple measures/rules/policies that affect these three things in ways I find damaging for society overall in the long term. Companies complain they can't find workers, governments complain that birth rates are low, yet there are many obstacles to raising a child. Financial incentives for parents do not seem to work (for example: https://www.bbc.com/news/world-europe-47192612).
I feel that saying "we have the resources" ignores the difficulty of allocating them better, which is the hardest part. Compared to 20 years ago we have amazing software tools and hardware capabilities, and still many large projects fail - it's not because they don't have the resources...
Every family should be provided with a UBI that covers food and rent (not in the city). That is a more attainable goal and would solve the same problems (better, in fact).
(Not saying that UBI is a panacea, but I've lived in countries that have experimented with such and it seems the best of the alternatives)
On the other side of the same coin, there are already governments that will make you legally responsible for what your page's visitors write in comments. This renders any p2p internet legally untenable (i.e. someone goes to your page, posts some bad word, and you get jailed). So far they say "it's only for big companies", but that's a lie; it's just boiling the frog.
"Cannot do anything" is relative. Google did something about it (at least for the first 10-15 years), but I am sure that was not their primary intention, nor were they sure it would work. So "we have no clue what will work to reduce it" is more appropriate.
Now I think everybody has the tools to build stuff more easily (you could not start a television channel or a newspaper 50 years ago). That is just an observation of possibility, not a guarantee of success.
YouTube should get split out and then broken up. Google Search should get split out and broken up. etc.
This is not a problem you solve with code. This is a problem you solve with law.
When the DMCA was a bill, people were saying that the anti-circumvention provision was going to be used to monopolize playback devices. They were ignored, it was passed, and now it's being used to monopolize not just playback devices but also phones.
Here's the test for "can you rely on the government here": Have they repealed it yet? The answer is still no, so how can you expect them to do something about it when they're still actively making it worse?
Now try to imagine the world where the Free Software Foundation never existed, Berkeley never released the source code to BSD and Netscape was bought by Oracle before donating the code to Mozilla. As if the code doesn't matter.
Most efficient = cheaper. A lot of times cheaper sacrifices quality, and sometimes safety.
How do you think Google or Cloudflare actually work? One big server in San Francisco that runs the whole world, or lots of servers distributed all over?
Why do you think they're a monopoly in the first place? Obviously because they were more efficient than the competition, and network effects took care of the rest. Having to make choices is a cost for the consumer - in other words, consumers are lazy - so winners have staying power, too. It's a perfect storm for winner-takes-all centralization, since a good centralized service is the most efficient for consumers both utility-wise ('I know I'm getting what I need') and decision-cost-wise ('I don't need to search for alternatives'), until it switches to rent seeking, which is where the anti-monopoly laws should kick in.
In other words, open source decentralized systems are the most efficient because you don't have to reduplicate a competitor's effort when you can just use the same code.
> Obviously because they were more efficient than the competition and network effects took care of the rest.
In most cases it's just the network effect, and whether it was a proprietary or open system in any given case is no more than the historical accident of which one happened to gain traction first.
> Having to make choices is a cost for the consumer
If you want an email address you can choose between a few huge providers and a thousand smaller ones, but that doesn't seem to prevent anyone from using it.
> until it switches to rent seeking
If it wasn't an open system from the beginning then that was always the end state and there is no point in waiting for the guard to lock the door before trying to remove yourself from the cage.
This is the great lie. Approximately zero end consumers care about code, the product they consume is the service, and if the marginal cost of switching the service provider is zero, it's enough to be 1% better to take 99% of the market.
Most people don't care about reading it. They very much care about what it does.
Also, it's not "approximately zero" at all. It's millions or tens of millions of people out of billions, and when a small minority of people improve the code -- because they have the ability to -- it improves the code for all the rest too. Which is why they should have a preference for the ability to do it even if they're not going to be the one to exercise it themselves.
> if the marginal cost of switching the service provider is zero, it's enough to be 1% better to take 99% of the market.
Except that you'd then need to be 1% better across all dimensions, or different people will have different preferences, and everyone else is trying to carve out a share of the market too. Meanwhile, if you were doing something that actually did cause 99% of people to prefer your service, everybody else would start doing it.
There are two main things that cause monopolies. The first is a network effect, which is why those things all need to be open systems. The second is that one company gets a monopoly somewhere (often because of the first, sometimes through anti-competitive practices like dumping) and then leverages it in order to monopolize the supply chain before and after that thing, so that competing with them now requires not just competing with the original monopoly but also reproducing the rest of the supply chain which is now no longer available as independent commodities.
This is why we need antitrust laws, but also why we need to recognize that antitrust laws are never perfect and do everything possible to stamp out anything that starts to look like one of those markets through development of open systems and promoting consumer aversion to products that are inevitably going to ensnare them.
"People don't want X" as an observed behavior is a bunch of nonsense. People's preferences depend on their level of information. If they don't realize they're walking into a trap then they're going to step right into it. That isn't the same thing as "people prefer walking into a trap". They need to be educated about what a trap looks like so they don't keep ending up hanging upside down by their ankles as all the money gets shaken out of their pockets.
Isn't what you're describing something like Mastodon or Usenet?
From what you've described, you've just re-invented webrings.
The amount of spam has increased enormously and I have no doubt there are a number of such anti-spam flags and a number of false positive casualties along the way.
However, if they do it for the statutory term, they can then successfully apply for existing-use rights.
I sort of wonder when Google will cop the blame for all of this, because governments will blame them.
And I've seen expert witnesses bring up Google Maps pins during a tribunal over planning permits, and the tribunal sort of acts as if it's all legit.
I called the locksmith and they came, but in an unmarked van, spent over an hour changing 2 locks, damaged both locks, and then tried to charge me like $600 because the locks were high security (it's actually a deal for me, y'know, these locks usually go for much more). I just paid them and immediately contacted my credit card company to dispute the charge.
I called their office to complain and the lady answering the phone hung up on me multiple times. I drove to where the office was supposed to be, and there was no such office. I reported this to google maps and it did get removed very quickly, but this seems like something that should be checked or at least tied back to an actual person in some way for accountability.
They are certainly trying. It's not good for them to have fake listings.
https://podcast.rainmakerreputation.com/2412354/episodes/169...
(just googled that, didn't listen, was looking for a much older podcast from I think Reply All from like 10yrs ago)
Only the opening is similar, but the intent is totally different, and so is the focus keyword.
I'm not facing this issue in Bing or other search engines.
Some popular models on Hugging Face never appear in the results, but the sub-pages (discussion, files, quants, etc.) do.
Some Reddit pages show up only in their auto-translated form, and in a language Google has no reason to think I speak. (Maybe there's some deduplication to keep machine translations out of the results, but it's misfiring and discarding the original instead?)
I think at least for Google there are some browser extensions that can remove these results.
It’s also clearly confusing users, as you get replies in a random language, obviously made by people who read an auto translation and thought they were continuing the conversation in their native language.
This is what has caused the degradation of search quality since then.
www.xyz.com/blog/keyword-term/
www.xyz.com/services/keyword-term/
www.xyz.com/tag/keyword-term/
So, for a topic, if I have two of the above pages, Google will pick one of them as canonical despite the different keyword focus and intent. And the worst part is that it picks the worse page as canonical, i.e. the tag page over the blog page, or the blog page over the service page.
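For what it's worth, the standard way to nudge that choice is to declare the preferred URL explicitly with a rel=canonical hint on every variant. A rough sketch using the made-up URLs above - which section should win is a judgment call, and this is not something the commenter says they tried:

    # Illustrative sketch: emit an explicit rel=canonical hint on each near-duplicate
    # page so Google doesn't have to guess which URL is preferred. Google treats this
    # as a hint, not a directive, but it usually settles exactly this kind of tie.
    # The domain, sections, and mapping below are placeholders.
    BASE = "https://www.xyz.com"

    # For each keyword term, name the section whose page should be canonical.
    PREFERRED_SECTION = {
        "keyword-term": "blog",   # e.g. prefer the blog post over the tag archive
    }


    def canonical_tag(current_section: str, slug: str) -> str:
        """Return the <link rel="canonical"> tag to place in this page's <head>."""
        winner = PREFERRED_SECTION.get(slug, current_section)
        return f'<link rel="canonical" href="{BASE}/{winner}/{slug}/">'


    print(canonical_tag("tag", "keyword-term"))
    # <link rel="canonical" href="https://www.xyz.com/blog/keyword-term/">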
1. AI Overview: my page impressions were high, my ranking was high, but click-through took a dive. People read the generated text and move along without ever clicking.
2. You are now a spammer. Around August, traffic took a second plunge. In my logs, I noticed these weird queries on my search page. Basically, people were searching for crypto and scammy websites on my blog. Odd, but not like they were finding anything. Turns out, their search query was displayed as an h1 on the page and crawled by Google. I was basically displaying spam.
I don't have much control over AI Overview, because disabling it means I don't appear in search at all. But for the spam, I could do something. I added a robots noindex on the search page. A week later, both impressions and clicks recovered.
example.com/search?q=text+scam.com+text
On my website, I'd display "text scam.com text - search results". Google would then see that link in my h1 tag and page title and decide I was probably promoting scams. Also, the reason this appeared suddenly is that I added support for Unicode in search. Before that, the page would fail if you entered Unicode. So the moment I fixed it, I allowed spammers to have their links displayed on my page.
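A minimal sketch of the fix described (a robots noindex on the search results page, plus escaping the echoed query); the Flask route and markup are illustrative stand-ins, not the blog's actual code:

    # Sketch of the described fix: internal search results carry a robots noindex,
    # so reflected spam queries never make it into Google's index.
    from flask import Flask, request
    from markupsafe import escape

    app = Flask(__name__)


    @app.route("/search")
    def search():
        q = escape(request.args.get("q", ""))  # never echo the raw query into the page
        return (
            "<html><head>"
            '<meta name="robots" content="noindex">'   # keep this page out of the index
            f"<title>{q} - search results</title>"
            "</head><body>"
            f"<h1>Search results for: {q}</h1>"
            "</body></html>"
        )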
Since these are very low quality results surely one of Google's 10000 engineers can tweak this away.
That's trivially easy. Imagine a spammer creating some random page which links to your website with that made-up query parameter. Once Google indexes their page and sees the link to your page, Google's Search Console complains to you, the victim, that this page doesn't exist. You, the victim, have no insight into where Google even found that non-existent path.
> Since these are very low quality results surely one of Google's 10000 engineers can tweak this away.
You're assuming there's still people at Google who are tasked with improving actual search results and not just the AI overview at the top. I have my doubts Google still has such people.
[1] https://cyberinsider.com/threat-actors-inject-fake-support-n...
Google got smart, caught on to such exploits, and penalized sites that do this.
You can avoid this by not caching search pages and by applying noindex via the X-Robots-Tag header: https://developers.google.com/search/docs/crawling-indexing/...
But yes, just noindex search pages, like they already said they did.
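A sketch of that header-based variant, again assuming a Flask-style app (illustrative only); the X-Robots-Tag header also covers responses where a meta tag isn't available:

    # Header-based variant: mark internal search pages noindex at the HTTP layer.
    # The app and path prefix are illustrative assumptions.
    from flask import Flask, request

    app = Flask(__name__)


    @app.after_request
    def noindex_search_pages(response):
        # Applies to every response under /search, regardless of content type.
        if request.path.startswith("/search"):
            response.headers["X-Robots-Tag"] = "noindex"
        return response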
What could Google do to mitigate?
They say the data before and after is not comparable anymore, as they no longer count certain events below a threshold. You might need your own analytics to understand your traffic from now on.
This has been our experience with our content-driven marketing pages in 2025. SERP rankings constant, but clicks down 90%.
This is not good for our marketing efforts, and terrible for ad-supported public websites, but I also don't understand how Google is not terribly impacted by the zero-click Internet. If content clicks are down 90%, aren't ad clicks down by a similar number?
I’m imagining something like “blog.example/?s=crypto” which only I should see, not Google.
The solution is to tell the crawler that my search page shouldn't be indexed. This can be done with the robots meta tags.
Anyway, I'd really like to at least see google make the overview text itself clickable, and link to the source of the given sentence or paragraph. I think that a lot of people would instinctively click-through just to quickly spot check if it was made as easy as possible.
IIRC, it used to take you to the cited links; now it launches a sidebar of supposed sources, which are un-numbered and disconnected from any specific claims made by the bot.
1) encourage SEO sites to proliferate and dominate search results, pushing useful content down on the page.
2) sell ad placement directly on the search results page, further pushing useful content down on the page
3) introduce AI summaries, making it unnecessary to even look for the useful content pushed down on the page.
Now, people only see the summaries and the paid-for ad placements. No need to ever leave the search page.
Like, isn't this a well-known thing that happens constantly, whether you're a user or run any websites? Relying on search engine ranking algorithms is sadly Russian roulette for businesses, at least unless you outbid the competition to show your own page as an advertisement when someone searches your business's name.
We have a consultant for the topic, but I am not sure how much of that conversation I could share publicly, so I will refrain from doing so.
But I think I can say that it is not only about data structure or quality. The changes in methodology applied by Google in September might be playing a stronger role than people initially thought.
Together with deleting my Facebook and Twitter accounts, this removed a lot of pressure to conform to their unclear policies. Especially around 2019-21, it was completely unclear how to escape their digital guillotine which seemed to hit various people randomly.
The deliverability problem still stands, though. You cannot be completely independent nowadays. Fortunately my domain is 9 years old.
Request URL: https://journal.james-zhan.com/google-de-indexed-my-entire-b...
Request Method: GET
Status Code: 304 Not Modified
So maybe it's the status code? Shouldn't that page return a 200?
When I go to blog.james..., I first get a 301 Moved Permanently, and then journal.james... loads, but it returns a 304 Not Modified, even if I then reload the page.
Only when I fully submit the URL again in the URL bar does it respond with a 200.
Maybe crawling also returns a 304, and Google won't index that?
My prompts: "why would a 301 redirect lead to a 304 Not Modified instead of a 200 OK?", "would this 'break' Google's crawler?"
> When Google's crawler follows the 301 to the new URL and receives a 304, it gets no content body. The 304 response basically says "use what you cached"—but the crawler's cache might be empty or stale for that specific URL location, leaving Google with nothing to index.
Your LLM prompt and response are worthless.
Request URL: https://news.ycombinator.com/item?id=46196076
Request Method: GET
Status Code: 200 OK (from disk cache)
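For anyone curious, a 304 is only ever sent in reply to a conditional request (If-None-Match / If-Modified-Since); a first-time crawler with an empty cache sends a plain GET and should see a 200. A quick check, sketched with Python requests (the URL is just the blog from the thread):

    # Demonstrates that the 304 comes from the browser's conditional revalidation,
    # not from the server refusing to serve content to a fresh client.
    import requests

    url = "https://journal.james-zhan.com/"  # any page from the blog in question

    first = requests.get(url)          # unconditional GET, like a crawler with no cache
    print(first.status_code)           # expect 200 with a full body

    etag = first.headers.get("ETag")
    if etag:
        # Only a client that already has a cached copy asks this question.
        second = requests.get(url, headers={"If-None-Match": etag})
        print(second.status_code)      # 304 with an empty body is normal here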
I just thought that it would be worthwhile investigating in that direction.
And no <meta name="robots"> in the HTML either.
What URL are you seeing that on? And what tool are you using to detect that?
Google, here's your chance: please reply to this here post of mine, with a plausible explanation that convincingly exonerates you in the face of overwhelming evidence against you.
I'll check back in a few hours to see whether google has done so. If not, we can continue blaming them.
It's either the author's fault, Google's fault, or someone else's fault.
From the post, while it is hard to completely rule out the possibility that the author did something wrong, they likely did everything they could to rule that out. I assume they consulted all the documentation and other resources.
Someone else's fault? It is unlikely, since there isn't (obviously) another party involved here.
Which leaves us with Google's fault.
Also, I mean, if a user can't figure out what's wrong, the blame should just go to the vendor by default for poor user experience and documentation.
I have a page that ranks well worldwide, but is completely missing in Canada. Not just poorly ranked, gone. It shows up #1 for keyword in the US, but won't show up with precise unique quotes in Canada.
https://search.yahoo.com/search?p=blog.james-zhan.com&fr=yfp...
I used to work for an SEO firm, I have a decent idea of best practices for this sort of thing.
BAM, I went from thousands of indexed pages to about 100
See screenshot:
https://x.com/donatj/status/1937600287826460852
It's been six months and it never recovered. If I were a business I would be absolutely furious. As it stands, this is a tool I largely built for myself, so I'm not too bothered, but I don't know what's going on with Google being so fickle.
Even when I knew the exact name of the article I was looking for, Google was unable to find it. And yes, it still existed.
Our primary domain cannot be found via search - Bing knows about the brand, the LinkedIn page, and the YouTube channel, but refuses to show search results for the primary domain.
Bing's search console gives no clue, and forcing reindexing does not help. Google Search works fine.
But basically what happened: in August 2025 we finished the first working version of our shop. I wanted to accelerate indexing after some weeks because only ~50 of our pages were indexed, so I submitted the sitemap, and everything got de-indexed within days. For the longest time I thought it was content quality, because we sell niche trading cards and the descriptions are all one-liners I made in Excel ("This is $cardname from $set for your collection or deck!"). And because they're single trading cards, we have 7000+ products that are very similar. (We did do all the product images ourselves - I thought Google would like this, but alas.)
But later we added binders and whole sets and took a lot of care with their product data. The front page also got a massive overhaul - no shot. Not one page in the index. We still get traffic from marketplaces and our older non-shop site. The shop itself lives on a subdomain (shop.myoldsite.com). The normal site also has a sitemap, but that one was submitted in 2022. I later rewrote how my sitemaps were generated and deleted the old ones in Search Console, hoping this would help. It did not. (The old sitemap was generated by the shop system and was very large. Some forums mentioned that it's better to create a chunked sitemap, so I made a script that creates lists with 1000 products at a time, as well as an index for them - roughly like the sketch below.)
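A rough sketch of that chunked-sitemap approach (the domain, paths, and product list are placeholders; the XML follows the standard sitemaps.org format of urlset files referenced by a sitemapindex):

    # Sketch: write one sitemap file per 1000 product URLs plus a sitemap index.
    # Domain, output paths, and the product URL list are placeholder assumptions.
    from pathlib import Path

    BASE = "https://shop.myoldsite.com"
    OUT = Path("sitemaps")
    OUT.mkdir(exist_ok=True)
    CHUNK = 1000

    product_urls = [f"{BASE}/product/{i}" for i in range(7000)]  # stand-in for real products


    def urlset(urls):
        entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                f"{entries}\n</urlset>\n")


    chunks = []
    for n, start in enumerate(range(0, len(product_urls), CHUNK), 1):
        name = f"sitemap-products-{n}.xml"
        (OUT / name).write_text(urlset(product_urls[start:start + CHUNK]), encoding="utf-8")
        chunks.append(name)

    # The index is the single file you submit in Search Console.
    index = "\n".join(f"  <sitemap><loc>{BASE}/sitemaps/{c}</loc></sitemap>" for c in chunks)
    (OUT / "sitemap-index.xml").write_text(
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{index}\n</sitemapindex>\n",
        encoding="utf-8",
    )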
Later observations are:
- Both sitemaps I deleted in GSC are still getting crawled and are STILL THERE. You can't see them in the overview, but if you have the old links, they still appear as normal.
- We eventually started submitting product data to Google Merchant Center as well. It works 100% fine and our products are getting found and bought. The clicks even still show up in Search Console!!!! So I have a shop with 0 indexed pages in GSC that gets clicks every day. WTHeck?
So like... I don't even know anymore. Maybe we also have to restart like the person in the blog did, move the shop to a new domain, and NEVER give Google a sitemap. If I really go that route, I will probably delete the cronjob that creates the sitemap, in case Google finds it by itself. But also, like, what the heck? I have worked in a web agency for 5 years and created a new webpage about every 2-8 weeks, so I launched roughly 50-70 webpages and shops, and I NEVER saw this happen. Is it an AI hallucinating? Is it anti-spam gone too far? Is it a straight-up bug that they don't see? Who knows. I don't.
(Good article, though, and I hope some other people chime in and Googlers browsing HN see this stuff.)
I'm annoyed that mine even shows up on Google.