tomcp.org
github.com
Does this skirt robots.txt by chance? Not being able to fetch any web page is really bugging me, and I'm hoping to use a better web_fetch that isn't censored. I'm just going to copy/paste the content anyway.
I’m not sure how you could do much more in a CF worker, but this might be too simple to be useful on many sites.
Example: I had to pull in a docs site that was built for a project I'm working on. We wanted an LLM to be able to use the docs in its responses. However, the site was built with VitePress and I didn't have access to the source markdown files, so I wrote an MCP fetcher that uses a headless Chrome instance to load the page, then pulls the innerHTML directly from the rendered DOM. It's probably overkill, but it's an example of when this tool might not work.
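Roughly the shape of it, as a sketch with Puppeteer (the URL and the `main` selector are placeholders, not my actual setup):

```ts
import puppeteer from "puppeteer";

// Fetch the fully rendered HTML of a client-side-rendered docs page.
async function fetchRenderedHtml(url: string): Promise<string> {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    // Wait for the network to go quiet so the VitePress client has rendered.
    await page.goto(url, { waitUntil: "networkidle0" });
    // Grab the innerHTML of the rendered content, not the raw HTML shell.
    return await page.$eval("main", (el) => el.innerHTML);
  } finally {
    await browser.close();
  }
}

// Hypothetical docs URL, for illustration only.
fetchRenderedHtml("https://docs.example.com/guide/").then((html) =>
  console.log(`${html.length} chars of rendered HTML`)
);
```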
Also, by treating this as an MCP Resource rather than a Tool, the docs are pinned permanently instead of relying on the model to "decide" to fetch them.
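The Resource side looks roughly like this, sketched against the @modelcontextprotocol/sdk TypeScript API as I understand it (the names and the markdown helper are illustrative):

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Assumed helper: in practice this would be the headless-Chrome fetch
// plus Markdown conversion described above.
async function loadDocsAsMarkdown(): Promise<string> {
  return "# Project docs\n\n(placeholder)";
}

const server = new McpServer({ name: "docs-fetcher", version: "1.0.0" });

// Registering the docs as a Resource pins them under a stable URI the
// client can attach permanently, instead of a Tool the model may or may
// not decide to call.
server.resource("project-docs", "docs://project/guide", async (uri) => ({
  contents: [
    {
      uri: uri.href,
      mimeType: "text/markdown",
      text: await loadDocsAsMarkdown(),
    },
  ],
}));

await server.connect(new StdioServerTransport());
```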
Cloudflare Workers handle this perfectly for free (100k reqs/day) without the overhead of managing a dockerized browser instance.
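Something like this sketch, just to show the shape (not toMCP's actual code; HTMLRewriter is the real built-in Workers API, everything else is illustrative):

```ts
// Types come from @cloudflare/workers-types; HTMLRewriter is built into
// the Workers runtime.
export default {
  async fetch(request: Request): Promise<Response> {
    const target = new URL(request.url).searchParams.get("url");
    if (!target) return new Response("missing ?url=", { status: 400 });

    const upstream = await fetch(target);
    // Strip the obvious noise elements before the page reaches the model.
    const remove = { element(el: Element) { el.remove(); } };
    return new HTMLRewriter()
      .on("script", remove)
      .on("style", remove)
      .on("nav", remove)
      .on("footer", remove)
      .transform(upstream);
  },
};
```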
But I do think the lack of a JavaScript loader will be a problem for many sites. You're right that the Markdown conversion helps a lot; in my case I still run the innerHTML through a Markdown converter to get rid of the extra cruft. Even better if you can choose which #id element to load: Wikipedia surrounds the main article with a lot of extra info that adds fluff even after MD conversion. But without JS loading, you still won't be able to process a lot of sites in the wild.
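The #id idea, as a sketch (assuming linkedom and turndown; the selector is whatever the target page actually uses):

```ts
import { parseHTML } from "linkedom";
import TurndownService from "turndown";

// Pull one element by CSS selector out of fetched HTML and convert just
// that element to markdown, dropping the surrounding chrome.
function elementToMarkdown(html: string, selector: string): string {
  const { document } = parseHTML(html);
  const el = document.querySelector(selector);
  if (!el) throw new Error(`no element matches ${selector}`);
  return new TurndownService().turndown(el.innerHTML);
}

// e.g. for a Wikipedia article (selector is illustrative; inspect the page):
// elementToMarkdown(html, "#mw-content-text");
```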
Now, I would personally argue that's an issue with those sites; I'm not a big fan of dynamically JS-loaded pages. Sadly, I think that ship has sailed…
Different use cases, I think.
From what I can see, if the content I want to enrich is static, the web fetch tool seems sufficient. Is this tool capable of extracting information from dynamic websites or sites behind login walls, or is it essentially the same as a web fetch tool that only works with static pages?
1. Standard web_fetch tools usually dump raw HTML into the context (including navbar, script, and footer noise), which wastes a huge number of tokens and distracts the model. toMCP runs the page through a readability parser and converts it to clean markdown before sending it to the AI (see the sketch after this list).
2. Adding a website as an MCP Resource pins it as a permanent, read-only context, making it ideal for keeping documentation constantly available. This differs from the web_fetch tool, which is an on-demand action the AI only triggers when it decides to, meaning the data isn't permanently attached to your project.
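Roughly, step 1 looks like this sketch (using @mozilla/readability, jsdom, and turndown as stand-ins; not the exact production pipeline):

```ts
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";
import TurndownService from "turndown";

// Fetch a page, extract the main article with a readability parser,
// and convert it to clean markdown for the model.
async function fetchAsMarkdown(url: string): Promise<string> {
  const html = await (await fetch(url)).text();
  const dom = new JSDOM(html, { url }); // url lets relative links resolve
  const article = new Readability(dom.window.document).parse();
  if (!article) throw new Error("readability could not extract an article");
  return new TurndownService().turndown(article.content);
}
```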