What If You Don't Need MCP at All?
Posted about 2 months ago · Active about 2 months ago
mariozechner.at · Tech · story · High profile
Key topics
MCP
GraphQL
AI Agents
API Design
The article questions the necessity of MCP (Model Context Protocol) for AI agents, sparking a discussion on alternative approaches and the future of tool integration.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 7h
Peak period: 4 comments in 6-8h
Avg / period: 2
Comment distribution: 16 data points, based on 16 loaded comments
Key moments
- 01 Story posted: Nov 16, 2025 at 1:58 PM EST (about 2 months ago)
- 02 First comment: Nov 16, 2025 at 8:43 PM EST (7h after posting)
- 03 Peak activity: 4 comments in 6-8h (hottest window of the conversation)
- 04 Latest activity: Nov 17, 2025 at 8:36 PM EST (about 2 months ago)
ID: 45947444 · Type: story · Last synced: 11/20/2025, 7:50:26 PM
We've been developing this in case folks are interested: https://github.com/stanford-mast/a1
I believe the point is to do something akin to "promise pipelining":
https://capnproto.org/rpc.html
http://erights.org/elib/distrib/pipeline.html
When an MCP tool is used, all of its output is piped straight into the LLM's context. If another MCP tool is needed to aggregate/filter/transform/etc. the previous output, the LLM has to try ("try" being the operative word -- LLMs are by nature nondeterministic) to reproduce the needed bits as inputs for the next tool call. This dramatically increases latency and wastes tokens.
This "a1" project, if I'm reading it correctly, allows for pipelining multiple consecutive tool uses without the LLM/agent being in the loop, until the very end when the final results are handed off to the LLM.
An alternative approach inspired by the same problems identified in MCP: https://blog.cloudflare.com/code-mode/
I add MCP tools to tighten the feedback loop. I want my Agent to be able to act autonomously but with a tight set of capabilities that don't often align with off-the-shelf tools. I don't want to YOLO but I also don't want to babysit it for non-value-added, risk-free prompts.
So, when I'm developing in Go, I create `cmd/mcp` and configure a `go run ./cmd/mcp` MCP server for the Agent.
It helps that I'm quite invested in MCP and built github.com/ggoodman/mcp-server-go, which is one of the few (only?) MCP SDKs that let you scale horizontally over HTTPS while still supporting advanced features like elicitation and sampling. But for local tools, I can use the familiar and ergonomic stdio driver and have my Agent pump out the tools for me.
It goes a tad beyond the spec minimum because I think it's valuable to be able to persist some small KV data with sessions and users.
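For anyone who hasn't hand-rolled one, here is a rough sketch of what such a project-local `cmd/mcp` could look like, written against raw JSON-RPC 2.0 over stdio rather than any particular SDK. The `run_tests` tool is invented for illustration, params are ignored because there is only one tool, and `initialize` handling is omitted.

```go
// Rough sketch of a project-local `cmd/mcp` stdio server, hand-rolling the
// JSON-RPC 2.0 framing (newline-delimited JSON) instead of using an SDK.
// The single `run_tests` tool is invented for illustration.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type request struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      json.RawMessage `json:"id"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params"`
}

// reply writes a JSON-RPC result for the given request id to stdout.
func reply(id json.RawMessage, result any) {
	out, _ := json.Marshal(map[string]any{"jsonrpc": "2.0", "id": id, "result": result})
	fmt.Println(string(out))
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		var req request
		if err := json.Unmarshal(scanner.Bytes(), &req); err != nil {
			continue
		}
		switch req.Method {
		case "tools/list":
			reply(req.ID, map[string]any{"tools": []map[string]any{{
				"name":        "run_tests",
				"description": "Run the project's test suite and return its output",
				"inputSchema": map[string]any{"type": "object", "properties": map[string]any{}},
			}}})
		case "tools/call":
			// Only one tool, so the requested name is not checked in this sketch.
			out, _ := exec.Command("go", "test", "./...").CombinedOutput()
			reply(req.ID, map[string]any{"content": []map[string]any{{
				"type": "text", "text": string(out),
			}}})
		}
	}
}
```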
MCP and A2A are JSON-RPC schemas that people follow to build abstractions around their tools. Agents can use MCP to discover tools, invoke them, and more. OpenAPI schemas are a good alternative to MCP servers today; compared to OpenAPI, MCP servers are pretty new.
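As a hedged illustration of the OpenAPI route, the sketch below treats a spec's operation summaries as the tool catalogue an agent reads, much like an MCP tools/list result. The file path is a placeholder and the field names follow OpenAPI 3.x.

```go
// Hedged sketch: using an OpenAPI document as the tool catalogue. Each
// operation's operationId and summary play the role of an MCP tool name
// and description. "openapi.json" is a placeholder path.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type operation struct {
	OperationID string `json:"operationId"`
	Summary     string `json:"summary"`
}

type spec struct {
	// path -> (HTTP method or other path-item key) -> raw JSON
	Paths map[string]map[string]json.RawMessage `json:"paths"`
}

func main() {
	raw, err := os.ReadFile("openapi.json") // placeholder: any OpenAPI 3.x document
	if err != nil {
		panic(err)
	}
	var s spec
	if err := json.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	// Emit a compact catalogue an agent could read instead of an MCP tools/list result.
	for path, item := range s.Paths {
		for method, rawOp := range item {
			var op operation
			// Skip path-item keys like "parameters" and operations without a summary.
			if err := json.Unmarshal(rawOp, &op); err != nil || op.Summary == "" {
				continue
			}
			fmt.Printf("%s %s (%s): %s\n", method, path, op.OperationID, op.Summary)
		}
	}
}
```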
My favorite protocol is TCP; I'm a proud user of `nc localhost 9999`.
But not everyone has the same taste in building software.
https://github.com/cagataycali/devduck
Though it is more about debugging CSS, I think we are going the same way: let agents use tools by scripting.
The agent should look at my README.md, not a custom human-like text that is meant to be read by machines only.
It also should look at `Makefile`, my bash aliases and so on, and just use that.
In fact, many agents are quite good at this (Code Fast 1, Sonnet).
Issue is, we have a LONG debt around those. READMEs often suck, and build files often suck. We just need to make them better.
I see agents as an opportunity for making friendlier repos. The agent is a free usability tester in some sense. If it can't figure things out by reading the human docs, then either the agent is not good enough or your docs aren't good enough.
> When I start a session where the agent needs to interact with a browser, I just tell it to read that file in full and that's all it needs to be effective. Let's walk through their implementations to see how little code this actually is.
Cool, now you want to package that so others can use it? What next?
Putting it behind an MCP server is an easy approach. Then I can just install that MCP server, and by choosing it I get all the capabilities mentioned here.
Or in this particular case, a Claude Skill could likely do as well.
But I mean, that's MCP. I don't really understand the people arguing that MCP is bad or whatever; it's a plug-and-play protocol, so I can package tools for others to use in their preferred agent client.
CLI access also has the issue that if you want to integrate it into an application, how do you bundle bash in a secure way so your agent can use it? And would you allow users custom tool calls, when that means they can run arbitrary bash commands?
Then MCP comes out and AI explodes, sucking all the air out of the room for non-AI tools.
Now it seems like AI can work with CLIs better than MCP, so I’m tempted to slap AI integration all over the project to better convey the idea.
It's crazy how quickly MCP has run its course, watching an entire ecosystem rediscover things from first principles.
MCP is just an API with docs.
The problem isn't MCP itself. It's that each MCP "server" has to expose every tool and its docs up front, which consumes context.
I think tools should use progressive reveal and only give a short summary, like a skill does. Then the agent can get the full API of a tool on request.
Right now loading GitHub MCP takes something like 50k tokens.
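Here is a hedged sketch of what that progressive reveal could look like, assuming an invented `describe_tool` meta-tool (it is not part of the MCP spec): tools/list carries only one-line summaries, and the full input schema is served on demand.

```go
// Hedged sketch of progressive reveal: tools/list advertises only one-line
// summaries, and a describe_tool meta-tool (an invented convention) returns
// the full input schema on demand.
package main

import (
	"encoding/json"
	"fmt"
)

// Cheap entries: this is all that enters the model's context up front.
var summaries = []map[string]string{
	{"name": "search_issues", "description": "Search issues in a repository"},
	{"name": "create_pr", "description": "Open a pull request"},
}

// Full schemas stay on the server until a tool is actually needed.
var fullSchemas = map[string]any{
	"search_issues": map[string]any{
		"type": "object",
		"properties": map[string]any{
			"repo":  map[string]any{"type": "string"},
			"query": map[string]any{"type": "string"},
			"state": map[string]any{"type": "string", "enum": []string{"open", "closed", "all"}},
		},
		"required": []string{"repo", "query"},
	},
	"create_pr": map[string]any{
		"type": "object",
		"properties": map[string]any{
			"repo":  map[string]any{"type": "string"},
			"title": map[string]any{"type": "string"},
			"base":  map[string]any{"type": "string"},
		},
		"required": []string{"repo", "title"},
	},
}

// describeTool is what the describe_tool meta-tool would return.
func describeTool(name string) (string, error) {
	schema, ok := fullSchemas[name]
	if !ok {
		return "", fmt.Errorf("unknown tool %q", name)
	}
	out, err := json.MarshalIndent(schema, "", "  ")
	return string(out), err
}

func main() {
	list, _ := json.Marshal(summaries)
	fmt.Println("tools/list (summaries only):", string(list))

	detail, _ := describeTool("search_issues")
	fmt.Println("describe_tool(\"search_issues\"):")
	fmt.Println(detail)
}
```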
165 more comments available on Hacker News