Code Mode: the better way to use MCP
Posted 3 months ago · Active 3 months ago
blog.cloudflare.com · Tech story
Key topics
- LLMs
- MCP
- Code Generation
Cloudflare introduces 'Code Mode' for MCP, allowing LLMs to generate code for API interactions, sparking discussion on the evolution of LLMs and their applications.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion · First comment: 2h after posting · Peak period: 3 comments in 12-18h · Avg per period: 1.8
Comment distribution: 11 data points, based on 11 loaded comments
Key moments
- 01 Story posted: Sep 27, 2025 at 4:49 PM EDT (3 months ago)
- 02 First comment: Sep 27, 2025 at 6:27 PM EDT (2h after posting)
- 03 Peak activity: 3 comments in 12-18h, the hottest window of the conversation
- 04 Latest activity: Sep 29, 2025 at 8:02 PM EDT (3 months ago)
ID: 45399204 · Type: story · Last synced: 11/20/2025, 1:35:57 PM
It doesn't seem to me that isolates are sufficiently unique technology to warrant this only running on a server. Surely we can spin up a V8 locally or something and achieve the same thing?
Hopefully the popular MCP clients will start implementing this approach, if it works as well as claimed.
https://blog.cloudflare.com/code-mode/#or-try-it-locally :-)
Though AFAIK Wrangler is really only intended for development and not local deployment.
What is the regular API? How do you express all the integrations needed in this API? Who provides the integrations? Answering these questions leads you back to something like MCP, which is an API contract that can be as generic or as specific as needed. Wasting context window to understand and re-implement each integration is why MCPs exist.
All the security issues are orthogonal, and occur regardless of whether this API is invoked via code or natural language.
"Wasting context window to understand and re-implement each integration is why MCPs exist" does seem to be exactly the point. Pointing the LLM at a swagger/openAPI spec and expecting it to write a good integration might work, but gets old after the first time. Swagger docs focus on what mostly, LLMs work better knowing why as well.
And, why not just use a locally installed CLI rather than an MCP? You need to have one for a start, and use cases with chained calls are more valuable than atomic tool calls.
There is more behind the motivation for MCP and "tool calling" ability generally with LLMs. This motivation seems less and less relevant, but back when reasoning and chain-of-thought were newly being implemented, and people were more wary of yolo modes, the ability for an LLM harness to decide to call a tool and gather JIT context was a game changer: a dynamic RAG alternative. MCP might not be the optimal solution long term. For example, the way Claude has been trained to use the gh CLI, and to work with git generally, is much more helpful than either having to set up a git MCP or having it feel its way around git --help or man pages to, as you said, "re-implement each integration" from scratch every time.
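The chained-calls point above can be sketched concretely: one generated script composes several tool results locally, instead of one model round trip per atomic call. Everything here is illustrative; `fetchUser` and `fetchRepos` are hypothetical tools, not part of any real MCP server:

```javascript
// One generated script chains two tool calls and filters the results
// locally. With atomic tool calling, each of these steps would be a
// separate model round trip, with intermediate results re-entering
// the context window.
async function chained(tools, login) {
  const user = await tools.fetchUser(login);
  const repos = await tools.fetchRepos(user.id);
  return repos.filter((r) => r.stars > 10).map((r) => r.name);
}

// Stub tools standing in for real MCP-backed bindings.
const tools = {
  fetchUser: async (login) => ({ id: 1, login }),
  fetchRepos: async (id) => [
    { name: 'big', stars: 50 },
    { name: 'small', stars: 2 },
  ],
};

chained(tools, 'alice').then((names) => console.log(names)); // [ 'big' ]
```

Only the final filtered list needs to come back to the model, which is the context-window saving the thread is debating.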