Context Engineering Is Sleeping on the Humble Hyperlink
Posted 2 months ago · Active 2 months ago
mbleigh.dev · Tech story · High profile
calm · positive
Debate: 60/100
Key topics
- Large Language Models
- Context Engineering
- Hyperlinks
- RESTful APIs
The article discusses using hyperlinks for context engineering in LLMs, and the discussion explores its potential applications, limitations, and relation to existing technologies like HATEOAS and RAG.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2d after posting
Peak period: 37 comments (36-42h)
Avg / period: 11.3
Comment distribution: 68 data points
Based on 68 loaded comments
Key moments
1. Story posted: Oct 23, 2025 at 10:24 AM EDT (2 months ago)
2. First comment: Oct 24, 2025 at 10:46 PM EDT (2d after posting)
3. Peak activity: 37 comments in the 36-42h window, the hottest stretch of the conversation
4. Latest activity: Oct 26, 2025 at 4:34 PM EDT (2 months ago)
ID: 45682164 · Type: story · Last synced: 11/20/2025, 3:29:00 PM
> This never worked in practice. Building hypertext APIs was too cumbersome and to actually consume APIs a human needed to understand the API structure in a useful manner anyway.
Every time I read one of these comments I feel like DiCaprio's character in Inception going "but we did grow old together." HATEOAS worked phenomenally. Every time you go to a webpage with buttons and links in HTML that describe what the webpage is capable of (its API, if you will), you are doing HATEOAS [0]. That this interface can be consumed by both a user (via the browser) and a web scraper (via some other program) is the foundation of modern web infrastructure.
It's a little ironic that the explosion of information made possible by HATEOAS happened while the term itself largely got misunderstood, but such is life. Much like reclaiming the proper usage of its close cousin, "REST," using HATEOAS correctly is helpful for properly identifying what made the world's largest hypermedia system successful—useful if you endeavor to design a new one [1].
[0] https://htmx.org/essays/hateoas/
[1] https://unplannedobsolescence.com/blog/why-insist-on-a-word/
I guess someone interested would have to read the original work by Roy (who seems to have come up with the term) to find out which opinion is true.
The only thing that would have made the web not conform to HATEOAS were if browsers had to have code that's specific to, say, google.com, or maybe to Apache servers. The only example of anything like this on the modern web is the special log in integrations that Microsoft and Google added for their own web properties - that is indeed a break of the HATEOAS paradigm.
Say I as a user want to read the latest news stories of the day in the NYT. I tell my browser to access the NYT website root address, and then it contacts the server and discovers all necessary information for achieving this task on its own. It may choose to present this information as a graphical web page, or as a stream of sound, all without knowing anything about the NYT web site a priori.
https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...
The server MUST be stateless, the client MAY be stateful. You can't get ETags and stuff like that without a stateful client.
https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...
> any concept that might be the target of an author's hypertext reference must fit within the definition of a resource
It's an optional constraint. It's valid for CSS, JavaScript and any kind of media type that is negotiable.
> resource: the intended conceptual target of a hypertext reference
> representation: HTML document, JPEG image
A resource is abstract. You always negotiate it, and receive a representation with a specific type. It's like an interface.
Therefore, `/style.css` is a resource. You can negotiate with clients if that resource is acceptable (using the Accept header).
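To make that negotiation concrete, here is a minimal sketch assuming a Python client and a hypothetical server at example.com; none of this comes from the comment itself:

    import requests

    # Ask for the abstract resource /style.css and state which
    # representations the client can accept. The server picks a concrete
    # representation, or answers 406 Not Acceptable if it cannot.
    resp = requests.get(
        "https://example.com/style.css",
        headers={"Accept": "text/css, text/plain;q=0.5"},
    )

    print(resp.status_code)                   # e.g. 200 or 406
    print(resp.headers.get("Content-Type"))   # the representation actually chosen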
"Presentation layer" is not even a concept for REST. You're trying to map framework-related ideas to REST, bumping into an impedance mismatch, and not realizing that the issue is in that mismatch, not REST itself.
REST is not responsible for people trying to make anemic APIs. They do it out of some sense of purity, but the demands do not come from HATEOAS. They come from other choices the designer made.
I'm realizing/remembering now that our internal working group's concept of HATEOAS was, apparently, much stricter to the point of being arguably divergent from Fielding's. For us "HATEOAS" became a flag in the ground for defining RESTful(ish) API schemas from which a user interface could be unambiguously derived and presented, in full with 100% functionality, with no HTML/CSS/JS, or at least only completely generic components and none specific to the particular schema.
"Schema" is also foreign to REST. That is also a requirement coming from somewhere else.
You're probably coming from a post-GraphQL generation. They introduced this idea of sharing a schema, and influenced a lot of people. That is not, however, a requirement for REST.
State is the important thing. It's in the name, right? Hypermedia as the engine of application state. Not application schema.
It's much simpler than it seems. I can give a common example of a mistake:
GET /account/12345/balance <- Stateless, good (an ID represents the resource, unambiguous URI for that thing)
GET /my/balance <- Stateful, bad (depends on application knowing who's logged in)
In the second example, the concept of a resource is being corrupted. It means something for some users, and something else for others, depending on state.
In the first example, the hypermedia drives the state. It's in the link (but it can be on form data, or negotiation, for example, as long as it is stateless).
There is a little bit more to it, and it goes beyond URI design, but that's the gist of it.
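As a rough illustration of "the hypermedia drives the state" from the client side, here is a minimal Python sketch; the entry point, the bearer token, and the link name are assumptions for illustration, not anything described in the thread:

    import requests

    # Hypothetical entry point: authenticate on every request and let the
    # response tell the client where "its" account lives. State travels in
    # the request (the token) and in the links the server hands back, not
    # in server-side session state.
    session = requests.Session()
    session.headers["Authorization"] = "Bearer <token>"   # placeholder token

    entry = session.get("https://api.example.com/").json()
    # e.g. {"account": "https://api.example.com/account/12345"}

    balance = session.get(entry["account"] + "/balance").json()
    print(balance)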
It's really simple and not as academic as it seems.
Fielding's work is more a historical formalisation where he derives this notion from first principles. He kind of proves that this is a great style for networking architectures. If you read it, you understand how it can be performant, scalable, fast, etc, by principle. Most of the dissertation is just that.
Browsers can alter a webpage with your chosen CSS, interactively read webpages out loud to you, or, as is the case with all the new AI browsers, provide LLM powered "answers" about a page's contents. These are all recontextualizations made possible by the universal HATEOAS interface of HTML.
The concept of a HATEOAS API is also very simple: the API is defined by a communication protocol, 1 endpoint, and a series of well-defined media types. For a website, the protocol is HTTP, that 1 endpoint is /index.html, and the media types are text/html, application/javascript, image/jpeg, application/json and all of the others.
The purpose of this system is to allow the creation of clients and servers completely independently of each other, and to allow the protocols to evolve independently in subsets of clients and servers without losing interoperability. This is perfectly achieved on the web, to an almost incredible degree. There has never been, at least not in the last decades, a bug where, say, Firefox can't correctly display pages served by Microsoft IIS: every browser really works with every web server, and no browser or server dev even feels a great need to explicitly test against the others.
The point of HATEOAS is to inform the architecture of any system that requires numerous clients and servers to interoperate with little ability for direct cooperation; and where you also need the ability to evolve this interaction in the longer term with the same constraint of no direct cooperation. As the dissertation explains, HATEOAS was used to guide specific fixes to correct mistakes in the HTTP/1.0 standard that limited the ability to achieve this goal for the WWW.
The front-end UI was entirely driven by the data/action payload, which exposed both the UI and its functionality.
I'm still not sure if it's because of the implementation or because there is something fundamental.
I came away from that thinking that the DB structure, the DAG, and the data flow are what's really important for thinking about any problem space, and that any UI considerations should not be first class.
But I'm not a theorist; I just found a specific, very formal working implementation in prod to be not great, and it's a little hard even now to understand why.
Maybe it just works for purely text interfaces, and adding any design or dynamic interaction causes issues.
I think maybe it's that the data itself should be first class: that well-typed data should exist, and a system that allows any UI and behavior to be attached to that data is more important than an API saying what explicit mutations are allowed.
If I were to explore this, I think folders and files, spreadsheets, DBs, data structures, those are the real things, and the tools we use to mutate them are second order and should be treated as such. Any action that can be done on data should be defined elsewhere and not treated as being of the same importance, but idk, that's just me thinking out loud.
You could use whatever lightweight rendering you wanted; mostly it was very minimal React, but that hardly mattered. One thing that was a positive was how little the UI rendering choice mattered.
I don't really want to say more as it's unique enough to be equivalent to naming the company itself.
The web is also a real product, one that's (when not bloated with adtech) capable of being fast and easy to develop on. That other people have tried to do HATEOAS and failed to make it nice is part of why it's so useful to acknowledge as valid the one implementation that has wildly succeeded.
You aren't saying hypermedia/hyperlinks served by a backend equal HATEOAS, are you?
HATEOAS is from 2000, isn't it? Long after hyperlinks and the web already existed.
That’s exactly what it is.
> HATEOAS is from 2000, isn't it? Long after hyperlinks and the web already existed.
> Over the past six years, the REST architectural style has been used to guide the design and development of the architecture for the modern Web, as presented in Chapter 6. This work was done in conjunction with my authoring of the Internet standards for the Hypertext Transfer Protocol (HTTP) and Uniform Resource Identifiers (URI), the two specifications that define the generic interface used by all component interactions on the Web.
This is straight from the intro of Fielding's doctoral dissertation.
The missing piece was having machines that could handle enough ambiguity to "understand" the structure of the API without it needing to be generic to the point of uselessness.
The creator of REST, Roy Fielding, literally said this loud and clear:
> REST APIs must be hypertext-driven
> What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.
— https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
I think of all the people in the world, the creator of REST gets to say what is and isn’t REST.
REST API became colloquially defined as "HTTP API with hierarchical URL structure, usually JSON". I wrote about the very phenomenon 15 years ago! https://www.mobomo.com/2010/04/rest-isnt-what-you-think-it-i...
HATEOAS and by-the-book REST don’t provide much practical value for writing applications. As the article says, a human has to read the spec, make sense of each endpoint’s semantics, and write code specific to those semantics. At that point you might as well hardcode the relevant URLs (with string templating where appropriate) rather than jumping through hoops and pretending every URL has to be “discovered” on the off chance that some lunatic will change the entire URL structure of your backend but somehow leave all the semantics unchanged.
The exception, as the article says, is if we don’t have to understand the spec and write custom code for each endpoint. Now we truly can have self-describing endpoints, and HATEOAS moves from a purist fantasy to something that actually makes sense.
If we jump down to the nuts and bolts, say for a JSON API, it's about including extra attributes/fields in your JSON response that contain links and information about how to continue. These attributes have to be blended in with your other real attributes.
For example if you just created a resource with a POST endpoint, you can include a link to GET the freshly created resource ("_fetch"), a link to delete it ("_delete"), a link to list all resources of the same collection ("_list"), etc...
Then the client application is supposed to automatically discover the API's functionality. In the case of a UI, it's supposed to discover that functionality and build a presentation layer on the fly, which the user can see and use. From our example above, the UI codebase would never have a "delete resource" button; it would have a generic button which would be created and placed on the UI based on the _delete field coming back from the API.
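A minimal sketch of what such a response and a generic client could look like, reusing the _fetch/_delete/_list names from the comment above; the URLs and the Python client logic are invented for illustration:

    import requests

    # Hypothetical body returned by POST /articles: the representation
    # carries both the data and the links describing what can happen next.
    created = {
        "id": 42,
        "title": "Context Engineering Is Sleeping on the Humble Hyperlink",
        "_fetch": "/articles/42",
        "_delete": "/articles/42",
        "_list": "/articles",
    }

    # A generic client never hardcodes "delete article"; it only acts on
    # whatever links the server advertised in the last representation.
    if "_delete" in created:
        requests.delete("https://api.example.com" + created["_delete"])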
This used to work great two years ago when ChatGPT first got the web browsing feature. Nowadays, no eyeballs on ads: no content.
"The irony is they trained their model on so much porn even barely NSFW prompts get flagged because Grok the Goon Commander thinks a simple fully-clothed lapdance requires a visible massive dong being thrusted up her piehole."
https://old.reddit.com/r/grok/comments/1ofd6xm/the_irony_is_...
I had the post pretty much done, went on vacation for a week, and Claude Skills came out in the interim.
That being said, Skills are indeed an implementation of the patterns possible with linking, but they are narrower in scope than what's possible even with MCP Resources if they were properly made available to agents (e.g. dynamic construction of context based on environment and/or fetching from remote sources).
    [tools]
    web_search = true
It seems to think they are a vendor lock-in play by Anthropic, running as an opaque black box.
To rebut their four complaints:
1. "Migrating away from Anthropic in the future wouldn't just mean swapping an API client; it would mean manually reconstructing your entire Skills system elsewhere." - that's just not true. Any LLM tool that can access a filesystem can use your skills, you just need to tell it to do so! The author advocates for creating your own hierarchy of READMEs, but that's identical to how skills work already.
2. "There's no record of the selection process, making the system inherently un-trustworthy." - you can see exactly when a skill was selected by looking in the tool logs for a Read(path/to/SKILL.md) call.
3. "This documentation is no longer readily accessible to humans or other AI agents outside the Claude API." - it's markdown files on disk! Hard to imagine how it could be more accessible to humans and other AI agents.
4. "You cannot optimize the prompt that selects Skills. You are entirely at the mercy of Anthropic's hidden, proprietary logic." - skills are selected by promoting driven by the system prompt. Your CLAUDE.md file is injected into that same system prompt. You can influence that as much as you like.
There's no deep, proprietary magic to skills. I frequently use claude-trace to dig around in Claude Code internals: https://simonwillison.net/2025/Jun/2/claude-trace/ - here's an example from last night: https://simonwillison.net/2025/Oct/24/claude-code-docs-map/
The closing section of that article reveals where that author got confused. They said: "Theoretically, we're losing potentially "free" server-side skill selection that Anthropic provides."
Skills are selected by Claude Code running on the client. They seem to think it's a model feature that's proprietary to Anthropic - it's not, it's just another simple prompting hack.
That's why I like skills! They're a pattern that works with any AI agent already. Anthropic merely gave a name to the exact same pattern that this author calls "Agent-Agnostic Documentation" and advocates for instead.
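A minimal sketch of that agent-agnostic pattern in Python, purely illustrative; the directory layout, file names, and prompt wording are assumptions, not Anthropic's actual implementation:

    from pathlib import Path

    # Any agent that can read the filesystem can do "skill selection":
    # index the SKILL.md files, put a one-line summary of each into the
    # prompt, and let the model decide which one to read in full.
    skills_dir = Path("./skills")          # hypothetical location
    index = []
    for skill in sorted(skills_dir.glob("*/SKILL.md")):
        lines = skill.read_text().splitlines()
        summary = lines[0] if lines else "(empty)"
        index.append(f"- {skill}: {summary}")

    system_prompt = (
        "You have these skills available. Read the relevant SKILL.md "
        "in full before using one:\n" + "\n".join(index)
    )
    print(system_prompt)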
I'm talking about skills as they are used in Claude Code running on my laptop. Are you talking about skills as they are used by the https://claude.ai consumer app?
Not an exact science yet.
Though in this case, I did read the original article.
https://docs.claude.com/en/docs/claude-code/claude_code_docs...
This is driven by instructions in the Claude Code system prompt:
> When the user directly asks about Claude Code (eg. "can Claude Code do...", "does Claude Code have..."), or asks in second person (eg. "are you able...", "can you do..."), or asks how to use a specific Claude Code feature (eg. implement a hook, or write a slash command), use the WebFetch tool to gather information to answer the question from Claude Code docs. The list of available docs is available at https://docs.claude.com/en/docs/claude-code/claude_code_docs....
Screenshot and notes here: https://simonwillison.net/2025/Oct/24/claude-code-docs-map/
...until now. It seems they finally found their problem.
We don't need MCPs for this; just make a tool that uses Trafilatura to read web pages into Markdown, create old-school server-side web UIs, and let the agents curl them.
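A rough sketch of such a tool in Python, assuming a trafilatura release that supports Markdown output; the function name and the example URL are placeholders:

    import trafilatura

    def read_page_as_markdown(url: str) -> str | None:
        """Fetch a page and return its main content as Markdown,
        keeping hyperlinks so the agent can follow them."""
        downloaded = trafilatura.fetch_url(url)
        if downloaded is None:
            return None
        return trafilatura.extract(
            downloaded,
            output_format="markdown",
            include_links=True,
        )

    print(read_page_as_markdown("https://mbleigh.dev"))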