GraphQL: The Enterprise Honeymoon Is Over
Key topics
The GraphQL honeymoon is over and the real talk has begun: is it living up to its enterprise hype? Many developers are sharing their experiences, with some, adopters since as early as 2015, swearing by its ability to simplify complex queries and scale with their applications. Others are pointing out pain points, such as verbose and painful API designs, citing Shopify and GitHub as examples. The debate is sparking insightful discussions on schema design, with some arguing that the issues lie not with GraphQL itself, but with how it is implemented.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 8m after posting
- Peak period: 145 comments (Day 1)
- Average: 26.7 comments per period
- Based on 160 loaded comments
Key moments
- Story posted: Dec 14, 2025 at 12:13 PM EST (23 days ago)
- First comment: Dec 14, 2025 at 12:21 PM EST (8m after posting)
- Peak activity: 145 comments in Day 1 (hottest window of the conversation)
- Latest activity: Dec 27, 2025 at 8:47 AM EST (10 days ago)
https://gist.github.com/andrewarrow/c75c7a3fedda9abb8fd1af14...
400 lines of GraphQL vs. one REST DELETE endpoint
We are paying that same complexity tax you described, but without the benefit of needing to support thousands of unknown 3rd-party developers.
We have a mixed GraphQL/REST API at $DAY_JOB, and our delete mutations look almost identical to our REST DELETE endpoints. Equivalent delete queries in REST and GraphQL would look roughly like the sketch below.
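A hedged sketch of that comparison (the endpoint, mutation, and id are illustrative, not from the original comment):

```typescript
// REST: one verb, one path.
//   DELETE /episodes/42

// GraphQL: one mutation field; the transport is typically POST /graphql.
const DELETE_EPISODE = /* GraphQL */ `
  mutation DeleteEpisode($id: ID!) {
    deleteEpisode(id: $id) { id }
  }
`;

await fetch("/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: DELETE_EPISODE, variables: { id: "42" } }),
});
```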
No need to update manually. Further, you can prevent breaking changes to the spec using oasdiff
- Overly verbose endpoint & request syntax: $expand, parenthesis and quotes in paths, actions etc.
- Exposes too much filtering control by default, allowing the consumer to do "bad things" on unindexed fields without steering them towards the happy path.
- Bad/lacking open-source tooling for portals, mocks, examples, and validation versus OpenAPI & GraphQL.
It all smells like unpolished MS enterprise crap with only internal MS & SAP adoption TBH.
My issue with this article is that, as someone who is a GraphQL fan, what it critiques is far from what I see as GraphQL's primary benefit, so the rest of the article feels like a strawman to me.
TBH the biggest benefits of GraphQL, as I see them, are that it (a) forces a much tighter contract around endpoint and object definitions with its type system, and (b) makes schema evolution much easier than other API tech does.
For the first point, the entire ecosystem guarantees that when a server receives an input object, that object will conform to the declared type, and similarly, a return object received by a client is guaranteed to conform to the endpoint's response type. Coupled with custom scalar types (e.g. "phone number" types, "email address" types), this can eliminate a whole class of bugs and security issues. Yes, other API tech does something similar, but I find the guarantees are far less "guaranteed" and it's much easier to have errors slip through. For example, GraphQL always prunes return objects down to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
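A minimal sketch of that pruning with graphql-js (the schema and the deliberately "leaked" resolver fields are illustrative):

```typescript
import { graphql, buildSchema } from "graphql";

const schema = buildSchema(`
  type User {
    id: ID!
    email: String
  }
  type Query {
    me: User
  }
`);

const rootValue = {
  me: () => ({
    id: "1",
    email: "ada@example.com",
    passwordHash: "x", // not in the schema: never serialized
    stackTrace: "y",   // not in the schema either: also dropped
  }),
};

const result = await graphql({ schema, source: "{ me { id } }", rootValue });
console.log(result);
// { data: { me: { id: "1" } } }; email is pruned too, since it wasn't requested
```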
When it comes to schema evolution, I've found that adding new fields and deprecating old ones is a huge benefit, especially since new clients only ever have to be concerned with the new fields. Again, other API tech lets you do something like this, but it's much less standardized and requires a lot more work and cognitive load on both the server and client devs.
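For instance, the standard `@deprecated` directive makes that evolution explicit in the schema itself (field names here are illustrative):

```typescript
const typeDefs = /* GraphQL */ `
  type User {
    id: ID!
    name: String @deprecated(reason: "Use displayName instead")
    displayName: String
  }
`;
// Old clients keep querying "name"; new clients only learn about
// "displayName", since introspection flags the deprecated field and
// tooling can hide or warn on it.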
The other one I would mention is the ability to very easily reuse resolvers in composition, and even federate them. Something that can be very clunky to get right in REST APIs.
Composed resolvers are the headache for most people, not a net benefit. You can have proxied (federated) subsets or routes in REST; that ain't hard at all.
Right, so if you take away the resolver composition (this is graph composition and not route federation), you can do the same things with a similar amount of effort in REST. This is no longer a GraphQL vs REST conversation, it's an acknowledgement that if you don't want any of the benefits you won't get any of the benefits.
It is that very compositional graph resolving that makes many see it as overly complex: not a benefit, but a detriment. You seem to imply that the benefit is guaranteed and that graph resolving cannot be done within a REST handler. It can be, and it's much simpler and easier to reason about there: I'm still going to go get the same data, but with less complexity and reasoning overhead than GraphQL's resolver-composition concept.
Is resolver composition really that different from function composition?
One of those conclusions is that GraphQL is more complex than REST without commensurate ROI
Not sure about the schema evolution part. Protobufs seem to work great for that.
It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).
In many REST frameworks, while you define the return object type that is sent back over the wire, by default, if the actual object you return has additional fields on it (even if they are found nowhere in the return type spec), those fields will still get serialized back to the client. A common attack vector is to try to get an API endpoint to return an object with, for example, extra error data, which can be very helpful to the attacker (e.g. things like stack traces). I'd have to search for them, but some major breaches occurred this way. Yes, many REST frameworks allow you to specify things like validators (the original comment mentioned zod), but these validators are usually optional and not always directly tied to the tools used to define the return type schema in the first place.
So with GraphQL, I'm not talking about access controls on GraphQL-defined fields - that's another topic. But I'm saying that if your resolver method (accidentally or not) returns an object that either doesn't conform to the return type schema, or it has extra fields not defined in the schema (which is not uncommon), GraphQL guarantees those values won't be returned to the client.
Therefore requests between GQL and downstream services are travelling "over the wire" (though I don't see it as an issue)
Having REST APIs that return only "fat" objects is really not the most secure way of designing APIs.
But you're right, if you have version skew and the client is expecting something else then it's not much help.
You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.
If you just slap in Zod, the server will drop the extra inputs. If you hate Zod, it's not hard to design a similar thing.
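A quick sketch of that default (field names are illustrative; this is Zod's non-strict object behavior, which strips unknown keys on parse):

```typescript
import { z } from "zod";

const CreateUser = z.object({ email: z.string().email() });

const parsed = CreateUser.parse({
  email: "ada@example.com",
  isAdmin: true, // attacker-supplied extra field: silently dropped
});
console.log(parsed); // { email: "ada@example.com" }
```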
> or if client and server disagree that a field is optional or not
Doesn't GQL have the concept of required vs optional fields too? IIUC it's the same problem. You just have to be very diligent about this, not really a way around it. Protobufs went as far as to remove 'required' out of the spec because this was such a common problem. Just don't make things required, ever :-)
Yea, GraphQL is what I'm referring to.
I agree with that, and when I'm in a "TypeScript only" ecosystem, I've switched to primarily using tRPC instead of GraphQL.
Still, I think people tend to underestimate the value of the clear contracts and guarantees that GraphQL enforces (not to mention its whole ecosystem of tools), completely outside of any code you have to write. Yes, you can do your own zod validation, but in a large team, as an API evolves and people come and go, having hard, unbreakable lines in the sand (vs. something you have to roll yourself, or which is done by convention) is important IMO.
https://github.com/kubb-labs/kubb
Most of the commits and pull requests are AI. Issues are also seemingly being handled by AI with minimal human intervention.
And yes, current models are amazing at reducing time it takes to push out a feature or fix a bug. I wouldn't even consider working at a company that banned use of AI to help me write code.
PS: It's also irrelevant to whether it's AI generated or not, what matters is if it works and is secure.
How do you know it works and is secure if a lot of the code likely hasn't ever been read and understood by a human?
And you presume that the code hasn't been read or understood by a human. AI doesn't click merge on a PR, so it's highly likely that the code has been read by a human.
So, the project is human enough to annoy me, anyway.
The only mature, correct, fast option with a fixed cost (since it mostly exists at the type level meaning it doesn't scale your bundle with your API) was openapi-ts. I am not affiliated other than a previous happy user, though I did make some PRs while using it https://openapi-ts.dev/
The value of GQL is pretty much equivalent to SOA orchestration - great in theory, just gets in the way in practice.
Oh, and not to mention that GQL will inadvertently hide away bad API design (e.g. lack of pagination)... until you are left questioning why your app with 10k records in total is slow AF.
See how that works?
But the point is that that benefit is not unique to GraphQL, so by itself, it is not a compelling reason to choose GraphQL over something else.
But OpenAPI is verbose to the point of absurdity. You can't feasibly write it by hand, so you can't do schema-first development. You need an OpenAPI-compatible lib for authoring your API, some tooling to generate the schema from the code, and then another tool to generate types from the schema. Each step tends to implement the spec to a varying degree, creating gaps in types, or just outright failing.
FWIW I tried many, many tools to generate the TypeScript from the schema. Most resulted in horrendous, bloated code, the official generators especially. Many others just choked on a complex schema, or used basic string concatenation to output the TypeScript, leading to invalid code. Additionally, the cost of the generated code scales with the schema size, which can mean shipping huge chunks of code to the client as your API evolves.
The tool I will wholeheartedly recommend (and with which I am unaffiliated besides making a few PRs) is openapi-ts. It is fast and correct, and you pay a fixed cost: there's a fetch wrapper for runtime, and everything else exists at the type level.
I was kinda surprised how bad a lot of the tooling was, considering how mature OpenAPI is. Perhaps it has advanced in the last year or so, since I stopped working on the project where I had to do this.
https://openapi-ts.dev/
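For what it's worth, the fixed-cost pattern being described looks roughly like this (paths and the API are illustrative; `schema.d.ts` is assumed to be generated with `openapi-typescript`):

```typescript
import createClient from "openapi-fetch";
import type { paths } from "./schema"; // generated: npx openapi-typescript api.yaml -o schema.d.ts

const client = createClient<paths>({ baseUrl: "https://api.example.com" });

// Path, params, and response body are all checked at compile time;
// the only runtime cost is this thin fetch wrapper.
const { data, error } = await client.GET("/podcasts/{id}", {
  params: { path: { id: "42" } },
});
```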
A sibling comment to your reply expressed the same sentiment as me, and also mentioned TypeSpec as a possible solution.
There's one side where dealing with another tool isn't worth it most of the time, and the other side where we're already reading/writing screens of YAML or YAML-like docs all the time.
Taking time to properly think about and define an entry point is reasonable enough.
I think you're overfitting your own experiences.
I never got to use it when I last worked with OpenAPI, but it seemed like the antidote to the verbosity. Glad to hear someone had a positive experience with it. I'll definitely try it next time I get the chance.
Moreover, system boundaries are the best places to invest in being explicit. OpenAPI specs really don’t have that much overhead (especially if you make use of YAML anchors), and are (usually) suitably descriptive to describe the boundary.
In any case, starting with a declarative contract/IDL and doing something like codegen is a great way to go.
If I need more information about a resource that an endpoint exposes, I need another request. If I'm looking at a podcast episode, I might want to know the podcast network that the show belongs to. So first I have to look up the podcast from the id on the episode. Then I have to look up the network by the id on the podcast. Now, two requests later, I can get the network details. GQL gives that to me in one query, and the fundamental properties of what makes GQL GQL are what enables that.
Yes, you can jam podcast data on the episode, and network data inside of that. But now I need a way to not request all that data so I'm not fetching it in all the places where I don't need it. So maybe you have an "expand" parameter: this is what Stripe does. And really, you've just invented a watered down, bespoke GraphQL.
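The single round trip the parent comment describes, against a hypothetical schema:

```typescript
const EPISODE_WITH_NETWORK = /* GraphQL */ `
  query EpisodeWithNetwork($id: ID!) {
    episode(id: $id) {
      title
      podcast {
        title
        network {
          name
        }
      }
    }
  }
`;
// One request replaces the episode -> podcast -> network chain of
// three sequential REST calls.
```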
GQL has a pretty substantial up front cost, undeniably. But you hopefully balance that with the benefit you'd get from it.
The simple question is: what happens when you deploy API changes, but your client is running on stale types?
Anything that comes from the front end can be tampered with. Server is guaranteed nothing.
Requests can be tampered with, so there's NO additional security from the GraphQL protocol.
I'm actually spending a lot of time in the REST-ish world, and contracts aren't the problem I'd solve with GraphQL either. For that I'd go through OpenAPI and its enforcement and validation. That is very viable these days; it just isn't a "default" in the ecosystem.
For me, the main problem GraphQL solves, which I haven't got a good alternative for, is API composition and evolution, especially in M:N client-services scenarios in large systems. Having the mindset of "client describes what they need" -> "GraphQL server figures out how to get it" -> "domain services resolve their part" makes long-term management of a network of APIs much easier (see the sketch below). And when it's combined with good observability, it can become one of the biggest enablers for data access.
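A minimal sketch of that flow, with hypothetical, stubbed domain services: the client asks for a shape, and each service resolves only the part it owns.

```typescript
// Stub domain services (illustrative).
const orderService = {
  get: async (id: string) => ({ id, customerId: "c1" }),
};
const customerService = {
  get: async (id: string) => ({ id, name: "Ada" }),
};

// The GraphQL layer composes them; clients never learn that customer
// data lives in a different service.
const resolvers = {
  Query: {
    order: (_: unknown, args: { id: string }) => orderService.get(args.id),
  },
  Order: {
    customer: (order: { customerId: string }) =>
      customerService.get(order.customerId),
  },
};
```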
On a related note, this is also why I really dislike those "Hey, just expose your naked DB schemas as a GraphQL API!" tools. Like the best part about GraphQL is how it decouples your API contract from backend implementation details, and these tools come along and now you've tightly coupled all your clients to your DB schema. I think it's madness.
OpenAPI, Thrift and protobuf/gRPC are all far better schema languages. For example: the separation of input types and object types.
1. The main argument for introducing it has always been appropriate data fetching for clients, where clients can describe exactly what's required.
2. The ability to define a schema is touted as an advantage, but managing the schema becomes a nightmare. (BTW, the schema already exists at the persistence layer if it was required; schema changes and schema migrations are already challenging, and you just happen to replicate the challenge in one additional layer with GraphQL.)
3. You go big and you get into GraphQL servers calling into other GraphQL servers, and that's when things become really interesting. People do not realize/remember/care about the source of the data, you have name collisions, and you get into namespaces.
4. You started on the pretext of optimizing queries, and now you have this layer that your client works with; the natural flow is to implement mutations with GraphQL too.
5. Things go downhill from this point. With distributed services you had already lost transactionality, and GraphQL mutations just add to it. You get into circular references because underlying services are just calling other services via GraphQL to get the data you asked for with a GraphQL query.
6. The worst: you do not want too many small schema objects, so now you have this one big schema that gets you everything from multiple REST API endpoints, and clients are back to where they started: pick what you need to display on the screen.
7. Open the network tab of any enterprise application that uses GraphQL and it's easy to see how much unusable data is fetched for displaying simplistic pages.
There is nothing wrong with GraphQL per se; this pretty much applies to all tools. It comes down to how you use it and how well you understand the trade-offs. Treating anything like a silver bullet is going to lead in the same direction. Pretty much every engineer who has operated at application scale is aware of this; unfortunately, they just stay quiet.
I've seen this solved in REST land by using a load balancer or proxy that does path-based routing: api.foo.com/bar/baz gets routed to the "bar" service.
Depends on your infra needs. Could easily be handled by the controller calling out to an external service. Like you do with a database.
You could use a proxy layer, but it isn't a requirement.
What else does relay give me that URQL does not?
- You don't have a normalized cache. You may not want one! But if you find yourself annoyed that modifying an entity in one location doesn't automatically cause another view into that same entity to update, that's due to the lack of a normalized cache. And this is a more frequent problem than folks admit. You might go from a detail view to an edit view, modify a few things, then press the back button. You can't reuse cached data without a normalized cache, or without custom logic to keep these items in sync. At scale, it doesn't work.
- Since you don't have a normalized cache, you presumably just refetch instead of updating items in the cache. So you will presumably re-render an entire page in response to changes. Relay will just re-render components whose data has actually changed. In https://quoraengineering.quora.com/Choosing-Quora-s-GraphQL-..., the engineer at Quora points out that as one paginates, one can get hundreds of components on the screen. And each pagination slows the performance of the page, if you're re-rendering the entire page from root.
- Fragments are great. You really want data masking, and not just at the type level. If you stop selecting some data in one component, it may affect the behavior of other components, if they do something like JSON.stringify or Object.keys. But admittedly, type-level data masking + colocation is substantially better than nothing.
- Relay will also generate queries for you. For example, pagination queries, or refetch queries (where you refetch part of a tree with different variables.)
There are lots of great reasons to adopt Relay!
And if you don't like the complexity of Relay, check out isograph (https://isograph.dev), which (hopefully) has better DevEx and a much lower barrier to entry.
https://www.youtube.com/watch?v=lhVGdErZuN4 goes into more detail about the advantages of Relay
Despite the many REST flaws I know of, and that it feels tedious sometimes, I still prefer it.
And now, with AI that can scaffold most REST code, the pain points of REST are mostly "gone".
Now that people are using tRPC a lot, I wonder: can we combine gRPC + REST into something essentially typesafe, where the client would be guaranteed to understand what the model response looks like?
I also really liked that you can create a snapshot of the whole schema for integration test purposes, which makes it very easy to detect breaking changes in the API, e.g. if a nullable field becomes not-nullable.
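A sketch of that kind of snapshot test (Jest assumed; the schema import path is illustrative):

```typescript
import { printSchema } from "graphql";
import { schema } from "./schema";

test("API schema has no unreviewed changes", () => {
  // Any change, e.g. a nullable field becoming non-null, fails the
  // snapshot and forces an explicit review before merge.
  expect(printSchema(schema)).toMatchSnapshot();
});
```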
But I also agree with lots of the points of the article. I guess I am just not super in love with REST. In my experience, REST APIs were often quite messy and inconsistent in comparison to GraphQL. But of course that’s only anecdotal evidence.
GraphQL clients are built to do exactly that (Relay originally, and Apollo in the last year), if I'm understanding what you're saying: any component that touches E doesn't have to care about how you got to it; fragment masking makes short work of it.
Do people actually work like this in 2025? I mean sure, I guess when you're having entire teams just for frontends and backends, then yeah, but your average corporate web app development? It's all full stack these days. It's often expected that you can handle both worlds (client and server), and increasingly it's even a TypeScript "shared universe" where you don't even leave the TS ecosystem (React with something like RR plus a TS BFF with SQL). This last point, where frontend and backend meet, is clearly the way things are going in general. I mean, these days React doesn't even beat around the bush and literally tells you to install it with a framework: no more create-react-app, server-side rendering is a staple now, and server components are going to be a core concept of React within a few years tops.
Javascript has conquered the client side of the internet, but not the server side. Typescript is going to unify the two.
Full stack is common for simple web apps, where the backend is almost a thin layer over the database.
But a lot of the products I’ve worked with have had backends that are far more complex than something you could expect the front end devs to just jump into and modify.
The internet at large seems to have a fundamental misunderstanding about what GraphQL is/is not.
Put simply: GQL is an RPC spec that is essentially implemented as a Dict/Key-Value Map on the server, of the form: "Action(Args) -> ResultType"
In a REST API you might have a route table mapping verbs and paths to handlers.
In GraphQL, you have a "resolvers" map (a hedged reconstruction follows below). And instead of sending a GET /user request, you send a GET /query with "getUser" as your server action. The arguments and output shape of your API routes are typed, like in OpenAPI/OData/gRPC.
That's all GraphQL is.
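The inline examples in that comment didn't survive extraction; a hedged reconstruction of the mapping it describes (names and the user store are illustrative):

```typescript
// REST: verb + path -> handler
//   GET /user/:id  ->  getUser(id)

// Hypothetical user store for illustration.
const db = { findUser: async (id: string) => ({ id, name: "Ada" }) };

// GraphQL: a single endpoint plus a map of named actions.
const resolvers = {
  Query: {
    getUser: (_: unknown, args: { id: string }) => db.findUser(args.id),
  },
};
// The client sends { query: '{ getUser(id: "1") { name } }' } to /graphql.
```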
Seriously though, you can pretty much map GraphQL queries and resolvers onto JSONSchema and functions however you like. Resolvers are conceptually close to calling a function in a REST handler with more overhead
I suspect the companies that see ROI from GraphQL would have found it with many other options, and it was more likely about rolling out a standard way of doing things
My understanding is that this is not part of the spec and that the only way to achieve this is to sign/hash documents on clients and server to check for correctness
Though you still don’t need to and shouldn’t. Better to use the well defined tools to gate max depth/complexity.
It has been, at the scale it matters and should be used at. Most companies don't operate at that scale though.
At build time, the server generates random-string resolver names that map onto queries: 1-1, fixed, because we know exactly what we need when we are shipping to production.
Clients can only call those random strings with some parameters; the graph is now locked down, and the production server only responds to the random-string resolver names.
Flexibility in dev, restricted in prod
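A sketch of that lockdown (commonly known as persisted queries; the hashing scheme and names are illustrative):

```typescript
import { createHash } from "node:crypto";

// Build time: hash every known operation into an allowlist.
const DELETE_EPISODE = `mutation DeleteEpisode($id: ID!) { deleteEpisode(id: $id) { id } }`;
const opId = createHash("sha256").update(DELETE_EPISODE).digest("hex");
const allowlist = new Map([[opId, DELETE_EPISODE]]);

// Production: the server accepts only known ids, never raw query text.
function resolveOperation(requestedId: string): string {
  const query = allowlist.get(requestedId);
  if (!query) throw new Error("Unknown operation id");
  return query;
}
```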
My experience with GraphQL in a nutshell: A lot of effort and complexity to support open ended queries which we then immediately disallow and replace with a fixed set of queries that could have been written as their own endpoints.
I say probably because in the last ~year Apollo shipped functionality (fragment masking) that brings it closer.
I stand by my oft-repeated statement that I don’t use Relay because I need a React GraphQL client, I use GraphQL because I really want to use Relay.
The irony is that I have a lot of grievances about Relay, it’s just that even with 10 years of alternatives, I still keep coming back to it.
What about relay is so compelling for you? I'm not disagreeing, just genuinely curious since I've never really used it.
* Relatively fine-grained re-rendering out of the box, because you don't pass the entire query response down the tree. useFragment is akin to a redux selector (see the sketch after this list)
* Plays nicely with suspense and the defer fragment, deferring a component subtree is very intuitive
* mutation updaters defined inline rather than in centralised config. This ended up being more important than expected, but having lived the reality of global cache config with our existing urql setup at my current job, I’m convinced the Relay approach is better.
* Useful helpers for pagination, refetchable fragments, etc
* No massive up-front representation of the entire schema needed to make the cache work properly. Each query/fragment has its own codegenned file that contains all the information needed to write to the cache efficiently. But because they’re distributed across the codebase, it plays well with bundle size for individual screens.
* Guardrails against reuse of fragments thanks to the eslint plugin. Fragments are written to define the data contract for individual components or functions, so there's no need to share them around. Our existing urql codebase has a lot of "god fragments" which are incredibly painful to work with.
Recent versions of Apollo have some of these things, but only Relay has the full suite. It’s really about trying to get the exact data a component needs with as little performance overhead as possible. It’s not perfect — it has some quite esoteric advanced parts and the documentation still sucks, but I haven’t yet found anything better.
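For the curious, the useFragment pattern from the first bullet looks roughly like this (component and field names are illustrative; real code would use Relay's generated key types instead of `any`):

```tsx
import { graphql, useFragment } from "react-relay";

function EpisodeTitle({ episodeRef }: { episodeRef: any }) {
  // Subscribes to exactly these fields: the component re-renders when
  // "title" changes, not when the whole query response does.
  const data = useFragment(
    graphql`
      fragment EpisodeTitle_episode on Episode {
        title
      }
    `,
    episodeRef,
  );
  return <h2>{data.title}</h2>;
}
```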
Did my only ever podcast appearance about it a few years ago. Haven’t watched it myself because yikes, but people say it was pretty good https://youtu.be/aX60SmygzhY?si=J8rQF6Pe5RGdX1r8
Resolvers should be an exception for the data that can't come directly from the database, not the backbone of the system.
In my experience, it's better to fix a bad endpoint and keep all the browser/server side tooling around tracing requests than to replace all that with a singular graphql endpoint. But curious to hear someone else's opinion here
This gets repeated over and over again, but if this is your take on GraphQL, you definitely shouldn't be using GraphQL, because overfetching is never such a big problem that it would warrant using GraphQL.
In my mind, the main problem GraphQL tries to solve is the same "impedance mismatch" that ORMs try to solve. ORMs do this at the data-fetching level in the BE, while GraphQL does it in the client.
I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.
Wait, what? Overfetching is easily one of the top three reasons for the enshittification of the modern web! It's one of the primary causes of the incredible slowdowns we've all experienced.
Just go to any slow web app, press F12 and look at the megabytes transferred on the network tab. Copy-paste all text on the screen and save it to a file. Count the kilobytes of "human readable" text, and then divide by the megabytes over the wire to work out the efficiency. For notoriously slow web apps, this is often 0.5% or worse.
Checks out
TLDR, you get nice features like: if the field you're selecting doesn't exist, the extension will create the field for you (as a client field.) And your entire app is built of client fields that reference each other and eventually bottom out at server fields.
How is this easier or faster than writing a few lines of code at the BFF?
The complexity and time lost to thinking are just not worth it, especially since once you ship your GraphQL app to production, you are locking down the request fields anyway (or you're leaving yourself open to more pain).
I even wrote a zero-dependency auth helpers package and that was not enough for me to keep at it
https://github.com/verdverm/graphql-autharoo
Like OP says, pretty much everything GraphQL can do, you can do better without GraphQL
Also, using a proper GraphQL server and not composing it yourself from primitives is usually beneficial.
Apollo shows up in the README and package.json, so I'm not sure why you are assuming I was not using a proper implementation
Because of the graph aspect, queries don't work until all of the underlying resources have been updated to support GitHub Apps. From a juice-vs-squeeze perspective it's terrible: lots of teams have to do work to update their resources (which, given turnover and age, they may not even be aware of) before basic queries start working, until you finally hit critical mass at some high percentage of coverage.
Add to all that the prevailing enterprise customer sentiment of "please anything but graphql" and it's a really hard sell - it's practically easier and better to ask teams to rebuild their APIs in REST than update the graphql.
GraphQL is painful to maintain. GraphQL is hated by many engineers.
The most practical thing is to replace it, even if it takes many years.
It's about the only thing about my job I still do like.
The difference is that it is schema-first, so you are describing your API at a level that largely replaces backend-for-frontend stuff.
I tend not to use it in unsecured contexts and I don't know if I would bother with GraphQL more generally, though WP-GraphQL has its advantages.
70 more comments available on Hacker News