GraphQL: The enterprise honeymoon is over

by johnjames4214 - johnjames.blog


> The main problem GraphQL tries to solve is overfetching.

My issue with this article is that, as a GraphQL fan, solving overfetching is far from what I see as its primary benefit, so the rest of the article feels like a strawman to me.

TBH I see the biggest benefits of GraphQL as being that (a) it forces a much tighter contract around endpoint and object definition with its type system, and (b) it makes schema evolution much easier than other API tech does.

For the first point, the entire ecosystem guarantees that when a server receives an input object, that object conforms to its declared type, and similarly, that the object a client receives conforms to the endpoint's response type. Coupled with custom scalar types (e.g. "phone number" or "email address" types), this can eliminate a whole class of bugs and security issues. Yes, other API tech does something similar, but I find the guarantees are far less "guaranteed" and it's much easier for errors to slip through. For example, GraphQL always prunes return objects down to just the fields requested, which most other API tech doesn't do, and that can be a really nice security benefit.
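To make the custom-scalar point concrete, here's a minimal sketch (not any particular library's API, though in graphql-js you'd hang a function like this off a GraphQLScalarType's parseValue) of what an "email address" scalar's validation step buys you:

```typescript
// Hypothetical parse step for a custom "EmailAddress" scalar.
// Any input that reaches a resolver through this scalar is guaranteed
// to have passed this check; invalid values are rejected at the edge.
function parseEmailAddress(value: unknown): string {
  if (typeof value !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) {
    throw new TypeError(`Invalid EmailAddress: ${JSON.stringify(value)}`);
  }
  return value;
}

parseEmailAddress("alice@example.com"); // ok, returns the string unchanged
// parseEmailAddress("not-an-email");   // throws TypeError
```

The regex here is deliberately crude; the point is that every resolver downstream can trust the type without re-validating.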

When it comes to schema evolution, I've found that adding new fields and deprecating old ones, and especially that new clients only ever have to be concerned with the new fields, is a huge benefit. Again, other API tech allows you to do something like this, but it's much less standardized and requires a lot more work and cognitive load on both the server and client devs.

I 100% agree that overfetching isn't the main problem graphql solves for me.

I'm actually spending a lot of time in the REST-ish world, and contract isn't the problem I'd solve with GraphQL either. For that I'd go with OpenAPI and its enforcement and validation. That is very viable these days; it just isn't a "default" in the ecosystem.

For me, the main problem GraphQL solves, for which I haven't got a good alternative, is API composition and evolution, especially in M:N client-service scenarios in large systems. Having the mindset of "client describes what they need" -> "GraphQL server figures out how to get it" -> "domain services resolve the parts" makes long-term management of a network of APIs much easier. And when it's combined with good observability, it can become one of the biggest enablers for data access.

> For me, the main problem GraphQL solves, for which I haven't got a good alternative, is API composition and evolution, especially in M:N client-service scenarios in large systems. Having the mindset of "client describes what they need" -> "GraphQL server figures out how to get it" -> "domain services resolve the parts" makes long-term management of a network of APIs much easier. And when it's combined with good observability, it can become one of the biggest enablers for data access.

I've seen this solved in REST land by using a load balancer or proxy that does path-based routing: api.foo.com/bar/baz gets routed to the "bar" service.

Doesn't even need to be a proxy; you can lay out your controllers and endpoints like this just fine in most modern frameworks.

How do you do routing across services?

> How do you do routing across services?

Depends on your infra needs. Could easily be handled by the controller calling out to an external service. Like you do with a database.

You could use a proxy layer, but it isn't a requirement.

Completely agree with this rationale too. GraphQL does encapsulation really, really well. The client just knows about a single API surface, but the implementation about which actual backend services are handling the (parts of each) call is completely hidden.

On a related note, this is also why I really dislike those "Hey, just expose your naked DB schemas as a GraphQL API!" tools. Like the best part about GraphQL is how it decouples your API contract from backend implementation details, and these tools come along and now you've tightly coupled all your clients to your DB schema. I think it's madness.

I have used and implemented GraphQL at two large-scale companies across multiple (~xx) services. There are similarities in how it unfolds; however, I have not seen any real-world problem being solved with it so far.

1. The main argument for introducing it has always been appropriate data fetching for clients, where clients can describe exactly what's required.

2. The ability to define a schema is touted as an advantage, but managing the schema becomes a nightmare. (Btw, the schema already exists at the persistence layer if that's what you needed; schema changes and schema migrations are already challenging, and you just happen to replicate the challenge in one additional layer with GraphQL.)

3. You go big and you get into GraphQL servers calling into other GraphQL servers, and that's when things become really interesting. People do not realize/remember/care about the source of the data, you get name collisions, you get into namespaces.

4. You started on the pretext of optimizing queries, and now you have this layer your clients work with, so the natural next step is to implement mutations with GraphQL.

5. Things go downhill from this point. With distributed services you had already lost transactionality, and GraphQL mutations just add to it. You get circular references, because underlying services are just calling other services via GraphQL to get the data you asked for with a GraphQL query.

6. The worst: you don't want too many small schema objects, so now you have one big schema that gets you everything from multiple REST API endpoints, and clients are back where they started: pick what you need to display on the screen.

7. Open up the network tab of any enterprise application that uses GraphQL and it's easy to see how much unusable data is fetched via GraphQL for displaying simplistic pages.

There is nothing wrong with GraphQL; this pretty much applies to all tools. It comes down to how you use it and how good you are at understanding the trade-offs. Treating anything like a silver bullet is going to lead in the same direction. Pretty much every engineer who has operated at application scale is aware of this; unfortunately, they just stay quiet.

I agree as well. This may be the only thing GraphQL excels at. Dataloader implementations give this superpowers.

OpenAPI, Thrift and protobuf/gRPC are all far better schema languages. For example: the separation of input types and object types.

If you generate TypeScript types from OpenAPI specs then you get contracts for both directions. There is no problem here for GraphQL to solve.

This is very much possible, and I have done it, and it works great once it's all wired up.

But OpenAPI is verbose to the point of absurdity. You can't feasibly write it by hand, so you can't do schema-first development. You need an OpenAPI-compatible lib for authoring your API, some tooling to generate the schema from the code, and then another tool to generate types from the schema. Each step tends to implement the spec to varying degrees, creating gaps in types or just outright failing.

Fwiw, I tried many, many tools to generate the TypeScript from the schema. Most resulted in horrendous, bloated code; the official generators especially. Many others just choked on a complex schema, or used basic string concatenation to output the TypeScript, leading to invalid code. Additionally, the size of the generated code scales with the schema size, which can mean shipping huge chunks of code to the client as your API evolves.

The tool I will wholeheartedly recommend (and with which I am unaffiliated besides making a few PRs) is openapi-ts. It is fast and correct, and you pay a fixed cost: there's a fetch wrapper at runtime and everything else exists at the type level.

I was kinda surprised how bad a lot of the tooling was, considering how mature OpenAPI is. Perhaps it has advanced in the last year or so, since I stopped working on the project where I had to do this.

https://openapi-ts.dev/

I write all of my openapi specs by hand. It's not hard.

I imagine you are very much in the minority. A simple hello world is a screen full of YAML. The equivalent in GraphQL (or TypeSpec, which I always wanted to try as an authoring format for OpenAPI: https://typespec.io/) would be a few lines.
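For a feel of the size difference, here's a hypothetical hello-world contract in GraphQL SDL next to a minimal OpenAPI 3 equivalent:

```graphql
type Query {
  hello: String!
}
```

```yaml
openapi: 3.0.3
info: { title: Hello, version: 1.0.0 }
paths:
  /hello:
    get:
      responses:
        "200":
          description: A greeting
          content:
            application/json:
              schema:
                type: object
                properties:
                  hello: { type: string }
```

And the OpenAPI side is still trimmed: no servers, operationIds, or error responses yet.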

I see your point, yet writing openapi specs by hand is pretty common.

There's the part where dealing with another tool isn't worth it most of the time, and the other side where we're already reading/writing screens of YAML or YAML-like docs all the time.

Taking time to properly think about and define an entry point is reasonable enough.

Being verbose doesn't make it difficult.

Not necessarily, no. But at a certain point, I believe it does. Difficult to read, is difficult to edit, is difficult to work with.

A sibling comment to your reply expressed the same sentiment as me, and also mentioned typespec as a possible solution

The standard pattern in Go and some Scala libs is to define the spec and generate the code.

I think you're overfitting your own experiences.

Do you validate responses client-side and server-side from the spec? (FastAPI does this and prevents invalid responses from being sent.)

Agree with the other comments about writing OpenAPI by hand. It’s really not that bad at all, and most certainly not “verbose to the point of absurdity.”

Moreover, system boundaries are the best places to invest in being explicit. OpenAPI specs really don’t have that much overhead (especially if you make use of YAML anchors), and are (usually) suitably descriptive to describe the boundary.

In any case, starting with a declarative contract/IDL and doing something like codegen is a great way to go.
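For what it's worth, the YAML-anchors trick mentioned above can remove a fair bit of the repetition. A hypothetical spec fragment reusing one error response across paths:

```yaml
components:
  responses:
    NotFound: &notFound
      description: Resource not found
paths:
  /podcasts/{id}:
    get:
      responses:
        "404": *notFound
  /episodes/{id}:
    get:
      responses:
        "404": *notFound
```

(The anchor `&notFound` is defined once and aliased with `*notFound` wherever the same response shape is needed.)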

YAML OpenAPI schemas, like SQL, are quite easy to write by hand and, more importantly, by AI. Telling the AI to keep the OpenAPI spec in sync with the latest changes made to an API works great and can even help you identify inconsistencies.

I use https://typespec.io to generate OpenAPI; writing OpenAPI YAML quickly became horrible past a few APIs.

Ha yes, see one of my other comments to another reply.

I never got to use it when I last worked with OpenAPI, but it seemed like the antidote to the verbosity. Glad to hear someone has had a positive experience with it. I'll definitely try it next time I get the chance.

What about the whole "graph" part? Are there any openapi libraries that deal with that?

An OpenAPI definition includes class hierarchies as well. You can use tools to generate TypeScript type definitions from that.

And the fetching in a single request?

There is JSON:API, a specification for JSON-over-HTTP APIs (distinct from JSON Schema, which OpenAPI builds on), which offers support for fetching relations (and relations of relations, etc.) and selecting a subset of fields in a single request: https://jsonapi.org/

I used this to get a fully type-safe client and API with minimal requests. But it was a lot of work to get right, and it's not as mainstream as OpenAPI itself. GQL is of course much simpler to get going.

The question I answered was regarding contracts. Fetching in a single request can be handled by your BFF.

So make things more complicated than gql?

gql is clearly the more complicated of the two ...

a gql server in Python is about as simple as it gets for exposing data via an API. You can query it with a raw HTTP client.

You still have GQL requests to deal with. It's pretty much the same amount of code to build a BFF as it is to build the same thing in GQL, and probably less code on the frontend.

The value of GQL is pretty much equivalent to SOA orchestration - great in theory, just gets in the way in practice.

Oh, and not to mention that GQL will inadvertently hide bad API design (e.g. lack of pagination) until you are left questioning why your app with 10k records in total is slow AF.

Your response is incredibly anecdotal (as is mine absolutely), and misleading.

GQL paved the way for a lot of ergonomics with our microservices.

And there's nothing stopping you from just adding pagination arguments to a field and handling them. Kinda exactly how you would in any other situation, you define and implement the thing.
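E.g., in SDL a cursor-style pagination argument is just more schema. A sketch (type and field names are hypothetical, loosely following the Relay connection convention, and `Episode` is assumed to be defined elsewhere):

```graphql
type Query {
  # first/after are the usual cursor-pagination arguments
  episodes(first: Int = 20, after: String): EpisodeConnection!
}

type EpisodeConnection {
  nodes: [Episode!]!
  endCursor: String
  hasNextPage: Boolean!
}
```

The resolver for `episodes` then reads `first`/`after` exactly the way a REST handler would read query parameters.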

Yeah I love it when a request turns into an N+1 query because the FE guys needed 1 more field.

What's that old saying, "fool me once ..."

Discovering Kubb was a game changer for me last year.

Thanks for mentioning this. I always find it unsettling when I've researched solutions for something and only find a better option from a random HN comment.

Site: https://kubb.dev/

Fwiw, I tried every tool imaginable a few years ago, including Kubb (which I think I contributed to while testing things out).

The only mature, correct, fast option with a fixed cost (since it mostly exists at the type level, meaning it doesn't scale your bundle with your API) was openapi-ts. I am not affiliated, just a previously happy user, though I did make some PRs while using it: https://openapi-ts.dev/

This project seems to be mostly AI generated, so keep that in mind before replacing any existing solutions.

No it doesn't

Did you see the repo?

https://github.com/kubb-labs/kubb

Most of the commits and pull requests are AI. Issues are also seemingly being handled by AI with minimal human intervention.

I've had a PR on Kubb that was taken over by a human maintainer. They then closed my PR and reimplemented my fix in their own PR.

So, the project is human enough to annoy me, anyway.

AI assisted, not necessarily generated.

And yes, current models are amazing at reducing the time it takes to push out a feature or fix a bug. I wouldn't even consider working at a company that banned the use of AI to help me write code.

PS: It's also irrelevant to whether it's AI generated or not, what matters is if it works and is secure.

> what matters is if it works and is secure.

How do you know it works and is secure if a lot of the code likely hasn't ever been read and understood by a human?

There are literally users here that say that it works.

And you presume that the code hasn't been read or understood by a human. AI doesn't click merge on a PR, so it's highly likely that the code has been read by a human.

Graphql solves the problem. There is no problem here for openapi to solve.

See how that works?

Openapi is older than graphql.

But the point is that that benefit is not unique to graphql, so by itself, that is not a compelling reason to choose graphql over something else.

Yeah that was one point of many of the benefits of the parent.

plus now you have 2 sources of truth

? I have a single source of truth in the gql schema. My frontend calls are generated from backend schema and type checked against it.

tRPC sort of does this (there's no spec, but you don't need a spec because the interface is managed by tRPC on both sides). But it loses the main defining quality of gql: not needing subsequent requests.

If I need more information about a resource that an endpoint exposes, I need another request. If I'm looking at a podcast episode, I might want to know the podcast network that the show belongs to. So first I have to look up the podcast from the id on the episode. Then I have to look up the network by the id on the podcast. Now, two requests later, I can get the network details. GQL gives that to me in one query, and the fundamental properties of what makes GQL GQL are what enables that.

Yes, you can jam podcast data onto the episode, and network data inside of that. But now I need a way to not request all that data, so I'm not fetching it in all the places where I don't need it. So maybe you have an "expand" parameter: this is what Stripe does. And really, you've just invented a watered-down, bespoke GraphQL.
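The podcast example above as a single hypothetical GraphQL query:

```graphql
query EpisodeWithNetwork {
  episode(id: "ep_123") {
    title
    podcast {
      name
      network {
        name
      }
    }
  }
}
```

The Stripe-style alternative would be roughly `GET /v1/episodes/ep_123?expand[]=podcast.network` (hypothetical resource, real Stripe expand syntax): the same idea, re-expressed as a bespoke query parameter.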

Is dealing with GQL easier than implementing a BFF? There may be cases where that is true, but it is not always true.

I think BFF works at a small scale, but that's true with any framework. Building a one off handful of endpoints will always be less work than putting a framework in place and building against it.

GQL has a pretty substantial up front cost, undeniably. But you hopefully balance that with the benefit you'd get from it.

If you generate OpenAPI specs, and clients, and server type definitions from a declarative API definition made with Effect's own @effect/platform, it solves even more things in a nicer, more robust fashion.

Agree whole-heartedly. The strong contracts are the #1 reason to use GraphQL.

The other one I would mention is the ability to very easily reuse resolvers in composition, and even federate them. Something that can be very clunky to get right in REST APIs.

Contracts for data with OpenAPI or an RPC don't come with the overhead of writing a resolver for infinite permutations when your apps probably need a few, or perhaps just one. Which is why REST plus something for validation is enough for most, and doesn't cost as much.

re: #1, is there a meaningful difference between GraphQL and OpenAPI here?

Composed resolvers are a headache for most and not seen as a net benefit; you can have proxied (federated) subsets of routes in REST, and that ain't hard at all.

> Composed resolvers are a headache for most and not seen as a net benefit; you can have proxied (federated) subsets of routes in REST, and that ain't hard at all.

Right, so if you take away the resolver composition (this is graph composition and not route federation), you can do the same things with a similar amount of effort in REST. This is no longer a GraphQL vs REST conversation, it's an acknowledgement that if you don't want any of the benefits you won't get any of the benefits.

There are pros & cons to GraphQL resolver composition, not just benefits.

It is that very compositional graph resolving that makes many see it as overly complex: not a benefit but a detriment. You seem to imply that the benefit is guaranteed and that graph resolving cannot be done within a REST handler. It can be, and it's much simpler and easier to reason about. I'm still going to go get the same data, but with less complexity and reasoning overhead than with GraphQL's resolver-composition concept.

Is resolver composition really that different from function composition?

Local non-utility does not imply global non-value. Of course there are costs and benefits, but it's hard to have a good-faith comparison using "many see it as overly complex": that's an analysis that completely ignores problem-fit, which you then want to generalize onto all usage.

People can still draw generalizations about a piece of technology that hold true regardless of context or problem fit.

One of those conclusions is that GraphQL is more complex than REST without commensurate ROI

Yeah, that’s a huge over-generalization

Pruning the request and even the response is pretty trivial with zod. I wouldn't onboard GQL for that alone.

Not sure about the schema evolution part. Protobufs seem to work great for that.

> Pruning the request and even the response is pretty trivial with zod.

I agree with that, and when I'm in a "typescript only" ecosystem, I've switched to primarily using tRPC vs. GraphQL.

Still, I think people tend to underestimate the value of the clear contracts and guarantees that GraphQL enforces (not to mention its whole ecosystem of tools), completely outside of any code you have to write. Yes, you can do your own zod validation, but in a large team, as an API evolves and people come and go, having hard, unbreakable lines in the sand (vs. something you have to roll yourself, or which is done by convention) is important IMO.

In my (now somewhat dated) GraphQL experience, evolving an API is much harder, input parameters in particular. If a server gets inputs it doesn't recognize, or if client and server disagree about whether a field is optional (even if a value was supplied for it, making the question moot), the server will reject the request.

> If a server gets inputs it doesn't recognize

If you just slap in Zod, the server will drop the extra inputs. If you hate Zod, it's not hard to design a similar thing.

> or if client and server disagree that a field is optional or not

Doesn't GQL have the concept of required vs optional fields too? IIUC it's the same problem; you just have to be very diligent about this, and there's not really a way around it. Protobufs went as far as removing 'required' from the spec because this was such a common problem. Just don't make things required, ever :-)
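On the first point, the "drop the extra inputs" behavior is cheap to get even without Zod (whose z.object strips unknown keys by default). A minimal sketch of the idea, without the library:

```typescript
// Keep only the keys the server-side schema knows about; silently drop the rest.
function stripUnknown<T extends Record<string, unknown>>(
  known: readonly string[],
  input: T
): Partial<T> {
  const out: Partial<T> = {};
  for (const key of known) {
    if (key in input) {
      (out as Record<string, unknown>)[key] = input[key];
    }
  }
  return out;
}

// A client on a newer version sends an extra "nickname" field:
const cleaned = stripUnknown(["id", "email"], {
  id: "u1",
  email: "a@b.co",
  nickname: "al",
});
// cleaned has only id and email; nickname is gone
```

(Names here are hypothetical; real validators also check types and requiredness, not just key membership.)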

> Doesn't GQL have the concept of required vs optional fields too?

Yea, graphql is what I'm referring to.

Pruning a response does nothing since everything still goes across the network

Pruning the response would help validate that your response schema is correct and that it is delivering what was promised.

But you're right, if you have version skew and the client is expecting something else then it's not much help.

You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.

You're misunderstanding. In GraphQL, the server prunes the response object. That is, the resolver method can return a "fat" object, but only the object pruned down to just the requested fields is returned over the wire.

It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).

I would like to remind you that in most cases the GQL server is not colocated on the same hardware as the services it queries.

Therefore requests between the GQL server and downstream services are travelling "over the wire" (though I don't see that as an issue).

Having REST apis that return only "fat" objects is really not the most secure way of designing APIs

"Just the requested fields" as requested by the client?

Because if so that is no security benefit at all, because I can just... request the fat fields.

I wanted to refute you, but you're right. It's not a security benefit. With GQL the server is supposed to null out the fields the user doesn't have access to, but that's not automagic or an inherent benefit of GQL. You have the same problem with normal REST. Or maybe less so, because you just wouldn't design the response with those extra fields; you'd probably build a separate 'admin' or 'privileged' endpoint, which is easier to lock down as a whole than individual fields.

I'll explain again, because this is not what I'm saying.

In many REST frameworks, while you define the return object type that is sent back over the wire, by default, if the actual object you return has additional fields on it (even if they are found nowhere in the return type spec), those fields will still get serialized back to the client. A common attack vector is to try to get an API endpoint to return an object with, for example, extra error data, which can be very helpful to the attacker (e.g. things like stack traces). I'd have to search for them, but some major breaches occurred this way. Yes, many REST frameworks allow you to specify things like validators (the original comment mentioned zod), but these validators are usually optional and not always directly tied to the tools used to define the return type schema in the first place.

So with GraphQL, I'm not talking about access controls on GraphQL-defined fields - that's another topic. But I'm saying that if your resolver method (accidentally or not) returns an object that either doesn't conform to the return type schema, or it has extra fields not defined in the schema (which is not uncommon), GraphQL guarantees those values won't be returned to the client.
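A rough sketch of that guarantee (a hypothetical helper, not how any real server is implemented): serialization walks the client's selection set, so anything the resolver returned outside of it simply never gets serialized:

```typescript
// A selection set maps field names to `true` (leaf) or a nested selection.
type Selection = { [field: string]: Selection | true };

// Walk the selection; fields the resolver returned but the client never
// selected (or the schema never declared) are never copied to the wire object.
function prune(sel: Selection, value: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [field, sub] of Object.entries(sel)) {
    if (!(field in value)) continue;
    const v = value[field];
    out[field] =
      sub === true || typeof v !== "object" || v === null
        ? v
        : prune(sub, v as Record<string, unknown>);
  }
  return out;
}

// Resolver accidentally returned extra fields; client asked for { id user { name } }:
const wire = prune(
  { id: true, user: { name: true } },
  {
    id: 1,
    stackTrace: "at foo()", // accidental debug leakage
    user: { name: "Alice", passwordHash: "..." },
  }
);
// wire contains only id and user.name; the leaked fields never serialize
```

This is the property being described: the contract, not resolver discipline, decides what crosses the wire.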

Facebook had started bifurcating API endpoints to support iOS vs Android vs Web, and over time a large number of OS-specific endpoints evolved. A big part of their initial GraphQL marketing was solving this problem specifically.

> when a server receives an input object, that object will conform to the type

Anything that comes from the front end can be tampered with. The server is guaranteed nothing.

> GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.

Requests can be tampered with, so there's no additional security from the GraphQL protocol. Security must be implemented by narrowing down to only the allowed data on the server side. How much of it is requested doesn't matter for security.

Expecting GraphQL to handle security is really one of the poorest ways of doing security, as GQL is not designed to do that.

But if you just want a nicely typed interface for your APIs, in my experience gRPC is much more useful, because of all of the other downsides the blog author mentioned.

Sorry, but I'm not convinced. How is this different from two endpoints communicating through, let's say, protobuf? Both input and output will be (un)parsed only when conforming to the definition.

The author is missing the #1 benefit of GraphQL: the ability to compose (the data for) your UI from smaller parts.

This is not surprising: Apollo only recently added support for data masking and fragment colocation, but it has been a feature of Relay for eternity.

See https://www.youtube.com/watch?v=lhVGdErZuN4 for the benefits of this approach:

- you can make changes to subcomponents without worrying about affecting the behavior of any other subcomponent,

- the query is auto-generated based on the fragment, so you don't have to worry that removing a field (if you stop using it in one subcomponent) will accidentally break another subcomponent

In the author's case, either they don't care about overfetching (i.e. they avoid removing fields from the GraphQL query), or they're at a scale where only a small number of engineers touch the codebase. (But imagine a shared component, like a user avatar. Imagine it stopped using the email field. How many BFFs would have to be modified to stop fetching the email field? And how much research must go into determining whether any other reachable subcomponent used that email field?)

If moving fast without overhead isn't a priority (or you're not at the scale where it is a problem), or you're not using a tool that leverages GraphQL to enable this speed, then indeed, GraphQL seems like a bad investment! Because it is!
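The shared-avatar example above, as colocated fragments (all names hypothetical): the avatar owns its own field list, so dropping email is a one-fragment change no matter how many screens compose it:

```graphql
fragment UserAvatar_user on User {
  avatarUrl
  email # stop rendering it? delete this line; every composing query updates
}

fragment CommentHeader_comment on Comment {
  author {
    name
    ...UserAvatar_user
  }
}
```

The page query is assembled from fragments like these by the compiler, which is what makes the removal safe to do locally.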

Yes, Apollo not leading people down the correct path has given people a warped perception of what the benefits actually are. Colocation is such a massive improvement that isn't really replicated anywhere else: just add your data requirements beside your component and the data "magically" (though not actually magic) gets requested and funnelled to the right place.

Apollo essentially only had a single page mentioning this, and it wasn't easy to find, for _years_

Quite. Apollo Client is the problem, IMO, not GraphQL.

Though Relay still needs to work on their documentation: Entrypoints are so excellent and yet still are basically bare API docs that sort of rely on internal Meta shit

The docs situation continues to be hilarious and bad, for the gem they have created.

It's the unfortunate situation where those who know, know, and those who don't disparage the whole thing based on misunderstanding.

Super unfortunate, which could be solved by simply moving a little money over to Relay's docs, and working on some marketing materials.

100% agree on the unnecessary connection between entrypoints and meta internals. I think this is one of the biggest misses in Relay, and severely limits its usefulness in OSS.

If you're interested in entrypoints without the Meta internals, you may be interested in checking out Isograph (which I work on). See e.g. https://isograph.dev/docs/loadable-fields/, where the data + JS for BlogBody is loaded afterward, i.e. entrypoints. It's as simple as annotating a field (in Isograph, components define fields) with @loadable(lazyLoadArtifact: true).

Neat! I basically just reimplemented some of the missing pieces myself, but honestly, for the kind of non-work GraphQL/Relay stuff I do, React Router with an entrypoint-like interface for routes (including children!) to feed route params into loadQuery, plus the ref to the route itself, got me close enough for my purposes.

I’ll have a play though, sounds promising :)

Oh this is interesting, sort of seems like the relay-3d thing in some ways?

Yeah, you can get a lot of features out of the same primitive. The primitive (called loadable fields, but you can think of it as a tool to specify a section of a query as loaded later) allows you to support:

- live queries (call the loadable field in a setInterval)
- pagination (pass different variables and concatenate the result)
- defer
- loading data in response to a click

And if you also combine this with the fact that JS and fragments are statically associated in Relay, you can get:

- entrypoints
- 3D (if you just defer components within a type refinement; e.g. here we load ad items only when we encounter an item with typename AdItem: https://github.com/isographlabs/isograph/blob/627be45972fc47.... asAdItem is a field that compiles to `... on AdItem` in the actual query text)

And all of it is doable with the same set of primitives, and requiring no server support (other than a node field).

Do let me know if you check it out! Or if you get stuck, happy to unblock you/clarify things (it's hard for me to know what is confusing to folks new to the project.)

Reminds me a lot of Grafast too, a new stab at this from the people who made PostGraphile. I liked using Graphile; I haven't needed to rewrite or start a new project yet to try Grafast. https://grafast.org/

Agreed on fragment masking. Graphql-codegen added support for it but in a way that unfortunately is not composable with all the other plugins in their ecosystem (client preset or bust), to the point that to get it to work nicely in our codebase we had to write our own plugins that rip code from the client preset so that we could use them as standalone plugins.

The ecosystem in general appears to be a problem.

> The main problem GraphQL tries to solve is overfetching.

this gets repeated over and over again, but if this is your take on GraphQL, you definitely shouldn't be using GraphQL, because overfetching is never such a big problem that it would warrant using GraphQL.

In my mind, the main problem GraphQL tries to solve is the same "impedance mismatch" that ORMs try to solve. ORMs do this at the data-fetching level in the BE, while GraphQL does it in the client.

I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.

In my opinion, GraphQL tooling never panned out enough to make GraphQL worthwhile. Hasura is very cool, but on the client side, there's not much going on... and now with AI programming you can just have your data layers generated bespoke for every application, so there's really no point to GraphQL anymore.

> I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.

How is this easier or faster than writing a few lines of code at BFF?

If you're interested in an example of really good tooling and DevEx for GraphQL, then may I shamelessly promote this video in which I demonstrate the Isograph VSCode extension: https://www.youtube.com/watch?v=6tNWbVOjpQw

TLDR, you get nice features like: if the field you're selecting doesn't exist, the extension will create the field for you (as a client field.) And your entire app is built of client fields that reference each other and eventually bottom out at server fields.

URQL and gql.tada are great client side tooling innovations.

Curious what tooling you're using for GraphQL? IntelliJ has excellent support for it, as does Postman.

> overfetching is never such a big problem

Wait, what? Overfetching is easily one of the top three reasons for the enshittification of the modern web! It's one of the primary causes of the incredible slowdowns we've all experienced.

Just go to any slow web app, press F12 and look at the megabytes transferred on the network tab. Copy-paste all text on the screen and save it to a file. Count the kilobytes of "human readable" text, and then divide by the total transferred over the wire to work out the efficiency. For notoriously slow web apps, this is often 0.5% or worse, even when filtering down to API requests only!
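For concreteness, the back-of-envelope arithmetic behind that 0.5% figure looks like this (function name and numbers are hypothetical, just illustrating the ratio):

```javascript
// Efficiency = human-readable kilobytes on screen vs. kilobytes over the wire.
function wireEfficiencyPercent(visibleTextKb, transferredMb) {
  return (visibleTextKb / (transferredMb * 1024)) * 100;
}

// e.g. 25 KB of visible text delivered via 5 MB of network traffic
// works out to roughly half a percent efficiency.
```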

It is still a major problem, yes. Interestingly, if you go back to the talks that introduced GraphQL, much of the motivation wasn’t about solving overfetching (they kinda assumed you were already doing that, because it was the peak of the mobile app wave), but about solving the organisational and technical issues with existing solutions.

#1 unnecessary network waterfalls

#2 downloading the same fields multiple times

#3 downloading unneeded data/code

Checks out

Hilariously, React Server Components largely solve all three of these problems, but developers don't seem to want to understand how or why, or seem to suggest that they don't solve any real problems.

It’s no secret that RSC was at least partially an attempt to get close to what Relay offers but without requiring you adopt GraphQL.

There's an informed critique of RSC, but no one is making it.

I agree, though it's worth noting that data loader patterns in most pre-RSC React meta frameworks (and other frameworks) also solve most of these problems without the complexity of RSC. But RSC has so many benefits beyond simplifying and optimizing data fetching that it’s too bad HN commenters hate it (and anything frontend related whatsoever) so much.

Overfetching does not lead to those megabytes. And it has nothing to do with the enshittification process of a middleman like Amazon fucking over both customers and sellers.

I'm probably about as qualified to talk about GraphQL as anyone on the internet: I started using it in late 2016, back when Apollo was just an alternate client-side state/store library.

The internet at large seems to have a fundamental misunderstanding about what GraphQL is/is not.

Put simply: GQL is an RPC spec that is essentially implemented as a Dict/Key-Value Map on the server, of the form: "Action(Args) -> ResultType"

In a REST API you might have

  app.GET("/user", getUser)
  app.POST("/user", createUser)
In GraphQL, you have a "resolvers" map, like:

  {
    "getUser": getUser,
    "createUser": createUser,
  }
And instead of sending a GET /user request, you send everything to a single /query endpoint with "getUser" as your server action.

The arguments and output shape of your API routes are typed, like in OpenAPI/OData/gRPC.

That's all GraphQL is.
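To make the resolver-map framing above concrete, here's a minimal sketch of the dispatch it describes (the names and toy "database" are illustrative, not from any real server):

```javascript
// Toy data store, mirroring the REST routes above.
const db = { 1: { id: 1, name: "Ada" } };

// The "resolvers map": action name -> function.
const resolvers = {
  getUser: ({ id }) => db[id] ?? null,
  createUser: ({ name }) => {
    const id = Object.keys(db).length + 1;
    db[id] = { id, name };
    return db[id];
  },
};

// At this level of abstraction, a GraphQL server is just a dispatcher:
// look up the requested action in the map and call it with the args.
function execute(action, args) {
  const resolver = resolvers[action];
  if (!resolver) throw new Error(`Unknown action: ${action}`);
  return resolver(args);
}
```

A real server additionally parses the query document, validates it against the typed schema, and prunes the response to the requested fields, but the core dispatch really is this simple.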

As someone who’s used GraphQL since mid-2015, if you haven’t used GraphQL with Relay you probably haven’t experienced GraphQL in a way that truly exploits its strengths.

I say probably because in the last ~year Apollo shipped functionality (fragment masking) that brings it closer.

I stand by my oft-repeated statement that I don’t use Relay because I need a React GraphQL client, I use GraphQL because I really want to use Relay.

The irony is that I have a lot of grievances about Relay, it’s just that even with 10 years of alternatives, I still keep coming back to it.

Can you elaborate? I've used URQL and Apollo with graphql code gen for type safety and am a big fan.

What about relay is so compelling for you? I'm not disagreeing, just genuinely curious since I've never really used it.

For me it’s really about the component-level experience.

* Relatively fine-grained re-rendering out of the box because you don’t pass the entire query response down the tree. useFragment is akin to a redux selector

* Plays nicely with suspense and the defer fragment, deferring a component subtree is very intuitive

* mutation updaters defined inline rather than in centralised config. This ended up being more important than expected, but having lived the reality of global cache config with our existing urql setup at my current job, I’m convinced the Relay approach is better.

* Useful helpers for pagination, refetchable fragments, etc

* No massive up-front representation of the entire schema needed to make the cache work properly. Each query/fragment has its own codegenned file that contains all the information needed to write to the cache efficiently. But because they’re distributed across the codebase, it plays well with bundle size for individual screens.

* Guardrails against reuse of fragments thanks to the eslint plugin. Fragments are written to define the data contract for individual components or functions, so there’s no need to share them around. Our existing urql codebase has a lot of “god fragments” which are incredibly painful to work with.

Recent versions of Apollo have some of these things, but only Relay has the full suite. It’s really about trying to get the exact data a component needs with as little performance overhead as possible. It’s not perfect — it has some quite esoteric advanced parts and the documentation still sucks, but I haven’t yet found anything better.

Did my only ever podcast appearance about it a few years ago. Haven’t watched it myself because yikes, but people say it was pretty good https://youtu.be/aX60SmygzhY?si=J8rQF6Pe5RGdX1r8

Try gql tada it’s much better than graphQL codegen

I did. I really wanted to like it. I think it broke due to something I was doing with fragments or splitting up code in my monorepo. I may give it a shot again, from first principles it is a better approach.

This seems a bit reductive as it skims over the whole query resolution part entirely.

Which is where the real complexity comes in

This, for me, is a perfect description of the entirety of GraphQL tbh.

This is a great explanation of what the intent of GQL is. I'm curious though, as someone who has only a small amount of experience with it, what problem is that solving? From what I can tell, it's the same problem REST solves with a different interface. If it is the over-fetching problem, how big is that problem?

In my experience, it's better to fix a bad endpoint and keep all the browser/server side tooling around tracing requests than to replace all that with a singular graphql endpoint. But curious to hear someone else's opinion here

GraphQL is best if the entire React page gathers all requirement from subcomponents into one large GraphQL query and the backend converts the query to a single large SQL query that requests all the data directly from database where table and row level security make sure no private data is exposed. Then the backend converts SQL result into GraphQL response and React distributes the received data across subcomponents.

Resolvers should be an exception for the data that can't come directly from the database, not the backbone of the system.

I think you're oversimplifying it. You've left off the part where the client can specify which fields they want.

That's something you should only really do in development, and then cement for production. Having open queries where an attacker can find interesting resolver interactions in production is asking for trouble

> That's something you should only really do in development, and then cement for production

My experience with GraphQL in a nutshell: A lot of effort and complexity to support open ended queries which we then immediately disallow and replace with a fixed set of queries that could have been written as their own endpoints.

This is not the intended workflow. It is meant to be dynamic in nature.

But has this been thoroughly documented and are there solid libraries to achieve this?

My understanding is that this is not part of the spec and that the only way to achieve this is to sign/hash documents on clients and server to check for correctness

Well, it seems that the Apollo way of doing it now, via their paid GraphOS, is backwards of what I learned 8 years ago (there is always more than one way to do things in CS).

At build time, the server generates random-string resolver names that map onto queries, 1:1, fixed, because we know exactly what we need when we're shipping to production.

Clients can only call those random strings with some parameters, the graph is now locked down and the production server only responds to the random string resolver names

Flexibility in dev, restricted in prod

I mean yeah, in that Persisted Queries are absolutely documented and expected in production on the Relay side, and you’re a hop skip and jump away from disallowing arbitrary queries at that point if you want to

Though you still don’t need to and shouldn’t. Better to use the well defined tools to gate max depth/complexity.

All these extra requirements are why GraphQL never really captured enough mindshare to be a commonly selected tool

> GraphQL never really captured enough mindshare to be a commonly selected tool

It has been, at the scale it matters and should be used at. Most companies don't operate at that scale though.

Sure, maybe you compile away the query for production but the server still needs to handle all the permutations.

yup, and while they are fixed, it amounts to a more complicated code flow to reason about compared to your typical REST handler

Seriously though, you can pretty much map GraphQL queries and resolvers onto JSONSchema and functions however you like. Resolvers are conceptually close to calling a function in a REST handler with more overhead

I suspect the companies that see ROI from GraphQL would have found it with many other options, and it was more likely about rolling out a standard way of doing things

Is this relevant to the posted article? I don't see how the OP misrepresents anything about GQL.

Except you can't have e.g. a union as an argument, which means you can't construct e.g. SQL/MongoDB-like where clauses.

This is a genuinely accurate critique of GraphQL. We're missing some extremely table-stakes things, like generics, discriminated unions in inputs (and in particular, discriminated unions you can discriminate and use later in the query as one of the variants), closed unions, etc.

I strongly agree, and would add that reasoning about auth flow through nested resolvers is one of the biggest challenges, because it adds so much mental overhead. The reason is that a resolver may be called from completely different contexts and you have to account for that.

The complexity and time lost to thinking is just not worth it, especially since once you ship your GraphQL app to production, you're locking down the request fields anyway (or you're keeping yourself open to more pain)

I even wrote a zero-dependency auth helpers package and that was not enough for me to keep at it

https://github.com/verdverm/graphql-autharoo

Like OP says, pretty much everything GraphQL can do, you can do better without GraphQL

Authz overhead for graphql is definitely a problem. At GitHub we're adding github app support to the enterprise account APIs, meaning introducing granular permissions for each graphql resource type.

Because of the graph aspect, queries don't work until all of the underlying resources have been updated to support GitHub Apps. From a juice vs squeeze perspective it's terrible: lots of teams have to do work to update their resources (which, given turnover and age, they may not even be aware of) before basic queries start working, until you finally hit critical mass at some high percentage of coverage.

Add to all that the prevailing enterprise customer sentiment of "please anything but graphql" and it's a really hard sell - it's practically easier and better to ask teams to rebuild their APIs in REST than update the graphql.

I mean, the use of GraphQL for third party APIs has always been questionable wisdom. I’m about a big a GraphQL fan as it gets, but I’ve always come down on the side of being very skeptical that it’s suitable for anything beyond its primary use case — serving the needs of 1st-party UI clients.

Strongly agreed.

GitHub search is among the worst out there, is this why?

Have you tried using a decorator for auth?

Also, using a proper GraphQL server and not composing it yourself from primitives is usually beneficial.

This was an auth extension or plugin for Apollo, forget what they called it.

Apollo shows up in the README and package.json, so I'm not sure why you are assuming I was not using a proper implementation

GQL was always one of those things that sound good on the surface but in practice it never delivers and the longer you're stuck with it the worse it gets. Majority of tech is actually like this. People constantly want to reinvent the wheel but in the end, a wheel is a wheel and it will never be anything else.

i do a lot of data shenanigans and it's just annoying to work with when some saas goof doesn't consider that orgs are in the business of warehousing the piss out of entire platforms' worth of data that they are paying saas guys a million dollars a year for, just so they can marry it together with other reporting. all roads lead to damn reporting. so if you want to woo clients but only have graphql then you should probably build some connectors they can use to easily retrieve all their data elsewhere. i straight up don't meet business analysts who use graphql to fetch reporting data. it's always me and my engineers sidequesting to make that data available in a warehouse env.

my prob with graphql is it forces me to get intimately familiar with platforms i want to just plug into the butt of some object storage container so it can auto-ingest into the warehouse and walk away. this is easy to do when the platform, who knows their data and their data structure well, serves up a rest api that covers all your bases. with graphql the onus is on me to figure out what the f all data i might even need, and a lot of platforms have garbage documentation. so much fun, since every service/app designs their db differently. no matey, postman is not the time or place for me to familiarize myself with your data model. i shall do that in the sql gladiator arena once i've ironically overfetched and beaten the shit out of your graphql resolvers and stuck the data back in a database anyway. if i'm developing apps or tools to interface with some platform, graphql is fine, but it ends there. in situations where i need to bring data pipelines online for my org it's just annoying to work with.

syntactically i'm annoyed, my engineers are annoyed. it just amuses me to no end that platforms don't know how big reporting is at orgs. they seem surprised not everyone is developing some front end app for their "modular commerce solution", and sometimes they don't even know how to answer when we ask if there's anything we should consider because we're about to hang out at the ceiling of our allowed rate limits when we bring these data pipelines online. they seem surprised that we're interested in reporting. like, wtf, we pay you a million a year so we can do your whatever-as-a-service thing, of fkn course we'll be reporting on the data there. how else are we gonna smoke that proverbial value add on quarterly calls?

graphql brings a query language over http. it takes a resolver that's well designed, configured and resourced. i'd rather just rawdog a sql query over the net and have postgres or whatever transpose that to json, return it, and let me figure the rest out myself. i've never needed this exactness and freedom out of an api that graphql enjoyers love. i can take whatever you throw at me and polish it into the turd needed for the job, but i generally prefer vendors who have a well-thought-out, comprehensive and reliable set of rest endpoints. in that scenario it's just easier for me to real-time it into a warehouse and immediately push off to a stream or queue that populates a postgres instance if i need to build a high-traffic web app. reporting needs and application needs are met, and i don't need to do bespoke jujutsu sitting in a rest client staring at json requests to determine what data i need before i architect out some one-off gql query. i look at a ton of data; graphql is the most overengineered and unintuitive way to review a lot of it.

its a data retrieval setup that specifically caters to front end dev. i've done plenty of fe and i will design an app with whatever data is needed; when im building the front, my headspace is completely impartial to whether im working with gql, rest or a podunk db. so im here wondering why no one is just saying this: its nice and convenient when you're on the front, but its hardly a requirement to have a gql api. some like to think it solves for an organizational rift between front and backend devs, and that's just kicking the can down the road. im not sold on the empowerment of fe at the expense of teams working well together. yeah, isolate them more, we'll never need to talk to fe again. great strat

since i happen to also work backend and on enterprise data i see a lot of angles that tightly scoped front end graphql enjoyers do not see and will likely never have to deal with ever. but we deal with it all the time, at least it's convenient for one of us. sucks that it isn't me

@grok: summarise this post in two sentences.

GPT: "GraphQL is fine for frontend apps, but it’s a pain for enterprise data pipelines where the real job is bulk ingestion, warehousing, and reporting—work that REST APIs handle far more cleanly without forcing engineers to reverse-engineer undocumented schemas and babysit resolvers and rate limits. Organizations pay SaaS vendors to extract value through reporting, not to do bespoke GraphQL gymnastics, and the industry seems oddly surprised that data teams just want to ingest everything, dump it into a warehouse, and get on with their lives. "

:thumbs_up: :)

How do GraphQL based systems solve the problem of underlying database thrashing, hot shards, ballooning inner joins, and other standard database issues? What prevents a client from writing some adversarial-level cursed query that causes massive internal state buildup?

I’m not a database neckbeard but I’ve always been confused how GraphQL doesn’t require throwing all systems knowledge about databases out the window

Most servers implement a heuristic for "query cost/complexity" with a configurable max. At the time the query is parsed, its cost is determined based on the heuristic and if it is over the max, the query is rejected.
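A sketch of what such a heuristic looks like, assuming a pre-parsed query shape rather than a real GraphQL AST (libraries like graphql-cost-analysis walk the actual AST, but the idea is the same: charge per field, multiply list fields by their page size, reject over a max):

```javascript
const MAX_COST = 100;

// Each node: { name, isList?, first?, selections? } — a stand-in for
// a parsed query, not a real AST.
function cost(field) {
  const childCost = (field.selections ?? []).reduce((sum, f) => sum + cost(f), 0);
  // List fields multiply their children's cost by the requested page
  // size (defaulting to some assumed page size, here 10).
  const multiplier = field.isList ? (field.first ?? 10) : 1;
  return 1 + multiplier * childCost;
}

function checkQuery(root) {
  const c = cost(root);
  if (c > MAX_COST) throw new Error(`Query cost ${c} exceeds max ${MAX_COST}`);
  return c;
}
```

So `user { name }` is cheap, but `friends(first: 50) { posts { title } }` multiplies out quickly and gets rejected before any resolver runs.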

Which would be fine for internal facing, but it doesn’t sound like it would be enough in an adversarial context?

There are a lot of public-facing graphql servers that use it without issue, other than frustrating non-adversarial users with complex requirements. The problem is that it is generally applied on a per-request basis.

An adversary is going to utilize more than a single query. It mostly protects against well intentioned folks.

Other forms of protection such as rate limiting are needed for threat models that involve an adversary.

The same problems exist with REST, but there it is easier because you can know query complexity ahead of time for each endpoint. GraphQL has to account for unknown query complexity, thus the additional heuristics.

I ran a team a few years ago. The FE folks really wanted to use GraphQL, and the BE folks agreed, because someone had found an interesting library that made it easy. No-one had any experience of GraphQL before.

After a month's development I found out that there was one GraphQL call at the root of each React page, and it fetched all the data for that userID in a big JSON blob, that was then parsed into a JS object and used for the rest of the life of that page. Any updates sent the entire, modified, blob back to the server and the BE updated all the tables with the changed data. This didn't cause problems because users didn't share data or depend on shared data.

Everyone was happy because they got to put GraphQL on their resume. The application worked. We hit the required deadline. The company didn't get any traction with the application and we pivoted to something else very quickly, and was sold to private equity within two years. None of the code we wrote is running now, which is probably a good thing.

I get the feeling, from conversations with other people using GraphQL, that this is the sort of thing that actually happens in practice. The author's arguments make sense, as do the folks defending GraphQL. But I'd suggest that 80-90% of the GraphQL actually written and running out there is the kind of crap my team turned out.

What I liked about GraphQL was the fact that I only have to add a field in one place (where it belongs in the schema) and then any client can just query it. No more requests from Frontend developers like „Hey, can you also add that field to this endpoint? Then I don’t have to make multiple requests“. It just cuts that discussion short.

I also really liked that you can create a snapshot of the whole schema for integration test purposes, which makes it very easy to detect breaking changes in the API, e.g. if a nullable field becomes not-nullable.

But I also agree with lots of the points of the article. I guess I am just not super in love with REST. In my experience, REST APIs were often quite messy and inconsistent in comparison to GraphQL. But of course that’s only anecdotal evidence.
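As a rough illustration of that snapshot idea: the real setup would use printSchema from graphql-js plus a diff tool like graphql-inspector, but this naive line-level diff (with a made-up mini-schema) shows the principle of catching a nullable field going non-null:

```javascript
// Stored snapshot, committed to version control.
const snapshot = `
type User {
  name: String
  email: String
}
`.trim();

// Naive field-level diff: a removed field breaks readers, and a field
// that went from nullable to non-null is a breaking change.
function breakingChanges(oldSdl, newSdl) {
  const fields = (sdl) =>
    new Map(
      sdl
        .split("\n")
        .map((l) => l.trim())
        .filter((l) => l.includes(":"))
        .map((l) => l.split(":").map((s) => s.trim()))
    );
  const oldFields = fields(oldSdl);
  const newFields = fields(newSdl);
  const changes = [];
  for (const [name, type] of oldFields) {
    const newType = newFields.get(name);
    if (newType === undefined) changes.push(`removed: ${name}`);
    else if (!type.endsWith("!") && newType.endsWith("!"))
      changes.push(`now non-null: ${name}`);
  }
  return changes;
}
```

An integration test then just asserts `breakingChanges(snapshot, currentSchema)` is empty and fails CI otherwise.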

But the first point is also its demise. I have object A, and want to know something from a related object E. Since I can ask for A-B-C-D-E myself, I just do it, even though performance or spaghettiness takes a hit. That ends up with a frontend that's tightly coupled to the representation at the time, when "in the context of A I also need to know E" could've been a specialized type hiding those details.

> Then ends up with frontend that's tightly coupled to the representation at the time as well, when "in the context of A I also need to know E" could've been a specialized type hiding those details.

GraphQL clients are built to do exactly that, Relay originally and Apollo in the last year, if I’m understanding what you’re saying: any component that touches E doesn’t have to care about how you got to it, fragment masking makes short work

> No more requests from Frontend developers like „Hey, can you also add that field to this endpoint? Then I don’t have to make multiple requests“.

Do people actually work like this in 2025? I mean sure, I guess when you're having entire teams just for frontends and backends then yeah, but your average corporate web app development? It's all full stack these days. It's often expected that you can handle both worlds (client and server), and increasingly it's even a TypeScript "shared universe" where you don't even leave the TS ecosystem (React w/ something like RR plus a TS BFF w/ SQL). This last point, where frontend and backend meet, is clearly the way things are going in general. I mean, these days React doesn't even beat around the bush and literally tells you to install it with a framework; no more create-react-app, server-side rendering is a staple now, and server components are going to be a core concept of React within a few years tops.

Javascript has conquered the client side of the internet, but not the server side. Typescript is going to unify the two.

> It's all full stack these days. It's often expected that you can handle both worlds (client and server)

Full stack is common for simple web apps, where the backend is almost a thin layer over the database.

But a lot of the products I’ve worked with have had backends that are far more complex than something you could expect the front end devs to just jump into and modify.

I would agree that REST beats GraphQL in most cases regarding complexity, development time, security, and maintainability if the backend and frontend are developed within the same organization.

However, I think GraphQL really shines when the backend and frontend are developed by different organizations.

I can only speak from my experience with Shopify's GraphQL APIs. From a client-side development perspective, being able to navigate and use the extensive and (admittedly sometimes over-)complex Shopify APIs through GraphQL schemas and having everything correctly typed on the client side is a godsend.

Just imagining offering the same amount of functionality for a multitude of clients through a REST API seems painful.

> GraphQL isn’t bad. It’s just niche. And you probably don’t need it.

> Especially if your architecture already solved the problem it was designed for.

What I need is to not want to fall over dead. REST makes me want to fall over dead.

> error handling is harder than it needs to be. GraphQL error responses are… weird.

> Simple errors are easier to reason about than elegant ones.

Is this a common sentiment? Looking at a garbled mash of linux or whatever tells me a lot more than "500 sorry"

I'm only trying out GraphQL for the first time right now cause I'm new with frontend stuff, but from life on the backend having a whole class of problems, where you can have the server and client agree on what to ask for and what you'll get, be compiled away is so nice. I don't actually know if there's something better than GraphQL for that, but I wish when people wrote blogs like this they'd fill them with more "try these things instead for that problem" than simply "this thing isn't as good as you think it is you probably don't need it".

If isomorphic TS is your cup of tea, tRPC is a nicer version of client server contracting than graphql in my opinion. Both serve that problem quite well though.

I do like the look of this! It seems like it nicely provides that without like kicking you into React, which I have ended up having to draw a hard line against in development after my first couple experiences not only with it, but how the distributions in AI models make it a real trap to touch. I'll swap this in in one of my projects and give it a go. Thanks!

No problem! I hope you have a good time with it!

GraphQL appeals to the enterprise mind in a way that few technologies have. Like SOAP/WSDL before it. It fits the model of spotlighting some small and medium problems, and offers a solution that adds complexity and makes everything take longer to build, and if you follow the implementation guidelines closely enough, they say you can solve the problems. Meanwhile, your competitor just has 300 API endpoints and runs circles around you, and you eventually acquire them to get all of your customers back.

If all your experience comes from Apollo Client and Apollo Server, as the author's does, then your opinion is more about Apollo than it is about GraphQL.

You should be using Relay[0] or Isograph[1] on the frontend, and Pothos[2] on the backend (if using Node), to truly experience the benefits of GraphQL.

[0]: https://relay.dev/

[1]: https://isograph.dev/

[2]: https://pothos-graphql.dev/

Incidentally, v0.5.0 of Isograph just came out! https://isograph.dev/blog/2025/12/14/isograph-0.5.0/ There are lots of DevEx wins in this release, such as the ability to have an autofix create fields for you. (In Isograph, these would be client fields.)

There are also GraphQL interfaces for various databases which can be useful, especially with federation to tie them together into a supergraph.

GraphQL Yoga is also excellent (and you get the whole Guild ecosystem of plugins etc), if you want to go schema-first

The problem with this article is that GraphQL has become much more of an enterprise solution over the last few years than a non-enterprise one. Even though the general public opinion on X and HN seems to be that GraphQL has negative ROI, it's actually growing strongly in the enterprise API management segment.

GraphQL, in combination with Federation, has become the new standard for orchestrating microservices APIs, and the development of AI and LLMs gives it yet another push, as MCP is just another BFF and that's the sweet spot of GraphQL.

Side note, I'm not even defending GraphQL here, it's just about facts if we're looking at who's using and adopting GraphQL. If you look around, from Meta to Airbnb, Uber, Reddit or Booking.com, Atlassian or Monday, GitHub or Gitlab, all these services use GraphQL successfully and these days, banks are adopting it to modernize API access to their Mainframe, SOAP and proprietary RPC APIs.

How do I know, you might say? I'm working with WunderGraph (https://wundergraph.com/), one of the most innovative vendors in the market, and we're talking to enterprises every day. We just came home from API Days Paris, and besides AI and LLMs, everyone in the enterprise is talking about API design, governance and collaboration, which is where GraphQL Federation is very strong and the ecosystem is very mature.

Posts like this are super harmful for the API ecosystem because they come from inexperience and lack of knowledge.

GraphQL can solve over fetching but that's not the reason why enterprises adopt it. GraphQL Federation solves a people problem, not a technical one. It helps orgs scale and govern APIs across a large number of teams and services.

Just recently there was a post here on HN about the problems with dependencies between Microservices, a problem that GraphQL Federation solves very elegantly with the @requires directive.
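For readers who haven't seen it, a hedged sketch of what that looks like in Federation SDL — the Product/shipping split here is a made-up example, not from any real schema:

```graphql
# A shipping subgraph extends Product, whose fields are owned elsewhere.
# @requires tells the router: before resolving deliveryEstimate here,
# fetch weight from the owning subgraph and pass it in -- so the two
# services never have to call each other directly.
type Product @key(fields: "id") {
  id: ID!
  weight: Float! @external
  deliveryEstimate: String! @requires(fields: "weight")
}
```

The cross-service dependency is declared in the schema and resolved by the router, rather than hard-coded as a service-to-service call.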

One thing I've learned over the years is that people who complain about GraphQL are typically not working in the enterprise, and those who use the query language successfully don't usually post on social media about it. It's a tool in the API tool belt besides others like Open API and Kafka. Just go to an API conference and ask what people use.

Production-Ready GraphQL is a pretty good read for anyone who needs to familiarize themselves with enterprise issues associated with GraphQL.

My favorite saying on this subject is that any sufficiently expressive REST API takes on GraphQL-like properties. In other words, if you're planning on a complex API, GraphQL and its related libraries often comes with batteries-included conventions for things you're going to need anyway.

I also like that GraphQL's schema-driven approach allows you to make useful declarations that can also be utilized in non-HTTP use cases (such as pub/sub) and keep much of the benefits of predictability.

IMO the main GraphQL solutions out there should have richer integrations into OpenTelemetry so that many of the issues the author raises aren't as egregious.

Many of the struggles people encounter with the GraphQL and React stack is that it's simply very heavyweight for many commodity solutions. Much as folks are encouraging just going the monorepo route these days, make sure that your solution can't be accommodated by server-side rendering, a simple REST API, and a little bit of vanilla JS. It might get you further than you think!

Yup, honeymoon is over. Now is the time for the adult, long-term, and productive relationship.

Exactly! Once it's working, it can be very healthy. And especially on the client. For a very, very, very long time. We started using GraphQL at the very beginning, back in 2015, and the way it has scaled over time -- across backend and frontend -- has worked amazingly well. Going on 10 years now and no slowing down.

We haven't been using it as long but it's definitely saved us from things that were "impossible" to associate in our microservice backend.

This doesn’t really make sense. Obviously if you combine GQL with BFF/REST you’re gonna have annoying double work; you’re solving the same problem twice. GQL lets you structure your backend into semantic objects and then have the frontend do whatever it wants without extra backend changes. Which lets frontend devs move way faster.

This is the real big benefit. The others talking about overfetching aren't wrong, but they're focusing on a technical merit over the operational ones.

My frontend developers had their minds blown when they realized that because we’re using Hasura internally, the only backend work generally needed is to design the db schema and permissioning. Once that’s done, frontend developers are never blocked by anything (which is not a freedom I would want to give to untrusted developers, hence the emphasis on internal usage of GQL).

(Unfortunately Hasura has shifted entirely into this VC-induced DDN thing that seems to be a hard break from the original product, so I can’t recommend it anymore… PostGraphile is probably the way.)

There is a pattern where GraphQL really shines: using a GraphQL-native DB like Dgraph (self-hosting) and integrating other services via GraphQL Federation in a GraphQL BFF.

Sounds like a great way to completely lock yourself into an ecosystem you'll never be able to leave!

On the contrary, you could swap the database rather easily compared to traditional REST+SQL backends.

Migrate data to another GraphQL DB and join its GraphQL schema to the supergraph. The only pain point could be DB-specific decorators, but even those could be implemented at the supergraph level (in the Federation server) if needed.
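Roughly, the setup being described looks like this (subgraph and type names are hypothetical). The Federation gateway composes the subgraph schemas into one supergraph, so swapping the database behind a subgraph leaves the client-facing schema untouched:

```graphql
# users subgraph (backed by the GraphQL-native DB today)
type User @key(fields: "id") {
  id: ID!
  name: String
}

# orders subgraph (could later be a hand-written resolver service instead)
type Order @key(fields: "id") {
  id: ID!
  buyer: User
}
```

The `@key` directive is what lets the gateway stitch entities across subgraphs; clients only ever see the composed supergraph.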

Even migrating to a non-GraphQL DB is feasible: you could just write your own resolvers in a separate GraphQL server and join that to the supergraph. But that would be more of an ecosystem lock already :)

Really, any manner of SQL database is more of an ecosystem lock than a GraphQL database behind Federation.

GraphQL is one of those solutions in need of a problem for most people. People want to use it. But they have no need for it. The number of companies who need it could probably be counted on both hands. But people try to shoehorn it into everything.

Funny that the top three threads are about how the author misses the real benefit of GraphQL, and then they proceed to assert three different benefits. Perhaps that variety of applications is itself worth considering :-)

No, that time it went wrong because it wasn't _true_ communism. True communism hasn't been tried yet.