Retrieving sprawling data graphs is one of the critical challenges in building a robust web application. There’s a careful balance to strike among the number of requests, response times, and payload sizes. Should fetching a blog post and its 10 comments from an API take:

  • 11 requests (1 for the post and 1 for each comment),
  • 2 requests (1 for the post and 1 for all the comments),
  • or 1 request (1 payload encapsulating these 11 resources)?

Ideally, it should only take a single request. The overhead of initiating an HTTP request is likely the most expensive part of our retrieval, so splitting these requests apart would make each individual request smaller but would increase the total amount of time it takes to render our page.

This single request is fine while our app serves just a single client. That single client always displays 10 comments, and that level of implicit coupling is just fine for our nascent app. As the product grows, we might eventually decide to build an iOS client. Because of the limited screen real estate, we’ll only need to display the first 5 comments. Now our previous payload is sending extra data down the wire.

In this simple example, it’s probably best to just schlep the extra comments down to each client. But as our object graph grows, we’ll find ourselves sending more and more extraneous data with each request. Because we need to serve many clients, these payloads grow. At a certain point, the data a client actually uses no longer makes up the majority of the bytes; any given client might read only 20% of a response body.

What began as a simple payload is now chewing up a lot of time in our database and spending more time on the wire because the payload is large. Our original solution is no longer tenable.

Enter GraphQL

When dealing with N clients, we need to flip the request structure on its head: We need to let the clients tell us what fragments of data they need to operate.

GraphQL solves this by explicitly telling the server, “Hey, here’s the data I need. Can you package this up and send it my way?” This allows the server to only send the exact resources and fields required for the client to behave properly. Relay makes it easy to progressively build up GraphQL queries within a React application.
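For example, assuming a hypothetical schema with `post` and `comments` fields (the names here are illustrative, not from a real API), the web client could ask for exactly the post and its 10 comments in a single query:

```graphql
{
  post(id: "1") {
    title
    body
    comments(first: 10) {
      author
      text
    }
  }
}
```

The iOS client would send the same query with `comments(first: 5)` and fewer fields, and each client receives only what it asked for.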

We can think of each byte in the payload as having “value”: If a byte exists in the response payload, it’s because the client asked for it. There are no stowaway bytes.

The Relay framework makes building these graphs easy. Each traditional React component is wrapped by a container, which knows which attributes the contained component needs to render. As we nest these containers, we build object graphs. Once we reach a root, the object graph or GraphQL query is ready to be sent to the server.

This declarative nature of data retrieval is exciting, but there’s still room for improvement.

Too Much of a Good Thing

The biggest issue I see with GraphQL is that its default implementations disregard HTTP semantics.

Specifically, implementations default to the POST HTTP method, sending GraphQL queries within the request body. On its own this isn’t a big issue, until you try to implement request-level caching. In our blog example, every request for an article would have to go back to origin to retrieve the post and its comments. That’s not ideal for many types of content, and especially for our blog: because the post and its comments change infrequently, we’ll want to cache them if our site ever becomes popular.

Because the POST method is used, putting a CDN in front of static or dynamic content is no longer a simple drop-in. For a CDN to cache a POST request, it must introspect the request body to build the cache key rather than just examining the URL and headers. Fastly is the only CDN I’m aware of that allows you to use the POST body as a cache key, and even then it’s limited to 2 KB. Other CDNs, like CloudFront, don’t allow caching of POST requests at all.

In the world of GraphQL, HTTP is an unfortunate transport mechanism rather than something to be embraced. Rather than using the standard semantics, we ignore them completely by treating HTTP like a dumb pipe.

For companies with enough resources, treating HTTP like a dumb pipe is feasible. You can develop GraphQL-aware caching layers and deploy that logic to edge nodes around the globe. But most of us are not Facebook, so we need to stick with the standards we’ve got. This means using standards like HTTP to the fullest, so that we can be sure that our applications play nicely with vendors.
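As a sketch of what using HTTP to the fullest could look like for GraphQL, the same query can travel in a GET request instead, so a CDN can key the cache off the URL like any other request (the hostname is made up, and in practice the query string would be percent-encoded):

```http
GET /graphql?query={post(id:"1"){title,comments{text}}} HTTP/1.1
Host: api.example.com
Accept: application/json
```

This is only a partial fix, though, as the next section shows: the cache is only as reusable as the queries themselves.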

Specificity of Cache Keys

Let’s say we did go through the motions to write custom edge logic to cache GraphQL on our CDN. After a week in production, we look at our cache hit ratio. We notice a large number of misses. Why?

GraphQL’s strength could also be its curse: if the data we request is too specific, it will never be cacheable. Although 2 clients might be querying the same resource, they might be asking for different fields. If we cache those responses, they will need to occupy 2 different cache keys; otherwise, we’d serve incorrect data to one of the clients.

Examine these 2 slightly different queries. They would have totally separate cache keys, even though they represent nearly identical information:
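As a sketch, assuming the same hypothetical blog schema, the pair might look like this:

```graphql
# Query 1: the web client asks for comment authors
{
  post(id: "1") {
    title
    comments {
      author
      text
    }
  }
}

# Query 2: the iOS client omits a single field
{
  post(id: "1") {
    title
    comments {
      text
    }
  }
}
```

One field of difference, two full cache entries.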

A concerning part about these 2 queries is how they would get built: the frontend engineer likely has no idea that a slight change to a view doubles the cache space.

For a typical web application, it’s likely okay to “burn a few” bytes in the interest of cacheability. Perhaps we send an extra few hundred bytes to each client, but we’re able to now service both clients with the same cached values. The clients then just pull out only the fields they need. If deploying GraphQL in production, it might be a good idea to “round up” to the resource level.

What’s a Better Alternative?

GraphQL and Relay pioneer some very interesting principles, and there are many details that would be worth copying into a better solution. Let’s go over a few of the powerful takeaways from GraphQL:

  • Declarative interfaces for describing data are powerful and easy to reason about. It’s easy to see what data a component needs to render itself.
  • We should be mindful of the total number of bytes we send across the wire, being careful not to send data the client does not need.
  • Allowing the client to request the information it needs means we don’t need to build new endpoints for each client (web, iOS, etc.). This frees up time for engineers to focus on more business-specific problems.

That being said, there are some improvements to be made:

  • GraphQL does not play nicely with the rest of the web, because it treats HTTP as a dumb pipe.
  • GraphQL—without a custom implementation—will make any caching layer too specific, and thus mostly useless.

A better alternative exists today, but only in pieces. Rather than use the default GraphQL network layer in Relay, we should swap it out for something more standards-compliant. It would be even better if the containers did not need to specify the queries, but were able to statically extract them from the components themselves.

For the HTTP part, JSON API is a better fit for clients looking to specify the information they need. JSON API treats HTTP as a first-class citizen, so you can also be sure that your API serves cacheable responses.

Here’s how a JSON API query looks for a single blog post including the attached comments:
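A sketch, assuming a `/posts` endpoint that exposes a `comments` relationship (the hostname and route are illustrative):

```http
GET /posts/1?include=comments HTTP/1.1
Host: api.example.com
Accept: application/vnd.api+json
```

Because this is an ordinary GET with the whole query in the URL, any CDN can cache it using the URL alone. Sparse fieldsets (e.g. `fields[comments]=text`) are available in the spec when you do want to trim the payload.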

The ideal solution keeps the declarative aspects pioneered by Relay, but allows for arbitrary translations into any “query language” (e.g. GraphQL or JSON API). Although the JSON API spec supports sparse fieldsets, most applications likely want to “round up” to the resource level for cacheability purposes.

Wrapping Up

Although GraphQL and Relay pioneer some powerful concepts for building web applications, it’s important to decide which traits are worthy of continuation. New libraries and technologies allow us to push the boundaries of how we write applications, but not all ideas will survive.

Using GraphQL in production with a caching layer? I’d love to hear about your experience. Drop me a note on Twitter.

Special thanks to Paul Straw and Justin Duke for reading an early draft of this blog post.