NerdWallet recently prioritized implementing GraphQL as a major engineering initiative in an effort to standardize our APIs, increase our development efficiency, and reduce code duplication. At NerdWallet, engineers developing mobile and web applications leverage numerous shared services, from a global authentication service to more product-specific services, such as one providing personalized mortgage rates. These services are maintained by dedicated teams across engineering. Historically, to create a consistent user experience, we have relied on one-off SDKs tied to specific data sources that power features across platforms. GraphQL centralizes these service integrations, and Apollo provides a comprehensive ecosystem that supports our initiatives.
A primary goal of GraphQL, and Apollo in particular, is to reduce the need for state management systems. This is not to say the two cannot work in unison, but as we will discuss in part two, Apollo Client provides resources that can adequately replace a global store for managing server data. At NerdWallet, our React applications leverage Redux extensively, with many selectors, reducers, and actions defined across numerous shared libraries, each updated and maintained by its product team. As a result, cross-product integrations can become overly complicated. For example, when the shape and location of bank data differs dramatically from credit card data, providing a “universal” product review is challenging, and retrieving and mutating the data client side is expensive. GraphQL helps break up these silos and enables an improved cross-functional development experience.
About three months ago, we identified our online shopping and rewards platform codebase as the first candidate for implementing end-to-end GraphQL integrations. This work also entailed phasing out Redux, and replacing the existing API integrations with GraphQL. Let us dive into our learnings from this experience in the hopes that this improves understanding of the Apollo platform. We’ll also examine some common patterns that we’ve found useful and implementation details to be mindful of.
Our work began in Apollo Server, which serves as the layer between backend services and front-end applications. It enforces a shared “language” or protocol for requesting and shaping data and provides a variety of features such as caching, testing, authentication, and more. NerdWallet’s front-end Rewards experience leverages a single backend service written in Python. This product was an ideal candidate for validating the technology because the codebase was relatively young. To begin understanding the mechanics of Apollo Server, let’s start by creating a data source.
The first step in implementing Apollo Server is defining data sources, which “are classes that encapsulate fetching data from a particular service, with built-in support for caching, deduplication, and error handling. You write the code that is specific to interacting with your backend, and Apollo Server takes care of the rest” (per their docs).
This integration’s data source looks like this:
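The original listing isn't reproduced here, but a sketch of the data source might look like the following. The production class presumably extends Apollo's `RESTDataSource`; to keep this example self-contained, it is written as a plain class with an injected HTTP transport, and the base URL and endpoint paths are hypothetical.

```javascript
// Hypothetical sketch of the rewards data source. The real class likely
// extends Apollo's RESTDataSource; injecting httpGet/httpPost here keeps
// the sketch self-contained. Paths and the base URL are assumptions.
class RewardsAPI {
  constructor({ httpGet, httpPost, baseURL = 'https://rewards.example.com' }) {
    this.httpGet = httpGet;
    this.httpPost = httpPost;
    this.baseURL = baseURL;
  }

  // Mirrors the backend's offer-search endpoint: one method, one endpoint,
  // no extra operations.
  async getOffers(queryObj = {}) {
    return this.httpGet(`${this.baseURL}/offers`, queryObj);
  }

  // Mirrors the backend's offer-activation endpoint.
  async activateOffer(offerId) {
    return this.httpPost(`${this.baseURL}/offers/${offerId}/activate`);
  }
}

// Example: wiring the class to a stubbed transport for local experimentation.
const api = new RewardsAPI({
  httpGet: async (url, params) => ({ url, params, data: { results: [] } }),
  httpPost: async (url) => ({ url, status: 'activated' }),
});
```

Because each method maps to exactly one endpoint, any number of resolvers can compose them without duplicating request logic.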
The rewardsAPI class encapsulates all of the requests to our Python service endpoints. We provide simply named methods that correspond directly to backend endpoints (more on this soon), making them easy for any number of queries to leverage. Each of these atomic methods is responsible for handling requests to a single endpoint.
Correspondingly, the existing backend Python service exposes these two endpoints:
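The exact routes aren't shown here; hypothetically, the two endpoints might look like:

```
GET  /offers                      # search and list reward offers (hypothetical path)
POST /offers/{offer_id}/activate  # activate a single offer (hypothetical path)
```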
During the course of our work with GraphQL, we decided that data source methods should strictly mirror the backend with no additional operations. This maintains the purity of endpoints but allows for flexibility as query and mutation resolvers can utilize as many endpoints as necessary, depending on the complexity of the requested data. The big win here is that a query resolver can encapsulate requests to multiple services allowing the client to fetch all the data it needs without having to make requests to each service individually and aggregate the results client side. Let’s take a look at a sample query resolver to better understand this concept.
Query resolvers fetch data by encapsulating API requests via the aforementioned data source methods, then shape the response data as defined by the schema resolvers. A top-level schema is often composed of nested lower-level schema resolvers because real-world data is deeply nested.
This is the resolver that leverages the getOffers data source method defined above:
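The original resolver isn't reproduced here, but a minimal sketch might look like the following; the argument and field names are assumptions based on the surrounding text, not the production code.

```javascript
// Hypothetical sketch of the offers query resolver.
const resolvers = {
  Query: {
    // queryObj is forwarded untouched to the data source, which mirrors
    // the backend endpoint one-to-one.
    offers: async (_parent, { queryObj }, { dataSources }) => {
      const response = await dataSources.rewardsAPI.getOffers(queryObj);
      // Return the raw payload; the OffersSearchResult schema resolvers
      // take care of shaping it for the client.
      return response.data;
    },
  },
};
```

Note that the resolver itself stays thin: it delegates the request to the data source and the shaping to the schema resolvers.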
The offers resolver passes the queryObj argument directly to the data source method, which makes the request. We then parse the response and retrieve its data property, which is shaped into our OffersSearchResult schema. The schema resolvers are where the magic happens. Let’s take a look at the schema for this query.
The query result schema is defined as follows, where OffersSearchResult is a schema type:
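A sketch of that SDL might look like the following; the field names are inferred from the response fields discussed in the text, and the input type is a placeholder.

```graphql
# Hypothetical SDL; OffersQueryInput is a stand-in for the real argument type.
type Query {
  offers(queryObj: OffersQueryInput): OffersSearchResult
}

type OffersSearchResult {
  countTotalMatched: Int
  results: [Offer]
}
```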
Our resolvers object has a schema resolver named OffersSearchResult.
This resolver parses the response body for the count_total_matched and results fields, which are then mapped to other schema objects as shown below. Note: it may be prudent to ignore response properties that aren’t relevant to your use cases.
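A minimal sketch of that schema resolver might look like the following; the camelCase field names are assumptions, while the snake_case source fields come from the text above.

```javascript
// Hypothetical sketch of the OffersSearchResult schema resolver. It maps
// the snake_case fields from the Python service onto the schema's fields.
const resolvers = {
  OffersSearchResult: {
    countTotalMatched: (parent) => parent.count_total_matched,
    // Each element of results is handed off to the Offer schema resolvers.
    results: (parent) => parent.results,
    // Response properties that are never mapped here simply never reach
    // the client.
  },
};
```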
The OffersSearchResult schema shapes our search result objects into response objects that the client is expecting. This top-level schema is composed of additional schema types with their own resolvers. The complexity of a top-level query response schema increases with the depth of data. Eventually, the entire response schema is represented by properties that map to primitive scalars of type Int, Float, String, Boolean, and ID, which is used as the cache key.
In the example below, the results property is defined as an array of a lower-level schema, Offer. The Offer schema is in turn composed of additional schema types; for example, actions is defined as a RewardsAction schema, and so on. If you have services that return similarly structured data, your schemas are reusable, and queries tailored for special use cases can take advantage of these modular schema definitions. For example, if we wanted to promote the top three pizza and pasta offers as a special callout, we could write a query that reuses our existing pieces: the query resolver filters all results for these special offers without any client-side logic, and the data is shaped according to the existing schema. Additionally, since we defined generic data source methods, we can reuse the existing getOffers method to accomplish what we need!
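Sketched as SDL, those lower-level types might look like the following; the field names are the ones mentioned in this article, and everything else is an assumption.

```graphql
# Hypothetical SDL for the lower-level schema types.
type Offer {
  id: ID
  networkRank: Int
  language: String
  actions: [RewardsAction]
  brandAssets: [BrandAsset]
}

type RewardsAction {
  amount: Float
  currency: String
}

type BrandAsset {
  mimeType: String
  file: String
}
```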
This object represents the offers query with optional query arguments and the requested response fields. In this example, the nested properties accurately mirror the structure of the response data provided by the underlying data source request. For example, the actions field is a list of objects with amount, currency, and other properties as defined in RewardsAction.
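A client query of that shape might look like the following sketch; the operation and argument names are illustrative, not taken from the production code.

```graphql
# Hypothetical query document requesting only the fields this feature needs.
query GetOffers($queryObj: OffersQueryInput) {
  offers(queryObj: $queryObj) {
    countTotalMatched
    results {
      id
      actions {
        amount
        currency
      }
    }
  }
}
```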
The beauty of GraphQL lies in the ability of clients to specify exactly the fields they need; nothing more, nothing less. Developers can proactively reduce their application’s CPU and memory footprint and optimize for slower internet speeds. Many of the fields shown above may not be relevant to a given feature, so the client can request only the bare essentials. Fields like networkRank, language, brandAssets.mimeType, and brandAssets.file are unused in the production application, yet they are still supported by our API and within GraphQL, so if other applications eventually need this data it’s easily retrievable.
Finally, it’s important to discuss mutation resolvers since GraphQL does support standard CRUD operations.
Adding a mutation resolver is similar to adding a query resolver; the structure is nearly identical. In this case, defining the shape of offerId is important.
The activateOffer mutation requires (as indicated by the !) an offer ID that is an Int, and we expect a String as the response. Keep in mind that an update or create mutation may want to return the mutated object instead; similar to a query return type, this can be a custom schema.
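A sketch of that mutation, with the SDL inlined as a plain string, might look like the following. In a real server the SDL would likely live in a gql template literal, and the resolver body is an assumption built on the data source method described earlier.

```javascript
// Hypothetical sketch of the activateOffer mutation and its SDL.
const typeDefs = `
  type Mutation {
    activateOffer(offerId: Int!): String
  }
`;

const resolvers = {
  Mutation: {
    activateOffer: async (_parent, { offerId }, { dataSources }) => {
      const response = await dataSources.rewardsAPI.activateOffer(offerId);
      // A String status is returned here; an update or create mutation
      // might return a full object typed with its own schema instead.
      return response.status;
    },
  },
};
```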
Following these techniques for developing on Apollo Server will ramp up your capabilities when it comes to working with Apollo Client. Separating your data source classes, clearly defining your queries and mutations, and writing well-documented schemas will produce a self-documenting, extensible, cross-functional API that can be leveraged by teams throughout your engineering organization. In part two, I go over the details of working with Apollo Client and how the orchestration between these two pieces reduces your dependency on state management systems and yields well-architected client applications that are maintainable and modular.