This post describes the evolution of piggy, an extensible React Native developer tooling solution built on Electron, used by NerdWallet’s mobile development team to fill gaps in the React Native ecosystem.
piggy started life in early 2018 as an internal app, but was recently released as open source under the MIT license. It ships with a few general-purpose tools to aid mobile development, and is designed to let users easily add their own tools if desired.
If you’re interested in using piggy with your own projects, are curious about the implementation details, or would just like to browse the code, please check out the project page on Github.
I’d like to state upfront that the purpose of this document is not to convince you to integrate with piggy, although you’re more than welcome to do so if you find it useful! Rather, its purpose is to convince you to invest in tooling, building your own if necessary, using piggy as a case study to demonstrate real-world examples.
The remainder of this article describes the original concept and subsequent iterations of piggy, and concludes with important learnings and key takeaways gleaned over four years of steady, iterative development.
Note: there are many screenshots in this post, but they are not in strict chronological order. Many original screenshots did not survive, so I’ve tried to cobble the remaining ones together as accurately as possible.
The original concept
piggy started life in 2018, and was born out of necessity while investigating app performance issues. At the time, neither the performance measurement tools in Chrome DevTools nor the React DevTools worked with React Native apps, and we needed some way to gather and display information about the running app in real time.
The idea was simple: stream data from the mobile app to a new, bespoke desktop app (piggy) via WebSocket, and use some off-the-shelf components to help us visualize where the app was spending time.
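The app-side half of that idea can be sketched in a few lines. This is illustrative, not piggy’s actual protocol: the `EventSink` interface, the message shape, and the `timeline/event` type tag are all assumptions, with the sink standing in for a WebSocket connection from the React Native app to the desktop listener.

```typescript
// Minimal sketch of the app-side event stream. Anything with a
// `send` method works; in practice this would be a WebSocket.
interface EventSink {
  send(data: string): void;
}

interface TimelineEvent {
  name: string;    // e.g. "redux/FETCH_USER" or "http GET /api/user"
  startMs: number; // epoch millis when the span began
  endMs: number;   // epoch millis when the span ended
}

// Encode one event as a JSON message with a type tag so the
// desktop side can route it to the right visualization tool.
function encodeTimelineEvent(event: TimelineEvent): string {
  return JSON.stringify({ type: "timeline/event", payload: event });
}

class EventStream {
  constructor(private sink: EventSink) {}

  emit(name: string, startMs: number, endMs: number): void {
    this.sink.send(encodeTimelineEvent({ name, startMs, endMs }));
  }
}
```

On the desktop side, the Electron process would run a local WebSocket server, decode each message, and fan it out to whichever tool is interested in that event type.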
After a few hours of hacking, we had a little Electron app that looked like this:
It wasn’t pretty, but it helped us root-cause our performance issue.
Early additions and iterations
Dark theme, updated timeline
Chris, our then-manager, had also just recently joined the mobile team at NerdWallet, and was using the prototype app to gain a better understanding of how the app was functioning at runtime. He didn’t like the light theme or the off-the-shelf graph component I used in the prototype, so he spent a bit of his spare time adding a dark theme and rewriting the chart widget.
I took his work, updated the color scheme, added some drop-shadows and fixed some margins and paddings, and we suddenly had something that resembled an actual app:
The event log
At the time, our app made heavy use of Redux, which I had never used before. I decided to create a new tool to help me further visualize the relationship between Redux actions and API calls. This data was already available in the timeline tool described above, but it lacked important context (e.g. the types of actions dispatched and their payloads), and could be difficult to parse visually if many actions and API calls fired in quick succession.
I created an alternative, list-based timeline view that looked like this:
This is what it looks like today, after many small iterations:
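The capture side of a tool like this can be a plain Redux middleware that forwards every dispatched action, with its type and payload, to the desktop app. A sketch under assumptions: `forward` stands in for whatever transport is used, and the message shape is illustrative.

```typescript
// A Redux middleware is just a curried function, so no redux import
// is needed to define one. `forward` is a hypothetical transport hook.
type Action = { type: string; payload?: unknown };
type Middleware = (store: { getState(): unknown }) =>
  (next: (action: Action) => unknown) =>
  (action: Action) => unknown;

function createEventLogMiddleware(forward: (msg: string) => void): Middleware {
  return () => (next) => (action) => {
    // Record the action before it reaches the reducers, with a
    // timestamp so the desktop app can order it against API calls.
    forward(JSON.stringify({
      type: "eventLog/action",
      dispatchedAt: Date.now(),
      action: { type: action.type, payload: action.payload },
    }));
    return next(action);
  };
}
```

Because the middleware sees every action alongside its timestamp, the desktop app can interleave dispatches with network events in a single chronological list, which is exactly the context the graphical timeline lacked.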
The state machine viewer and Redux state explorer
At this point, some engineers on the mobile development team were using piggy daily to help debug their work. Dan, a senior engineer on the mobile team, was working on re-architecting key subsystems in the app around a finite state machine abstraction, and added a tool for visualizing state changes.
While some state machines in our app are simple, others are quite complicated. Dan’s tool records a history of all state transitions, keeps track of the current state, and is able to draw the state machine as a directed graph to aid debugging.
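A recorder like that can be sketched as a small structure that appends each transition and derives the directed-graph edges from the history. The shape here is an assumption for illustration, not the actual tool’s data model.

```typescript
interface Transition {
  from: string;
  to: string;
  event: string; // the event that triggered the transition
}

class TransitionRecorder {
  private history: Transition[] = [];

  constructor(private current: string) {}

  record(event: string, to: string): void {
    this.history.push({ from: this.current, to, event });
    this.current = to;
  }

  get state(): string {
    return this.current;
  }

  // Unique edges, ready to hand to a directed-graph renderer.
  edges(): Array<[string, string]> {
    const seen = new Set<string>();
    const result: Array<[string, string]> = [];
    for (const t of this.history) {
      const key = `${t.from}->${t.to}`;
      if (!seen.has(key)) {
        seen.add(key);
        result.push([t.from, t.to]);
      }
    }
    return result;
  }
}
```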
Around the same time, most engineers on the team were using Reactotron to monitor Redux state changes. It was sort of annoying running both piggy and Reactotron at the same time, so Dan re-implemented the features we used most often as a piggy tool, bringing real-time Redux state monitoring to the app:
By now piggy was used daily by most engineers working on the mobile app. It was generally set up to just run in the background, and would be called up quickly to root-cause certain classes of issues during development.
Physical device support
A long-standing pain point was connecting physical devices to the React Native bundler, which serves the app’s JavaScript from the development machine. On Android this could be worked around by using adb reverse to proxy network ports over USB with some command line magic, but it simply wasn't possible on iOS. That meant that in order to test iOS changes on a physical device, engineers needed to push a branch to source control, then wait for CI to churn out a build, which could easily take 20+ minutes. This was incredibly frustrating and time consuming when trying to fix bugs that only manifested on real hardware.
At some point while browsing the React Native source, I somehow stumbled upon this commit, which was a short-lived attempt to bring adb reverse-like support to iOS devices. The commit message said it was "flaky", and it was eventually removed from the code base, but I decided to try it anyway. I created a little bare-bones command line app and integrated the library, and to my surprise it worked fine! There didn't seem to be anything "flaky" about it.
After getting it working, I added a piggy tool that monitors the system for connected Android and iOS devices and automatically uses adb reverse and FBPortForwarding, respectively, to connect them to the bundler. Now engineers could test against actual hardware just as easily as emulators/simulators – all they had to do was keep piggy running in the background.
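On the Android side, the mechanics boil down to shelling out to adb. A minimal sketch under assumptions – the device-monitoring loop and the iOS/FBPortForwarding half are omitted, and 8081 is Metro’s default bundler port:

```typescript
import { execFile } from "node:child_process";

// Build the argument list for `adb reverse`, which makes a TCP port
// on the development machine reachable from the device over USB.
function adbReverseArgs(serial: string, port: number): string[] {
  return ["-s", serial, "reverse", `tcp:${port}`, `tcp:${port}`];
}

// Forward the React Native bundler port for one connected device.
function forwardBundlerPort(serial: string, port = 8081): Promise<void> {
  return new Promise((resolve, reject) => {
    execFile("adb", adbReverseArgs(serial, port), (err) =>
      err ? reject(err) : resolve()
    );
  });
}
```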
Import/export: putting it all together
By now, certain members of our QA staff would run piggy during manual regression passes, prior to release. If they experienced bugs, they could copy/paste data or take a screenshot of the app to help developers diagnose issues. This was definitely not efficient, so we added a couple of hooks that allow individual tools to import/export data where applicable, then implemented those hooks for the relevant tools.
Now QA could run piggy, reproduce bugs, then export the session. Engineers could then import the session on their end to see what went wrong.
This ended up being very useful, as every exported session contained:
A high-level timeline of everything that happened in the app, presented in a Gantt-chart-like view.
The current state and transition history of all major state machines operating within the app.
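The hook mechanism itself can be sketched as an optional interface each tool may implement, plus a session exporter that collects whatever each tool provides. The names here are illustrative, not piggy’s actual API.

```typescript
// Each tool may opt into sessions by implementing these optional
// hooks. Hypothetical interface, for illustration only.
interface Tool {
  id: string;
  exportSession?(): unknown;
  importSession?(data: unknown): void;
}

// Gather a single session object from every tool that supports export.
function exportSession(tools: Tool[]): Record<string, unknown> {
  const session: Record<string, unknown> = {};
  for (const tool of tools) {
    if (tool.exportSession) {
      session[tool.id] = tool.exportSession();
    }
  }
  return session;
}

// Hand each tool back its slice of an imported session.
function importSession(tools: Tool[], session: Record<string, unknown>): void {
  for (const tool of tools) {
    if (tool.importSession && tool.id in session) {
      tool.importSession(session[tool.id]);
    }
  }
}
```

Making the hooks optional keeps each tool independent: tools with nothing worth persisting simply don’t implement them, and the exporter skips them.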
Shortly after, piggy was also integrated into our QA automation; if a bug is discovered during automated testing, automation will export a piggy log to be attached to relevant tickets.
Later, internal-only additions
Nearly all of the tooling designed up to this point was relatively generic and reusable by any application developed in the same space – that is, not specific to anything internal to NerdWallet.
During initial development we had accumulated a set of reusable UI components and established patterns to minimize dependencies between different tools. That meant we now had a perfectly reasonable platform on which to build hyper-specific tools that make integrating with other projects and teams easier.
Here are some tools that we came up with, and use daily. These were implemented by Ming, an engineer from the mobile team, and myself:
Analytics: capture all analytic events sent from the app as it runs. Data analysts now use piggy to validate telemetry data produced by the app before releasing new features.
GraphQL: client teams at NerdWallet have been iteratively converting existing network calls to GraphQL, so we created a dedicated piggy tool to capture all queries and mutations, and their subsequent raw HTTP requests and responses. We also added timing information, automatic error extraction, and curl interoperability.
Push notifications: testing push notifications was previously accomplished by chaining together a couple of rickety old shell scripts; this functionality was moved to piggy so developers, QA, and marketing teams can test push notifications by entering an email address and pressing a button.
Deep linking: this tool is able to query the running app for an inventory of all supported deep links, with the ability to either export them, or trigger them within the app to validate routing works as expected.
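The trigger half of the deep linking tool ultimately comes down to launching a URL on a device. On Android that means an ACTION_VIEW intent via adb; the sketch below only builds the command arguments, and the inventory-query protocol (which is app-specific) is omitted. The example URL is hypothetical.

```typescript
// Build the adb command that opens a deep link on an Android device
// via an ACTION_VIEW intent. (iOS simulators can use
// `xcrun simctl openurl` for the same purpose.)
function adbDeepLinkArgs(serial: string, url: string): string[] {
  return [
    "-s", serial,
    "shell", "am", "start",
    "-W",                                 // wait for the launch to complete
    "-a", "android.intent.action.VIEW",
    "-d", url,
  ];
}
```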
Why Electron?
Truth be told, given infinite time and resources I would have preferred to develop piggy in a lower-level programming language using an immediate-mode user interface library – something like C and imgui – eschewing JSON over WebSockets for a binary message format like Protobuf over raw sockets.
I had originally considered building this as a throw-away, browser-based tool; however, I quickly devised a more sinister, longer-term plan for it. At that point I had only been at NerdWallet for a couple of months, and was still getting up to speed. I figured I could use the tool as a sandbox for learning the inner workings of the app and how it behaves at runtime.
Additionally, our mobile team was still small and scrappy and the product in its early stages. I saw numerous potential opportunities for tooling to improve quality of life for engineers and QA, some of which would require access to lower level system APIs not available in the browser.
I also have years of experience building desktop apps, so Electron seemed like a good starting point.
Why not flipper?
Unfortunately, flipper did not exist when piggy was created, so it simply wasn’t an option.
After flipper was released we briefly considered migrating our tools to it, but quickly discovered it was incompatible with our setup. Specifically, flipper cannot be compiled for iOS when use_frameworks! is specified in the project’s Podfile, which is something we require.
Key takeaways
The purpose of this document is not to convince you to integrate with piggy; it is to convince you to invest in tooling, whatever that means for your product or organization.
Having infrastructure available that provides visibility into how your app is behaving at runtime can be invaluable.
Below are what I believe to be some important learnings gleaned over the past few years of working on piggy slowly and iteratively.
Instrument your app
Do it, and do it early in the app development lifecycle. Shoehorning instrumentation into a large, existing codebase to troubleshoot emergent problems in production is not a position you want to be in.
Think hard about where things are likely to break, or where it’d be useful to be able to take measurements when debugging issues in the future. Establish hooks early on, if possible, before your app’s architecture makes it prohibitively difficult.
Common instrumentation points in a client-side app include many of the things discussed earlier in this post: debug messages, network requests, and app state changes. In the past I’ve also instrumented things like local caches, resource usage (CPU/memory/storage), navigation paths, and database queries.
Roll your own tools, if necessary
If off-the-shelf tools work for you, that’s awesome. If not, don’t be afraid to roll up your sleeves and build something. Often, the most useful tools solve very specific problems that existing off-the-shelf solutions can’t address directly.
It may feel daunting to try to build something yourself from scratch, but the reality is that most of the time it’s only as daunting as you make it. (See commentary below about architecture).
Make your tools easy to discover and use
Once your app is instrumented and some tools have been written, I’ve found it extremely valuable to make them available in some central location – an app, a git repository, a website, etc. That way people know where to look for things to help them solve a problem, and may feel empowered to contribute their own.
Create a dashboard
Use your instrumentation points and tools to provide a holistic view of your app, ideally updated in real time as it runs. This is probably obvious to many backend engineers, but perhaps less so to frontend engineers.
Being able to observe changes to important subsystems of your app as it runs can be incredibly informative.
Solve specific problems
Don’t get hung up trying to build general-purpose tools unless they are necessary. General purpose tools are often more difficult and time consuming to build because they have to handle more unknowns.
Instead, it’s usually better to build tools to solve specific problems, and allow them to evolve naturally into more general purpose tools if required.
Tools don’t have to be pretty
Tooling doesn’t need to be pretty. Or elegant. Or extensible. Tooling just needs to exist and help solve a problem or inefficiency. Maybe it pulls data and reformats it into an easy-to-digest fashion. Maybe it hides a bunch of complexity behind a single button, making it easy for a non-technical person to initiate a process that previously required engineering assistance. Maybe it just makes a screen flash red when something bad happens.
Make something that’s useful, and don’t worry about how it looks.
Don’t obsess over tooling architecture
As software engineers we often fool ourselves into thinking everything we write needs a solid, well-planned architecture. Tools? Just write the damn things. Don’t suffer from architecture paralysis. End-users will never see them, and developers will be able to figure out how to use them. Be pragmatic.
If they become problematic to maintain, refactor. In general your tools should be orders of magnitude smaller than any application they are designed to support. Tools should be helpful, and not a burden.
Good software takes time and iteration
I worry the preceding sections may be interpreted as arguments for writing poorly designed software – or, at the very least, rushing into projects without spending time to architect them properly. That’s not the point I’m trying to convey. One of the most important engineering resources is time, and sometimes it simply doesn’t make sense to spend a lot of effort early in the lifecycle designing a perfect system; tooling often (but not always) falls into this category.
The reality is that almost all good software takes a considerable amount of time to build, and you need to think critically about which costs should be paid up-front, and which should be amortized over time.
There’s often a chicken-and-egg problem while developing software: it’s difficult to build a really good product quickly because you never really know what your users want until they have been using it for a while, and have provided feedback.
piggy had the luxury of being developed slowly over many years with little formal process, while regularly collecting user feedback. Additionally, it was rarely a high-priority project, as product owners had a tendency to allocate as many resources as possible to user-facing initiatives. Because of this, we were able to play fast and loose with the architecture, add features on an as-needed basis, and refactor high-level concepts judiciously, only when required. Over time, the architecture fell into place and became easy to adapt for new use cases.
There is no one-size-fits-all solution for designing and building software, so don’t be afraid to build tools because you “don’t have time to architect it properly” out of the gate; build something usable that solves a real problem, collect feedback, then iterate.
What’s next?
What’s next for piggy? I’m not sure. piggy developed slowly and organically out of years of investigating developer and QA pain points.
The internal APIs, data flow, and visual components are generally considered stable, and the app is just waiting for more tools to be plugged in. As new problems emerge and can be addressed with piggy, we’ll use it, but at this point in time there’s no defined roadmap.
Instrument your app and write tools. Yes, you may need to invest a non-trivial amount of time, but it can be amortized over the lifetime of your software, and almost always pays off sooner than you’d expect.
Use off-the-shelf solutions where possible, but don’t be afraid to get your hands dirty and cobble some tools together to make your and your teammates’ lives easier.