
November 15, 2022

A Meta developer's workflow: Exploring the tools used to code at scale

By Neil Mitchell

The size and scope of projects at Meta result in thousands of developers working with many millions of files. Tools favored by smaller projects often break down when scaled to meet these needs, so we have to either extend existing tools or create new ones. In this article we’ll look at the developer workflow at Meta, beginning with getting the code, changing it, then building and testing it, and finally reviewing it. We’ll share some of the tools our developers use every day. We’ve written about many of these tools before, some of which are open source, so we’re including links throughout if you’re interested in learning more about those offerings.

Getting code: Scaling version control with EdenFS

Before editing code, the first step is to get the files on your computer. While Git has become the standard in most places, we use a custom source control system derived from Mercurial. Scaling any version control system to the size Meta requires raises many problems, so we started from Mercurial and made many improvements over time. Since then, we’ve continued development, producing the Mononoke server, which is able to record and serve commits even faster.

However, at the scale of our repository, simply writing the necessary bytes to disk can take too long. That’s why we use a virtual file system called EdenFS. EdenFS has similar performance advantages to using sparse checkouts, but it has a much better user experience. Unlike with sparse checkouts, EdenFS does not require manually curating the list of files to check out, and users can transparently access any file without needing to update the profile. The EdenFS filesystem is closely integrated with the recently open-sourced Sapling source control client. You can explore how Sapling works to make source control more user-friendly and scalable in the Sapling blog post.
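
To make the virtual filesystem idea concrete, here is a minimal Python sketch of lazy materialization, assuming a hypothetical fetch_from_server callback. EdenFS itself is implemented as a real filesystem layer, so this is only an illustration of the principle that bytes are fetched the first time a file is read, not a description of how EdenFS works internally.

    import os

    class LazyCheckout:
        """Illustrative only: serve file reads on demand instead of
        materializing the whole repository up front."""

        def __init__(self, fetch_from_server, cache_dir):
            self.fetch = fetch_from_server  # hypothetical call to the source control server
            self.cache_dir = cache_dir      # files already materialized locally

        def read(self, repo_path):
            local = os.path.join(self.cache_dir, repo_path)
            if not os.path.exists(local):
                # First access: fetch the bytes and materialize the file locally.
                os.makedirs(os.path.dirname(local), exist_ok=True)
                with open(local, "wb") as f:
                    f.write(self.fetch(repo_path))
            with open(local, "rb") as f:
                return f.read()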

Once all those files are accessible, it’s important for other developer tools to know which files have changed. With so many files, simply calling stat on each file in the repository is infeasible. To solve that problem we use Watchman, another Meta open source project that can detect file changes quickly. Watchman integrates with EdenFS (where available) and with kernel notification mechanisms elsewhere. To learn more about how Watchman can track and make updates to code efficiently at scale, check out this video explainer.
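
As a rough illustration of how a tool can consume Watchman, here is a short sketch using pywatchman, Watchman’s Python client. The repository path is just a placeholder and error handling is omitted; treat it as a sketch of the "changes since a clock" pattern rather than how our internal tools are actually wired up.

    import pywatchman

    # Connect to the Watchman service and watch the repository root
    # (the path below is only a placeholder).
    client = pywatchman.client()
    watch = client.query("watch-project", "/data/users/alice/fbsource")
    root = watch["watch"]

    # Remember the current clock so later queries only return files
    # that changed after this point.
    clock = client.query("clock", root)["clock"]

    # ... edits happen ...

    result = client.query("query", root, {
        "since": clock,
        "fields": ["name"],
    })
    print(result["files"])  # names of files changed since the recorded clock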

Editing code: Custom tools streamlined for scale

There are many Integrated Development Environments (IDEs), editors and platforms that people use to edit code at Meta. However, we’ll focus on just one in this section—the IDE most common for backend services in languages such as C++.

First, developers need a machine to use for writing code. Many developers at Meta have a MacBook, but actually do their coding on a Linux server. While some users have a dedicated server, many others use an “on-demand devserver.” An on-demand devserver is a remote server that you grab, do some coding with and then release when you are no longer using it, typically at the end of the day. At first, this approach may sound like a terrible experience: it’s like having to reconfigure a new machine every day. But a number of technologies make it a smooth experience, such as a persistent home drive and a source control server that automatically returns you to where you left off. Even better, the server you grab will be ready to go, with common tools already fetched, caches warmed and other preparations complete, so it provides an even faster experience than having a dedicated machine.

For the actual editing, many people use VS Code. VS Code is first and foremost a local editor, so we have a number of custom extensions that bridge the gap, letting a local VS Code open files on a remote server. We have further extensions that integrate with everything from a developer’s calendar to service disruption notifications, so developers can sit in VS Code all day without having to switch away. Even more extensions provide source control integration, IDE-like functionality and linting/formatting, giving feedback to the developer as they are editing.

Caption: The VS Code IDE connected to a remote server, showing the version control integration with a stack of four diffs.

At Meta, there are millions of source files, so standard VS Code features like searching for files by filename and searching for text in source would take infeasibly long. To remedy this, we have custom tools that override these features so that they look exactly like normal VS Code but work very differently underneath. In both cases, servers precompute the information based on the source control revision, and local processes then integrate local changes into the result, which allows searching through a vast number of files in a matter of seconds.
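
That division of labor can be sketched roughly as follows, with query_remote_index, locally_changed_files and grep_local standing in as hypothetical helpers rather than our real service APIs:

    def search(pattern, query_remote_index, locally_changed_files, grep_local):
        """Illustrative sketch: combine precomputed server-side results with
        matches from files modified in the local working copy."""
        # 1. Hits precomputed on the server for the current source control revision.
        remote_hits = set(query_remote_index(pattern))

        # 2. Files the developer has edited locally may add matches or
        #    invalidate old ones, so re-check just those files on the client.
        changed = set(locally_changed_files())
        remote_hits -= changed  # drop stale hits in files that were edited
        local_hits = {path for path in changed if grep_local(pattern, path)}

        return sorted(remote_hits | local_hits)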

Building code with Buck

Once a developer’s files are on their computer, the inner developer loop involves editing those files, compiling them and experimenting with the result. Most projects at Meta are built using Buck. The Buck build system integrates with Watchman to find out which files have changed, and then it uses both a remote cache (to download files that have been built before) and remote execution (to build thousands of files in parallel). Buck targets are specified using Starlark, a deterministic Python-like language. Although Buck works well, as with all the tools listed in this article, we’re always looking at how they could be made better, and we have shared our thoughts on that path forward for Buck.
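
To give a flavor of what Starlark target definitions look like, here is a hypothetical BUCK file for a small C++ binary. The cxx_library and cxx_binary rules and their attributes follow Buck’s C++ rules, but the targets themselves are made up for illustration.

    # BUCK (Starlark): a made-up example, not a real Meta target
    cxx_library(
        name = "feed_ranker",
        srcs = ["feed_ranker.cpp"],
        headers = ["feed_ranker.h"],
    )

    cxx_binary(
        name = "feed_server",
        srcs = ["main.cpp"],
        deps = [":feed_ranker"],
    )

Asking Buck to build :feed_server then only builds the targets it actually depends on, with results coming from the remote cache whenever someone has already built them.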

Caption: The build system compiling a project, with output produced using Superconsole.

Testing code using manual tests and static analysis

We want to make sure that all code that is written does what we intend it to do, and two important approaches to that are writing tests and using static analysis.

For static analysis, given the quantity of code we have, it’s important that the results come back quickly and the signal is high quality: if the static analysis says your code has an error, it probably does. We have a general static analysis platform called Infer, which performs interprocedural analysis and supports multiple languages, including Java and C++. We have also built more tailored analysis tools, such as RacerD, which detects Java concurrency bugs and was used to help move the Android News Feed from single-threaded to multi-threaded.
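
RacerD itself analyzes Java, but to illustrate the class of bug such an analysis hunts for, here is a deliberately racy counter sketched in Python purely for illustration: two threads perform an unsynchronized read-modify-write on shared state, so updates can be lost.

    import threading

    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self):
            # Unsynchronized read-modify-write on shared state: two threads can
            # read the same value and overwrite each other's update. Guarding
            # this section with a lock (threading.Lock) removes the race.
            current = self.value
            current += 1
            self.value = current

    counter = Counter()

    def worker():
        for _ in range(100_000):
            counter.increment()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter.value)  # frequently less than 400000 because updates get lost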

We have many language-specific testing frameworks (for example, Jest for JavaScript testing), and many individual tests using these frameworks. In fact, in some sense we have too many tests: running all the relevant tests whenever code changes would be infeasible. To address that issue we use predictive test selection: a machine learning model determines which tests are likely to have the highest signal for a given change, and then we run only those tests.
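
Here is a minimal sketch of that idea, assuming a hypothetical trained model that scores how likely a test is to catch a problem for a given set of changed files; the real system’s features, model and thresholds are considerably more involved.

    def select_tests(changed_files, candidate_tests, model, budget=200):
        """Illustrative only: rank candidate tests by the model's predicted
        chance of catching a problem for this change, and keep the top ones."""
        scored = [
            (model.predict_failure_probability(changed_files, test), test)
            for test in candidate_tests
        ]
        scored.sort(reverse=True)  # highest predicted signal first
        return [test for _, test in scored[:budget]]

    # Usage sketch (all names hypothetical):
    # to_run = select_tests(changed_files, tests_affected_by(changed_files), model)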

Sitting somewhere between static analysis and handwritten test cases, we use the Sapienz tool to automatically test mobile apps. Sapienz effectively pretends to be a user of the app, clicking through various buttons and trying to seek out crashes and other undesirable behavior.
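
To give a sense of what automated app exploration looks like, here is a heavily simplified Python sketch that drives a hypothetical device interface with random taps and records any crashes it triggers. Sapienz is considerably more sophisticated (it uses search-based techniques to find short, high-coverage event sequences), so treat this as an illustration of the general idea rather than of Sapienz itself.

    import random

    def explore(device, steps=500, seed=0):
        """Illustrative only: tap random on-screen elements and record crashes."""
        rng = random.Random(seed)
        crashes = []
        device.launch_app()                     # hypothetical device-driver API
        for step in range(steps):
            buttons = device.visible_buttons()  # hypothetical: tappable elements
            if not buttons:
                device.press_back()
                continue
            device.tap(rng.choice(buttons))
            if device.app_crashed():
                crashes.append((step, device.crash_log()))
                device.launch_app()             # restart and keep exploring
        return crashes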

Submitting code in stacks of diffs

Once you have finished working on your code, you commit it and submit it for review. One of the more distinctive aspects of Meta’s development tooling is that instead of submitting a single diff at a time, we often submit stacks of diffs: a series of changes, each standing alone but also building on the previous changes. Working in a stack enables reviewers to discuss each change in isolation and to properly separate refactorings from functional changes.

Caption: Phabricator view of reviewing a stack of three diffs, viewing changed files, commentary, discussion and results of tests.

Once the code is submitted, it goes to Phabricator (Phab for short), our CI and reviewing tool. Phab displays the diff’s description and comments (including a test plan), allows developers to navigate through the stack of diffs and lets you comment on and approve a diff. (It even lets you like those comments!) Phab also takes care of running all the tests identified in the testing section above, and it displays the results inline on the diff so everyone can see the output and weigh in.

Once the diff is approved, a “Ship It” button tells Phab to start the process of landing diffs. After some final sanity checks (such as ensuring that recent changes haven’t broken our ability to do analysis of which tests should run), the code gets committed and is available on the main branch.

After the code lands, the deployment process starts. Explaining that process will need to be saved for a different blog post…

Conclusion

We hope this article has given you a sense of some of the things that go into the Meta developer workflow. All these pieces are tailored for working at extreme scale, and many are available as open source projects.

About DevInfra

Developer Infrastructure (DevInfra) owns most of the coding life cycle at Meta, from the time code leaves an engineer’s mind until it reaches the people who use our apps. Our mission is to increase developer efficiency so that we can continue to ship awesome products quickly. Meta is an industry leader in building innovative, reliable and fast developer tools and automation infrastructure, ensuring that every second of engineering time is spent on the things that matter.

