Tuesday, May 09, 2023

Tips for making MySQL builds & tests faster

Recently, Mark discovered that part of the FB MySQL sources was recompiled three times in a single build. That has been fixed, and I also played with clang -ftime-trace to see where the build time goes. I believe there is more to be said on this topic, so I wanted to organize my thoughts on the subject.

In software development, the shorter the change-build-test cycle is, the higher the developer productivity. Yet MySQL is not making it easy to have short "build" and "test" steps in this cycle. A reasonably powerful Intel Core i9 laptop used to take 20-30 minutes for a clean debug build without the unit tests. The MTR testsuite takes hours, and it is possible to make it run for days if you wish.

What can be done about this? Here's what's working for me, focusing on 8.0 trees. There is no silver bullet here, and some of the suggestions might even be somewhat effort-intensive to set up. Luckily, most suggestions are independent and optional.

Source trees and build artifacts

TL;DR: build incrementally as much as possible. Disk space is cheap, while your time is expensive. A common "antipattern" (quotes because there is nothing wrong with such a workflow otherwise) is to have a single local git clone with a single build directory. Changing a branch with git checkout forces a clean build. Changing the build type (e.g. you built Debug previously and now need RelWithDebInfo, or Debug with AddressSanitizer enabled) forces a clean build. Don't force clean builds; keep all the previously-built artifacts around as much as possible:

  • Have one build dir for every build type (e.g. build/debug, build/release, build/debug-asan). Never delete a build dir unless forced to.
  • Never use git checkout in the local clone directory, use git worktrees for everything. Only delete a worktree (and its build dirs) once that branch is merged.
  • You are free to store the build dirs wherever you want, but to keep track of which build dir belongs to which source tree, the simplest option for me was to keep them below the source tree. Oracle MySQL has build/ in its .gitignore, so that's a good prefix dir for them.
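Concretely, the workflow above might look like this (branch and directory names are illustrative; -DWITH_ASAN is the stock MySQL CMake switch for AddressSanitizer):

```shell
# One worktree per branch, created from the main clone...
cd mysql-server
git worktree add ../wt-my-feature my-feature

# ...and one build dir per build type, kept under the worktree's build/
cd ../wt-my-feature
cmake -S . -B build/debug -DCMAKE_BUILD_TYPE=Debug
cmake -S . -B build/debug-asan -DCMAKE_BUILD_TYPE=Debug -DWITH_ASAN=ON
cmake --build build/debug --parallel
```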

One objection to incremental builds is that they might somehow result in differences in build artifacts compared to the same clean builds, and that would be bad. In practice, sometimes something does break an incremental build with a build error, forcing a clean rebuild, an occasional checkpoint if you will. I have never encountered a silent divergence that caused any harm, and your CI/CD farm will build your PRs cleanly anyway.

Now if you follow the worktree advice, you are likely to have quite a few of them. Some of them will be your personal feature branches, while others will be shared feature branches, and main/master/8.0/5.7 trunk branches where others commit and push. Set up a cron job to pull the latter and build overnight. Pros: ready builds for your work in the morning. Cons: you left your work last night with a working build, and someone pushed a commit that broke the build for you, which is what you find in the morning. IMHO the pros outweigh the cons.
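A minimal sketch of such a nightly job (paths and branch names are hypothetical):

```shell
# crontab entry: refresh the shared-branch worktrees at 3 AM
# 0 3 * * * $HOME/bin/nightly-pull-build.sh >> $HOME/nightly.log 2>&1

# nightly-pull-build.sh: pull each shared worktree, rebuild incrementally
for wt in "$HOME/worktrees/8.0" "$HOME/worktrees/5.7"; do
    git -C "$wt" pull --ff-only || continue
    cmake --build "$wt/build/debug" --parallel
done
```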

Some MySQL source trees, like the Meta one, are incompatible with git worktrees. Luckily there is a not-too-complicated workaround of adding the following CMake options: -DMYSQL_GITHASH=0 -DMYSQL_GITDATE=2100-02-29 -DROCKSDB_GITHASH=0 -DROCKSDB_GITDATE=2100-02-29. Maybe one day it will be fixed properly.

A thing that did not work well for me is ccache. While easy to set up, at least twice I had to waste a lot of time on an apparent source-binary mismatch only to figure out that ccache was substituting a stale object file. That was enough for me to drop it, and I could never measure its benefit anyway.

Build options

According to the MySQL docs, there are over 160 CMake options. Some of them can affect build times for the better.

Use system libraries as much as possible

You are in the business of developing MySQL, not its bundled 3rd party libraries. If you are lucky, you are also not in the business of developing MySQL's integration with any of them. So, ignore the bundled libraries as much as possible and use your system ones: -DWITH_SYSTEM_LIBS=ON, after installing all the dependencies (which I won't list here). Unfortunately, that's only the theory, and in practice there's a difference between theory and practice. Let's take macOS, for example, to see which of the bundled libraries still have to be used:

  • 8.0.33: -DWITH_RAPIDJSON=bundled
  • 8.0.32-29: system libs only, yay!
  • 8.0.28-27: -DWITH_RAPIDJSON=bundled -DWITH_LZ4=bundled -DWITH_FIDO=bundled
  • 8.0.26: -DWITH_RAPIDJSON=bundled -DWITH_LZ4=bundled

That's not too bad, and, given that we are stuck with every release for at least three months (much longer than that if using, say, the Meta tree), it's worth figuring out.
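For example, configuring 8.0.33 on macOS per the list above might look like this (a sketch; install the dependencies via Homebrew first):

```shell
cmake -S . -B build/debug \
    -DCMAKE_BUILD_TYPE=Debug \
    -DWITH_SYSTEM_LIBS=ON \
    -DWITH_RAPIDJSON=bundled
```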

Skip the unit tests, but be careful

-DWITH_UNIT_TESTS=OFF is by far the single most time-saving CMake option. Usually it is also not as bad as it may sound for development because the MTR tests are still there, and depending on what you are working on, the MTR tests might cover your testing needs completely. The biggest risk there is updating some internal API in a not particularly interesting way and forgetting to update its users in the unit tests. I am still figuring out the best way here to have my cake and eat it too.
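One possible compromise, sketched below: do day-to-day work without unit tests, but keep a second build dir with them enabled for when an internal API changes (directory names are illustrative):

```shell
# Fast everyday build: no unit tests
cmake -S . -B build/debug -DCMAKE_BUILD_TYPE=Debug -DWITH_UNIT_TESTS=OFF
# Occasional build to catch unit-test users of a changed internal API
cmake -S . -B build/debug-unit -DCMAKE_BUILD_TYPE=Debug -DWITH_UNIT_TESTS=ON
```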

Older versions: skip the functionality you don't need

This is quickly becoming an obsolete tip, but including it for completeness. Group replication used to be an optional build part, and the X plugin still is (-DWITH_MYSQLX=OFF). Unfortunately, disabling the X plugin breaks quite a few unrelated-to-X MTR tests, so I don't do that anymore.


Use libeatmydata

Install Stewart Smith's libeatmydata and always use it, except when building with sanitizers on Linux. It is packaged for Ubuntu, macOS Homebrew, and likely elsewhere. It cuts about 25% of the MTR testsuite runtime by silently substituting all the fsync and related calls with no-ops for the tested processes. It is transparent, invisible, doesn't get in your way, etc., a pure win. Its invocation in the context of MTR tests is a mouthful to type, so I am using a shell script helper:

UNAME_OUT="$(uname -s)"
if [ "$UNAME_OUT" = "Darwin" ]; then
    # Homebrew prefix differs between Apple Silicon and Intel Macs
    if [ "$(arch)" = "arm64" ]; then BREW=/opt/homebrew; else BREW=/usr/local; fi
    export MTR_EMD=(
        "--mysqld-env=DYLD_INSERT_LIBRARIES=$BREW/lib/libeatmydata.dylib"
        "--mysqld-env=DYLD_FORCE_FLAT_NAMESPACE=1"
    )
    unset BREW
else
    EMD_LIBDIR=/usr/lib/x86_64-linux-gnu  # adjust for your distro
    export MTR_EMD=(
        "--mysqld-env=LD_PRELOAD=$EMD_LIBDIR/libeatmydata.so"
    )
    unset EMD_LIBDIR
fi

mtr_emd() {
    ./mtr "${MTR_EMD[@]}" "$@"
}

MTR also provides the --mem option, which tries to use a non-persistent filesystem for running tests. In theory this should speed things up even more than libeatmydata, but I could never get it to work reliably.

Use --parallel and find its best value for your machine & your tests

For instance, for single-server tests on an Apple M1 Max (8 performance and 2 efficiency cores) the fastest I found was --parallel=15. Replication tests complicate this by spawning three servers per test instead of one.
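If you haven't benchmarked your machine yet, something like 1.5x the logical core count is a reasonable starting point to tune from (a heuristic assumption, not an MTR rule):

```shell
# Starting point for --parallel: 1.5x the online CPU count, then tune
cores=$(getconf _NPROCESSORS_ONLN)
parallel=$((cores * 3 / 2))
echo "try: ./mtr --parallel=$parallel"
```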

Don't test what you don't need

Now we are deep in Captain Obvious territory, but still. Developing a plugin, say clone? Great, most of the time --suite=clone is enough. Sure, even with plugins the separation is not perfect and there are dependencies (for example, group replication uses clone too), but it's a start. You are less lucky if you work on InnoDB, or on something like THD or Handler that everything else depends on.
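In MTR terms (the --do-test filter narrows by test-name pattern; treat the exact pattern as illustrative):

```shell
# Just the clone suite
./mtr --suite=clone
# Plus the group replication tests that exercise clone
./mtr --suite=group_replication --do-test=clone
```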


Hardware

Throwing hardware at the build-time problem is a great option if you have the resources. IMHO, there are two main routes to consider: the offline-portable one & the connected-powerful one.

If you need a laptop and want the option of working offline, then Apple Silicon is second to none. Replacing an Intel Core i9 laptop with an M1 Max one made MySQL builds five times faster. Five actual times! The downside is that macOS is not exactly a datacenter server OS, so if you are the only one on the team on macOS, guess who just became the macOS port maintainer? There is the option of running Linux on Apple Silicon, which I hear virtualizes at near-bare-metal speed, but I haven't explored it yet. Even then you become the ARM port maintainer, which is not that bad considering Graviton.

If a desktop is OK, then there are options, although I haven't tried this myself. Apple should still be in the running, and you could get an AMD CPU with up to 64 cores, which should make short work of even a MySQL build.

If a network is OK, then Sunny recommends using icecream to distribute compilation. I haven't tried this either.


I wrote down everything I knew about making MySQL build and test faster. Have I missed anything? Please comment.