Ever spent time staring at a scrolling console, wishing compilation would take less time? In this post we're going to explore various ways to speed up the build of $YOUR_SOFTWARE_PROJECT.
Let's start with the simplest test case, which will act as a reference: a plain debug build compiled using make with a single job.
For the project I'm using for my tests this leads to:
make -j1: 2min40
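For reproducibility, the baseline was gathered with something like the following (a sketch, assuming an out-of-source CMake build; the build directory name is arbitrary):

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Debug ..
time make -j1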
Parallel Makefile build
The first obvious way to build faster is to use parallel jobs. Given that the computer used for testing has 4 logical processors (but only 2 physical cores), what is the optimal N parameter to give to make?
make -j2: 1min31
make -j3: 1min21
make -j4: 1min15
make -j5: 1min15
We hit a plateau at N=4 so there’s no need to use a bigger value.
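Rather than hard-coding the job count, you can ask the system for it (a sketch; nproc is part of GNU coreutils and reports the number of available logical CPUs):

make -j$(nproc)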
Ninja build system
Running the same build with ninja's default settings finishes dramatically quicker. Wow! ninja is way faster than make!
Well… until you read the 'ninja --help' output:
-j N run N jobs in parallel [default=6, derived from CPUs available]
So ninja uses parallel builds by default; forcing N=1 leads to:
ninja -j 1: 2min40
which is identical to what make gets.
That makes sense though, for at least two reasons: first, it's a small project, so the time spent in the build system itself is negligible compared to the time spent compiling.
Also, the ninja and make build files are both automatically generated from CMakeLists.txt; manually writing rules.ninja may be required to take advantage of the supposed speedup brought by ninja.
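For reference, reproducing the comparison is just a matter of asking CMake for the Ninja generator (a sketch, assuming ninja is installed):

cmake -G Ninja ..
time ninja -j 1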
Precompiled headers (pch)
Another approach to speeding up the build is to use precompiled headers. The tested code base uses C++ and templates, so let's see how much we can gain from this.
Precompiled headers can be set up manually, but it's tedious, so I'd rather use an automatic approach.
Cotire is a CMake module designed to improve C++ project build times by auto-generating precompiled headers. The integration is straightforward (one file and one line to add to CMakeLists.txt), so let's jump directly to the result:
make -j4: 1m01.513s
That’s a nice improvement, especially considering the time required to setup cotire 🙂
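For the curious, the integration looks roughly like this (a sketch: the module path and the myapp target name are placeholders, not the actual project's names):

# CMakeLists.txt
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")  # where cotire.cmake was copied
include(cotire)

add_executable(myapp ${SOURCES})
cotire(myapp)  # auto-generates and applies a precompiled header for this target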
Distributed builds with distcc
So far we've only used one computer, but we could use several! Let's see what happens when we add another computer and use distcc to manage a distributed build.
The new computer needs 3min to build the software using make -j3 and is connected using a 100Mbps switch.
On each computer that we wish to include in our build farm, we run:
# -j N: max parallel jobs, -p: TCP listen port, -a: allowed client network
distccd -j N -p 12345 -a 192.168.1.0/24 --no-detach --log-stderr
Then on the computer controlling the build:
# lzo enables compression; cpp enables pump mode (remote preprocessing)
export DISTCC_HOSTS="192.168.1.52:12345,lzo,cpp localhost:12345,lzo,cpp"
CXX="distcc g++" CC="distcc gcc" cmake ..
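To check that jobs really get farmed out to the other machines, distcc ships a small monitor you can run in another terminal while the build is going (the argument is the refresh interval in seconds):

distccmon-text 1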
and the results:
distcc-pump make -j80: 1min04
Not that impressive…
So, let's throw another machine at it. This 3rd machine builds the project in 1min30 using make -j4 and is connected using a 1Gbps switch.
distcc-pump make -j80: 0m44.144s
And now a fancy graph showing all the above numbers (converted to seconds):
What conclusion can we draw from all this?
- precompiled headers are a good improvement for C++ projects
- distcc is easy to set up and can bring a build speedup, but you'll need a sizeable project to really appreciate the gain. Also, several phases of the build are not parallelisable (source generation, linking, etc.)
- CMake is my favorite build system 🙂*
*don't worry, LibreOffice developers reading this, I won't suggest switching to CMake 🙂