Direct from the source
Throwing light into the dark recesses of open source code, Alexander Tolstoy explains how compiling your own code is easier than you might think.
why compile at all?
Normally you receive and install software via the package manager of your Linux distribution, like apt for Ubuntu, DNF for Fedora, Pacman for Arch and so on. Sometimes it’s a third-party package that you need to download and install by hand; sometimes it’s a Snap, a Flatpak or an Appimage. The common feature in all these options is that the software is already compiled into binary executables and shared libraries, with the rest being just the details of a delivery method.
However, there are situations when an application you want to run has not been compiled and packaged in any way, leaving you uncertain how to get it running. It may be a lesser-known application, or a hot fresh release of some program that you just want to try without delay. Our expert behind
Hotpicks here at LXF will share some tips and tricks on the topic and reveal the inner workings of getting hot open-source apps up and running smoothly and easily.
In this feature we’ll find out how to compile most popular applications using their source code. There is a great diversity of programming languages, build systems, frameworks and other stuff you need to take into account, and we’re going to review the most commonly used ones. So, grab the source code, a compiler and let’s go…
You start at the source, but this task is not as trivial as it seems. Most open source software is hosted on public code-sharing sites, of which the lion's share belongs to GitHub, with the rest spread among GitLab, Launchpad and SourceForge. In most cases, you can visit a project's page and grab a tarball of the source code as a regular download.
Pay attention to what you actually download: a repository in its current state, or a fixed, numbered version. The first option gives you the latest and (possibly) greatest code, but sometimes a repository is in an officially broken state after recent commits or merges, and some authors warn about this on the project's front page. The second option shows that the author has created a numbered release or a tagged version of their software, which is assumed to work without glitches, at least in theory.
In our examples we’ll retrieve the latest code and we’ll do it using command line tools, specifically git and Bazaar.
$ git clone https://github.com/
Cloning from GitLab works the same way: just replace github with gitlab in the URL. As for Canonical's Launchpad, the preferred tool is Bazaar, which provides the bzr
command. Grab a Launchpad repository using:
$ bzr branch lp:
Finally, SourceForge projects vary widely in how they provide access to source code. Look for the Code section to find instructions for either Git, CVS or Subversion (SVN).
Working with Git repositories has another caveat that is easy to miss, especially when the code fails to compile due to odd 'file not found' errors. Some projects depend on other Git projects and include the appropriate links as submodules. After entering the downloaded directory, run the following command to pull in those dependencies:
$ git submodule init && git submodule update
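With a reasonably recent Git you can do both steps in one go at clone time; the URL below is only a placeholder:

```shell
# --recurse-submodules clones the project and initialises all of its
# submodules in a single command (the repository URL is hypothetical).
git clone --recurse-submodules https://github.com/example/project
```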
Sometimes a Git repository hosts several branches of the same source tree, and it is not always clear which one to take. The 'master' branch can be in a broken state, in which case you may want to pick another one; alternatively, all branches other than 'master' might hold outdated or obsolete code. Regardless, you can explicitly download the branch you are sure you want with the following command: $ git clone --branch
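Branch names differ from project to project, so here is a sketch using a throwaway local repository; the paths and the 'stable' branch name are made up for illustration, and no network access is needed:

```shell
set -e
tmp=$(mktemp -d)                               # throwaway work area
git init -q "$tmp/demo-src"
git -C "$tmp/demo-src" -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "initial commit"
git -C "$tmp/demo-src" branch stable           # hypothetical release branch
git clone -q --branch stable "$tmp/demo-src" "$tmp/demo-clone"
git -C "$tmp/demo-clone" branch --show-current # prints: stable
```

The clone checks out the requested branch straight away, which is often all you need to test a specific release line.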
Now everything seems to be fine and we’re ready to proceed to the next stage.
common concepts
In order to compile an application from its source code you'll need to install certain packages beforehand. First, you'll need a set of compilers. Most applications are designed to be built using the GNU Compiler Collection, or GCC, so you'll need to install the relevant GCC packages for the language your application is written in. For instance, install gcc-c++ if it is C++, or gcc-objc if it is Objective-C. If you're unsure, install everything beginning with gcc-.
The next big thing is build dependencies, which very often cause confusion. In brief (and roughly), most FOSS libraries ship two categories of files: the compiled library itself and its header files (.h).
If the application you want to build requires the GTK3 library and you definitely have it installed, you may wonder why you get error messages like 'GTK3 not found'. This is because you also need the gtk3-devel or similarly named package containing the GTK3 header files. The same applies to virtually any other dependency that an application may have. Fortunately, gone are the days when people needed to resolve build dependencies by hand. If you're using Ubuntu, Debian, Fedora, openSUSE or another mainstream Linux distribution, your package manager can quickly resolve build dependencies for you. Let's see how it is done in Ubuntu. First, get the basic parts needed for compiling:
$ sudo apt-get install build-essential
This will install GCC, G++, Make and GNU C Library headers. Next, if what you’re trying to build is just a newer version of an existing Ubuntu package, issue the following command:
$ sudo apt-get build-dep
Similarly, on Fedora the dnf builddep command does the same job:
$ sudo dnf builddep
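For instance, to fetch everything needed to rebuild a distro package; vlc here is only an example, and on Ubuntu this assumes deb-src source repositories are enabled:

```shell
# Install the build dependencies of an existing package (substitute
# the package you actually want to rebuild for vlc):
sudo apt-get build-dep vlc
# The Fedora equivalent:
sudo dnf builddep vlc
```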
The above commands can save you a lot of time compared with assembling the build environment by hand. Build dependencies are not required to run the compiled application, but they can leave a large footprint on your hard drive. Consequently, it is often wise to keep a separate 'build' machine, either as another physical PC or as a virtual machine.
Remember that compilation is a resource-intensive task that takes a lot of time and CPU horsepower, and which generates plenty of temporary files. While these limitations may not mean much for small applications, they grow huge when you want to re-compile such software as the Firefox browser, the LibreOffice suite or the GNOME desktop. Even powerful servers need many hours to complete jobs like that, so an average desktop machine could be locked up for days.
Building from source
One good practice is to examine the README.md file for instructions on how to compile the code. Diligent developers often provide precise commands that you can simply copy and paste to get the job done. However, such instructions are sometimes missing, so it is wise to review the popular ways of dealing with a source tree.
A still very widespread way of configuring and compiling code involves the GNU-style Autotools design. In this case you do the following:
$ ./configure
You can customise the configuration with explicit options and variables (see the output of $ ./configure --help for details). For instance, let’s define the installation prefix and tell the script where the shared libraries should land:
$ ./configure --prefix=/usr --libdir=/usr/lib64
If the script encounters errors and stops at some point, check the errors and try to fix it. For example, if a dependency is missing, you need to install the appropriate devel package and then re-run the script. When everything is fine, the configuring script will generate the Makefile, and after that you can compile the code and then install it:
$ make && sudo make install
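Putting it all together, a typical Autotools session looks like this; the prefix and library directory are just the example values from earlier:

```shell
# Classic three-step Autotools build; run from the source tree root.
./configure --prefix=/usr --libdir=/usr/lib64   # generate the Makefile
make                                            # compile the sources
sudo make install                               # copy the files into place
```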
In some scenarios the configure script is missing and instead you see a configure.in file coupled with Makefile.am. This means it is up to you to make some important preparations. First, collect all the macro invocations in configure.in that Autoconf will need to build the configure script:
$ aclocal
This will create the aclocal.m4 file, meaning that we can proceed with Autoconf:
$ autoconf
Finally we are ready to generate the Makefile. We add extra options to copy some boilerplate files from your Automake installation into the current directory:
$ automake --force-missing --add-missing
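On modern Autotools installations, the three preparation steps above can usually be collapsed into a single command, assuming the project's macros are compatible:

```shell
# autoreconf runs aclocal, autoconf and automake in the right order;
# --install copies in missing boilerplate files, much like automake's
# --add-missing option does.
autoreconf --install
```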
A much more widespread build system is known as CMake. You can detect it by finding the CMakeLists.txt file in the root directory of the source tree. While the easiest way is to run the CMake configuration tool right in the source directory ($ cmake .), it's good practice to do this in a separate directory to avoid littering the source tree with temporary build files:
$ mkdir build && cd build && cmake ..
Just like configure, CMake accepts options. The set of options differs from project to project, but some common ones usually work fine. As an example:
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_LIBDIR=lib64 ..
This command is equivalent to the aforementioned configure snippet. Most additional CMake options are set by prefixing the variable name with -D. To see which options are available, consult the CMakeLists.txt file.
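A complete out-of-tree CMake session therefore looks something like this; the build directory name is conventional, not mandated:

```shell
# Configure, build and install from a separate build directory.
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr ..   # configure step
make                                        # compile
sudo make install                           # install system-wide
```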
If you see a .pro file in the source tree directory, you'll need the qmake tool to generate the Makefile. Assuming there is only one .pro file in the current directory, just run plain $ qmake (or $ qmake-qt5) and wait for it to complete. Then proceed with the usual $ make && sudo make install sequence.
Modern GTK3 applications tend to use the Meson build system, which is advertised as a next-gen replacement for Automake. One of its benefits is a much faster multithreaded compilation method that saves a lot of time. The syntax is pretty simple:
$ meson build --prefix=/usr && cd build && ninja
Here, ninja is the replacement for make, so you can later install the compiled application with sudo ninja -C build install.
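Spelled out step by step, a typical Meson session looks like this:

```shell
# Configure into a separate build directory, compile with ninja, then
# install; -C tells ninja to run inside the given directory.
meson build --prefix=/usr
ninja -C build
sudo ninja -C build install
```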
Finally, we’ll take a look at Node.js projects and find out how to handle them. There are many stunning open source applications made using Node.js, a JavaScript run-time environment for executing web projects outside of a web browser. If you see a package.json file in the source code tree, it means you’ve encountered a Node.js project. If the README.md file does not help with build instructions, consider running the following:
$ npm install && npm start
NPM stands for Node.js Package Manager. It parses the package.json file for dependencies and automatically fetches and installs them. Many such Node.js-based apps launch a local web server and open your browser at http://localhost:
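For reference, a minimal package.json for such a project might look like this; the project name, start command and dependency version are all hypothetical:

```json
{
  "name": "demo-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

npm install reads the dependencies section, while npm start runs whatever command is registered under scripts.start.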
installing and running
It’s not entirely obvious, but the last command used in most of our examples, $ sudo make install , is not obligatory in many circumstances. After your make, ninja or whatever else completes without errors, you should have a working, freshly built application in your current directory. Look for a binary executable, which normally matches the application name, and try to run it in place:
$ ./
Sometimes that is enough, but most complex apps still want to be installed system-wide. This carries a risk, since it is not recommended to add custom files to system directories like /etc, /usr/lib or /usr/bin. One good practice is to choose a custom destination for user-compiled apps.
This is already taken into account in Ubuntu, Fedora and some other distros that put all make install files into the /usr/local directory. No system-provided package is touched in this case. However, there is an even better and safer solution that works like a charm, especially when you don’t have root privileges. Place all installable files inside your home directory, under
~/.local. It is important to make sure that your shell can locate these binary executables, which means adding the local bin directory to your PATH variable. In short, place the following line in your ~/.bashrc file:
export PATH="$HOME/.local/bin:$PATH"
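Note that $HOME is the safer spelling here, because ~ is not expanded inside double quotes; you can check the result straight away:

```shell
# Prepend the local bin directory to the search path; $HOME is used
# because ~ does not expand inside double quotes.
export PATH="$HOME/.local/bin:$PATH"
# Confirm the directory is now on the search path:
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "local bin dir is on PATH" ;;
esac
```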
In the reverse situation, when you need to know where a running program lives, use which:
$ which
Now that you have successfully built and run your application, note that without integration with the distro-wide package management system such a custom application can misbehave or even stop working in the long term – for instance, if a shared library it depends on is updated via the standard update channel. Solving this problem requires packaging such an application and promoting it upstream, which is a topic for another tutorial.
Going graphical
Compiling does not necessarily mean dealing with the command-line interface: there are GUI wrappers that can help with this and save some time. The first that comes to mind is the gorgeous cmake-gui utility for all your CMake needs.
All you need to do is to select the directory with the source code, set another directory for placing the build, and press a few other buttons. The main window displays all available extra options and switches, letting you simply enable or disable anything by clicking with the mouse button.
Other GUI tools include more heavyweight software like Qt Creator and GNOME Builder. Both are fully fledged IDEs tailored to developers’ needs, and both allow you to open a source tree as a project and run it right within the IDE.
But it’s important to note that using any graphical tool does not relieve you from understanding the basics of compiling source code. For instance, while GNOME
Builder can help you to package an application as a Flatpak, you’ll still need to determine the build dependencies yourself.