Designing a better distro
Unix originated in the ’60s and there are better ways to work now…
Let’s look at some of the changes that have become prevalent in recent years. The first is the development of init systems, which are integral to a running Linux system as they control the running of all processes. Early in the boot process, the kernel starts the init process and it continues to run as a daemon. Various options exist, from the venerable SysVinit, developed for Unix decades ago, to more modern designs: Canonical spearheaded the use of Upstart before systemd became the popular choice for most distros.
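To see how a modern init system manages a daemon, here’s a sketch of a minimal systemd service unit (the service name and binary path are invented for illustration):

```shell
# Write a hypothetical unit file; on a real system this would live in
# /etc/systemd/system/ rather than the current directory.
cat > mydaemon.service <<'EOF'
[Unit]
Description=Example daemon managed by systemd

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# On a real system, you would then start it at boot with:
#   sudo systemctl enable --now mydaemon.service
grep -q 'Restart=on-failure' mydaemon.service && echo "unit written"
```

The `Restart=on-failure` line shows why init systems matter: systemd supervises the daemon and restarts it if it crashes, something the old SysVinit scripts never did for you.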
In the late ’90s/early 2000s, the most commonly used filesystem was the second extended filesystem (ext2), soon succeeded by ext3 and ext4. These are journalling filesystems, which greatly increases reliability when things like power cuts occur. Development of the ZFS filesystem began in 2001, and it is renowned for its broad feature set and how reliably it stores data. It checksums every block of data, so corruption from any cause can be detected and, where a redundant copy exists, repaired automatically. ZFS is probably overkill for most home users, but Ubuntu makes it simple to install and use. Btrfs has been reviled for issues with some of its RAID functionality (notably RAID 5/6), but is reliable for less demanding purposes and has extra features that we will discuss later.
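You can demonstrate the principle behind ZFS and Btrfs checksumming with nothing more than sha256sum: record a checksum at write time, then detect silent corruption on read-back.

```shell
# Store a checksum alongside the data, as a checksumming filesystem does
# transparently for every block it writes.
printf 'important data\n' > block.dat
sha256sum block.dat > block.sum          # checksum recorded at write time

# Simulate bit rot: the data changes but nothing reports an error.
printf 'imp0rtant data\n' > block.dat

# On read-back, the stored checksum no longer matches.
if sha256sum -c block.sum >/dev/null 2>&1; then
    echo "data intact"
else
    echo "corruption detected"   # prints this; ZFS would now use a mirror copy
fi
```

A plain filesystem would happily return the corrupted data; a checksumming one refuses, and if the pool has redundancy (mirrors or RAID-Z), rewrites the bad block from a good copy.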
Built for reliability
A cleverly designed partitioning scheme can be used to replicate what is employed by Chrome OS to provide a reliable method of software updates. Chrome OS uses an A-B partitioning system: updates are applied to the partition that isn’t currently in use. On the next reboot, the updated partition is booted from, and if this fails for any reason, the system falls back to the previous, known-good partition.
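The fallback logic is simple enough to sketch in a few lines of shell. This is a toy model, not real bootloader code; the variable names are invented, and we pretend the freshly updated slot failed its first boot:

```shell
# Toy model of A/B slot selection: the bootloader tracks the active slot
# and whether the slot that was just updated booted successfully.
active_slot="A"        # the current, known-good slot
try_slot="B"           # the update was written to the inactive slot
update_ok=false        # pretend slot B failed its trial boot

if [ "$update_ok" = true ]; then
    active_slot="$try_slot"      # promote the updated slot
fi
echo "booting from slot $active_slot"   # prints: booting from slot A
```

Because the old partition is never touched during an update, a botched upgrade costs you one reboot rather than a broken system.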
Immutability can be achieved in multiple ways, but one technique is to provide an immutable root partition: a set of files distributed by the project as a rock-solid base for the OS. In practice, this means that parts of the filesystem are mounted read-only, the idea being that only the OS developers can change those files, by releasing new upgrades. Using this technique, the OS’s footprint can be kept quite small, with extra applications added using various other techniques.
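You can get a feel for the idea by stripping write permission from a directory tree; a real immutable distro goes further, mounting the whole of /usr read-only at the filesystem level, but the effect on ordinary processes is the same. The paths here are a scratch directory, not your real root:

```shell
# Build a tiny mock root and make it unwritable.
mkdir -p rootfs/usr/bin
printf '#!/bin/sh\n' > rootfs/usr/bin/app
chmod -R a-w rootfs/usr

ls -ld rootfs/usr      # permissions now read dr-xr-xr-x
# An ordinary process can no longer modify the "OS" files:
touch rootfs/usr/newfile 2>/dev/null || echo "root partition is read-only"
```

Updates on such a system arrive as a complete new image from the vendor, rather than as piecemeal changes to individual files.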
When installing packages on your favourite distro, you are trusting the maintainers to keep your system safe. We are not suggesting that the main distros can’t be trusted, but because installation runs with root access, every time you install or update an app, any issue in the repository could cause havoc. Adding to this concern, systems such as PPAs, which can be created by anybody, are also given root access, which is a lot of trust to place in somebody you don’t know. Modern packaging systems mitigate these concerns by providing sandboxes and containers that minimise access to the wider system. As is often the case in the open source world, multiple solutions exist, such as Flatpak, AppImage and Snap.
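A typical Flatpak session looks like this. The app ID is real (Firefox on Flathub), but only run these commands on a machine with Flatpak and the Flathub remote set up:

```shell
# Install the app into its own sandbox, without giving the publisher
# root access to the host system.
flatpak install -y flathub org.mozilla.firefox

# Tighten the sandbox further: deny this app access to your home directory.
flatpak override --user --nofilesystem=home org.mozilla.firefox

# Run the app; it sees only what its sandbox permits.
flatpak run org.mozilla.firefox
```

The `override` command is the key difference from traditional packaging: permissions are yours to grant or revoke per app, rather than an all-or-nothing root install.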
Another solution to the installation of software is a tool such as Toolbox or Distrobox. These provide tight integration between the user’s session and the containers the tools manage, so an Ubuntu container can be run, for example, on an Arch installation, and both GUI and command-line apps can be installed inside it.
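With Distrobox, the workflow is just a couple of commands. The container name is our own choice, and these need a container runtime (Podman or Docker) installed on the host:

```shell
# Create an Ubuntu container on any host distro, then step inside it.
distrobox create --name ubuntu-box --image ubuntu:24.04
distrobox enter ubuntu-box     # drops you into an Ubuntu shell

# Inside the container, use the guest's own package manager:
#   sudo apt install -y inkscape
# Then export the app so it appears in the host's application menu:
#   distrobox-export --app inkscape
```

The exported app launches from your host desktop like any other, even though its binaries and libraries all live inside the Ubuntu container.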
Where complete separation is required between services, it is still perfectly acceptable to run a virtual machine on a host OS. While containers are newer technology and carry less overhead, there is also less separation between a container and the host than there is between a host OS and a VM.
Declarative configuration has been around for years, with systems such as Ansible and SaltStack providing a way to create a set of config files that are applied to machines from a central server, allowing configuration to be changed and software to be added in a highly controlled manner, from as few as one system up to thousands. Imagine being able to configure your distro the same way. We’ll discuss a distro that does just that!
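A minimal Ansible playbook shows the declarative idea: you state the desired end state and the tool converges the machine to it. The package name and host group here are examples, not anything specific:

```shell
# Write a small, hypothetical playbook describing the state we want.
cat > webserver.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
EOF
# Applied with: ansible-playbook -i inventory webserver.yml
echo "playbook written"
```

Note that nothing here says *how* to install nginx; the same playbook works whether the target runs apt, dnf or pacman, and running it twice changes nothing the second time. That idempotent, whole-system description is exactly what a declarative distro applies to itself.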