Containers and VMs
Containers will be a stalwart of next-generation distros.
Virtualisation is incredible technology that allows a host server to run multiple operating systems by segregating them into virtual machines.
The name is apt, as the host OS emulates all the hardware that each virtual machine requires. Over the years there have been many developments, including PCI passthrough, which allows graphics cards and other hardware to be used directly by virtual machines, and CPU virtualisation extensions, which let the host CPU provide resources to the VM directly so the host OS performs as little emulation work as possible.
Containers then became the next big thing. These differ in that they run on the host OS directly, without the need to emulate hardware. This makes containers more efficient on the host system, so more of them can run compared with using VMs. When first used, containers can seem almost magical: configuration and data can be stored on a NAS or on the host OS itself, and those folders and files are mapped through to the container, along with any network ports or other resources it needs. Whenever a container needs to be updated, a new version can be downloaded and run, and the configuration and data are accessed from the host OS in the same way as for the earlier version. This separation of configuration, data and compute is important, because it makes it incredibly simple to back up configuration and data, recover from hardware failure or move hosting. When running, containers are segregated from each other using kernel features: namespaces isolate what each container can see, while control groups (cgroups) limit the resources it can consume.
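The mapping described above is easiest to see in a concrete command. The sketch below uses Docker's real `-v` (volume) and `-p` (port) flags; the image tag, container name and host paths are purely illustrative, not a recommendation.

```shell
# Map a host config folder and a data folder into the container,
# plus a network port (host 8080 -> container 80).
docker run -d \
  --name web \
  -v /srv/web/config:/etc/nginx/conf.d:ro \
  -v /srv/web/data:/usr/share/nginx/html \
  -p 8080:80 \
  nginx:latest

# To update: replace the container with a newer image.
# The mapped config and data on the host survive untouched.
docker stop web && docker rm web
docker pull nginx:latest
docker run -d --name web \
  -v /srv/web/config:/etc/nginx/conf.d:ro \
  -v /srv/web/data:/usr/share/nginx/html \
  -p 8080:80 nginx:latest
```

Because the container itself holds no unique state, the update is just a swap of images around the same mappings.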
Contain yourself
Numerous systems exist to manage containers, including Docker, Podman and Kubernetes. Docker and Podman accomplish similar tasks and are suitable for managing containers on a single machine. The Docker Compose tool is used to store, in a config file, the information needed to get a service up and running. This file can be as simple as one container and a mapped configuration file, all the way up to services that require a database, a web server and many other things besides. At the other end of the scale is the incredibly powerful Kubernetes, which started life at Google and is now maintained and developed by the Cloud Native Computing Foundation. It is used to automate the deployment, scaling and management of huge numbers of containers, at the scale needed to run enormous publicly available services.
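To make the Compose config file idea concrete, here is a minimal sketch of a `docker-compose.yml` describing a two-container service. The image names, port, paths and password are illustrative placeholders only:

```yaml
# Illustrative docker-compose.yml: a web front-end plus a database.
services:
  app:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./config:/etc/nginx/conf.d:ro   # host config mapped read-only
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example        # placeholder, not for real use
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

With this file in place, `docker compose up -d` starts the whole service, and `docker compose down` stops it, leaving the named volume and mapped config behind.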
While this introduction to containers is very interesting, how does it apply to next-generation distributions? Well, Red Hat developed Toolbox
as a tool that uses containers to set up environments for software developers. This means that any software dependencies and potentially dodgy code can run completely separately from the host OS. The other powerful thing about Toolbox is that it allows access to the host OS to store files, access webcams, sound servers and much more. This means that
Toolbox can be used to run GUI applications from many different operating systems and have them display on the host OS as though they were natively installed. Distrobox is similar to Toolbox, and can be installed by using the instructions in the boxout (left).
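Getting a development environment this way takes only a couple of commands. The sketch below uses the real `toolbox` and `distrobox` CLIs; the container name and image tag passed to Distrobox are examples, not requirements.

```shell
# Toolbox: create a container matching the host's distro and enter it.
# Your home directory, webcam and sound server are shared with it.
toolbox create
toolbox enter

# Distrobox works similarly, but lets you pick almost any distro image
# (the name and tag below are just examples).
distrobox create --name ubuntu-box --image ubuntu:24.04
distrobox enter ubuntu-box
```

Inside either container, packages can be installed with that distro's own package manager without touching the host system.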