Get to grips with Docker and app containers
It took a lot of convincing for a lethargic Mayank Sharma to power down his virtual machines and learn to virtualise apps instead.
Even if you’re totally disconnected from the realm of mortal beings, you’d still surely have heard of Docker and how it can solve all your IT problems. If you have somehow managed to isolate yourself from experiencing the fruits of Docker’s goodness, here’s your chance to absolve yourself.
Traditional virtualisation technologies provide full hardware virtualisation. This is to say that the virtual machine or hypervisor takes chunks of physical resources such as CPU, storage and RAM, and then slices them into virtual versions like virtual CPUs and virtual RAM. It then uses these virtual peripherals to build virtual machines that behave like regular physical computers. The isolated virtual environment is useful for testing a new distro, but is an overkill when all you need to virtualise is a single program.
This is where Linux containers — through Docker — offer an attractive alternative. Docker enables you to bundle any Linux app with all its dependencies and its own environment. You can then run multiple instances of the containerised app, each as a completely isolated and separated process, with near native runtime performance. That’s because, unlike VMs, containers share the host system’s kernel. This also means that you can host more containers than VMs on any given hardware, because of their lighter footprint.
THE LANGUAGE OF DOCKER
Docker is a container runtime engine that makes it easy to package applications and push them to a remote repository, from where other users can download and use them. Let’s get familiar with some Docker terminology. Docker containers package software in a complete filesystem that includes everything an application needs to run. This ensures the app will always run the same way — irrespective of the environment Docker is running on.
A Docker image is the definition of a container. It’s a collection of all the required executables, files, environment settings and more, that make up an application along with its dependencies. The image is a read-only version of your application that’s often compared to an ISO file. To run this image, Docker creates a container out of it by cloning the image. This is what then actually executes. This arrangement makes Docker very scalable and enables you to run multiple containers from the same image.
While Docker is available as a package in the official repositories of all popular distributions, it’s best to fetch the latest version from the official Docker repository. Fire up a terminal, then fetch the official download script and execute it with curl -sSL https://get.docker.com/ | sh to install Docker. Once it’s installed, start the Docker service with sudo systemctl start docker and make sure it starts on subsequent boots with sudo systemctl enable docker .
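Assuming a systemd-based distribution, the whole installation sequence can be sketched as follows (the get.docker.com script detects your distro and adds Docker’s own package repository before installing):

```shell
$ curl -sSL https://get.docker.com/ | sh
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ docker --version    # confirm the client is installed
```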
Now type docker run hello-world to test the installation. The command downloads a special image from the official Docker registry that will greet you if all goes well and explain the steps it took to test your Docker installation.
Now let’s jump straight in and start a new Docker container with the following command:
$ docker run -it --name alpha-silo ubuntu /bin/bash
With this command, we asked Docker to start a new container from the image called ubuntu. The -i makes the session interactive and the -t allocates a terminal. The container is named alpha-silo and runs the /bin/bash command once it’s started.
When we issue the command, the Docker daemon will search for the ubuntu image in the local cache. When it doesn’t find one, it downloads the image from Docker Hub. It’ll take some time to download and extract all the layers of the image. Docker maintains container images in the form of multiple layers. The good thing about this arrangement is that these layers can be shared across multiple container images, which makes the system very efficient. For example, if you have Ubuntu running on a server and you need to download an Apache container based on Ubuntu, Docker will only download the additional layer for Apache, as it already has Ubuntu in the local cache, which can be reused.
Once this container has been started, it will drop you in a new shell running inside it. From here, you can interact with the shell just as you would on a normal installation. However, because containers are designed to be extremely lightweight, you only have access to a barebones environment.
When you are done, you can exit from the shell by typing exit or pressing ‘Ctrl-D’. Outside the container, you can use the docker ps command to list all the containers and check the status of your last container. By default, the command only lists running containers; append the -a option to list stopped containers as well. To start the container again, use the docker start command, such as docker start -ia alpha-silo . The -i option will, as before, start the container in interactive mode and the -a option will attach to a terminal inside the container. If you start a container without any option, such as docker start alpha-silo , Docker will launch it in detached mode, which is to say that it won’t latch the container onto the terminal and will just keep it running in the background.
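As a quick sketch, a typical inspect-and-restart session with our alpha-silo container might look like this:

```shell
$ docker ps                    # list running containers only
$ docker ps -a                 # include stopped containers such as alpha-silo
$ docker start -ia alpha-silo  # restart interactively, attached to a terminal
$ docker start alpha-silo      # or restart detached, in the background
```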
You can open a terminal inside a detached container with docker attach, like docker attach alpha-silo . To detach the terminal but keep the container running in the background, press the ‘Ctrl-P, Ctrl-Q’ key combination. To execute a command inside a running container, use docker exec, such as docker exec alpha-silo pwd to print the current working directory inside the container.
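Putting those together, a short session with a detached container might look like this:

```shell
$ docker attach alpha-silo     # open a terminal inside the running container
# press Ctrl-P followed by Ctrl-Q to detach again without stopping it
$ docker exec alpha-silo pwd   # run a one-off command inside the container
```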
Remember we said containers are designed to be lightweight? If you list all the processes running inside our Ubuntu container with the docker exec alpha-silo ps -elf command, you’ll notice that it’s running bash and nothing else. That’s why exiting the shell by typing exit stops the container as well, since bash is the only process running in the container.
The docker stop alpha-silo command will gracefully stop the container after stopping the processes running inside it. When you no longer need the container, you can use docker rm to remove it, such as docker rm alpha-silo . The table (over the page) lists some frequently used Docker commands and their uses.
FLESH OUT THE CONTAINER
We’ve just created a minimal container named alpha-silo using the base Ubuntu image that doesn’t do much. To get more out of this container, you can either download another image that uses the same base image but has more software baked in, or manually add software to the base image, just as you would on a regular install. Start an interactive shell inside the container and type:
$ apt update; apt install net-tools apache2 -y
This command will update the repositories and install the net-tools and the Apache web server inside the container. One of the cool things about Docker is that it enables you to save your customised container as a custom image that you can then use to spin additional containers. So if we exit the container and type:
$ docker commit -a "Mayank Sharma" alpha-silo loaded-silo
...Docker will roll the customised container alpha-silo, with the updated repos and the Apache web server, into a custom image called loaded-silo. In the command, the -a option specifies the author of the image. Then comes the name of the container being imaged (alpha-silo), followed by the name of the new image (loaded-silo). The new loaded-silo image is now stored as a separate image on the server along with the others, as you can verify with the docker images command. You can now use this image to spin up new containers.
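To verify the new image and spin up a fresh container from it, you could do something like the following (beta-silo is just a hypothetical name for the new container):

```shell
$ docker images                 # loaded-silo should appear in the list
$ docker run -it --name beta-silo loaded-silo /bin/bash
```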
As we’ve said earlier, a Docker container is an instance of a Docker image. Docker pulls images from repositories that live inside registries.
The default Docker repository is Docker Hub, which has a bunch of official and user-contributed unofficial repositories, each of which in turn contains a number of images.
So head over to hub.docker.com to browse through a library of pre-built Docker images. To get familiar with Docker, we’ll use it to install the WordPress blogging app. The WordPress image on Docker Hub doesn’t include a database installation, so we’ll first have to install a MariaDB database in a separate container and then ask the WordPress container to use it.
Start off by making a new directory where you wish to store the files for WordPress and MariaDB, for example in your home directory:
$ mkdir ~/wordpress
$ cd ~/wordpress
Then pull the latest MariaDB image with:
$ docker run -e MYSQL_ROOT_PASSWORD=<password> -e MYSQL_DATABASE=wordpress_db --name db4wp -v $(pwd)/database:/var/lib/mysql -d mariadb
The -e option sets the environment variables for the container, such as the database password and its name. Replace <password> above with your own. The --name option defines the name of the container. The most interesting option is -v $(pwd)/database:/var/lib/mysql . It asks Docker to map the two specified locations that are separated by the colon (:). On the right is the /var/lib/mysql directory that exists within the container and is used to store the database files. The command asks Docker to place the files under the database folder in the current working directory on the host, to ensure that the data persists even after we restart the container. The -d option tells Docker to run the container in detached daemon mode in the background.
This command will download the latest version of the official MariaDB image and put it inside a container with the specified settings. You can confirm that the MariaDB container is running with docker ps .
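For instance, a quick sanity check on the new database container might look like this:

```shell
$ docker ps --filter name=db4wp   # is the container up?
$ docker logs db4wp               # review MariaDB's startup messages
```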
You can also break the process into two steps, which is what we’ll do for WordPress. First we’ll just download the WordPress image with docker pull wordpress and then build a container for it, with:
$ docker run -e WORDPRESS_DB_PASSWORD=<password> -d --name my_wordpress --link db4wp:mysql -v $(pwd)/html:/var/www/html -p <server public IP>:80:80 wordpress
Make sure you set the WORDPRESS_DB_PASSWORD variable to the same password as that of the MariaDB database. The --link db4wp:mysql option links the WordPress container with the MariaDB container so that the applications can talk to each other. The -v option performs the same function as it did for the database and makes sure that the container’s contents under the /var/www/html directory are persistently stored in the html folder under the current directory on the host.
The -p <server public IP>:80:80 option tells Docker to pass connections from the server’s HTTP port to the container’s internal port 80. Replace <server public IP> with the public IP address of your server. Instead of a public IP address, you can also use -p 8080:80 to tell Docker to forward the container’s port 80 to port 8080 on the server. To access the WordPress installation, open a browser on a computer in the same network as the server running the Docker daemon and head to http://<IP ADDRESS OF Docker SERVER>:8080.
Use docker inspect my_wordpress to get all the settings for the WordPress container. To check the log file for our WordPress container, run the docker logs -f my_wordpress command. You can stop a container with docker stop , start it again with docker start or restart it with docker restart . But if you have to change a parameter, like the port mapping, you’ll first have to stop the container, then remove it and then start another one with the new parameters using the docker run command.
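For example, changing the port mapping of our WordPress container boils down to a sequence like this (the other options are the same as in the docker run command above):

```shell
$ docker stop my_wordpress
$ docker rm my_wordpress
$ docker run -e WORDPRESS_DB_PASSWORD=<password> -d --name my_wordpress \
    --link db4wp:mysql -v $(pwd)/html:/var/www/html -p 8080:80 wordpress
```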
While the Docker CLI is very well documented, it isn’t the most intuitive mechanism for creating containers. This is where the docker-compose tool comes in: it lets you define and run containers, and makes it particularly easy to roll out multiple containers at once. A Compose file is essentially a human-readable YAML document that lists the characteristics or options of one or more containers, which can then be brought to life with a single command.
To demonstrate its advantages over the Docker CLI, we’ll recreate our MariaDB and WordPress containers with Docker Compose. First install the latest version by pasting the cURL command mentioned in the Docker Compose documentation (docs.docker.com/compose/install/#install-compose). When you’ve got Compose up and running, change into the ~/wordpress folder and create the docker-compose.yaml file:
$ cd ~/wordpress
$ vi docker-compose.yaml
db4wp:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: <password>
    MYSQL_DATABASE: wordpress_db
  volumes:
    - ./database:/var/lib/mysql
my-wp:
  image: wordpress
  volumes:
    - ./html:/var/www/html
  ports:
    - "8080:80"
  links:
    - db4wp:mysql
  environment:
    WORDPRESS_DB_PASSWORD: <password>
The options are the same as before, only more verbose. Save the file and then type docker-compose up -d to create both the containers. Use docker-compose logs -f to monitor the output of the containers.
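Compose mirrors the per-container commands; a typical session from inside the ~/wordpress folder might look like this:

```shell
$ docker-compose up -d    # create and start both containers
$ docker-compose ps       # check their status
$ docker-compose stop     # stop them without removing anything
$ docker-compose down     # stop and remove the containers
```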
CREATE A DOCKER IMAGE
As we know every Docker container is an instance of a Docker image. Sure, there’s a huge repository of pre-built Docker images available in Docker Hub and elsewhere. But just as we manually fleshed out the Ubuntu image earlier, we can automate the process and ask Docker to build us a custom image using a base image.
To build a Docker image we need to create a Dockerfile, which is a plain text file with instructions and arguments to assemble an image. Refer to the table (below) for a list of commands that go inside a Dockerfile. You don’t have to use every command. In fact, here’s a fully functional Dockerfile:
$ vi Dockerfile
## specify the base image
FROM ubuntu:artful
## enable the Universe repository
RUN sed -i 's/^#\s*\(deb.*universe\)$/\1/g' /etc/apt/sources.list
## update the repositories
RUN apt-get -y update
## install any available upgrades
RUN apt-get -y upgrade
## install the build-essential metapackage
RUN apt-get install -y build-essential
Remember, however, that while you can place the Dockerfile anywhere you want, when you build an image from it, any files and directories in the same location, or further down the filesystem in sub-directories, get included in the build. It’s a good idea to create a directory especially for the Dockerfile. Once a Dockerfile is written, you can use it to create an image:
$ docker build -t custom_ubuntu .
This command will build an image called custom_ubuntu based on the instructions in the Dockerfile in the current directory. When it’s done, you can confirm the image is available along with the other images using the
docker images command. You can now use this custom image to build containers.
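For instance, you could start a throwaway container from the new image and check that the build tools are in place:

```shell
$ docker run -it --rm custom_ubuntu /bin/bash   # --rm removes the container on exit
root@container:/# gcc --version                 # installed via build-essential
```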
There’s a lot more you can do with docker build. In fact, we’ve barely scratched the surface, but you should now be equipped with the tools and the know-how to experience the goodness and convenience of Docker containers.
Each command in the Dockerfile spins up a new container and commits a new image layer before moving to the next command.
Use the docker history command against an image to bring up the list of commands used to create it.
You can find all kinds of information including Dockerfiles for most images hosted on the Docker Hub.
The Docker Store is geared towards enterprise users and, in addition to free images, also hosts commercially supported images.
There are several open-source apps like Rancher that enable you to manage a Docker deployment via a point-and- click graphical user interface.