Get to grips with Docker and app containers

It took a lot of convincing for a lethargic Mayank Sharma to power down his virtual machines and learn to virtualise apps instead.


Even if you're totally disconnected from the realm of mortal beings, you'd still surely have heard of Docker and how it can solve all your IT problems. If you have somehow managed to isolate yourself from experiencing the fruits of Docker's goodness, here's your chance to absolve yourself.

Traditional virtualisation technologies provide full hardware virtualisation. This is to say that the virtual machine or hypervisor takes chunks of physical resources, such as CPU, storage and RAM, and slices them into virtual versions like virtual CPUs and virtual RAM. It then uses these virtual peripherals to build virtual machines that behave like regular physical computers. The isolated virtual environment is useful for testing a new distro, but is overkill when all you need to virtualise is a single program.

This is where Linux containers, by way of Docker, offer an attractive alternative. Docker enables you to bundle any Linux app with all its dependencies and its own environment. You can then run multiple instances of the containerised app, each as a completely isolated and separate process, with near-native runtime performance. That's because, unlike VMs, containers share the host system's kernel. This also means that you can host more containers than VMs on any given hardware, because of their lighter footprint.

THE LANGUAGE OF DOCKER

Docker is a container runtime engine that makes it easy to package applications and push them to a remote repository, from where other users can download and use them. Let's get familiar with some Docker terminology. Docker containers package software in a complete filesystem that includes everything an application needs to run. This ensures the app will always run the same way, irrespective of the environment Docker is running on.

A Docker image is the definition of a container. It's a collection of all the required executables, files, environment settings and more that make up an application, along with its dependencies. The image is a read-only version of your application that's often compared to an ISO file. To run this image, Docker creates a container out of it by cloning the image; this is what then actually executes. This arrangement makes Docker very scalable and enables you to run multiple containers from the same image.
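As a quick illustration (once Docker is installed, which we'll get to in a moment), the same image can back any number of containers. The container names here are just placeholders, and sleep is only used to keep each container alive for a while:

$ docker pull ubuntu
$ docker run -d --name silo-one ubuntu sleep 300
$ docker run -d --name silo-two ubuntu sleep 300
$ docker ps    # both containers were cloned from the one cached ubuntu image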

While Docker is available as a package in the official repositories of all popular distributions, it's best to fetch the latest version from the official Docker repository. Fire up a terminal, then fetch and execute the official install script with curl -sSL https://get.docker.com/ | sh to install Docker. Once it's installed, start the Docker service with sudo systemctl start docker and make sure it starts on subsequent boots with sudo systemctl enable docker .

Now type docker run hello-world to test the installation. The command downloads a special image from the official Docker registry that will greet you if all goes well and explain the steps it took to test your Docker installation.

Now let's jump straight in and start a new Docker container with the following command:

$ docker run -it --name alpha-silo ubuntu /bin/bash

With this command, we asked Docker to start a new container from the image called ubuntu. The -i option makes the session interactive and the -t option allocates a terminal. The container is named alpha-silo and runs the /bin/bash command once it's started.

When we issue the command, the Docker daemon will search for the ubuntu image in the local cache. When it doesn't find one, it downloads the image from Docker Hub. It'll take some time to download and extract all the layers of the image. Docker maintains container images in the form of multiple layers, and the good thing about this arrangement is that these layers can be shared across multiple container images, which makes the system very efficient. For example, if you have Ubuntu running on a server and you need to download an Apache container based on Ubuntu, Docker will only download the additional layer for Apache, as it already has the Ubuntu layers in the local cache and can reuse them.
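As an aside, if you're curious about what these layers look like, the docker history command lists the layers of any image in the local cache, for example:

$ docker history ubuntu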

Once this container has been started, it will drop you in a new shell running inside it. From here, you can interact with the shell just as you would on a normal installation. However, because containers are designed to be extremely lightweight, you only have access to a barebones environment.

When you are done, you can exit from the shell by typing exit or pressing 'Ctrl-D'. Outside the container, you can use the docker ps command to list all the containers and check the status of your last container. By default, the command only lists running containers; append the -a option to list stopped containers as well. To start the container again, use the docker start command, such as docker start -ia alpha-silo . The -i option will, as before, start the container in interactive mode and the -a option will attach your terminal to the container. If you start a container without any options, such as docker start alpha-silo , Docker will launch it in detached mode, which is to say that it won't attach the container to your terminal and will just keep it running in the background.

You can open a terminal inside a detached container with docker attach, like docker attach alpha-silo . To detach the terminal but keep the container running in the background, press the 'Ctrl-P-Q' key combination. To execute a command inside a running container, use docker exec, such as docker exec alpha-silo pwd to print the current working directory inside the container.

Remember we said containers are designed to be lightweight? If you list all the processes running inside our Ubuntu container with the docker exec alpha-silo ps -elf command, you'll notice that it's running bash and nothing else. That's why when we exit from the shell by typing exit, the action stops the container as well, since bash is the only process running in the container.

The docker stop alpha-silo command will gracefully stop the container after stopping the processes running inside it. When you no longer need the container, you can use docker rm to remove it, such as docker rm alpha-silo . The table (over the page) lists some frequently used Docker commands and their uses.
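A representative handful that you'll find yourself reaching for constantly (this is only a sampling, not the full table):

$ docker ps -a             # list all containers, including stopped ones
$ docker images            # list the images in the local cache
$ docker pull <image>      # download an image without running it
$ docker logs <container>  # show the output of a container
$ docker rm <container>    # delete a stopped container
$ docker rmi <image>       # delete a local image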

FLESH OUT THE CONTAINER

We've just created a minimal container named alpha-silo using the base Ubuntu image that doesn't do much. To get more out of this container, you can either download another image that uses the same base image but has more software baked in, or manually add software to the base image, just as you would on a regular install. Start an interactive shell inside the container and type:

$ apt update; apt install -y net-tools apache2

This command will update the repositories and install the net-tools package and the Apache web server inside the container. One of the cool things about Docker is that it enables you to save your customised container as a custom image that you can then use to spin up additional containers. So if we exit the container and type:

$ docker commit -a "Mayank Sharma" alpha-silo loaded-silo

...Docker will roll the customised container alpha-silo, with the updated repos and the Apache web server, into a custom image called loaded-silo. In the command, the -a option specifies the author of the image. Then comes the name of the container being imaged (alpha-silo) followed by the name of the new image (loaded-silo). The new loaded-silo image is now stored as a separate image on the server along with the others, as you can verify with the docker images command. You can now use this image to spin up new containers.
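For example, the following starts a fresh container (the name beta-silo is just an arbitrary choice) that already has the updated repos, net-tools and Apache baked in:

$ docker run -it --name beta-silo loaded-silo /bin/bash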

REAL-WORLD CONTAINERS

As we've said earlier, a Docker container is an instance of a Docker image. Docker pulls images from repositories that live inside registries.

The default Docker registry is Docker Hub, which hosts a bunch of official and user-contributed unofficial repositories, each of which in turn contains a number of images.

So head over to hub.docker.com to browse through a library of pre-built Docker images. To get familiar with Docker, we'll use it to install the WordPress blogging app. The WordPress image on Docker Hub doesn't include a database installation. So we'll first have to install a MariaDB database in a separate container and then ask the WordPress container to use it.

Start off by making a new directory where you wish to store the files for WordPress and MariaDB, for example in your home directory:

$ mkdir ~/wordpress
$ cd ~/wordpress

Then pull the latest MariaDB image and start it in a container with:

$ docker run -e MYSQL_ROOT_PASSWORD=<password> -e MYSQL_DATABASE=wordpress_db --name db4wp -v $(pwd)/database:/var/lib/mysql -d mariadb

The -e option sets the environment variables for the container, such as the database password and its name. Replace <password> above with your own. The --name option defines the name of the container. The most interesting option is -v $(pwd)/database:/var/lib/mysql . It asks Docker to map the two specified locations that are separated by the colon (:). On the right is the /var/lib/mysql directory that exists within the container and is used to store the database files. The command asks Docker to place these files under the /database folder in the current working directory on the host, to ensure that the data persists even after we restart the container. The -d option tells Docker to run the container in detached daemon mode in the background.

This command will download the latest version of the official MariaDB image and put it inside a container with the specified settings. You can confirm that the MariaDB container is running with docker ps .

You can also break the process into two steps, which is what we'll do for WordPress. First we'll just download the WordPress image with docker pull wordpress and then build a container for it, with:

$ docker run -e WORDPRESS_DB_PASSWORD=<password> -d --name my_wordpress --link db4wp:mysql -v $(pwd)/html:/var/www/html -p <server public IP>:80:80 wordpress

Make sure you set the -e WORDPRESS_DB_PASSWORD variable to the same password as that of the MariaDB database. The --link db4wp:mysql option links the WordPress container with the MariaDB container so that the applications can talk to each other. The -v option serves the same function as it did for the database, and makes sure that the container's contents under the /var/www/html directory are persistently stored in the /html folder under the current directory on the host.

The -p <server public IP>:80:80 option tells Docker to pass connections from the server's HTTP port to the container's internal port 80. Replace <server public IP> with the public IP address of your server. Instead of a public IP address, you can also use -p 8080:80 to tell Docker to forward the container's port 80 to port 8080 on the host. To access the WordPress installation, open a browser on a computer in the same network as the server running the Docker daemon and head to http://<IP ADDRESS OF DOCKER SERVER>:8080.
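If you're ever unsure how a running container's ports are mapped, the docker port command reports the current mapping, for example:

$ docker port my_wordpress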

Use docker inspect my_wordpress to get all the settings for the WordPress container. To check the log file for our WordPress container, run the docker logs -f my_wordpress command. You can stop a container with docker stop , start it again with docker start or restart it with docker restart . But if you have to change a parameter, like the port mapping, you'll first have to stop the container, remove it, and then start a new one with the new parameters using the docker run command.
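For example, assuming you originally published WordPress on port 8080 and wanted to move it to port 9090 (both arbitrary choices), the sequence would look something like this:

$ docker stop my_wordpress
$ docker rm my_wordpress
$ docker run -e WORDPRESS_DB_PASSWORD=<password> -d --name my_wordpress --link db4wp:mysql -v $(pwd)/html:/var/www/html -p 9090:80 wordpress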

DOCKER COMPOSE

While the Docker CLI is very well documented, it isn't the most intuitive mechanism for creating containers. This is why you'll want to use the docker-compose tool to define and run containers. The tool makes it particularly easy to roll out multiple containers. It's essentially built around a human-readable YAML data serialisation format that lists the characteristics or options of one or more containers, which can then be brought to life with a single command.

To demonstrate its advantages over the Docker CLI, we'll recreate our MariaDB and WordPress containers with Docker Compose. First install the latest version by pasting the cURL command mentioned in the Docker Compose documentation (docs.docker.com/compose/install/#install-compose). When you've got Compose up and running, change into the ~/wordpress folder and create the docker-compose.yaml file:

$ cd ~/wordpress
$ vi docker-compose.yaml

db4wp:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: <password>
    MYSQL_DATABASE: wordpress_db
  volumes:
    - ./database:/var/lib/mysql
my-wp:
  image: wordpress
  volumes:
    - ./html:/var/www/html
  ports:
    - "8080:80"
  links:
    - db4wp:mysql
  environment:
    WORDPRESS_DB_PASSWORD: <password>

The options are the same as before, only more verbose. Save the file and then type docker-compose up -d to create both containers. Use docker-compose logs -f to monitor the output of the containers.
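When you no longer need the stack, Compose can tear it down again:

$ docker-compose stop    # stop the containers but keep them around
$ docker-compose down    # stop and remove the containers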

CREATE A DOCKER IMAGE

As we know, every Docker container is an instance of a Docker image. Sure, there's a huge repository of pre-built Docker images available on Docker Hub and elsewhere. But just as we manually fleshed out the Ubuntu image earlier, we can automate the process and ask Docker to build us a custom image using a base image.

To build a Docker image we need to create a Dockerfile, which is a plain text file with instructions and arguments to assemble an image. Refer to the table (below) for a list of commands that go inside a Dockerfile. You don't have to use every command. In fact, here's a fully functional Dockerfile:

$ vi Dockerfile

## specify the base image
FROM ubuntu:artful
## enable the Universe repository
RUN sed -i 's/^#\s*\(deb.*universe\)$/\1/g' /etc/apt/sources.list
## update the repositories
RUN apt-get -y update
## install any available upgrades
RUN apt-get -y upgrade
## install the build-essential metapackage
RUN apt-get install -y build-essential
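Beyond FROM and RUN, a handful of other instructions (a representative sample, not the full table) crop up in almost every Dockerfile:

## COPY <src> <dest>   -- copy files from the build context into the image
## ENV <key>=<value>   -- set an environment variable inside the image
## EXPOSE <port>       -- document the port the application listens on
## WORKDIR <path>      -- set the working directory for later instructions
## CMD ["command"]     -- the default command a container runs when started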

Remember, however, that while you can place the Dockerfile anywhere you want, when you build an image from it, any files and directories in the same location, or further down the filesystem in its sub-directories, get included in the build. It's a good idea to create a directory especially for placing the Dockerfile. Once the Dockerfile is written, you can use it to create an image:

$ docker build -t custom_ubuntu .

This command will build an image called custom_ubuntu from the Dockerfile in the current directory. When it's done, you can confirm the image is available along with the other images using the docker images command. You can now use this custom image to spin up new containers.
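For example, the following starts an interactive shell in a container (the name is, again, an arbitrary choice) built from the new image:

$ docker run -it --name build-silo custom_ubuntu /bin/bash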

There's a lot more you can do with docker build. In fact, we've barely scratched the surface, but you should now be equipped with the tools and the know-how to experience the goodness and convenience of Docker containers.

Each command in the Dockerfile spins up a new container and commits a new image layer before moving to the next command.

Use the docker history command against an image to bring up the list of commands used to create it.

You can find all kinds of information, including Dockerfiles, for most images hosted on the Docker Hub.

The Docker Store is geared towards enterprise users and, in addition to free images, also hosts commercially supported images.

There are several open-source apps, like Rancher, that enable you to manage a Docker deployment via a point-and-click graphical user interface.
