How to Install and Use Docker on Ubuntu

In the last few years, server virtualisation has become very popular. We cannot imagine cloud computing without virtualisation. But the explosive growth in computing demands more efficient virtualisation solutions. This is where containers come into play.

OpenSource For You. By Narendra K. The author is a FOSS enthusiast. He can be reached at narendra0002017@gmail.com.

Containers are lightweight virtualisation solutions. They provide OS-level virtualisation without any special support from the underlying hardware. Namespaces and control groups form the backbone of containers in the GNU/Linux kernel, and container solutions are built on top of these features. Namespaces provide isolation for processes, the network, mount points and so on, while control groups limit access to available hardware resources.
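We can see namespaces in action without Docker; the sketch below uses the unshare utility from util-linux (root privileges assumed) to give a shell its own PID namespace, so it sees only its own processes:

```shell
# Create new PID and mount namespaces for the child shell and
# remount /proc inside them, so ps lists only the namespaced
# processes (the shell and ps itself).
sudo unshare --fork --pid --mount-proc sh -c 'ps aux'
```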

Sometimes, newbies get confused and think that server virtualisation and containerisation are the same thing. In fact, they are significantly different from each other. In server virtualisation, the OS is not shared; each VM instance has its own OS, whereas containers share the underlying OS. Each approach has advantages as well as disadvantages: a VM provides better isolation and security but compromises on performance, whereas a container compromises on isolation but delivers performance that is as good as bare hardware.

Containers have been around for quite a long time. Their roots can be found in UNIX’s chroot program. After chroot, many UNIX flavours implemented their own container variants, like BSD jails and Solaris zones. On the GNU/Linux platform, LXD, OpenVZ and LXC are alternatives to Docker. However, Docker is much more mature and provides many advanced functionalities, a few of which we will discuss in later sections of this article.

Setting up the environment

In this section, let’s discuss how to install Docker on an Ubuntu distribution, a task that is as simple as installing any other software on GNU/Linux. To install Docker and its components, execute the following commands in a terminal:

$ sudo apt-get update

$ sudo apt-get install docker docker.io docker-compose

That’s it! The installation can be done by executing just two commands.

Now, let us verify the installation by printing the Docker version. If everything is fine, it should display the installed Docker version. In my case, it was 1.13.1, as shown below:

$ docker --version

Docker version 1.13.1, build 092cba3

Now that we are done with the installation, let us briefly discuss a few important Docker components.

Docker Engine: This is the core of Docker. It runs as a daemon process and serves requests made by the client. It is responsible for creating and managing containers.

Docker Hub: This is the online public repository where Docker images are published. You can download images from this repository as well as upload your own custom images.

Docker Compose: This is one of the most useful components of Docker. It allows us to define a deployment’s configuration in a YAML file; once the configuration is defined, we can use it to perform deployments in an automated, repeatable manner.

In the later sections of this tutorial, we will discuss all these components briefly.

Getting hands-on with Docker

Now, that’s enough theory! Let’s get started with the practical aspects. In this section, we’ll learn about containers by creating them and performing various operations on them, like starting, stopping, listing and finally destroying them.

Creating a Docker container: A container is a running instance of a Docker image. Wait, but what is a Docker image? It is a bundled package that contains the application and its runtime. To create a ‘busybox’ container, execute the following command in a terminal:

# docker run busybox

Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d070b8ef96fc: Pull complete
Digest: sha256:2107a35b58593c58ec5f4e8f2c4a70d195321078aebfadfbfb223a2ff4a4ed21
Status: Downloaded newer image for busybox:latest

Let us understand what happens behind the scenes. In the above example, we are creating a ‘busybox’ container. First, Docker searches for the image locally. If it is present, it is used; otherwise, it gets pulled and the container is created out of that image. But from where does it pull the image? Obviously, it pulls it from the Docker Hub.

Listing Docker containers: To list all Docker containers, we can use the ps command, as follows:

# docker ps -a

The -a switch lists all containers, both running and stopped; without it, docker ps lists only running containers. This command shows various important attributes of each container, like its ID, image name, creation date, running status and so on.

Running a Docker container: Now, let us run the ‘busybox’ container again. We can use Docker’s ‘run’ command to do this (note that run always creates a fresh container; to restart an existing, stopped container, use the ‘start’ command instead).

# docker run busybox

As expected, this time the Docker image is not downloaded; instead, the local image is reused.

Docker detached mode: By default, a Docker container runs in the foreground. This is useful for debugging purposes but sometimes it is annoying. Docker provides detached mode, in which we can run the container as follows:

# docker run -d busybox

240eb2570c9def655bcb94c489435137057729c4bad0e61034f5f9c6fb0f8428

In the above command, the -d switch indicates detached mode. The command prints the container ID on stdout for further use.

Attaching to a running container: Once the container is started in detached mode, we can attach to it by using the attach command. We have to provide the container ID as an argument to this command. For instance, the command below attaches to a running container.

# docker attach 240eb2570c9def655bcb94c489435137057729c4bad0e61034f5f9c6fb0f8428

Accessing containers: If you observe carefully, the docker run busybox command starts the container, which exits immediately because it has no long-running process to execute. This is not very useful by itself. We can go inside the container environment using the following command:

# docker run -it busybox sh

In the above command, we have used the -it option (an interactive session with a pseudo-terminal) and sh as an additional argument. This gives us access to the container through a shell. It works like a normal terminal, where you can execute all the supported commands. To exit, type ‘exit’ or press Ctrl+D.
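The shell above belongs to a brand-new container. To get a shell inside a container that is already running, docker exec starts an additional process in it. A minimal sketch (the sleep command simply keeps the container alive, and the ID placeholder must be replaced with the ID printed by run):

```shell
# Keep a busybox container alive in the background for five minutes.
docker run -d busybox sleep 300

# Start an interactive shell inside that running container.
docker exec -it <container-ID> sh
```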

Display information about a container: By using the inspect command, we can obtain useful information about a container, like its ID, running state, creation date, resource consumption, networking information and much more. To inspect a container, execute the following command:

# docker inspect <container-ID>
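The inspect command prints a large JSON document. When only one field is needed, the --format option applies a Go template to the output; for example, to print just the running state (the container ID is a placeholder):

```shell
# Extract a single field from the inspect output instead of
# dumping the full JSON document.
docker inspect --format '{{.State.Status}}' <container-ID>
```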

Destroying a container: Once we are done with a container, we should clear it off the system; otherwise, it will keep consuming system resources. We can destroy a container using the rm command, as follows:

# docker rm <container-ID-1> <container-ID-2> ... <container-ID-N>
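Listing IDs by hand gets tedious once many containers pile up. A common pattern, sketched below, feeds the IDs printed by docker ps straight into docker rm to remove every exited container in one go:

```shell
# -a: include stopped containers, -q: print only IDs,
# --filter: keep only containers whose status is 'exited'.
# The resulting ID list is substituted into docker rm.
docker rm $(docker ps -aq --filter "status=exited")
```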

Working with Docker images

A Docker image is a blueprint for a container. It contains the application and its runtime. We can pull images from a remote repository and spawn containers from them. In this section, we will discuss various image-related operations.

To check Docker images, visit the official repository located at https://hub.docker.com. It hosts many images and provides detailed information about them, like their descriptions, supported tags, Dockerfiles and much more.

Listing images: To list all downloaded images, use the following command:

# docker images

Pull image: As the name suggests, this command downloads an image from a remote repository and stores it on the local disk. To download an image, we have to provide the image’s name as an argument. For instance, the command given below pulls the busybox image:

# docker pull busybox

Using default tag: latest
latest: Pulling from library/busybox
d070b8ef96fc: Pull complete
Digest: sha256:2107a35b58593c58ec5f4e8f2c4a70d195321078aebfadfbfb223a2ff4a4ed21
Status: Downloaded newer image for busybox:latest

Using tags: If we don’t provide any additional option, the pull command downloads the image tagged latest. We can see this in the previous command’s output, where it printed ‘Using default tag: latest’. To pull an image with a specific tag, we can provide the tag name with the pull command. For instance, the command given below pulls the image with the tag 1.28.1-uclibc:

# docker pull busybox:1.28.1-uclibc

1.28.1-uclibc: Pulling from library/busybox
Digest: sha256:2c3a381fd538dd732f20d824f87fac1e300a9ef56eb4006816fa0cd992e85ce5
Status: Downloaded newer image for busybox:1.28.1-uclibc

We can get image tags from the Docker Hub, located at https://hub.docker.com.

Getting the history of an image: Using the history command, we can retrieve historical data about an image, like its ID, creation date, author, size, description and so on. For instance, the following command shows the history of the ‘busybox’ image:

# docker history busybox

Deleting an image: Like containers, we can also delete Docker images. Docker provides the ‘rmi’ command for this purpose; the ‘i’ stands for image. For instance, to delete the ‘busybox’ image, execute the following command:

# docker rmi f6e427c148a7

Note: We have to provide an image ID, which we can obtain using the docker images command.

Advanced Docker topics

So far we have explored only the basics of Docker. This can be a good start for beginners. However, the discussion does not end here. Docker is a feature-rich application. So let’s now briefly discuss some advanced Docker concepts.

Docker Compose: Docker Compose can be used to deploy and configure an entire software stack in an automated fashion, rather than running the docker run command by hand and configuring each container manually. We define the configuration in a YAML file and use that file to perform the deployment. Shown below is a simple example of such a configuration:

version: '2.0'

services:
  web:
    image: "jarvis/acme-web-app"
  database:
    image: "mysql"

In the above ‘docker-compose.yaml’ file, we have defined the configuration under the ‘services’ dictionary and provided the image that should be used for each service.

To deploy the above configuration, execute the following command in a terminal:

# docker-compose up

To stop and destroy a deployed configuration, execute the following commands in the terminal:

# docker-compose stop

# docker-compose down

Mapping ports: Like any other application, we can run Web applications inside a container. But the challenge is how to allow outside users to access them. For this purpose, we can provide a port mapping using the ‘-p’ option, as follows:

# docker run -p 80:5000 jarvis/acme-web-app

In the above example, port 80 on the host machine is mapped to port 5000 inside the ‘acme-web-app’ container. Now, users can access this Web application using the host machine’s IP address.
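We can verify the mapping from the host itself; the quick check below assumes the hypothetical ‘acme-web-app’ image serves HTTP on its internal port 5000:

```shell
# The request hits host port 80 and is forwarded to port 5000
# inside the container.
curl http://localhost:80/
```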

Mapping storage: An application’s data is stored inside its container and, hence, when we destroy the container, the data is also deleted. To avoid this, we can map a volume from the container to a local directory on the host machine. We can achieve this by using the -v option, as follows:

# docker run -v /opt/mysql-data:/var/lib/mysql mysql

In the above example, we have mapped the container’s /var/lib/mysql volume to the local directory /opt/mysql-data. Because of this, the data persists even when the container is destroyed.

Mapping ports and volumes with Docker Compose: To map ports and volumes as part of the Docker Compose process, add the ports and volumes attributes in the ‘docker-compose.yaml’ file. After doing this, our modified file will look like what follows:

version: '2.0'

services:
  web:
    image: "jarvis/acme-web-app"
    ports:
      - "80:5000"
  database:
    image: "mysql"
    volumes:
      - "/opt/mysql-data:/var/lib/mysql"

Docker cluster: So far, we have worked with a single Docker host. This is a bare-minimum setup and is good enough for development and testing purposes. However, it is not enough for production, because if the Docker host goes down, the entire application will go offline. To overcome this single point of failure, we can provide high availability to containers by using a Swarm cluster.

A Swarm cluster is created with the aid of multiple Docker hosts. In this cluster, we can designate one of the nodes as the master and the remaining nodes as workers. The master is responsible for load distribution and for providing high availability within the cluster, whereas the workers host the Docker containers after coordinating with the master.
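The built-in swarm subcommands follow this split. A minimal sketch is shown below (the IP address is a placeholder, and note that Docker’s own documentation calls the master node a ‘manager’):

```shell
# On the master (manager) node; --advertise-addr is the address
# the workers will use to reach it.
docker swarm init --advertise-addr 192.168.1.10

# init prints a join command with a token; run it on each worker
# node to add that node to the cluster (2377 is the default
# cluster-management port).
docker swarm join --token <worker-token> 192.168.1.10:2377
```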

In this article, we have discussed the basics of Docker as well as touched upon some advanced concepts. The article is a good starting point for absolute beginners. Once you build a strong foundation in Docker, you can delve deeper into the individual topics that interest you.
