Microservices with Docker and Kubernetes: An Overview

OpenSource For You. By: Shashidhar Soppin. The author is a senior architect with 16+ years of experience in the IT industry, spanning virtualisation, the cloud, Docker, open source software, ML, deep learning and OpenStack. He is part of the product engineering team at Wipro.

Docker is an open source platform used to build, ship and run distributed services. Kubernetes is an open source orchestration platform for automating the deployment, scaling and operation of application containers across clusters of hosts. Microservices structure an application as a set of modular services. Here's a quick look at why these are so useful today.

The Linux operating system has become very stable and can cleanly sandbox processes for easy execution; it also offers better namespace control. This has driven the development and enhancement of various container technologies. Some of the features of the Linux OS that support container development are:

Only the required libraries get installed in their respective containers.

Custom containers can be built easily.

In the early days, LXC (Linux Containers) was very popular and was the foundation for the development of various other container technologies.

Namespaces and control groups (process-level isolation). A brief history of containers is outlined in Table 1.

Some of the advantages of containers are:

Containers are more lightweight than virtual machines (VMs).

Docker, one of the container standards, builds concisely on this container platform; it runs as a daemon inside the Linux OS.

Containers make our applications portable; they can be easily built, shipped and deployed.


Containers are an encapsulation of an application with its dependencies. They look like lightweight VMs, but that is not the case: a container holds an isolated user-space instance of the operating system, which is used to run applications.

The architecture diagram of the Docker container in Figure 1 shows how the individual components are interconnected.

Various components of the Docker container architecture

The Docker daemon (generally referred to as dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client (also called ‘docker’), through which many Docker users interact with Docker, can communicate with more than one daemon.

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anybody can use, and Docker is configured to look for images on Docker Hub by default.

Note: When we use the docker pull or docker run commands, the required images are pulled from the configured registry. When we use the docker push command, the image is pushed to the configured registry.
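In practice, these commands look like the following (a hedged sketch: ‘registry.example.com’ is a placeholder for a private registry, and nginx is simply a convenient public image):

```shell
# Pull an image from the default registry (Docker Hub)
docker pull nginx:latest

# Run it; docker run pulls the image automatically if it is not present locally
docker run -d --name web nginx:latest

# Re-tag the image and push it to a configured private registry
docker tag nginx:latest registry.example.com/team/nginx:latest
docker push registry.example.com/team/nginx:latest
```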

When we use Docker, we create and use images, containers, networks, volumes, plugins and various other such entities. These are called Docker objects.

Thus containers are fundamentally changing the way we develop, distribute and run software on a daily basis. These developments and the advantages of containers have helped advance microservices technology. Microservices are small services running as separate processes, where each service is aligned with a separate business capability. Listing the advantages of microservices over monolithic applications (as given below) helps users understand and appreciate the beauty of the former. In microservices:

A single application is broken down into a network of processes.

All these services communicate using REST or message queues (MQ).

Applications are loosely coupled.

Scaling up applications is a lot easier.

There is very good isolation between services: when one fails, the others can continue.

What is Kubernetes and why should one use it?

Kubernetes is an open source orchestrator for deploying containerised applications (microservices). It can also be described as a platform for creating, deploying and managing various distributed applications, which may be of different sizes and shapes. Kubernetes was originally developed by Google to deploy scalable, reliable systems in containers via application-oriented APIs. It is suitable not only for Internet-scale companies but also for cloud-native companies of all sizes. Some of the advantages of Kubernetes are listed below:

Kubernetes provides the software necessary to build and deploy reliable, scalable distributed systems. Its container APIs bring the following benefits:

Velocity: features can be shipped quickly while the system stays available.

Scaling: Kubernetes favours scaling through decoupled architectures, load balancers and consistent behaviour.

Abstraction: applications built and deployed on top of Kubernetes can be ported across different environments. Developers are decoupled from specific machines, and this abstraction reduces the overall number of machines required, thus reducing the cost of CPUs and RAM.

Efficiency: developer test environments can be created cheaply and quickly via Kubernetes clusters and shared as well, reducing the cost of development. Kubernetes continuously takes action to ensure that the current state matches the desired state.

The various components involved in Kubernetes

Pods: groups of containers that can combine container images developed by different teams into a single deployable unit.

Namespaces: these provide isolation and access control for each microservice, controlling the degree to which other services can interact with it.

Kubernetes services: these provide load balancing, discovery, isolation and naming of microservices.

Ingress: objects that provide an easy-to-use front end (an externalised API surface area).
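A minimal sketch of how these components fit together, assuming a working cluster reachable via kubectl (the names ‘shop’ and ‘cart’ are illustrative):

```shell
# Namespace: isolates the microservice and scopes access control
kubectl create namespace shop

# Pod: run a single-container pod in that namespace
kubectl run cart --image=nginx:latest -n shop

# Service: gives the pod a stable name and a load-balanced virtual IP
kubectl expose pod cart --port=80 -n shop

# Other workloads can now discover it as cart.shop.svc.cluster.local
```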

Running and managing containers using Kubernetes

As described earlier, Kubernetes is a platform for creating, deploying and managing distributed applications. Most of these applications take an input, process the data and produce results as output. They typically contain a language runtime, libraries (such as libc and libssl) and source code.

A container image is a binary package that encapsulates all of the files necessary to run an application inside an OS container. The Open Container Initiative (OCI) image format is the standard that is most widely used.
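As a sketch (assuming Docker is installed; the image name is illustrative), such an image can be built from an inline Dockerfile and run as a container:

```shell
# Build an image from a Dockerfile supplied on stdin (no build context needed)
docker build -t hello:1.0 - <<'EOF'
FROM alpine:3.19
CMD ["echo", "hello from a container image"]
EOF

# Run the resulting image
docker run --rm hello:1.0
```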

Container types fall into two categories:

System containers, which try to imitate virtual machines and may run full boot processes.

Application containers, which run single applications. These images can be run from the CLI using the docker run -d --name command.

The default container runtime used by Kubernetes is Docker, as it provides an API for creating application containers on both Linux and Windows based operating systems.

kuard (the ‘Kubernetes Up and Running’ demo application) listens on port 8080 and can also be explored using a Web interface.
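To try kuard locally (assuming Docker is installed; the image path below is the one published for the ‘Kubernetes: Up and Running’ demo):

```shell
# Run kuard detached and map container port 8080 to the host
docker run -d --name kuard -p 8080:8080 gcr.io/kuar-demo/kuard-amd64:blue

# The Web interface is then reachable at http://localhost:8080
docker stop kuard && docker rm kuard   # clean up when done
```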

Docker provides many features by exposing the underlying ‘cgroups’ technology of the Linux kernel. With this, the following resource usage can be managed and monitored:

Memory resource management and limits

CPU resource management and limits
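These cgroup-backed limits map directly onto docker run flags; for example (reusing the kuard demo image as an illustration):

```shell
# Cap memory at 200 MB (1 GB including swap) and CPU at one core
docker run -d --name kuard \
  --memory 200m --memory-swap 1G \
  --cpus 1 \
  -p 8080:8080 gcr.io/kuar-demo/kuard-amd64:blue
```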

Deploying a Kubernetes cluster

Kubernetes can be installed on the three major cloud providers: Amazon’s AWS, Microsoft’s Azure and Google’s Cloud Platform (GCP). Each cloud provider offers its own container service platform.

Kubernetes can also be installed locally using Minikube. Minikube simulates a Kubernetes cluster, and is mainly intended for experimentation, local development or learning purposes.

Kubernetes can also be run on IoT platforms like the Raspberry Pi, for IoT applications and low-cost projects. A Kubernetes cluster has multiple components, such as:

Kubernetes Proxy: routes network traffic for load-balancing services (https://kubernetes.io/docs/getting-started-guides/scratch/)

Kubernetes DNS: a DNS server for naming and discovery of the services defined in the cluster

Kubernetes UI: the GUI used to manage the cluster

Pods in Kubernetes

A pod is a collection of application containers and volumes running in the same execution environment. Pods are the smallest deployable artifacts in a Kubernetes cluster. Every container within a pod runs in its own cgroup but shares a number of Linux namespaces. Pods can be created using the following command in the CLI: kubectl run kuard

Most pod manifests are written in YAML or JSON, but YAML is preferred as it is in a human-readable format.
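A minimal pod manifest for the kuard demo image might be sketched and applied straight from the shell like this (names are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  containers:
    - name: kuard
      image: gcr.io/kuar-demo/kuard-amd64:blue
      ports:
        - containerPort: 8080
EOF
```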

There are various kubectl command options for running or listing pods.
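Some commonly used kubectl options for working with pods (the pod name ‘kuard’ is illustrative):

```shell
kubectl get pods              # list pods in the current namespace
kubectl describe pod kuard    # detailed status and recent events
kubectl logs kuard            # show the container's logs
kubectl delete pod kuard      # remove the pod
```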

Labels, annotations and service discovery

Labels are key-value pairs that can be attached to Kubernetes objects such as pods and ReplicaSets. Labels help in finding information about Kubernetes objects, serve as metadata for those objects and support grouping of objects.

Annotations provide a place to store additional metadata for Kubernetes objects, for use by assisting tools and libraries.

Labels and annotations go hand in hand; however, annotations are used to provide extra information about where an object came from, how it is used and what its policies are.
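For illustration (the pod name and keys below are hypothetical), labels and annotations can be attached and queried from the CLI:

```shell
# Attach a label, then select objects by it
kubectl label pod kuard app=demo
kubectl get pods -l app=demo

# Attach an annotation: free-form metadata for tools and libraries
kubectl annotate pod kuard example.com/build-date='2019-01-15'
```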

A comparison between Docker Swarm and Kubernetes

Both Kubernetes and Docker Swarm are popular container orchestration platforms. Docker has also started supporting and shipping Kubernetes with its CE (Community Edition) and EE (Enterprise Edition) releases.

Docker Swarm is Docker’s native clustering. Originally, it did not provide much by way of container automation, but since Docker Engine 1.12, container orchestration has been built into its core with first-party support.

It takes some effort to get Kubernetes installed and running, compared to the faster and easier Docker Swarm installation. Both have good scalability and high-availability features built in. Hence, one has to choose the right one based on the need of the hour. For more details, refer to https://www.upcloud.com/blog/docker-swarm-vs-kubernetes/.


[1] https://kubernetes.io/docs/getting-started-guides/scratch/
[2] https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
[3] https://docs.docker.com/engine/docker-overview/#docker-architecture
[4] https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-intro/
[5] https://www.upcloud.com/blog/docker-swarm-vs-kubernetes/

Figure 1: Docker container architecture (http://apachebooster.com/kb/what-is-a-docker-container-for-beginners/docker-architecture/)

Figure 2: Kubernetes architecture

Figure 3: Kubernetes cluster (https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-intro/)
