The Intricacies of Docker Networking

This article presents an overview of Docker networks and the associated networking concepts.

OpenSource For You

Customisation and the manipulation of default settings are an important part of software engineering. Docker networking has evolved from a few limited networks to customisable ones, and with the introduction of Docker Swarm and overlay networks, it has become very easy to deploy and connect multiple services running on Docker containers, irrespective of whether they run on a single host or on multiple hosts. To play around with Docker networking and tune it to our requirements, we first need to understand its fundamentals as well as its intricacies. Assigning specific IP addresses to containers rather than using the default IPs, creating user-defined networks, and enabling communication between containers running on multiple hosts are some of the challenges we will address in this article.

The idea of running machines on a machine is quite fascinating. This fascination was converted into reality when virtualisation came into existence. Virtual machines (VMs) are the basis of this process of virtualisation. VMs are used everywhere, from small organisations to cloud data centres. They enable us to run multiple operating systems simultaneously with the help of hardware virtualisation.

Along with the IT industry's shift towards microservices (the small independent services required to support complete software suites or systems), a need arose for machines that consume fewer computing resources than VMs. One of the popular technologies that addresses this requirement is containers. Containers are lightweight in terms of resources compared to VMs, and they take less time to create, delete, start or stop.

Docker is an open source tool that helps in the management of containers. It was launched in 2013 by a company called dotCloud. It is written in Go and uses Linux kernel features like namespaces and cgroups. Although it has been just five years since Docker was launched, communities have already shifted to it from VMs.

When we create and run a container, Docker assigns an IP address to it by default. Often, though, we need to create and deploy Docker networks as per our own requirements, and Docker lets us design such networks.

Let us look at the various ways of creating and using Docker networks.

Aspects of the Docker network

There are three types of Docker networks: default networks, user-defined networks and overlay networks. Let us now discuss tasks like: (i) the allocation of specific IP addresses to Docker containers, (ii) the creation of Docker containers in a specified IP address range, and (iii) establishing communication amongst containers that are running on different Docker hosts (multi-host Docker networking). Figure 1 depicts the topology of Docker networking.

Why Docker and not virtual machines?

We will not use virtual machines to perform the above tasks, because there are already many open source and commercial tools, such as VirtualBox and VMware, that can be used to set up the network configuration of virtual machines, and they provide good user interfaces. Performing the above tasks with a VM is therefore not a big challenge. With Docker, however, the community is still growing and, in our experience, no such tools are directly available; Docker provides only command line support for dealing with network configurations.

We assume that Docker is installed and running on your machine. Just like any other service in Linux, we can check whether it is running by using the $ service docker status command. The output should be similar to what is shown in Figure 2. We can also see other information, like the process ID of the Docker daemon, the memory it uses, etc.

If Docker is not already installed, the official installation guides will walk you through the concepts and the installation steps.

Docker networking

Docker creates various default networks, similar to what we get in VMs by using tools like VirtualBox, VMware and Kernel Virtual Machine (KVM). To get a list of all the default networks that Docker creates, run the command shown in Figure 3.

As can be seen in this figure, there are three types of networks:

1. Bridged network

2. Host network

3. None network

We can assign any one of these networks to a Docker container. The --network option of the 'docker run' command is used to assign a specific network to the container:

$ docker run --network=<network name>

In Figure 4, the network type of the container is given as 'host' by using the --network option of the 'docker run' command. The container is named demo1 using the --name option, and the Docker image is ubuntu. '-i' is for interactive, '-t' allocates a pseudo-TTY, and '-d' is the detach option, which runs the container in the background and prints the new container ID (see Figure 4).
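The full command described above can be sketched as a dry run: the echo prints the command instead of executing it, so no Docker daemon is needed. Drop the echo to actually start the container.

```shell
# Compose the docker run invocation described above: an ubuntu container
# named demo1, attached to the host network, detached (-d), with an
# interactive session (-i) and a pseudo-TTY (-t).
net=host
name=demo1
image=ubuntu
cmd="docker run -itd --network=$net --name $name $image"
echo "$cmd"
```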

The following command gives detailed information about a particular network:

$ docker network inspect <network name>

In Figure 5, since we configured the container to use the host network, we get information about the host network and about the container we created earlier. The output is in JSON format, with information about the containers connected to that network, its subnet and gateway, the IP address and MAC address of each connected container, and much more. The container demo1, which we created above, is shown in Figure 5.

Bridged network: When a new container is launched without the --network argument, Docker connects it to the bridge network by default. In bridged networks, all the containers on a single host can connect to each other through their IP addresses. This type of network is created when the span of Docker hosts is one, i.e., when all the containers run on a single host. To create a network that spans more than one Docker host, we need an overlay network.

Host network: Launching a Docker container with the --network=host argument pushes the container into the network stack of the host on which the Docker daemon is running. All interfaces of the host (the system the Docker daemon runs on) are accessible from a container assigned to the host network.

None network: Launching a Docker container with the --network=none argument puts the container in its own network stack. IP addresses are not assigned to containers in the none network, so they cannot communicate with each other.

So far, we have discussed the default networks created by Docker. In the next sections, we will see how users can create their own networks and define them as per their requirements.

Assigning IP addresses to Docker containers

When Docker containers are created, random IP addresses are allocated to them. An IP address is assigned only to a running container, and a container on the none network gets no IP address at all; so, to view a container's IP address, use the bridge network. On an ordinary host, the standard ifconfig command displays networking details, including the IP address of each network interface.

Since the ifconfig command is not available in a container by default, we use the following command to get the IP address. The container ID can be taken from the output of the $ docker ps command.

$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <Container ID>

In the above command, --format filters the output down to .NetworkSettings.IPAddress. Inspecting a network or container returns all the networking information related to it; .NetworkSettings.IPAddress picks out just the IP address from that information.

In addition, you can also look at the /etc/hosts file of the container to get its IP address. This file contains the networking information of the container, which includes the IP address. These two ways of checking the IP address of a container are shown in Figure 6.
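The template filtering can also be seen in isolation, without a running daemon. In the sketch below, the sample JSON is a heavily abbreviated, hypothetical stand-in for what `docker inspect` actually emits, and sed plays the role of the Go template:

```shell
# A tiny, hypothetical excerpt of the JSON that `docker inspect` emits.
sample='{"NetworkSettings":{"IPAddress":"172.17.0.2"}}'

# Extract the IPAddress field, mimicking what the Go template
# '{{ .NetworkSettings.IPAddress }}' does on the full output.
ip=$(printf '%s' "$sample" | sed -n 's/.*"IPAddress":"\([0-9.]*\)".*/\1/p')
echo "$ip"
```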

We can also assign any desired IP address to a Docker container. If we are creating a bunch of containers, we can specify the range of IP addresses within which the addresses of these containers should lie. This can be done by creating user-defined networks.

Creating user-defined networks

We can create our own networks, called user-defined networks, in Docker. These are used to define the network as per user requirements: which containers can communicate with each other, what IP addresses and subnets should be assigned to them, and so on. A user-defined network is created as shown below.

$ docker network create --subnet=172.18.0.0/16 demonet

In the command shown in Figure 7, network create is used to create a user-defined network. The --subnet option specifies the subnet in which the containers assigned to this network should reside, and demonet is the name of the network being created. This command also has a --driver option, which takes bridge or overlay; if the option is not specified, the network is created with the bridge driver by default. Hence, here the network demonet has the bridge network driver.

The containers we create in that network will have IP addresses in the specified subnet. After creating a network with a specific subnet, we can also specify the exact IP address of a container:

$ docker run --net <user-defined network name> --ip <IP address> -it ubuntu

The command in Figure 8 shows that the --net option of the docker run command is used to specify the network to which the container should belong, while --ip assigns a particular IP address to the container. Do remember that the IP address in Figure 8 should belong to the subnet specified in Figure 7.
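As a concrete instance, 172.18.0.22 is a hypothetical address chosen from the 172.18.0.0/16 subnet given to demonet earlier. The command is echoed as a dry run; remove the echo to actually create the container.

```shell
# Run an ubuntu container on the user-defined network demonet with a
# fixed IP address from demonet's 172.18.0.0/16 subnet (the address
# chosen here is hypothetical).
ip=172.18.0.22
cmd="docker run -itd --net demonet --ip $ip ubuntu"
echo "$cmd"
```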

Note: We can assign IP addresses to containers only in user-defined networks. We cannot assign specific IP addresses if the container is in one of the default networks (bridge, host or none). This can be seen in Figure 9.

Creating multiple containers in a specified address range

We have already created a user-defined network, demonet, with a specific subnet. Now, let us learn how to create multiple Docker containers whose IP addresses all fall in a given range.

We can write a shell script to create any number of containers, as required; we just have to execute the docker run command in a loop. In the docker run command, --net should be set to the name of the user-defined network, which in this case is demonet. The containers will be created with the names container1, container2, and so on.
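A minimal sketch of such a script is shown below. It echoes the commands as a dry run (drop the echo to actually create the containers), and the count of three containers is just an example:

```shell
#!/bin/sh
# Create N containers named container1, container2, ... on the
# user-defined network demonet. The commands are echoed instead of
# executed so the loop can be inspected without a Docker daemon.
N=3
i=1
while [ "$i" -le "$N" ]; do
  echo "docker run -itd --net demonet --name container$i ubuntu"
  i=$((i + 1))
done
```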

The IP addresses of the containers created in Figure 10 will be in sequence within the range specified while creating the user-defined network.

Establishing communication between containers running on different hosts

So far, we have been dealing with communication between containers running on a single host, but things get complicated when containers running on different hosts have to communicate with each other. For such communication, overlay networks are used. An overlay network enables communication between the various hosts on which the Docker containers are running.

Terminology

Before creating an overlay network, we need to understand a few commonly used terms: Docker Swarm, key-value store, Docker Machine and Consul.

Docker Swarm: This is a group or cluster of Docker hosts that have the Swarm service running on them; the Swarm service connects multiple Docker hosts into a cluster. The hierarchy of the Swarm service consists of a master and nodes, which lie in the same Swarm. First, we create a Swarm master and then add all the other hosts as Swarm nodes.

Key-value store: A key-value store is required to store information regarding a particular network. Information such as hosts, IP addresses, etc, is stored in it. Various key-value stores are available but, in this article, we will use only the Consul key-value store.

Consul: This is a service discovery and configuration tool that helps the client to have an updated view of the infrastructure. The Consul agent is the main process in Consul; it keeps information about all the nodes that are part of the Consul cluster and runs update checks regularly. The Consul agent can run in either server mode or client mode.

Docker Machine: This is a tool that allows us to install and run Docker on virtual hosts (virtual machines). We can manage these hosts using the Docker Machine commands or from the VirtualBox interface. Since Docker Machine itself uses the VirtualBox drivers, its commands cannot be run from inside a VirtualBox VM; they must be run from the host system.

Let's now create an overlay network. The four steps given below will help to do so.

Creating multiple virtual hosts with the key-value store

We start by creating the hosts. The first host runs the consul container, which is responsible for managing the key-value store of the network properties. Running the key-value store on the first host makes it easy for the hosts we create later to access it.

To create virtual hosts, install Docker Machine, which creates virtual hosts using the VirtualBox drivers. The command below is used for this:

$ docker-machine create -d virtualbox consul-node

The above command creates a virtual host, named consul-node, for the key-value store using the VirtualBox drivers. (Besides Consul, Docker also supports the Etcd and ZooKeeper key-value stores.) For this command to succeed, VirtualBox must be installed on the system. Use ssh to log in to the created virtual host, and run the commands given below from inside it to start the consul store. This sets up our first virtual host, with the key-value store running on port 8500.

$ docker-machine ssh consul-node

$ docker run -d --name consul -p "8500:8500" -h "consul" consul agent -server -bootstrap -client "0.0.0.0"

In the above command, -d is the detach option, which runs the container in the background and prints its container ID. --name gives the container a name, which in this case is consul. To publish the container's port to the host, -p is used: since the service runs inside the container and we need to reach it from outside, port 8500 inside the container is mapped to port 8500 of the host. -h sets the container hostname, given here as consul. The remaining arguments are passed to the Consul agent: -server runs the agent in server mode, -bootstrap allows this node to elect itself as the cluster leader, and -client "0.0.0.0" binds the client interfaces to all addresses.

Creating a Swarm master

Use the commands given in Figure 13 to create a Swarm master after setting up the key-value store.

In the command, the --swarm option configures the machine with Swarm, and --swarm-master configures that virtual host as the Swarm master. --engine-opt passes additional parameters, in key=value format, to the Docker engine. Consul is used as the external key-value store here; it stores the networking information of the Swarm network, and --swarm-discovery is used to locate the Swarm manager.
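Since the exact command appears only in Figure 13, here is a hedged reconstruction from the options described above. The key-value store IP is a placeholder (in practice, substitute `$(docker-machine ip consul-node)`), the cluster-advertise interface eth1:2376 is the usual VirtualBox setting but is an assumption here, and the command is echoed as a dry run:

```shell
# Hypothetical IP of the consul-node host; in practice use
# $(docker-machine ip consul-node).
KV_IP=192.168.99.100

# Echoed dry run of the Swarm master creation; drop "echo" to execute.
echo docker-machine create -d virtualbox \
  --swarm --swarm-master \
  --swarm-discovery="consul://$KV_IP:8500" \
  --engine-opt="cluster-store=consul://$KV_IP:8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-master
```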

Creating Swarm nodes

Next, we'll create Swarm nodes to make an overlay network of Docker containers. In this case, --swarm-master is not passed as an argument, which means this is a worker node of the Swarm and not a master node.
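A hedged reconstruction of the Figure 14 command follows: it is identical to the master's creation command except that --swarm-master is omitted, which is what makes the host a worker node. The key-value store IP is again a placeholder, and the command is echoed as a dry run:

```shell
# Hypothetical IP of the consul-node host; in practice use
# $(docker-machine ip consul-node).
KV_IP=192.168.99.100

# Echoed dry run of the Swarm worker-node creation; note: no --swarm-master.
echo docker-machine create -d virtualbox \
  --swarm \
  --swarm-discovery="consul://$KV_IP:8500" \
  --engine-opt="cluster-store=consul://$KV_IP:8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-node1
```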

So far, we have set up the Swarm master and a Swarm node. In the next step, the procedure to create an overlay network is given.

Figure 15 shows us the Swarm node, the Swarm master and the Swarm manager (consul node).

Creating multiple overlay networks

We will now create two overlay networks, and check the connectivity inside and across the networks. As in any other network, containers in the same overlay network can communicate with each other, but they cannot do so if they are in different networks.

1. Overlay Network 1 (overlay-net)

Go into one of the Swarm nodes using ssh:

$ docker-machine ssh swarm-node1

$ docker network create --driver overlay --subnet=13.1.10.0/24 overlay-net

2. Overlay Network 2 (overlay-net2)

Go into the Swarm master using ssh:

$ docker-machine ssh swarm-master

$ docker network create --driver overlay --subnet=14.1.10.0/24 overlay-net2

The --subnet option assigns the given subnet to the overlay network. We have now created two overlay networks. All the containers in the first overlay network, overlay-net, are able to communicate with all the other containers in that same network, but containers belonging to different overlay networks cannot communicate with each other. We will test that any container in the overlay-net network can connect to any other container in the same network, even when they run on different hosts, but cannot connect to a container in the overlay-net2 network, even when they run on the same host.

Testing multiple hosts and multiple networks

To test the networks we have created, we need a client and a server. Here, we take an Nginx server with a simple Web page hosted on it, and a client, which is a Docker container either inside or outside the network. If the client gets the hosted Web page on request, the client is in the same network as the Nginx server; if the server does not respond, the client and the Nginx server are in different networks. To avoid installing and setting up the Nginx server manually, we run it directly as a Docker container.

The following commands create an Nginx Docker container on the Swarm master with the network overlay-net, and run a Busybox container on swarm-node1, also with the same network, overlay-net.

$ docker-machine ssh swarm-master

$ docker run -itd --name=web --network=overlay-net nginx:alpine

$ docker-machine ssh swarm-node1

$ docker run -it --rm --network=overlay-net busybox wget -O- http://web

As both the Nginx and Busybox containers are on the same network (overlay-net), Busybox gets the Nginx page. However, it will not be able to access the httpd page, which is in the other network, even if both containers are running on the same host.

Let's create an httpd Docker container on swarm-node1 with the network overlay-net2, and run a Busybox container on the Swarm master with the same network.

$ docker-machine ssh swarm-node1

$ docker run -itd --name=web --network=overlay-net2 httpd:alpine

$ docker-machine ssh swarm-master

$ docker run -it --rm --network=overlay-net2 busybox wget -O- http://web

Figure 17 shows that this time we get the output from the httpd container, but not from the Nginx container.

This verifies our test of creating an overlay network on multiple hosts and accessing Docker containers using those networks.

Figure 1: Block diagram of Docker networking

Figure 2: To check if Docker is running

Figure 3: Listing the default Docker networks

Figure 4: Assigning a container to a specific network

Figure 5: Inspecting the host network

Figure 6: Checking the IP address of a container

Figure 7: Creating a user-defined network

Figure 8: The command to assign the network and IP address to the container

Figure 9: Error while assigning the IP address to a container within the default Docker networks

Figure 10: The script to create and run multiple containers in a specified IP address range

Figure 11: VirtualBox UI with a new virtual host (consul-node) created using Docker Machine

Figure 12: Consul node virtual host

Figure 13: Creating a Swarm master

Figure 14: Creating a Swarm node

Figure 15: List of created virtual hosts

Figure 16: Output from the Nginx server

Figure 17: Output from the httpd server
