DevOps Series: Provisioning with Ansible

Ansible is one of the simplest ways to automate applications and IT infrastructure, and it meshes well with DevOps workflows for deploying applications. In this ninth article in the series on DevOps, we explore the use of Ansible for launching Docker containers and provisioning virtual machines.

OpenSource For You | By: Shakthi Kannan. The author is a free software enthusiast and blogger.

Provisioning is the first step in an application’s deployment process. In a cloud environment, software can run in a Docker container, in a virtual machine or on bare metal, and Ansible can be used to provision all such systems. In this article, we explore how to use Ansible to launch Docker containers and provision virtual machines.

Setting it up

Let’s create an Ansible playbook for the ‘Get started with Docker Compose’ composetest example available at docs.docker.com/compose/gettingstarted/. The host system used is Ubuntu x86_64. You will need to install Docker CE and docker-compose on Ubuntu. Follow the installation guide provided at docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-using-the-repository to install Docker CE. You can then install docker-compose using the APT package manager:

$ sudo apt-get install docker-compose

The composetest/ folder consists of the following files:

composetest/
composetest/app.py
composetest/docker-compose.yml
composetest/Dockerfile
composetest/provision.yml
composetest/requirements.txt

The app.py file contains a basic Flask application that communicates with a backend Redis database server. Its contents are as follows:

from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
    count = redis.incr('hits')
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
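To see what the view does on each request, here is a minimal sketch of the hit-counting logic, using a plain dict in place of the Redis backend (an illustration only; the real application calls redis.incr against the Redis container):

```python
# Simulate the view's counter: each "request" increments a stored hit count
# and formats the response text, just as the Flask view does via Redis.
def incr(store, key):
    store[key] = store.get(key, 0) + 1
    return store[key]

store = {}
responses = ['Hello World! I have been seen {} times.\n'.format(incr(store, 'hits'))
             for _ in range(3)]
print(responses[-1])  # count reaches 3 after three simulated requests
```

Because the real counter lives in Redis rather than in the web process, the count survives restarts of the web container and is shared by all web replicas.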

An HTTP request to the Flask application returns the text string ‘Hello World! I have been seen N times.’ This will be run inside a Docker container. The requirements.txt file lists the dependencies required for the project:

flask
redis
Now let’s provision a minimalistic Docker container that has support for Python and is based on Alpine (a security-oriented, lightweight GNU/Linux distribution). The Dockerfile for the application is provided below for reference:

FROM python:3.4-alpine

ADD . /code

WORKDIR /code

RUN pip install -r requirements.txt

CMD ["python", "app.py"]

The docker-compose.yml file is used to create the images and will also be used by Ansible. It defines the services that will be deployed in the containers:

version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"


The Python web application will run on port 5000, while the Redis database server will listen on port 6379. We first bring up the application using the following command:
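A quick way to confirm that a mapped port is accepting connections is a small TCP probe. The sketch below demonstrates the idea against a throwaway local listener (an illustration only, not the real web or Redis containers):

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: bind a temporary listener on an ephemeral port and probe it.
listener = socket.socket()
listener.bind(('127.0.0.1', 0))   # port 0 asks the OS for a free port
listener.listen(1)
port = listener.getsockname()[1]
open_now = is_port_open('127.0.0.1', port)
listener.close()
closed_now = is_port_open('127.0.0.1', port)
```

Once the stack is up, the same probe pointed at localhost ports 5000 and 6379 would tell you whether the web and Redis services are reachable.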

$ docker-compose up

Creating composetest_web_1
Creating composetest_redis_1
Attaching to composetest_web_1, composetest_redis_1
redis_1  | 1:C 05 Oct 11:40:49.067 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1  | 1:C 05 Oct 11:40:49.067 # Redis version=4.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1  | 1:C 05 Oct 11:40:49.067 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1  | 1:M 05 Oct 11:40:49.070 * Running mode=standalone, port=6379.
redis_1  | 1:M 05 Oct 11:40:49.070 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1  | 1:M 05 Oct 11:40:49.070 # Server initialized
redis_1  | 1:M 05 Oct 11:40:49.070 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1  | 1:M 05 Oct 11:40:49.070 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1  | 1:M 05 Oct 11:40:49.070 * Ready to accept connections
web_1    | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1    | * Restarting with stat
web_1    | * Debugger is active!
web_1    | * Debugger PIN: 100-456-831

If you start a browser on the host system and open the URL http://localhost:5000, you will see the text from the Flask application. You can keep refreshing the page to make requests to the application, and you will see the count increase in the text: ‘Hello World! I have been seen N times.’ Pressing Ctrl+C in the above terminal will stop the application. Let’s now create an Ansible playbook to launch these containers:

- name: Provision Flask application
  hosts: localhost
  connection: local
  become: true
  gather_facts: true
  tags: [setup]

  tasks:
    - docker_service:
        project_name: composetest
        definition:
          version: '2'
          services:
            web:
              build: "{{ playbook_dir }}/."
              ports:
                - "5000:5000"
            redis:
              image: "redis:alpine"
      register: output

    - debug:
        var: output

    - assert:
        that:
          - "web.composetest_web_1.state.running"
          - "redis.composetest_redis_1.state.running"

The above playbook can be invoked as follows:

$ sudo ansible-playbook provision.yml --tags setup

The docker_service module is used to compose the services: a web application and a Redis database server. The output of launching the containers is stored in a variable and is used to ensure that both the backend services are up and running. You can verify that the containers are running from the docker ps command output, as shown below:
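Conceptually, the assert task walks the registered output variable and checks each container’s running flag. The Python sketch below mirrors that check against an illustrative nested structure (modelled on, but not guaranteed to match, the module’s exact return format):

```python
# Illustrative stand-in for the result registered in the 'output' variable:
# service -> container -> state -> running flag.
output = {
    'web':   {'composetest_web_1':   {'state': {'running': True}}},
    'redis': {'composetest_redis_1': {'state': {'running': True}}},
}

def running(result, service, container):
    """Walk the nested result and return the container's running flag."""
    return result[service][container]['state']['running']

# The playbook's assert passes only when both services report running.
all_up = (running(output, 'web', 'composetest_web_1') and
          running(output, 'redis', 'composetest_redis_1'))
```

Registering the result and asserting on it makes the playbook fail fast if either container did not come up, rather than silently continuing.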

$ docker ps

CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                    NAMES
03f6f6a3d48f   composetest_web   "python app.py"          18 seconds ago   Up 17 seconds   0.0.0.0:5000->5000/tcp   composetest_web_1
fa00c70da13a   redis:alpine      "docker-entrypoint..."   18 seconds ago   Up 17 seconds   6379/tcp                 composetest_redis_1


You can use the docker_service Ansible module to increase the number of web services to two, as shown in the following Ansible playbook:

- name: Scale the web services to 2
  hosts: localhost
  connection: local
  become: true
  gather_facts: true
  tags: [scale]

  tasks:
    - docker_service:
        project_src: "/home/guest/composetest"
        scale:
          web: 2
      register: output

    - debug:
        var: output

    - name: Start container two
      docker_container:
        name: composetest_web_2
        image: composetest_web
        state: started
        ports:
          - "5001:5000"
        network_mode: bridge
        networks:
          - name: composetest_default
            ipv4_address: ""

The above playbook can be invoked as follows:

$ sudo ansible-playbook provision.yml --tags scale

The execution of the playbook will create one more web application server, and this one will listen on port 5001. You can verify the running containers as follows:

$ docker ps

CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                    NAMES
66b59eb163c3   composetest_web   "python app.py"          9 seconds ago    Up 8 seconds    0.0.0.0:5001->5000/tcp   composetest_web_2
4e8a37344598   redis:alpine      "docker-entrypoint..."   11 seconds ago   Up 10 seconds   0.0.0.0:6379->6379/tcp   composetest_redis_1
03f6f6a3d48f   composetest_web   "python app.py"          55 seconds ago   Up 54 seconds   0.0.0.0:5000->5000/tcp   composetest_web_1

You can open another tab in the browser with the URL http://localhost:5001 on the host system, and the count in the text will continue to increase as you refresh the page.

Cleaning up

You can stop and remove all the running instances. First, stop the newly created web application, as follows:

$ docker stop 66b

You can use the following Ansible playbook to stop the containers that were started using docker-compose:

- name: Stop all!
  hosts: localhost
  connection: local
  become: true
  gather_facts: true
  tags: [stop]

  tasks:
    - docker_service:
        project_name: composetest
        project_src: "{{ playbook_dir }}/."
        state: absent

The above playbook can be invoked using the following command:

$ sudo ansible-playbook provision.yml --tags stop

You can also verify that there are no containers running on the system using the docker ps command.

Refer to the Ansible docker_service module’s documentation at docs.ansible.com/ansible/latest/docker_service_module.html for more examples and options.

Vagrant and Ansible

Vagrant is free and open source software (FOSS) that helps build and manage virtual machines. It allows you to create machines using different backend providers such as VirtualBox, Docker, libvirt, etc. It is developed by HashiCorp, is written in the Ruby programming language, and was first released in 2010 under an MIT licence. The Vagrantfile describes the virtual machine using a Ruby DSL, and an Ansible playbook can be executed as part of the provisioning process.

The following dependencies need to be installed on the host Ubuntu system:

$ sudo apt-get build-dep vagrant ruby-libvirt

$ sudo apt-get install qemu libvirt-bin ebtables dnsmasq virt-manager

$ sudo apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev

Vagrant 1.8.7 is then installed on Ubuntu using a .deb package obtained from the Vagrant website. Issue the following command to install the vagrant-libvirt provider:

$ vagrant plugin install vagrant-libvirt

The firewalld daemon is then started on the host system, as follows:

$ sudo systemctl start firewalld

A simple Vagrantfile is created inside a test directory to launch an Ubuntu guest system. Its contents are given below for reference:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define :test_vm do |test_vm|
    test_vm.vm.box = "sergk/xenial64-minimal-libvirt"
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end

When provisioning with the above Vagrantfile, a minimalistic Xenial 64-bit Ubuntu image is downloaded and started, and the Ansible playbook is executed after the instance is launched. The contents of the playbook.yml file are as follows:

---
- hosts: all
  become: true
  gather_facts: no

  tasks:
    - name: Install python2
      raw: sudo apt-get -y install python-simplejson

    - name: Update apt cache
      apt: update_cache=yes

    - name: Install Apache
      apt: name=apache2 state=present

The minimal Ubuntu machine has Python 3 by default, while the Ansible version we use requires Python 2. Hence, we install the Python 2 prerequisites using the raw module, update the APT cache and install the Apache web server. A sample debug execution of the above playbook from the test directory is given below:

$ VAGRANT_LOG=debug sudo vagrant up --provider=libvirt

Bringing machine 'test_vm' up with 'libvirt' provider...
==> test_vm: Creating image (snapshot of base box volume).
==> test_vm: Creating domain with the following settings...
==> test_vm:  -- Name:              vagrant-libvirt-test_test_vm
==> test_vm:  -- Domain type:       kvm
==> test_vm:  -- Cpus:              1
==> test_vm:  -- Feature:           acpi
==> test_vm:  -- Feature:           apic
==> test_vm:  -- Feature:           pae
==> test_vm:  -- Memory:            512M
==> test_vm:  -- Management MAC:
==> test_vm:  -- Loader:
==> test_vm:  -- Base box:          sergk/xenial64-minimal-libvirt
==> test_vm:  -- Storage pool:      default
==> test_vm:  -- Image:             /var/lib/libvirt/images/vagrant-libvirt-test_test_vm.img (100G)
==> test_vm:  -- Volume Cache:      default
==> test_vm:  -- Kernel:
==> test_vm:  -- Initrd:
==> test_vm:  -- Graphics Type:     vnc
==> test_vm:  -- Graphics Port:     5900
==> test_vm:  -- Graphics IP:
==> test_vm:  -- Graphics Password: Not defined
==> test_vm:  -- Video Type:        cirrus
==> test_vm:  -- Video VRAM:        9216
==> test_vm:  -- Sound Type:
==> test_vm:  -- Keymap:            en-us
==> test_vm:  -- TPM Path:
==> test_vm:  -- INPUT:             type=mouse, bus=ps2
==> test_vm: Creating shared folders metadata...
==> test_vm: Starting domain.
==> test_vm: Waiting for domain to get an IP address...
==> test_vm: Waiting for SSH to become available...
    test_vm: Vagrant insecure key detected. Vagrant will automatically replace
    test_vm: this with a newly generated keypair for better security.
    test_vm:
    test_vm: Inserting generated public key within guest...
    test_vm: Removing insecure key from the guest if it's present...
    test_vm: Key inserted! Disconnecting and reconnecting using new SSH key...
==> test_vm: Configuring and enabling network interfaces...
==> test_vm: Running provisioner: ansible...
    test_vm: Running ansible-playbook...

PLAY ********************************************************

TASK [Install python2] **************************************
ok: [test_vm]

TASK [Update apt cache] *************************************
ok: [test_vm]

TASK [Install Apache] ***************************************
changed: [test_vm]

PLAY RECAP **************************************************
test_vm : ok=3  changed=1  unreachable=0  failed=0

Since you have installed virt-manager, you can now open the Virtual Machine Manager to see the instance running. You can also log in to the instance using the following command from the test directory:

$ vagrant ssh

After logging into the guest machine, you can find its IP address using the ifconfig command. You can then open a browser on the host system with this IP address to see the default Apache web server home page, as shown in Figure 1.

Figure 1: Apache Web server page
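If you want to script that last step, the guest’s IPv4 address can be pulled out of ifconfig-style output with a regular expression. The sample text below is illustrative, not captured from a real guest:

```python
import re

# Sample ifconfig-style output (illustrative; real addresses will differ).
sample = """eth0      Link encap:Ethernet  HWaddr 52:54:00:12:34:56
          inet addr:192.168.121.45  Bcast:192.168.121.255  Mask:255.255.255.0"""

# Match the 'inet addr:' field printed by the classic ifconfig on Xenial.
match = re.search(r'inet addr:(\d{1,3}(?:\.\d{1,3}){3})', sample)
ip = match.group(1) if match else None
```

With the extracted address in hand, pointing a browser (or curl) at http://<ip>/ reaches the Apache page without opening the guest manually.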
