Integration of a Simple Docker Workflow with Jenkins Pipeline

This article is a tutorial on integrating the Docker workflow with Jenkins Pipeline.

OpenSource For You

In this article, we will work with the pipeline script, Jenkinsfile, to which we will add the functionality to download a simple image from Docker Hub, build an identical copy of the image, start a container from that image, run a simple test and, finally, if the test passes, tag and publish the built image to a registry hosted in Docker Hub.

The prerequisites are:

A Jenkins v2.x standard install with the Pipeline plugin suite.

A build agent capable of running Docker commands, configured as a node in the master.

Basic knowledge of the Groovy DSL to write pipeline code (either scripted or declarative syntax).


Other than the set of plugins bundled as part of a Jenkins installation, ensure the plugins listed below are available as well, as they are essential to run the pipeline script that we will develop in this article: (Docker Commons) (Docker Pipeline) (Docker Traceability) (Credentials Binding)

All the code snippets shown in the example below follow the declarative syntax, as it is easier to get started with pipeline-as-code, which is especially attractive to beginners. Following this syntax, the complete pipeline-as-code is contained within a template that follows this pattern:

    pipeline {
        // agent, environment, options and parameters go here
        // various stages to execute the build go here
        //   build commands or scripts to perform various tasks go here
        // tasks related to post-build go here
    }
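As a minimal concrete instance of this template (a hypothetical sketch for orientation, not the article's final Jenkinsfile; the stage names and the GREETING variable are illustrative), a declarative pipeline could look like:

```groovy
pipeline {
    agent any                      // run on any available node
    environment {
        GREETING = 'Hello'         // illustrative environment variable
    }
    stages {
        stage('build') {
            steps {
                echo "${GREETING}: build commands go here"
            }
        }
        stage('test') {
            steps {
                echo 'test commands go here'
            }
        }
    }
    post {
        always {
            echo 'tasks related to post-build go here'
        }
    }
}
```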

It is good to explore the links given below for a quick glance at the pipeline syntax, as well as the steps and examples. They are quite handy and serve as a great reference.

Docker support

Let us begin by looking at the agent section, which has the provision, amongst others, to support Docker builds in the pipeline. The documentation provides details on the various options within this section; let us have a brief look at the options available for Docker.

Option 1: For simple use cases that involve the pipeline (a node on which a build job runs, as per the label), this suffices: a Docker image serves as the build agent (a.k.a. node, formerly known as slave).

    pipeline {
        agent {
            docker {
                image 'name-of-image'
                label 'preconfigured-node-to-download-this-image'
            }
        }
    }

Option 2: The pipeline will execute the stage(s) using the container built from the Dockerfile located in the source repository. For this to work, the Jenkinsfile must be loaded either from a multi-branch pipeline or from a pipeline from SCM. Here, the agent is instantiated using the Dockerfile.

    pipeline {
        agent { dockerfile true }
    }

Option 3: The pipeline will execute the stage(s) using a container built on an agent from a custom Dockerfile sourced from SCM.

    pipeline {
        agent {
            dockerfile {
                filename 'Dockerfile.hello-world'
                label 'preconfigured-node-to-download-this-image'
            }
        }
    }

Each option serves different purposes and possesses advantages over the others. While the first option is limited in its usage of args (args can be added before the script, but this can become cluttered and difficult to maintain), the second and third options rely on the presence of a Dockerfile in the repository that hosts the source code.

It should be noted that all three options are valid usage, and the pipeline will start with the agent set up with Docker args, as shown in Figure 1.

Jenkinsfile, the pipeline script referred to earlier, is set up to pick any build agent that is capable of running Docker commands. In production environments, this kind of indiscriminate resource usage is not recommended and can lead to potentially unstable builds in the long run. It is a good practice to pin the pipeline to specific agent(s) that carry the label given in the pipeline script.

While a dedicated build agent is capable of serving the build job requests routed by the master, the other usage pattern is to spin up build agent(s) (in the cloud or on-premises) on demand, as per the system configuration in the Jenkins master. The provision to create a build agent when one is not available greatly reduces the upfront investment in capacity planning, but one should be aware that the provisioning duration, i.e., the agent creation time, will delay the actual start of the build until such an agent is online and can communicate with the master.

Hello World

Let us refer to a simple (public) image from the Docker Hub registry: Based on the description provided at Docker Hub, this image is meant to test Docker deployments. It has Apache with a 'Hello World' page listening on port 80.

The rest of the article will focus on developing the pipeline script gradually. However, the final and functional version can be accessed from the docker-build branch that hosts the script, in the following public GitHub repository:…manathan/jenkins_pipeline_demo/blob/be129179271b1b0341727f93a399fb34d8133c6d/Jenkinsfile.

And the associated Dockerfile (two lines of code) is quite basic, as shown in Figure 5. This is stored at the root of SCM, and is available in the same public GitHub repository:…manathan/jenkins_pipeline_demo/blob/1e725e0c98e59766fd1fc4fba3c98276146fb5e6/Dockerfile.
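The exact contents are shown in the figure, but a plausible two-line Dockerfile for this workflow (an assumption based on the public 'Hello World' image, not a copy of the author's file) could be:

```dockerfile
# Base the build on the public 'Hello World' Apache image from Docker Hub
FROM tutum/hello-world
# Document that Apache inside the container listens on port 80
EXPOSE 80
```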

As noted earlier, the pipeline script refers to any available node, where the build will be scheduled by the Jenkins master.

Registry settings

On successful completion of validation, the pipeline script will push the image to the repository, raspamdocker/osfy, which is hosted at Docker Hub. The repository's name and URL will be set in the environment section of the pipeline script, as shown in Figure 2. This global setting is visible to all the stages that follow, and can be accessed like any other environment variable.
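Such an environment section might be sketched as follows (the variable names REGISTRY_URL and IMAGE are illustrative assumptions; the repository name is the one used in this article):

```groovy
pipeline {
    agent any
    environment {
        // Docker Hub registry URL (assumed variable name, for illustration)
        REGISTRY_URL = ''
        // repository that the validated image will be pushed to
        IMAGE = 'raspamdocker/osfy'
    }
    stages {
        stage('show') {
            steps {
                // environment variables are visible in every stage
                echo "Will push ${IMAGE} to ${REGISTRY_URL}"
            }
        }
    }
}
```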

In contrast to the scripted syntax, in the declarative syntax the individual stages are grouped within a single global enclosing block called 'stages'.

Pipeline script from SCM

Unless the default checkout operation is disabled via skipDefaultCheckout (which should go in the options section), the pipeline will automatically clone the contents of the GitHub repository, as set up in the project configuration in Jenkins.
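For reference, disabling the implicit checkout looks like this (a small sketch; the explicit checkout scm step shown here is one common way to take back control of when the clone happens):

```groovy
pipeline {
    agent any
    options {
        // suppress the automatic 'Declarative: Checkout SCM' step
        skipDefaultCheckout()
    }
    stages {
        stage('prep') {
            steps {
                // perform the clone explicitly, at a time of our choosing
                checkout scm
            }
        }
    }
}
```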

As shown in Figure 3, a project set up in the Jenkins instance managed by the author is configured to source the pipeline script from SCM; in this case, from a public Git repository hosted on GitHub. The Jenkinsfile's name and path are set in the Script Path field, while the branch name provided in Branches to build helps to locate the script in that branch.

The Additional Behaviours section (located just below the Repository Browser field) helps to speed up the Git clone operation by setting options like minimising the refspecs to be used, timing out the operation, turning the downloading of tags on or off, etc. This is meant for power users and is a big time-saver if the repository to be cloned has a substantial history of refspecs (branches, tags, etc.).

Git commit's SHA-1

The first stage, prep (short for preparatory), runs a simple Git command, the output of which is processed further with the aid of basic UNIX commands to get the trimmed seven-character SHA-1 of the Git commit. The output is stored in an environment variable, as shown in Figure 4. While the Git command is comprehensible, there are a few other items in this code snippet that should be explained.

The entire Git command is contained within the sh step, and we record the output (returnStdout: true) that we get after executing the command. The output from the command is passed through the trim function to strip extra whitespace. This (trim) functionality came in via a JIRA ticket of the Jenkins project.
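A sketch of such a prep stage (the variable name GIT_COMMIT_SHORT and the exact Git invocation are assumptions; the article's actual command is shown in Figure 4):

```groovy
stage('prep') {
    steps {
        script {
            // run git inside the sh step and capture stdout;
            // trim() strips the trailing newline from the output
            env.GIT_COMMIT_SHORT = sh(
                returnStdout: true,
                script: 'git rev-parse --short=7 HEAD'
            ).trim()
            echo "Building commit ${env.GIT_COMMIT_SHORT}"
        }
    }
}
```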

Note: Any operation that demands the usage of Groovy language constructs within the pipeline script (applicable only to the declarative syntax) should be enclosed in a script block. But how does one know when to use this block and when not to? Flow control structures, variable definitions and function calls are all fit to go inside the script block. And, of course, it is only by continuous practice that one gets to comprehend the usage of this block.

Access to Docker-related functions

With the installation of the plugins (as listed in the Plugins section), the pipeline script has access to a host of Docker-specific build commands via the docker variable (Figure 5 shows a list of functions offered by the docker variable). Navigate to this page by clicking on the Pipeline Syntax option on the project page and selecting the option Global Variables Reference (shown in Figures 6 and 7).

Access to Docker commands via the docker variable makes it very convenient to write a simple pipeline script using the different functions; otherwise, this would have to be done by wrapping the regular docker commands inside the sh step. For extensive reading on this topic, refer to the Jenkins documentation on using Docker with Pipeline, which provides code samples that are very handy and covers various use cases.
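By way of illustration, here is how a couple of functions on the docker variable compare with their sh equivalents (a sketch using the docker.image function and the inside method from the Docker Pipeline plugin; the image name is the public one used in this article):

```groovy
script {
    // obtain a handle to the public image via the docker variable...
    def helloWorld = docker.image('tutum/hello-world')
    helloWorld.pull()

    // ...instead of wrapping the raw CLI in an sh step:
    // sh 'docker pull tutum/hello-world'

    // functions such as inside(), run() and push() are
    // similarly available on the image object
    helloWorld.inside {
        sh 'echo running a step inside the container'
    }
}
```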

Docker build

The build stage (Figure 8) shows running without any arguments other than ${IMAGE}; however, it can take additional arguments via --build-arg. This behaviour is identical to docker build, so the argument list must end with the build context. The output is saved to a variable, which contains the image object that will be used in the following stages.
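A sketch of such a build stage (assuming an IMAGE environment variable as set up in the environment section; builtImage is an illustrative name for the saved image object):

```groovy
stage('build') {
    steps {
        script {
            // behaves like 'docker build' and, like it,
            // must end with the build context ('.' here)
            builtImage ="${env.IMAGE}", '.')
        }
    }
}
```

Assigning to builtImage without def inside the script block keeps the object visible to the stages that follow.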

The console output (Figure 9) from the execution of the build stage helps to trace the sequence of events in the build console log from a successful run, as per the code snippet shown in Figure 8. From the log, it is clear that Docker used the cache while executing one of the steps, because there were a few repeated runs of the build job (on the same build agent).

Image validation

Remember the note about 'Hello World' mentioned earlier while introducing this public image hosted at Docker Hub: the image runs Apache on port 80.

If you were building the image locally on your laptop or desktop, a new container could be started using the image. Get to know the port using the docker port command, and test it using a Web browser by pointing to the appropriate port on the local host. It is port 80 in the running container, but different on the local host. The challenge lies in how to simulate this flow in a CI environment.

Starting a Docker container

Let us start a container that is based on the newly built image and use the curl command with a few options (which will be explained), as shown in Figure 10. Starting a new container simulates, programmatically using pipeline-as-code, what was possible in a Web browser.

Line 33 is another example showing the usage of a function from the docker variable; this helps to start a new container exposing port 80, and its output is stored in a variable, container. To know the port mapping on the host, Line 34 runs the pipeline equivalent of the docker port command.
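Those lines might be sketched as follows (the names container, hostPort and builtImage are illustrative, with builtImage standing for the image object saved by the build stage; the run function is part of the Docker Pipeline plugin, and '-p 80' asks Docker to map container port 80 to a randomly chosen host port):

```groovy
script {
    // start a container from the built image, publishing port 80
    // to a randomly chosen port on the host
    container ='-p 80')

    // pipeline equivalent of 'docker port <container-id> 80';
    // returns something like ''
    hostPort = sh(
        returnStdout: true,
        script: "docker port ${} 80"
    ).trim()
    echo "Apache is reachable at ${hostPort}"
}
```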

To reinforce this, let us look at Figure 11, which is an extract of the build console output generated from executing Lines 33 to 35 as part of a full run of the build job.

Note: On every start of a new container using the 'Hello World' image (whether via docker run on the CLI or by running a new job of the project in Jenkins), the port mapping on the local host will be different.

Querying Apache's port

From Figure 11, we know the port mapping on the local host (here it is 32788, but it could be different in your setup) for this run of the build job. Now, curl can be used to hit this port on the local host, i.e., the build agent in the context of the build job being executed. We will be interested in the HTTP response code as confirmation of having reached the running container successfully.
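Putting this together, the test and publish steps might be sketched as follows. This is an assumption-laden illustration: hostPort is assumed to hold a string such as '' captured via docker port, builtImage the image object returned by, and env.GIT_COMMIT_SHORT the short commit SHA from the prep stage; docker.withRegistry with a Jenkins credentials ID (courtesy of the Credentials Binding plugin) is one common way to tag and push.

```groovy
script {
    // -s silences progress, -o /dev/null discards the body,
    // -w '%{http_code}' prints only the HTTP response code
    def responseCode = sh(
        returnStdout: true,
        script: "curl -s -o /dev/null -w '%{http_code}' http://localhost:${hostPort.tokenize(':').last()}"
    ).trim()

    if (responseCode == '200') {
        // test passed: tag and publish the image to Docker Hub
        docker.withRegistry('', 'docker-hub-credentials-id') {
            builtImage.push("${env.GIT_COMMIT_SHORT}")
            builtImage.push('latest')
        }
    } else {
        error "Unexpected HTTP response: ${responseCode}"
    }
}
```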

Figure 2: Values for the repository and the registry

Figure 3: Pipeline configuration in the Jenkins project

Figure 5: Functions accessible via the docker variable

Figure 4: Retrieve the SHA-1 ID

Figure 9: Console output

Figure 8: The build image

Figure 11: Query the port mapping on the host

Figure 10: Start a new container

Figure 12: Testing the running container (a continuation of the block of code in the test stage)
