OpenSource For You

Understanding Linux Containers

LXC, or Linux Containers, is an OS-level virtualisation method for running several isolated applications on a single parent control host system. Through containerisation, developers can package applications with their dependencies so that the apps can run on other host systems.

- By: Ashish Bhagchandani and Rutul Ganatra

Linux containers are applications that are isolated from the host system on which they run. Containers allow developers to package an application together with the libraries and dependencies it needs, and ship it all as one package. For developers and sysadmins, this makes the transfer of code from development environments into production rapid and replicable.

Containers are similar to virtual machines, but with containers one does not have to replicate a whole operating system. Containers need only the individual components required to operate. This boosts performance and reduces the size of the application.
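That kernel-sharing distinction is visible from inside any Linux process. A minimal sketch: a container is just an ordinary process placed into fresh copies of the kernel namespaces listed below, rather than a separate machine with its own kernel.

```shell
# Every Linux process runs inside a set of kernel namespaces
# (PID, network, mount, and so on). Tools like LXC and Docker
# build containers by giving a process fresh copies of these,
# while the kernel itself stays shared with the host.
ls /proc/self/ns
```

Because only namespaces and filesystem layers are created, no guest operating system boots, which is why containers start so much faster than virtual machines.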

A perfect example to understand containers

To understand containers better, just imagine that you are developing an application. You do your work on a laptop and your environment has a specific configuration, which might be different from that of other developers. The application you’re developing relies on that configuration and is dependent on specific files. Meanwhile, your business has test and production environments which are standardised with their own configurations and their own sets of supporting files. You want to emulate those environments as much as possible locally, but without all of the overhead of recreating the server environments. So, how do you make your app work across these environments, pass quality assurance and get your app deployed without massive headaches, rewriting or break-fixing? The answer is: containers.

The container that holds your application has the necessary configuration (and files) so that you can move it from development, to testing and on to production seamlessly. So a crisis is averted and everyone is happy!
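The packaging step described above can be sketched as an image definition. The Dockerfile below is a minimal, hypothetical example; the file names `app.py` and `requirements.txt` are assumptions for illustration, not taken from the article.

```Dockerfile
# Hypothetical image for a small Python application.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies first, so this layer is
# cached and rebuilt only when requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the same image can then be run unchanged on a developer laptop, in testing and in production, which is exactly the portability the example above relies on.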

That’s a basic example, but Linux containers can be applied to problems in many different ways wherever portability, configurability and isolation are needed. It doesn’t matter whether the infrastructure is on-premise, in the cloud or a hybrid of the two -- containers are the solution. Of course, choosing the right container platform is just as important as the containers themselves.

How do I orchestrate containers?

Orchestrators manage the nodes and containers that carry the workload, with the goal of maintaining a stable and scalable environment. This includes auto-scaling and self-healing capabilities, e.g., taking nodes offline on discovering abnormal behaviour, restarting containers, and setting resource constraints to maximise your cluster usage. Kubernetes, Docker Swarm, Docker EE and Red Hat OpenShift are widely used orchestration platforms.
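As a sketch of what an orchestrator actually manages, the hypothetical Kubernetes manifest below declares a replicated, resource-constrained deployment. The names `web` and `example-image` are assumptions for illustration.

```yaml
# Hypothetical Deployment: the orchestrator keeps three replicas
# running, restarting containers that fail (self-healing) and
# scheduling them onto nodes with spare capacity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example-image:1.0
        resources:
          limits:            # resource constraints to maximise cluster usage
            cpu: "500m"
            memory: "256Mi"
```

Auto-scaling can then be switched on declaratively, for example with `kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80`.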

Linux containers help reduce conflicts between your development and operations teams by separating areas of responsibility. Developers can concentrate on their apps and the operations team can focus on the infrastructure. Since Linux containers are based on open source technology, you get the latest and best advances as soon as they’re available. Container technologies, including CRI-O, Kubernetes and Docker, help your team simplify, speed up and orchestrate application development and deployment.

But isn’t this just virtualisation?

Yes and no. Here’s an easy way to think about the two: a virtual machine virtualises the hardware and runs a full guest operating system of its own, while a container virtualises at the operating system level, sharing the host’s kernel and isolating only the application and its dependencies.

Security of containers

The security of Linux containers is of paramount importance, especially if you are dealing with sensitive data, as in banking. Since different software is installed in different containers, it becomes very important to secure each container properly to ward off attacks. Also, all the containers share the same Linux kernel; so if there’s any vulnerability in the kernel itself, it will affect all the containers attached to it. This is the reason why some people consider virtual machines far more secure than Linux containers.

Although VMs are not totally secure either, owing to the presence of the hypervisor, the hypervisor is still less vulnerable because of its limited functionality. A lot of progress has been made in making containers safe and secure. Docker and other management systems these days have made it mandatory for their administrators to sign container images, to avoid the deployment of untrusted containers.

Here are some of the ways to make your containers more secure:

Updating the kernel

Access controls

Securing system calls
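Several of these hardening measures can be applied the moment a container is started. The `docker run` flags below are real Docker options, but the image name and seccomp profile file are hypothetical; treat this as a sketch rather than a complete hardening guide.

```shell
# Start a container with a reduced attack surface.
#   --read-only                       read-only root filesystem
#   --cap-drop=ALL                    drop all Linux capabilities (access control)
#   --security-opt no-new-privileges  forbid gaining extra privileges
#   --security-opt seccomp=...        restrict which system calls are allowed
#   --user 1000:1000                  run as an unprivileged user
docker run --read-only --cap-drop=ALL \
  --security-opt no-new-privileges \
  --security-opt seccomp=profile.json \
  --user 1000:1000 \
  example-image:1.0
```

Keeping the host kernel patched remains essential regardless, since every container on the host shares it.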

Advantages and disadvanta­ges of using containers

Advantages:

Running different servers on one machine minimises the hardware resources used and the power consumption.

It reduces time to market for the product, since less time is required for the development, deployment and testing of services.

Containers provide great opportunit­ies for DevOps and CI/CD.

Space occupied by container-based virtualisation is much less than that required for virtual machines.

Containers start in only a few seconds, so in data centres they can be brought up readily to handle higher loads.

Disadvanta­ges:

Security is the main bottleneck when implementi­ng and using container technology.

Efficient networking is also required to connect containers deployed in isolated locations.

Containerisation offers fewer choices with regard to the operating system than virtual machines do, since all containers must share the host’s kernel.

The most important thing about containers is the process of using them, not the containers themselves. This process is heavily automated: no longer are you required to install software by sitting in front of a console and clicking ‘Next’ every five minutes. The process is also heavily integrated, because the deployment of software is coupled with the software development process itself. This is why cloud native applications are being driven by application developers: they write the applications to deploy the software into software abstractions of infrastructure that they also wrote. When they check code in to GitHub, other code (that they wrote) notices and starts the process of testing, building, packaging and deployment. If you want, the entire process can be completely automated, so that checking in new code pushes it all the way to production.
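The check-in-to-production flow described above can be sketched as a CI pipeline. The GitHub Actions workflow below is a hypothetical example; the registry address, image name and `make test` target are assumptions about the repository, not details from the article.

```yaml
# .github/workflows/deploy.yml (hypothetical)
# On every push to main: test, build a container image, and publish it.
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test      # assumes the repo provides a test target
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Push image
        run: docker push registry.example.com/myapp:${{ github.sha }}
```

A deployment step (for example, updating the image tag in a Kubernetes Deployment) can be appended to make the push to production fully automatic.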

Ashish Bhagchandani is a data science enthusiast who loves to work on problems related to data analytics. He can be contacted at ashishbhagchandani98@gmail.com.

Rutul Ganatra is a Web developer proficient in computer languages like C, C++ and Java. He can be contacted at ganatra2309@gmail.com.

