Maximum PC

Manage Your Docker Containers Easily


LAST ISSUE WE REVEALED how to set up your own headless server running Ubuntu Server, complete with remote access via the Cockpit web-based UI. We ended by introducing you to the world of Docker and containers, which enable you to run individual applications and services within their own isolated environments for security and stability reasons.

This tutorial picks up pretty much where that feature left off—so you need to refer back if you’ve not yet installed Cockpit, Docker, or the Cockpit-Docker plugin. If you’re looking to get started with Docker on another platform, the good news is that you can run it on Windows, macOS, and other flavors of Linux, too, complete with your choice of user-friendly front end—the box on page 62 reveals some of the options available.

We’ll open by revealing how Linux users can run containers as a non-root user for security purposes, plus step you through the process of finding, downloading, setting up, and running containers on your server. We’ll even show you how to get around any missing features in Cockpit-Docker by bypassing it and using the Terminal in conjunction with your PC’s text editor to quickly get more complex containers up and running. Ready to transform your new server? Let’s get started! –NICK PEERS

1 SET UP DEDICATED DOCKER USER

Once you’ve got Docker and Cockpit-Docker (or your choice of UI) up and running, Linux users should visit https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user for a guide to managing Docker without requiring access to “sudo.” Those running Cockpit-Docker have no need to create the required “docker” group—it’s been created for you. Switch to “Accounts” in Cockpit and click your user account. A new “Container Administrator” role has been added—checking this [Image A] adds your user to the “docker” group for the purposes of administering Docker from the command line, without having to precede commands with sudo .

>> Although Docker now supports rootless containers, which means they no longer need root access for security purposes, these come with restrictions that can make them impractical in some scenarios—specifically, containers that are accessed through ports lower than 1024. Rootless Docker isn’t set up by default, although it’s simple to do in Ubuntu if you feel so inclined (see https://docs.docker.com/engine/security/rootless/).

>> An alternative workaround exists, which works on a container-by-container basis. This involves configuring each container to run using a specific user—including non-administrators—rather than the main root account. This entails creating a dedicated user for that very purpose. Switch to Terminal in Cockpit and issue the following command:

$ sudo adduser docker --ingroup docker

>> This creates a dedicated “docker” user and makes it part of the “docker” group—you’re prompted to create a password during setup. We’ll be running all containers in this tutorial through this user and group, and you need to identify the docker user (UID) and group (GID) IDs:

$ id docker

>> Make a note of these—our Ubuntu server’s “docker” user, for example, returns a UID of 1001 and a GID of 998.
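If you’d rather capture these IDs in shell variables than jot them down, a short snippet like this works; it falls back to your current account if the “docker” user from this step doesn’t exist on the machine you try it on:

```shell
# Look up the numeric user ID (UID) and group ID (GID) that your
# containers will run under. Falls back to the current account if the
# "docker" user created above doesn't exist yet.
target=docker
id "$target" >/dev/null 2>&1 || target=$(id -un)
PUID=$(id -u "$target")
PGID=$(id -g "$target")
echo "PUID=$PUID PGID=$PGID"
```

You’ll feed these values into the PUID and PGID environment variables later in step 6.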

2 SET UP CONFIGURATION FOLDERS

A container’s own storage is volatile, so when you shut down or restart a container, all changes are wiped out and the container starts up afresh. To prevent your changes from being lost, Docker uses a system of mount points, which enables you to connect the container to folders on your hard drive, where data such as configuration settings can be stored securely.

>> The best place for configuration folders is a common folder within the “docker” user’s “home” folder. Log out of Cockpit as your own user, then log back in as the “docker” user. Open the Terminal and create your folder:

$ mkdir containers

>> By default, only the “docker” user has read/write access to this folder. Log out and back in as your own user account— if you’ve not already added this account to the “docker” group as outlined in the previous step, do so now (non-Cockpit users should type sudo usermod -a -G docker username to do so).

>> This gives your user account read-only access to the containers folder. If you needed read/write access for any reason, give the “docker” group full access from the Terminal:

$ sudo chmod -R 775 /home/docker/containers

3 DOWNLOAD DOCKER IMAGES

Time to start using Docker properly. Everything you need to run a container is housed in a Docker image, and there are thousands of examples at https://hub.docker.com, which is where Docker pulls the images from. Start your search for suitable containers here, using the box over the page for inspiration. You’ll find many applications feature multiple times—if there’s no obvious official entry, look for one from LinuxServer, or for the most popular or highly rated versions.

>> Once you’ve identified the image you wish to use, return to Cockpit and select “Docker Containers” from the left-hand menu. Click “Get new image” next to “Images,” type the name of your chosen container into the Image Search window that pops up, and a list of matching containers appears.

>> Select your chosen container from the list and click “Download” [Image B]. After a short pause, you should see the image being pulled under “Images.” Once complete, the image is available for you to create a container from.
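The same search and pull can be done from the Terminal if you prefer; this sketch uses LinuxServer’s Booksonic image purely as an example (and assumes a running Docker daemon), so substitute your own image name:

```shell
# Search Docker Hub from the command line (mirrors Cockpit's Image Search)
docker search booksonic

# Pull an image by name; the ":latest" tag is assumed if you omit one
docker pull linuxserver/booksonic

# Confirm the image has arrived
docker images
```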

4 SET UP YOUR DOCKER IMAGE

To create a container from your downloaded image, click the play button next to it to bring up the Run Image dialog. This contains all the configuration options you need to successfully run the container on your system. While some are automatically added by the image, it’s still a good idea to open the corresponding web page on https://hub.docker.com to see what options are available to you.

>> Start by changing the Container Name from the existing title (which is a random “adjective_surname” pair) to something more recognizable. Leave the “Command” field as it is, then decide if you wish to limit the container’s access to memory and CPU by checking the relevant box to set a RAM limit and/or alter the CPU priority—1,024 is the default setting here, which can be reduced to constrain the container or increased to give it more weight as you see fit [Image C]. Note that while RAM limits are absolute, CPU priority settings are only enforced when resources are scarce. Unlike the other settings on this screen, both can be amended later if you wish to make changes.

>> If you want to be able to access the container using tty, leave “With terminal” checked (if in doubt, uncheck it for security reasons). Leave “Link to another container” unchecked for a single stand-alone container.

5 CONFIGURE NETWORKING

By default, your container communicates with the outside world using a “bridge” network connection. This works by mapping ports on your host PC to those within the container itself, enabling you to communicate with the container through your host PC’s IP address. Although Docker supports different types of networking relationships between container and host, Cockpit-Docker only works with the standard bridge configuration. The simple workaround here is to set up your container via the command line, as outlined in step 8.

>> The “Ports” section is naturally where you set up these mapped ports. Leave “Expose container ports” checked and you should find the image has already defined the container’s ports that need access. The port on the container is listed on the left, and it’s up to you to choose which host port to map it to. In most cases, a simple 1:1 relationship is fine (so port 3012 to port 3012), but for more commonly used ports (typically 80 or 443), you need to set a different host port here [Image D]. If you need to define additional ports, simply click the “+” button next to an existing port to add a new one.

>> When choosing a different port to use, check out Packetlife.net’s cheat sheet (https://packetlife.net/media/library/23/common-ports.pdf), which reveals which port numbers to steer clear of to avoid clashing with common services.
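On the command line, the same port mappings use the -p host-port:container-port flag. The image name and port numbers below are illustrative—check your image’s Docker Hub page for the ports it actually exposes:

```shell
# Map host port 8080 to port 80 inside the container (avoiding a clash
# with anything already using 80 on the host), and map 3012 straight
# through 1:1. "-d" runs the container detached, in the background.
docker run -d --name example-app \
  -p 8080:80 \
  -p 3012:3012 \
  linuxserver/booksonic
```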

6 VOLUMES AND VARIABLES

The “Volumes” section is where you set up the mount points that connect virtual drives on your container to physical folders on your hard drive. There’s usually a “/config” folder, which is where your settings are stored so they survive container restarts, but you may want to map further folders, too (such as access to a folder on your PC containing all your audiobooks). Here it’s a case of manually entering the full host path (such as “/home/docker/containers/booksonic/config”) and setting the folder to default, read-only, or read-write as required. Read-only ensures the target folder’s contents can’t be altered by the container.
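From the command line, these mount points use the -v host-path:container-path flag, with an optional :ro suffix for read-only access. The paths below follow the layout created in step 2; the audiobooks folder and image name are illustrative:

```shell
# Persist the container's settings in the config folder created in
# step 2, and expose an audiobooks folder read-only so the container
# can't alter its contents.
docker run -d --name booksonic \
  -v /home/docker/containers/booksonic/config:/config \
  -v /home/docker/audiobooks:/audiobooks:ro \
  linuxserver/booksonic
```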

>> The next section covers the environment variables your container requires to run—these are marked with the “-e” tag on the container image’s home page on https://hub.docker.com. Again, many should be predefined, leaving you the simple task of verifying them and making changes where appropriate [Image E].

>> To run the container under your “docker” account for security purposes, you need to create the following two keys: PUID and PGID. Their values should match the UID and GID values you recorded using the “id” command in step 1. If you’re working with LinuxServer images, these keys should already be in place, ready to be edited.

>> There’s one final setting to consider before you launch your container: “Restart Policy,” which determines what happens when the container is stopped or halted (say, after a crash). There are four basic options: “No,” “On Failure,” “Always,” and “Unless Stopped.” They’re largely self-explanatory—the key difference between “Always” and “Unless Stopped” is that “Always” restarts the container whenever Docker itself restarts (such as after a server reboot), even if you had manually stopped it, while “Unless Stopped” leaves manually stopped containers alone.
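The command-line equivalents of these last few settings look like this—the PUID/PGID values are the ones our server returned in step 1, so use your own, and the image name remains illustrative:

```shell
# Run the container as the dedicated "docker" user's UID/GID, and have
# Docker restart it automatically unless it is manually stopped.
docker run -d --name booksonic \
  -e PUID=1001 \
  -e PGID=998 \
  --restart unless-stopped \
  linuxserver/booksonic
```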

7 LAUNCH AND TEST

Once you’ve configured your container, it’s testing time. Here’s where containers prove their worth: You can’t do any damage, so if you make a mistake, you can simply delete the container and start again. Go through the settings one final time, then click “Run.” You’re returned to the main Docker Containers screen, where your newly created container should now appear under “Containers,” its “State” listed as “running.”

>> Click this and you can view your container’s status, plus access controls for starting, restarting, stopping, and deleting the container. Scroll down to reveal a console screen enabling you to see what’s happening behind the scenes, plus a “Change resource limits” button, which lets you change RAM and CPU allocations without having to destroy and recreate the container.

>> If the container fails to start or the console reveals an error, simply shut it down and restart it. If the same problem occurs, shut down and delete the container, then try creating it afresh, double-checking your settings. Some containers may appear to exhibit errors while still working—once it’s up and running, and an IP address is assigned (in bridged mode, this IP address is different from your host PC’s IP address, which is the address you use to communicat­e with the server), you can check to see if you can connect from a client app or through your web browser. Once verified, you can go on to configure the server itself within its own web UI—when you shut down, restart, or recreate the container, all your changes are reapplied from your config folder.
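A quick way to test reachability from any client with curl installed—substitute your server’s IP address and the host port you mapped in step 5 (the values below are placeholders):

```shell
# Request just the HTTP headers; any response at all (even a redirect
# or an error page) proves the mapped port is reachable.
curl -I http://192.168.1.50:8080/
```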

>> One last thing: Most Docker images are periodically updated, but there’s no option to automatically update them. Instead, you need to pull the image afresh.
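In practice, updating means pulling the image again and recreating the container—your settings survive because they live in the mounted config folder, not in the container itself. A sketch of the cycle, using the illustrative container name from earlier:

```shell
# Fetch the latest version of the image
docker pull linuxserver/booksonic

# Remove the old container (its config persists on the host)
docker stop booksonic
docker rm booksonic

# ...then re-run your saved "docker run" command from step 8
```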

8 RUN CONTAINERS FROM THE TERMINAL

Finally, we’ve seen how to set up and launch containers from Cockpit-Docker, but there are times when its “Run Image” controls aren’t sufficient to meet your needs. It doesn’t support passing options such as --device=/dev/dri:/dev/dri (required to support Intel hardware acceleration in Jellyfin, for example) or using a different network relationship than the default bridge mode. It can also be time-consuming to set multiple things up again and again as you experiment with settings.

>> Thankfully, this is where a combination of Docker’s CLI and Cockpit’s support for full copy and paste in the Terminal pays dividends. First, locate the image’s page on https://hub.docker.com, where you should find a lengthy Docker CLI command with all the options, environment variables, volumes, and so on listed on separate lines. Select all this text, then copy it to a blank text file in your text editor. Save the file for future reference, then make the changes required to the script in your text editor—setting the correct volume points, for example, or adding new options as required (for example, --net=host to connect the container directly to your server’s host network). Make sure you add -d to the docker run line to ensure the container runs detached.

>> Once done, save your file, then press Ctrl-A to select all the text, switch to Terminal in Cockpit, and press Ctrl-V to paste it all in as shown above [Image F].

Press Enter and the container magically creates itself and starts up—and it appears in Cockpit-Docker, too, enabling you to monitor and control it from there.
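Pulled together, a saved script of the kind described above might look like the following. It’s a sketch built around the Jellyfin example mentioned earlier—the paths, PUID/PGID values, and media folder are illustrative, so copy the real template from your image’s page on hub.docker.com and adjust from there:

```shell
# Full "docker run" command combining the techniques from this tutorial:
# host networking, a passed-through device for Intel hardware
# acceleration, UID/GID mapping, persistent config, and a restart policy.
docker run -d \
  --name=jellyfin \
  --net=host \
  --device=/dev/dri:/dev/dri \
  -e PUID=1001 \
  -e PGID=998 \
  -v /home/docker/containers/jellyfin/config:/config \
  -v /home/docker/media:/media:ro \
  --restart unless-stopped \
  linuxserver/jellyfin
```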

