APC Australia

BUILD THE ULTIMATE HOME SERVER

Alex Cox shows how you can use enterprise-level tech to make your home network extra awesome.


Excuse us being a little excitable, but you want the best server. A server that can do it all. A single machine, ready to give you everything you need — media, ad blocking, file serving, the works. But man, what a drag it is to put something like that together — installing all the software, getting it all working with your hardware, balancing the load so that one thing doesn’t completely neuter another. Nobody in their right mind would choose to do that if there were a better alternative. And there is. So while this is a look at creating the best server, we’ve skewed it as a look at the right way to go about it. Once you have the tools, adding something new to your server will take literally a couple of minutes.

Even better: you don’t really need any heavyweight silicon. Sure, some apps (we’re looking at you and your transcoding, Plex) would like more CPU cycles than you may be willing to offer, but for the most part, you could run a server full of apps on $35 hardware. Install the right thing, and you can keep tabs on all of the apps on your server, no matter what they are, through a single web interface, accessible from anywhere on your home network.

So building the best server isn’t about the apps. It’s about being able to do it properly, finding the right enabler, and using it the right way. We’ll take you through everything you need to know to construct a modern, scalable server using enterprise-class software components, without spending a cent on software. And yes, we’ll suggest some awesome things you could be running on it, but this isn’t our server, it’s yours, to do with as you will.

We’re not going to stop you building your server in any way you see fit. If you want to install everything traditionally, and run it in the same way you’d fire up software on your desktop, by all means go ahead and do that. But in the enterprise, server technology has moved on. As chipsets improved, RAM increased and processor headroom exploded, the market moved first toward virtualisation and, more recently, toward containerisation. And why should the enterprise have all the fun?

It’s important to outline the distinction between the two. Virtualisation, in the virtual machine sense, is big and heavy. It’s a whole operating system in a hefty chunk, and a virtualised system either places the entire demands of an OS on its host, or is forced to scale itself back if the resources aren’t available. A virtual machine relies on chipset trickery for intercommunication with shared hardware, and can get its hooks in pretty deep — it’s not unreasonable to get performance close to bare metal with a VM.

Containerisation is not virtualisation in the same sense. A container isn’t an OS — it’s software, everything that software needs and nothing more, bundled up in a universal package. While VMs represent hardware-level virtualisation, containers act at operating system level. Each container on a system relies on an engine layer, which in turn relies on a bunch of kernel components from the base OS — so they’re not entirely isolated, although they can get away with some high-level operations. They’re superlightweight, quick to roll out, and can pull off native performance without the overhead of a VM. Indeed, they’ll work perfectly happily within a VM; try nesting a bunch of virtual machines inside another VM if you want to hear your hardware cry out in terror.

GETTING STARTED

If we’re using containers (and, for all the options we offered, we are definitely doing that), probably the best way to look is in the direction of Docker, the platform that broke the back of the concept, and turned it into a reality. Docker is immensely popular, which means there are tons of pre-built containers available, and it scales very well, to the point where you can realistically (albeit sluggishly) containerise a Raspberry Pi (see ‘Serving up Pi’ over the page). We’re going to run through Docker on a bare-metal Ubuntu installation; you could equally set up a VM and fill it with your own containers, if you don’t have the hardware to spare, and don’t mind ignoring the point of a server. What we don’t recommend is running Docker on Windows. While containerisation forms a big official part of Windows Server 2016, and there’s support for it built in to the Pro and Enterprise editions of Windows 10 post-Anniversary Update (and also through a combination of VirtualBox and Docker Toolbox), it’s a lot more mature on Linux. Admittedly, we’re not super-worried about too many of the specifics, or about creating our own super-bespoke containers — basically, we’re using an advanced enterprise tool meant for DevOps deployment as an excuse to lazily create a home server, which is cool but not entirely taking full advantage of it. Nonetheless, we still recommend sticking to Linux, and Ubuntu is as good an OS as any.

So grab the ISO of your preferred flavour of Ubuntu, write it to a USB drive (use Rufus from rufus.akeo.ie, it’s great), and install it on your server machine. Make sure, during the installation, that you include the Samba and SSH server portions, just so it’s easy enough to control from another machine on your network. You’ll be glad of this once your server is disconnected from peripherals and shoved under the stairs. Once it’s running, open up a terminal, and begin the process of installing the stable channel of Docker Community Edition.
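
If you skipped those components during setup, they’re easy to add afterwards — a minimal sketch, assuming Ubuntu’s standard repositories (the username and IP address below are placeholders, so swap in your own):

  sudo apt-get update
  sudo apt-get install openssh-server samba

  # then, from another machine on your network:
  ssh yourusername@192.168.1.50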

GET DOCKER SET

We want to grab the latest version of Docker from its own repository, but in order to do that, we need to start by installing the prerequisites for adding that repository as a source. Run sudo apt-get update to get your Ubuntu installation up to speed, then sudo apt-get install apt-transport-https ca-certificates curl software-properties-common to ensure you have all the relevant tools. Now use curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - to grab Docker’s GPG key, followed by sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" to add the repository itself. Run sudo apt-get update again to refresh Ubuntu’s package list and, if all has gone well, you can run sudo apt-get install docker-ce , which will pull down and install Docker Community Edition, along with all of its dependencies.
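
For clarity, here’s the full sequence in one block, exactly as it should be typed at the terminal:

  sudo apt-get update
  sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  sudo apt-get update
  sudo apt-get install docker-ce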

With that done, it’s time to show just how incredibly easy it is to get things running with Docker, by pulling down a complete image from its servers and running it in a container. Ready? Run sudo docker run hello-world and see what happens. That’s it. The whole thing. Docker has contacted its servers, pulled down the image “hello-world”, started a new container based on that image, and piped its output into your terminal. You’ve run a containerised application, however useless it may be, and you can call sudo docker info to prove it: it’ll list one stopped container. We recommend following the steps in ‘Quality of Life’ (opposite) now, so you don’t have to append “sudo” to the start of every command that follows. Running other containers isn’t always as straightforward as that “Hello World” example, but it’s not necessarily far off. Try, for example, docker run -d -p 12345:80 owncloud:8.1 to start a container with an image of the OwnCloud file server within, then open up a web browser on your server machine, and go to http://localhost:12345 to see the results. It’s pretty immediate, although we’ve had to add a few additional parameters this time. For example, -d tells Docker to run the container in detached mode, managing it in the background, rather than filling your console with status messages, and -p tells it to publish a port of the container on the host machine — 12345:80 is our port mapping, piping port ‘80’ of the container to port ‘12345’ of the host machine. When you’re running multiple containers with a web interface, you’ll want to map them to different ports.
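
To illustrate the mapping, here’s a hypothetical second web-facing container — the stock nginx image, picked purely as an example — published on a different host port so it doesn’t clash with OwnCloud:

  # format is -p HOST_PORT:CONTAINER_PORT
  docker run -d -p 12346:80 nginx

Browse to http://localhost:12346 and you’ll reach nginx, while OwnCloud carries on at 12345.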

That image, after first running, has been cloned to your system. Reboot your machine and run docker image ls to list all the images stored on your system, and you’ll see it there; run the same command as before to start it up, and Docker won’t redownload the image — it runs from the local copy instead. But now, before we start wantonly installing the apps that are going to make our server so great, it’s time to up the complexity a little, and look at running and configuring a proper collection, and for that we need docker-compose, which combines a bunch of containers into a single application.
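
A few standard housekeeping commands are worth knowing before your collection grows:

  docker ps                 # list running containers
  docker ps -a              # include stopped containers too
  docker stop <container>   # stop a running container
  docker rm <container>     # remove a stopped container
  docker rmi owncloud:8.1   # delete a local image you no longer need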

GOING MULTIPLE

Time for a little admin. Install Docker Compose with sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose (replacing that version number with the very latest version, which you can find at github.com/docker/compose/releases), then give it the permissions it requires with sudo chmod +x /usr/local/bin/docker-compose . Working with Docker Compose is, like its sibling, ridiculously easy. Basically, you need to build a configuration file that tells it which containers you want to run, and feeds in the parameters they need to operate properly, then it goes off and does its thing. To make that even more simple, most applications offer a template for precisely what needs to go into the configuration file, and all you need to do is fill in the blanks. So let’s walk through a simple Docker Compose installation, using ad-blocking DNS manager Pihole as an example.
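
Before diving in, a quick sanity check that the binary downloaded and is executable — it should print the version you fetched:

  docker-compose --version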

To do so within Docker Compose, you first need to generate the configuration file; make a new directory for it with sudo mkdir ~/compose , and change to it with cd ~/compose . Create a working folder for Pihole with sudo mkdir /var/pihole , give it permissions using sudo chown -R <your username>:<your username> /var/pihole , then run sudo nano docker-compose.yml , and enter the following into the file, switching “LANIP” for your server’s IP address, and paying special attention to the spacing — each level of indentation is two spaces:

version: '2'
services:
  pihole:
    restart: unless-stopped
    container_name: pihole
    image: diginc/pi-hole
    volumes:
      - /var/pihole:/etc/pihole
    environment:
      - ServerIP=LANIP
    cap_add:
      - NET_ADMIN
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80"

Now just run docker-compose up -d , and Pihole should be up and running — if not, it could be conflicting with another process (such as dnsmasq) that takes control of port 53. To kill that, run sudo sed -i 's/^dns=dnsmasq/#&/' /etc/NetworkManager/NetworkManager.conf to comment it out of the relevant config file, then sudo service network-manager restart and sudo service networking restart , followed by sudo killall dnsmasq to make sure it’s really, truly dead. You can check your results by heading to http://localhost/admin in a web browser. Adding further containers to your combined application is just a case of adding their parameters to the docker-compose.yml file, then running it again. To use Pihole’s DNS sinkhole facilities properly, you now need to configure your router to hand your server’s IP address to DHCP clients as their DNS server.
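
To sketch what that looks like, here’s a hypothetical second service — the OwnCloud image from earlier — added beneath the pihole: entry, at the same indentation level. The host port moves to 8080 because Pihole has already claimed port 80:

  owncloud:
    restart: unless-stopped
    container_name: owncloud
    image: owncloud:8.1
    ports:
      - "8080:80"

Run docker-compose up -d again, and Compose starts the new container while leaving the running Pihole untouched.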

THE POWER PACK

So that’s the how. What about the what? Start digging through hub.docker.com, because just about everything you could possibly want has been put in a neat package for you. File management, for example: OwnCloud, as mentioned before, offers up a Dropbox-esque way of managing your files, with a web interface that offers simple password-protected access to a bin of files from every device in your home — including mobile clients. Nextcloud does much the same job as OwnCloud, but if you’d like to go more raw, you could fire up an SFTP server (try the atmoz/sftp image), or containerise a Samba installation (kartoffeltoby/docker-nas-samba) to give your server fuss-free NAS capabilities.
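
As a minimal sketch of the SFTP route (the username, password and host port here are placeholders — choose your own):

  docker run -d -p 2222:22 atmoz/sftp alex:secretpassword:::upload

That spins up an SFTP server with a user ‘alex’ who lands in a folder called ‘upload’; connect from another machine with sftp -P 2222 alex@your-server-ip .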

Media next, and you’re spoiled for choice. Plex, using the officially sanctioned plexinc/pms-docker image, is our top choice for home (and beyond) media server duties, but you don’t have to run it alone. There are various tools that can enhance your media-getting capabilities, although we can’t say too much about them here. Head over to linuxserver.io and check out the most-downloaded images if you’d like to see which are the most popular. Once you’re all lined up, be sure to add the excellent linuxserver/plexpy, which helps you keep tabs on precisely what’s going on with the traffic and files running through your Plex server. The back-end portion of Kodi is also a solid choice, although in our opinion, Plex is a much more adept file-wrangler.
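
To give you an idea, here’s a minimal sketch of a Plex container using that official image — the volume paths are our own assumptions, so point them wherever your config, transcode scratch space and media actually live:

  docker run -d --name plex \
    -p 32400:32400/tcp \
    -e TZ="Australia/Sydney" \
    -v /var/plex/config:/config \
    -v /var/plex/transcode:/transcode \
    -v /path/to/your/media:/data \
    plexinc/pms-docker

Once it’s running, point a browser at http://localhost:32400/web to walk through Plex’s setup.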

Your server could perform more esoteric archiving and organisational duties, too. Try Mediawiki to get an instant home wiki up and running, perfect for recording significant information, or Photoshow (linuxserver/photoshow) to quickly install a database-free photo gallery with drag-and-drop uploading. Vault (vaultproject.io) isn’t pretty, but it’s a great place to hide secrets, giving you a sealable safe for passwords, keys and other critical security information. Home Assistant (homeassistant/home-assistant) can, after a bit of wrangling, take control of all your smart gear in one place.
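
For Home Assistant, a minimal sketch looks something like the following — the config path is an assumption, and --net=host saves you from mapping the long list of ports smart-home gear tends to use:

  docker run -d --name home-assistant \
    --net=host \
    -v /var/homeassistant:/config \
    homeassistant/home-assistant

Its web interface should then appear on port 8123 of your server.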

MAKING IT EASIER

Once you start piling on the containers, make sure you manage them (and their increasingly irritating port demands) well — see ‘Managing multiple containers’ over the page. And although putting together a docker-compose.yml isn’t taxing, you can get more granular control over containers with Portainer (portainer.io), which throws up a web interface in which you can start, stop, or add new containers at will. Organizr (lsiocommunity/organizr) is also useful, pulling all your media server apps into a single interface, and allowing you to distribute them selectively to family members — try running two Plex containers, for example, to split family-friendly content from the material you wouldn’t want your kids to get hold of. Point them to Organizr, and they’ll only see what you’ve allowed. Consider, also, Watchtower, which monitors your images for changes, and automatically updates them to the latest versions.
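
Watchtower needs access to the Docker socket so it can inspect and restart your other containers. A minimal sketch (the image was published as v2tec/watchtower at the time of writing — check the Docker Hub for the current name):

  docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    v2tec/watchtower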

If, rather than making things easier, you’d like to make them slightly more difficult, but a lot more personal, look into creating your own containers. It’s complex, but Docker’s documentation covers the process in a huge amount of detail — head to docs.docker.com to learn about the required components and relevant commands. Once you’ve made your own container, upload it to the Docker Hub, where anyone (including future-you) can pull it back down to their own machine, and replicate it exactly.
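
To give you a flavour, a working Dockerfile can be startlingly short. This hypothetical example wraps a folder of static HTML in the stock nginx image:

  FROM nginx:alpine
  COPY ./site /usr/share/nginx/html

Build it and push it to the Hub (substituting your own Docker Hub username) with:

  docker build -t yourhubname/mysite .
  docker push yourhubname/mysite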

You need a decent internet connection and some hard drive space for downloads.

Hello, Docker: if you see this message, it’s working fine!

A docker-compose.yml is just a text file full of copied and pasted parameters.

Dig through the Docker Hub to find containers for basically every useful package out there.

Portainer simplifies the process of managing a server with numerous containers.

Plex has a beautiful interface and, crucially, just works.
