RAID with openmediavault
Set up your own NAS box with all the redundancy your data deserves, and a funky web interface to boot (if you’ll pardon the pun).
Hard drives fail. They wear out, they get dropped, they get fried in electrical storms. Some of them, doomed from the start, come off the manufacturing line worse for wear. Potential drive failure is a good reason to back up your important files, but it’s an even better reason to deploy a RAID configuration. RAID (Redundant Array of Independent/Inexpensive Disks) enables you to use multiple drives to increase the resilience of your storage.
RAID1 is the easiest configuration to explain: it mirrors the contents of one disk to one or more others. This means that all but one of the drives involved can fail and you still won’t lose any data. RAID0 doesn’t have any such redundancy, but rather ‘stripes’ data across drives so that it can be read and written sequentially, speeding up access. We’re not interested in RAID0 here as it plays fast and loose with your data.
Some RAID1 implementations will see improved read times, since reads can be parallelised across drives. These two schemes can be combined to give either RAID1+0 or RAID0+1, which give striped mirrors or mirrored stripes. But thanks to the wonders of mathematics, there are other options too. By striping data across at least two drives, then adding parity
information about that data to a further drive, we can create a system where if any drive falls over, the data that was on it can be reconstructed from the contents of the others. This is known as RAID5, and we can go further: RAID6 allows for two drives to fail, by having two parity blocks (but requires at least four drives).
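The parity trick is easiest to see in miniature. Here's a toy Python sketch (not how mdadm actually stores things, just the underlying idea): parity is the XOR of the data chunks, so any single missing chunk can be rebuilt from the survivors.

```python
def xor_chunks(chunks):
    """XOR equal-length byte strings together, byte by byte."""
    result = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            result[i] ^= byte
    return bytes(result)

# Two data chunks striped across two drives, parity on a third.
drive_a = b"hello "
drive_b = b"world!"
parity = xor_chunks([drive_a, drive_b])

# Suppose drive_b fails: rebuild its contents from the rest.
rebuilt = xor_chunks([drive_a, parity])
assert rebuilt == drive_b
```

Because XOR is its own inverse, the same function both generates parity and reconstructs a lost chunk; RAID6 adds a second, mathematically independent parity block so two losses can be survived.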
In the olden days, RAID used to require a dedicated I/O card, and for enterprise purposes they’re probably still a good idea – along with an Uninterruptible Power Supply, backup generators and round-the-clock monitoring. But the parity calculations that would have burdened CPUs of yore are no problem nowadays, and RAID is easily implemented in software.
In Windows it’s called Storage Spaces, and in the Linux kernel, it’s known as MD (Multiple Device). The tool that manages it is called mdadm. As with LVM,
mdadm combines drives and presents a single abstracted block device, for example /dev/md0, which we can put any filesystem we like on. Also like LVM,
mdadm can operate on either partitions or entire disks. It does, however, get awfully confused if you try to use disks that have in past lives been part of MD arrays, so it’s important to zero the superblock if you’ve done this:
$ sudo mdadm --misc --zero-superblock /dev/sdx
To create the three-drive RAID5 setup we built using a Fractal Design Node 304 case kindly supplied by the manufacturer, we would do:
$ sudo mdadm --create --verbose --level=5 --metadata=1.2 --chunk=512 --raid-devices=3 /dev/md0 /dev/sdx /dev/sdy /dev/sdz
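It's worth sanity-checking how much usable space an array will give you before you build it. The arithmetic is simple: RAID0 uses everything, RAID1 gives one drive's worth, RAID5 sacrifices one drive to parity and RAID6 sacrifices two. A quick sketch (the drive sizes are illustrative):

```python
def usable_capacity(level, drive_sizes_gb):
    """Rough usable capacity for an array at a given RAID level."""
    n = len(drive_sizes_gb)
    size = min(drive_sizes_gb)  # the smallest drive limits the array
    if level == 0:
        return n * size          # striping: all space, no redundancy
    if level == 1:
        return size              # mirroring: one drive's worth
    if level == 5:
        return (n - 1) * size    # one drive's worth lost to parity
    if level == 6:
        return (n - 2) * size    # two parity blocks per stripe
    raise ValueError("unsupported RAID level")

# A three-drive RAID5 array of 4,000GB drives, as above:
print(usable_capacity(5, [4000, 4000, 4000]))  # 8000
```

So our three-drive RAID5 setup yields two drives' worth of storage while tolerating a single failure.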
It will take some time to prepare the array, although it can at this stage accept a filesystem and be mounted in ‘degraded’ mode. As before, you’d create that filesystem with:
$ sudo mkfs.ext4 /dev/md0
You can monitor progress of the array build with:
$ cat /proc/mdstat
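If you want to watch the rebuild from a script rather than eyeballing it, the recovery line in /proc/mdstat includes a percentage that's easy to extract. A small Python sketch (the sample text below is illustrative of the recovery line's format, not output captured from a real system):

```python
import re

# Illustrative sample of what /proc/mdstat shows during a rebuild.
SAMPLE_MDSTAT = """\
md0 : active raid5 sdz[3] sdy[1] sdx[0]
      7813774336 blocks super 1.2 level 5, 512k chunk [3/2] [UU_]
      [===>.........]  recovery = 17.3% (676491008/3906887168) finish=312.5min
"""

def rebuild_progress(mdstat_text):
    """Return the recovery/resync percentage, or None if no rebuild is running."""
    match = re.search(r"(?:recovery|resync)\s*=\s*([\d.]+)%", mdstat_text)
    return float(match.group(1)) if match else None

print(rebuild_progress(SAMPLE_MDSTAT))  # 17.3
```

On a real system you'd read the text with `open("/proc/mdstat").read()` instead of the sample string.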
Alternatively, follow our majestic six-step guide to openmediavault on the opposite page.