Linux Format

RAID with Openmediavault

Set up your own NAS box with all the redundancy your data deserves, and a funky web interface to boot (if you’ll pardon the pun).


Hard drives fail. They wear out, they get dropped, they get fried in electrical storms. Some of them, doomed from the start, come off the manufacturing line worse for wear. Potential drive failure is a good reason to back up your important files, but it’s an even better reason to deploy a RAID configuration. RAID (Redundant Array of Independent/Inexpensive Disks) enables you to use multiple drives to increase the resilience of your storage.

RAID1 is the easiest configuration to explain: it mirrors the contents of one disk to one or more others. This means that all but one of the drives involved can fail and you still won’t lose any data. RAID0 doesn’t have any such redundancy, but rather ‘stripes’ data across drives so that it can be read and written sequentially, speeding up access. We’re not interested in RAID0 here as it plays fast and loose with your data.
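To put some numbers on that: mirror two 4TB drives with RAID1 and you get 4TB of usable space that survives the loss of either drive; stripe the same pair with RAID0 and you get 8TB and faster access, but lose everything the moment either drive dies.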

Some RAID1 implementations will see improved read times, since reads can be parallelised across drives. These two schemes can be combined to give either RAID1+0 or RAID0+1, which give striped mirrors or mirrored stripes. But thanks to the wonders of mathematics, there are other options too. By striping data across at least two drives, then adding parity information about that data on a further drive (in practice the parity blocks are rotated across all the drives rather than living on a dedicated one), we can create a system where if any drive falls over, the data that was on it can be reconstructed from the contents of the others. This is known as RAID5, and we can go further: RAID6 allows for two drives to fail, by keeping two parity blocks per stripe (but requires at least four drives).
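The arithmetic works out as follows: with three 4TB drives, RAID5 gives you 8TB of usable space, since one drive’s worth of capacity is given over to parity. RAID6 gives up two drives’ worth, so four 4TB drives also net 8TB.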

In the olden days, RAID used to require a dedicated I/O card, and for enterprise purposes such cards are probably still a good idea – along with an Uninterruptible Power Supply, backup generators and round-the-clock monitoring. But the parity calculations that would have burdened the CPUs of yore are no problem nowadays, and RAID is easily implemented in software.

In Windows it’s called Storage Spaces, and in the Linux kernel, it’s known as MD (Multiple Device). The tool that manages it is called mdadm. As with LVM, mdadm combines drives and presents a single abstracted block device, for example /dev/md0, which we can put any filesystem we like on. Also like LVM, mdadm can operate on either partitions or entire disks. It does, however, get awfully confused if you try to use disks that have in past lives been part of MD arrays, so it’s important to zero the superblock if you’ve done this:

$ sudo mdadm --misc --zero-superblock /dev/sdx
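If you’re not sure whether a drive has such a history, mdadm can inspect it for leftover metadata first (as elsewhere, /dev/sdx stands in for your actual device):

$ sudo mdadm --examine /dev/sdx

If it reports that no md superblock was detected, the drive is safe to use as-is.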

To create the three-drive RAID5 setup we built using a Fractal Design Node 304 case kindly supplied by the manufacturer, we would do:

$ sudo mdadm --create --verbose --level=5 --metadata=1.2 --chunk=512 --raid-devices=3 /dev/md0 /dev/sdx /dev/sdy /dev/sdz

It will take some time to prepare the array, although it can at this stage accept a filesystem and be mounted in ‘degraded’ mode. As before, you’d create that filesystem with:

$ sudo mkfs.ext4 /dev/md0
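Once that’s done, the array mounts like any other block device. The mount point below is just an example:

$ sudo mkdir -p /mnt/raid
$ sudo mount /dev/md0 /mnt/raid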

You can monitor progress of the array build with:

$ cat /proc/mdstat
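To have the array reassembled automatically at boot on Debian-based systems (Openmediavault itself sits on top of Debian), you would typically append its details to mdadm.conf and refresh the initramfs; paths and tooling may differ on other distributions:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u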

Alternatively, follow our majestic six-step guide to Openmediavault, on the opposite page.

All kinds of glorious graphs await you in the Performance statistics section. How quickly we filled our RAID…
