
RAID levels in brief


There are several ways of combining disks to form a RAID array, each of which has its own advantages and drawbacks. In these descriptions, N refers to the number of devices in the array and S is the size of each. The commonly seen levels are:

RAID 0 Not really RAID, as it just joins several disks together; you are better off using LVM for this. It offers no resistance to failure. The total size is N * S.

RAID 1 This is two or more disks with all data mirrored across all of them. It tolerates the failure of N - 1 disks; the total size is S.

RAID 5 A RAID of three or more disks with data and parity information distributed so that any single drive can fail with no loss of data. The total size is (N - 1) * S.

RAID 6 Four or more disks with data and parity information distributed so that any two drives can fail with no loss of data. The total size is (N - 2) * S.

RAID 10 A RAID 0 stripe of RAID 1 mirrors, requiring at least four drives. It can tolerate multiple failures as long as no RAID 1 section loses all of its drives. The total size is (N / 2) * S.

You may also come across 'hardware assisted software RAID', or fakeRAID. Here the controller has just enough RAID capability to load a driver from the disks, then it becomes software RAID. This generally only works with Windows.
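As a concrete illustration of those size formulas: with four 2TB drives (N = 4, S = 2TB), RAID 0 gives 8TB with no redundancy, RAID 1 gives 2TB, RAID 5 gives 6TB, and RAID 6 and RAID 10 each give 4TB.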

We have referred to disks several times, but software RAID can work with any block device, and it is often implemented at the partition level rather than the disk level. Enough talking; let's create a RAID 1 array on /dev/sda3 and /dev/sdb3 (change the devices to suit your system). These should be spare partitions, or you can use image files (as described in the A Testing Setup box, p76). As we are working with device files in /dev, you need to be root, so either open a root terminal or prefix each command with sudo. The main command for working with software RAID devices is mdadm.
$ mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sda3 /dev/sdb3
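If you have no spare partitions to practise on, one way to conjure up a pair of test devices is with image files and loop devices. This is only a sketch, and not necessarily the exact method described in the Testing Setup box:
$ truncate -s 1G disk1.img disk2.img
$ losetup --find --show disk1.img
$ losetup --find --show disk2.img
Each losetup command prints the loop device it has created (for example /dev/loop0), and those devices can then be used in place of /dev/sda3 and /dev/sdb3 in the commands that follow.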

If you want to save on typing, you could use:
$ mdadm -C /dev/md0 -l 1 -n 2 /dev/sd{a,b}3
but we will stick to the long options here for clarity. You have now created a block device at /dev/md0 (software RAID devices are generally named /dev/mdN) that you can format and then mount like any other block device:
$ mkfs.ext4 /dev/md0
$ mount /dev/md0 /mnt/somewhere
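If you want the filesystem mounted automatically at boot, an /etc/fstab entry along these lines will do it (the mount point /mnt/somewhere is just the example used above; adjust to suit):
/dev/md0 /mnt/somewhere ext4 defaults 0 2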

When things go wrong

Hopefully, that's all you need to do: your array has been created and is used as a normal disk by the OS. But what if you have a drive failure? You can see the status of your RAID arrays at any time with either of these commands:
$ cat /proc/mdstat
$ mdadm --detail /dev/md0
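For a healthy two-device RAID 1 array, /proc/mdstat shows output along these lines (the exact sizes and device names will depend on your system):
md0 : active raid1 sdb3[1] sda3[0]
      1048512 blocks super 1.2 [2/2] [UU]
The [UU] at the end means both members are up; a failed or missing member shows as an underscore, for example [U_].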

Let's assume you have a failure on the second disk and have a replacement available. Remove the old drive from the array with:
$ mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3
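One aside worth knowing before you add the replacement: if the new partition has previously been part of another RAID array, it may still carry old metadata from that array. You can wipe it first with the standard mdadm option (a fresh partition needs no such preparation):
$ mdadm --zero-superblock /dev/sdb3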

Then turn off the computer, replace the drive and reboot. The array will still work, but /proc/mdstat will show it as degraded because a device is missing. Now run:
$ mdadm /dev/md0 --add /dev/sdb3
and look at /proc/mdstat again. It will show that the array is back to two devices and that it is already syncing data to the new one. You can continue to use the computer, although there may be a drop in disk performance while the sync is running. If you have a drive that doesn't even show up any more, as in the case of a complete failure, you cannot remove /dev/sdb3 because it no longer exists; use the word missing instead of the drive name and mdadm will remove any drive it can't find. If you already have a spare drive in your computer, say at /dev/sdc, you can add it to the array as a spare with:
$ mdadm /dev/md0 --add-spare /dev/sdc3
Should sdb fail and be removed as above, sdc3 will automatically be added to the array in its place and synchronised. All of these examples use RAID 1, but the processes, apart from the initial array creation, are identical for all higher RAID levels.
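If you want to keep an eye on the rebuild without retyping the command, something like this works (watch is a separate utility, part of the procps package, and may need installing on minimal systems):
$ watch -n 5 cat /proc/mdstat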

Error monitoring

How do you know if a drive is failing? Are you expected to keep looking at /proc/mdstat? Of course not: mdadm also has a mode to monitor your arrays, and this is run as a startup service. All you need to do is configure it in /etc/mdadm.conf: find the line containing MAILADDR, set it to your email address and remove the # from the start of the line. Now set the mdadm service to start when you boot and it will monitor your RAID arrays and notify you of any problems. The config file /etc/mdadm.conf is also used to determine which devices belong to which array. The default behaviour is to scan your disks at startup to identify the array components, but you can specify them explicitly with an ARRAY line. You can generate this line with:
$ mdadm --examine --scan
This may be useful if you have one or more slow devices attached to your system that slow down the scan process.
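As a rough sketch, the relevant parts of the config file might end up looking like this. The UUID is whatever --examine --scan prints for your array (shown here as a placeholder), Debian-based systems keep the file at /etc/mdadm/mdadm.conf instead, and the monitoring service is called mdmonitor on many distros:
MAILADDR you@example.com
ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
You can append the scan output directly with mdadm --examine --scan >> /etc/mdadm.conf, then enable the service with, for example, systemctl enable mdmonitor. To check that mail delivery works, mdadm --monitor --scan --oneshot --test sends a test alert for each array.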

We have built RAID arrays from partitions in the above examples, but you can also create an array from whole disks, for example a three-disk RAID 5 array like this:
$ mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/sd{a,b,c}

After creating an array like this, you can use gdisk or gparted to partition it as you would a physical disk; the partitions then appear as /dev/md0p1 and so on. Bear in mind that your BIOS and bootloader will need your /boot directory to be on a filesystem they can read, so whole-disk RAID may not be suitable for your OS disk. RAID also works well with LVM (covered last month): create the RAID array and then use that as a physical volume for LVM. That way you get the flexibility of LVM with the data security of RAID.
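As a brief sketch of the LVM-on-RAID approach (the volume group and logical volume names here are just examples):
$ pvcreate /dev/md0
$ vgcreate vg0 /dev/md0
$ lvcreate --name data --size 20G vg0
$ mkfs.ext4 /dev/vg0/data
From this point on you manage space with LVM as covered last month, while mdadm looks after the redundancy underneath.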

RAID is normally administered with mdadm at the command line, but there is a RAID module for Webmin if you want a graphical option.
