Virtualisation for All
The ability to run multiple copies of different operating systems on one physical machine has been common in the IT world for several decades. However, microprocessors that provide hardware acceleration for virtualisation have greatly improved its performance and accessibility.
Imagine a large rack holding dozens of workstations and servers, and compare that with a small 2U server that has all that infrastructure running on it. Progress? Yes, definitely! You save physical space, consume much less electricity, and increase management efficiency by having all these systems in one place. So let us transform this dream into reality with the open source virtualisation software Proxmox.
Prerequisites
Originally, I had several projects deployed on VMware Server 2.0: installations of Linux and Windows systems. Everything was convenient for me, because I had already resolved the first two factors, space and electricity bills, as all the systems were on virtual infrastructure. However, after I added more Linux projects to the host, VMware became non-operational: I couldn't log in to the VMware Web-based control panel to see what was happening, not even by means of the native client. Second, simultaneous disk-intensive operations inside two virtual machines greatly affected the other containers as well, resulting in slow responses from all machines. This was not a big problem, actually, but it was irritating when I couldn't get immediate access to a virtual container.
A big motivation was the purchase, back then, of a new blade system with hardware virtualisation support at the CPU level, and the appearance of kernel-based virtualisation support in Linux.
Proxmox
In brief, Proxmox is a Debian-based distribution that comes with a specially compiled kernel, including support for Kernel-based Virtual Machine (KVM) and OpenVZ para-virtualisation from Parallels. You can thus run any x86-based OS with acceleration (thanks to technology like Intel VT-x or AMD-V) as a KVM machine, and also launch Linux distributions in isolated containers. The latter, for instance, results in a mere 1-3 per cent loss in computing performance compared to the host system.
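Before relying on KVM acceleration, it is worth confirming that the CPU actually exposes the relevant flags. A quick check (my own sketch, not a step from the article; the messages printed are mine):

```shell
# Look for hardware virtualisation flags in /proc/cpuinfo:
# 'vmx' means Intel VT-x, 'svm' means AMD-V.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualisation supported"
else
    echo "no hardware virtualisation; KVM guests will not run accelerated"
fi
```

If neither flag appears, also check the BIOS, as virtualisation extensions are often disabled there by default.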
Installation
The Proxmox installer is rather simple, and provides no special features. For example, at the moment, it can partition disk storage as one LVM volume only (while leaving 500 MB as a normal ext3 partition for /boot). So I decided to install pure Debian first, and create a partition layout I considered correct for my purposes. After that, I installed the Proxmox packages. Luckily, this option works. Generally speaking, the default parameters could fit almost everyone's purposes, so don't be afraid to install Proxmox with its own installer.
I installed Proxmox on an HP ProLiant DL360 G5 with 18 GB RAM, two quad-core Xeon 5440s, and a Smart Array P400i with 360 GB of RAID 5 storage. Instead of RAID 5, you can rebuild it in a faster operational mode, like 'mirror'.
I attached a Debian Lenny ISO image to a virtual CD-ROM in iLO 2 (Integrated Lights-Out, the HP management console integrated into the hardware) and bootstrapped the system (Figure 2).
During installation, I split my disk storage as follows: the / partition occupies 1 GB, followed by 8 GB of swap space, with the remaining space reserved for the LVM pool. You might wonder why I did it this way. First, in case of disk failure on the host (rather unlikely, but still...), I'm sure I'll get my configuration data back with the help of a LiveCD or forensic tools. Despite the huge number of LiveCDs, I frankly doubt they can help me mount a destroyed LVM volume on top of RAID 0/1/5. So if the LVM subsystem decides to die, the basic root and swap partitions will survive; yes, I'm that paranoid. And my final argument: when you talk about LVM, it means you're constructing something complex, say, logical volumes with mirror capabilities, or volumes residing on different physical disks. I try to follow Occam's razor, and leave LVM to jobs where it excels, while other duties are carried out by other Linux components.
Next step: I installed the basic Debian packages (the base system, LVM and OpenSSH), assigned an IP address to the box, and added several lines to /etc/apt/sources.list:

# PVE packages provided by proxmox.com
deb http://download.proxmox.com/debian lenny pve
# Mirror site
deb http://ftp.dk.debian.org/debian/ lenny main non-free
# Security updates
deb http://security.debian.org/ lenny/updates main
Next comes the usual update procedure:
# aptitude update && aptitude safe-upgrade
W: GPG error: http://download.proxmox.com lenny Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY C23AC7F49887F95A
Unfortunately, I forgot to import the public key of the Proxmox repository, so let us fix it:

# gpg --keyserver pgpkeys.mit.edu --recv-key C23AC7F49887F95A
gpg: requesting key 9887F95A from hkp server pgpkeys.mit.edu
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 9887F95A: public key "Proxmox Release Key <proxmoxrelease@proxmox.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
# gpg -a --export C23AC7F49887F95A | apt-key add -
OK
Run aptitude update again to synchronise the local repository, then search for the Proxmox packages available to install:

# aptitude search proxmox
p   proxmox-ve-2.6.18             The Proxmox Virtual Environment
p   proxmox-ve-2.6.24             The Proxmox Virtual Environment
p   proxmox-ve-2.6.32             The Proxmox Virtual Environment
p   proxmox-ve-2.6.35             The Proxmox Virtual Environment
p   proxmox-virtual-environment
You should install proxmox-ve-2.6.32, because Proxmox developers consider this version to be stable and have included it in Proxmox 1.6:

# aptitude install proxmox-ve-2.6.32
Reading package lists... Done
...
Need to get 67.4MB of archives. After unpacking 79.7MB will be used.
Do you want to continue? [Y/n/?]
Restart the system and voilà, Proxmox is installed. Easy, isn't it?
Upgrading Proxmox to a newer version is as simple as running 'aptitude update' and 'aptitude safe-upgrade', and rebooting your box. It couldn't be easier.
As you can see, initial Proxmox deployment consists of simple and clear steps. Let's proceed to the next step, virtual environment configuration and customisation, where we'll create virtual network devices and storage pools.
Configuration
You'll need a Web browser like Mozilla Firefox or Google Chrome to log in to the Proxmox control centre. A Java Runtime Environment and browser plug-ins are essential for video and keyboard access through VNC to an OS inside a virtual container. Later, I'll also show you how to configure a VNC channel and use just a standalone vncviewer.

Point the browser to the newly deployed Proxmox host, and after authorisation (the host's local root user credentials) you'll see the main console page. The home page (Figure 4) displays the overall system load, and the amount of free memory and swap space.

Before you create a VM, you must configure network interfaces for it, as well as pre-configure storage space for the VMs. Jump into the 'Configuration > System > Interface Configuration' section. This tab displays all the network devices configured on the host. Click the red arrow near 'Interface configuration', choose the type of network device you need, 'bridge' or 'bond', and name it (e.g., vmbr5). Bonding devices can operate in different modes: 'balance-rr', 'active-backup', 'balance-xor', 'broadcast', '802.3ad', 'balance-tlb' or 'balance-alb'. They also bring interesting functionality to the VMs; for example, to double the bandwidth available to a new VM, create a bonded network device, choose '802.3ad' mode, and then assign it to the VM.
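Behind the scenes, these settings end up in Debian's /etc/network/interfaces file. Below is a minimal sketch of a vmbr5 bridge stacked on an 802.3ad bond; the physical NIC names (eth0/eth1) and the addresses are placeholders I've made up, not values from this setup:

```
# /etc/network/interfaces fragment -- illustrative only; interface
# names and addresses are assumptions, not from the article.
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 802.3ad
        bond_miimon 100

auto vmbr5
iface vmbr5 inet static
        address 10.10.1.10
        netmask 255.255.255.0
        gateway 10.10.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
```

Editing this file by hand and configuring via the Web console should produce equivalent results, but stick to one method to avoid the two drifting apart.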
The next task is to create storage to hold OpenVZ machines. Just for the record, it can store KVM containers' files too.
You're free to create LVM volumes either manually, or from within the Proxmox management console. For OpenVZ storage space, though, you must do it manually. Create a physical volume as follows:
# pvcreate -v /dev/cciss/c0d0p3
Now create a volume group named VG:
# vgcreate -v VG /dev/cciss/c0d0p3
Success! I have allocated 150 GB for a logical volume where the OpenVZ containers will reside:
# lvcreate -L 150G -n VG_150GB -v VG
Format the logical volume, for example, as an ext4 filesystem:
# mkfs.ext4 -m 1 -L Openvz_space -v /dev/mapper/VG-VG_150GB
Add a new entry to /etc/fstab:

/dev/mapper/VG-VG_150GB /var/lib/vz ext4 errors=remount-ro 0 1
Now this new volume will be mounted automatically at /var/lib/vz after each reboot. Let's analyse its layout. The 'images' subdirectory contains file images of KVM containers; the 'root' and 'private' subdirectories are reserved for OpenVZ containers. The 'template/cache' directory is where you should place virtual appliance archives, and 'template/iso' is for CD/DVD installation images.

$ ls /var/lib/vz
backup  dump  images  lock  lost+found  private  root  template  vztmpl
We still have 173 GB of free LVM space, which we should reserve exclusively for KVM containers:

# vgdisplay VG
  VG Size               323.04 GB
  PE Size               4.00 MB
  Total PE              82697
  Alloc PE / Size       38400 / 150.00 GB
  Free  PE / Size       44297 / 173.04 GB
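As a quick sanity check, free extents multiplied by the extent size should reproduce the free space vgdisplay reports (the figures below are taken straight from the vgdisplay output):

```shell
# 44297 free physical extents x 4 MB per extent, per vgdisplay above.
free_mb=$((44297 * 4))
free_gb=$((free_mb / 1024))
echo "${free_mb} MB (~${free_gb} GB)"   # roughly 173 GB, matching vgdisplay
```

The same arithmetic is handy when deciding how to split a volume group between OpenVZ and KVM storage.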
OpenVZ VMs
As mentioned earlier, Proxmox uses two types of virtualisation: OpenVZ and KVM. The first is para-virtualisation, where the processes, filesystem and memory of a virtual container actually reside on the host machine, but are logically 'isolated' by OpenVZ technology. Due to the architectural constraints of OpenVZ, only Linux distributions can run as para-virtualised machines.
The Proxmox team distributes OS templates and 'Virtual Appliances' (packed OpenVZ containers) on their site. You can download these and upload them to your infrastructure via the management console. These are preconfigured and ready-to-use Debian, Ubuntu, CentOS and Fedora distributions, with a minimum set of packages pre-installed. Several appliances are completely preconfigured versions of SugarCRM, MediaWiki, Drupal, etc. All you need is to go to 'Create', choose the type of VM as 'Container (OpenVZ)', and select the template (Figure 6)... that's it!
Alternatively, you can create an OpenVZ machine from the command line, as follows:

# /usr/bin/pvectl vzcreate 101 --disk 8 \
  --ostemplate local:vztmpl/debian-5.0-standard_5.0-2_amd64.tar.gz \
  --rootpasswd $1$pnwcimwv$kx5dbbyam7v03mt15c2f0. \
  --hostname wiki.local --nameserver 10.10.1.200 --searchdomain local \
  --onboot yes --swap 1024 --mem 1024 --cpus 1 \
  --netif ifname=eth0,mac=66:2e:57:66:2e:36,bridge=vmbr5
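Note that --rootpasswd takes a crypted hash, not a plain-text password. One way to produce such a hash is with OpenSSL's MD5-crypt mode; this is my suggestion rather than the article's method, and the password and salt below are made-up examples:

```shell
# Produce a glibc crypt()-style MD5 hash ($1$salt$hash) for --rootpasswd.
# 'MySecret' and the salt 'ab12cd34' are illustrative values only.
openssl passwd -1 -salt ab12cd34 'MySecret'
```

Remember to single-quote the resulting hash on the vzcreate command line, or the shell will try to expand the $ signs in it.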
In the vzcreate command above, the first four parameters mean the following:
- Create a VM with ID 101.
- Allocate 8 GB of disk space.
- Use the debian-5.0-standard_5.0-2_amd64.tar.gz template.
- Supply a crypted (by the glibc2 crypt() function) password for the root account of this VM.

The other parameters are self-explanatory. You can omit the