PC Pro

AIDAN FINN

Thinking about running a server solution on Hyper-V? Microsoft MVP, blogger and author Aidan Finn shares his knowledge

[Diagram: external virtual switch with a team interface on an LBFO NIC team; VLAN 102 trunk port]

If you’re thinking about running a server solution based on Hyper-V, then expert Aidan has the answers. Here, he distils his knowledge into two pages of best practice.

If you’re reading this, I’ll assume you’re either running Hyper-V already or giving it consideration. If it’s the latter, you have three options on where to run it: on-premises, with Microsoft’s business software running on Windows Server and Hyper-V platforms; hosting companies, using the same technologies to build hosted cloud solutions; or public cloud, using Azure, Office 365, Enterprise Mobility and Security, CRM 365 and more.

How you protect your installation might differ depending on which of those options you choose. I typically recommend “born in virtualisation” backup solutions: they support the hypervisor fully, and can offer modern features that enhance rather than hold back the business. I like Microsoft Azure Backup Server (MABS) v2. It’s a pay-as-you-use-it system that backs up virtual machines (VMs) – and applications such as SQL Server – to local disk for short-term retention, and uses affordable cloud storage for long-term retention.

Then there’s disaster recovery. What happens if there’s a fire, or a human or technology issue that shuts down the business? You can double your hardware, software, facilities and operational expenses by building a secondary site and replicating VMs to it, but the cloud can offer more affordable pay-as-you-go alternatives. Azure Site Recovery offers a simple way to replicate your running VMs to Azure storage, and to fail over one, some or all of the machines in the event of a disaster.

On-premises hardware

The big question here is: can your business afford, or does it need, a highly available infrastructure? High availability is a feature of server virtualisation whereby a VM can fail over (restart) on another host if the original host has an unplanned stoppage. This is made possible by deploying clusters of identical (or near-identical) hosts, known as nodes, using a Windows Server feature called Failover Clustering. A requirement of a cluster is that the VMs are stored on shared storage, but the meaning of “shared storage” has changed over the years.

Once upon a time, a Hyper-V (or VMware) cluster required a storage area network (SAN), one of the most expensive ways to deploy storage. Two or more servers (hosts) connect to this SAN and store their VMs on shared disks or volumes. Since then, software-defined storage has made it possible to use less expensive, and often more capable, solutions.

Windows Server 2012 introduced Storage Spaces, which was then improved in 2012 R2 and 2016. A classic Storage Spaces solution replaces the SAN with “just a bunch of disks”, or a JBOD: one or more trays of pooled disks that offer performance and fault tolerance in a more modern way than legacy RAID solutions. WS2016 introduced Storage Spaces Direct (S2D), where sharing is accomplished via block replication across high-speed networks – this is more suited to larger organisations.
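As a rough illustration, a mirrored Storage Spaces volume can be carved from a JBOD with a few PowerShell commands. This is a sketch only – the pool and disk names are placeholders, and you should run it on a server attached to the JBOD:

```powershell
# Gather the physical disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks ("VMPool" is an illustrative name).
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve a mirrored (fault-tolerant) virtual disk from the pool.
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMDisk1" `
    -ResiliencySettingName Mirror -UseMaximumSize
```

A mirror resiliency setting gives the RAID1-like fault tolerance discussed above without legacy RAID hardware; the virtual disk still needs to be initialised, partitioned and formatted before use.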

So, do you need a Hyper-V cluster or not? That’s a question of risk versus cost and the answer will vary from one company to the next. If you can’t survive services going down, then a Hyper-V cluster is probably for you.

Non-clustered host

The hardware design of a non-clustered host is pretty simple, with just a few components:

- One or two processors – avoid going over 16 total cores in a single host for licensing reasons.
- The host OS installed on two disks – a RAID1 LUN.
- VMs installed on two or more data disks, in either a RAID1 or RAID10 configuration. The parity calculations of RAID5 make it an unsuitable choice because of the penalty on write operations.
- Two NICs, teamed, for host communications.
- Another two NICs, teamed, for the virtual switch to allow VM communications. Note that the physical switch ports can be trunked and the NICs of VMs tagged if you require support for more than one VLAN.

There are lots of ways to design a Hyper-V host. I always recommend that people walk the path that others have walked before, because you’ll experience fewer problems that way. The above design isn’t the only option, but it’s a well-tried one.
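The teamed-NIC part of this design can be sketched in PowerShell along the following lines – the adapter, switch and VM names (“NIC3”, “NIC4”, “VM01”) are placeholders, and the teaming mode should match your physical switch configuration:

```powershell
# Team two NICs for VM traffic (switch-independent teaming as a safe default).
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind an external virtual switch to the team interface; host traffic
# stays on the other (management) team, so don't share with the host OS.
New-VMSwitch -Name "External1" -NetAdapterName "VMTeam" `
    -AllowManagementOS $false

# With the physical switch ports trunked, tag each VM's NIC with its VLAN.
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 102
```

Tagging at the VM NIC, rather than dedicating a physical NIC per VLAN, is what lets one teamed pair carry several VLANs over a trunk port.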

“If you can’t survive services going down, then a Hyper-V cluster is probably for you”

Hyper-V clusters

If you require high availability then you require shared storage. In practice, this means a SAN, Storage Spaces (using a JBOD) or a cluster-in-a-box.

The SAN option is popular because it’s seen as the safe option. The reality is that SANs are expensive because they’re built from proprietary hardware. JBODs can be a good option, especially if you want to use servers with which you’re familiar, and Storage Spaces-compliant hardware from another vendor.

Over recent years, I’ve had the most success with a cluster-in-a-box (CiB). It’s a concept where the JBOD and two Hyper-V hosts are in the same enclosure, usually just 2U in rack height in the SME world. Such a cluster solution can offer over 30TB of usable fault-tolerant storage (a mixture of SSDs and hard disks offering tiered performance and scale/economy), two processors per node, and 512GB or more of RAM per node. That’s a pretty big solution for just 2U.

The downside of CiB for many is that the Storage Spaces concept hasn’t been popular with the “big two” server manufacturers, probably because they’d prefer you to purchase their SANs! For a CiB solution you’ll need to look at the second-tier manufacturers. If you want to stick with Dell and HP then they’ll offer you lower-cost SAN solutions, but you’re looking at far lower specs, capacities and performance compared to what’s on offer with Storage Spaces.

To achieve a stable and well-performing network:

- Make sure the server is configured for high-performance computing.
- Enable and configure jumbo frames if you’re using 10GbE or faster networking.
- Keep all drivers and firmware up to date – don’t rely on Windows Update.
- Disable VMQ on all 1GbE NICs, because it causes countless problems; leave VMQ enabled on 10GbE or faster NICs.
- If you can, use 10GbE or faster NICs for cluster communications and live migration networks. A number of vendors have made more affordable 10GbE switches for SMEs and branch offices, and with VMs using ever more RAM, a live migration can take a long time on 1GbE networks.
- If you’re using two or more 10GbE NICs, enable SMB live migration, which can aggregate the bandwidth of the NICs. For example, twin 10GbE NICs can enable live migration at 20Gbits/sec.
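The VMQ and live migration settings above can be applied with PowerShell roughly as follows – a sketch, so check your own adapter speeds with Get-NetAdapter before disabling anything:

```powershell
# Disable VMQ only on the 1GbE adapters; leave it on for 10GbE or faster.
Get-NetAdapter | Where-Object { $_.LinkSpeed -eq "1 Gbps" } |
    Disable-NetAdapterVmq

# Turn on live migration and switch it to SMB, which can aggregate the
# bandwidth of multiple 10GbE NICs via SMB Multichannel.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```

Run on each host; the SMB option only pays off when more than one fast NIC is available for the migration network.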

Virtual storage design

There are always questions about storage design. I like to keep things simple. First, when I create a Hyper-V cluster, I do the following:

- Create a 1GB witness disk to provide quorum for the cluster.
- Deploy one cluster shared volume (CSV) for each Hyper-V node in the cluster, and balance the placement of VMs across the hosts and related CSVs. This optimises performance at many points in the lifecycle of the cluster.
- I don’t use all of the storage for the CSVs. This is because storage growth of the VMs will vary over time, and leaving unallocated space allows me to grow individual CSVs as required.
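On a two-node cluster, the witness and CSV steps might be scripted like this. It’s a sketch: “Cluster Disk 1/2/3” are the default names Failover Clustering assigns to the shared disks, so yours may differ:

```powershell
# Use the small 1GB shared disk as the cluster's disk witness for quorum.
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

# Convert one shared disk per node into a cluster shared volume (CSV);
# they appear under C:\ClusterStorage on every node.
Add-ClusterSharedVolume -Name "Cluster Disk 2"
Add-ClusterSharedVolume -Name "Cluster Disk 3"
```

With one CSV per node, you can then set each node’s CSV as the preferred home for the VMs it owns.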

The second thing I do is change the default placement of files in the Hyper-V settings. Non-clustered hosts store all VM files in D:\Virtual Machines; Host1 in a cluster stores its files in C:\ClusterStorage\CSV1, Host2 in C:\ClusterStorage\CSV2, and so on. I then place all the files of each VM into a folder named after the virtual machine. This makes backups, troubleshooting, restores, replication and so forth much easier.
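The default paths can be changed per host with Set-VMHost – a minimal sketch for a non-clustered host using the D:\Virtual Machines layout described above:

```powershell
# Point new VM configurations and virtual hard disks at the data volume,
# so nothing lands on the OS disk by default.
Set-VMHost -VirtualMachinePath "D:\Virtual Machines" `
    -VirtualHardDiskPath "D:\Virtual Machines"
```

On a cluster node you’d point these at that node’s CSV path instead.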

Good practices

Your Hyper-V host is a host and nothing but a host. It shouldn’t run database software, and it shouldn’t be a domain controller, print server and so on. All those roles should run in virtual machines on the host. Doing otherwise is asking for trouble later.

The Windows firewall of your host should remain on, and you should never connect to the internet from the host. You can install antivirus on the host, but you must follow Microsoft’s scan exclusion requirements (pcpro.link/277h-v) or you’ll have issues down the line. I never install AV on hosts because I’m concerned about vendor or operator errors causing VMs to disappear.

Finally, keep your Windows updates current. Consider using Cluster-Aware Updating on Hyper-V clusters to orchestrate updates with no VM downtime.
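As a sketch, Cluster-Aware Updating can be enabled and then triggered from PowerShell – the cluster name “HVC1” here is a placeholder:

```powershell
# Add the Cluster-Aware Updating clustered role so updates can be
# orchestrated automatically across the nodes.
Add-CauClusterRole -ClusterName "HVC1" -Force

# Kick off an updating run on demand: nodes are drained (VMs live-migrated
# away), patched and rebooted one at a time, so VMs stay online.
Invoke-CauRun -ClusterName "HVC1" -Force
```

The self-updating role can also be given a schedule, so patch weekends happen without an administrator at the keyboard.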

A Hyper-V cluster can become something you forget exists if you follow a few good practices. Keep things simple, try not to be the first to do something, and make good decisions about spending versus performance – this will result in you experiencing many satisfactory years with your Hyper-V deployment.

“There are many ways to design a Hyper-V host. I recommend you walk the path that others have walked before”

@joe_elway Aidan is a Microsoft MVP and author of two books on Hyper-V. He works for a technology distributor in Dublin, offering advice on areas such as Azure and Hyper-V.
