AIDAN FINN
Thinking about running a server solution on Hyper-V? Microsoft MVP, blogger and author Aidan Finn shares his knowledge
If you’re reading this, I’ll assume you’re either running Hyper-V already or giving it consideration. If it’s the latter, you have three options for where to run it: on-premises, with Microsoft’s business software running on Windows Server and Hyper-V platforms; with hosting companies, using the same technologies to build hosted cloud solutions; or in the public cloud, using Azure, Office 365, Enterprise Mobility and Security, CRM 365 and more.
As a result of the above options, how you protect your installation might be different too. I typically recommend “born in virtualisation” backup solutions. They support the hypervisor fully, and can offer modern solutions that enhance rather than hold back the business. I like Microsoft Azure Backup Server (MABS) v2. It’s a pay-as-you-use-it system that backs up virtual machines (VMs) – and applications such as SQL Server – to local disk for short-term retention and uses affordable storage for long-term retention.
Then there’s disaster recovery. What happens if there’s a fire, or if there’s a human or technology issue that shuts down the business? You can double your hardware, software, facilities and operational expenses by building a secondary site and replicating VMs to it, but the cloud can offer more affordable pay-as-you-go alternatives. Azure Site Recovery offers a simple solution to replicate your running VMs to Azure’s storage, and failover one, some, or all of the machines in the event of a disaster.
On-premises hardware
The big question here is whether your business can afford, or needs, a highly available infrastructure. High availability is a feature of server virtualisation, where a VM can failover (restart) on another host if the original host has an unplanned stoppage. This is made possible by deploying clusters of identical (or near-identical) hosts, known as nodes, using a Windows Server feature called Failover Clustering. A requirement of a cluster is that the VMs are stored on shared storage, but the meaning of this has changed over the years.
Once upon a time, a Hyper-V (or VMware) cluster required a storage area network (SAN), which is one of the most expensive ways to deploy storage. Two or more servers (hosts) connect to this SAN and store their VMs on shared disks or volumes. Since then, software-defined storage has made it possible to use less expensive, and often more capable, solutions.
Windows Server 2012 introduced Storage Spaces, which was then improved in 2012 R2 and 2016. A classic Storage Spaces solution replaces the SAN with “just a bunch of disks”, or a JBOD, which is one or more trays of pooled disks that offer performance and fault tolerance in a more modern way than legacy RAID solutions. WS2016 introduced Storage Spaces Direct (S2D), where sharing is accomplished via block replication across high-speed networks – this is more suited to larger organisations.
So, do you need a Hyper-V cluster or not? That’s a question of risk versus cost and the answer will vary from one company to the next. If you can’t survive services going down, then a Hyper-V cluster is probably for you.
Non-clustered host
The hardware design of a non-clustered host is pretty simple. You have a few components:
- One or two processors – avoid going over 16 total cores in a single host for licensing reasons.
- The host OS installed on two disks in a RAID1 LUN.
- VMs installed on two or more data disks, in either a RAID1 or RAID10 configuration. The parity calculations of RAID5 make it an unsuitable choice because of the penalty on write operations.
- Two NICs, teamed, for host communications.
- Another two NICs, teamed, for the virtual switch that carries VM communications. Note that the physical switch ports can be trunked, and the NICs of VMs tagged, if you require support for more than one VLAN.
There are lots of ways to design a Hyper-V host. I always recommend that people walk the path that others have walked before, because you’ll experience fewer problems that way. The above design isn’t the only option, but it’s a well-tried one.
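The teamed-NIC design above can be sketched in PowerShell. This is a minimal illustration only – the adapter, team, switch and VM names are all hypothetical, and it uses the LBFO teaming cmdlets that were the standard approach in this era of Windows Server:

```powershell
# Hypothetical adapter names – check Get-NetAdapter for the real ones.
# Two NICs teamed for host (management) communications.
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Two more NICs teamed for the virtual switch that carries VM traffic.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent
New-VMSwitch -Name "External1" -NetAdapterName "VMTeam" -AllowManagementOS $false

# If the physical switch ports are trunked, tag a VM's NIC with its VLAN.
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 101
```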
“If you can’t survive services going down, then a Hyper-V cluster is probably for you”
Hyper-V clusters
If you require high availability then you require shared storage. In practice, this means a SAN, Storage Spaces (using a JBOD) or a cluster-in-a-box.
The SAN option is popular because it’s seen as the safe option. The reality is that SANs are expensive because they’re built from proprietary hardware. JBODs can be a good option, especially if you want to use servers with which you’re familiar, and Storage Spaces-compliant hardware from another vendor.
Over recent years, I’ve had the most success with a cluster-in-a-box (CiB). It’s a concept where the JBOD and two Hyper-V hosts are in the same enclosure, usually just 2U in rack height in the SME world. Such a cluster solution can offer over 30TB of usable fault-tolerant storage (a mixture of SSDs and hard disks offering tiered performance and scale/economy), two processors per node, and 512GB or more of RAM per node. That’s a pretty big solution for just 2U.
The downside of CiB for many is that the Storage Spaces concept hasn’t been popular with the “big 2” server manufacturers, probably because they’d prefer you to purchase their SANs! For a CiB solution you’ll need to look at the second-tier manufacturers. If you want to stick with Dell and HP then they’ll offer you lower-cost SAN solutions, but you’re looking at super-low specs, capacities and performance compared to what’s on offer with Storage Spaces.
To achieve a stable and well-performing network:
- Make sure the server is configured for high-performance computing.
- Enable and configure jumbo frames if you’re using 10GbE or faster networking.
- All drivers and firmware should be up to date – don’t rely on Windows Update.
- Disable VMQ on all 1GbE NICs because it causes countless problems. Leave VMQ enabled on 10GbE or faster NICs.
- If you can, use 10GbE or faster NICs for any cluster communications and live migration networks. A number of vendors have made more affordable 10GbE switches for SMEs and branch offices. VMs are using more RAM, and a live migration can take a long time on 1GbE networks today.
- If you’re using more than one 10GbE NIC then enable SMB live migration, which can aggregate the bandwidth of the NICs. For example, twin 10GbE NICs can enable live migration at 20Gbits/sec.
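Most of the tweaks above map to a handful of PowerShell commands. A hedged sketch – the adapter names are hypothetical, and the exact jumbo-frame value accepted depends on the vendor’s driver:

```powershell
# Enable jumbo frames on a 10GbE adapter (valid DisplayValue varies by driver).
Set-NetAdapterAdvancedProperty -Name "10GbE-1" -DisplayName "Jumbo Packet" -DisplayValue "9014"

# Disable VMQ on 1GbE NICs; leave it enabled on 10GbE or faster NICs.
Disable-NetAdapterVmq -Name "1GbE-1"

# Use SMB for live migration so the bandwidth of multiple NICs can be aggregated.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```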
Virtual storage design
There are always questions about storage design. I like to keep things simple. First, when I create a Hyper-V cluster, I do the following:
- Create a 1 x 1GB witness disk to provide quorum for the cluster.
- Deploy one cluster shared volume (CSV) for each Hyper-V node in the cluster, and balance the placement of VMs across the hosts and related CSVs. This optimises performance at many points in the lifecycle of the cluster.
I don’t use all of the storage for the CSVs. This is because the storage growth of the VMs will vary over time, and leaving unallocated space allows me to grow individual CSVs as required.
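As a sketch, the cluster and CSV layout described above might be built like this in PowerShell – the cluster, node and disk names are hypothetical placeholders, and it assumes the shared disks have already been presented to both nodes:

```powershell
# Create a two-node cluster (names and IP address are placeholders).
New-Cluster -Name "HVC1" -Node "Host1","Host2" -StaticAddress 192.168.1.50

# The small 1GB disk becomes the witness disk that provides quorum.
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

# One CSV per node; VMs are then balanced across the hosts and their CSVs.
Add-ClusterSharedVolume -Name "Cluster Disk 2"
Add-ClusterSharedVolume -Name "Cluster Disk 3"
```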
The second thing I do is change the default placement of files in the Hyper-V settings. Non-clustered hosts store all VM files in D:\Virtual Machines; Host1 in a cluster stores its files in C:\ClusterStorage\CSV1, Host2 in C:\ClusterStorage\CSV2, and so on. I then place all the files for each VM into a folder named after that virtual machine. This makes backups, troubleshooting, restores, replication and so forth much easier.
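Changing those defaults is a one-liner per host. A sketch with hypothetical paths – on a cluster, run it on each node, pointing at that node’s own CSV:

```powershell
# Non-clustered host: keep VM configuration and VHDX files together on the data disks.
Set-VMHost -VirtualMachinePath "D:\Virtual Machines" -VirtualHardDiskPath "D:\Virtual Machines"

# Cluster node Host1 (assumes its CSV folder has been renamed to CSV1).
Set-VMHost -VirtualMachinePath "C:\ClusterStorage\CSV1" -VirtualHardDiskPath "C:\ClusterStorage\CSV1"
```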
Good practices
Your Hyper-V host is a host and nothing but a host. It shouldn’t run database software, and it shouldn’t be a domain controller, print server, and so on. All those roles should run in virtual machines on the host. Doing otherwise is asking for trouble later.
The Windows firewall of your host should remain on, and you should never connect to the internet from the host. You can install antivirus on the host, but you must follow Microsoft’s scan exclusion requirements (pcpro.link/277h-v) or you’ll have issues down the line. Personally, I never install AV on hosts because I’m concerned about vendor or operator errors causing VMs to disappear.
Finally, keep your Windows Updates current. Consider using Cluster Aware Updating on Hyper-V clusters to orchestrate updates with no VM downtime.
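Cluster-Aware Updating can be enabled from PowerShell. A sketch with a hypothetical cluster name and schedule:

```powershell
# Add the CAU clustered role with a monthly self-updating schedule.
Add-CauClusterRole -ClusterName "HVC1" -DaysOfWeek Sunday -WeeksOfMonth 3 -EnableFirewallRules

# Or trigger an updating run on demand; VMs are live-migrated off each node before it patches.
Invoke-CauRun -ClusterName "HVC1" -RequireAllNodesOnline
```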
A Hyper-V cluster can become something you forget exists if you follow a few good practices. Keep things simple, try not to be the first to do something, and make good decisions about spending versus performance – do that and you’ll enjoy many satisfactory years with your Hyper-V deployment.
“There are many ways to design a Hyper-V host. I recommend you walk the path that others have walked before”