Thinking about running a server solution on Hyper-V? Microsoft MVP, blogger and author Aidan Finn shares his knowledge

PC Pro, November 2017, Issue 277

[Diagram: external virtual switch, team interface, LBFO NIC team, VLAN 102 trunk port]

If you’re thinking about running a server solution based on Hyper-V, then expert Aidan has the answers. Here, he distils his knowledge into two pages of best practice.

If you’re reading this, I’ll assume you’re either running Hyper-V already or giving it consideration. If it’s the latter, you have three options for where to run it: on-premises, with Microsoft’s business software running on Windows Server and Hyper-V platforms; hosting companies, using the same technologies to build hosted cloud solutions; or public cloud, using Azure, Office 365, Enterprise Mobility and Security, CRM 365 and more.

As a result of the above options, how you protect your installation might be different too. I typically recommend “born in virtualisation” backup solutions. They support the hypervisor fully, and can offer modern solutions that enhance rather than hold back the business. I like Microsoft Azure Backup Server (MABS) v2. It’s a pay-as-you-use-it system that backs up virtual machines (VMs) – and applications such as SQL Server – to local disk for short-term retention and uses affordable storage for long-term retention.

Then there’s disaster recovery. What happens if there’s a fire, or if there’s a human or technology issue that shuts down the business? You can double your hardware, software, facilities and operational expenses by building a secondary site and replicating VMs to it, but the cloud can offer more affordable pay-as-you-go alternatives. Azure Site Recovery offers a simple solution to replicate your running VMs to Azure’s storage, and to fail over one, some, or all of the machines in the event of a disaster.

On-premises hardware

The big question here is: can your business afford, or does it need, a highly available infrastructure? High availability is a feature of server virtualisation, where a VM can fail over (restart) on another host if the original host has an unplanned stoppage. This is made possible by deploying clusters of identical (or near-identical) hosts, known as nodes, using a Windows Server feature called Failover Clustering. A requirement of a cluster is that the VMs are stored on shared storage, but the meaning of this has changed over the years.
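A failover cluster only keeps running while the surviving nodes hold quorum, i.e. a majority of votes. The sketch below illustrates the basic majority model with an optional witness vote; it’s a simplified illustration of the concept, not Windows Server’s actual dynamic quorum algorithm:

```python
def has_quorum(nodes_up: int, total_nodes: int,
               witness: bool = False, witness_up: bool = False) -> bool:
    """Classic majority vote: each node gets one vote, plus an optional
    witness (disk or file share) vote. The cluster stays up only while
    more than half of all votes are present.
    (Simplified model; Windows Server's dynamic quorum is smarter.)"""
    total_votes = total_nodes + (1 if witness else 0)
    votes_up = nodes_up + (1 if witness and witness_up else 0)
    return votes_up * 2 > total_votes

# Two nodes alone deadlock at 1 of 2 votes; a witness breaks the tie.
print(has_quorum(1, 2))                                 # False
print(has_quorum(1, 2, witness=True, witness_up=True))  # True
```

This is why small two-node clusters are always deployed with a witness, as described in the storage design section later.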

Once upon a time, a Hyper-V (or VMware) cluster required a storage area network (SAN), which is one of the most expensive ways to deploy storage. Two or more servers (hosts) connect to this SAN and store their VMs on shared disks/volumes. Since then, software-defined storage has made it possible to use less expensive, and often more capable, solutions.

Windows Server 2012 introduced Storage Spaces, which was then improved in 2012 R2 and 2016. A classic Storage Spaces solution replaces the SAN with “just a bunch of disks”, or a JBOD, which is one or more trays of pooled disks that offer performance and fault tolerance in a more modern way than legacy RAID solutions. WS2016 introduced Storage Spaces Direct (S2D), where sharing is accomplished via block replication across high-speed networks – this is more suited to larger organisations.

So, do you need a Hyper-V cluster or not? That’s a question of risk versus cost and the answer will vary from one company to the next. If you can’t survive services going down, then a Hyper-V cluster is probably for you.

Non-clustered host

The hardware design of a non-clustered host is pretty simple. You have a few components:

- One or two processors – avoid going over 16 total cores in a single host for licensing reasons.
- The host OS installed on two disks – a RAID1 LUN.
- VMs installed on two or more data disks, in either a RAID1 or RAID10 configuration. The parity calculations of RAID5 make it an unsuitable choice because of the penalty on write operations.
- Two NICs, teamed, for host communications.
- Another two NICs, teamed, for the virtual switch to allow VM communications. Note that the physical switch ports can be trunked and the NICs of VMs tagged if you require support for more than one VLAN.

There are lots of ways to design a Hyper-V host. I always recommend that people walk the path that others have walked before, because you’ll experience fewer problems that way. The above design isn’t the only option, but it’s a well-tried one.
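The RAID5 write penalty can be shown with a little arithmetic. The IO-per-write figures below are the standard textbook values (mirroring costs two physical writes; RAID5 costs four IOs per write: read data, read parity, write data, write parity), not measurements from any particular controller:

```python
# Physical IOs per logical write for common RAID levels.
WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4}

def effective_write_iops(disks: int, iops_per_disk: int, level: str) -> int:
    """Rough ceiling on random-write IOPS for an array of identical disks."""
    return disks * iops_per_disk // WRITE_PENALTY[level]

# Four 150-IOPS spindles: RAID10 sustains twice the writes of RAID5.
print(effective_write_iops(4, 150, "RAID10"))  # 300
print(effective_write_iops(4, 150, "RAID5"))   # 150
```

Reads are unaffected by the penalty, which is why RAID5 can still suit read-heavy workloads, just not a mixed VM data disk.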

“If you can’t survive services going down, then a Hyper-V cluster is probably for you”

Hyper-V clusters

If you require high availability then you require shared storage. In practice, this means a SAN, Storage Spaces (using a JBOD) or a cluster-in-a-box.

The SAN option is popular because it’s seen as the safe option. The reality is that SANs are expensive because they’re built from proprietary hardware. JBODs can be a good option, especially if you want to use servers with which you’re familiar, and Storage Spaces-compliant hardware from another vendor.

Over recent years, I’ve had the most success with a cluster-in-a-box (CiB). It’s a concept where the JBOD and two Hyper-V hosts are in the same enclosure, usually just 2U in rack height in the SME world. Such a cluster solution can offer over 30TB of usable fault-tolerant storage (a mixture of SSDs and hard disks offering tiered performance and scale/economy), two processors per node, and at least 512GB of RAM per node. That’s a pretty big solution for just 2U.

The downside of CiB for many is that the Storage Spaces concept hasn’t been popular with the “big 2” server manufacturers, probably because they’d prefer you to purchase their SANs! For a CiB solution you’ll need to look at the second-tier manufacturers. If you want to stick with Dell and HP then they’ll offer you lower-cost SAN solutions, but you’re looking at super-low specs, capacities and performance compared to what’s on offer with Storage Spaces.

To achieve a stable and well-performing network:

- Make sure the server is configured for high-performance computing.
- Enable and configure jumbo frames if you’re using 10GbE or faster networking.
- Keep all drivers and firmware up to date – don’t rely on Windows Update.
- Disable VMQ on all 1GbE NICs, because it causes countless problems. Leave VMQ enabled on 10GbE or faster NICs.
- If you can, use 10GbE or faster NICs for any cluster communications and live migration networks. A number of vendors have made more affordable 10GbE switches for SMEs and branch offices. VMs are using more RAM, and a live migration can take a long time on 1GbE networks today.
- If you’re using two or more 10GbE NICs then enable SMB live migration, which can aggregate the bandwidth of the NICs. For example, twin 10GbE NICs can enable live migration at 20Gbits/sec.
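To put rough numbers on the live migration point, here’s a back-of-envelope calculation. It assumes 90% effective link utilisation and ignores the dirty-page re-copy rounds, so real migrations take longer than this lower bound:

```python
def migration_seconds(ram_gb: float, link_gbits: float,
                      efficiency: float = 0.9) -> float:
    """Rough lower bound on the time to copy a VM's RAM over the
    live-migration network. Ignores dirty-page re-copies."""
    bits = ram_gb * 8 * 1024**3          # RAM in bits (GiB -> bits)
    return bits / (link_gbits * 1e9 * efficiency)

# A 64GB VM: roughly ten minutes on 1GbE, versus about half a
# minute on twin 10GbE NICs aggregated via SMB live migration.
print(round(migration_seconds(64, 1)))   # 611
print(round(migration_seconds(64, 20)))  # 31
```

The gap only widens as VM memory sizes grow, which is the argument for 10GbE on the live migration network.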

Virtual storage design

There are always questions about storage design. I like to keep things simple. First, when I create a Hyper-V cluster, I do the following:

- Create a 1 x 1GB witness disk to provide quorum for the cluster.
- Deploy one cluster shared volume (CSV) for each Hyper-V node in the cluster, and balance the placement of VMs across the hosts and related CSVs. This optimises performance at many points in the lifecycle of the cluster.
- Don’t use all of the storage for the CSVs. Storage growth of the VMs will vary over time, and leaving unallocated space allows you to grow individual CSVs as required.
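The balancing step is nothing more exotic than spreading VMs evenly across the CSVs. A minimal sketch (the CSV names and VM list are illustrative only):

```python
import itertools

def balance_vms(vms, csv_count):
    """Round-robin a list of VMs across the cluster's CSVs, one CSV per
    node, as described above. Returns {csv_name: [vm, ...]}; the CSVn
    naming is illustrative."""
    placement = {f"CSV{i + 1}": [] for i in range(csv_count)}
    for vm, csv in zip(vms, itertools.cycle(placement)):
        placement[csv].append(vm)
    return placement

print(balance_vms(["web1", "web2", "sql1", "file1", "dc1"], 2))
# {'CSV1': ['web1', 'sql1', 'dc1'], 'CSV2': ['web2', 'file1']}
```

In practice you’d weight the placement by each VM’s disk and IO footprint rather than by simple count, but the principle is the same.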

The second thing I do is change the default placement of files in the Hyper-V settings:

- Non-clustered hosts store all VM files in D:\Virtual Machines.
- Host1 in a cluster stores the files in C:\ClusterStorage\CSV1, Host2 in C:\ClusterStorage\CSV2, and so on.

I then place all the VM files into a folder named after the virtual machine. This makes backups, troubleshooting, restores, replication and so forth much easier.
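The resulting folder layout can be sketched as a simple path rule. The drive letters and names follow the examples above and are illustrative, not mandatory:

```python
from pathlib import PureWindowsPath

def vm_folder(vm_name: str, clustered: bool,
              csv_number: int = 1) -> PureWindowsPath:
    """Per-VM folder following the layout described above: every file
    belonging to a VM lives under a folder named after that VM."""
    if clustered:
        root = PureWindowsPath(r"C:\ClusterStorage") / f"CSV{csv_number}"
    else:
        root = PureWindowsPath(r"D:\Virtual Machines")
    return root / vm_name

print(vm_folder("SQL01", clustered=False))
# D:\Virtual Machines\SQL01
print(vm_folder("SQL01", clustered=True, csv_number=2))
# C:\ClusterStorage\CSV2\SQL01
```

Because everything for a VM sits under one folder, a backup, restore or replication job only has to target that single path.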

Good practices

Your Hyper-V host is a host and nothing but a host. It shouldn’t run database software, and it shouldn’t be a domain controller, print server and so on. All those roles should run in virtual machines on the host. Doing otherwise is asking for trouble later.

The Windows firewall of your host should remain on, and you should never connect to the internet from the host. You can install antivirus on the host, but you must follow Microsoft’s scan exclusion requirements or you’ll have issues down the line. I never install AV on hosts because I’m concerned about vendor or operator errors causing VMs to disappear.

Finally, keep your Windows Updates current. Consider using Cluster-Aware Updating on Hyper-V clusters to orchestrate updates with no VM downtime.

A Hyper-V cluster can become something you forget exists if you follow a few good practices. Keep things simple, try not to be the first to do something, and make good decisions about spending versus performance – do this and you’ll enjoy many satisfactory years with your Hyper-V deployment.

“There are many ways to design a Hyper-V host. I recommend you walk the path that others have walked before”


Aidan is a Microsoft MVP and author of two books on Hyper-V. He works for a technology distributor in Dublin, offering advice on areas such as Azure and Hyper-V.

