About a year ago, a mate of mine asked me to do an informal audit of the IT systems of a local company where he'd just been hired as head of technology. What I found can best be described as a colossal mess. The server room had around 30 servers of various makes and vintages, some in racks, but many just piled on top of each other. All of the machines were switched on, consuming lots of electricity and creating an ear-splitting amount of noise.

The servers were all plugged into unmanaged switches via a bird's nest of tangled network cables, none of them labelled. Many of the machines were also plugged into domestic four-way extension leads! There certainly weren't any power management devices to be seen.

The whole thing was a shambles – and, more importantly, a disaster waiting to happen. Obviously, I won't name and shame my friend or the company here. What I will say is that it's an outfit dealing in tech electronics and really should have known better! Said company isn't alone, though. Over time, many firms find themselves slipping into a similar state.

My friend and I eventually managed to log into each of the servers. There was no central management, obviously, although the Windows servers were at least sitting within a domain, and the various Linux and other boxes all shared the same root password as the Windows administrator account. Not a good thing – but it did make our task much easier!

We found that of the 30 or so servers, six were powered up and doing precisely nothing. Three didn't even have network cables attached. And of those that were doing stuff, four were just churning away on jobs no longer needed.

So we whittled things down to approximately 20 useful servers.

The first step in sorting out this mess was to suggest a degree of consolidation. There were several file servers and around ten web and intranet servers running various applications. With the aid of an Excel spreadsheet, and a bit of juggling, I was able to reduce the box count to just 12.
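The triage exercise above can be sketched in a few lines of code. The hostnames and roles here are invented examples, not the firm's actual inventory: flag any server with no remaining workload for decommissioning, then group the rest by role to spot consolidation candidates.

```python
# Sketch of the server triage described above, with made-up inventory
# data standing in for the real spreadsheet.
from collections import defaultdict

inventory = [
    # (hostname, role, workloads) -- hypothetical entries
    ("web01", "web", ["intranet"]),
    ("web02", "web", ["public-site"]),
    ("web03", "web", []),              # powered on, doing nothing
    ("file01", "file", ["shares"]),
    ("file02", "file", ["shares"]),
    ("sql01", "database", ["erp-db"]),
]

# Anything with no workloads is a decommissioning candidate.
decommission = [host for host, _, jobs in inventory if not jobs]

# Group the remainder by role; more than one host per role is a
# consolidation candidate.
by_role = defaultdict(list)
for host, role, jobs in inventory:
    if jobs:
        by_role[role].append(host)

print("Decommission:", decommission)
for role, hosts in by_role.items():
    print(f"{role}: {len(hosts)} host(s)")
```

It's a toy version of what the spreadsheet did, but the principle scales: record what each box actually runs before deciding what to merge or switch off.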

But then I added some more. Why? Well, there was no redundancy at all in the firm's systems. It had an SQL Server instance running lots of important tasks, but it was just a single instance. So I suggested that, as a bare minimum, a second SQL Server be installed with mirrored databases, set up within a high-availability group.

Likewise, the intranets and various mission-critical web applications were running standalone, so I suggested secondary servers with automatic failover in case something were to go wrong. Same again for email. And we needed to add a backup appliance, too. We were quickly back up to around 20 servers again.
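The failover idea is simple enough to sketch. This is a minimal illustration, not any particular product's mechanism: each service has a primary and a secondary host, a health probe (here just a stub), and requests are routed to the secondary when the primary stops responding. All the hostnames are invented.

```python
# Minimal sketch of primary/secondary failover, with a stubbed-out
# health check in place of a real probe (ping, TCP connect, HTTP).
services = {
    "intranet": {"primary": "app01", "secondary": "app02"},
    "email":    {"primary": "mail01", "secondary": "mail02"},
}

def is_healthy(host, down_hosts):
    """Stand-in for a real health probe."""
    return host not in down_hosts

def active_host(service, down_hosts):
    cfg = services[service]
    if is_healthy(cfg["primary"], down_hosts):
        return cfg["primary"]
    return cfg["secondary"]  # automatic failover

# Simulate app01 going down:
print(active_host("intranet", down_hosts={"app01"}))  # app02
print(active_host("email", down_hosts={"app01"}))     # mail01
```

Real failover systems add quorum, fencing and data replication on top, but the routing decision at the heart of it looks much like this.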

Now, had this been a normal firm I'd probably have been suggesting that much of this processing be moved to the cloud. Platforms such as AWS and Azure are pretty reliable and very economical. But because of the particular industry this company works in, and the nature of some of its contracts, use of cloud computing and storage is strictly forbidden.

Twenty physical servers is daft, though. The key to sorting out this mess was virtualisation. A single, beefy server would probably run virtualised copies of all of these physical servers quite happily, but then we'd be back to having no redundancy. If the big server goes bang then you end up with a whole company twiddling its thumbs.

So, for reasons I'll explain in a minute, I suggested three physical servers plus a SAN for storage (a good SAN will have redundancy built in). Also, high-availability pairs of managed network switches and PDUs, to avoid a single point of failure. It's an ideal setup for an SME – and the whole thing can easily fit into a half-height rack, leaving plenty of space for other comms kit, routers, firewalls and the like. In fact, it's pretty much a reference setup for SME virtualisation.

Why three servers? Well, I like to have two running all the mission-critical services with mirrored data and files – items such as intranets, web servers, email and database servers. Each physical host can run half of the "live" services to even out the CPU and memory load. And if one of this pair of servers fails, the other can take over the full load.
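Splitting the live services evenly across the two main hosts is a small bin-packing problem. Here's a sketch using a simple greedy heuristic – place the heaviest service on whichever host is currently lighter. The service names and load figures are invented for illustration.

```python
# Greedy split of services across two hosts by estimated load,
# heaviest first. Loads are rough, made-up estimates.
services = {  # name: rough load estimate (arbitrary units)
    "database": 40, "email": 25, "intranet": 15,
    "web": 15, "files": 10,
}

hosts = {"host-a": 0, "host-b": 0}
placement = {}

# Place each service on the currently lighter host.
for name, load in sorted(services.items(), key=lambda kv: -kv[1]):
    target = min(hosts, key=hosts.get)
    placement[name] = target
    hosts[target] += load

print(placement)
print(hosts)  # roughly balanced totals
```

Greedy placement won't always find the perfect split, but for a handful of services on two hosts it gets close enough, and it's easy to redo whenever load estimates change.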

This isn’t the server room in ques­tion, but it looked quite sim­i­lar!

PAUL OCKENDEN owns an agency that helps businesses exploit the web, from sales to marketing and everything in between
