“THE SERVERS WERE ALL PLUGGED INTO UNMANAGED SWITCHES VIA A BIRD’S NEST OF TANGLED CABLES”
PAUL OFFERS A FRIEND SUFFERING A SERIOUS CLUSTERMUCK IN HIS SERVER ROOM SOME IT GUIDANCE
About a year ago, a mate of mine asked me to do an informal audit of the IT systems of a local company where he’d just been hired as head of technology. What I found can best be described as a colossal mess. The server room had around 30 servers of various makes and vintages, some in racks, but many just piled on top of each other. All of the machines were switched on, consuming lots of electricity and creating an ear-splitting amount of noise.
The servers were all plugged into unmanaged switches via a bird’s nest of tangled network cables, none of them labelled. Many of the machines were also plugged into domestic four-way extension leads! There certainly weren’t any power management devices to be seen.
The whole thing was a shambles – and, more importantly, a disaster waiting to happen. Obviously, I won’t name and shame my friend or the company here. What I will say is that it’s an outfit dealing in tech electronics and really should have known better! Said company isn’t alone, though. Over time, many firms find themselves slipping into a similar state.
My friend and I eventually managed to log into each of the servers. There was no central management, obviously, although the Windows servers were at least sitting within a domain, and the various Linux and other boxes all shared the same root password as the Windows administrator account. Not a good thing – but it did make our task much easier!
We found that of the 30 or so servers, six were powered up and doing precisely nothing. Three didn’t even have network cables attached. And of those that were doing stuff, four were just churning away on jobs no longer needed.
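The triage we did by hand can be sketched in a few lines of code. This is purely illustrative – the hostnames, load figures and job counts below are made up, not the firm's real data – but it captures the rule we applied: flag anything with no network cable, or anything idling with no active jobs.

```python
servers = [
    # (hostname, has_network_cable, avg_cpu_percent, active_jobs)
    ("web-old-01", True,  1.2, 0),
    ("file-02",    True, 35.0, 4),
    ("mystery-07", False, 0.4, 0),
    ("batch-03",   True, 60.0, 1),
]

def decommission_candidates(inventory):
    """Return hosts that are unplugged, or idle with no active jobs."""
    flagged = []
    for host, cabled, cpu, jobs in inventory:
        if not cabled or (cpu < 5.0 and jobs == 0):
            flagged.append(host)
    return flagged

print(decommission_candidates(servers))  # -> ['web-old-01', 'mystery-07']
```

In practice you'd pull the load figures from your monitoring system rather than typing them in, but even a crude list like this quickly separates the workers from the space heaters.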
So we whittled things down to approximately 20 useful servers.
The first step in sorting out this mess was to suggest a degree of consolidation. There were several file servers and around ten web and intranet servers running various applications. With the aid of an Excel spreadsheet, and a bit of juggling, I was able to reduce the box count to just 12.
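The spreadsheet juggling is really a packing problem: fit each service's load onto as few boxes as possible without overloading any of them. Here's a rough first-fit sketch – service names, load units and the per-server capacity are all invented for illustration:

```python
services = {"intranet": 30, "wiki": 10, "crm": 45, "builds": 25,
            "fileshare": 40, "monitoring": 15}
CAPACITY = 100  # arbitrary "load units" a consolidated server can handle

def first_fit(loads, capacity):
    """Greedily pack services onto servers, biggest loads first."""
    boxes = []  # each box is [remaining_capacity, [service names]]
    for name, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for box in boxes:
            if box[0] >= load:       # fits on an existing server
                box[0] -= load
                box[1].append(name)
                break
        else:                        # needs a new server
            boxes.append([capacity - load, [name]])
    return [names for _, names in boxes]

print(len(first_fit(services, CAPACITY)))  # -> 2 boxes for these numbers
```

A spreadsheet and common sense will get you to much the same answer; the point is simply that sorting the biggest workloads first tends to pack things more tightly.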
But then I added some more. Why? Well, there was no redundancy at all in the firm’s systems. It had an SQL server running lots of important tasks, but it was just a single instance. So, I suggested that as a bare minimum a second SQL Server be installed with mirrored databases, set up within a high availability group.
Likewise, the intranets and various mission-critical web applications were running in a standalone mode, so I suggested secondary servers with automatic failover in case something were to go wrong. Same again for email. And we needed to add a backup appliance, too. We were quickly back up to around 20 servers again.
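The failover logic behind all of these suggestions is conceptually the same: prefer the primary, drop to the secondary when the primary stops answering. A minimal sketch of that decision – in real life your database availability group, load balancer or cluster manager does this for you, with proper health checks rather than a boolean flag:

```python
def pick_server(health):
    """Given {'primary': bool, 'secondary': bool}, pick a live server."""
    if health.get("primary"):
        return "primary"
    if health.get("secondary"):
        return "secondary"
    raise RuntimeError("no live server left -- time to page someone!")

print(pick_server({"primary": False, "secondary": True}))  # -> secondary
```

The hard part isn't the switch itself, it's keeping the secondary's data current enough that failing over doesn't lose anything – which is exactly what database mirroring within an availability group buys you.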
Now, had this been a normal firm I’d probably have been suggesting that much of this processing be moved to the cloud. Tools such as AWS and Azure are pretty reliable and very economical. But because of the particular industry this company works in, and the nature of some of its contracts, use of cloud computing and storage is strictly forbidden.
Twenty physical servers is daft, though. The key to sorting out this mess was virtualisation. A single, beefy server would probably run virtualised copies of all of these physical servers quite happily, but then we’d be back to having no redundancy. If the big server goes bang then you end up with a whole company twiddling its thumbs.
So for reasons I’ll explain in a minute, I suggested three physical servers plus a SAN for storage (a good SAN will have redundancy built in). Also, high availability pairs of managed network switches and PDUs, to avoid a single point of failure. It’s an ideal setup for a SME – and the whole thing can easily fit into a half-height rack, leaving plenty of space for other comms kit, routers, firewalls and the like. In fact, it’s pretty much a reference setup for SME virtualisation.
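The sizing logic is worth making explicit: with N+1 redundancy you provision so that the full workload still fits after any one host dies. A back-of-envelope check, using illustrative figures (the RAM numbers below are assumptions, not the firm's actual spec):

```python
total_load = 20 * 8     # ~20 VMs at, say, 8GB of RAM each = 160GB
host_ram = 128          # GB per physical host (assumed spec)
hosts = 3

# Capacity left if one of the three hosts fails outright:
surviving_capacity = (hosts - 1) * host_ram

print(surviving_capacity >= total_load)  # True: 256GB >= 160GB
```

Run the same sum with two hosts and it fails (128GB against 160GB of load), which is precisely why a two-host cluster with no headroom is a trap: it only offers redundancy until the day you actually need it.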
Why three servers? Well, I like to have two running all the mission-critical services with mirrored data and files. Items such as intranets, web servers, email and database servers. Each physical host can run half of the “live” services to even out the CPU and memory load. In the instance that one of this pair of
This isn’t the server room in question, but it looked quite similar!
PAUL OCKENDEN owns an agency that helps businesses exploit the web, from sales to marketing and everything in between