PC Pro

STEVE CASSIDY

Doing a spot of techie DIY is a good thing: not only can it save you money, it also keeps you on top of trends and solving modern problems.

Back when paper magazines were the undisputed kings of computing, we used to have a slightly uncomfortable relationship with Custom PC – another title in the Dennis stable, but based on a different proposition. PC Pro is, of course, aimed at the PC Pros, both in the industry and in the wider population of users. Custom PC was all about the nuts and bolts – building your own PC by browsing parts lists and fine-tuning its configuration to exactly what you wanted. We felt ourselves to be the serious types, with important databases to maintain and large-scale websites to present; most of what happened over at Custom PC was about gaming, we said.

Except that when you consider what has happened in the large-scale world of heavy-duty computing, there’s a viewpoint that says the Custom PC guys won the competition. Quite sizeable projects did their number-crunching using CUDA – a feature of Nvidia gamer-grade graphics cards that makes their horsepower available to any job developers can imagine. CUDA wouldn’t have arisen without the pointless arms race between gamers to push display frame rates far beyond the point at which human perception can tell the difference. Their readiness to keep buying faster cards bankrolled the CUDA side of what Nvidia was doing.
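To make the point concrete, here’s a minimal sketch of what CUDA lets a developer do – using Python and the Numba library purely as my own illustration (the column names no toolkit): the same silicon that pushes frames around a game can scale a million numbers in parallel.

    import numpy as np
    from numba import cuda

    # Each GPU thread handles one array element - the "horsepower for
    # any job" idea in miniature. Numba is assumed for illustration only.
    @cuda.jit
    def scale(values, factor):
        i = cuda.grid(1)              # absolute index of this GPU thread
        if i < values.size:           # guard the ragged final block
            values[i] *= factor

    data = np.random.rand(1_000_000).astype(np.float32)
    d_data = cuda.to_device(data)     # copy the array onto the graphics card
    threads = 256
    blocks = (data.size + threads - 1) // threads
    scale[blocks, threads](d_data, np.float32(2.0))
    result = d_data.copy_to_host()    # fetch the answers back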

Then there’s the now somewhat discredited notion of “white-box computing”. This was the idea that there was no advantage at all to the various customisations offered by the big corporate brands, and that even if there was for the first version of your big compute block, it would be easy to identify the dependencies and bottlenecks and simply engineer them out, either in the code or in the architecture. The slogan said that the PC standard wasn’t to be adulterated, and that there were no real performance problems, only mistaken implementations.

Oh, how naive this looks these days. The driving forces at the top end of computer architecture all revolve around data, which has grown so fast that pretences of defending a philosophy have had to be discarded pretty quickly, and ultrafast network interlinks such as InfiniBand have stamped all over the hardware-agnostic ideals of the white-box aficionados. Detailed, low-level compatibility and guaranteed levels of throughput mean that, if anything, this field is now dominated by ultra-cautious, ultra-attentive specialist suppliers, who don’t take kindly to the idea that you might fancy changing the spec or doing an upgrade all by yourself.

Yes, I know that Google is a big fan of not using enterprise servers from big names in its data centres – I should put my cloud hat on at this point and not invite any debate on this subject. Google is the ultimate in custom programming projects. It isn’t subject to any development constraints, save those it imposes on itself, and this means that fine-tuning the boxes for its purposes is easier than it is for almost anyone else on the planet. This in turn means that few of the lessons it learns are transferable to projects with less data, smaller user counts or fewer data centres.

So, that’s it. Cassidy hates DIY fiddling and is firmly in the hands of the big corporate brands, and that’s all there is to it, right? No. Nothing could be further from the truth.

I was set off thinking about this topic by what I found when I rescued a machine from our recycling room over the Christmas break. Well, I say rescued; others might say broke. Initially, my curiosity was rewarded by a version of Windows 7 set to Korean, once I fixed the trivial BIOS fault that had the machine trying to boot from a secondary hard disk. In fact, I was impressed by the performance once I’d ditched that installation and gone back to basics. This little mini-ITX motherboard may have had only two RAM slots, but it also had an eight-core AMD FX-8120 CPU: single socket, but eight real, separate cores.

I’m not going to show you the scans of the Korean ID documents I found on the disk, nor tell you where the previous owner had applied to work (as a compliance and security specialist!). It all got reformatted anyway, since reading antivirus and malware scans in Korean isn’t one of my strong points.

Eight cores, in a bottom-end cheap box with “chic – modernism design by BIGS” etched in the front panel. Surely there could be some use for this thing? A few hours of fiddling later, I was sure it could do an excellent job of running VirtualBox, and that it was quick enough that several of my VMs could move to it without major upset.

It was so good, in fact, that I started researching what it might take in the way of upgrades. 16GB of RAM was attainable but a little pricey, and there were at least one or two potential CPU upgrades that teased me. So I took off the heatsink – and found the usual desiccated, crumbly mess where once there was a heatsink pad. Bits of dried-out material flaked off everywhere as I tried to remove the CPU. Having verified exactly which component this CPU really was, I put it all back together – and was confronted by a rather dead Korean PC.

Hold on here a second. Everyone has war stories. Fiddling inside the guts of an individual PC isn’t that different from clambering about trying to make sense of the rear of a company rack of servers, or a data centre: there are plenty of traps for the unwary there, too. When PCs go wrong these days, it barely makes any sense at all to attempt a repair – just throw it away and get a new one, which will inevitably be faster, cheaper, more energy-efficient and so on. What possible sense is there in even taking risks such as these?

I beg to differ with this state of mind. The main reason not to repair PCs isn’t cost, nor even replacement price: it’s time. I had loaded half a dozen VMs onto the “chic” box. Even the time needed to remove the drive they live on and host it in another machine was far shorter than the time I’d have spent engaging with the ever-swirling, ever-changing PC market to find a suitable replacement.
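As a rough illustration of why the drive swap wins on time (a sketch only: the mount point is invented, though VBoxManage registervm is the standard way to adopt an existing machine definition on a new host):

    import subprocess
    from pathlib import Path

    # Assumed mount point for the rescued drive - purely hypothetical.
    vm_root = Path("/mnt/rescued-drive/VirtualBox VMs")

    # Register every .vbox machine definition found on the drive.
    for vbox_file in sorted(vm_root.glob("*/*.vbox")):
        subprocess.run(["VBoxManage", "registervm", str(vbox_file)], check=True)

    # Confirm the half-dozen VMs are back in the fold.
    subprocess.run(["VBoxManage", "list", "vms"], check=True)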

My approach to the DIY conundrum is that the benefits aren’t about saving money on the chips or disks. Even as I was unbending a paperclip to straighten up the dented pin on the CPU, I was having conversations with a couple of clients via Facebook chat over the relative merits of servers with many lower-core-count CPUs versus fewer chips with high core counts. This is painfully relevant now that the licensing terms for Windows Server 2016 are emerging (the sketch below shows the arithmetic). Saving money is only one of the justifications for DIY fiddling, and the funny thing about the list is that, individually, they’re all a bit weak. Good to keep an AMD homebrew box around in an all-Intel shop? Yeah, okay. Testing different graphics cards to see where a slowdown is coming from? Well, actually, this one is quite strong, because even in 2017 there are bits of software that creep to a halt, and hide their reasons for doing so.
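On that licensing point, here’s the back-of-envelope arithmetic as I understand Microsoft’s published model – per-core licences with minimums of eight per processor and 16 per server; treat this as my sketch rather than official guidance:

    # Windows Server 2016 core-licensing arithmetic (my reading of the
    # published minimums: 8 core licences per CPU, 16 per server).
    def core_licences(sockets: int, cores_per_socket: int) -> int:
        per_socket = max(cores_per_socket, 8)   # each CPU needs at least 8
        return max(sockets * per_socket, 16)    # each server at least 16

    # Two 8-core chips and one 16-core chip both need 16 licences,
    # while a single 24-core part needs 24 - hence the architecture debate.
    print(core_licences(2, 8), core_licences(1, 16), core_licences(1, 24))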

Now, look back as I did at the start of the article. The overall historical lesson is that all the advances are made in areas where people aren’t looking. Nobody had a roadmap for intense, long-running calculation problems in supercomputing that included lots of fraggers playing Quake, or thought that distributed computing and software development would solve the problem of fast, expensive processors spending most of their time idle. That solution came in from left-field: it was the guys who were off the mainstream radar in supercomputing terms, but who had large piles of data and a growing sense of impatience over the barriers to being able to get results out of the mess.

What all these fields have in common is that they were populated and popularised by inveterate fiddlers, who were working in unauthorised ways on spare kit.

So what’s been my key discovery with this round of bodging about? Aside from the astounding naivety of some people in keeping their personal data safe when they dispose of a “broken” home computer (don’t worry, Mr Korean Compliance Officer Who I Can’t Name Because I Don’t Read Korean: the chances of me trying to pass myself off as you, equipped with those hi-res scans of your identity document, were always going to be fantastically limited), I’ve mostly been learning quite a lot about VirtualBox under Windows.

“I took off the heatsink – and found the usual desiccated, crumbly mess where once there was a heatsink pad”

Virtually an orphan

When I think about how much I’ve written about virtualisation, I almost feel sheepish about my lack of hands-on knowledge of VirtualBox. But let’s jump to what I do know. It’s a long-term, always-free player in the market; it has the best facilities for opening VMs kept in competitors’ file formats; and it makes a pretty good go of being entirely cross-platform. Heck, I can open a VM on one of my servers from a Mac, a Windows PC and a Linux machine (although not at the same time!). Plus, performance on Linux machines is impressive: it’s nice having the “new” eight-core AMD machine on which to run things, but it isn’t that much quicker in real use than my dual-core Celeron Dell boxes with Ubuntu 16 loaded.
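On the file-format point: OVF/OVA is the interchange format, and pulling in a machine exported from another hypervisor is a one-liner via VBoxManage import (the filename and VM name below are invented for illustration):

    import subprocess

    # Import an appliance exported elsewhere; --vsys 0 targets the first
    # virtual system in the archive. Names here are hypothetical.
    subprocess.run(["VBoxManage", "import", "exported-elsewhere.ova",
                    "--vsys", "0", "--vmname", "RescuedVM"], check=True)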

The big difference, however, is in managing the hypervisor itself. Under Ubuntu, VirtualBox has been a royal pain, especially around the process of upgrading the underlying version of Ubuntu while keeping my collection of VMs on that machine running and intact. I know that Linux aficionados will come out with a standard set of defensive statements in reaction to this observation – stuff like it’s the app provider’s responsibility to be well behaved around OS upgrades, or that you should always be able to move your important data to another machine while undertaking an upgrade. These arguments just get in the way of the fact – an annoying one to a single desktop user, a potential blocker in larger deployments – that VirtualBox was a better occupant of Windows on my junk PC than it was of Ubuntu on my corporate-standard Dell.

This is because, as any Linux guru will tell you, Windows is a bit rubbish. If you’re an operating system first-principles snob, that is. Windows’ rubbishness means VirtualBox must take greater care with its environment and can’t trust the OS to supply all sorts of added bits, such as a faster disk driver. It’s far more at home inside a Linux distribution, but that homeliness comes at a price. Come upgrade time, it causes more trouble than it’s worth, not least because finding advice you can trust is tough.

It’s becoming increasingly difficult to find concentrated, summarised resources that walk you through the things you need to know to run a piece of software as complex as a hypervisor. The effort required to skip over flame wars, reconstruct actions from blow-by-blow accounts, or see through conversations that taper off before the right answer is actually written up is now a major factor in deciding which desktop hypervisor you’ll be able to live with in a productive manner. Looking at the “market” from this standpoint, the clear winner is Hyper-V, because Microsoft never leaves support and documentation to the crowd. Second comes VMware, which loves the crowd but manages the resulting body of information quite tightly. Oracle and VirtualBox come out in a bit of an orphan state: there’s lots of information out there, but you can’t tell whether a given forum or special-interest blog is maintained by Oracle, or just floating free in the blogosphere.

When it comes to desktop hypervisors, the difference between Hyper-V and VirtualBox is similar to the difference between corporate standard PCs and a crazy homebrew machine with blue LEDs behind all the fans. There’s so much to tweak under the hood with VirtualBox, and that’s great – but it can also lead to disasters that aren’t tolerable in a business production environment.

Keeping yourself in practice with a bit of home build, bodge-up and hack-around isn’t just a matter of staying current with trends – it’s been how the IT business has made the biggest leaps forward in the past couple of generations. Or at least, that’s what I tell myself, while straightening out CPU pins in a horrified, trembling wreck.

ABOVE Spend some time rescuing an old computer and who knows what you might discover

@stardotpro Steve is a consultant who specialises in networks, cloud, HR and upsetting the corporate apple cart

BELOW The latest Nvidia graphics cards have thousands of CUDA cores inside, all available to developers

BELOW Ubuntu forums offer a huge amount of advice, but finding who to trust can be a tough task
