Dealing with legacy systems
What can you do with ancient PCs that occupy vital roles? Steve Cassidy looks at the logic of maintaining legacy systems – and the associated challenges
In any pub conversation – among IT people at least – there’s likely to be a manager boasting that he has no legacy systems in his portfolio. “All our stuff is in the cloud these days” is something I’ve heard more than once, as if this were the natural destination of all computing functions. However, sometimes there are good reasons – or, at least, inescapable ones – for sticking to something old, hot, slow and physical. “They don’t make ’em like this anymore.”
Designing computer equipment isn’t just about matching up cases, keyboards and screens. There are all sorts of subtleties of implementation in chip and board design, and every so often these deliver a machine that punches far above its weight, price or market segment. In the past, I’ve heard of video-editing software that effectively demanded specific models of HP or Dell workstation, and you can still find a healthy market in ancient Mac Pros online, supporting designers whose productivity in the various releases of Photoshop, InDesign, QuarkXPress and so on is – to put it politely – not mainly constrained by the performance of the equipment.
Moving in different cycles
IT people like to work with this year’s model. The default assumption is that faster, or more stingy with the electric, is always worth the upgrade cost – and that customers will be persuaded of that benefit. The idea that other renewal cycles exist, and that they might be absolutely dominant for the activity in question, rarely arises.
One obvious example is the use of the iPad in education. New hardware comes along roughly every 18 months, and the software update cycle for iOS is around a year. Yet education budgets tend to assume, not unreasonably, that once a piece of kit is in place it won’t need refreshing within three years – and the preferred interval would be closer to a decade, a figure based not on what educators say they would like, but on what their budgets can actually achieve. Other types of kit come with other shelf-life expectations. Public sector purchasers may be wary of projectors
that can show a PowerPoint presentation or Adobe PDF files directly from a USB stick, because those software standards move forward quickly and obsolescence beckons. Then we have systems that use PCs as displays or controllers: air conditioners for offices, MRI machines for hospitals, even those 50ft-long yellow grumbling beasts that rip the surface off a motorway just as you go on your holidays. These all have some kind of computer controlling them, which can become many generations old before the machinery itself needs updating.
Lack of upgrade expertise
Often, a legacy system could in theory be upgraded or replaced quite easily – but there’s nobody available to do it, because the guy who originally set it up has left, and there’s no-one else who knows the system, and the customer’s business, well enough to do the work. That might sound like an exaggeration, but on many occasions I’ve seen businesses let a jobbing programmer go, without ensuring that they have access to his code, his tools and his forwarding address. There are rumours that this even hits the biggest players: development of Microsoft PowerPoint was reportedly stalled for several years at one point
because the wrong employee had left the company.
The browser TARDIS
There’s a certain agony in realising that your website or web-controlled piece of hardware makes use of a certain feature that has since been declared a security risk, or simply deprecated in favour of a newer technology. Suddenly, the feature is gone from your web browser, and you can no longer operate your piece of kit or website. For an internal web app, the easiest solution is to keep an old machine around, complete with a nasty old browser, just for that one job. Other approaches are possible: I’ve been told of one company that trained all of its staff in removing and reinstalling the different versions of Java, just to get around certain security requirements.
A question of rights
Modern IT managers, schooled in sensible modern practices, may right now be gasping like landed fishes at the idea of allowing end users to remove and replace key system components, as this probably means they can run whatever else they want too – including all kinds of pernicious malware. But for some businesses, that’s the least disruptive option.
Even if your everyday users don’t need administrative rights, you may find it impossible to keep them away from your developers. Developing and debugging commercially usable code almost mandates admin access, and the testing and release timetable equally prohibits the luxury of revoking it whenever it’s not required. As a result, there are lots of industry-specific apps out there that assume and expect that the machine they’re running on will be completely open to them – and, naturally, to pretty much any other executable that comes along.
So what can you do?
These numerous scenarios add up to a widespread problem. That’s manifestly so, as many legacy systems sit in plain sight. If you’re late for an EasyJet flight, go to the desk and you’ll see its character-mode, serial-terminal booking system in (very speedy, it has to be said) action. Look up at a train timetable display screen when it crashes and you might see error messages from Windows NT 4, or Windows XP. This points to an unfortunate truth: if there were easy get-outs, people would be using them – so obviously there aren’t.
In fact, when it comes to legacy challenges, it’s very hard to find success stories. And when they do come up, the solutions aren’t always elegant. I remember a certain large British bank swallowing up a competitor, and subsequently announcing that it had achieved the intended merger of its banking systems on time and to budget. Technically, it had met the letter of those goals, but not the spirit. The stated aim had been to have their disparate mainframe banking suites running “on the same hardware”. In practice, one of the two package suites was simply virtualised – lock, stock and barrel, along with its entire operating system – and installed on a giant mainframe alongside the original system.
More recently, I met an engineer who had been responsible for an old, liquid-cooled IBM mainframe, used to run weather forecast data for an entire country. To improve the speed of forecasting, he’d resorted to a bodge similar to the nameless UK bank’s. First, he invested in a howling, room-sized Cray supercomputer running SUSE Linux. Then, onto this he simply installed the IBM’s boot image as a virtual machine. Lo and behold: forecasts came in 40 times faster, and the legacy hardware issue was, after a fashion, resolved.
Virtualisation isn’t always the answer. In fact, it’s not technically a solution at all, because it doesn’t address the underlying issue, which is that the code isn’t running on a modern, supported platform. Still, virtualisation is often the easiest way to move old files and favourites over from an old system to a new one – and the tools are largely free (although the paid-for versions are worth the comparatively small expenditure).
If you take this route, one thing to watch out for is network security.
Most VM hypervisor suites give you virtual switches to play with; even the little desktop versions of the heavyweight stuff can have three or four VMs sharing a physical Ethernet card. But life gets complicated if your legacy system is itself vulnerable to bad network traffic. In this case, you really want to be able to run a proper firewall “in front of” that particular VM. Setting this up is one challenge – proving that it’s actually working and delivering a benefit is a whole different level of testing.
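As a minimal sketch of what “a firewall in front of one VM” can look like – assuming the legacy guest runs under libvirt/KVM, and noting that the filter name and service port below are illustrative inventions, not anything from a real deployment – a libvirt network filter can drop everything except the one service the old system actually exposes:

```xml
<!-- Sketch of a libvirt nwfilter definition (filter name and port are
     hypothetical): admit only the legacy app's own service port, plus
     replies to connections the guest opens, and drop the rest. -->
<filter name='legacy-lockdown' chain='root'>
  <!-- Let the old control application be reached on its one service port -->
  <rule action='accept' direction='in' priority='100'>
    <tcp dstportstart='8080' dstportend='8080'/>
  </rule>
  <!-- Allow replies to connections the guest itself initiated -->
  <rule action='accept' direction='in' priority='200'>
    <all state='ESTABLISHED,RELATED'/>
  </rule>
  <!-- Everything else inbound is dropped before it reaches the guest -->
  <rule action='drop' direction='in' priority='500'>
    <all/>
  </rule>
</filter>
```

The filter is then referenced from the guest’s network interface definition with a `<filterref filter='legacy-lockdown'/>` element. As for proving it works: port-scanning the guest from another machine on the LAN is a crude but effective way of confirming the rules are actually being enforced.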
Virtualisation helps when it comes to the question of administrative rights, too. Let’s look back at that crazy case where users needed admin privileges to uninstall and reinstall different Java versions; the modern fix would be simply to fork the VM images. You can have a basic image with one version of Java, and then a forked clone with the other version installed. Flipping between two display windows is a lot easier than repeatedly installing and uninstalling software. In the case I heard about, the approach proved not only more secure, but also ended up saving some users a couple of hours per day.
While this kind of fix sounds great – and will certainly get you some kudos in the world of desktop virtualisation hacker forums, if that’s the sort of place you hang out – remember that it’s all still just kicking the ball further down the road. Workarounds such as these rely on evading, rather than fixing, the fundamental issue of compatibility.
Support your local developer
The real escape route from legacy may involve getting stuck into the trendy world of social networking, to help you find someone who can actually solve your problem. That doesn’t necessarily mean putting out a cry for help on Facebook, but perhaps talking to an industry body. After all, no individual player in an industry is likely to be able to support developers through a ground-up reworking of a legacy system, or through the ongoing lifecycle of the resulting product. An industry association can have a role both as a specifier and as the owner of the code, the rights, the process and the employees required to keep it alive.
That last bit, incidentally, lies at the absolute core of the issue. Almost anyone can write a piece of software, and sell it too. The problem comes after that, when the requirements or the environment start to change, and things no longer work as expected. Businesses don’t like keeping developers on the payroll just to hang around, on the off-chance that a modification is needed. An industry body or a member’s club that allows commercial competitors to come together and engage IT resources ought to be an obvious solution.
However, there’s one fly in the ointment: that pesky issue of commercial advantage. There are certain industries where competition isn’t that fierce, processes are pretty standard and everything is highly professional, and in those fields you very often find standard software suites that do what they should. The commercial property sales business is an example of this, perhaps because there’s so much money at stake that no-one dares take a chance on the wrong piece of project-modelling software. Unfortunately, being a developer to a professional association comes with almost no pressure to keep things up to date – after all, the legacy system will carry on running for another year, right? Imagine, for example, a developer who had agreed to an implementation schedule that included the weeks of the London Olympics, then casually revealed as the day drew near that they were a member of Team GB, would be competing for the UK, and would be back in a few weeks. Great for the country; not so great for the people paying the wages.
Of course, that isn’t strictly a computing problem – more of a human resources issue. However, the IT profession as a whole needs to acknowledge its own role in these situations. How long would it take a hobbyist with a Raspberry Pi to whip up a replacement for those train timetable display controllers running Windows XP? Why, then, does it take years and years for anything to change? Very often the worst kinds of legacy thinking are not on the part of the business at all. Consequently, all you can do is keep things running as smoothly as possible, and carry on the hunt for upgrade opportunities.