PC Pro

JON HONEYBALL

Jon helps out an old client as they shift to Office 365, but it’s the back-end infrastructure that was in greatest need of a reboot

Jon@jonhoneyball.com

It was one of those things I probably should have said no to, but they were a client from years ago, and a thoroughly good bunch at that.

I should explain that my days of doing small-business IT consultancy are pretty much over; when the only sensible answer to most questions became “Office 365”, I felt there wasn’t much value I could add. Getting companies from where they are to an Office 365 solution isn’t necessarily as easy as Microsoft would have you believe, but that’s the power of marketing.

So, the phone call was a surprise. Years ago, I’d recommended that this small office-based firm move from Windows Server to the cloud, and they were happy to do the work themselves. Since they were a technical firm, I didn’t foresee any great problem. They could stage the work over months and take it all at their own pace. Sorted, now where’s the beer?

The surprise was that they had apparently done none of the work back then, but had decided to sit on what they had. To be fair, it was working just fine; it wasn’t an “all eggs in one server basket” mess, but was properly considered and implemented. I was proud of where we’d ended up – a mix of base OS and VMs containing defined functionality. Backup and disaster recovery were sorted, both at the OS level and the VM image level. There were plenty of redundant services on the network to keep everything going. The storage was appropriately RAIDed, and there was a tape archive that was religiously taken off-site, and then brought back to do test reads/trial restores. It was hard to see why they should move.

When we met up afresh, it was clear the cobwebs were now showing, and that things were changing in the business. The desktops were as creaky as the servers, the Windows 7 installation was no more than adequate, and there was a significant risk of hardware failure on the ageing boxes.

What to do? Well, fortunately this client is well funded and isn’t put off by significant expenditure and change, if it’s required.

Given that this was going to affect almost everything in their building, we adopted an outside-in approach. First to go was the ADSL line; this was upgraded to FTTC. Next went the old limited firewall and the fortunately separate ADSL router, and in came the BT box for the FTTC and a Cisco Meraki firewall, along with a 24-port switch and a couple of that company’s most excellent Wi-Fi units.

At this point, it was clear that the old infrastructure was far from healthy, given the increase in speed and fluidity of the new installation. Some checks back at the lab showed that the port to which the old ADSL router had been connected was working in a “some of the time” way. It dropped packets, and was generally unwell. Not enough for the servers and the internet connection to totally disappear, but enough to make things somewhat wobbly.

With phase one complete, next was the wiring. The building wasn’t old, but the non-trunked wiring was tired and some of the wall sockets were unreliable. Too many cables took circuitous routes through holes in door frames, and were clipped to the wall with plastic P-clips. A trawl through the original specification sheets and documentation, including certifications of testing, showed that there was none. Of anything.

Some tests revealed that the wiring wasn’t up to the task of the next 10-20 years of operational use. So it had to come out, too. Requests for tender went out to various wiring specialists, and the one chosen had the most “wouldn’t use anyone else” recommendations. Since it was a small business, rewiring wasn’t a big issue, and also allowed for additional ports to be put into useful places – IP for the overhead projector in the boardroom, for example.

Now to the difficult part – what to do with the data. Since it was obvious that the company was moving to Office 365, some elements of migration needed to be done, especially the past content of its on-premises Exchange Server. This old dear was sufficiently dusty inside that all the components on the motherboard were unrecognisable under a thick layer of brown unpleasantness. A tentative attempt at blowing air into the box to shift some of the gunk resulted in a dust cloud that would have required me to be hospitalised had I not already run for the hallway.

Moving the Exchange Server and its critical history of emails to the cloud was left to a trusted subcontractor, who does this far more often than I do these days. The question was now down to storage and archive, and the desktops.

The client had been using Dell since Michael was a lad in diapers, so the decision was made to stick with it. This loyalty would make sense if Mr Dell actually gave a rat’s arse about a small, ten-seat company in East Anglia, but for some reason I suspect this isn’t the case. And I doubt the “sales account executive” would care much either, even if you could work out where he was located in the world.

Still, Dell kit is a known quantity and it’s no better or worse than the competition. New boxes were specified, the opportunity taken to upgrade the screen sizes and resolutions to something more 2017 than 2001, and Windows 10 Professional was installed.

Next up was the vexatious question of authentication. Back in the mists of time, this company had run on a group of Windows NT servers. Active Directory arrived and made things so much more complicated, even in the Small Business Server arena. Single sign-on sounds wonderful until you realise that you really don’t need it – and aren’t using it. In a larger SMB, single sign-on can be great, but I suspect most firms at the micro-end of the SMB space don’t use it; it’s just complexity they don’t need.

But what about local storage? The cloud is great, but fails miserably when the line goes down. Some sort of server seemed appropriat­e.

At this point, I turned to Synology – my NAS provider of choice. Of course, there are other vendors out there (as the BBC would say), but I get on well with Synology, all the way from my twin-disk NAS test beds through to a monster with eight 6TB drives in it, plus an external expansion box that takes another five drives. In this configuration, there are two SSDs acting as cache, ten 6TB drives in RAID, and one 6TB drive as a hot spare in case one of the drives in the RAID fails. It uses the Btrfs file system, which is really rather clever, and which allowed me to upgrade the RAID by adding more physical drives to it and then extending the volume over the space. I could add a five-drive bay to the unit and pop in another 30TB of storage, but I’m not at full capacity yet.
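For the curious, the underlying mechanics of that expansion on a stock Linux Btrfs system come down to adding the device and then rebalancing. Here’s a minimal sketch with placeholder device and mount names; on a Synology box, DSM’s Storage Manager drives the equivalent steps for you:

```python
import subprocess

# A minimal sketch of growing a Btrfs volume on a stock Linux system.
# /dev/sdX and /volume1 are placeholders; on a Synology, DSM's Storage
# Manager performs the equivalent steps through its GUI.

NEW_DRIVE = "/dev/sdX"    # the freshly fitted disk (placeholder)
MOUNTPOINT = "/volume1"   # the mounted Btrfs volume (placeholder)

def run(cmd):
    """Echo a command, then run it, stopping on any failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Attach the new device to the existing filesystem.
run(["btrfs", "device", "add", NEW_DRIVE, MOUNTPOINT])

# 2. Rebalance so data and metadata spread across all devices.
run(["btrfs", "balance", "start", MOUNTPOINT])

# 3. Confirm the enlarged capacity.
run(["btrfs", "filesystem", "usage", MOUNTPOINT])
```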

So an in-house Synology seemed just the ticket. Populate it with some SSD for cache, and enough 6TB drives to get them going, with space ready for the inevitable data expansion that happens over time.

Is simply sticking a pile of spinning rust onto the local network the best way forward in 2017, you ask? Especially when the client then decides that some laptops would be handy – and what about the smartphones and tablets that have crept into the business? Maybe some sort of hybrid local/cloud solution would be better?

We considered using OneDrive, although OneDrive for Business brings me out in spots that a whole bottle of Clearasil can’t shift. My cloud storage platform of choice is still Dropbox Business, although I’d be happier if it offered its option of hosting your data in Germany to a wider range of customers. At present, that’s limited to companies with 250 or more accounts, which seems a bit mean. I can’t see why it should be restricted in this way, unless Dropbox simply doesn’t have the capacity in Germany – either of storage space or bandwidth – to host a barrage of smaller customers.

Still, the normal service would do. A set of accounts was set up, with each device having its own local Dropbox team area and local personal storage. One of the most important accounts is the one that connects the Synology NAS to Dropbox – in other words, the NAS itself becomes a Dropbox client.

You can tune this approach, too. For example, place a small Synology box at your home address and connect it to the self-same Dropbox account – it, too, will receive all the file updates, but will hold them off-site at your house. Even better, make this off-site Dropbox client “receive only”, so you can’t make local changes that then get pushed out to the core network. If someone hacked into your home LAN, this would be a useful protection against pollution.
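Synology exposes this one-way behaviour as a setting in its sync tool, but the principle is easy to sketch with Dropbox’s public Python SDK: the mirror only ever lists and downloads, and never uploads. The token, folder names and paths below are placeholders:

```python
import os
import dropbox
from dropbox.files import FileMetadata

# A minimal sketch of a "receive only" off-site mirror using the official
# Dropbox Python SDK (pip install dropbox). It only ever downloads, so a
# compromised mirror can't push changes back into the team's files.
# The token and both paths are placeholders, not real values.

dbx = dropbox.Dropbox(os.environ["DROPBOX_TOKEN"])
REMOTE_ROOT = "/team-files"      # shared folder to mirror (placeholder)
LOCAL_ROOT = "/volume1/offsite"  # where the mirror lives (placeholder)

result = dbx.files_list_folder(REMOTE_ROOT, recursive=True)
while True:
    for entry in result.entries:
        if isinstance(entry, FileMetadata):
            local_path = LOCAL_ROOT + entry.path_lower[len(REMOTE_ROOT):]
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            # Download only -- there is deliberately no upload call here.
            dbx.files_download_to_file(local_path, entry.path_lower)
    if not result.has_more:
        break
    result = dbx.files_list_folder_continue(result.cursor)
```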

As you can see, this is quite different in design to a conventional file server sat in the corner, which everyone accesses over the LAN. The advantage of the cloud approach, as typified by Dropbox tools, is that users have access to everything they want in a managed, asynchronous way. And this really matters when you have laptops to contend with. Of course, there are various tools you can use to enable cached file support and so forth, but how are you going to roll that out on an Android tablet? Or an iPad? At least Dropbox provides one consistent view, plus it’s accessible via a web browser in the event you need to log into a presentation computer on a podium in a lecture theatre.

Using Dropbox as the sync is fine, but you need to ensure that the data is being backed up too. One solution is a start, but several are better. The route I’m using at the moment is to install the Synology Cloud Station Backup agent onto all devices, using this to push real-time changes to the Cloud Station Backup agent running on the Synology box. The secondary route for off-siting a file from my desktop/laptop into the Synology is a big help – I have Dropbox sync across all devices, but I also have real-time sync from the computer to the NAS into a multiple-version-history backup and archive store.

This is getting better, but we’re not done. A whole Synology NAS could fail, so what do we do about that? Well, we could have a second box on the network and operate that as a real-time failover. This might be wise, but I’m far from convinced it’s worth having unless you want to be paranoid, especially when the core data is also held on machines’ local disks.

A proper off-site archive is required, too. A simple Dropbox link to a Synology server held in your loft space might be adequate to protect you against fire or theft of the main box, but it certainly isn’t good enough as an archive.

The client had been using LTO for a while, so a move to a more recent, more capacious LTO drive seemed the best way forward, along with an appropriate tape-rotation policy: tapes kept off-site, and brought back on-site only to check their integrity on a tightly managed, policy-driven diary.
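Such a rotation policy can be as simple as a grandfather-father-son diary. As a flavour of what’s meant – this is a hypothetical sketch, not the client’s actual scheme, and the tape labels and retention rules are invented – it might look like this:

```python
from datetime import date, timedelta

# A hypothetical grandfather-father-son tape diary: daily tapes reused
# each week, weekly tapes reused each month, and a monthly tape retired
# off-site. Labels and retention rules here are invented, purely to
# illustrate the sort of policy meant.

def tape_for(day: date) -> str:
    """Pick which LTO tape a given weekday's backup should go to."""
    if day.weekday() == 4 and day.day <= 7:        # first Friday of the month
        return f"MONTHLY-{day:%Y-%m}"              # kept off-site long-term
    if day.weekday() == 4:                         # every other Friday
        return f"WEEKLY-{(day.day - 1) // 7 + 1}"  # reused month to month
    return f"DAILY-{day:%a}"                       # Mon-Thu, reused weekly

# Print a fortnight of the diary as a sanity check.
start = date(2017, 5, 1)
for offset in range(14):
    day = start + timedelta(days=offset)
    if day.weekday() < 5:                          # weekday backups only
        print(day.isoformat(), "->", tape_for(day))
```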

Networked printing simply isn’t a problem – each laptop or desktop can print directly to a networked colour laser printer.

At the end of the day, what were the pros and cons here? Would it have been worth opting for a newer, full server model in the old school of Small Business Server? I’m not convinced. Modern NAS boxes offer a huge amount of capability. Yes, you lose out in some areas, but many of those – some might say most – are only of interest to larger sites and organisations. I prefer simplicity and robustness to overly complex solutions. And a modern NAS gives you capabilities that let you grow to quite significant sizes, if they’re balanced against the strengths of cloud solutions.

As the saying goes, everything has a place – and there is a place for everything.

Vbox data logging

Toy of the month goes to RaceLogic for its utterly lovely Vbox Motorsport device. Real-time (well, 20 times a second) logging of speed, acceleration, g-force and so forth to your mobile phone. Automated 0-60 times, decelerations and 0-speed-0 runs for those pub-bragging rights, too – although one would only ever use such capabilities on a private road or test track.
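To give a flavour of the arithmetic involved, here’s a sketch of pulling a 0-60mph time out of a 20Hz speed log. The sample data is invented and this isn’t RaceLogic’s algorithm – just the obvious linear interpolation between samples:

```python
# A sketch of how a 0-60mph time falls out of a 20Hz speed log: find the
# last sample at rest, then linearly interpolate the instant the speed
# trace crosses 60mph. The log below is invented, and this is not
# RaceLogic's actual algorithm -- just the obvious arithmetic.

SAMPLE_HZ = 20
DT = 1.0 / SAMPLE_HZ  # 50ms between samples

def zero_to_sixty(speeds_mph):
    """Return the 0-60mph time in seconds, or None if never reached."""
    # Last standstill sample before the car moves off.
    launch = next(i for i, v in enumerate(speeds_mph) if v > 0.0) - 1
    for i in range(launch, len(speeds_mph) - 1):
        v0, v1 = speeds_mph[i], speeds_mph[i + 1]
        if v0 < 60.0 <= v1:
            # Fraction of the 50ms step at which 60mph is crossed.
            frac = (60.0 - v0) / (v1 - v0)
            return (i - launch + frac) * DT
    return None

# Invented log: at rest for half a second, then a steady 8mph-per-second pull.
log = [max(0.0, (i - 10) * 8.0 * DT) for i in range(200)]
print(f"0-60mph in {zero_to_sixty(log):.2f}s")  # 7.50s for this fake run
```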

We have a bunch of RaceLogic’s lab equipment, including its mindboggling LabSat GPS recorder/replay/generators, and it’s heart-warming to see a British company doing such good work. For those who enjoy track days at wonderful locations such as Cadwell Park, these tools offer real insight into what’s happening.

Wi-Fi logging tools

For network analysis, I keep coming back to Chanalyzer from MetaGeek. Its solutions work brilliantly, and it provides immediate insight into what’s going on. Here at the lab, we’ve been playing with the rather delicious tool from VisiWave (see left). This lets you take an architectural drawing of a building, and then walk around it taking measurements of Wi-Fi signal strength. Having taken a whole bunch of measurements, it can then provide you with a “hot spot” colour-coded view of the wireless signal, by interpolating between the measured results.

Of course, this is exactly that – an interpolation. So it’s a made-up result rather than a measurement. However, if you take enough measurements, then the interpolation between measured points can be pretty accurate. The ability to turn on and off various layers that belong to different SSIDs is useful, and the ability to make a working guess at where a Wi-Fi base station is located is helpful, too – especially if you’re trying to work out where your neighbours might have plonked their new BT or Sky box. Recommended highly, along with the MetaGeek software, and incredibly useful if you need that sort of thing.
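The weighting behind such a heat map is straightforward to picture. A common choice is inverse-distance weighting, sketched minimally below; it’s a generic technique, not necessarily what VisiWave itself uses, and the survey readings are invented:

```python
import math

# A sketch of the interpolation behind a Wi-Fi heat map: inverse-distance
# weighting (IDW) over surveyed points. IDW is a common choice for this
# job, but isn't necessarily VisiWave's own algorithm, and the survey
# readings below are invented.

# (x, y) position in metres -> measured signal strength in dBm
SURVEY = {
    (0.0, 0.0): -35.0,
    (10.0, 0.0): -55.0,
    (0.0, 8.0): -60.0,
    (10.0, 8.0): -48.0,
}

def estimate_dbm(x, y, power=2.0):
    """Estimate the signal at (x, y) from the surveyed readings."""
    num = den = 0.0
    for (sx, sy), dbm in SURVEY.items():
        dist = math.hypot(x - sx, y - sy)
        if dist < 1e-9:
            return dbm                  # sitting exactly on a measured point
        weight = 1.0 / dist ** power    # nearer readings dominate
        num += weight * dbm
        den += weight
    return num / den

# Estimate the signal in the middle of the room.
print(f"{estimate_dbm(5.0, 4.0):.1f} dBm")
```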

The Cloudbleed bug

I was going to talk about Cloudbleed, but I’m not convinced that all the dust has settled on that front, nor that the whole story has come to light. For those not yet up to speed, Cloudbleed is the trendy term – building on Heartbleed from 2014 – for the bug found in Cloudflare’s code.

That matters because Cloudflare is the shared backbone of thousands of big-name sites, including friends such as Uber, Fitbit and OkCupid, and it turns out that even visiting Uber could have led to other people’s passwords being stored in your browser cache.

I think the whole area of CDNs and web monitoring/management/interfacing is a can of worms that most end users are entirely ignorant about. I’ll chat with the other columnists and see how best to cover this topic. But it’s very important – and isn’t going away.

BELOW RaceLogic’s Vbox Motorsport is a terrific gadget for monitoring your performance – on a race track
ABOVE Want to determine the strength of your Wi-Fi signal? Here’s VisiWave in action
ABOVE To solve storage problems, I turned to a Synology NAS with huge potential capacity
@jonhoneyball Jon is the MD of an IT consultancy that specialises in testing and deploying hardware
BELOW Tired cabling was an issue; it wouldn’t serve the needs of my client into the future
