Jon gives a high-five to the Office team as it finally converges on a common codebase, and provides advice on surviving the CPU Armageddon.
Big news from the Office team at Microsoft! It has finally converged all versions of Office onto a common codebase. This has long been a goal for Microsoft, one it has tried, and failed, to achieve before.
The codebase for Mac Office was forked off in 1997 – the same year Microsoft created the Mac Business Unit, famously made a $150 million investment in Apple, and promised to keep developing Office for Mac. So I guess the new MBU just took the code and went its own way. No doubt the experience of getting Office 97 for Windows out of the door was sufficiently painful that much internal harmonisation effort had already fallen by the wayside.
Obviously, having multiple codebases for a product that claims cross-platform compatibility will result in pain and hardship for all concerned. And Mac Office has certainly had its woes over the past decade and more. For instance, the move to Intel CPUs meant the next version of Office shipped without Visual Basic for Applications, because Microsoft hadn’t ported it in time.
In recent years things have been somewhat better, although the compatibility has still had rough edges, especially if you push a product such as Excel hard. However, over the past year or so, it’s clear that the Mac Office team has been working hard, and releasing new versions to the Office Insider group of advanced testers. Some of these have been plain weird – my favourite memory was the bug that flipped your entire Excel sheet upside down. There was also a nasty repaint bug that meant the sheet wouldn’t update – that one lasted for months. But it’s been coming together nicely, and with the release of Mac Office 2016 version 16 on 18 January, the team was happy to announce code convergence.
What does this mean in practice? Well, it means that the vast majority of the core codebase (written in C++) is common to Mac, Windows, iOS and Android. There’s a relatively thin layer of platform-specific code that interfaces with the host device, and which has to be customised to that platform: C++ for Windows, Objective-C for Mac and iOS, and Java for Android.
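The split described here – a shared core talking to a thin, per-platform layer – is a classic ports-and-adapters arrangement. A toy sketch of the pattern in Python (the real Office core is C++, and every name below is invented purely for illustration):

```python
from abc import ABC, abstractmethod

# The thin platform layer: each OS supplies its own implementation.
class PlatformShim(ABC):
    @abstractmethod
    def show_alert(self, message: str) -> None: ...

    @abstractmethod
    def clipboard_write(self, text: str) -> None: ...

# A hypothetical per-platform adapter; a real one would call into
# Cocoa, Win32 or Android APIs rather than print.
class MacShim(PlatformShim):
    def show_alert(self, message: str) -> None:
        print(f"[NSAlert] {message}")

    def clipboard_write(self, text: str) -> None:
        print(f"[NSPasteboard] {text}")

# The shared "core": identical on every platform, and knows nothing
# about the host OS beyond the shim interface it is handed.
class SpreadsheetCore:
    def __init__(self, shim: PlatformShim):
        self.shim = shim
        self.cells: dict[str, float] = {}

    def set_cell(self, ref: str, value: float) -> None:
        self.cells[ref] = value

    def copy_cell(self, ref: str) -> None:
        self.shim.clipboard_write(str(self.cells[ref]))
        self.shim.show_alert(f"Copied {ref}")

core = SpreadsheetCore(MacShim())
core.set_cell("A1", 42.0)
core.copy_cell("A1")
```

Porting to a new platform then means writing only a new shim, which is why features can now land on all four platforms in closer alignment.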
Having a common codebase means more common functionality, and also the ability to launch new features across platforms in closer time alignment. For too long, the “Not Windows” version of Office, especially the Mac one, has been the weak cousin of the Windows version. But this is changing as part of Microsoft’s cloud-first approach. The days of Ballmer-esque hatred of anything that isn’t Windows are definitely over. And congratulations to Erik Schwiebert, the Microsoft principal software engineer on the Mac team, for helping to bring this together.
Rode mics and software
Congratulations are also in order for Rode Microphones from Australia. This is a market-leading company that makes top-flight microphones at sensible prices. If you look around at any trade fair, you’ll see a Rode microphone on top of almost every camera. The company really has sewn up that market, and it’s down to the wisdom and savvy of the founder, Peter Freedman, who has steadily invested in all the technology to ensure everything is made in-house. It has given Rode an edge to rapidly bring products to market, and to explore new and disruptive price points. As I mentioned at the time, it bought the SoundField brand from the UK a year ago – which is close to my heart – and I can’t wait to see its forthcoming interpretation of that.

I was intrigued to read a few days ago that Rode is moving into the measurement mic market too. This has long been the province of companies such as Brüel & Kjær, GRAS and others. It’s a small market if you’re talking about laboratory-grade reference microphones, where a price tag of thousands of pounds per item isn’t unusual. Even a calibrator can cost that much. With the arrival of RodeTest, it looks like Peter is on a mission to shake up that market. It’s bringing out a range of reference test microphones, and I’m hoping the price will be a fraction of what the established players charge. Rode has bought FuzzMeasure, too, which is excellent acoustics measurement software for macOS.

You might think there isn’t much of a market for such esoteric technologies and tools, but you’d be wrong. The reason it’s been so niche until now is the high cost of the microphones and test hardware. Even the excellent Audiomatica Clio system will run to a couple of grand once you have its cheap and cheerful measurement mic added to the invoice. Once you drop the price so it hits the commodity marketplace, sales expand. And anywhere you have music, playback, recording or any kind of place where acoustics matter, it’s so much better to actually measure it than take a half-informed guess.
I can’t wait to see what Rode brings to market, and to compare the results to the big boys. Disruption is good, especially when it results in capabilities at lower prices for a greater number of people.
Why is it so hard to set up digital signing of email?
I’ve been wondering about digitally signing email. It’s something that I almost never see, either personally or professionally, and that worries me. I accept that fully encrypted email is quite a step. Large organisations can roll out such a solution, together with fully encrypted document control, and make it mandatory on every desktop, laptop and mobile device. Things aren’t so simple for the small-business owner, though.
I looked up various Microsoft documents on how to implement this on Office 365 – and rapidly found myself in a maze of twisting passages. The documentation assumed my laptop was running Windows; in particular, documentation for the Mac version of Office was strangely absent.
It’s too much like hard work; there ought to be a simple routine here. Various Microsoft literature points to places in the admin pages where I can set up a digital certificate, but this doesn’t appear to have been updated for a while, and certainly doesn’t reflect the new UI that I see on the Office 365 administration screen.
Surely there’s a business case for having digitally signed emails? It would give confidence to the recipient that something hadn’t gone awry in the transmission chain, and would also allow us to have a somewhat stronger capability in the war against spam.
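The core idea is simple: the sender attaches a cryptographic signature over the message body, and the recipient checks it to detect tampering in transit. Real S/MIME does this with X.509 certificates and public-key signatures (so the recipient needs no shared secret); Python’s standard library can’t do S/MIME on its own, so the sketch below uses a shared-secret HMAC purely as a stand-in to show the verification idea – the message text and secret are invented for the demo:

```python
import hashlib
import hmac

# Demo-only shared secret. Real S/MIME uses the sender's private key
# and certificate instead, so no secret needs to be shared.
SECRET = b"demo-only-shared-secret"

def sign_message(body: str) -> str:
    """Produce a hex signature over the message body."""
    return hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()

def verify_message(body: str, signature: str) -> bool:
    """Check the body against the signature, in constant time."""
    return hmac.compare_digest(sign_message(body), signature)

body = "Please pay this invoice to the usual account."
sig = sign_message(body)

print(verify_message(body, sig))                       # untouched body
print(verify_message(body.replace("usual", "new"), sig))  # tampered body
```

Any change to the body – a spammer rewriting a bank account number, say – makes verification fail, which is exactly the confidence a signed email would give the recipient.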
Fun with the Synology NVR
I’ve been having fun with the new Synology NVR1218 I mentioned last month. It’s basically a two-drive NAS box that has the excellent Surveillance Station software built in. Add some hard disks and IP cameras and you’re good to go. Even better, it has an HDMI output on the back so you can plug in a monitor or spare TV to keep an eye on what’s happening. And when you need more storage, there’s an additional box that plugs in to increase the number of disks.
It does what it says on the tin, and I think it’s a cracking solution. I recommended it to my mate Tim for one of his clients, and he was knocked sideways by it. The only problem I experienced was at setup.
When you set up a standard Synology NAS, you have to connect over the network because the NAS has no UI. This isn’t the case with the NVR1218, because I had plugged in a monitor. I got the Surveillance Station desktop, and as administrator I could do what I needed, and then set up a user account for day-to-day operation. Everything seemed fine – but I couldn’t find a way to join it to my cloud Synology account, and thus allow remote login.
It didn’t matter which bit of the UI I dug into, it simply wasn’t there. I asked Synology reps at CES, and they scratched their collective heads. They put me in touch with the UK office, where folk were similarly bemused. Then the light dawned: if I connected over the network to the NVR1218, I got the normal Synology NAS administration desktop, where I could set up everything I needed.
My fault entirely, and I feel a bit of an idiot for not thinking of it first. But it’s a tribute to the ease of use of the box that I thought I’d done everything I needed through the Surveillance Station desktop interface. If you get one of these boxes, then remember to do the full setup!
Surviving the CPU Armageddon
You can’t have missed the news about the CPU problem that’s plaguing almost every chip that’s shipped for a very long time. Some are claiming that it isn’t really a big deal; that exploits for it will be obscure and unlikely; and that we should all just chill out and have a gin and tonic.
Some say that the fixes being released will be just fine, but the rate at which they’re being issued and then withdrawn due to all sorts of unpleasantness indicates this is unlikely to be the case. And others are proclaiming it’s somewhere on the journey to CPU Armageddon – that the world has just stopped rotating.
What is crystal clear to me, however, is that we’re just at the start of this process. Intel has announced that it’s going to be many months before it can ship new CPUs that are clear of the various design flaws. That’s no comfort for those of us with existing devices, who must hope that a sticking-plaster approach of new firmware and core driver patches will somehow make things better. It certainly doesn’t fill me with the same reassurance of “go to bed dear, it will be better tomorrow” spoken by your mother when you were a child.
Have we been taken for a ride by the companies? This is a critical question and, undoubtedly, it will underpin many of the attempts at litigation that are already underway – or will start shortly. These cases will take years to play out. And I’m certain that there are many American lawyers already planning their new yachts on the back of the expected workload.
What strikes me as odd is that this has affected not only Intel but AMD too. And ARM, which is an entirely different platform and architecture. I could understand some design thinking becoming enshrined years ago at Intel, and it just cranking the handle, reusing the old design every time it came up with a new CPU. After all, it worked before, it will work now, and the performance boost is coming from the underlying silicon and fabrication capabilities at the foundries.
But for AMD and ARM to have the same issue makes me scratch my head. Has there been a Big Boys’ Book Of How To Design A CPU that everyone has followed religiously? Or are there genuinely only a few engineers who really understand this stuff, and they’ve worked on all the platforms over the years? Or is this just a case of “well – it’s not ideal, but the performance boost is worth it, and no-one will ever know”, applied across an entire industry?
I’m not sure. Certainly, something doesn’t feel quite right here. CPUs have problems – they have in the past, and they will in the future. The problem here is that in today’s world, these devices are strongly nailed down onto the motherboard. Back in the days of socketed CPUs, you could have registered your duff Pentium with Intel, and it would send you out a replacement. After a fiddly ten minutes, armed with that sticky white heat paste to get the heatsink and fan reattached, you’d be up and running.
But what are we to do today? How do I physically change the CPU in my Dell XPS 13? Or my MacBook Pro? Or inside my iPhone or Samsung S8? The micro-miniaturisation that the industry has been perfecting means that a quick and dirty swap-out is no longer on the table.
We’ll have to rely on firmware and code to try to mitigate this – which may or may not be enough. Worse still, we can’t buy new computers with truly fixed hardware for the best part of a year, even longer. This puts a lot of companies in a difficult position, especially in the data centre. And by goodness, you’re in a sticky position if you’re providing hosted VM capabilities, where it apparently might be possible for one VM to be able to read into the memory space of another VM.
What to do? Well, on the assumption that we can’t go out and wave the corporate credit card because hard-fixed CPUs aren’t available and won’t be for a long time, we have to mitigate the risk. The first thing to do is to ensure that everything is patched up to date: that you have every possible driver from your hardware vendor, and that you’re religiously checking for firmware and UEFI updates. Don’t assume it will land in a friendly Windows Update; it’s now time to go digging.
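On Linux at least, checking whether the kernel believes you’re mitigated is straightforward: kernels patched from early 2018 onwards expose a per-vulnerability status file under sysfs. A small sketch, assuming a Linux box (the function simply returns an empty result on other platforms or unpatched kernels):

```python
from pathlib import Path

def cpu_vulnerability_report(
    sysfs: str = "/sys/devices/system/cpu/vulnerabilities",
) -> dict[str, str]:
    """Map each CPU vulnerability the kernel knows about (meltdown,
    spectre_v1, spectre_v2, ...) to its mitigation status string."""
    root = Path(sysfs)
    if not root.is_dir():  # non-Linux, or a kernel without the patches
        return {}
    return {f.name: f.read_text().strip() for f in sorted(root.iterdir())}

# Print whatever the running kernel reports, e.g.
# "spectre_v2: Mitigation: Full generic retpoline".
for name, status in cpu_vulnerability_report().items():
    print(f"{name}: {status}")
```

Anything reporting “Vulnerable” rather than “Mitigation: …” tells you exactly where to focus your firmware and driver digging.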
This also means that it really is time to have a long, hard look at your hardware estate. What do you have, how old is it, and are you applying a realistic replacement timescale? Although you might expect me to suggest changing to newer hardware, there is a case to hold off until fixed CPUs are available. Or, conversely, you might decide that you have a three-year replacement policy, whereby you retire one-third of your hardware base every year.
In this scenario, you need new hardware this year, so you continue to buy. However, you do so in the knowledge that the hardware is compromised, and you must make sure you get absolutely rock-solid ongoing support from your vendor. If I were running a large organisation, I’d be having very polite words with my Dell representative, preferably in front of my corporate legal team, to ensure I was kept totally in the loop about what’s coming out and when.
This is a huge opportunity for the vendors, both software and hardware, to step up to the plate. For too long, support has been the grubby cost centre within these suppliers, and now they have to realise that this simply won’t work. For example, if you have a large estate of VMware, then you’ll be judging your ongoing commitment to that platform largely on how VMware reacts, moves forward and treats you as a valued customer.
I wish I could give better advice. As always, knowledge is king. What do you have, what is it running, what firmware/OS/drivers are loaded? If you’re not 100% on top of those issues, then it’s time to sort things out. Get in outside help, even for a one-day sanity check, where someone keeps saying “why?” and “no, but…” at you until there are no dirty secrets left.