PC Pro

JON HONEYBALL

Jon explores the advantages of Apple’s new file system, before explaining why we all need to sit up and listen when it comes to surround sound for VR

jon@jonhoneyball.com

The launch of a new file system is a rare event. There are good reasons for this: moving over an entire community of users from one file system format to another is a big deal. No, scratch that – it’s a huge deal. And it’s the sort of hugeness that means you can do this only once a decade. Even that would be considered almost rashly hasty.

If you think about it, the PC/Windows world has seen remarkably few file systems over the history of the PC. We started with MS-DOS and FAT, and then subdirectories were added – oh, the naughty radicalism of that move! FAT evolved through FAT16 to FAT32, and then stopped. NTFS arrived with Windows NT in 1993, saw only tiny evolutions in its first decade, and then it too stopped.

Then Microsoft came up with exFAT, which can be described as FAT64, or “FAT-with-some-NTFS-bits-added”, depending on whether your view is top-down or bottom-up. exFAT has been fairly successful, although take-up by third parties remains slow, no doubt as a result of the licensing cost that Microsoft applies for including exFAT in your product.

So the news that Apple is moving to the new Apple File System (APFS) is significant, simply because it’s such a rare event. Apple has decided that the existing HFS+ is old and tired, coming as it does from the era of the floppy disk drive. All this is true, and Apple has tweaked APFS into multiple versions targeted at its different platforms: there’s an APFS for macOS, one for iOS, and another for the Watch (yes, the Watch needs to have a file system, even if you can’t directly access it yourself).

By moving to APFS, Apple has brought together all the capabilities required into a single platform – and has strengthened and extended them significantly. One notable move is in the handling of encryption. iOS already supported full-disk encryption, which was a good thing. APFS goes further: it can encrypt both file data and metadata, and allows multi-key encryption with per-file keys for file data and a separate key for sensitive metadata. This is a critical step forward for personal data security, and something of a blow to the intelligence agencies that want access to everything right now.

What’s interesting about APFS is that Apple is making very little noise about it. It isn’t even mentioned in the release notes for iOS 10.3, which is odd given how significant the change is. It might explain why the 10.3 update takes some time to process, because it has to trundle through the file system and convert everything from HFS+ to APFS. This isn’t the first time such changes have been done on the fly – readers with long memories will recall how Microsoft originally hadn’t written an NTFS formatter program, but had a tool that converted FAT format to NTFS, using the infamous “convert d: /fs:NTFS” command. Part of getting users onto APFS is that the transition has to be totally seamless.

Last time I checked, the APFS support in macOS betas was for data volumes only – you couldn’t boot from it. Clearly, that isn’t acceptable for a mass rollout of APFS in the forthcoming autumn update to macOS, so I’m sure Apple is quickly fixing the limitation. But be clear that macOS will be changing from HFS+ to APFS this year.

Some are claiming that their iPhone runs longer on battery using the APFS file system, because it’s faster and more efficient. I think it’s much too early to have a view on this, but it’s possible that it will speed up older iPhones (and related devices).

Colour space of AMOLED display

Here’s an interesting problem. I’ve recently been conducting various measurements of tablet screens. The tools I use are about the best that money can buy – the Klein meter (kleininstruments.com/k-80) feeding into the excellent CalMAN software from SpectraCal (calman.spectracal.com).

I measured the iPhone 7 Plus, which displayed truly superb performance. Its accuracy to the sRGB colour space is so good that you can carry out colour-correction work onscreen with high confidence that your edits will be reflected correctly in the final output. If you use a screen with poor colour accuracy, you’ll make changes but have no idea whether they’re correct, or even appropriate to the underlying desired modification. It truly is a case of the blind leading the blind.
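To put a number on “accuracy”: the meter reports each test patch in a device-independent space such as CIELAB, and the software compares that against where the patch should sit for sRGB. Below is a minimal sketch of that comparison using the simple dE*76 metric (CalMAN itself typically reports the more perceptually refined dE2000); the measured figures are invented for illustration.

```python
import math

def delta_e_76(lab_ref, lab_meas):
    """Euclidean distance between two CIELAB colours (dE*76)."""
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(lab_ref, lab_meas)))

# Target for a full-red sRGB patch (255,0,0) under D65 is roughly L*53, a*80, b*67.
target_red = (53.2, 80.1, 67.2)
# Hypothetical reading from the colorimeter for the same patch.
measured_red = (52.4, 78.6, 64.9)

print(f"dE76 = {delta_e_76(target_red, measured_red):.2f}")
# Errors much below about 2-3 dE are generally taken to be invisible in normal viewing.
```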

So colour accuracy matters. Normally, you could measure the performance of a screen, then generate a new ICC file that contains details of the colour errors and how to correct them. Then you apply this ICC file to the display and it displays colours correctly.
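Conceptually, that correction stage maps what the panel actually produces back onto what sRGB says it should produce. A real ICC profile carries far more than this (tone curves, white point, lookup tables), but the core idea can be sketched as a 3x3 matrix derived from measured primaries; the panel numbers below are invented for illustration.

```python
import numpy as np

# Columns are the XYZ coordinates of the R, G and B primaries at full drive.
# sRGB's nominal primaries (D65):
M_srgb = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])

# Hypothetical measured primaries of the panel under test (slightly off-target).
M_panel = np.array([[0.43, 0.35, 0.17],
                    [0.22, 0.70, 0.08],
                    [0.02, 0.12, 0.93]])

# Correction applied to *linear* RGB before it reaches the panel, so the panel's
# output lands where sRGB intended:  M_panel @ C = M_srgb  =>  C = M_panel^-1 @ M_srgb
C = np.linalg.inv(M_panel) @ M_srgb

linear_red = np.array([1.0, 0.0, 0.0])   # full red, in linear light
corrected = C @ linear_red
print(np.round(corrected, 3))            # what to actually drive the panel with
```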

Unfortunately, Apple doesn’t allow you to rewrite the ICC file for the iPhone or iPad screen. While this is of itself somewhat disappointing, the reality is that the screen is normally sufficiently close to the correct colour that tweaking isn’t really needed. I say “normally” because Apple, like any large-scale manufacturer, has to use multiple suppliers to satisfy the mind-bending production numbers that are demanded by customers, especially in the initial few months when the New Shiny™ has just landed and everyone wants to be seen holding it.

So Apple buys screens from multiple sources, and they don’t always agree in terms of colour accuracy and even maximum brightness. The differences aren’t huge, and I’m not convinced that they would justify the creation of a custom ICC profile for a typical “man in the street” user. But it would be nice for the professional, finicky user who wants to know that it’s correct.

I then put a Samsung tablet onto the test rig – a Galaxy Tab S2, which is quite rightly a highly regarded product. I fired up the meter and CalMAN again, and measured the display accuracy. It was all over the place. I mean, truly horribly wrong. This caused much head-scratching, until I realised that this tablet uses the current state-of-the-art Super AMOLED display, which claims to offer a bigger colour range than a normal display.

I dug into the display settings, and found that I could set it to Basic mode. This was almost spot on to the sRGB-expected colours. But when I went back to normal, default mode, the colours were wrong. “Wrong” as in “much too shouty and loud”, if I can use the term we use here in the lab. Super-saturated, bold and bright colours. It certainly looks impressive, but it isn’t correct.

Now this does matter, because an RGB value of 255,0,0 has an expected red appearance when displayed: it maps through to the original colour value that the camera saw. Take 255,0,0 and apply it directly to a Super AMOLED display, and you’ll get a very strong red. It will be punchy. It’s correct in that it is indeed a super-saturated red. But it isn’t the same red that was intended when the camera took the photo.
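The mismatch is easy to quantify. The sketch below uses Display P3 as a stand-in for the tablet’s native gamut (I don’t have the panel’s exact primaries). If the panel simply paints sRGB 255,0,0 with its own maximum red, you get the punchier colour; showing the red the camera intended means backing off to roughly the values this calculates.

```python
import numpy as np

def srgb_decode(c):
    """sRGB transfer function: 0-1 encoded value in, linear light out."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Linear sRGB -> XYZ (D65), then XYZ -> linear Display P3.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
XYZ_TO_P3 = np.array([[ 2.4935, -0.9314, -0.4027],
                      [-0.8295,  1.7627,  0.0236],
                      [ 0.0358, -0.0762,  0.9569]])

srgb = np.array([255, 0, 0]) / 255.0
linear = np.array([srgb_decode(c) for c in srgb])
p3_linear = XYZ_TO_P3 @ SRGB_TO_XYZ @ linear
p3 = np.round([srgb_encode(max(c, 0.0)) * 255 for c in p3_linear])
print(p3)   # roughly [234, 51, 35] - well short of the panel's full-drive red
```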

Herein lies the problem. In the film/video world, we’ve known about colour issues for years, and there’s even a specialised step in the production process that exists to handle colour imbalances, called “grading”. We have multiple colour spaces to choose from in the high-end film/video world and we know that we need to take control of it. However, this isn’t true when it comes to photography, and especially not in the world of the smartphone camera, where the user simply points and shoots.

Until we have a known and defined colour space for domestic still images that encompasses the enhanced colour space of Super AMOLED, we’re left with a problem. Do we set the display to the sRGB colour space and disallow the enhanced colour space that the display can stretch to? Or do we show enhanced colour, knowing that it’s incorrect, however nice it might be to look at?

Then, another question: how should we report on such a display? Do we measure it in its default mode, in which case it’s wrong, even though it merely looks like “very loud” colour? Or do we switch it down to sRGB mode and report on its fine performance, clear in the knowledge that almost no user will ever switch away from the default?

It’s certainly a quandary, and one to be aware of when considering all the pathway components used in a colour film and stills environment. Don’t assume that things are as you might want, out of the box.

Surround sound stuff

The level of technical innovation within the UK never ceases to amaze me. Back in the 1970s, Peter Craven and Michael Gerzon developed the Soundfield microphone. This is essentially a single-point microphone array, comprising four capsules in a tetrahedral array. The output from this array, known as “A-format”, is processed via some straightforward maths to generate a four-channel signal called “B-format”. The channels of B-format are named X, Y, Z and W: X is front to back; Y is left to right; Z is up to down; and W is an omnidirectional signal.
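The “straightforward maths” really is straightforward. Ignoring the frequency-dependent capsule-correction filters that a production converter also applies, the four tetrahedral capsules (left-front-up, right-front-down, left-back-down, right-back-up) combine by simple sums and differences. A minimal sketch:

```python
def a_to_b(lfu, rfd, lbd, rbu):
    """Convert the four A-format capsule signals (arrays of samples) into
    B-format W, X, Y, Z. Real converters also apply correction filters to
    compensate for capsule spacing; that step is omitted here."""
    w = 0.5 * (lfu + rfd + lbd + rbu)   # omnidirectional pressure
    x = 0.5 * (lfu + rfd - lbd - rbu)   # front-back figure-of-eight
    y = 0.5 * (lfu - rfd + lbd - rbu)   # left-right figure-of-eight
    z = 0.5 * (lfu - rfd - lbd + rbu)   # up-down figure-of-eight
    return w, x, y, z
```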

Once you have a B-format feed, either live or recorded, you effectively have the data necessary to create a virtual sphere of sound, representing all the sound sources around you – whether they be behind, up, left, or in the top-right corner of your perspective.

Now things get really fun. From the B-format feed that creates a virtual sphere, you can place virtual loudspeakers into that sphere and then map them to real, physical speakers that are mounted around you. It might be an eight-speaker cube. Or a 12-speaker array. Or some other custom array. Providing you have a way of mapping the virtual speakers to the array, you have a way of recreating the physical sound field that was present at the Soundfield microphone.
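Mapping the virtual sphere onto a real speaker is just as tractable. The sketch below is a naive first-order decode, ignoring the shelf filters and distance compensation a serious decoder adds, and assuming classic B-format with W recorded 3dB down; each speaker simply gets a weighted mix of the four channels according to where it sits.

```python
import numpy as np

def speaker_feed(w, x, y, z, azimuth_deg, elevation_deg):
    """Naive first-order decode for one loudspeaker.
    Azimuth is anticlockwise from straight ahead; elevation is upwards."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    return 0.5 * (np.sqrt(2) * w
                  + x * np.cos(az) * np.cos(el)
                  + y * np.sin(az) * np.cos(el)
                  + z * np.sin(el))

# Example: one frame of B-format decoded to a speaker placed front-left and raised.
w, x, y, z = 0.3, 0.2, 0.1, 0.05
print(speaker_feed(w, x, y, z, azimuth_deg=45, elevation_deg=35))
```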

And, of course, if you have a high-quality digital recorder, then recording the four channels of B-format means you can play it back later and resynthesise the sound field. Or fiddle with it to your heart’s content. There’s superb software available for working with B-format; my favourite is from Harpex (harpex.net). It allows me to take B-format and output to binaural, which is surround sound optimised for headphones. Or even to Dolby Atmos format. Noisemakers (noisemakers.fr) from France has a full suite of fascinating plugins too, some of which are pay-for and others free.

Calrec productionised the Soundfield microphone in the late 1970s, and it’s been a stalwart of the surround sound world ever since. I remember using one for recordings throughout my undergrad years, from 1982 to 1986. Indeed, I wanted my final-year research project to be a way of capturing head movement so that I could post-process the virtual sphere to compensate for it. Then, if you were listening on headphones – which is vastly easier than a huge arrangement of a dozen or more speakers mounted on a framework – moving your head would allow the sound field to stay correctly orientated in space, irrespective of your head movement. Unfortunately, at the time, the accelerometers and processing necessary were simply beyond what was achievable for a reasonable cost.
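Today that head-tracking idea is trivial, because a B-format sound field can be counter-rotated with a little trigonometry. For a pure head turn (yaw), W and Z are untouched and only X and Y rotate. A sketch, assuming an anticlockwise-positive yaw convention:

```python
import numpy as np

def rotate_yaw(w, x, y, z, head_yaw_deg):
    """Counter-rotate a B-format field so the scene stays fixed in space
    while the listener's head turns by head_yaw_deg (anticlockwise positive)."""
    a = np.radians(-head_yaw_deg)        # rotate the field the opposite way
    x_r = x * np.cos(a) - y * np.sin(a)
    y_r = x * np.sin(a) + y * np.cos(a)
    return w, x_r, y_r, z                # W (pressure) and Z (height) are unaffected
```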

But times change. Roll forward to a few months ago, and the Soundfield platform was sold to Rode Microphones in Australia, as I mentioned in this column two months ago. Rode is world-renowned for its high-quality microphones, sold at reasonable prices. Its engineering is absolutely state-of-the-art for the complex process of making microphones. It has made a fortune from selling mics that go on top of cameras to accurately capture the sound for news, interviews and so forth, and it also makes a range of top-notch sound recording mics.

It has already announced a VideoMic Soundfield for release later in the year. I couldn’t be more excited by the news, because it will bring B-format to a far wider range of customers. Soundfield mics have been the playthings of the wealthy – the need for four channels to store B-format was quite a pain in the early days, because four-channel digital recorders were rare. And the mics weren’t cheap – around £5,000 for one isn’t a price to be sniffed at.

So why am I discussing this once again? Because surround sound is becoming increasingly important. There’s a big push towards VR now, whether you’re working at the cheap end of the market using a smartphone and a cardboard headmount from Google, or you have the money to pay for an Oculus Rift. But be in no doubt that VR is pushing full surround sound to centre-stage, and that means that B-format’s time has really come.

Still don’t believe me? Just look at all the work broadcasters such as the BBC are doing with VR with binaural fed from microphone arrays, including B-format. Or Google’s work with its Spatial Audio system (pcpro.link/273audio) and its support on YouTube. If you just want surround-sound playback, you can use YouTube/Google on headphones from a computer running Chrome, Firefox, Opera or Edge. On Android and iOS, you can use the appropriate YouTube app.

With the cost of entry collapsing, the desire to capture full surround sound with orientation information is growing. This is a true resurgence of the work done by Gerzon and Craven all those years ago, and it’s certainly worthy of your attention.

Storage stories

My plans for a second Synology NAS came to fruition with the purchase of a 2415+ 12-bay monster. I started off by fitting it with ten Seagate SkyHawk 8TB drives, along with two Samsung 850 Evo 250GB SSDs for cache. I should have known I was in trouble when I powered it up for the first time.

Two of the drives failed. I returned them to Amazon and received replacements. Then another two failed. At that point, I contacted Amazon and returned the entire stack of Seagate drives for a full refund. I replaced them with trusty WD Red drives from another supplier. Since then it has been trundling along just fine. My plan is to use the built-in Synology Hyper Backup tool between the two large arrays, with one array held in the lab and the second over the courtyard at my ISP (merula.net) on the other end of the Gigabit fibre link I mentioned a few months ago.

Am I worried about the drive failures? No, because the whole point about redundant systems is that they are redundant. I have no idea why I had a bad batch of Seagate SkyHawks from Amazon, but having got to four failures out of 12 drives (ten plus the two replacemen­ts), I decided it was time to walk away.

LEFT Some claim Apple’s new file system speeds up their older devices

@jonhoneyball Jon is the MD of an IT consultancy that specialises in testing and deploying hardware

BELOW Are the colours of your tablet screen accurate enough for colour correction work?

ABOVE Klein’s K-80 colorimeter is my tool of choice for measuring colour accuracy of screens

BELOW Harpex’s software allows me to optimise B-format for headphones

ABOVE Rode’s first VideoMic Soundfield mic will be released later this year
