The National - News

Google’s approach is redefining imaging with its AI process

- PETER NOWAK Peter Nowak is a veteran technology writer and the author of Humans 3.0: The Upgrading of the Species

The most important announcement made by Google at its fall product launch event last week wasn't about a feature the upcoming Pixel 2 smartphone will have, but rather one it won't have. Namely: a second rear-facing camera lens.

The five-inch Pixel 2 and its larger cousin, the six-inch Pixel 2 XL, will instead have only a single lens, bucking a growing trend in high-end smartphones.

While dual-lens rear cameras first appeared on smartphones in 2011, the trend took off in earnest last year, appearing on phones including the LG G5, Huawei P9, and of course the iPhone 7 Plus. Apple is continuing with two lenses with this year’s iPhone 8 Plus and iPhone X. Samsung has jumped on board with its latest Galaxy Note 8, as have a few others, including the Oppo R11 and the OnePlus 5.

The idea behind all these phones is that dual lenses make for better pictures – one lens can capture foreground details while the other handles the background. The phone’s software then combines the two images into a single photo that is superior to what one lens alone can produce.

It’s solid logic, except that the Pixel 2 – which is set for launch in six countries this month, although a UAE release date is still unknown – readily beats its competitors despite having just the single lens.

Influential image-quality testing site DxOMark has anointed the Pixel 2 the king of the smartphone heap, giving its camera a rating of 98 – the highest score ever for a smartphone, topping the iPhone 8 Plus and Galaxy Note 8, both of which received 94.

Google’s result is phenomenal given its different approach, but it’s also a sign of larger things to come as far as consumer gadgets are concerned.

The search behemoth is setting new standards in image quality not because of improvements to hardware such as lenses, although those are happening, but rather because it is applying machine learning and artificial intelligence (AI) to what is an otherwise analogue process.

As the company’s engineers explained during last week’s launch event, the Pixel 2’s camera relies on AI crunching information in the background – or the cloud, rather – to improve image quality.

Google’s cameras are basically learning from the billions of photos on the internet. The Pixel 2 can intelligently identify and separate backgrounds and foregrounds based on what the company’s algorithms have gleaned from processing a huge data trove.

So while a dual-lens camera might use two simultaneously shot photos to create a single, good-looking portrait, for instance, the Pixel 2 can effectively arrive at the same result by drawing on the example of many, many other similar photos.

As the DxOMark ratings indicate, it’s actually arriving at better results. Inevitably, Google’s competitors will attempt to apply the same techniques to their phones.

Consumers will continue to be the beneficiaries as image quality gets even better.

The application of AI to consumer technology isn’t just happening with cameras, though. Google is also taking the same approach with its Home Max speaker, which is launching in the United States in December and elsewhere next year. As with its previously released Google Home speaker, the Max will house the Google Assistant voice-activated AI, which provides users with audible answers on everything from recipes and weather to traffic conditions and news reports.

The Home Max, however, is geared towards quality audio, and again it uses AI to deliver it. Aside from higher-end physical specs, the speaker has AI that can detect where it is in a room – say, near a wall or in a corner – and automatically adjust levels accordingly.

Sonos introduced a similar feature called Trueplay a few years ago, but it required manual interaction from the user. The speaker homed in on your sound-emitting phone as you walked around the room with it, building a sort of audio map of its environment.

The Google Home Max does basically the same thing, but automatically.

It hasn’t been tested in the wild yet, so Google doesn’t have any top marks from influential audio authorities to boast about, but the underlying philosophy – where AI boosts the capabilities of the hardware – is the same. In that vein, it’s a safe bet the Home Max will get similarly positive reviews, even if it takes an iteration or two to get there.

Either way, Google’s approach is smart for several reasons. The capabilities of analogue hardware, whether it be camera lenses or speaker woofers and tweeters, can only be pushed so far. Software can take them further, and machine learning and AI further still.

In this particular field, the company has a huge and potentially insurmountable lead over its competitors, given its omnipresence on the internet. It was hard to imagine even a decade ago that a simple search engine would eventually let us take better pictures, but that’s where our gadgets are going.
