Google I/O 2017: AI, privacy and you
An AI-first world is still first powered by advertising.
Last year, almost a year into his new role as CEO of Google, Sundar Pichai boldly declared that Google would shift from a mobile-first strategy to an AI-first (artificial intelligence) one.
This year’s I/O developer conference demonstrated the fruits of that AI-first focus, where Pichai once again emphasized, “…in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products.”
One of the most impressive of these AI-first products is the new Google Lens. As part of Google Assistant, Lens can analyze the real world via a smartphone camera and return relevant information about what it sees. Google demonstrated pointing the camera at a flower, and Lens then identified its species.
But Lens’ AI smarts aren’t limited to visual recognition; it can also suggest next actions based on what it sees. In one example, it correctly identified text as a Wi-Fi SSID and password, and then asked if the user wanted to connect to the network using those credentials.
Google Home and Photos also received new smart features, and even Gmail for iOS and Android got an AI boost, with an automated-response feature called Smart Reply (previously available on the web) that lets you choose from up to three suggested responses based on the email you received.
Google’s advances in machine learning require brilliant people and smart coding, but they also require large sets of data that an AI can use to improve itself over time. Lens, Photos and Assistant harvest more data about the world, and by extension, more of your personal information.
So these new AI-enabled features, while undeniably helpful, also raise scores of privacy concerns. What does Google do with the login password it just identified? How much of your email does Google parse in order to auto-suggest responses? Does Google log your location information when you upload images to Lens?

We know that Google already culls your search habits to sell ads; it’s not uncommon, for example, to see an ad on YouTube advertising the very thing you’d just been searching for. It wouldn’t be a stretch to think that your speech and image queries would also be used in the same way.

After all, the more AdSense, Google’s advertising arm, knows about you and your world, the more it can sell micro-targeted ads that you’re more likely to click. It pays to remember that advertising has always formed the bulk of Google’s profits; in 2016, for example, advertising brought in US$79 billion of Google’s total US$89 billion in revenue.

Google’s AI advances have vast potential to help the public good. But Google is a for-profit company, not a public service provider; R&D funding has to come from somewhere, and even AI has to answer to the bottom line.

In the past, Google could only track what you did online, but consider that with an app like Lens, Google will be able to see what you’re doing in the physical world. As our online and offline worlds encroach on one another, so does the surveillance economy threaten to pervade more of our ‘real’ lives – even as it comes in the form of an AI-powered app.