Everything Google revealed at I/O 2018
Google leaned heavily into machine learning and personalization during its keynote. BRAD CHACOS reports
Google always pulls out all the stops for the keynote at its annual I/O developer conference, and this year was no exception. Google I/O 2018 lacked the flashy flagship hardware that defined previous keynotes – nary a new Chromebook, Pixel, or Google Home could be found – but it still managed to shine, thanks to some serious improvements to the software and services underlying the entire Google ecosystem.
Hardware is nothing without software that tells it what to do, after all. And at I/O 2018, Google’s software was focused squarely on making the Internet more about you through the power of machine learning. Let’s dig in.
Gmail Smart Compose
Google CEO Sundar Pichai kicked things off with Smart Compose, which is basically Gmail’s Smart Reply cranked up to 11. Whereas Smart Reply scans your emails and offers buttons with quick one-click responses, Smart Compose uses AI to suggest complete sentences as you’re drafting an email. As you type, suggestions appear in faded grey text; pressing Tab accepts the suggestion.
“Smart Compose helps save you time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors,” Google says. “It can even suggest relevant contextual phrases. For example, if it’s Friday it may suggest ‘Have a great weekend’ as a closing phrase.”
Smart Compose sounds like a serious timesaver if it’s as effective in reality as it is in concept.
Google Photos
Machine learning is making Google Photos more useful, too. In the coming months, while you’re looking at an image, you might see new prompts offering to fix its brightness, or to fade the background to black and white so the star of the picture pops. Get this: Google’s AI smarts will even be able to add colour to old black-and-white pictures.
Just as cool: if you take a picture of a document, Photos will be able to create a PDF of it automatically – even if it was shot at an awkward angle.
Google Assistant
Google Assistant is evolving into your Google Assistant. A flurry of upgrades is coming to the AI helper, including the ability to choose from six different voices and, in the future, even a John Legend voice pack. New features let Assistant respond to natural conversations and parse complex multi-step queries. On phones, the app will be able to show you an overview snapshot of your day. Smaller upgrades are also on the way, and third-party smart devices with screens will start rolling out with Assistant in July. For further details, go to page 28.
Speaking of phones, Google Assistant will even be able to call local businesses to schedule reservations for you, conducting complex conversations in real time using Google’s AI smarts and new voices. The machine sounded eerily human in an on-stage demonstration, complete with ummms and ahhhs in the middle of sentences. The recipients seemingly had no idea they were conversing with a robot.
Android P beta
Google didn’t reveal Android P’s final name at I/O, but it did launch the next-gen Android OS in public beta form. A developer preview arrived in March, but the beta adds the Android P features revealed at I/O 2018. Android P is shaping up to be a substantial update to Google’s smartphone operating system, with new AI-powered features, a major navigation change, and a suite of tools aimed at curbing smartphone addiction. Catch up on all the newly announced features in our hands-on on page 12.
Google Maps
Continuing the theme of the day, Google Maps is getting an overhaul that uses machine learning to infuse your experience with personalized recommendations. A redesigned Explore tab and new For You tab will highlight local events and restaurants, drawing not only from physical locations but also from what you’ve liked in the past and trending activities in the area. This summer, Google Assistant will come to Maps as well. For details see page 25.
Google also showed off an ambitious future for walking directions in Maps. Tapping into computer vision and machine learning, Maps can create an augmented-reality Street View that overlays directions and business details on your screen in real time. Wild stuff.
Google News
Even Google News is getting in on the personalization action, with an overhauled app and web presence that make it easier to find the news that matters to you, from sources you trust. The personalization is most evident in the ‘For You’ tab that appears when you open the app, but Google’s AI touches every aspect of the service. That includes a ‘Full Coverage’ section, which aims to give you a cohesive, broad view of any particular story by mapping out the relationships between the people, places, and things involved, then organizing them into story lines with frequently asked questions and highlighted tweets from a variety of sources. Google says Full Coverage is “by far the most powerful feature of the app”, but there’s a lot more that’s new. Read up on it all on page 43.
Google Lens
The entire point of Google Lens is to leverage the company’s strengths in machine learning and computer vision to tell you more about the world around you, and it’s about to get even more useful. A new smart text selection tool lets you copy and paste text captured with your camera. More useful still, selecting a text snippet brings up information about the subject. “Say you’re at a restaurant and see the name of a dish you don’t recognize – Lens will show you a picture to give you a better idea,” Google says. “This requires not just recognizing shapes of letters, but also the meaning and context behind the words.” A new style match feature, meanwhile, can show you information about outfits or home décor you like, as well as products with a similar style.
But perhaps most significantly, Lens is being freed from the shackles of Photos and Assistant. Google’s technology will now come baked directly into the Pixel’s camera app, and cameras in (unspecified) devices by LG, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, and Asus.
For further details, go to page 38.
Linux on Chromebooks
It didn’t make the I/O main stage, but in a follow-up post, Google revealed that Chromebooks are getting Linux support to help developers code on the browser-based laptops. A preview will be available for the Pixelbook soon.
According to Google: “Support for Linux will enable you to create, test and run Android and web apps… Run popular editors, code in your favourite language and launch projects to Google Cloud with the command line. Everything works directly on a Chromebook.
Linux runs inside a virtual machine that was designed from scratch for Chromebooks. That means it starts in seconds and integrates completely with Chromebook features. Linux apps can start with a click of an icon, windows can be moved around, and files can be opened directly from apps.”
Waymo’s self-driving cars will take passengers for real
Google’s Waymo self-driving car company sought to show its safer side at the keynote. No doubt its rival Uber’s self-driving technology failure, which led to the death of a pedestrian in Tempe, Arizona, in March, was top of mind.
CEO John Krafcik said Waymo has used Google’s deep neural networks to reduce its pedestrian detection error rate a hundredfold. That sounds great, though digging into the numbers reveals that the error rate started at 1 in 4, meaning it has improved to about 1 in 400. We’ll see how those numbers work out in real life when the company starts a driverless transportation service in another Arizona city, Phoenix, later this year.