Google Lens: Six things we can’t wait to try out
Google’s I/O was heavy on AI and machine learning, and the best intersection of the two is Google Lens, writes MICHAEL SIMON
Google Lens looks fresh and exciting, though we’ve seen hints of this technology before. Google Goggles might not have been mentioned during the I/O keynote, but its spirit was certainly present. Released seven years ago, when AI and AR were still in their infancy, Goggles was an app that let you identify places, scan barcodes, and look up prices by snapping a photo of whatever you were looking at.
Google Lens, which was announced in the very first minutes of I/O, is essentially a supercharged version of Google Goggles. Built into Assistant and Photos, the new machine-learning feature promises to decode the world around us, using Google’s AR and neural networks to scan images and pull out relevant bits of data. Here are the six things we’re most excited to try out.
Google Translate is already one of our go-to tools when trying to read text in a different language, but Google Lens takes it out of the Translate app and puts it right into Photos. To translate something, you need only snap a picture of it and call on Google Lens’ smarts. This approach makes using Translate’s technology even simpler, and we’ll be much more likely to remember to use it in a pinch.
It’s not hard to find interesting spots when visiting a new city, but with Google Lens, discovering hidden gems in our own town becomes a lot easier. Just point your camera at a place you’re interested in, and Google Lens will scan it. Then, in real time as you look through the viewfinder, you’ll be able to see what it is, what it sells, and what people think about it. The process is far simpler than getting the name, typing it into Google, and scanning through the results.
This is where you can see just how much Google Lens has improved on Google Goggles. Snap a picture of just about anything, and Lens will tell you everything you need to know about it – during the keynote, Sundar Pichai demonstrated the feature by identifying a common lily. We’ll need to try it ourselves to confirm its accuracy, but our phones could become the greatest encyclopedia ever, teaching us about art, architecture, and nature without requiring a dive down a search hole.
We’ve all been in the situation where we’re at a friend’s house and need to connect to their router, except they can’t remember the password. So we crawl under a desk, flip over the router to find the label, type in each character, and, 10 minutes later, finally connect. Google Lens does all that work for you: just snap a picture of the password label on the router, and it will connect automatically.
Buying tickets to shows and movies on our phones is already pretty effortless, but Google Lens wants to make it a complete breeze. Walk down the street, spot a marquee advertising a band that’s playing, and snap a picture: Google Lens will spring to life. You can listen to sample songs, add the date to your calendar, and, of course, buy tickets. Presumably, it will work just as well with movies and other events – we can’t wait to take a photo of a movie poster and see show times and trailers.
The keynote didn’t mention anything specific about buying stuff using Google Lens, but we can’t help wondering about its potential as a shopping assistant. We’ve already seen something similar with Bixby on the Galaxy S8, but outside of books it’s not very helpful. If Google can perfect the system so that it brings up shopping results for anything we scan, that could be the killer use case for Google Lens.