GOOGLE, YOU AUTOCOMPLETE ME

Fast Company - Front Page - By Mark Wilson. Illustration by Delcan & Co.

I don’t like to say “hi.” I’m a “hey” person. But more and more, I find myself greeting friends and colleagues with a “hi” on email. Why? Because Google suggests that I do. In May, Gmail introduced a new “Smart Compose” feature that uses autocomplete technology to predict my next words in gray. I accept them simply by hitting TAB.

Words matter to me. I am a professional writer, after all. But then Gmail made it tantalizingly easy to say “hi” instead of “hey,” and Google’s prediction, albeit wrong at first, became self-fulfilling. It wasn’t until two weeks after I began using Smart Compose that I realized I had handed over a small part of my identity to an algorithm.

This sort of predictive technology is everywhere: Amazon suggests products aligned with your shopping history. Apple provides a special menu for the iOS apps you’re most likely to open next. Spotify tailors playlists to your musical tastes. And Facebook literally chooses the stories from friends you should see first, last, or never—then notifies you 365 days a year that it’s time to say “happy birthday” to someone out there.

Google, however, is the torchbearer when it comes to knowing what we want. It was personalizing ads when Zuckerberg was still in middle school, and auto-completing our searches before anyone attempted to sound out the acronym GDPR. At the Google I/O developer conference this past May, held in the company’s Mountain View, California, hometown, the search giant introduced a suite of new features that further eases us into autopilot. The Android P operating system, which began rolling out in August, doesn’t just suggest the app you might want to open next, such as Phone or Runkeeper; it offers the next action you might take, such as “call your mom” or “go for a run,” based on your previous usage. (Since I/O, Google has shared another update, to Google Maps, that offers personalized ratings for restaurants and bars, predicting how much you’ll like each place.)

Then there’s Duplex, a forthcoming voice assistant that, in Google’s demos, was able to call a restaurant and negotiate a table with a humanlike personality that served as a surrogate for the user’s own. Its vocal fry and frequent “umms” were so uncanny that many in the media accused it of being faked, though when Fast Company recently tried the service, it seemed to work as advertised.

Duplex’s debut in May was met with applause by the company’s fanboy developers. Soon after, outside the conference’s cocoon, the implications began sinking in. These sorts of advancements may seem thrilling—or at least benignly helpful—at first. But what do they all add up to? At what point does Google’s power of suggestion grow so strong that it’s not about how well its services anticipate what we want, but how much we’ve internalized their recommendations—and think of them as our own? Most of the conversation around artificial intelligence today is focused on what happens when robots think like humans. Perhaps we should be just as concerned about humans thinking like robots.

“The irony of the digital age is that it’s caused us to reflect on what it is to be human,” says tech ethicist David Polgar, founder of the All Tech Is Human initiative, which aims to better align technology with our own interests. “A lot of this predictive analytics is getting at the heart of whether or not we have free will: Do I choose my next step, or does Google? And if it can predict my next step, then what does that say about me?”

Polgar is currently collaborating on research with Indiana University that asks if internet communications are botifying human behavior. In the age of Twitter, chatbots, and autocomplete, he’s worried that “our online conversations are becoming so diluted that it is difficult to determine if [a message has been] written by a human or a bot.” Even more troubling: We may no longer care about the distinction, and our vocabulary and conversation quality are suffering as a result.

Ceding a “hey” for a “hi,” of course, is only a minor loss of individuality. My emails weren’t all that unique anyway, according to Lauren Squires, a linguist and associate professor in the English department at Ohio State University. “So . . . [Google] is going to create these new set phrases for us, and we’ll be locked in and never stray from them?” she asks with a laugh. “But we kind of do that anyway! I don’t want to underplay the creativity that goes into language, but a lot is dependent on scripts.” She points to my rote greeting to her at the start of our interview as an example. Squires herself uses Google’s AI-driven quick replies when emailing on her phone. “They’re not giving us new patterns; they’re encoding patterns that already exist,” she says. As for the subtle difference between “hi” and “hey,” she thinks synonyms are overrated. “Whether you have one word for chew or two words—chew and masticate—I don’t know if it’s better to have two words for that.” To Squires, much of our language is about function, not flourish.

Word choice unto itself may not always matter. The larger concern is how rapidly a user might alter his own behavior simply because of this single bit of Google’s user interface. “People inside larger tech companies like to say, ‘Online behavior is a mirror,’ ” says Polgar. “I disagree. The very fact that [these companies are] altering your environment and sending certain cues is inherently going to alter your behavior.”


Looking at a single weekend of emails and notifications on my phone, it’s almost nauseating to count all the apps telling me what to do. Twitter alerts me to people I ought to follow. Facebook urges me to read every single comment on a graduation-day post from an acquaintance I should probably just unfriend. LinkedIn spots a work anniversary and nudges me to “congratulate” my contact. Google wants me to take a photo and leave a review of a Starbucks, and Groupon tells me to redeem my deal for a two-for-one Taco in a Bag before it expires. This is what Tristan Harris, the former design ethicist at Google who cofounded the Center for Humane Technology, has described as the co-opting of our minds. “By shaping the menus we pick from,” he wrote in a 2016 essay, “technology hijacks the way we perceive our choices and replaces them with new ones.”

I’d like to believe that I’m immune to these messages—we all would—but our code is malleable. In 2014, Michigan State doctoral student Mijung Kim created a weather forecast app called Weather Story. She wanted to see if, over time, subjects who received push notifications from the app began opening it more often. Unsurprisingly, they did. But Kim also discovered that they began opening the app with increasing speed. It was as if their reflexes were being optimized to respond to the app.

Silicon Valley is just beginning to acknowledge that it may be pushing engagement too far. Apple’s upcoming iOS 12 will introduce a series of tools to track and limit your app usage, and even leverage AI to mute some push notifications. Google’s Android P system offers similar features and the option to turn your screen an unappealing gray at night. Even Instagram has recognized a phenomenon that it’s dubbed “zombie scrolling” and has rolled out a new interface to help users break the habit. Advertisers, after all, want their users engaged.

Relying on tech companies to self-regulate will only get us so far. Google, for one, is all too aware of how it can affect user behaviors, at least according to a video it produced in 2016, which leaked in May via The Verge. Narrated by Nick Foster, the head of design at X, Alphabet’s moonshot factory, the “Selfish Ledger” is a thought experiment inspired by epigenetics, a pre-Darwinian understanding of genetics. Epigenetics proposed that an organism’s experiences accrue over time into a “ledger” of ingrained behaviors that is passed along to offspring. Picture it as DNA built from experiences.

In the era of big data, Foster imagines Google using digital epigenetics to cause the next societal revolution. “As gene sequencing yields a comprehensive map of human biology, researchers are increasingly able to target parts of the sequence, and modify them, in order to achieve a desired result,” he says. “As patterns begin to emerge in [users’] behavioral sequence, they too may be targeted. The ledger could be given a focus, shifting it from a system that not only tracks our behavior but offers direction toward a desired result.”

That “desired result” would be of Google’s choosing. In a grocery app, Foster explains, a user might be pushed to local bananas with a bright red notification—because Google values sustainability. Eventually, Foster suggests, the multigenerational data that Google collects could give it a “species-level understanding” to tackle societal topics like depression, health, and anxiety. What he doesn’t say: For Google to solve those problems, you’d have to hand over not just your data, but also your agency. The company has since distanced itself from the video, releasing a statement saying that the “Selfish Ledger” was created to “explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.” But the potential for Google and other tech companies, such as Facebook and Amazon, to exercise this kind of power remains.

To prove his own existence, Descartes came up with the simple rule: “I think, therefore I am.” But if technology has divorced thought from action and turned consciousness into reflex, are we truly alive? I side with Descartes. The answer is “no.”

That is, unless Google suggests otherwise.
