Machine Learning in the Attention Economy

The Insider

Early in 2017, the School of Media & Public Affairs at George Washington University (GWU) conducted a study on the rise of “attention metrics” in publishing and their use in both editorial and advertising.

Attention metrics, according to the study, “refers to measures of website visitors’ engaged time, determined by concrete evidence of their presence on a page, such as cursor movement, keystrokes, and scrolling.” Two things struck me when reading this paper:

1. Attention metrics were limited to measuring websites only.

2. The winners in the Attention Economy were predicted to be top-tier publishers (mainstream and digital) who already have reach and resources. Smaller and local publishers were not seen as likely candidates for this type of technology due to their lack of both. So where does that leave news apps on mobile, which are on the rise, and the majority of newspaper and magazine publishers around the world?
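The study's definition suggests a simple way to compute engaged time: count only the gaps between activity events (cursor moves, keystrokes, scrolls) that fall under an idle threshold. Here is a minimal Python sketch; the event format and the five-second timeout are my own illustrative assumptions, not the study's.

```python
from typing import Iterable

IDLE_TIMEOUT = 5.0  # assumed: seconds of inactivity before we stop counting


def engaged_time(event_timestamps: Iterable[float],
                 idle_timeout: float = IDLE_TIMEOUT) -> float:
    """Sum the time a visitor is actively present on a page.

    Each timestamp marks concrete evidence of presence (a cursor movement,
    keystroke, or scroll). Gaps longer than `idle_timeout` are treated as
    the reader having drifted away and are not counted.
    """
    ts = sorted(event_timestamps)
    total = 0.0
    for prev, curr in zip(ts, ts[1:]):
        gap = curr - prev
        if gap <= idle_timeout:
            total += gap
    return total


# A visitor interacts steadily for 4 seconds, leaves, then returns briefly:
events = [0.0, 1.0, 2.0, 4.0, 60.0, 61.0]
print(engaged_time(events))  # 5.0 (the 56-second gap is ignored)
```

The point of the idle threshold is exactly the study's point: raw time-on-page overstates attention, while event-bounded intervals approximate actual engagement.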

Now, I’m not saying I’m against attention metrics at all, having written about “smart data” and the power of behavioral analytics many times. I just believe they need to be inclusive in terms of platforms and publishers, and they need to work in real time to ensure that the metrics are being used for the right reasons — to create a more engaging experience for readers.

Attention Economy — the global digital currency

We live in a world that bombards us with content every minute of every day. There are far too many things competing for our attention (news, social media, advertising, entertainment, etc.), which require us to be more and more diligent in separating the wheat from the chaff; but we can’t do it alone.

That’s why platforms that truly serve the needs of users, by using technology to help turn content chaos into content control, will win in the battle for the big bucks.

Recently the CEO of Netflix said its biggest competitor wasn’t HBO or Amazon, as one might expect; it was sleep. This is an excellent example of a company that understands its customers and focuses on them rather than on real or imagined corporate challengers. It’s no wonder Netflix has over 100 million subscribers worldwide.

I just wish more publishers would follow suit. Sadly, too many of them still view readers in a negative light because they’re not willing to pay. So they spend too much time and money worrying about Facebook, Google, and other platforms stealing money from their pockets — money to which they feel entitled despite how they’ve treated their audiences over the past two decades with rubbish-ridden websites and quality-deprived content.

Attention is the most valuable commodity on the planet for consumer-centric companies — a global currency that most publishers don’t pay enough “attention” to.

“Optimizing a site to attract and keep a visitor’s attention requires more than measurement. It takes expertise and resources — human, technological, and financial — that most news publishers simply don’t have. Tech giants, however, have created a virtuous cycle of measuring attention, analyzing the massive data intake, and using it to optimize their sites, thereby getting even more attention.”

Matthew Hindman, Associate Professor, School of Media and Public Affairs, George Washington University

Hindman is right. It takes a lot of technological and behavioral-analytics expertise to build a platform that can attract and retain a user’s attention and keep them coming back for more. It takes Artificial Intelligence (AI) in the form of unsupervised machine learning algorithms to review massive amounts of data and determine the optimal content to present to a reader to maximize engagement — when, where, and how it should be presented.

Human complexities and machine learning

In March 2017, Netflix announced that it would replace its five-star review system with a binary “thumbs up/thumbs down” ranking. On the surface, one would question how a binary rating could possibly be better than a five-choice option.

There are two reasons. Through its massive amount of user data, Netflix found that viewers…

1. Tend to volunteer 200% more ratings when given the choice of thumbs up/down versus five stars

2. Often rank more respected content (e.g. a notable documentary) with five stars and more frivolous content (e.g. a comedy) with a single star, despite being far more likely to actually watch the comedy

The first discovery probably comes as no surprise, since a less complex choice is easier for viewers to make. And since more ratings means more data for the recommendation engine, it’s natural that Netflix would switch to a binary rating system to gather more information.

The second finding, however, is a bit more complex and can have fundamental implications for the design of learning machines. This conflicting behavior is a good example of “cognitive dissonance” in action. Let me explain…

Back in 1959, what is now recognized as the classic experiment on cognitive dissonance was reported in the Journal of Abnormal and Social Psychology by researchers Leon Festinger and James M. Carlsmith.

Undergraduate students of Introductory Psychology at Stanford University were asked to perform a boring task and then tell another subject that the task was exciting.

Half of the subjects were paid $20 to do that — the other half only $1. Behaviorists theorized that the $20 recipients would like the task more because of the monetary value they would associate with their role in “selling” it.

Cognitive dissonance theorists believed that those paid only $1 would feel the most inner conflict between the belief that they were not evil or stupid and the action of carrying out a boring task and then lying to another person — all for only a dollar.

So they predicted that those in the $1 group would be more motivated to resolve their dissonance by re-conceptualizing or rationalizing their actions. They would form the belief that the boring task was, in fact, fun — which is exactly what happened.

In the case of Netflix viewers, cognitive dissonance occurs when a viewer experiences conflict between what they thought they should watch (the documentary) and what they actually watched (a frivolous comedy).

If one were to subscribe to Festinger’s theory, these Netflix viewers would resort to “selective exposure.”

Selective exposure refers to an individual’s tendency to seek out information that reinforces their opinions, while avoiding content that conflicts with them.

According to Festinger, when people encounter ideas that don’t map to their pre-existing beliefs, selective exposure helps produce harmony between them.

So in the case of the $1 subjects, they would search for ways to support the belief that the boring task was actually fun.

In the case of Netflix viewers, what would they do? Would they force themselves to watch more documentaries to support their ranking? Would they ask people they admire and trust (e.g. friends, family, or influencers on social media) for their opinions about the documentary to justify their rating?

There’s no data to provide those insights, but it is fascinating stuff, don’t you think? That said, regardless of what those viewers did, there’s little doubt that human beings often obfuscate their true preferences, even from themselves.

This makes the design of a learning machine even more challenging, because it needs to closely examine and analyze what consumers do (e.g. give a documentary five stars without watching it) versus what they prefer to do (e.g. watch Dumb and Dumber, but still give it only one star).
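One hedged way a learning machine might reconcile the two signals is to score each title by blending the stated rating with the revealed behavior, weighting behavior more heavily. This is a toy Python sketch, not Netflix's actual method; the function name, the 0-to-1 normalization, and the 0.7 weight are all illustrative assumptions.

```python
def revealed_preference(explicit_rating: float, watch_fraction: float,
                        behavior_weight: float = 0.7) -> float:
    """Blend what a viewer *says* (a rating normalized to 0-1) with what
    they *do* (fraction of the title actually watched).

    Setting behavior_weight above 0.5 encodes the assumption that actions
    are a more honest signal than ratings distorted by cognitive dissonance.
    """
    return (behavior_weight * watch_fraction
            + (1.0 - behavior_weight) * explicit_rating)


# The documentary: five stars (1.0), but only 10% of it watched.
print(round(revealed_preference(1.0, 0.1), 2))  # 0.37
# The comedy: one star (0.0), but watched to the end.
print(round(revealed_preference(0.0, 1.0), 2))  # 0.7
```

Under these assumed weights, the comedy the viewer actually finished outranks the documentary they merely rated, which is exactly the correction the dissonant ratings call for.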

There are many scholarly articles on why this behavioral dichotomy exists, and I invite you to check them out if you’re suffering from insomnia. The opinions of these experts may conflict with every theory before them, but that doesn’t change the fact that all humans suffer from intrapersonal conflict that has yet to be fully understood.

So, in this increasingly algorithm-driven world, we must be careful not to treat all data as “truth”, as our friend Esther Dyson warns…

“We need to be very aware of the influence of the data used by the algorithms in our lives. People make the decisions; AI just makes us more efficient in reapplying the criteria and biases of people’s decisions in new but similar situations. The most important thing to consider is that much of what happens is under our control, but ‘our’ is an ambiguous concept.”

She knows whereof she speaks, having worked in AI since the 1980s. Algorithms are driven by data that comes from millions of people — flawed human beings who are not as predictable, consistent, or even reliable as we would like. We’ve seen this far too often lately in the rampant rise of fake news and the editors and duped readers who help spread misinformation and propaganda. It makes it hard to trust anyone these days.

Trust in Algorithms

When I first started researching this article, I couldn’t help but recall a Reuters Institute report, Brand and trust in a fragmented news environment, which discovered that most people (more so the younger or tech-savvy ones) prefer algorithms over human editors when it comes to news curation.

News consumers value the “independence of algorithms” — believing them to be less biased or swayed by editorial and political agendas. They also like the fact that content is selected based on their personal reading habits.

But, and this is quite interesting, some participants — particularly those using news aggregators — thought that algorithms helped introduce them to a broader range of content and brands based on their interests and preferences.

“It gets a variety of things like I’m interested in certain topics that I probably wouldn’t find or I’d have to search for it myself so it’s like a one stop shop of things that interest me.” (20-34, US), Brand and trust in a fragmented news environment

But as trusted as algorithms are with readers, they are not all created equal. All of us have seen evidence of that, with echo chambers being spawned out of the implementation of inferior recommendation algorithms.

So consumers trust algorithms, but what do publishers think of them?

According to a recent survey of 100 newspaper publishers, increasing digital audiences was not only their biggest challenge, it was a higher priority than both subscription and advertising revenues.

However, despite users preferring algorithms to curate news for them, fewer than 33% of publishers were using the content aggregators most popular with readers. That seems odd, given they are trying to increase reach. How can they walk away from two billion Facebook users? Here again, we see an example of cognitive dissonance at play. The majority of publishers say they want to increase audience, but then refuse to work with those offering access to powerful recommendation algorithms that could help them reach millions.

This is something I’ve never understood, having been a strong advocate for “be everywhere your readers are”, even when that everywhere is on one of our so-called competitors’ platforms.

But, like Netflix, PressReader doesn’t consider Magzter, Texture, Readly, or Blendle as competitors. It’s actually the publishers who don’t recognize that they need help from experts outside their walled gardens. They still believe they can make it on their own.

They don’t see value in technology companies, even though those companies have the talent and expertise in AI, learning machines, and algorithms to deliver not just what readers think they want, but a broader range of relevant content that, based on behavioral analytics, will attract and engage them longer.

So how are these publishers bringing harmony to the inner conflict created by wanting a bigger audience but refusing help from those who know how to deliver it?

I guess you’d have to ask them, but my first thought would be that they are selectively exposing themselves only to things and beliefs that live within their tightly protected publishing ecosystem, where outsiders are not welcome.

I may be wrong, but whatever they’re doing, it’s painfully obvious that they’re not helping themselves, their readers, or, in most cases, their shareholders.

Stanford psychologist Stephen Kull observed in a study of nuclear planners that “the instinct to survive is strong, but the instinct to alleviate fear is stronger.”

Is fear of aggregation, platforms, algorithms, and tech companies what’s paralyzing publishers?

Are they so entrenched in their selective behaviors that they fail to see that they’re moving away from survival rather than towards it?

Is there any hope? Absolutely!

The passion for quality content has never been stronger, and I believe that, as an industry, we can work together for a better future for all. It all starts with trust and the willingness to collaborate. Publishers may not trust anyone but themselves, but if we, as technology and platform providers, do, we just might help pull our industry out of the grave it’s digging for itself.

If you think it’s worth the effort like I do, let’s talk!

“It’s not about denial. I’m just very selective about the reality I accept.” — Bill Watterson, American cartoonist
