Facebook Is Not Your Friend

The Frightening Reality of What Facebook Is Doing to Us

As news surfaced in March 2018 about Cambridge Analytica's abuse of private data provided to it by Facebook, a much more serious issue was brewing in the background regarding the social media giant. The problem is not how third parties use Facebook's data but how Facebook plans to use it for its own purposes, powered by artificial intelligence.

During the Cambridge Analytica scandal, Facebook's users learned with considerable shock precisely how much data Facebook is gathering about them. Under CEO Mark Zuckerberg's direction, almost every move an individual makes on a smartphone or computer with Facebook installed and operating is being captured and logged.

It has long been understood that just using Facebook involves giving up some of your privacy. The app openly asks for access to all of your contact information to allow it to operate – something it says it is doing to help you connect to potential friends. It also asks for access to your browsing history, phone call information and more. It is so "boilerplate" in its requests that most of us do not blink when we click "okay" because many other apps ask for similar things.

As the details of the Cambridge Analytica mess became better known, so, too, did the details of what information Facebook is storing about its users. This includes every message sent with Facebook Messenger, details of every phone call made and every text message – retained for at least a year – and, of course, browsing history. All of that data is also tracked with respect to time, sequence, location and duration where appropriate. Instagram and the stand-alone version of Facebook Messenger are also tracked in a similar manner.

What U.K.-based Cambridge Analytica did first was create an app called This Is Your Digital Life, a product developed by researcher Aleksandr Kogan. Two hundred seventy thousand people were paid to use the app. Then, using Facebook data connected with the app (with Facebook's explicit permission and knowledge), Cambridge Analytica pulled in data on that group of 270,000 and their Facebook friends. The total number of users Cambridge Analytica pulled data from is estimated at 87 million.

Cambridge Analytica then used that data to create targeted political advertising during the 2016 U.S. election cycle, apparently including the presidential campaign.

For its part, Facebook called Cambridge Analytica's misuse of the data a "breach of trust" of the original contract under which it had provided data access for Kogan's app. Zuckerberg also told European Union (EU) lawmakers that Facebook would refuse to compensate users for the misuse of that data. That answer came on May 24 in written responses to questions Zuckerberg did not have time to address while he was present with the lawmakers earlier that week. Zuckerberg apparently justified the refusal by stating that no bank account or credit card details had been shared as part of the breach and, further, that no EU user data had been compromised in any way. It is a cold, unfeeling answer that only an accountant could love. It also ignores that Facebook ethically, and probably legally, bears liability for giving the information away in the first place without ensuring that it would be safeguarded.

This covers only the issue of customer information leaks and the misuse of data by outsiders. What no one was talking about in any of these cases is what Facebook itself is going to do with its own data.

With 2.19 billion active Facebook users worldwide as of the first quarter of 2018, there is an enormous treasure chest of data available for the company to run through. Since an estimated 98% of its revenue comes from advertising, we already know that Facebook will use that data to generate the highest possible revenue by creating advertising precisely targeted at those users.

How Facebook is going to use that data is where things get a little frightening.

It is established practice that Facebook already uses the data in one simple way: by sending you advertisements for the things you are already looking at. Keep in mind that with access to your browsing data, Facebook already knows that you are checking out certain sites. Those of us with Facebook accounts have probably been shocked more than once to see an advertisement for something we had just been looking at in a separate browser window. For example, if you happen to have opened a website for a luxury vacation resort in one window, do not be surprised if you soon see an advertisement for that same resort in your Facebook feed.

A second and somewhat similar way Facebook will use this data is by advertising alternatives to what you were looking at or browsing for. A classic example: if you had been looking at one brand of soft drink a lot, Facebook might place an ad for a different soft drink in your feed.

A third approach is old technology but is still a little disturbing when one realizes it is happening. It involves stitching together a number of steps into simple predictive behavior modeling. Because Facebook has access to so much information – who you are, where you live, demographics that can predict your buying capability and habits, and where you have been browsing – the company can make guesses and target advertising to what you might be looking for. Consider an example of the practice, which others have used but which no one would acknowledge is happening now at Facebook. If Facebook sees you searching for the Blue Book price of a certain make, model and year of car, then looking at multiple new-car sites, then checking out financing options on still another site, it is an easy logical jump that you may be considering buying a new car and will need financing. Facebook could then serve you a targeted ad for a specific new car model based on everything else it knows about you. It knows whether you are single or married, because many users give that information away. It knows the locations you visit from geotagging. It often knows your age and can estimate your income from your country, region and other factors it has already harvested. It can then make one giant leap forward and guess, with very high accuracy, what kinds of vehicles you might want.
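
As a rough illustration of the signal-stitching described above, here is a minimal Python sketch. The event names, weights and threshold are all invented for illustration; Facebook has not published how it combines such signals.

```python
# Hypothetical sketch of simple predictive behavior modeling from
# browsing signals. All names and weights are invented for illustration.

# Browsing events observed for one user (hypothetical).
events = [
    "searched_blue_book_price",
    "visited_new_car_site",
    "visited_new_car_site",
    "visited_financing_site",
]

# Each signal contributes a weight toward a "shopping for a car" guess.
weights = {
    "searched_blue_book_price": 0.4,
    "visited_new_car_site": 0.2,
    "visited_financing_site": 0.3,
}

score = sum(weights.get(e, 0.0) for e in events)

# If enough signals stack up, target the user with a car ad matched to
# demographics (age, location, relationship status, estimated income).
if score >= 0.8:
    print("Target user with new-car ad; score =", round(score, 2))
```

A production system would learn the weights from historical data rather than hard-coding them, but the logical jump is the same.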

This is just the beginning. There is a closely related technology that Facebook has developed – something the company calls loyalty prediction. It involves using all of that information about you, including the demographics and search information discussed earlier, to estimate when you are likely to stop being loyal to a particular brand. Put another way, it predicts when you are likely to be most vulnerable to advertisements that might nudge you to buy something different.
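
Facebook has not disclosed how loyalty prediction works internally, but the behavior described is what a simple churn model produces: a probability that a user is about to defect from a brand. A minimal sketch, with invented feature names and coefficients:

```python
import math

# Hypothetical "loyalty prediction" as a churn model. This is not
# Facebook's method, just the general idea: combine behavioral features
# into a probability that a user is open to a rival brand's ad.

def churn_probability(features, coefficients, intercept=-2.0):
    """Logistic model: higher output = more open to switching brands."""
    z = intercept + sum(coefficients[name] * value
                        for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented features for one user: fewer brand mentions, more searches
# for competitors, longer gap since the last brand-related signal.
features = {"brand_mentions_30d": 1, "competitor_searches_30d": 6,
            "days_since_brand_signal": 45}
coefficients = {"brand_mentions_30d": -0.5, "competitor_searches_30d": 0.4,
                "days_since_brand_signal": 0.02}

p = churn_probability(features, coefficients)
if p > 0.5:
    print(f"User is ripe for a competitor's ad (p = {p:.2f})")
```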

Facebook and other companies apparently attempted at one time to track information about the things each of us purchased online. They then shared that information with our Facebook friends, saying that this "friend" of yours had purchased such-and-such an item and maybe you might want to try it yourself. That approach backfired as being too invasive, even in an era when we all regularly give away our privacy in return for access to tools and technologies such as those Facebook offers.

The loyalty prediction model could make use of the same data, since Facebook has access not only to your own data but also to the data of your friends. The company likely "knows" – with a high degree of accuracy – what your closest friends are purchasing. So it can use that information to guess what you personally are most susceptible to purchasing, based on your own data as well as the data of those in your group of influencers.

Loyalty prediction is also part of an even more important advertising-related technology at Facebook, one called FBLearner Flow. The way it works starts with three types of information Facebook has on each user. The first is what users reveal about themselves through their actions on the site and in connection with other actions they take (such as allowing Facebook to track their browsing history). The second comes from classifying each user according to an estimated 52,000+ unique attributes Facebook assigns as "categories of interest." Examples might include information that suggests where one shops regularly or what household income one has. The third comes from purchasing data from third-party data brokers, presumably for the individuals who are users on the site.
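
To make those three sources concrete, here is a hypothetical sketch of merging them into a single targetable profile. Every field name and value is invented; the real attribute set is not public.

```python
# Hypothetical three-source profile: (1) what users reveal through their
# own actions, (2) interest categories Facebook assigns, (3) records
# bought from third-party data brokers. All fields are invented.

declared = {"age": 34, "relationship": "married", "city": "Denver"}
assigned_categories = {"frequent_travelers", "luxury_goods",
                       "likely_household_income_100k_plus"}
broker_record = {"recent_car_loan_inquiry": True, "homeowner": True}

def build_profile(declared, categories, broker):
    """Merge the three sources into one targetable ad profile."""
    profile = dict(declared)
    profile["interest_categories"] = sorted(categories)
    profile.update(broker)
    return profile

print(build_profile(declared, assigned_categories, broker_record))
```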

According to a study by ProPublica, which was also the source of the information about the 52,000 unique attributes Facebook tracks on its users, Facebook provides potential ad buyers with a list of approximately 29,000 categories of information. Most of those come from the attributes list noted earlier. An estimated 600 of the categories were purchased from third parties who hold data on us that Facebook can merge with its own information. Advertisers can then use those categories to decide which groups of potential purchasers they want to reach with their ads.

This helps produce many of those somewhat creepy advertisements some of us see in our Facebook feeds that seem way too "on target." And even this is just the beginning.

With FBLearner Flow, Facebook uses a form of artificial intelligence (AI): it feeds everything it knows about each of its users, both individually and collectively, into a machine-learning simulation program. That program uses the data to predict a variety of possible consumer behaviors, based on what it has observed happening in the past and what it calculates can be influenced in the future. It then sorts those outcomes into collective groups of people who can logically be expected to respond in near-identical ways. Finally, Facebook markets those groups to advertisers as targets for specific advertising designed for those markets.
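
FBLearner Flow's internals are not public, but the flow described – score each user's predicted response, then bucket near-identical responders into sellable groups – can be sketched in a few lines. The scores and tier thresholds here are invented:

```python
# Hypothetical predict-and-group pipeline: score each user's likelihood
# of responding to an ad, then bucket users predicted to respond alike
# into audience groups an advertiser could buy. All values invented.

users = {
    "user_a": 0.91,  # predicted probability of responding to a given ad
    "user_b": 0.88,
    "user_c": 0.35,
    "user_d": 0.62,
}

def segment(scores):
    """Sort predicted responders into coarse audience tiers."""
    groups = {"hot": [], "warm": [], "cold": []}
    for user, p in scores.items():
        tier = "hot" if p >= 0.8 else "warm" if p >= 0.5 else "cold"
        groups[tier].append(user)
    return groups

# An advertiser would then buy access to, say, the "hot" group.
print(segment(users))
```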

If that all sounds nearly impossible to imagine, remember again the sheer magnitude of the database of behaviors and people that Facebook has: 2.19 billion active users multiplied by the 52,000+ unique attributes it categorizes – on the order of 100 trillion user-attribute data points – plus outside data it buys on its users to help fill in important details. With such large numbers, it is relatively straightforward to come up with high-likelihood targets for almost any kind of advertiser.

FBLearner Flow does not work in a vacuum, of course. It is an AI-driven machine-learning program: it tracks the success of specific types of advertisements with specific groups so that it can retune where ads are placed for the highest return to the advertisers. The program also actively adjusts and retunes its data inputs, so the groups being targeted are always up to date.
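
That kind of feedback-driven retuning resembles a standard explore/exploit loop. The following is a minimal epsilon-greedy sketch with invented click rates, not a description of Facebook's actual system; it shows placement shifting toward whichever ad variant performs best as click feedback accumulates.

```python
import random

# Epsilon-greedy sketch of feedback-driven ad retuning. The "true" click
# rates are invented and unknown to the loop, which must discover them.

true_click_rates = {"ad_variant_1": 0.02, "ad_variant_2": 0.05}
shown = {ad: 0 for ad in true_click_rates}
clicks = {ad: 0 for ad in true_click_rates}

random.seed(1)
for _ in range(10_000):
    if random.random() < 0.1:                    # explore occasionally
        ad = random.choice(list(true_click_rates))
    else:                                        # exploit the best so far
        ad = max(shown, key=lambda a: clicks[a] / shown[a] if shown[a] else 0)
    shown[ad] += 1
    clicks[ad] += random.random() < true_click_rates[ad]

# Observed click rate per variant; spend has drifted to the stronger ad.
print({ad: round(clicks[ad] / shown[ad], 3) if shown[ad] else 0.0
       for ad in shown})
```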

Where this all goes beyond creepy and becomes almost unimaginable is when one also considers what computer systems call the "feedback loop" for these advertisements. What if Facebook were to "promise" a certain level of return to an advertiser? That is a reasonable thing to expect it could do, if not now then definitely in the future. Then imagine that a given advertising campaign does not meet its targets. Facebook could jump in and, perhaps through its own in-house digital advertising studios, create a customized campaign to nudge just the right number of potential buyers over the edge and bring things up to target. It could also test those campaigns in its simulations in advance and predict what would happen with staggering accuracy.

The combination of the surgically precise influencing tools Facebook has, plus the sheer volume of its users and its dominant position in the advertising business, is only going to make the company more powerful in the future. The precision with which Facebook can do this also means it could become less costly to run any kind of campaign designed to influence human behavior. It is entirely logical to assume that we could soon see the following come to pass, using Facebook as the means to make it all happen:

In elections:

• Plan precision-targeted campaigns (conducted totally within U.S. borders, so they are all legal) that are bought and paid for by major corporations and activist groups to get candidates who are "on the edge" into office. Convince groups who are critically needed to elect certain candidates not to vote at all or, alternatively, to turn out in large numbers where they may never have shown up before.

• Use data that is at best misleading but that stops short of outright "lying" to support these advertising campaigns.

In business:

• Target start-ups with disruptive innovations so that existing companies will continue to dominate. Until now, that has not been possible with such precision and such a clear return on investment. The big money-makers can now make pushing down potential competitors through this kind of manipulation part of their regular business campaigns.

• Drive stock prices up through legal means of manipulation – well-timed advertising.

• Help drive local support for a potentially harmful business venture in a local community, again by targeting influencers in the local area with far more precision than ever before.

• Lobby potential members of Congress more precisely.

In government:

• Manipulate the population at large to support positions it may not have considered in the past.

• Actively manipulate the language the population at large uses in talking about things, so that an idea the majority might normally reject becomes easier to accept.

• Launch targeted campaigns to get the population "in line" with unpopular decisions that have already been made. If a president were to declare an unpopular war, for example, imagine the government investing a billion dollars in targeted ads and news manipulation on Facebook to turn opinion in its favor. Or imagine if the divisive move to relocate the U.S. embassy in Israel from Tel Aviv to Jerusalem, something that has been panned widely in Europe, were made the subject of a targeted U.S. influence campaign aimed at Facebook users in the EU.

Unfortunately, these are all likely scenarios that not only might happen in the future but just might already be happening now.

There are uglier things to consider that could happen just by labeling them as something other than the obvious. These could include making hate groups more likable, encouraging discrimination without having to pass a single law or getting the public to accept near-criminal actions by large corporations as not just "okay" but as something to celebrate as being in "America's best interests."

Facebook's advertisements, with their precision targeting and likely increasing ability to guarantee results using AI, could end up being one of the main ways the country gets in line with everything from consumer marketing to broad-based economic behavior and support for government actions. They could, if not regulated correctly, rule the world.

There are those who might question how something as simple as advertisements could make such a difference. Consider, then, what has already been alleged about Donald Trump's regular watching of the popular morning talk show Fox & Friends. Many media outlets have noted that whatever is discussed on that show often ends up referenced in Trump's Twitter feed, both during and immediately after the show. What is rarely noted, but is also true, is that corporate advertisers know this too. They are apparently now buying advertisements to run on that show in the Washington, D.C., market just for Donald Trump to see. It has even been alleged that some of those advertisements may have been what pushed Donald Trump over the edge to support the steep steel and aluminum import tariffs imposed recently.

If targeting just one person with the right advertisements at the right time can make such a difference, imagine what it could mean for 2.19 billion people to be targeted all at the same time – on any topic.

This is just what we know Facebook is doing now. The questions we need to be asking are: What is it doing that we don't yet know about, and what will it do in the future?

Most humans are easily influenced, and social engineering and mind control technology and techniques have evolved rapidly during the past century.

Have you ever gone into a big-box wholesale store and bought far more than you intended to, or felt that you should be really careful not to accidentally steal something? If you have, then you were consciously feeling the effects of very basic mind control technology using subliminal messaging. Most large stores legally use hidden technology to influence shoppers' behavior; it works very well on most people, and most people have no idea that it is occurring.

Vast amounts of money are being invested in developing more effective ways to influence your behavior, and much of it is being invested in digital communications and social media.

Facebook exists to make more and more money, and its primary revenue stream is from providing access to your brain and everything that can be known about you in order to manipulate your behavior.

Facebook is going to do whatever makes it the most money without its key people going to jail. Most of its advertisers are the same way. They are amoral money-making systems that push ethical and legal boundaries to make ever more money, and they do that by persuading you to buy their products and by stealing your money when they can get away with it. They also use their growing power to remove the ethical and legal boundaries that might limit their profits. They actually write most of the new laws, and the repeals of existing laws, in the United States.

But money is not the only motivator for Facebook and its clients. It is also about power – using power to gain more power – and, as is shown in our next article, Facebook is willing to align itself with some very sinister forces in order to increase its power.

No one forces us to use Facebook, and we can all live without it. We could use the time spent on Facebook to have real face-to-face relationships with real people or get more involved in our communities and create better conditions for everyone.

Image: Book Catalog, CC
