Truth and Cyber Security

Facebook against Fake News

Claudia Kitchen, Sci+Tech Writer, The McGill Daily

On September 25, Kevin Chan, Facebook Canada’s global director and head of public policy, visited McGill University’s Max Bell School of Public Policy to speak about the rise of fake news and Facebook’s fight against misinformation. “I’m attending this seminar because Facebook has filled a role in society where it has come to have as much influence as the New York Times or any journalistic publication. If you polled people, the majority would probably say they get their news from Facebook,” said U2 International Development student Brandon Heiblum.

Although this means that information is more easily accessible to the public, it has also resulted in the spread of “fake news,” or misinformation. Fake news, Chan acknowledged, is a significant issue in today’s world, and Facebook has been making efforts to address it.

Following the 2016 US presidential election, Facebook faced widespread criticism for the negative impact its platform had on election integrity. The website was a main actor in the propagation of articles containing false information.

Chan admitted that in 2016 Facebook was slow to act on this issue, but said the company wants to rectify this moving forward. He explained that the platform is meant to be a forum for sharing different voices and opinions. He quoted Mark Zuckerberg, the founder of Facebook: “I don’t want anyone to use our tools to undermine democracy, because that’s not what we stand for.”

Chan stated that his team is doing everything it can to keep Facebook safe, including using artificial intelligence to find and delete fake Facebook accounts and stop the spread of misinformation. The company has also introduced new ad transparency features and fact-checking partnerships with leading journalistic publications. “We are committed to making Facebook a force for good for democracy,” said Chan.

He opened the seminar with a discussion of Facebook’s role in a current issue: keeping provincial elections in Quebec safe from interference. The Communications Security Establishment, a Canadian government agency, had told Facebook that misinformation and account hacking were the biggest threats to election integrity. In response, Facebook created a five-part plan called the Canadian Election Integrity Initiative. First, it included a two-year program with MediaSmarts, Canada’s centre for digital and media literacy, to help Canadians learn how to detect fake news articles for themselves. Second, Facebook released its own “cyber hygiene guide” for party members and politicians to learn better cyber security practices and protect themselves against account hacking. Third, an emergency Facebook cyber hotline was created for political parties to address issues such as suspected hacks. Fourth, a cyber hygiene training program was opened to all political parties. Lastly, Facebook implemented an advertising transparency initiative that allows users to view all ads being run by a particular page whenever they see an advertisement.

Facebook has also taken additional measures to end the spread of misinformation. It has partnered with Agence France-Presse (AFP), a news agency, to hire fact checkers who review content in both French and English. Stories that AFP has flagged receive significantly fewer shares, slowing the spread of misinformation. Users are also notified that a post has been flagged as “false information” before they share it. Chan specified that Facebook has chosen to work with independent, third-party fact checkers because it believes these third parties are more qualified to identify misinformation.

A program called “Reality Check” is one result of Facebook’s partnership with MediaSmarts. The initiative releases videos and tip sheets to help users stay informed. Its most recent video, “Authenticity 101,” lists five steps people can take to make sure the content they share is accurate.

Chan said he is frequently asked whether things are getting better or worse in terms of the spread of misinformation. In response, he stated that he truly believes Facebook is doing everything it can to move in the right direction. “You can never fully solve a security problem; threats will constantly find new ways to cause harm. But our goal is to make it much harder for actors to operate across our platforms,” said Chan. “Of course, our work can never be done, and we remain vigilant to defeat bad actors and emerging cyber risks. We expect to add additional election integrity measures in the months to come leading up to the 2019 federal election,” he continued.

This past July, after an intense investigation, 32 Facebook and Instagram accounts were removed for demonstrated inauthentic behavior. Facebook has doubled its personnel working on the issue and now has close to 20,000 members on its security team. Additionally, Facebook is doing what it can to disable fake accounts whose sole purpose is to spread misinformation. In the first quarter of 2018, it disabled over 583 million fake Facebook accounts. The majority were taken down within minutes of their creation, before any human user could report them. As Chan explained, in the week prior to the seminar, two fake accounts relating to the Vancouver municipal elections were deactivated.

McGill students who attended the seminar said they walked away with new perspectives on how Facebook is preventing misinformation. “I learned that they’re having this internal debate about how to regulate it themselves. In public discourse we don’t necessarily see that. It’s nice to see that they’re actually doing something, even if we don’t see the effects right away,” stated Bryan Buraga, a U1 Arts and Sciences student.

For students, social media has a huge impact on our daily lives and the information we have access to. It is the fastest and most effective way to spread information. For Kevin Chan and Facebook, making sure that this user experience, and this information, remains safe is a top priority.

