Khaleej Times

Can Artificial Intelligence kill social bias on FB?


When faced with a challenge, what’s a tech company to do? Turn to technology, Facebook suggests.

Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. In addition to updating its policies about how advertisers can use data to target users, the social media giant plans to implement a high-tech solution: machine learning.

In recent years, artificial intelligence has climbed off the pages of science fiction novels and into myriad aspects of everyday life, from internet searches to health care decisions to traffic recommendations. But Facebook’s new ad-approval algorithms wade into greener territory as the company attempts to utilise machine learning to address, or at least not contribute to, social discrimination.

“Machine learning has been around for half a century at least, but we’re only now starting to use it to make a social difference,” Geoffrey Gordon, an associate professor in the Machine Learning Department at Carnegie Mellon University in Pittsburgh, Penn., tells The Christian Science Monitor in a phone interview. “It’s going to become increasingly important.”

Though analysts caution that machine learning has its limits, such an approach also carries tremendous potential for addressing these types of challenges. With that in mind, more companies – particularly in the tech sector – are likely to deploy similar techniques.

Facebook’s change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. In October, the nonprofit investigative news site ProPublica tested the company’s ad-approval process with an ad for a “renter event” that explicitly excluded African-Americans. The Fair Housing Act of 1968 prohibits discrimination or showing preference to anyone on the basis of race, making that ad illegal – but it was nevertheless approved within 15 minutes, ProPublica reported.

Why? Because while Facebook doesn’t ask users to identify their race and bars advertisers from directing their content at specific races, it has a host of information about users on file: pages they like, what languages they use, and so on. This kind of information is important to advertisers, since it means they can improve their chances of making a sale by targeting their ads toward people who are more likely to buy their product.

But by creating a demographic picture of a user, this data may make it possible to determine an individual’s race, and then improperly exclude or target individuals. The company’s updated policies emphasize that advertisers cannot discriminate against users on the basis of personal attributes, which Facebook says include “race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition.”

There’s a fine line between appropriate use of such information and discrimination, as Facebook’s head of US multicultural sales, Christian Martinez, explained following the ProPublica investigation: “a merchant selling hair care products that are designed for black women” will need to reach that constituency, while “an apartment building that won’t rent to black people or an employer that only hires men [could use the information for] negative exclusion.”

For Facebook, the challenge is maintaining that advertising advantage while preventing discrimination, particularly where it’s illegal. That’s where machine learning comes in. “We’re beginning to test new technology that leverages machine learning to help us identify ads that offer housing, employment or credit opportunities – the types of advertising stakeholders told us they were concerned about,” the company said in a statement on Wednesday. The computer “is just looking for patterns in data that you supply to it,” explains Professor Gordon.

That means Facebook can decide which areas it wants to focus on – namely, “ads that offer housing, employment or credit opportunities,” according to the company – and then supply hundreds of examples of these types of ads to a computer. If a human “teaches” the computer by initially labeling each ad as discriminatory or nondiscriminatory, the computer can learn to go “from the text of the advertising to a prediction of whether it’s discriminatory or not,” Gordon says.
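The workflow Gordon describes – labeled examples in, a text-to-label prediction out – can be sketched with a toy Naive Bayes text classifier. This is only an illustration of the general technique, not Facebook’s actual system; the sample ads and their labels below are invented:

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (ad_text, label) pairs -> per-label word counts and example totals."""
    counts, totals = {}, Counter()
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def predict(counts, totals, text):
    """Return the label with the highest smoothed log-probability for the text."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        n = sum(c.values())
        # prior: how common this label is in the training set
        score = math.log(totals[label] / sum(totals.values()))
        for w in text.lower().split():
            # Laplace-smoothed likelihood of each word under this label
            score += math.log((c[w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Invented sample ads, hand-labeled the way a human "teacher" would:
ads = [
    ("apartment for rent no families with children", "discriminatory"),
    ("hiring young men only apply today", "discriminatory"),
    ("spacious apartment for rent near downtown", "ok"),
    ("hiring experienced engineers apply today", "ok"),
]
counts, totals = train(ads)
print(predict(counts, totals, "no families with children apply"))   # -> discriminatory
print(predict(counts, totals, "spacious apartment near downtown"))  # -> ok
```

A production filter would use far more examples and richer features, but the shape is the same: the machine only generalizes the patterns present in the labeled data it is given.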

This kind of machine learning – known as “supervised learning” – already has dozens of applications, from determining which emails are spam to recognizing faces in a photo. But there are certainly limits to its effectiveness, Gordon adds. “You’re not going to do better than your source of information,” he explains. Teaching the machine to recognize discriminatory ads requires lots of examples of similar ads. “If the distribution of ads that you see changes, the machine learning might stop working,” Gordon explains, noting that such changing strategies on the part of content producers can often get them past AI filters, like an email spam filter. Machines’ insufficient understanding of detail can also lead to high-profile problems, as in 2015, when Google Photos mistakenly labeled black people as gorillas.

“Teaching” the machine also means having a person take the time to go through hundreds of ads and label them, as well as continue to check and correct a machine’s work. That makes the system vulnerable to human biases. “That process of refinement involves sorting, labeling and tagging – which is difficult to do without using assumptions about ethnicity, gender, race, religion and the like,” explains Amy Webb, founder and CEO of the Future Today Institute, in an email to the Monitor. “The system learns through a process of real-time experimenting and testing, so once bias creeps in, it can be difficult to remove it.”

More overt bias issues have already been observed with AI bots, such as Tay, Microsoft’s chatbot, which repeated the Nazi slogans fed to it by Twitter users. While the bias in ad screening may be more subtle, since it is presumably unintentional, it could conceivably create persistent problems.

Unbiased machine learning “is the subject of a lot of current research,” says Gordon. One answer, he suggests, is having a lot of teachers, since it offers a consensus view of discrimination that may be less vulnerable to individual biases. — Christian Science Monitor
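Gordon’s many-teachers suggestion can be sketched as simple majority voting over annotators’ labels, so that no single labeler’s bias decides an ad’s training label. The annotators and votes below are invented for illustration:

```python
from collections import Counter

def consensus(labels):
    """Majority vote across several annotators' labels for one ad."""
    return Counter(labels).most_common(1)[0][0]

# Three invented annotators judge the same ad; the outlier is outvoted:
votes = ["discriminatory", "ok", "discriminatory"]
print(consensus(votes))  # -> discriminatory
```

Real labeling pipelines weight annotators by reliability rather than counting raw votes, but the principle is the same: aggregating many judgments dilutes any one person’s bias.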

The challenge is maintaining advertising advantage while preventing discrimination

