ELLE (Canada)

TECH

BY MELISSA VINCENT

Advocates say artificial intelligence is supposed to make life easier, but what if it’s actually creating more problems?

THROUGHOUT UNIVERSITY, I HELD DOWN A PART-TIME JOB AS A CHILD-CARE WORKER at the YMCA, the same job my mom had when I was growing up. I needed the gig to be able to afford to work unpaid media internships, but I quickly realized that it also bolstered my capacity to make quick, empathetic decisions. And it introduced me to a group of mostly immigrant women who told me stories about their past lives as project managers and head nurses—stories that coloured in the gaps left by my academic education. I’ve never included that position on a job application, though; career counsellors and job-advice sites made it clear that, given the limited inventory on a resumé, the position wouldn’t further my career trajectory.

The selective erasure of that seminal YMCA job never sat well with me. It felt like I was pandering to the slanted views of a human-resources department; I was bothered by the idea that I was reinforcing the unvocalized perceptions about which experiences and occupations our deeply unequal society considers to be valuable. But, I reasoned with myself, as the child of immigrants, I owed it to my parents to do everything in my power to land a job that paved a path for generational wealth. It turns out that I was performing for an altogether different gatekeeper, however—and one that is steadily accumulating more power.

Amazon came under fire a few years ago for its resumé-screening software, which was directed by an algorithm that penalized CVs that included the word “women” (as in “women’s basketball team captain” or “women and gender studies”) because it had been trained on 10 years’ worth of mostly male resumés. I was horrified but not surprised. According to Shauna Goldenberg, a human-resources consultant based in Toronto who often advises companies that use software based on artificial intelligence (AI) in their workplaces, resumé-screening algorithms became ubiquitous because of their early promise to shorten the hiring process without bias. But they can easily be coded with information that’s far from neutral, standardizing how applicants are further disadvantaged across the intersections of race, gender and class and cementing tacit human prejudices into yet more structures that are hard to discern and even harder to demolish. “When you use technology to streamline the recruiting process, you must acknowledge that the people coding the technology will put their unconscious bias into it,” she explains.

Artificial intelligence is most essentially defined as computer-processing systems that have been designed to perform the functions of human cognition. And humans, consciously or not, built early AI that contained the most abhorrent parts of our cognition—systems that replicated and formalized the silent mechanisms of systemic oppression that protect those with power by dehumanizing those without; systems like predictive policing (used by police departments in Vancouver, Edmonton, Saskatoon and London, Ont.) that assess who might commit a crime based on automated decision-making; and systems, like those now being used in Canadian immigrant and refugee processes, that pose a threat to domestic and international human-rights laws.

As flaws are uncovered, the tech world is enthusiastically seizing upon “ethical AI” as the newest frontier of innovation—a move urged forward by the cultural zeitgeist, which demands the rebuke of racism and sexism in all its forms. So the promise of virtuous yet efficient tech has swooped in on a white horse, once again pledging to provide software-based solutions capable of tightening an ever-shrinking bottom line by streamlining labour-intensive tasks like large-scale recruitment and by-the-minute performance tracking but also, in this iteration, fervently parading better-than-ever inclusivity mandates.

Ethical AI’s possibilities form a labyrinth that the Canadian tech landscape is uniquely positioned to navigate. Since the turn of the millennium, Canada has become an international hub of machine learning, producing the most AI patents per million people among the G7 countries and China, and Toronto ranks ahead of New York in tech talent. In 2017, Canada became the first country in the world to announce a national AI strategy in order to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence,” and renowned Canadian AI institutes like Vector, located in the Toronto-Waterloo Innovation Corridor, have poured research into developing ethical best practices.

Of course, setting the international benchmark for ethical innovation is a boon for national pride, but it’s also especially lucrative territory, considering that employers in Canada’s biggest cities often have to contend with an interlocked set of factors when hiring: highly qualified and diverse applicant pools, a competitive job market that consistently seeks out top-tier talent and a near-universal incentive to build diversity initiatives into a company’s public-facing image. And so across Canada, a growing number of start-ups, many of which are based in the Greater Toronto Area, make similar promises: the removal of discriminatory bias from the process of finding, getting and keeping a job by applying AI-based tools.

Fintros, a finance-career discovery platform that strips resumés of identifying information to render all applicants anonymous, is one of the Toronto outfits looking to satiate the corporate hunger to lean on Canada’s marketable brand as an inclusive cultural mosaic while neatly maximizing productivity. Another is Plum, a one-stop-shop, skill-based platform that uses organizational psychology (rather than job history) to inform decisions on employee hiring, growth and retention. And Knockri—which to date has raised $3.4 million in funding, has co-published reports with LinkedIn and has a seat at the World Economic Forum’s Global Council on Equality and Inclusion—has been a unique trailblazer in the field, using “evidence-based” machine learning to do away with bias in hiring.

Inspired by co-founder Jahanzaib Ansari’s observation that an anglicized spelling of his name received a better response when he applied for jobs, Knockri’s leadership team has gone to significant lengths to ensure that it doesn’t succumb to the pitfalls of other similarly intentioned software. The company uses proprietary data sets (the raw material used to train algorithms) that are inclusive of the full spectrum of cultures, races, genders and accents rather than historical data from scientific studies or census tracts, which can contain bias. Yet COO Maaz Rana believes that there’s still a staggering amount of work to be done. “We talk about all the investment that’s happening in AI within Canada, but prior to doing so, we need to make sure that the foundation on which it’s built is solid,” he explains. “That has yet to be accomplished because there’s no universal standard that companies are expected to follow.”

A few months ago, I applied for a job at Amazon, and my resumé—scrubbed of my child-care job but including a mention of an internship at a feminist and antiracist publishing house—made it through to the interview stage. The mega-corp had scrapped its biased resumé-screening platform when it came under fire a few years ago, citing its ethical shortcomings, but since most companies keep the black box of their algorithms tightly under wraps, it’s hard to say how much has changed since then. What is clear, however, is that the work required to enact a national framework for ethical AI built in good faith remains overwhelming.

Despite the fact that the meaningful application of ethical AI is still in its infancy (right now, it largely subsists through a crop of experimental software, extensive speculative research, a disparate set of national guidelines and vague platitudes from tech giants like Microsoft and Google), it has been touted as having the capacity to incite legitimate transformative change in the workplace. But therein lies the risk: Declaring that a foundational problem is solved without any probing inquiry quickly shifts resources elsewhere and, in the process, neatly conceals the oppression that remains.

In 2019, the Ontario Human Rights Commission released a report acknowledging the need for increased research to examine the potential impacts of replacing human judgment with crime-prediction AI, especially when policing in Black and Indigenous communities. In July, just weeks after the police killing of George Floyd sparked international protests and demands to defund the police, controversial facial-recognition company Clearview AI ceased offering its services in Canada—services that had been used by a number of law-enforcement agencies, including the RCMP—after the Canadian Privacy Commissioner opened an investigation. The U.S.-based company, which became a “viral hit” with law-enforcement agencies in just a few years, had come under fire for populating its database with billions of unregulated images scraped from social media in order to help identify suspects and victims—a practice that also poses a risk to darker-skinned people, since facial-recognition software has a proven history of misidentifying them.

Safiya Umoja Noble, co-director of the UCLA Center for Critical Internet Inquiry and author of Algorithms of Oppression, is one of the researchers on the front lines who are revealing the violent repercussions of unaudited AI. She sounded the alarm in 2010 about a fundamental flaw in Google’s algorithm that produced racist and pornographic results when the terms “Black girls” and “Black women” were fed into its search engine. Today, she has a disconcerting question about the rapid, unregulated deployment of predictive analytics in every sector of the economy: Who, exactly, is leading the charge?

“There’s now mainstream public understanding that these technologies can be harmful,” explains Noble. “But the resources for researching and studying ethics have gone right back to the original epicentres that sold us the bill of goods. Similar to when big tobacco funded all of its own favourite researchers, that’s kind of what big tech is doing.”

It’s a pivot that has resulted in companies like Google and Facebook—deflecting attention from the role their loosely controlled algorithms played in the outcome of the 2016 U.S. election and Brexit—repositioning themselves as cutting-edge thought leaders. In 2014, Google acquired DeepMind, a renowned AI company with a laser focus on research in ethics, and since then has created a sleek blog that touts the company’s social-good initiatives, like a collaboration with the LGBTQ+ organization The Trevor Project that’s intended to build a virtual counsellor-training program. “It’s like ethics has become an industry,” adds Noble.

Last March, the federal government attempted to curb the unbridled, potentially adverse use of AI by announcing a directive that sought to hold AI-driven decision-making to some degree of “transparency, accountability, legality and procedural fairness.” However, innovation often quickly outpaces the development of legislation, and the implementation of these regulations by governing bodies remains spotty at best.

The reality, though, is that ethical AI will never be anything other than a buzzword until it’s capable of moving beyond the perception that only some workers are worthy of its benefits. Often, workers in low-status, low-wage positions, like migrant farm workers and essential-care staff, are left out of the conversation—which means that, once again, women and people of colour are being disproportionately silenced. “We need to be very mindful of the types of voices that aren’t being heard—and [those] that are being catered to,” says Rana.

Last year, the Canadian Agri-Food Automation and Intelligence Network announced a $108.5 million project that promised to create a network of private partners that would use Canada’s strengths in AI to “change the face of agriculture.” While the decision to digitize gave lip service to potentially improving working conditions, in practice, it has resulted in some of Southern Ontario’s migrant farm workers being subjected to performance tracking through smartwatches and fingerprinting (a harrowing reality when coupled with the insufficient safety protocols that led to outbreaks of COVID-19 at a number of farms in Leamington, Ont.). “The increased use of automation will have negative consequences—from wage theft to heightened surveillance at work and at home—on the predominantly racialized labour force in the agricultural industry,” explains Chris Ramsaroop, one of the founding members of Justice for Migrant Workers. “While the industry will claim that AI and automation is being implemented to enhance productivity and improve efficiency, from our perspective, it’s based on exerting further control on workers.”

It’s no longer possible to believe that software created in the spirit of techno-optimism can promote social good through its mere existence. Rather, we must place those lofty expectations on the gatekeepers of AI—the people at the top who know there is more on the line than access to jobs that grant upward social mobility. “It’s about abolishing harmful digital systems that are fundamentally exploitative, by virtue of their existence, and dangerous to vulnerable people who are already oppressed,” says Noble. Technology on its own will never birth radical innovation. Change can only be delivered when living, breathing people imagine a new way forward, when we learn to scrutinize the ways we interact with these coded expressions of power and when we begin to demand transparency and accountability of the AI we allow into our lives. And perhaps when we can reclaim some agency by reinserting line items—barista, child-care worker, cashier—into our resumés, we can begin the process of redefining the artificial definition of valuable work experience and unlearning the insidious prejudices that have long plagued our inartificial human experience.
