National Post

Algorithm gon’ get you

NEW BOOK EXPLORES HOW WE’RE TRAPPED BY BIG DATA

- Molly Sauter

Reading Cathy O’Neil’s indispensable Weapons of Math Destruction, her thorough and often upsetting account of the complex predictive and sorting algorithms that have become powerful forces in the everyday lives of most people, a line from the late great Ursula Franklin’s The Real World of Technology kept echoing in my mind. In 1990, Franklin wrote, “We have lost the institution of government in terms of responsibility and accountability to the people. We now have nothing but a bunch of managers, who run the country to make it safe for technology.” O’Neil’s book in places reads like a confirmation of Franklin’s, as she describes the minute ways in which certain algorithms, deployed in realms of human endeavour from finance to job hunting to criminal justice, are not only defining and interpreting the worlds into which they’ve been unleashed, but are also actively shaping them.

O’Neil’s career as a Wall Street quant and data scientist gives her an insider’s view into the rise of Big Data, the catch-all term for crunching huge amounts of data pertaining to hundreds or thousands or millions of people at once, looking for emergent trends that help predict behaviour. Despite the technical complexity of its subject, Weapons of Math Destruction lucidly guides readers through these complex modelling systems that O’Neil describes as “opinions embedded in mathematics.” Not all such models attract her ire: the titular “weapons of math destruction” are a specific species of algorithmic bad actors in which “poisonous assumptions are camouflaged by math and go largely untested and unquestioned.” These “WMDs” tend to target the poor or marginalized, contain self-validating feedback loops, and can have sharply negative impacts on those caught in their nets.

O’Neil highlights the centrality of the profit motive in the development and deployment of these algorithms, which are not, for the most part, built to make the lives of those whom they sort easier or better. These algorithms execute a series of invisible transformations: person into consumer, consumer into mark, mark into dollar sign. Profitability becomes not only the stamp of a successful algorithm, O’Neil notes, but also “a stand-in or proxy for the truth.” Individuals are elided into data points, and it becomes easy to confuse the algorithm’s descriptive correlations for the things they are attempting to describe.

One prominent example of this is the use of students’ standardized test scores as a proxy for school quality, as occurred in the United States under a set of regulations called No Child Left Behind. NCLB valorized standardized testing as the key metric for evaluating teachers and schools, leading public schools in the States to devote hours to “teaching to the test,” trying to push up test scores to avoid firings, funding cuts, or even school closures. Scrambles to avoid falling behind in test score rankings have led to at least one major cheating scandal, in which teachers at an at-risk school in Atlanta, Ga., changed students’ answers on a statewide standardized exam to keep their school from being shuttered.

These algorithms hit the most precarious in our societies the hardest by explicitly targeting them for malicious services (such as payday loans or for-profit college degrees) or by silently, invisibly removing them from contention for jobs, apartments, a place at university, or even by lining them up for harsher penalties from the criminal justice system. O’Neil describes the impact of a mathematical model meant to gauge criminal recidivism risk this way:

“A person who scores ‘high risk’ is likely to be unemployed and to come from a neighbourhood where many of his friends and family have had run-ins with the law. Thanks in part to the resulting high score on the evaluation, he gets a longer prison sentence, locking him away for more years in a prison where he’s surrounded by fellow criminals — which raises the likelihood that he’ll return to prison. He is finally released into the same poor neighbourhood, this time with a criminal record, which makes it much harder to find a job. If he commits another crime, the recidivism model can claim another success. But in fact the model contributes to a toxic cycle and helps to sustain it.”

Though not all the malignant algorithms O’Neil examines are explicitly designed to separate a sucker from his money, it’s astonishing how often algorithms use money — whether one has it or doesn’t, or how much one is willing to spend to get out of an algorithm’s grasp — as the defining characteristic of a person. Often this takes the form of a credit score. In Canada and in the United States, it’s common for landlords to perform credit checks on prospective tenants, and some employers treat the automatic, algorithmic checking of an applicant’s credit as on par with checking their references, particularly with low-skill, entry-level, high-churn jobs that are often lifelines for economically precarious populations. It’s distressing that access to basic human needs like housing or the means to earn a living is potentially dependent upon the shibboleth of credit-worthiness.

As Ursula Franklin noted, these algorithms have a shaping impact on the world around us and our behaviour in it. As algorithms are increasingly used as arbiters of quality, corporations and individuals increasingly play to the algorithms themselves, creating an ever more tunnel-visioned game of maximizing algorithmic gain while disregarding costs and risks.

O’Neil describes the impact on schools themselves of U.S. News & World Report’s ambitious college ranking project, begun in 1983 in an attempt to remedy flagging newsstand sales. As the U.S. News rankings became more popular among parents attempting to wade through the thousands of higher education options, colleges and universities began trying to game specific factors in the U.S. News ranking algorithm (which was developed by journalists, not higher education experts), hoping to improve their position. Suddenly colleges were paying admitted students to retake the SATs in order to increase the average score for the incoming class, or shifting their admissions and financial aid policies to appear more selective and increase their four-year graduation rates.

Did these moves make these colleges “better”? Maybe. But unless “better” can be measured algorithmically, that question is unlikely to be seriously asked or seriously answered. An over-reliance on these mathematical models prevents us from asking whether the quantification of knowledge — the endless translation of human concepts of quality and of worth into strings of numbers and collapsed data points — is a good thing. Is quantitative knowledge always the best type of knowledge for a given situation? While Weapons of Math Destruction does not dive into epistemology, it will hopefully open a space where such conversations can occur.

O’Neil’s book is an excellent primer on the ethical and moral risks of Big Data and an algorithmically dependent world. It compellingly describes algorithms (and those who use them) behaving badly, and advocates for society to do better. O’Neil is no quantitative apostate: she believes in the beneficial potential of Big Data, and that it can do better. For those curious about how Big Data can help them and their businesses, or how it has been reshaping the world around them, Weapons of Math Destruction is an essential starting place.

Cathy O’Neil’s career as a Wall Street quant and data scientist gives her an insider’s view into the rise of Big Data, the catch-all term for crunching huge amounts of data pertaining to hundreds or thousands or millions of people at once.
