Western Mail

What kind of big data society do we want in future?

There is growing consensus that with big data comes great opportunity but also great risk, says Joanna Redden of Cardiff University


The risks that come with big data are not getting enough political and public attention. One way to better appreciate the risks that come with our big data future is to consider how people are already being negatively affected by its uses. At Cardiff University’s Data Justice Lab, we decided to record the harms that big data uses have already caused, pulling together concrete examples of harm that have been referenced in previous work so that we might gain a better big-picture appreciation of where we are heading.

We did so in the hope that such a record will generate more debate and intervention from the public about the kind of big data society, and future, we want. The following examples are a condensed version of our recently published Data Harm Record, a running record, to be updated as we learn about more cases.

1. Targeting based on vulnerability

With big data come new ways to socially sort with increasing precision. By combining multiple forms of data sets, a lot can be learned. This has been called “algorithmic profiling” and raises concerns about how little people know about how their data is collected as they search, communicate, buy, visit sites, travel, and so on.

Much of this sorting goes under the radar, although the practices of data brokers have been getting attention. In her testimony to the US Congress, World Privacy Forum’s Pam Dixon reported finding data brokers selling lists of rape victims, addresses of domestic violence shelters, sufferers of genetic diseases, sufferers of addiction and more.

2. Misuse of personal information

Concerns have been raised about how credit card companies are using personal details like where someone shops or whether or not they have paid for marriage counselling to set rates and limits. One study details the case of a man who found his credit rating reduced because American Express determined that others who shopped where he shopped had a poor repayment history.

This event, in 2008, was an early big data example of “creditworthiness by association” and is linked to ongoing practices of determining value or trustworthiness by drawing on big data to make predictions about people.

3. Discrimination

As corporations, government bodies and others make use of big data, it is key to know that discrimination can and does happen – both unintentionally and intentionally. This can happen as algorithmically driven systems offer, deny or mediate access to services or opportunities to people differently.

Some are raising concerns about how new uses of big data may negatively influence people’s ability to get housing or insurance – or to access education or get a job. A 2017 investigation by ProPublica and Consumer Reports showed that minority neighbourhoods pay more for car insurance than white neighbourhoods with the same risk levels. ProPublica has also shown how new prediction tools used in courtrooms for sentencing and bonds “are biased against blacks”. Others raise concerns about how big data processes make it easier to target particular groups and discriminate against them.

And there are numerous reports of facial recognition systems that have problems identifying people who are not white. As has been argued, this issue becomes increasingly important as facial recognition tools are adopted by government agencies, police and security systems.

This kind of discrimination is not limited to skin colour. One study of Google ads found that men and women are being shown different job adverts, with men receiving ads for higher-paying jobs more often. And data scientist Cathy O’Neil has raised concerns about how the personality tests and automated systems used by companies to sort through job applications may be using health information to disqualify certain applicants based on their history.

There are also concerns that the use of crime prediction software can lead to the over-monitoring of poor communities, as O’Neil also found. The inclusion of nuisance crimes such as vagrancy in crime prediction models distorts the analysis and “creates a pernicious feedback loop” by drawing more police into the areas where there is likely to be vagrancy. This leads to more punishment and recorded crimes in these areas.

4. Data breaches

There are numerous examples of data breaches in recent years. These can lead to identity theft, blackmail, reputation damage and distress. They can also create a lot of anxiety about future effects. One study discusses these issues and points to several examples:

The Office of Personnel Management breach in Washington in 2015 leaked people’s fingerprints, background check information, and analysis of security risks.

In 2015 Ashley Madison, a commercial website billed as enabling extramarital affairs, was breached and more than 25 gigabytes of company data, including user details, were leaked.

The 2013 Target breach in the US resulted in leaked credit card information, bank account numbers and other financial data.

5. Political manipulation and social harm

Fake news, bots and filter bubbles have been in the news a lot lately. They can lead to social and political harm as the information that informs citizens is manipulated, potentially leading to misinformation and undermining democratic and political processes as well as social wellbeing.

One recent study by researchers at the Oxford Internet Institute details the diverse ways that people are trying to use social media to manipulate public opinion across nine countries.

6. Data and system errors

Big data blacklisting and watchlists in the US have wrongfully identified individuals. Being wrongfully identified in this way can negatively affect employment and the ability to travel – and in some cases has led to wrongful detention and deportation.

In Australia, for example, there have been investigations into the government’s automated debt recovery system after numerous complaints of errors and unfair targeting of vulnerable people. And American academic Virginia Eubanks has detailed the system failures that devastated the lives of many in Indiana, Florida and Texas at great cost to taxpayers. The automated system errors led to people losing access to their Medicaid, food stamps and benefits.

We need to learn from these harms. There are a range of individuals and groups developing ideas about how data harms can be prevented. Researchers, civil society organisations, government bodies and activists have all, in different ways, identified the need for greater transparency, accountability, systems of oversight and due process, and the means for citizens to interrogate and intervene in the big data processes that affect them.

What is needed is the public pressure and the political will and effort to ensure this happens.

This article by Joanna Redden of Cardiff University first appeared on www.theconversation.com

> Our big data future comes with risks
