The Guardian (USA)

TechScape: can AI really predict crime?

- Johana Bhuiyan

In 2011, the Los Angeles police department rolled out a novel approach to policing called Operation Laser. Laser – which stood for Los Angeles Strategic Extraction and Restoration – was the first predictive policing programme of its kind in the US, allowing the LAPD to use historical data to predict with laser precision (hence the name) where future crimes might be committed and who might commit them.

But it was anything but precise. The programme used historical crime data like arrests, calls for service, field interview cards – which police filled out with identifying information every time they stopped someone, regardless of the reason – and more to map out “problem areas” for officers to focus their efforts on, and to assign criminal risk scores to individuals. Information collected during these policing efforts was fed into computer software that further automated the department’s crime-prediction efforts. The picture of crime that the software presented, activist groups like the Stop LAPD Spying Coalition argue, simply validated existing policing patterns and decisions, inherently criminalising locations and people based on a controversial hypothesis (ie, that where crimes have once occurred they will occur again). The data the LAPD used to predict the future was rife with bias, leading to the over-policing and disproportionate targeting of Black and brown communities – often the same ones police had been targeting for years, experts argue.

About five years into the programme, the LAPD focused on an intersection in a south LA neighbourhood that the late rapper Nipsey Hussle was known to frequent, according to documents my colleague Sam Levin and I reviewed and first reported on in November. It was the intersection where he grew up, and where he later opened a flagship clothing store as an ode to his neighbourhood and a means to move the community forward economically. There, in search of a robbery suspect described only as a Black man between the ages of 16 and 18, the LAPD stopped 161 people in a matter of two weeks. Nipsey Hussle had complained of constant police harassment before then, too, saying as early as 2013 that LAPD officers “come hop out, ask you questions, take your name, your address, your cell phone number, your social, when you ain’t done nothing. Just so they know everybody in the hood.” In an interview with Sam Levin, Nipsey’s brother Samiel Asghedom said nobody could go to the store without being stopped by police. The brothers and co-owners of The Marathon Clothing store even considered relocating to avoid the harassment.

Ultimately, the LAPD was forced to shutter the programme, conceding that the data did not paint a complete picture. Fast-forward nearly 10 years: the LAPD is working with a company called Voyager Analytics on a trial basis. Documents the Guardian reviewed and wrote about in November show that Voyager Analytics claimed it could use AI to analyse social media profiles and detect emerging threats based on a person’s friends, groups, posts and more. It was essentially Operation Laser for the digital world. Instead of focusing on physical places or people, Voyager looked at the digital worlds of people of interest to determine whether they were involved in crime rings or planned to commit future crimes, based on who they interacted with, what they posted, and even their friends of friends. “It’s a ‘guilt by association’ system,” said Meredith Broussard, a New York University data journalism professor.

Voyager claims all of this information on individuals, groups and pages allows its software to conduct real-time “sentiment analysis” and find new leads when investigating “ideological solidarity”. “We don’t just connect existing dots,” a Voyager promotional document read. “We create new dots. What seem like random and inconsequential interactions, behaviours or interests, suddenly become clear and comprehensible.”

But systems like Voyager and Operation Laser are only as good as the data they’re built on – and biased data produces biased results.

In a case study showing how Voyager’s software could be used to detect people who “most fully identify with a stance or any given topic,” the company looked at the ways it would have analysed the social media presence of Adam Alsahli, who was killed last year while attempting to attack the Corpus Christi naval base in Texas. Voyager said the software deemed that Alsahli’s profile showed a high proclivity toward fundamentalism. The evidence they pointed to included that 29 of Alsahli’s 31 Facebook posts were pictures with Islamic themes and that one of Alsahli’s Instagram account handles, which was redacted in the documents, reflected “his pride in and identification with his Arab heritage”. The company also pointed out that of the accounts he followed on Instagram, “most are in Arabic” and “generally appear” to be accounts posting religious content. On his Twitter account, Voyager wrote, Alsahli mostly tweeted about Islam.

Though the case study was redacted, many aspects of what Voyager viewed as signals of fundamentalism could also qualify as free speech or other protected activity. The case study, at least the parts we could see, reads like the social media profile of your average Muslim dad.

While the applications may seem different, what the two cases show is law enforcement’s ongoing desire to advance its policing with technology, and the limitations – and in some cases the bias – deeply embedded in the data these systems rely on. Some activists say police employ systems purporting to use artificial intelligence and other advanced technologies to do what the technology really isn’t capable of doing: analysing human behaviour to predict future crime. In doing so, they often create a vicious feedback loop.

The main difference is that there is now an entire sector of tech clamouring to answer law enforcement’s call for more advanced systems. It’s not just companies that build overt surveillance or policing programmes; consumer tech companies that the average person interacts with daily, like Amazon, are answering the call too. Amazon, for its part, has worked with the LAPD to give officers access to its network of Ring cameras. For police, the motivation for such partnerships is clear: the technology lends credence to their policing decisions and potentially makes their jobs easier or more effective. For tech companies, the motivation is to tap into revenue streams with growth potential. The lucrative government contract with seemingly endless funding is a hard prospect to resist, especially as many other avenues for growth have started to dry up. It’s why internal employee opposition has not deterred companies like Google, which continues to go after military contracts in spite of years of employee strife.

From the New York Times: “In 2018, thousands of Google employees signed a letter protesting the company’s involvement in Project Maven, a military program that uses artificial intelligence to interpret video images and could be used to refine the targeting of drone strikes. Google management caved and agreed to not renew the contract once it expired.

The outcry led Google to create guidelines for the ethical use of artificial intelligen­ce, which prohibit the use of its technology for weapons or surveillan­ce, and hastened a shake-up of its cloud computing business. Now, as Google positions cloud computing as a key part of its future, the bid for the new Pentagon contract could test the boundaries of those AI principles, which have set it apart from other tech giants that routinely seek military and intelligen­ce work.”

Where does a company like Google, whose tentacles already reach into nearly every industry, go to continue growing its business? Right now, the answer appears to be working with the government.

Readers, I’d love to hear how you feel about tech companies working with law enforcement to equip them with predictive policing or other surveillance technology.

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Wednesday.

A member of the LAPD gets into his patrol car parked in front of LAPD Headquarters on 1st St. in downtown Los Angeles. Photograph: Mel Melcon/Los Angeles Times/REX/Shutterstock
