Eye in sky lands on our streets, as policing around world gears up
• Gauteng provincial government to boost surveillance through Vumacam partnership
If I had a rand for every time I felt compelled to reference Minority Report as a technology journalist in the 16-plus years I’ve been doing this, I’d have ... OK, maybe enough for a beer, and not of the craft variety.
Still, this Philip K Dick novella turned early-aughts action flick has proven eerily prophetic. We see similar tech all around us today, from (virtually) autonomous cars to biometric-based security, from touch screen interfaces to persistent and personalised advertising, stalking us as we move about in the real and digital world.
We see drones being deployed by both public and private entities, elements of gestural computing at play in gaming rigs and augmented reality tools, and — thanks to Elon Musk’s Neuralink announcement earlier this month — brain-computer interfaces and implants are back in the spotlight.
The tech predictions of the movie treatment weren’t born in a bubble. Director Steven Spielberg did his homework, working with a panel of experts to make the 2050s tech depicted at least plausible, not merely fantastical. And, judging by the pace of innovation, we’re ahead of schedule.
But it wasn’t my love of skop skiet en donner that set the agenda for my column this week. It was a flurry of news stories crossing my screens on the potential and pitfalls of the next generation of security tech and tech-enabled policing.
In late January Wired and The Verge ran a story that seems to encapsulate some of the worst uses of such tools, detailing how police in the US have leapt from creating DNA-generated 3D models of suspects’ faces to running said models through facial recognition tools.
The intent was to crack cold cases. The result is something uncomfortably close to the future policing of Minority Report. It shows not only a disregard for ethics, but an alarming overconfidence in the tech itself.
We know that technology can introduce bias and that people have used these systems poorly. In Minnesota, as MIT Technology Review reported, the technology was used to identify Black Lives Matter protesters. Protesting, I shouldn’t have to say, isn’t a crime, even in the US. And the US and UK have both used algorithms in post-conviction processes, attempting to predict recidivism and to recommend sentences accordingly.
ProPublica has reported that one such system guessed re-offence likelihood wrong twice as often for black people as for white people. Systems built on incomplete or biased data replicate that bias.
That’s not to say the systems without tech intervention are flawless. We know humans have bias. Globally, I can point to any number of death row inmates proved innocent, judges skirting sentencing guidelines to extract more milligrams of flesh, police officers jumping to (wrong) conclusions, and so on.
I’d like to say that the nice thing about human error is that it isn’t networked, but that’s simply not true. A hard-line police captain or prosecutor will slowly shape a local workforce in his likeness. If your coworkers endorse your casual racism, it reinforces such beliefs, and that’s how workplace cultures develop and dig in, like grout mould.
So it is with interest, and caution, that I saw a local story of tech-enabled policing in my inbox. On Tuesday the Gauteng provincial government (GPG) and Vumacam announced a partnership involving “Vumacam’s extensive camera network and advanced crimefighting technologies”.
Vumacam boasts SA’s largest private CCTV network, with over 6,000 cameras in Gauteng and “access” to some 5,000 partner cameras nationwide. The GPG-Vumacam partnership also comes with the promise of “extending camera coverage to underserved areas, particularly within ... townships, informal settlements and hostels”.
It is really an extension of existing relationships. Vumacam works with the Joburg metro police, the SA Police Service and Business Against Crime through various projects and integrated operations centres, using tech to monitor the incoming video feeds from the extensive network, flag incidents and track vehicles.
It’s an announcement that will be sure to divide opinions around the braai this weekend. In the face of relentlessly poor crime stats and the emotive power of victim stories, it will feel like a great, even necessary, step to some. Others object to surveillance on principle, arguing that it is inherently an invasion of privacy, something many believe trumps the law and order justification.
I sit — rather painfully — on the (electrified) fence: an idealist about privacy, a pragmatist about crime. I worry that such systems could become the CCTV equivalent of the SIM surveillance scandal, in which loopholes in SIM card registration laws enabled state surveillance of investigative journalists.
In fact, the Johannesburg Roads Agency (JRA) and Vumacam clashed in court a few years back, when the JRA declined to process some of Vumacam’s “wayleave” (a type of permit or permission) applications, temporarily halting the rollout of a network of high-definition security cameras. Right To Know was an amicus in the case. The judge ultimately sided with Vumacam.
I asked Vumacam what measures it had in place to address the obvious privacy concerns such surveillance raises. CEO Ricky Croock replied that the “system security and data privacy standards [are] of a world class standard”.
“Dark screen technology with system-driven alerts means nobody is ever able to watch a feed constantly, but we are always alerted to crime incidents with the help of AI,” he explained.
Furthermore, he said the data was anonymised and “only drawn when needed for investigations — under highly secure and audited conditions. Where feeds and data are not required, [they are] disposed of after 30 days. Those with access to the system must pass rigorous checks and use the system under highly regulated conditions, and where footage is used for investigations it is stored securely with stringent measures in place for access.”
Vumacam has certainly ticked the boxes here, hopefully enough to prevent abuses of power, even if the measures will never appease the diehard privacy purist. So, with potential for bias in the hardware, software and “wetware” (aka people), we are the butter in the rock-and-a-hard-place sandwich.
We will have to remain constantly vigilant, watching for the bad guy in the shadows as much as for the one hiding in the policies.