Should police work with hacked data? A lot can go wrong
One subject that never fails to interest is “CopTech”: the technology used by law enforcement agencies. We’re going to hear a lot more about this topic over the coming years.
That’s because the constantly evolving tech is often not well regulated in the context of law enforcement. That makes for creative policing and surveillance opportunities, but also many grey areas and potentially risky, unintended consequences.
Every now and then it seems as if police, spies and the armed forces take the “think like a hacker” concept a bit too far and trip up on ethics.
As an example, in May this year it was revealed that NZ Police had quietly trialled the American Clearview AI facial recognition system, which is built using billions of images scraped from people’s social media profiles. Some would say that if you post pictures of yourself or others on social media, you’re asking for them to be used by anyone, but even by that measure Clearview AI seems to have gone too far. The company now faces investigation by the Australian, British and Canadian privacy watchdogs.
There’s an argument, of course, that the long arm of the law should have access to the same resources that hackers do. However, as the state-sponsored NotPetya hacking campaign showed when it took out Maersk’s IT systems as collateral damage, there is plenty of potential for things to go wrong.
Spycloud boasts a colossal amount of data from breaches: over 102 billion “assets”. The data trove is growing at more than 50 breaches a week. That’s breaches, not the number of records per breach, which could be anything.
The data held by Spycloud appears to be very detailed too, with plenty of sensitive personal information. The data can be fed into software like Maltego, an open-source intelligence (OSINT) tool that finds links between pieces of information.
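To give a rough idea of what that kind of link analysis involves, here is a minimal sketch in Python. The records, field names and breach names are entirely invented, and this is not Maltego’s or Spycloud’s actual software; the point is just that any attribute shared between two breaches connects their records.

```python
# Illustrative sketch only: linking breach records by shared attributes.
# All data, breach names and field names below are invented.
from collections import defaultdict

records = [
    {"breach": "ShopA-2019", "email": "jo@example.com", "phone": "021-555-0101"},
    {"breach": "ForumB-2020", "email": "jo@example.com", "password_hash": "5f4dcc3b..."},
    {"breach": "TelcoC-2021", "phone": "021-555-0101", "address": "12 Example St"},
]

# Index the breaches by each identifying attribute they contain.
index = defaultdict(list)
for rec in records:
    for field in ("email", "phone"):
        if field in rec:
            index[(field, rec[field])].append(rec["breach"])

# Any attribute that appears in more than one breach links those records.
links = {key: breaches for key, breaches in index.items() if len(breaches) > 1}
for (field, value), breaches in links.items():
    print(f"{field}={value} links {breaches}")
```

Even in this toy version, one shared email and one shared phone number are enough to tie three separate breaches back to a single person.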
Spycloud says it collects the data through “human intelligence” and claims to have the most, and the cleanest, data of any provider. Microsoft, through a venture capital subsidiary, has sunk tens of millions of dollars of investment into Spycloud.
There’s nothing to suggest that Spycloud is doing anything nefarious with the data it has. On the contrary, the company says it has a cybersecurity mission, and tries to develop methods and technologies to prevent account takeovers, and to investigate fraud.
The media release I spotted last month talked about Spycloud working with law enforcement and another tech firm, Zero Trafficking, that also uses large-scale data analytics to fight the horrible scourge of human trafficking.
At first glance, it seems like a great idea to use people’s hacked personal data against the criminals who stole it and other miscreants. Besides, the data is already out there, often in multiple sets traded by spammers and other criminals.
The problem is that nobody asked the people whose stolen data is assembled in those 102 billion “assets” used for analysis and processing. I’m in their system, and June was the first time I had heard of Spycloud. The data is also “enriched”, meaning several sources are joined up to give a fuller picture of subjects.
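What such “enrichment” amounts to can be sketched in a few lines. Again, the records below are invented and this is not Spycloud’s actual process; it simply shows how partial records from separate breaches, joined on a common key such as an email address, add up to a much fuller profile than any single breach reveals.

```python
# Illustrative sketch of breach-data "enrichment": merging partial records
# from separate breaches, all keyed on the same email address, into one
# profile. All data and field names here are invented.

breach_one = {"email": "jo@example.com", "name": "Jo Bloggs"}
breach_two = {"email": "jo@example.com", "phone": "021-555-0101"}
breach_three = {"email": "jo@example.com", "address": "12 Example St"}

profile = {}
for record in (breach_one, breach_two, breach_three):
    profile.update(record)  # each source fills in more fields

print(profile)
```

No single breach exposed a name, phone number and address together; the merged profile does, which is exactly why aggregation raises the stakes for the people in the data.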
How is a private, commercial data collection and processing company regulated though? If you’re outside US jurisdiction, do international regulations apply? Does the law enforcement collaboration require warrants? What exactly is the data used for by police? How many other companies are involved in such work, and who are their customers? Can we opt out somehow and have our hacked data deleted?