How will Microsoft’s AI system catch paedophiles?
Project Artemis will help smaller sites protect children online
Do you have nightmares about the prospect of a world dominated by artificial intelligence (AI)? We wouldn’t blame you. Most discussions about AI focus on the dangers. Killer robots will malfunction and turn on their creators; self-driving cars will squash pedestrians; facial recognition will enslave the population. We’re doomed!
These make great headlines for an anxious age, but the reality is less apocalyptic. Most AI is actually quite mundane, built simply to recognise patterns in photos and language. And it’s being used in many positive ways, particularly by medical researchers to diagnose diseases earlier and more accurately than doctors.
Microsoft now thinks AI can be used to catch paedophiles grooming children online. Work began at a ‘hackathon’ event in November 2018, with Microsoft developers joining teams from Facebook, Google and Snap (which makes Snapchat) to analyse thousands of conversations to understand the phrases paedophiles use when attempting to befriend children.
Since then Microsoft has been working on Project Artemis to develop the research into an AI system that can work out the probability that a conversation is a grooming incident.
The project was led by Dr Hany Farid, an expert in the field of image analysis. In 2009, he worked with Microsoft to build the AI tool PhotoDNA, which identifies images of child exploitation. It’s now used by more than 150 companies and organisations around the world.
Microsoft hasn’t revealed what phrases the system looks for, so that paedophiles can’t try to beat it. The company claims it’s sophisticated enough to distinguish between grooming attempts and erotic conversations between consenting adults.
After testing Artemis on Skype and Xbox Live, Microsoft is now ready to share it with companies that run chat services. They can use the tool to automatically flag suspicious conversations that need to be checked by human moderators, and passed on to the police if necessary.
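Microsoft hasn’t published how Artemis actually scores a conversation, but the workflow it describes (score each conversation, then send anything above a threshold to a human moderator) can be sketched in outline. Everything below is illustrative: the keyword scorer and the threshold value are invented stand-ins, not the real system, which would use a trained classifier.

```python
# Illustrative sketch of a score-then-review pipeline. This is NOT
# Microsoft's Artemis: the scorer and threshold are placeholders.

REVIEW_THRESHOLD = 0.8  # hypothetical cut-off for human review


def risk_score(conversation: list[str]) -> float:
    """Toy stand-in for a classifier: returns a probability-like score.

    A real system would run a trained model over the full conversation;
    here we just count messages matching a (hypothetical) watch-list.
    """
    watch_list = {"keep this secret", "don't tell your parents"}
    hits = sum(
        1
        for msg in conversation
        if any(phrase in msg.lower() for phrase in watch_list)
    )
    # Scale the hit rate into a 0.0-1.0 score.
    return min(1.0, hits / max(1, len(conversation)) * 2)


def flag_for_review(conversations: dict[str, list[str]]) -> list[str]:
    """Return the IDs of conversations a human moderator should check."""
    return [
        cid
        for cid, msgs in conversations.items()
        if risk_score(msgs) >= REVIEW_THRESHOLD
    ]
```

The point of the threshold step is the one the article makes: the machine only triages, and a human moderator makes the final call before anything is passed to the police.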
This kind of automated detection is vital because many smaller companies can’t afford to pay humans to check everything that appears online. Artemis has been built specifically for firms that lack the millions Facebook and Google can spend on large teams of moderators.
Andy Burrows, Head of Child Safety Online Policy at the NSPCC, said there’s “no excuse” for sites to not adopt the tool. “It could not only shield young people from abuse, but also pin down predatory adults,” he added.
Deployment of the system will be managed by Thorn, a US charity set up by the actors Demi Moore and Ashton Kutcher that aims to “eliminate child sexual abuse from the internet”. It says that Artemis is a milestone in catching paedophiles because it helps to create an industry standard for what detection and monitoring of predators should look like.
Thorn’s boss Julie Cordua says sophisticated systems are needed because paedophiles are persistent and devious. She said they “try to isolate the child and will follow them across multiple platforms, so they can have multiple exploitation points”.
Microsoft admits that Artemis, which works only in English at present, isn’t a silver bullet, saying the “horrific” crime of internet grooming needs to be tackled by the whole of society working together. It’s encouraging other tech companies to work on Artemis “with the goal of continuous improvement and refinement”.
But this shouldn’t be misinterpreted as pessimism. There’s justified hope that tools like Artemis will help to fight what remains the most sickening threat on the internet. Such valuable work should help to persuade the public that there’s more to AI than bleak predictions of oppression and violence.