AI manipulation on the increase
People are now using artificial intelligence to create convincing imitations of others, writes Richard MacManus.
Facebook and YouTube can already automatically tag and categorise content using machine learning, so adding a "fake video filter" shouldn't be too tricky, given the AI prowess at those companies.
AI researchers like White are also working on countermeasures. White told me he was "currently working on tools which help the public differentiate between genuine and manipulated media".
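White's tools aren't described in detail, but the general shape of a media classifier can be sketched. The toy example below is purely illustrative: real detectors use deep neural networks trained on large corpora, whereas this is a nearest-centroid classifier over invented, hypothetical per-clip features (such as blink rate or lip-sync error). None of the numbers or feature names come from White's work.

```python
def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(features, genuine_centroid, fake_centroid):
    """Label a clip 'genuine' or 'manipulated' by whichever centroid is nearer."""
    if distance(features, genuine_centroid) <= distance(features, fake_centroid):
        return "genuine"
    return "manipulated"

# Hypothetical labelled training clips; each vector is, say,
# [blink rate, lip-sync error, compression-noise score].
genuine_clips = [[0.30, 0.05, 0.10], [0.28, 0.07, 0.12]]
fake_clips    = [[0.05, 0.40, 0.35], [0.08, 0.45, 0.30]]

g_c = centroid(genuine_clips)
f_c = centroid(fake_clips)

# A new clip whose features resemble the genuine set.
print(classify([0.29, 0.06, 0.11], g_c, f_c))  # prints "genuine"
```

The point of the sketch is the pipeline, not the maths: extract numeric features from a clip, compare them against labelled examples, and flag the outliers. Production systems replace every step here with learned models.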
This feels like the tip of the iceberg, though. We've reached an interesting inflection point in our ability to create fake media experiences. In April, a holographic version of Roy Orbison will embark on a tour of the United Kingdom. While this will be a pre-scripted show, how long until we see a holographic Roy Orbison controlled in real time by an AI? That would enable a different performance each night.
Indeed, you can have a fake multimedia experience in your own home, simply by putting on a virtual reality headset. Last year, the band Coldplay live-streamed one of their concerts in VR, enabling anyone anywhere in the world to attend. Who's to say that experience wasn't as "authentic" as being at the concert in person?
These are complex questions that society will increasingly grapple with. But in the short term, we must find solutions to counter AI manipulation because there’s a clear danger of it being used for nefarious purposes. The Russian government supposedly meddling with Facebook feeds before the US election is one thing. Creating a fake, but believable, video of Vladimir Putin saying he’s just launched a nuclear missile is quite another.
Richard MacManus (@ricmac) founded tech blog ReadWrite Web in 2003 and has since become an internationally recognised commentator on what’s next in technology and what it means for society.