AI is no substitute for the human touch
Google’s apocalyptic machine prophecies may be funny, but do they foreshadow a bleaker future? Shaun Prescott investigates.
Google Translate is a magical thing. Paste in text in almost any language and it’ll deliver a fairly accurate translation into the language of your choice. Except when it doesn’t. A case in point: internet sleuths discovered in July of this year that typing ‘dog’ into the panel 22 times and translating it from the West African language Yoruba produced the following result: “Doomsday Clock is three minutes at twelve. We are experiencing characters and dramatic developments in the world, which indicate that we are increasingly approaching the end times and Jesus’ return.”
That’s not the right translation, of course, and while it’s tempting and fun to speculate that demons are responsible, it’s more than likely a hiccup in Google’s “neural machine translation” technology. That’s according to a Harvard professor speaking to the website Motherboard. Basically, the technology compares and contrasts the same texts in different languages and uses this data to learn how to translate. If the source text is, say, The Bible, but the Yoruba language lacks a good (or any) translation of that text, then things will inevitably go astray. But that’s just one very broad example: when the system lacks the requisite data, it will generate a response anyway.
“When you give [Google Translate] a new one it is trained to produce something, at all costs, that also looks like human language,” Harvard professor Alexander Rush said. “However if you give it something very different, the best translation will be something still fluent, but not at all connected to the input.”
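Rush’s point can be illustrated with a toy analogy (this is not Google’s actual system; all names and data below are invented for illustration). Imagine a “translator” that memorises a tiny parallel dictionary and, for any input it has never seen, falls back on a bigram language model trained only on target-language text. Whatever you feed it, it produces something fluent-looking in the target language, at all costs:

```python
import random

# Toy "parallel corpus": the only source phrases the system has seen.
parallel = {
    "bonjour": "hello",
    "chien": "dog",
}

# Target-language text the "decoder" was trained on (standing in for the
# religious texts that dominate many low-resource parallel corpora).
target_corpus = (
    "we are approaching the end times . "
    "the clock is three minutes to twelve . "
    "the end times are near ."
).split()

# Bigram table: for each word, the words observed to follow it.
bigrams = {}
for a, b in zip(target_corpus, target_corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def translate(source, max_words=8, seed=0):
    """Return the memorised translation if we have one; otherwise
    'hallucinate' fluent target-language text from the bigram model."""
    if source in parallel:
        return parallel[source]
    rng = random.Random(seed)
    word = rng.choice(target_corpus)
    out = [word]
    for _ in range(max_words - 1):
        followers = bigrams.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

print(translate("chien"))        # known input: "dog"
print(translate("chien " * 22))  # unseen input: fluent, but unrelated to it
```

The second call never fails: with no matching training data, the model simply samples from what it knows, which is exactly why the output drifts toward the tone of its training text rather than the input.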
This is an interesting example of automation and machine learning throwing up less-than-desirable results. In the case of what’s been dubbed ‘TranslateGate’ it’s mildly amusing. But the internet is increasingly marked by a vague eeriness born of automated systems. Take the case of ‘Elsagate’, a Reddit-born conspiracy focused on the very real ubiquity of seemingly algorithmically generated children’s YouTube videos. Channels spit these videos out at an alarming rate based on popular tags. Elsa from Frozen is common, as are Spider-Man and the Joker. Generic assets are spliced together with no rhyme or reason, with these characters often sharing the limelight. To watch these videos is to marvel at a product virtually untouched by human warmth: they’re cold, nonsensical, and fuelled by the logic of machine learning and SEO. They often deviate dramatically from what most parents would regard as child-safe material, too.
These self-maintaining systems are no doubt efficient, but in the case of Elsagate, what’s interesting is the proliferation not of algorithmically generated videos, but of real, human-created videos that ape the superficiality of the astoundingly popular bot creations. In other words, the system fine-tunes a certain variety of logic, and then the human touch adapts to it. It’s like something out of a cyberpunk novel.
Our lives are increasingly driven by the advice of machines. Platforms like Amazon, Netflix and Steam all think they know enough about our tastes to suggest other things we’d like. These are harmless enough. But TranslateGate and Elsagate both demonstrate the foibles of machines, and the potential consequences of letting them run the show.