APC Australia

AI is no substitute for the human touch

Google’s apocalyptic machine prophecies may be funny, but do they point to a potentially bleak future? Shaun Prescott investigates.


Google Translate is a magical thing. Feed it text in just about any language and it’ll deliver a fairly accurate translation in the language of your choice. Except when it doesn’t. A case in point: internet sleuths discovered in July of this year that typing ‘dog’ into the panel 22 times and translating from the West African language Yoruba produced the following result: “Doomsday Clock is three minutes at twelve. We are experiencing characters and dramatic developments in the world, which indicate that we are increasingly approaching the end times and Jesus’ return.”

That’s not the right translation, of course, and while it’s tempting and fun to speculate that demons are responsible, it’s more than likely a hiccup in Google’s “neural machine translation” technology. That’s according to a Harvard professor speaking to the website Motherboard. Basically, the technology compares and contrasts the same texts in different languages and uses this data to learn how to translate. If the source text is, say, The Bible, but the Yoruba language lacks a good (or any) translation of that text, then things will inevitably go astray. But that’s just one very broad example: whenever the system lacks the requisite data, it will generate a plausible-sounding response anyway.

“When you give [Google Translate] a new one it is trained to produce something, at all costs, that also looks like human language,” Harvard professor Alexander Rush said. “However if you give it something very different, the best translation will be something still fluent, but not at all connected to the input.”
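Rush’s point can be made concrete with a toy sketch. The Python below is emphatically not how Google Translate works; it just uses a tiny word-level Markov chain, trained on a handful of invented, prophecy-flavoured sentences, to stand in for a decoder that has learned to produce fluent output. Because the generator never really conditions on the input, a nonsense prompt like 22 ‘dog’s still yields grammatical-looking text.

```python
import random

# Toy illustration of the failure mode Rush describes: a model trained
# to produce fluent output will do so even when the input is noise.
# This is NOT Google's neural machine translation system -- just a
# word-level Markov chain standing in for the "fluent decoder".
# The training corpus is invented for this example.
CORPUS = (
    "we are approaching the end times . "
    "the clock is three minutes to twelve . "
    "dramatic developments indicate the end times are near . "
    "the world is increasingly approaching the end ."
).split()

# Bigram table: each word maps to the words observed to follow it.
transitions = {}
for a, b in zip(CORPUS, CORPUS[1:]):
    transitions.setdefault(a, []).append(b)

def pseudo_translate(source_text, max_words=12, seed=None):
    """Emit fluent-looking text while effectively ignoring the source.

    A real decoder conditions on its input, but a repetitive,
    out-of-distribution prompt like 'dog dog dog...' carries no
    usable signal, so the language model takes over entirely."""
    rng = random.Random(seed)
    word = rng.choice(list(transitions))
    output = [word]
    for _ in range(max_words - 1):
        word = rng.choice(transitions.get(word, ["."]))
        output.append(word)
        if word == ".":
            break
    return " ".join(output)

print(pseudo_translate("dog " * 22, seed=7))
# Prints something grammatical yet unrelated to the input --
# the TranslateGate effect in miniature.
```

The point is structural: fluency is what the model optimises for, so fluency is what you get, whether or not any meaning survives.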

This is an interesting example of automation and machine learning throwing up less-than-desirable results. In the case of what’s been dubbed ‘TranslateGate’ it’s mildly amusing. But the internet is increasingly marked by a vague eeriness born of these systems. Take the case of ‘Elsagate’, a Reddit-born conspiracy theory focused on the very real ubiquity of seemingly algorithmically generated children’s YouTube videos. Channels spit these videos out at an alarming rate based on popular tags. Elsa from Frozen is common, as are Spider-Man and the Joker. Generic assets are spliced together with no rhyme or reason, and these characters often share the limelight. To watch these videos is to marvel at a product virtually untouched by human warmth: they’re cold, nonsensical, and fuelled by the logic of machine learning and SEO. They can often deviate dramatically from what most parents would regard as child-safe material, too.
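The assembly-line quality of these channels is easy to imitate. As a hypothetical sketch (the tag pools below are invented, not drawn from any real channel), here’s how little logic is needed to churn out Elsagate-style video titles by splicing popular tags:

```python
import itertools
import random

# Hypothetical sketch of tag-splicing content generation. The tag
# pools are invented for illustration; real channels chase whatever
# keywords the recommendation algorithm currently rewards.
CHARACTERS = ["Elsa", "Spider-Man", "Joker"]
THEMES = ["Learn Colors", "Finger Family", "Wrong Heads"]
HOOKS = ["for Kids", "Nursery Rhymes", "Superhero Babies"]

def generate_titles(n, seed=None):
    """Splice tags into n video titles -- no editorial judgment,
    no rhyme or reason, just combinatorics."""
    rng = random.Random(seed)
    combos = list(itertools.product(CHARACTERS, THEMES, HOOKS))
    rng.shuffle(combos)
    return [" ".join(parts) for parts in combos[:n]]

for title in generate_titles(5, seed=1):
    print(title)
```

Swap titles for video assets and the same loop describes the uploads themselves.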

These self-maintaining systems are no doubt efficient, but in the case of Elsagate, what’s interesting is the proliferation of videos that aren’t algorithmically generated at all, but created by real humans seeking to ape the superficiality of the astoundingly popular bot creations. In other words, the system fine-tunes a certain variety of logic, and then the human touch adapts to it. It’s like something out of a cyberpunk novel.

Our lives are increasingly driven by the advice of machines. Platforms like Amazon, Netflix and Steam all think they know enough about our tastes to suggest other things we’d like. These recommendations are harmless enough. But TranslateGate and Elsagate both demonstrate the foibles of machines and the potential consequences of letting them run the show.
