Maximum PC

BRACE FOR IMPACT

AI developers warn: this could get bumpy


NEW TECHNOLOGIES bring dissenting voices, and the explosion of AI-powered systems has attracted more than its fair share of people sounding alarms. Many of these voices, however, belong to engineers and scientists who were involved in AI development at the highest levels and know what they are talking about. Google’s CEO, Sundar Pichai (pictured above), delivered a sobering warning about the effects of AI-powered systems in a recent interview. It will, he says, affect “every product across every company”. His biggest concern is that it can all too easily be used to spread convincing misinformation. Google has had a set of internal guidelines for AI since 2018. “We have to be very thoughtful,” he warns.

A recent Netflix drama included an AI-powered ‘enhanced interrogation’ robot developed by the CIA. It was satire, but chilling. As scientist and psychologist Geoffrey Hinton pointed out, AI isn’t intrinsically evil, but in the wrong hands it could make malice go a long way. He warns of serious societal problems if AI is used without restrictions. Early versions of ChatGPT weren’t released because they were too easy to use maliciously. Google only reluctantly released Bard when Microsoft forced its hand. Nvidia is concerned enough to develop NeMo Guardrails, an open-source system designed to stop large language model AI from going too far. The implications of AI tools in the hands of organizations with little regard for individual freedoms are grim. AI does not know what is true, but is good at justifying errors. AI does not know what is harmful, but has the ability to change the world. Right now, we are in the midst of a huge experiment to see what it is capable of. It will be disruptive, and some of that disruption will be destructive.

