Daily Dispatch

Are AI’s killer robots coming for us?

If used responsibly, the opportunities for us to collaborate with machines and AI have the potential to improve the entire human experience, says Pelonomi Moiloa, co-founder and CEO of Lelapa AI

- SANET OBERHOLZER, TimesLIVE

What’s quite exciting about AI is that we really don’t know what the limits of what it can do are

Hours of work lay ahead of me as I set out to write this article; it took ChatGPT all of 45 seconds to produce 1,000 words when I asked it whether artificial intelligence (AI) will take over the world.

Recently, fellow journalists have been complaining about writers who submit what is ostensibly their own work, only for it to turn out to have been written by the AI chatbot.

We scoff — until someone asks: “Is AI coming for our jobs?”

It’s not that AI is a new concept; we’ve been using it for years.

Opening your smartphone with facial recognition technology, navigating with GPS, browsing the internet, scrolling through social media, online shopping and banking — even something as simple as Netflix and chill.

These are AI-powered capabilities which have dramatically improved the ease with which we carry out daily tasks.

But the release of ChatGPT has ramped things up a couple of notches — and ruffled more than a few feathers. It’s ushered in a new era of AI.

Developed by US-based AI research laboratory OpenAI and released in November last year, ChatGPT has sparked intense debate and dominated headlines around the world.

Earlier this month, a man was arrested in China for using ChatGPT to create and spread fake news.

In March, Italy temporarily banned ChatGPT and, in an ironic twist, the science fiction and fantasy magazine Clarkesworld closed submissions from writers in February after being inundated with stories produced by AI.

In March an open letter published by the Future of Life Institute — a non-profit organisation with the mission of steering transformative technology towards benefiting life and away from large-scale risks — called for a six-month pause on the training of AI systems more powerful than GPT-4, the latest version of OpenAI’s large language model, available through the ChatGPT Plus subscription service.

The letter called for the pause to allow for the development and implementation of safety protocols to govern advanced AI design and development, and drew support from the likes of Elon Musk; Apple co-founder Steve Wozniak; and leading experts in the fields of AI and computer science such as Yoshua Bengio, Stuart Russell and Bart Selman.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?

“Should we risk loss of control of our civilisati­on?

“Such decisions must not be delegated to unelected tech leaders.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Shortly after, a second open letter, this time published by the Association for the Advancement of Artificial Intelligence (AAAI), called on the AI research community to expand its efforts on AI safety and reliability, ethics and societal influences.

It was signed by present and past presidents of the AAAI, including Eric Horvitz, Microsoft’s chief scientific officer.

“We believe that AI will be increasingly game-changing in health care, climate, education, engineering and many other fields,” this second letter reads.

“At the same time, we are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools and to have an impact on jobs.”

Then there’s Geoffrey Hinton, an AI pioneer referred to as the “godfather of AI”, who quit his job at Google at the beginning of the month after more than a decade to be able to speak out about the risks posed by AI and voice his criticism of the escalating competition between tech giants to improve their AI capabilities.

“I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” he told the New York Times.

In the interview, he raised concerns over the influx of fake photos, videos and text that AI can generate; the potential for AI to upend the job market; and the possibility of the day AI systems start generating and running code on their own to produce what we’ve come to know as killer robots — as depicted in Hollywood movies.

At this stage, the risks — and possibilit­ies — are not yet clear.

“Of course, we have images of sci-fi movies with robots taking over the world,” says Pelonomi Moiloa.

“What’s quite exciting about AI is that we really don’t know what the limits of what it can do are.”

Moiloa is a co-founder and CEO of Lelapa AI, an Africa-centric AI research and product lab. When I ask what her key consideration around AI is, she says people need to understand that AI is not the threat — other people are.

“There is this constant attack on the technology itself which absolves its creators of responsibility,” she says.

But if used responsibly, the opportunities for us to collaborate with machines and AI have the potential to improve the entire human experience.

“One of the reasons why people got so excited about the Black Panther movie is because for the first time they have this perspective of a different kind of sci-fi, technology, human interactive future which I think can bring a lot of hope, especially considering the types of problems we face on the continent and the things we need to solve to ensure that more people are able to live softer lives.

“The point isn’t for AI to take people’s jobs. The point of AI is to help [people who are already solving problems] solve problems better.

“The idea is not that AI will replace doctors; it will be doctors who use AI who replace doctors who don’t use AI to advance their practice.”

But it all comes back to regulation — and it seems the world is collectively coming to the realisation that it’s time to act.

In what was a first in the US, OpenAI CEO Sam Altman testified before the US Senate judiciary committee earlier this month about the dangers posed by AI and the urgent need to create regulations around the emerging technology.

“My worst fear is we cause significant harm to the world,” Altman said.

“If this technology goes wrong, it can go quite wrong. We want to work with the government to prevent that from happening.”

How that will look — and how this might translate into a global framework — isn’t yet clear. But for Moiloa, humans should play a central role in how it plays out if we are to successfully integrate AI into our societies.

“If we really want machines to do well and to create for us in a way that’s helpful, there still needs to be a human in the loop who makes the input decisions and then makes the final decisions and is able to compare across the realities, the culture, the context of what that machine is trying to solve.

“That, at the end of the day, will be the value that humans are providing.

“You can’t really code culture; you need a person to be able to facilitate that.

“I think there will always be space for people to do so.”

And ChatGPT would agree. As it told me when I asked it about taking over the world: “By maintaining human oversight and responsible deployment, AI can be a powerful tool that enhances our lives rather than taking over the world.”

I had to raise an eyebrow at its use of the word “our” but hope that, in this case at least, the chatbot is getting it right.

Picture: 123RF/RUFOUS. A TRANSFORMATION: The introduction of chatbots like ChatGPT has changed the face of AI.
