Are AI’s killer robots coming for us?
If used responsibly, the opportunities for us to collaborate with machines and AI have the potential to improve the entire human experience, says Pelonomi Moiloa, co-founder and CEO of Lelapa AI
What’s quite exciting about AI is that we really don’t know what the limits of what it can do are
Hours of work lay ahead of me as I set out to work on this article; it took ChatGPT all of 45 seconds to produce 1,000 words when I asked it whether artificial intelligence (AI) will take over the world.
Recently, fellow journalists have been complaining about writers who submit what is ostensibly their own work, only for it to turn out to have been written by the AI chatbot.
We scoff — until someone asks: “Is AI coming for our jobs?”
It’s not that AI is a new concept; we’ve been using it for years.
Opening your smartphone with facial recognition technology, navigating with GPS, browsing the internet, scrolling through social media, online shopping and banking — even something as simple as Netflix and chill.
These are AI-powered capabilities that have dramatically improved the ease with which we carry out daily tasks.
But the release of ChatGPT has ramped things up a couple of notches — and ruffled more than a few feathers. It’s ushered in a new era of AI.
Developed by US-based AI research laboratory OpenAI and released in November last year, ChatGPT has sparked intense debate and dominated headlines around the world.
Earlier this month, a man was arrested in China for using ChatGPT to create and spread fake news.
In March, Italy temporarily banned ChatGPT and, in an ironic twist, the science fiction and fantasy magazine Clarkesworld closed submissions from writers in February after being inundated with stories produced by AI.
In March, an open letter published by the Future of Life Institute — a non-profit organisation with the mission of steering transformative technology towards benefiting life and away from large-scale risks — called for a six-month pause on the training of AI systems more powerful than GPT-4, the latest version of OpenAI’s large language model, which is available through ChatGPT Plus, a subscription service.
The letter called for the pause to allow for the development and implementation of safety protocols to govern advanced AI design and development and drew support from the likes of Elon Musk; Apple co-founder Steve Wozniak; and leading experts in the fields of AI and computer science such as Yoshua Bengio, Stuart Russell and Bart Selman.
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads.
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
“Should we risk loss of control of our civilisation?
“Such decisions must not be delegated to unelected tech leaders.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Shortly after, a second open letter, this time published by the Association for the Advancement of Artificial Intelligence (AAAI), called on the AI research community to expand its efforts on AI safety and reliability, ethics and societal influences.
It was signed by present and past presidents of the AAAI, including Eric Horvitz, Microsoft’s chief scientific officer.
“We believe that AI will be increasingly game-changing in health care, climate, education, engineering and many other fields,” this second letter reads.
“At the same time, we are aware of the limitations and concerns about AI advances, including the potential for AI systems to make errors, to provide biased recommendations, to threaten our privacy, to empower bad actors with new tools and to have an impact on jobs.”
Then there’s Geoffrey Hinton, an AI pioneer referred to as the “godfather of AI”, who quit his job at Google at the beginning of the month after more than a decade to be able to speak out about the risks posed by AI and voice his criticism of the escalating competition between tech giants to improve their AI capabilities.
“I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” he told the New York Times.
In the interview, he raised concerns over the influx of fake photos, videos and text that AI can generate; the potential for AI to upend the job market; and the possibility of the day AI systems start generating and running code on their own to produce what we’ve come to know as killer robots — as depicted in Hollywood movies.
At this stage, the risks — and possibilities — are not yet clear.
“Of course, we have images of sci-fi movies with robots taking over the world,” says Pelonomi Moiloa.
“What’s quite exciting about AI is that we really don’t know what the limits of what it can do are.”
Moiloa is a co-founder and CEO of Lelapa AI, an Africa-centric AI research and product lab. When I ask what her key consideration around AI is, she says people need to understand that AI is not the threat — other people are.
“There is this constant attack on the technology itself which absolves its creators of responsibility,” she says.
But if used responsibly, the opportunities for us to collaborate with machines and AI have the potential to improve the entire human experience.
“One of the reasons why people got so excited about the Black Panther movie is because for the first time they have this perspective of a different kind of sci-fi, technology, human interactive future which I think can bring a lot of hope, especially considering the types of problems we face on the continent and the things we need to solve to ensure that more people are able to live softer lives.
“The point isn’t for AI to take people’s jobs. The point of AI is to help [people who are already solving problems] solve problems better.
“The idea is not that AI will replace doctors; it will be the doctors who use AI [who] replace doctors who don’t use AI to advance their practice.”
But it all comes back to regulation — and it seems the world is collectively coming to the realisation that it’s time to act.
In what was a first in the US, OpenAI CEO Sam Altman testified before the US Senate judiciary committee earlier this month about the dangers posed by AI and the urgent need to create regulations around the emerging technology.
“My worst fear is we cause significant harm to the world,” Altman said.
“If this technology goes wrong, it can go quite wrong. We want to work with the government to prevent that from happening.”
How that will look — and how this might translate into a global framework — isn’t yet clear. But for Moiloa, humans should play a central role in how it plays out if we are to successfully integrate AI into our societies.
“If we really want machines to do well and to create for us in a way that’s helpful, there still needs to be a human in the loop who makes the input decisions and then makes the final decisions and is able to compare across the realities, the culture, the context of what that machine is trying to solve.
“That, at the end of the day, will be the value that humans are providing.
“You can’t really code culture; you need a person to be able to facilitate that.
“I think there will always be space for people to do so.”
And ChatGPT would agree. As it told me when I asked it about taking over the world: “By maintaining human oversight and responsible deployment, AI can be a powerful tool that enhances our lives rather than taking over the world.”
I had to raise an eyebrow at its use of the word “our” but hope that, in this case at least, the chatbot is getting it right.