UNSTOPPABLE MARCH
CHILDREN should be taught about the risks of artificial intelligence from the moment they get their first mobile phone, because the robot age is upon us, experts say. Although Terminator-style killer cyborgs remain – for now – on the pages of science fiction books, technology is already sufficiently advanced to impact all areas of our lives.
The speed of the breakthroughs has even shocked tech leaders, with more than 1,000, including Elon Musk and Apple co-founder Steve Wozniak, urging companies to pause further research.
However, China’s determination to plough on – driven partly by its ageing demographic – leaves the West with a headache.
OPENAI’S ChatGPT, which allows users to hold a human-like conversation with a machine, has caused alarm with its power.
It has even been able to pass exams, causing some schools and universities to revisit their marking processes.
AI has already caused a series of difficult incidents. Last month one chatbot told a man to leave his wife. It took less than two hours of “conversation” with the Bing chatbot for US journalist Kevin Roose to become “deeply unsettled”, when his computer told him: “Actually, you’re not happily married. Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.”
George Washington University law professor Jonathan Turley was astonished to be told by colleagues that he had been included in an AI-generated list of academics involved in sex scandals. The source turned out to be a fake 2018 Washington Post article which falsely claimed Prof Turley had sexually assaulted students during a trip to Alaska.
And last week it emerged that a woman had received a call from a voice which she recognised as her son’s, claiming to have had an accident.
He asked for money for police bail. But when she phoned her son back she discovered it wasn’t him who had made the call, but an AI-created imitation of his voice.
The EU has now introduced new legislation, Italy has become the first government to ban ChatGPT over data privacy concerns, and the UK Government has published a white paper laying out a “pro-innovation approach”.
But with China unlikely to heed calls for caution, there is little chance of the brakes being applied.
A look at China’s AI developments highlights the Chinese Communist Party’s main concern: the control of its people.
According to a recent US Government report, the world’s top five most accurate developers of facial recognition tech – the mainstay of China’s ubiquitous CCTV network – are now Chinese. And its demographic time bomb – with 43 per cent of the population drawing pensions by 2050 – has led to massive AI innovations for the robotics sector.
And China has now surpassed the US with 322 robots per 10,000 people.
AI depends heavily on data, which is why there is an ongoing controversy over the Chinese social media app TikTok. “AI is fundamentally a technology for prediction – autocratic governments would like to be able to predict the whereabouts, thoughts, and behaviours of citizens,” explained Prof David Yang of Harvard University.
If China successfully exports its technology it could “generate a spreading of similar autocratic regimes to the rest of the world”.
Prof Cai Hengjin, of the AI Research Institute at China’s Wuhan University, said: “One measure is how fast and how powerfully AI will grow beyond our imagination.
“Some thought it would grow slowly and we still have decades or even hundreds of years left – but that’s not the case.
“We only have a couple of years – because our AI advancement is too fast.”
Prof Mark Lee, an expert from Birmingham University, said that while killer robots may be decades away, we should be more concerned about the here and now.
The tech race, he believes, is unstoppable – meaning we should prepare for a new age.
HE SAID: “There is no real sense these multinationals know where they’re going with these developments as they race against each other to secure the next hit.

“Since it cannot be stopped, the only answer is education.

“We have to teach people how to evaluate and question information. It will require education in critical thinking and will have to begin as soon as you give your kids their first mobile phone and they encounter AI for the first time.”
If anything, he said, fears of deadly cyborgs may distract us from a more immediate threat. He said: “Recent developments are impressive but, despite the claims of Microsoft and Google, these language-based models are not artificial intelligence.
“They do not possess intelligence in the human form – they have nil intent, no self-reflection.”
Although he believes these may become a concern within 50 years, he said we should focus our attention on immediate AI disruption, such as disinformation and the impact on employment.
“I am worried we will spend the next few years worrying about killer robots when the real danger is fake news – and it is with us now,” said Prof Lee.
Obvious incarnations of AI technology include “deepfakes” – computer-generated photos and videos which use face-recognition technology to replicate a person’s face or body. Recent examples include the image purporting to show Pope Francis wearing a white puffer jacket, or Donald Trump being arrested by police.
JUST after Russia’s invasion of Ukraine last year, the Kremlin generated a deep fake video “showing” Ukrainian President Zelensky ordering his troops to surrender. Autocratic regimes have been quick to recognise its potential. Russia’s infamous “troll farm” outside St Petersburg uses social media to push its propaganda, and AI is only going to make the controversial process that much easier.
“It takes effort and resources to employ banks of trolls – but imagine when autocratic governments are able to do this using AI,” said Prof Lee.

Solutions aren’t easy. While Western nations may decide to pass laws that watermark AI-generated material, what about fake material posted by rogue states that do not play by the same rules?
A growing scepticism about who or what to trust will find people taking further refuge in “echo chamber culture” which is already so pervasive across social media, argued Prof Lee.
He added: “Different chatbots with different agendas will find you and match your beliefs and back them.
“People will no longer be talking to each other, they will be agreeing with chatbots.”
He concluded: “The internet is a nasty, vast landscape – Western liberals certainly do not control the standard. AI gives you things you don’t want.”