SOME of the world’s biggest robot companies have joined together to ask the United Nations to “protect us all” from the dangers of robotic machines being used to fight wars.
The list of 116 tech bosses includes Elon Musk, the head of Tesla and SpaceX, and Mustafa Suleyman, a co-founder of Google’s artificial intelligence (AI) company, DeepMind.
They are worried that advances in robotics and AI will lead to governments and terrorists using lethal autonomous weapons systems (“killer robots”) against civilians. These are machines that do not need a human controller or pilot, and can take the decision on their own to target and kill.
“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter says.
The sophistication of robotics and AI has increased dramatically in recent years, so many people believe that killer robots will soon be seen in real life and not just in sci-fi movies.
One of the main objections to robots being used in battles is that they do not have human judgement. It is also not clear who would be responsible if one killed innocent people. Would it be the person who programmed it? The company that made it? The general who sent it into battle?
Many critics think that countries will use this confusion to brush off responsibility during wartime, and that this will put more people’s lives in danger.
Hackers are also a big worry, as someone could do a lot of damage by hacking into a robotic plane or tank that is armed with missiles and machine guns.
The US Navy has been testing the X-47B unmanned combat drone.