Locked door no match for ‘burglar bots’ that slip in through the catflap
Criminals could use AI to create small autonomous robots to gain entry into homes, study warns
For many homeowners, the traditional way to protect against burglars has been to ensure doors and windows are locked and perhaps leave a light on.
But such gestures might prove futile in years to come after scientists warned that the next generation of home invaders could be robots that are programmed to gain entry through cat flaps or letter boxes.
Small autonomous robots that use artificial intelligence (AI) are being developed, and these could breach traditional security safeguards.
Delivered through small openings such as cat flaps, they could then scan a person’s home to locate keys, allowing a human burglar to enter.
Alternatively, scientists believe more advanced machines could use AI to search a property themselves for valuables, or cash, using cameras to scan and assess different rooms.
The robots could also be used simply to determine whether anybody is at home, relaying the information to a human operator who could then break in if the coast is clear.
The frightening prospect is just one area in which scientists and police believe AI could be used by criminals to exploit people in the future.
A study, published in Crime Science by researchers at UCL, identified a range of criminal opportunities that technological advances could create.
While the use of so-called “burglar bots” is regarded as a low-harm and low-reward crime, scientists and police are concerned about “deepfake” videos and images that could exploit and blackmail unsuspecting victims.
Using sophisticated AI software, criminals are able to generate convincing impersonations of people, which could be used to persuade victims to part with money or reveal passwords.
Police fear unscrupulous criminal gangs could generate a video of someone from material freely available online and use it to persuade that person’s elderly parents to send them money.
Another sinister application might be to create fake videos of public figures speaking about controversial issues to manipulate support.
The researchers also highlighted the potential risks posed by the roll-out of driverless cars, which they warned could be used by extremists to carry out terror attacks.
Professor Lewis Griffin, from UCL’s computer science department, the senior author of the report, said: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”