The Daily Courier

Research trio advocate more work on AI security


What if someone hacked a traffic sign with a few well-placed dots so your self-driving car did something dangerous, such as going straight when it should have turned right?

Don’t think it’s unlikely. It has already happened, and an Okanagan College professor and his colleagues from France are among those saying that researchers have to invest more effort in system design and security to deal with hacks and security issues.

A research paper, co-authored by Okanagan College computer science Prof. Youry Khmelevsky, and presented recently at an international conference held by the Institute of Electrical and Electronics Engineers (the world’s largest technical professional society), summarizes the research that has already been done into the threats and dangers associated with the machine-learning processes that underpin autonomous systems, such as self-driving cars.

Their paper also points to the need to take research and tool development for “deep learning” to a new level. (Deep learning, or DL, is what makes facial recognition, voice recognition and self-driving cars possible. Deep learning systems mimic neural networks, like those in your brain, that take data and process it based on information processing and communication patterns.)

The paper was authored by Gaetan Hains, Arvid Jakobson (of the Huawei Parallel and Distributed Algorithms Lab at the Huawei Paris Research Centre) and Khmelevsky.

“Safety of DL systems is a serious requirement for real-life systems, and the research community is addressing this need with mathematically sound but low-level methods of high computational complexity,” notes the trio’s paper. They point to significant work still to be done on security, software and verification to ensure that systems relying on deep learning are as safe as possible.

“Deep learning-based artificial intelligence has had immense success in applications like image recognition and is already implemented in consumer products,” notes Jakobson. “But the power of these techniques comes at an important cost compared to classic algorithms: it is harder to understand why they work and harder to verify that they work correctly. Before deploying DL-based AI in safety-critical domains, we need better tools for understanding and exhaustively exploring the behaviour of these systems, and this paper is a work in this direction.”

Do Hains, Jakobson and Khmelevsky have the answer to preventing hacks that could send your car straight when it should turn left? Not yet, but they are developing research proposals that could help ensure that your car, and its systems based on artificial intelligence, don’t get fooled.

“Safe AI is an important research topic attracting more and more attention worldwide,” says Hains. “Dr. Khmelevsky brings software engineering expertise to complement my team’s know-how in software correctness techniques. We expect to produce new knowledge and basic techniques to support this new trend in the industry.”
