The Daily Courier

Real intelligence needed to handle artificial version

By ALI ASGARY

Ali Asgary is a professor of disaster and emergency management in the Faculty of Liberal Arts and Professional Studies at York University.

For the most part, the focus of contemporary emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, and extreme weather events.

However, with the increase in the availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for.

Over the past 20 years, my colleagues and I – along with many other researchers – have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency responders and decision-making.

We are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into risk and emergency management.

AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.

As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI-based technologies. These events can occur in all kinds of industries, including transportation (such as drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health, and mining.

Intentional AI hazards are potential threats caused by using AI to harm people and property. AI can also be used to gain unlawful benefits by compromising security and safety systems.

In my view, this simple intentional-and-unintentional classification may not be sufficient in the case of AI. Here, we need to add a new class of emerging threats – the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.

Many AI experts have already warned against such potential threats.

A recent open letter by researchers, scientists and others involved in the development of AI called for a moratorium on its further development.

Public safety and emergency management experts use risk matrices to assess and compare risks. Using this method, hazards are qualitatively or quantitatively assessed based on their frequency and consequence, and their impacts are classified as low, medium or high.

Hazards with high frequency, high consequence, or both are classified as high risks. These risks need to be reduced through additional risk reduction and mitigation measures. Failure to take immediate and proper action may result in severe human and property losses.
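As a rough illustration of how such a matrix works (the three-point scoring scheme below is my own simplification, not a scheme from any official standard), each hazard can be scored on frequency and consequence and mapped to a risk class:

```python
# Illustrative qualitative risk matrix: hazards are scored 1 (low) to
# 3 (high) on frequency and on consequence; a high score on either
# dimension yields a "high" risk class, matching the rule that hazards
# high in frequency, consequence, or both are treated as high risks.

def classify_risk(frequency: int, consequence: int) -> str:
    """Map 1-3 frequency and consequence scores to a risk class."""
    if frequency == 3 or consequence == 3:
        return "high"
    if frequency + consequence >= 4:
        return "medium"
    return "low"

# Hypothetical hazard scores for demonstration only.
hazards = {
    "flooding": (3, 2),
    "industrial accident": (1, 3),
    "minor power outage": (1, 1),
}

for name, (freq, cons) in hazards.items():
    print(f"{name}: {classify_risk(freq, cons)}")
```

Real risk registers use finer-grained scales and agency-specific thresholds, but the principle is the same: the classification, not the raw scores, drives which hazards receive additional mitigation measures.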

The time has come to quickly bring potential AI risks into local, national and global risk and emergency management.

AI technologies are becoming more widely used by institutions, organizations and companies in different sectors, and hazards associated with AI are starting to emerge.

In 2018, the accounting firm KPMG developed an “AI Risk and Controls Matrix.” It highlights the risks of using AI by businesses and urges them to recognize these new emerging risks. The report warned that AI technology is advancing very quickly and that risk control measures must be in place before the risks overwhelm the systems.

Governments have also started developing some risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violation of individual rights.

At the government level, the Canadian government issued the “Directive on Automated Decision-Making” to ensure that federal institutions minimize the risks associated with AI systems and create appropriate governance mechanisms.

The main objective of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced.

In its 2017 Global Risks Report, the World Economic Forum highlighted that AI was only one of several emerging technologies that could exacerbate global risk. The report concluded that, at that time, superintelligent AI systems remained a theoretical threat.
