Real intelligence needed to handle artificial version
For the most part, the focus of contemporary emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, and extreme weather events.
However, with the increase in the availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for.
Over the past 20 years, my colleagues and I – along with many other researchers – have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency responders and decision-making.
We are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into risk and emergency management.
AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.
As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI-based technologies. These events can occur in all kinds of industries, including transportation (like drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health care and mining.
Intentional AI hazards are potential threats caused by using AI to harm people and property. AI can also be used to gain unlawful benefits by compromising security and safety systems.
In my view, this simple intentional and unintentional classification may not be sufficient in the case of AI. Here, we need to add a new class of emerging threats: the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.
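One way to make this three-part classification concrete is to represent it in code. The sketch below is purely illustrative: the class names, descriptions and example incidents are my own assumptions, not an established taxonomy.

```python
from enum import Enum

class AIHazardClass(Enum):
    """The three hazard classes described above; labels are illustrative."""
    UNINTENTIONAL = "human error or technological failure in an AI system"
    INTENTIONAL = "deliberate use of AI to cause harm or gain unlawful benefit"
    LOSS_OF_CONTROL = "AI overtaking human control and decision-making"

# Hypothetical incidents tagged with a class, purely for illustration.
incidents = {
    "self-driving car misreads a faded stop sign": AIHazardClass.UNINTENTIONAL,
    "AI used to defeat a building's security system": AIHazardClass.INTENTIONAL,
}
for description, hazard_class in incidents.items():
    print(f"{description}: {hazard_class.name.lower()}")
```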
Many AI experts have already warned against such potential threats.
A recent open letter by researchers, scientists and others involved in the development of AI called for a moratorium on its further development.
Public safety and emergency management experts use risk matrices to assess and compare risks. Using this method, hazards are qualitatively or quantitatively assessed based on their frequency and consequence, and their impacts are classified as low, medium or high.
Hazards that are high in frequency, high in consequence or high in both are classified as high risks. These risks need to be reduced through additional risk-reduction and mitigation measures. Failure to take immediate and proper action may result in severe human and property losses.
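To illustrate how such a matrix works, here is a minimal sketch in Python. The three-point rating scales and the classification thresholds are assumptions chosen for demonstration, not values from any official emergency management standard.

```python
# A minimal sketch of a qualitative risk matrix. The 1-3 rating scales
# and the thresholds below are illustrative assumptions only.

def classify_risk(frequency: int, consequence: int) -> str:
    """Classify a hazard from 1-3 ratings of frequency and consequence."""
    if frequency == 3 or consequence == 3:
        # High on either axis (or both) places the hazard in the high-risk cell.
        return "high"
    if frequency * consequence >= 4:
        return "medium"
    return "low"

# A rare (1) but catastrophic (3) AI failure still lands in the high-risk cell.
print(classify_risk(frequency=1, consequence=3))  # -> high
print(classify_risk(frequency=2, consequence=2))  # -> medium
print(classify_risk(frequency=1, consequence=1))  # -> low
```

In practice, agencies often use larger grids (such as 5x5) and qualitative descriptors rather than numbers, but the logic is the same: a severe hazard does not need to be frequent to demand mitigation.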
The time has come to quickly bring potential AI risks into local, national and global risk and emergency management.
AI technologies are becoming more widely used by institutions, organizations and companies in different sectors, and hazards associated with AI are starting to emerge.
In 2018, the accounting firm KPMG developed an "AI Risk and Controls Matrix." It highlights the risks of using AI in business and urges companies to recognize these new emerging risks. The report warned that AI technology is advancing very quickly and that risk-control measures must be in place before the risks overwhelm the systems.
Governments have also started developing risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violations of individual rights.
In Canada, for example, the federal government issued the "Directive on Automated Decision-Making" to ensure that federal institutions minimize the risks associated with AI systems and create appropriate governance mechanisms.
The main objective of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced.
In its 2017 Global Risks Report, the World Economic Forum highlighted that AI was only one of several emerging technologies that could exacerbate global risk. The report concluded that, at that time, super-intelligent AI systems remained a theoretical threat.