Can artificial intelligence be trusted with safety-critical operations?
There is no doubt that artificial intelligence (AI), data gathering and analytics have had a large impact on the oil and gas sector, from sensorising wells to enabling predictive maintenance.
“As these autonomous and self-learning systems become more and more responsible for making decisions that may ultimately affect the safety of personnel, assets, or the environment, the need to ensure safe use of AI in systems has become a top priority,” Simen Eldevik, principal research scientist at DNV GL, wrote in a paper on the topic.
AI and machine-learning technologies are so useful for the asset-intensive operations of the upstream sector because they not only automate and optimise processes but also learn from experience and improve over time. That capacity to learn, however, can also be a liability.
“AI and ML algorithms need relevant observations to be able to predict the outcome of future scenarios accurately, and thus, data-driven models alone may not be sufficient to ensure safety as usually we do not have exhaustive and fully relevant data,” Eldevik notes. He is correct: operators rarely, if ever, have data exhaustive enough to eliminate every possible risk.
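To see why scarce data is a problem, consider a minimal sketch (hypothetical data, not from the DNV GL paper): a flexible model fitted only on observations from normal operating conditions can be confidently wrong the moment conditions drift outside that range.

```python
# Illustrative sketch (hypothetical data, not from the DNV GL paper):
# a purely data-driven model fitted on a narrow operating range
# can extrapolate badly outside it.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)

# Suppose sensors only ever observed normal operation, x in [0, 5]
x_train = rng.uniform(0.0, 5.0, 200)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(200)  # true physics: sin(x)

# A flexible data-driven fit looks accurate inside the training range...
coeffs = P.polyfit(x_train, y_train, deg=9)

# ...but diverges from the true behaviour in unseen regimes (x > 5)
x_test = np.array([2.5, 7.0, 10.0])
print(P.polyval(x_test, coeffs))   # fitted model
print(np.sin(x_test))              # underlying physics
```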
Automation and data analysis will have a growing role in many industries, including “many safety-critical or high-risk engineering systems,” he says. But accidents will inevitably happen.
“As an industry, we do not want to learn only from observation of failures,” Eldevik writes. He notes that it is imperative to combine data-driven models with causal and physics-based knowledge, so that operators can learn from potential hazard scenarios before they occur, rather than after.
DNV GL outlines a few key recommendations:
“We need to utilise data for empirical robustness. High-consequence and low-probability scenarios are not well captured by data-driven models alone, as such data are normally scarce. However, the empirical knowledge that we might gain from all the data we collect is substantial. If we can establish which parts of the data-generating process (DGP) are stochastic in nature, and which are deterministic (e.g., governed by known first principles), then the stochastic elements can be utilised for other relevant scenarios to increase robustness with respect to empirically observed variations.
We need to utilise causal and physics-based knowledge for extrapolation robustness. If the deterministic part of a DGP is well known, or some physical constraints can be applied, this can be utilised to extrapolate well beyond the limits of existing observational data with more confidence. For high-consequence scenarios, where no, or little, data exist, we may be able to create the necessary data based on our knowledge of causality and physics.
We need to combine data-driven and causal models to enable real-time decisions. For a high-consequence system, a model used to inform risk-based decisions needs to predict potentially catastrophic scenarios prior to these scenarios actually unfolding. However, results from complex computer simulations or empirical experiments cannot usually be obtained in real time. Most of these complex models have a significant number of inputs, and, because of the curse of dimensionality, it is not feasible to calculate or simulate all potential situations that a real system might experience prior to its operation. Thus, to enable the use of these complex models in a real-time setting, it may be necessary to use surrogate models (fast approximations of the full model). ML is a useful tool for creating these fast-running surrogate models, based on a finite number of realisations of a complex simulator or empirical tests.
A risk measure should be included when developing data-driven models. For high-risk systems, it is essential that the objective function utilised in the optimisation process incorporates a risk measure. This should penalise erroneous predictions whose consequences are serious, such that the analyst (whether human or AI) understands that operation within this region is associated with considerable risk. This risk measure can also be utilised for adaptive exploration of the response of a safety-critical system.
Uncertainty should be assessed with rigour. As uncertainty is essential for assessing risk, methods that include rigorous treatment of uncertainty are preferred.”
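Eldevik's first two recommendations, separating the deterministic and stochastic parts of a data-generating process and using physics for extrapolation, can be sketched in a few lines. The example below is hypothetical (the physics_model and its constants are assumptions, not taken from the DNV GL paper): the known physics carries predictions beyond the observed data, while the empirical residuals capture the observed scatter.

```python
# Sketch (assumed example, not DNV GL's code): split a data-generating
# process into a known deterministic part and a stochastic residual.
import numpy as np

rng = np.random.default_rng(1)

def physics_model(load):
    """Known first-principles part of the DGP (hypothetical: linear elastic stress)."""
    stiffness = 2.0                      # assumed known constant
    return stiffness * load

# Observed data = deterministic physics + stochastic scatter (e.g., material variability)
load_obs = rng.uniform(0.0, 10.0, 500)
stress_obs = physics_model(load_obs) + rng.normal(0.0, 0.3, 500)

# Empirical model of the stochastic element only: residual mean and spread
residuals = stress_obs - physics_model(load_obs)
mu, sigma = residuals.mean(), residuals.std(ddof=1)

# Extrapolation: the physics carries us beyond the data; the empirical
# residual distribution supplies robustness to observed variation.
load_new = 25.0                          # far outside the 0-10 observed range
prediction = physics_model(load_new) + mu
band = 2.0 * sigma                       # ~95% interval under a normality assumption
print(f"predicted stress: {prediction:.2f} +/- {band:.2f}")
```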
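The surrogate-model recommendation can be illustrated just as simply. In this minimal sketch, expensive_simulator is a hypothetical stand-in for a slow physics simulation, and a Gaussian-process regressor trained on a small number of simulator runs serves as the fast approximation; scikit-learn's GaussianProcessRegressor is used here purely for illustration.

```python
# Sketch (assumed setup): replace an expensive simulator with a fast
# ML surrogate trained on a finite number of simulator runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulator(x):
    """Stand-in for a slow physics simulation (hypothetical response surface)."""
    return np.sin(3.0 * x) * np.exp(-0.3 * x)

# A small, affordable design of experiments: 15 simulator runs
X_doe = np.linspace(0.0, 5.0, 15).reshape(-1, 1)
y_doe = expensive_simulator(X_doe).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
surrogate.fit(X_doe, y_doe)

# Real-time use: millisecond-scale predictions instead of hour-scale simulations
x_live = np.array([[1.37], [2.84]])
print(surrogate.predict(x_live))                 # fast surrogate
print(expensive_simulator(x_live).ravel())       # full model, for comparison
```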
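A risk measure in the objective function might look like the following sketch. The function risk_weighted_loss and its weights are assumptions chosen for illustration: errors are scaled by the consequence of being wrong, and non-conservative errors (under-predicting a dangerous quantity) are penalised extra.

```python
# Sketch (assumed formulation): an objective function with a risk measure
# that penalises non-conservative errors in high-consequence regions.
import numpy as np

def risk_weighted_loss(y_true, y_pred, consequence):
    """Squared error scaled by consequence, with an extra penalty when the
    model under-predicts a dangerous quantity (non-conservative error)."""
    err = y_pred - y_true
    under_prediction = np.where(err < 0.0, 3.0, 1.0)   # assumed penalty factor
    return np.mean(consequence * under_prediction * err ** 2)

y_true = np.array([10.0, 50.0, 90.0])     # e.g., true load on a component
y_pred = np.array([12.0, 45.0, 80.0])     # model predictions
consequence = np.array([1.0, 1.0, 10.0])  # failure near 90 is far more severe

print(risk_weighted_loss(y_true, y_pred, consequence))
```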
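Finally, rigorous treatment of uncertainty can start with something as simple as a bootstrap ensemble, sketched below with hypothetical data: the spread of predictions across resampled fits gives an empirical interval that a risk assessment can weigh alongside the point prediction.

```python
# Sketch (assumed approach): quantify predictive uncertainty with a simple
# bootstrap ensemble, so decisions can account for how sure the model is.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 5.0, 100)
y = 1.5 * x + rng.normal(0.0, 0.5, 100)   # hypothetical noisy measurements

# Fit many models on resampled data; the spread of their predictions
# is an empirical uncertainty estimate.
x_query = 4.0
preds = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))           # bootstrap resample
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds.append(slope * x_query + intercept)

preds = np.asarray(preds)
print(f"prediction: {preds.mean():.2f}, 95% interval: "
      f"[{np.percentile(preds, 2.5):.2f}, {np.percentile(preds, 97.5):.2f}]")
```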