LAST WORD

Can artificial intelligence be trusted with safety-critical operations?

Oil & Gas Middle East - COVER STORY

There is no doubt that artificial intelligence (AI), data gathering and analytics have had a major impact on the oil and gas sector, from sensorising wells to enabling predictive maintenance.

“As these autonomous and self-learning systems become more and more responsible for making decisions that may ultimately affect the safety of personnel, assets, or the environment, the need to ensure safe use of AI in systems has become a top priority,” Simen Eldevik, principal research scientist at DNV GL, wrote in a paper on the topic.

The reason AI and machine learning (ML) technologies are so useful for the asset-intensive operations of the upstream sector is that they not only automate and optimise processes but also learn from experience and improve over time. That same capacity, however, can be a liability.

“AI and ML algorithms need relevant observations to be able to predict the outcome of future scenarios accurately, and thus, data-driven models alone may not be sufficient to ensure safety as usually we do not have exhaustive and fully relevant data,” Eldevik notes. He is correct: operators rarely, if ever, have the fully exhaustive data that would eliminate every possible risk.

Automation and data analysis will have a growing role in many industries, including “many safety-critical or high-risk engineering systems,” he says. But accidents will inevitably happen.

“As an industry, we do not want to learn only from observation of failures,” Eldevik writes. He notes that it is imperative to combine data-driven models with causal and physics-based knowledge, so that operators can learn from potential hazard scenarios beforehand, rather than learning from them after they have occurred.

DNV GL outlines a few key recommendations, each of which is illustrated with a brief sketch after the quoted passage:

“We need to utilise data for empirical robustness. High-consequence and low-probability scenarios are not well captured by data-driven models alone, as such data are normally scarce. However, the empirical knowledge that we might gain from all the data we collect is substantial. If we can establish which parts of the data-generating process (DGP) are stochastic in nature, and which are deterministic (e.g., governed by known first principles), then stochastic elements can be utilised for other relevant scenarios to increase robustness with respect to empirically observed variations.

We need to utilise causal and physics-based knowledge for extrapolation robustness. If the deterministic part of a DGP is well known, or some physical constraints can be applied, this can be utilised to extrapolate well beyond the limits of existing observational data with more confidence. For high-consequence scenarios, where no, or little, data exist, we may be able to create the necessary data based on our knowledge of causality and physics.

We need to combine data-driven and causal models to enable real-time decisions. For a high-consequence system, a model used to inform risk-based decisions needs to predict potentially catastrophic scenarios prior to these scenarios actually unfolding. However, results from complex computer simulations or empirical experiments are not usually obtainable in real time. Most of these complex models have a significant number of inputs, and, because of the curse of dimensionality, it is not feasible to calculate/simulate all potential situations that a real system might experience prior to its operation. Thus, to enable the use of these complex models in a real-time setting, it may be necessary to use surrogate models (fast approximations of the full model). ML is a useful tool for creating these fast-running surrogate models, based on a finite number of realisations of a complex simulator or empirical tests.

A risk measure should be included when developing data-driven models. For high-risk systems, it is essential that the objective function utilised in the optimisation process incorporates a risk measure. This should penalise erroneous predictions where the consequence of an erroneous prediction is serious, such that the analyst (either human or AI) understands that operation within this region is associated with considerable risk. This risk measure can also be utilised for adaptive exploration of the response of a safety-critical system.

Uncertainty should be assessed with rigour. As uncertainty is essential for assessing risk, methods that include rigorous treatment of uncertainty are preferred.”
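To make these recommendations concrete, a few minimal Python sketches follow. They are illustrative only: the physics relations, noise levels, thresholds and weights in them are invented placeholders, not DNV GL models. This first sketch mirrors the empirical-robustness point, separating observed data into a deterministic part (an assumed, known physics relation) and a stochastic residual whose empirical distribution is then reused in a scenario with little direct data.

import numpy as np

rng = np.random.default_rng(0)

def physics_pressure(flow_rate):
    # Placeholder first-principles relation (not a real well model):
    # pressure drop proportional to the square of flow rate.
    return 0.5 * flow_rate**2

# Observed data = deterministic physics + stochastic measurement noise.
flow = rng.uniform(1.0, 5.0, size=500)
observed = physics_pressure(flow) + rng.normal(0.0, 0.3, size=500)

# Empirically characterise the stochastic element of the DGP.
residuals = observed - physics_pressure(flow)
noise_std = residuals.std()
print(f"estimated noise std: {noise_std:.3f}")

# Reuse the empirical noise model in a different operating scenario.
new_flow = np.linspace(5.0, 8.0, 100)
simulated = physics_pressure(new_flow) + rng.normal(0.0, noise_std, size=100)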
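The second sketch follows the extrapolation-robustness point: where no field data exist above some critical flow rate, training data for the high-consequence region are created from the same assumed physics, with an invented model-form uncertainty margin, rather than waiting to learn from a failure.

import numpy as np

def physics_pressure(flow_rate):
    return 0.5 * flow_rate**2  # same placeholder physics as above

# Region with no observational data (assumed operating envelope).
critical_flow = np.linspace(8.0, 12.0, 200)
margin = 0.15 * physics_pressure(critical_flow)  # assumed model-form uncertainty

# Synthetic, physics-generated training set for a data-driven model.
rng = np.random.default_rng(1)
X_synth = critical_flow.reshape(-1, 1)
y_synth = physics_pressure(critical_flow) + rng.normal(0.0, margin)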
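The third sketch shows the surrogate-model idea: an expensive simulator (faked here with a cheap analytic stand-in) is run offline at a finite set of design points, and a fast Gaussian-process approximation is fitted so that predictions become available at decision speed.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulator(x):
    # Stand-in for a long-running physics simulation.
    return np.sin(3.0 * x) + 0.1 * x**2

# Finite set of offline realisations of the full model.
X_train = np.linspace(0.0, 4.0, 25).reshape(-1, 1)
y_train = expensive_simulator(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                     normalize_y=True)
surrogate.fit(X_train, y_train)

# Real-time query: milliseconds instead of hours of simulation.
mean, std = surrogate.predict(np.array([[2.37]]), return_std=True)
print(f"surrogate prediction: {mean[0]:.3f} +/- {std[0]:.3f}")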
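The fourth sketch adds a risk measure to the objective function: squared errors are weighted by an assumed per-scenario consequence, so a model that is wrong near a safety limit pays a far larger penalty than one that is wrong in a benign region.

import numpy as np

def risk_weighted_loss(y_true, y_pred, consequence):
    # consequence: per-sample weight, large where an erroneous
    # prediction would have serious consequences.
    return np.mean(consequence * (y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 10.0])       # last point sits near a safety limit
y_pred = np.array([1.1, 2.2, 8.5])
consequence = np.array([1.0, 1.0, 50.0])  # assumed risk weights

print(f"risk-weighted loss: {risk_weighted_loss(y_true, y_pred, consequence):.2f}")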
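Finally, the uncertainty point: a Gaussian process reports a predictive standard deviation alongside each prediction, which can gate whether a real-time decision is trusted or escalated. The threshold here is an arbitrary illustration, not a recommended value.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0.0, 4.0, 25).reshape(-1, 1)
y = (np.sin(3.0 * X) + 0.1 * X**2).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True).fit(X, y)

x_query = np.linspace(0.0, 6.0, 7).reshape(-1, 1)  # beyond 4.0 is extrapolation
mean, std = gp.predict(x_query, return_std=True)

TRUST_THRESHOLD = 0.2  # assumed maximum acceptable predictive std
for xq, m, s in zip(x_query.ravel(), mean, std):
    verdict = "use prediction" if s < TRUST_THRESHOLD else "escalate to full model"
    print(f"x={xq:.1f}: {m:+.2f} +/- {s:.2f} -> {verdict}")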
