AI ETHICS: A BIG ISSUE FOR THE NEXT DECADE

Will we ever know enough about artificial intelligence and its long-term effects to be able to make ethics in AI work effectively? Patrick Smith, chief technology officer for EMEA at Pure Storage, discusses.


Artificial intelligence is a game changer. It has the power to cut through vast amounts of data and be a real force for positive change. Already we are witnessing artificial intelligence power initiatives to find cures for diseases, predict crop shortages or improve business productivity. However, whilst there are positive associations, we cannot ignore the concerns.

Biased data, deepfake videos and electoral misuse are just some of the terms that are equally pertinent, if not more so, when discussing artificial intelligence.

Every industrial revolution brings societal concerns with it, and the age of data is no different. In the new decade and beyond, effective use of data and minimising bias in artificial intelligence will become reputational currency for brands.

Just as ethical behaviour improves perceptions of businesses, those paying the most attention to ethical data management will be viewed more favourably.

However, the issue is complex and multifaceted, with many areas of ambiguity that could be stopping businesses from acting.

The very public backlash against Google, the fear of getting it wrong, and underlying issues and unfairness in society could make many hesitant to act.

That said, much like climate change, if left unchecked we risk only realising the impact of unethical and biased artificial intelligence once it has reached a tipping point beyond possible intervention. Therefore, immediate action must be taken not just by companies, but by governments, regulators and individuals. Some immediate actions include:

• More mainstream media attention given to the issue, creating greater transparency for the public
• Consideration for, and implications of, artificial intelligence issues to be introduced into the education system
• Greater public education and awareness of the role individuals can play in safeguarding their own data
• Creation of an environment where companies are not afraid to try something, and course correct if needed
• Ethics panels that truly reflect a diverse society

Will we ever know enough about artificial intelligence and its long-term effects to be able to make ethics in artificial intelligence work effectively? The issue is so complex and ever-changing that it would be irresponsible to claim we will ever have a definitive answer.

Regardless, organisations and governments need to act now to do what they can to ensure the right measures are put in place.

Will it be easy? No, but this isn't just a policy change; it extends far beyond that. These conversations need to happen now, before the nightmare scenario comes true and we lose control of artificial intelligence.

Patrick Smith, chief technology officer EMEA, Pure Storage.
