The Mercury | BUSINESS REPORT | Bloomberg

this means the delegation of “lethal authority”.

Quite a few such systems already exist, such as the Israeli Harpy drone, designed to loiter over enemy territory and destroy any enemy radar it discovers. The US Navy developed so-called “fire and forget” missiles, which can be fired from a ship towards a remote area, where they automatically seek and destroy an enemy vessel.

Currently, the US Department of Defence is developing a machine vision system based on AI technology, known as Project Maven, which analyses the massive amounts of data and video captured by military drones in order to detect objects, track their motion, and alert human analysts to patterns and abnormal or suspicious activity.

The data is also used to improve the targeting of autonomous drone strikes using facial and object recognition.

Ahead of other countries, the US is experimenting with autonomous boats that track submarines over very long distances, while China is researching the use of “swarm intelligence” to enable teams of drones to hunt and destroy as a single unit. Russia is working on an underwater drone that could deliver powerful nuclear warheads to destroy entire cities.

Machine learning and drone swarms are among the technologies that could change the future of war. Some of these AI technologies are already being used on the battlefield: Russia, for example, maintained that a drone swarm attacked one of its bases in Syria.

However, according to Manuel DeLanda, the advent of intelligent and autonomous bombs, missiles and drones equipped with artificial observation and decision-making capabilities is part of a much larger transfer of cognitive structures from humans to machines.

If we think about it, Microsoft’s Cortana, Google’s Now, Samsung’s Bixby, Apple’s Siri and Netflix’s streaming algorithms all combine user input, access to big data and bespoke algorithms to provide decision support to human users, based on their decision history and the decisions of millions of other users. This amounts to a slow and steady shift of authority from humans to algorithms. Similarly, decision-making in war is being transferred to intelligent machines – unfortunately, not without risk.

On March 29, 2018, five members of the Al Manthari family were travelling in a Toyota Land Cruiser in Yemen to the city of al-Sawma’ah to pick up a local elder. At about 2pm a rocket from a US Predator drone equipped with object and facial recognition hit the vehicle, killing three of its passengers; a fourth died later. The US took responsibility for the strike, claiming the victims were terrorists, yet Yemenis who knew the family claim they were civilians.

Could it be that the Al Manthari family was killed because of incorrect metadata, which is used to select drone targets? Metadata is often harvested from mobile phones – conversations, text messages, email, web-browsing behaviour, location and patterns of behaviour. Attacks are based on specific predetermined criteria, for example a connection to a suspected terrorist phone number or to a suspected terrorist location. But sometimes this intelligence can be wrong.

AI weapons may have many benefits, but they carry significant ethical concerns, in particular that a computer algorithm could both select and eliminate human targets without any human involvement.

However, based on history, I do not doubt that algorithms will run future wars and that “killer robots” will be used. Perhaps the suggestion of Ronald Arkin should be followed, to fit all autonomous weapons with “ethical governors”.

One thing is certain in the war of algorithms: domination will depend on the quality of each side’s AI and algorithms.

Professor Louis Fourie is the deputy vice-chancellor: Knowledge and Information Technology – Cape Peninsula University of Technology.


AN UNMANNED aerial vehicle (UAV) hangs on display above the Textron booth during the Special Operations Forces Industry Conference in Florida. Artificial Intelligence weapons may have many benefits, but they carry significant ethical concerns.
