Weaponised AI is coming. Are algorithmic forever wars our future?

The Guardian Australia - Opinion - Ben Tarnoff

Last month marked the 17th anniversary of 9/11. With it came a new milestone: we’ve been in Afghanistan for so long that someone born after the attacks is now old enough to go fight there. They can also serve in the six other places where we’re officially at war, not to mention the 133 countries where special operations forces have conducted missions in just the first half of 2018.

The wars of 9/11 continue, with no end in sight. Now, the Pentagon is investing heavily in technologies that will intensify them. By embracing the latest tools that the tech industry has to offer, the US military is creating a more automated form of warfare – one that will greatly increase its capacity to wage war everywhere forever.

On Friday, the defense department closes the bidding period for one of the biggest technology contracts in its history: the Joint Enterprise Defense Infrastructure (Jedi). Jedi is an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger. The contract is worth as much as $10bn over 10 years, which is why big tech companies are fighting hard to win it. (Not Google, however, where a pressure campaign by workers forced management to drop out of the running.)

At first glance, Jedi might look like just another IT modernization project. Government IT tends to run a fair distance behind Silicon Valley, even in a place as lavishly funded as the Pentagon. With some 3.4 million users and 4 million devices, the defense department’s digital footprint is immense. Moving even a portion of its workloads to a cloud provider such as Amazon will no doubt improve efficiency.

But the real force driving Jedi is the desire to weaponize AI – what the defense department has begun calling “algorithmic warfare”. By pooling the military’s data into a modern cloud platform, and using the machine-learning services that such platforms provide to analyze that data, Jedi will help the Pentagon realize its AI ambitions.

The scale of those ambitions has grown increasingly clear in recent months. In June, the Pentagon established the Joint Artificial Intelligence Center (JAIC), which will oversee the roughly 600 AI projects currently under way across the department at a planned cost of $1.7bn. And in September, the Defense Advanced Research Projects Agency (Darpa), the Pentagon’s storied R&D wing, announced it would be investing up to $2bn over the next five years into AI weapons research.

So far, the reporting on the Pentagon’s AI spending spree has largely focused on the prospect of autonomous weapons – Terminator-style killer robots that mow people down without any input from a human operator. This is indeed a frightening near-future scenario, and a global ban on autonomous weaponry of the kind sought by the Campaign to Stop Killer Robots is absolutely essential.

But AI has already begun rewiring warfare, even if it hasn’t (yet) taken the form of literal Terminators. There are less cinematic but equally scary ways to weaponize AI. You don’t need algorithms pulling the trigger for algorithms to play an extremely dangerous role.

To understand that role, it helps to understand the particular difficulties posed by the forever war. The killing itself isn’t particularly difficult. With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and some 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.

The US military knows how to kill. The harder part is figuring out whom to kill. In a more traditional war, you simply kill the enemy. But who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries?

This is the perennial question of the forever war. It is also a key feature of its design. The vagueness of the enemy is what has enabled the conflict to continue for nearly two decades and to expand to more than 70 countries – a boon to the contractors, bureaucrats and politicians who make their living from US militarism. If war is a racket, in the words of marine legend Smedley Butler, the forever war is one of the longest cons yet.

But the vagueness of the enemy also creates certain challenges. It’s one thing to look at a map of North Vietnam and pick places to bomb. It’s quite another to sift through vast quantities of information from all over the world in order to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labor-intensive. This is where AI – or, more precisely, machine learning – comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.

The Pentagon’s Project Maven is already putting this idea into practice. Maven, also known as the Algorithmic Warfare Cross-Functional Team, made headlines recently for sparking an employee revolt at Google over the company’s involvement. Maven is the military’s “pathfinder” AI project. Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.

“We have analysts looking at full-motion video, staring at screens 6, 7, 8, 9, 10, 11 hours at a time,” says the project director, Lt Gen Jack Shanahan. Maven’s software automates that work, then relays its discoveries to a human. So far, it’s been a big success: the software has been deployed to as many as six combat locations in the Middle East and Africa. The goal is to eventually load the software on to the drones themselves, so they can locate targets in real time.
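For readers who want a sense of what that automation looks like in practice, the sketch below is a rough approximation only: it runs a generic, publicly available object detector over video frames and queues anything it flags for a human reviewer. The model, the labels and the confidence threshold are stand-ins chosen for illustration, not a description of Maven’s actual (classified) software.

```python
# Illustrative sketch only: a generic, pretrained detector run over video frames,
# with detections queued for human review. The model, labels and thresholds are
# stand-ins and bear no relation to Maven's actual software.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO label ids we pretend an analyst cares about (person=1, car=3, truck=8).
LABELS_OF_INTEREST = {1: "person", 3: "car", 8: "truck"}
CONFIDENCE_THRESHOLD = 0.8  # arbitrary cut-off for flagging a detection

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def flag_frames(video_path: str):
    """Yield (frame_index, label, score, box) for detections above the threshold."""
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Convert the BGR uint8 frame into the float RGB tensor the detector expects.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            detections = model([tensor])[0]
        for label, score, box in zip(
            detections["labels"], detections["scores"], detections["boxes"]
        ):
            if score >= CONFIDENCE_THRESHOLD and int(label) in LABELS_OF_INTEREST:
                # In Maven's described workflow, items like this are relayed
                # to a human analyst rather than acted on automatically.
                yield frame_index, LABELS_OF_INTEREST[int(label)], float(score), box.tolist()
        frame_index += 1
    capture.release()
```

Even in this toy version, everything that matters – which objects count as interesting, and how confident the model must be before a human sees them – comes down to a handful of numbers somebody chose.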

Won’t this technology improve precision, thus reducing civilian casualties? This is a common argument made by higher-ups in both the Pentagon and Silicon Valley to defend their collaboration on projects like Maven. Code for America’s Jen Pahlka puts it in terms of “sharp knives” versus “dull knives”: sharper knives can help the military save lives.

In the case of weaponized AI, however, the knives in question aren’t particularly sharp. There is no shortage of horror stories of what happens when human oversight is outsourced to faulty or prejudiced algorithms – algorithms that can’t recognize black faces, or that reinforce racial bias in policing and criminal sentencing. Do we really want the Pentagon using the same technology to help determine who gets a bomb dropped on their head?

But the deeper problem with the humanitarian argument for algorithmic warfare is the assumption that the US military is an essentially benevolent force. Many millions of people around the world would disagree. In 2017 alone, US and allied strikes in Iraq and Syria killed as many as 6,000 civilians. Numbers like these don’t suggest a few honest mistakes here and there, but a systemic indifference to “collateral damage”. Indeed, the US government has repeatedly bombed civilian gatherings such as weddings in the hopes of killing a high-value target.

Further, the line between civilian and combatant is highly porous in the era of the forever war. A report from the Intercept suggests that the US military labels anyone it kills in “targeted” strikes as “enemy killed in action”, even if they weren’t one of the targets. The so-called “signature strikes” conducted by the US military and the CIA play similar tricks with the concept of the combatant. These are drone attacks on individuals whose identities are unknown, but who are suspected of being militants based on displaying certain “signatures” – which can be as vague as being a military-aged male in a particular area.

The problem isn’t the quality of the tools, in other words, but the institution wielding them. And AI will only make that institution more brutal. The forever war demands that the US sees enemies everywhere. AI promises to find those enemies faster – even if all it takes to be considered an enemy is exhibiting a pattern of behavior that a (classified) machine-learning model associates with hostile activity. Call it death by big data.
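Mechanically, there is nothing exotic about such a model. The sketch below is entirely hypothetical – the behavioural features, weights and threshold are invented for illustration and describe no real system – but it shows how a few crude proxies and an arbitrary cut-off are all it takes to turn a person into a “pattern of hostile activity”.

```python
# Hypothetical sketch of "pattern of behaviour" scoring: a logistic model over
# crude behavioural proxies. All features, weights and thresholds are invented
# for illustration; no real system or data is described here.
import math
from dataclasses import dataclass

@dataclass
class BehaviourFeatures:
    # Invented proxy features of the kind critics worry about.
    visits_to_flagged_locations: int
    calls_to_flagged_numbers: int
    travels_at_night: bool
    military_aged_male: bool

# Invented weights; in a real system these would come from a (classified) model.
WEIGHTS = {
    "visits_to_flagged_locations": 0.9,
    "calls_to_flagged_numbers": 1.2,
    "travels_at_night": 0.4,
    "military_aged_male": 0.7,
}
BIAS = -2.0
THRESHOLD = 0.5  # arbitrary cut-off separating "suspect" from "not suspect"

def hostile_activity_score(f: BehaviourFeatures) -> float:
    """Logistic score in [0, 1] built from the invented weights above."""
    z = BIAS
    z += WEIGHTS["visits_to_flagged_locations"] * f.visits_to_flagged_locations
    z += WEIGHTS["calls_to_flagged_numbers"] * f.calls_to_flagged_numbers
    z += WEIGHTS["travels_at_night"] * f.travels_at_night
    z += WEIGHTS["military_aged_male"] * f.military_aged_male
    return 1.0 / (1.0 + math.exp(-z))

# A handful of coarse proxies is enough to cross the threshold.
example = BehaviourFeatures(2, 1, True, True)
print(hostile_activity_score(example) > THRESHOLD)  # True
```

Everything decisive here – which proxies count and where the threshold sits – is a choice baked into the model, invisible to the person being scored.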

AI also has the potential to make the forever war more permanent, by giving some of the country’s largest companies a stake in perpetuating it. Silicon Valley has always had close links to the US military. But algorithmic warfare will bring big tech deeper into the military-industrial complex, and give billionaires like Jeff Bezos a powerful incentive to ensure the forever war lasts forever. Enemies will be found. Money will be made.

