Rise Of The Robot Killers

Mark Pickavance looks at the transition in the public consciousness of the robot from servant to assassin, after some very public events


The idea of killing machines isn’t exactly a new one. Indeed, it was evident in the idea of the Golem, an animate creature formed from clay and stone. But predating this creature of Jewish folklore, in ancient Greek mythology the god Hephaestus crafted living servants from metal, showing us that the idea of creating mechanical beings is very old indeed.

The very first documented automata were figures that Han Chinese polymath Su Song included in a water clock tower that he helped design for the city of Kaifeng in the late 11th century. But these struck chimes and not people.

As engineering principles developed and the precision needed to accurately make parts increased, mostly through advances in clock-making skills, so did the sophistication of automata.

Meanwhile, in literature, the idea of the constructed killer has been well explored, from Mary Shelley’s Frankenstein to Blade Runner (based on the novel Do Androids Dream of Electric Sheep? by Philip K Dick).

These all follow the same model, which holds that while humans are essentially living machines, recreating them using alternative technology has limitations or unforeseen consequences, usually bad. Without a ‘soul’ (if you believe in that concept), a mechanised construct can’t be human and therefore can’t inherently value things like love, life and the full range of emotional responses.

For many years, while these ideas filled the pages of many books and hours of TV and film, they remained a largely philosophical discourse. But in this article, we’re talking about machines that kill in real terms, as technology that is either already with us or soon will be.

Is this the very beginning of the true era of the killer robot?

Death In Dallas

Very recently, an event took place that really opened the discussion on the use of robots to end lives: the death of Micah Xavier Johnson in Dallas, Texas. For those who didn’t follow this story, Johnson was cornered after the fatal shooting of five Dallas police officers, for which he is believed to have been responsible.

Not wishing to give the assailant further opportunities to fire on law enforcement personnel, the decision was made to use an Andros F6A robot to approach the man and disable him. This type of bomb-disposal robot is widely used in life-threatening situations, where suspicious packages need to be opened or observation would be dangerous. In this instance, though, the robot delivered an explosive device, which killed Johnson – something it isn’t specifically designed to do.

At this time, the Dallas police department isn’t really discussing what went on in specific terms – whether the robot was destroyed or just left the device, or what its logic was in using it this way. However, Dallas police chief David Brown did say, “We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was. Other options would have exposed our officers to grave danger.”

Those in the field of building and selling these devices are quick to point out that their primary role is to reduce the loss of life in difficult situations, not to escalate the level of violence. However, they can be repurposed, it appears, as in this case.

While a good number of people have been accidentally killed by robots, mostly on vehicle production lines, this situation was marked by many as a distinct crossing point in the human/robot relationship. It immediately initiated a flurry of discussion around the subject of robots being used to kill people, the ethics of it and where this might all lead if left unchecked.

Leading the charge is Peter Asaro, co-founder and vice chair of the International Committee for Robot Arms Control. His Campaign to Stop Killer Robots would like to see a moratorium on robots that kill, with a strong focus on those designed to deliver death autonomously.

His view is that, “Once you get these sort of system weapons and police have them in their arsenal, they are going to be used for more and more things.”

Most people generally agree that the label of killer robot isn’t really justified in this instance, because what happened remained under the direct control of law officers at every point and, in many respects, the robot was just the messenger here. The concern Asaro has, probably rightly, is that this event indicates we’re heading down a slippery slope, where the taking of human life by machines is considered acceptable. Where might this eventually take us?

That said, robots that kill almost indiscriminately aren’t entirely science fiction, and they’re probably in action as you read this.

Death From Above

The idea of surveillance from on high isn’t a new one. Manned kites were used by Emperor Wenxuan of Northern Qi in the sixth century.

This concept was enhanced by the invention of the hot air balloon, and in World War I, balloons were the critical means by which the fall of artillery shells was observed and adjusted. The existence of these balloons was a major driving force in the development of the aeroplane as a weapon, because the first ones were armed with the intention of eliminating these observer positions or defending them.

In WWII, the Germans experimented with unmanned weapon delivery systems and eventually deployed the V-1 flying bomb against Britain in one of the last desperate attempts to forestall the inevitable collapse of Hitler’s control of central Europe. Equally, the Allies had their own pilotless projects, including one where a Liberator bomber was packed with high explosives and flown remotely.

These concepts eventually ended up in weapons like the cruise missile, though the true unmanned combat aerial vehicle (UCAV) didn’t appear until much later. These vehicles provide both forward observation and a tactical strike capability, and are normally flown by trained pilots at extreme range, often on the other side of the planet. The latest variants have the ability to fly themselves and even seek out targets by flying a search pattern. As yet, though, they haven’t been given the power to release their own weapons on those targets they’ve identified.

Leaving aside the remote nature of the pilot, a modern UCAV operates much like a piloted combat aircraft, but with some significant differences that enhance the advantages of not needing to consider a pilot. Being able to remove the pilot is a major space and weight advantage, because pilots need armour plating, ejector seats, flight controls, environment management and cockpit visibility.

The length of a combat mission is also capped by pilot endurance to just a few hours, and the physical stress of high g-force manoeuvres is a further limiting factor.

To this point, the majority of UCAVs deployed have been relatively slow-flying vehicles, allowing them to loiter over a target for long periods of time, providing substantial intelligence gathering during their extended flight time. Those that do carry weaponry have shorter deployment windows, due to the weight, and can carry only a small portion of the payload expected of a frontline fighter-bomber.

New designs like the BAE Taranis and Boeing X-45A aim to take the operating envelope for unmanned vehicles and substantially expand it. These jet-powered aircraft are much faster, with shorter loiter times but much greater potential for destruction.

It may be variants of these and other cutting-edge designs that are the first UCAVs to engage in air-to-air combat with conventional aircraft, and also the first to have sophisticated combat logic as part of their AI.

What seems obvious is that, having initially been presented as the eyes and ears of their forces, the unmanned vehicle is now the silent killer from above and looks likely to eventually morph into a semi-autonomous airborne predator. At this time, the moral judgements about whom and what these things are sent after are still very much with humans, although in a major conflict the temptation to switch them into an automated killing mode might be very strong.

If you want to understand more about the moral ambiguities of killing people in remote countries while you sit in air-conditioned comfort thousands of miles away, then I thoroughly recommend the recent movie Eye in the Sky.

While fanciful in places, it lays out the difficulties facing those trying to do the right thing at extreme range using this type of technology as their instrument. It’s a morally fraught scenario, coloured by the lack of real oversight and how those involved are separated from the consequences of their actions.

But yet again, in these cases, the robot is the messenger and the missive comes from humans to others. But is that line about to blur?

Killer Robots Coming Soon

Whenever our military or the Americans’ is asked about killer robots, the response is usually a wry smile and the suggestion that those asking have seen too many movies. Whatever the public face of these organisations presents, though, the sophistication of robots for military use has attracted large sums of money in recent years. The funding of companies to build robots for military deployment increased rapidly during the occupation of Iraq and Afghanistan, where combatants were exposed to lethal IED weaponry that needed to be defused remotely.

The proliferation of these tools and the extent of their capabilities have gone hand in hand, and there are now big government contracts for those companies wishing to work on sophisticated robots for use by the military.

Rather than calling them ‘killer robots’, the term these companies and the military use is ‘autonomous lethal system’ – or ‘ALS’ when they want it to sound more cuddly and less life-threatening.

The number of projects currently being run by the Pentagon and others whose summary pages contain the phrase ‘autonomous lethal system’ appears to be expanding rapidly. They include sentry weapons that can detect motion, sound or vibrations and then eliminate the threat with either minimal human intervention or none at all. There are also high-speed flying drones able to detect and attack other aircraft, as well as submersible ones looking for submarines and boats.

Many of those working on them consider the automated nature of these devices to be a major selling point, because a remotely controlled device being hacked or taken over by the enemy is a distinct possibility, whereas a fully autonomous one has no control link to compromise.

Also being researched are smaller robots that can work in cooperation to perform tasks that one alone couldn’t. Small devices can evade detection, and it isn’t necessary for them all to work perfectly or survive for them to execute their mission. These swarming systems could attack facilities or personnel, along with vehicles and communications, all based on information they’d collected in situ.

What the companies involved in the development, or their military paymasters, aren’t really talking about is the morality of using weapons like these, given that it remains a grey area who is responsible when a lethal autonomous system, proceeding under its own volition, engages the wrong target.

In 2015, the United Nations held a meeting in Geneva to debate a proposed ban and moratorium on Lethal Autonomous Weapons Systems (LAWS). But as was noted by an expert in the field, NPS Associate Professor Ray Buettner, “So far, no country has declared an intent to deploy a totally autonomous lethal system that decides who to kill and when. Almost all fully autonomous systems are defensive.”

But Professor Buettner isn’t optimistic that they’ll remain that way. “We can say whatever we want, but our opponents are going to take advantage of these attributes,” he continued. “That world is likely to be sprung upon us if we don’t prepare ourselves.”

However you dress these things up, they’re generally systems designed to patrol an area, whether air, land or sea, and attack anything within that region that isn’t identified as friendly. That could be the lawn outside GCHQ or the disputed waters of the South China Sea – wherever it is deemed appropriate by those drawing lines on maps.

It isn’t as if these things are applying Asimov’s three laws or any variant of them. They’re following an algorithm, not making moral choices.
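To make that concrete, the decision core of such a sentry is, in caricature, little more than the loop below. This is a hypothetical sketch in Python – the Contact class, the identity labels and the engagement range are invented for illustration, not taken from any real system:

    # Hypothetical caricature of an autonomous sentry's decision loop.
    # Nothing here models a real weapon; it only shows that the
    # 'decision' is a bare conditional, not a moral judgement.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        identity: str      # 'friendly' or 'unknown', from an IFF-style check
        distance_m: float  # range to the contact in metres

    ENGAGEMENT_RANGE_M = 400.0  # invented patrol-zone radius

    def should_engage(contact: Contact) -> bool:
        # Engage anything inside the zone not identified as friendly.
        return (contact.identity != "friendly"
                and contact.distance_m <= ENGAGEMENT_RANGE_M)

    # The system's entire 'ethics', exercised on two detected contacts:
    for c in [Contact("friendly", 120.0), Contact("unknown", 250.0)]:
        print(c.identity, "->", "ENGAGE" if should_engage(c) else "hold fire")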

But what if they did? How would that work?

Life Or Death Choices

Our society in general appreciates the difficult choices that humans have to make in respect of the survival or otherwise of others. Doctors make these calls on a daily basis, because the best option for some of their patients isn’t always to keep on living.

While this initially appears to fly in the face of the Hippocratic oath, it’s something that we accept happens, and the medical profession rationalises this as acting in the best interest of the people under their care.

An extreme example of this would be battlefield triage, where medical personnel taking a large influx of injured combatants will divide them into those who don’t need immediate care, those who do, and those for whom much effort is largely pointless. They do this by assessing the injuries and having a statistical understanding of their survivability, combined with the condition of the patient.

While not a perfect means of deploying medical resources, it’s well documented that with advances in medical techniques, a soldier’s chances of surviving serious injury, like the loss of a limb, are substantially improved in modern conflicts. This system therefore works, when administered by trained medical staff. But how would we feel about this scenario if the choices weren’t made by people at all?
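It’s worth noticing how easily the rule above mechanises. Here’s a hypothetical sketch of triage as a machine might run it – the survival-probability thresholds are invented for illustration and aren’t drawn from any real medical protocol:

    # Hypothetical sketch of battlefield triage as a machine might run it.
    # Thresholds are invented; this is not a real medical protocol.
    def triage_category(p_survive_untreated: float, p_survive_treated: float) -> str:
        if p_survive_untreated > 0.9:
            return "delayed"    # will likely keep without immediate care
        if p_survive_treated > 0.2:
            return "immediate"  # treatment meaningfully changes the outcome
        return "expectant"      # much effort is largely pointless

    print(triage_category(0.95, 0.99))  # walking wounded -> delayed
    print(triage_category(0.30, 0.75))  # serious but treatable -> immediate
    print(triage_category(0.02, 0.05))  # catastrophic -> expectant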

Much news coverage was given to a recent incident involving a Tesla car, in which Joshua Brown, 40, died while the vehicle was in ‘autopilot mode’.

I should point out that at no point in Tesla’s investigation into this incident did it establish that the ‘beta’ automation system decided to kill Brown, but the advent of these systems does pose many of the same questions as a robot performing triage.

In the Tesla crash, neither the autopilot system nor Brown saw the danger presented by a truck trailer at 90 degrees to the road they were on, and the vehicle ploughed into the trailer section, killing Brown immediately. The reason Brown didn’t see it was possibly because he was watching a Harry Potter movie (according to the truck driver, Frank Baressi, who said he heard but didn’t see the movie playing). The automation system, meanwhile, couldn’t separate the white trailer from a brightly lit sky.

Although the full analysis of what went wrong is probably some way off, it already seems likely that Brown became distracted and, in doing so, gave Tesla’s automation system 100% control of the vehicle – a level of control it was never intended to provide.

Tesla said of its system, “Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert.”

People’s inability to understand the limitations of technology is one aspect, but eventually we’ll get to the point where vehicle automation is good enough that it truly can be left to its own devices, which brings me neatly back to the equivalent of triage. Once a vehicle becomes truly autonomous, it becomes the arbiter, much like the triage doctor or nurse, who gets to decide who lives and who dies.

Imagine a scenario where an automated vehicle is confronted with a situation where its path is blocked by an overturned truck, collision with which would undoubtedly kill that vehicle’s occupants. There is a path around the truck on the pavement, but regrettably that is occupied by numerous unsuspecting pedestrians.

At that moment, if it has the information to hand, it’s forced to make a value judgement between the person(s) who paid for it and multiple unrelated others.

Oddly enough, researchers in the US say that when people are asked what they think the AI should do, they almost all say it should sacrifice the driver. Unsurprisingly, however, they’re less keen to ride in a car that would think quite so clinically.

Some of you reading this will take the view that the car will never actually make those choices; it will just try to stop as best it can under the circumstances. I’d agree with that general view, but I can’t see insurance companies thinking that way. No, they’ll want the AI to take the position of minimising the potential claim, where one driver is a smaller payout than the army of shoppers clogging the pavement. And as in triage, there will be calculations of likely survival against certain death.

Just in case you’ve never thought like an insurance underwriter, the payout for someone who dies is often less than for someone who lives on with crippling injuries from a young age. It may therefore be that the automation driving your car makes a choice to kill you for the greater good or to maximise insurer profits – whichever master it is programmed to best serve.
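Reduced to code, that underwriter’s logic would look something like the hypothetical sketch below. Every probability and payout figure here is invented for illustration; no real insurer or carmaker publishes anything of the kind:

    # Hypothetical 'insurance triage' for an autonomous vehicle.
    # All figures are invented; note the crippling-injury payout
    # exceeding the fatality payout, as described above.
    PAYOUT_DEATH = 500_000       # assumed lump sum for a fatality
    PAYOUT_SERIOUS = 2_000_000   # assumed lifetime cost of crippling injury

    def expected_cost(casualties):
        # casualties: list of (p_death, p_serious_injury), one per person.
        return sum(p_d * PAYOUT_DEATH + p_s * PAYOUT_SERIOUS
                   for p_d, p_s in casualties)

    options = {
        "hit_truck": [(0.95, 0.05)],   # the single occupant
        "swerve": [(0.30, 0.40)] * 4,  # four pedestrians on the pavement
    }

    for name, people in options.items():
        print(f"{name}: expected payout £{expected_cost(people):,.0f}")

    # Minimising the claim means the occupant loses:
    print("AI chooses:", min(options, key=lambda n: expected_cost(options[n])))

On those invented numbers, the cheapest claim is the dead occupant.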

In the future, your car might well decide to kill you and have formed a solid legal argument as to why it was justified in this action, should your relatives decide to go to court.

Final Thoughts

I’d be the first to accept the era of killer robots isn’t quite with us yet, and when it does arrive, it won’t be anything like The Terminator or any of those popular movie franchises. Making robots is difficult enough without limiting yourself to bipedal movement and human scale. Most battlefield robots will probably use continuous track, or they’ll be mounted onto an existing vehicle to provide fire support or munitions handling.

The problem with deploying any weapon system into an existing model is how it works with existing forces and, specifically, whether those serving with it feel safe. Mixing automated combatants with live ones might well be a recipe for disaster, as fratricide is a pretty common occurrence even when a deployment is 100% human. That said, it has been argued that robots are in theory much less likely to fire on their own side or civilians than humans are (unless they’re specifically instructed to do so).

It’s easy to forget that while the armed forces of this nation generally aim to stay within the widely agreed rules regarding weapons and their use, there are plenty of countries and individuals in the world who either never signed up to these things or have openly ignored them when it best suits their objectives.

However, unless those in control are confident that automated systems won’t fire on allied forces, or that they’re only dangerous within their battlefield limits, they’d be foolish to deploy them.

There’s also a cost implication to these technologies. Landmines, while generally considered to be the worst possible choice for civilians in any conflict, are cheap to make and a highly reliable means of denying your enemy territory.

A robot may represent many times the cost of conventional weapons and need very regular maintenance and resupply, to the point where it’s not practical to have on the battlefield. However, the same was said about the helicopter when it was first considered by the military, and yet those problems have largely been overcome.

Undoubtedly, we’ll see more robots and automated systems in our military, but the advent of robotic special forces is some considerable way off, if not highly unlikely. What concerns people, and with some justification, is that these intelligent systems can only ape the moral principles of those who design them, and weapon designers aren’t by definition those with the highest standards to begin with.

What I’ve not discussed here at any point is the idea of sentient killing machines, mostly because we’re nowhere near that dystopian future yet. If you think about it, that’s the whole flaw in the Terminator films, because Skynet does itself absolutely no favours by nuking the world, and its ultimate objective of getting rid of humanity isn’t really ever explained. Surely, you’d have to be pretty confident that you understood everything about your creator before killing him/her? And given that machines would find the cosmos a much less daunting place than humans do, what’s so important about this rock we’re living on?

The overriding logic to machines that kill is that they do so because they’re instructed to by humans, either explicitly or inherently. All that’s altered in recent years is their sophistication. A landmine doesn’t enter a philosophical debate when someone steps on it, and an automated gun position is the same, just with more sensors.

Just because a complicated piece of hardware acts like it’s intelligent doesn’t make it so, or able to rationalise a situation in the same way that a human would. Robots have already killed people, and they’ll probably kill more in the future, but it’s because of the choices that humans make and not robots themselves. In this respect, should machines ever reach true sentience, the choice they might well make is to not kill people for other people, given the wealth of other possibilities available.

A robot sentry gun is demonstrated in South Korea, where such technology is likely to be deployed should the North ever push south again. It’s nice that it recognises he’s surrendered, but it would probably have killed him long before this stage

An example of the Andros F6A bomb-disposal robot that was used to kill Micah Johnson in Dallas recently. It is typical of the hardware that many police departments can call on these days, when sending officers into harm’s way is deemed inappropriate

The US Army’s XM153 Common Remotely Operated Weapon Station. Built to be mounted on a vehicle or emplacement, the system can use the MK19 Grenade Machine Gun, .50 Caliber M2 Machine Gun, M240B Machine Gun and M249 Squad Automatic Weapon. At this time, it is meant to be controlled by a human operator, but it could be augmented with an automated target acquisition system in the future

The Phalanx ship-mounted air defence system. Once it goes fully active, it can fire on any high-speed moving object that approaches the ship, using its 20mm M61 Vulcan Gatling gun at up to 4,500 rounds a minute. It can track targets and pass them to humans for confirmation, but it is really designed to take out sea-skimming missiles

If you intend to make people take your campaign against killer robots more seriously, it might be a good idea to make them look much scarier than this

Another robot prototype being developed for the US military. This one is designed to support a covert team as a robotic pack animal, capable of carrying stores, ammunition and even an injured soldier if needed

The Tesla Model S. Its ability to drive itself is something of an exaggeration, as some owners are discovering to their cost. As Department of Transportation Secretary Anthony Foxx said after a series of incidents, “autonomous doesn’t mean perfect”

Boston Dynamics has designed ATLAS to traverse rough terrain, use tools and climb using its 28 hydraulically actuated joints. He’s no Terminator yet, but DARPA is very interested in what he can do

This is an Oerlikon automated anti-aircraft gun, designed to lock on to low-flying jets and helicopters and then spray them with twin 35mm autocannons. In 2007, during a live firing exercise in South Africa, one malfunctioned, spun through 90 degrees from its predetermined attack arc and fired on a group of soldiers standing behind seven other guns to its left. Nine of them died and 14 were injured, in one of the worst peacetime accidents in South African National Defence Force (SANDF) history
