The deadly soul of a new machine

The Hazleton Standard-Speaker - OPINION - TIMOTHY EGAN

Try to imagine the last 11 minutes of Lion Air Flight 610 in October. The plane is a new machine, Boeing’s sleek and intelligent 737 Max 8, fitted with an advanced electronic brain. After takeoff, this cyberpilot senses that something is wrong with the angle of ascent and starts to force the jetliner down.

A tug of war follows between men and computer, at 450 miles an hour — the human pilots trying to right the downward plunge, the automatic pilot taking it back from them. The bot wins. The jetliner crashes into the Java Sea. All 189 onboard are killed.

And here’s the most agonizing part: The killer was supposed to save lives. It was a smart computer designed to protect a gravity-defying machine from error. It lacks judgment and intuition, precisely because those human traits can sometimes be fatal in guiding an aerodynamic tube through the sky.

We still don’t know the exact reason the pilots of that fatal flight couldn’t disable the smart system and return to manual control. It looks as if the sensors were off, instigating the downward spiral. A report by the Federal Aviation Administration in 2013 found that 60 percent of accidents over a decade were linked to confusion between pilots and automated systems.

But it’s not too much of a reach to see Flight 610 as representative of the hinge in history we’ve arrived at — with the bots, the artificial intelligence and the social media algorithms now shaping the fate of humanity at a startling pace.

Like the correction system in the 737, these inventions are designed to make life easier and safer — or at least more profitable for the owners. And they do, for the most part. The overall idea is to outsource certain human functions, the drudgery and things prone to faulty judgment, while retaining master control. The question is: At what point do we lose control and the creations take over? How about now?

It was exactly 200 years ago that Mary Shelley published a story of a monster who is still very much with us. Her book “Frankenstein” is about the consequences of man playing God. You can see permutations of the monster, a not-unsympathetic patchwork of human parts, in characters like Dolores, the host who rebels in the television series “Westworld.”

Shelley’s concerns were raised at the peak of the Industrial Revolution, when the Western world was transformed from sleepy agricultural societies into a frenetic age of factories, machines and overcrowded cities. All the helpful inventions also produced mass dislocation, life-killing pollution, child labor and — as with the invention of the cotton gin in the American South — an expansion of human enslavement.

Today we are close to creating a human brain inside a computer — an entirely new species. In his book “Sapiens,” Yuval Noah Harari takes us through a mostly upbeat tour of humanity since the cognitive revolution of 70,000 years ago. At the end of the book — our time — he warns about the new being, the cyborg now taking shape in a lab near you.

The CEO of Microsoft, Satya Nadella, hit a similar cautionary note at the company’s recent annual shareholder meeting. Big Tech, he said, should be asking “not what computers can do, but what they should do.”

It’s the “can do” part that should scare you. Facebook, once all puppies, baby pictures and high school reunion updates, is a monster of misinformation. And Facebook’s creator is more clueless than Dr. Frankenstein about the dangers of what he has unleashed on the world.

Mark Zuckerberg, Facebook’s CEO, has glibly assured us that building advanced artificial intelligence systems will root out the hate speech, lies and propaganda passed among the 2 billion active users of Facebook. But fake news — whether gossip shared by family members or the toxic kind spread by Russians in basements — is the mother’s milk of Facebook. The AI may only make mass manipulation easier. In that sense, Facebook is headed for its own crash into the sea.

Driverless cars will soon be available for ridesharing in the United States. If they can reduce the carnage on the roads — more than 70 million people killed and 4 billion injured worldwide since the dawn of the auto age — this will be a good thing. Except that this year a bot-car killed a woman in a crosswalk in Arizona, and others have been slower than humans to react. There shouldn’t be any rush — except from the profit drivers at the ride-sharing companies — to hand over the steering wheel to a driver without a heartbeat.

It’s not Luddite to see the be-careful-what-you-wish-for lesson from Mary Shelley’s era to our own, at the cusp of a new technological age. Nor is it Luddite to ask for more screening, more ethical considerations, more projections of what can go wrong, as we surrender judgment, reason and oversight to our soulless creations.

As haunting as those final moments inside the cockpit of Flight 610 were, it’s equally haunting to grasp the full meaning of what happened: The system overrode the humans and killed everyone. Our invention. Our folly.

TIMOTHY EGAN is a columnist with The New York Times.
