How do you teach ethics to autopilot cars?

German think tank ponders the dilemmas of robotised driving

The Star Early Edition - MOTORING - Motoring Staff

CAN a self-driving car be programmed to choose who dies or is injured in an unavoidable crash? Should it be allowed to? Who would be held responsible if such a situation arises? The driver, or the programmer?

These are just some of the highly contentious questions the motoring world faces as cars edge ever closer to fully driverless capability, and until they’re answered, the future of autonomy remains at an impasse.

The German government recently tasked an ethics commission, comprising 14 philosophers, lawyers, theologians, engineers and consumer protection advocates, with drawing up the world’s first set of ethical guidelines for automated driving. One of the commission’s members, Professor Christoph Lütge, offers his take on some of autonomy’s stickier subjects in this interview:

Q: Professor Lütge, let’s imagine a situation where a collision with a person is inevitable. However, the car could hit either a child or an older person. What decision should the self-driving car make here?

A: Self-driving cars should not make decisions based on a person’s characteristics, whether age, physical condition or sex. Human dignity is inviolable, which is why vehicles cannot be programmed along the lines of: “If in doubt, hit the man with the walking frame”.

Q: Even though most drivers would probably make that decision?

A: The decision is not being made by a human being with a moral framework and the capacity to make a choice. Instead, we are looking at how a system can be programmed to deal with future scenarios. Imagine this situation: A car is on a narrow path with a cliff face on the right and a sharp drop to the left. Suddenly, a child appears up ahead and the car cannot brake in time. Should the car drive into the child, or off the road and into the abyss? Programmers cannot make the decision to sacrifice the driver. The only option is to brake as effectively as possible.

Q: But shouldn’t the system be able to calculate the number of victims and base its decisions on that?

A: This was a topic of much debate in the commission, but we came to the conclusion that programming designed to reduce the number of casualties can be justified.

Q: Doesn’t this contradict the ruling made by the German Federal Constitutional Court? The Court ruled that an airplane hijacked by terrorists cannot be shot down, even if it is heading towards a target where a significantly greater number of people would be killed.

A: There is an important ethical difference here: Nobody can decide to bring about the death of an individual. The plane in this scenario contains real people whom we can identify. In the case of automated driving, we are talking about general programming to reduce casualties without knowing who the victims are or classifying them beforehand.

Apart from that, it’s not just a question of numbers. You also have to factor in the severity of the harm. If you are faced with an either/or situation where a car can merely graze several people, then it shouldn’t choose to fatally injure one individual instead.
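To make that principle concrete, here is a minimal sketch in Python of a harm-minimising choice that compares outcomes only by anonymous injury counts and severities. It is purely illustrative: every name and severity weight below is an assumption for this example, not part of the commission’s guidelines or of any real vehicle’s software.

```python
# A purely illustrative sketch, not taken from the commission's guidelines
# or from any real vehicle's software: outcomes are compared only by the
# number and severity of expected injuries, never by personal
# characteristics such as age, physical condition or sex.

from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Assumed injury-severity weights, ordered from least to most harmful."""
    GRAZE = 1
    SERIOUS = 10
    FATAL = 100


@dataclass(frozen=True)
class Outcome:
    """One possible trajectory, described only by anonymous injury counts."""
    label: str
    injuries: tuple[Severity, ...]  # deliberately no age, sex or identity fields

    def total_harm(self) -> int:
        return sum(self.injuries)


def least_harmful(outcomes: list[Outcome]) -> Outcome:
    """Choose the outcome with the lowest aggregate expected harm."""
    return min(outcomes, key=Outcome.total_harm)


# The either/or example from the interview: grazing several people is
# preferable to fatally injuring one individual.
swerve = Outcome("graze several pedestrians", (Severity.GRAZE,) * 3)
straight = Outcome("fatally injure one person", (Severity.FATAL,))
assert least_harmful([swerve, straight]) is swerve
```

The outcome description deliberately carries no age, sex or identity fields, mirroring the commission’s position that human dignity rules out weighing one person against another.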

Q: But what about the thousands of scenarios between these extremes? One manufacturer will choose one outcome while another opts for a different one.

A: I believe there should be a neutral body that manages a catalogue of scenarios with universally accepted standards. This organisation could also test the technologies before manufacturers take them to market.

Q: Is it ethically acceptable at all to shift the responsibilities that we as humans bear over to technology?

A: This responsibility is not being shifted to technology per se, but to the manufacturers and operators of the technology. We want regulations that clearly set out when the driver is in control and when technology is in control – and who is liable.

Furthermore, we don’t want a situation where the system suddenly hands control back to the driver for whatever reason. And because responsibility can shift between the car and the driver, every journey should be documented in a black box. International standards have to be developed for these scenarios.
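As a hypothetical illustration of that black-box idea, the sketch below logs every handover of control as an append-only record. The event fields and names are assumptions for this example, since the actual recording format would be set by the international standards the commission calls for.

```python
# A hypothetical illustration of the black-box idea: an append-only journey
# log recording every transfer of control between driver and system, so that
# liability can be reconstructed afterwards. The event fields and names are
# assumptions; a real recorder would follow agreed international standards.

import json
import time
from enum import Enum


class Controller(str, Enum):
    DRIVER = "driver"
    SYSTEM = "system"


class BlackBox:
    """Append-only log of who was in control, when, and why it changed."""

    def __init__(self, path: str) -> None:
        self.path = path

    def record_handover(self, new_controller: Controller, reason: str) -> None:
        event = {
            "timestamp": time.time(),
            "controller": new_controller.value,
            "reason": reason,
        }
        # One JSON object per line, appended and never rewritten.
        with open(self.path, "a", encoding="utf-8") as log:
            log.write(json.dumps(event) + "\n")


box = BlackBox("journey.log")
box.record_handover(Controller.SYSTEM, "driver engaged automated mode")
box.record_handover(Controller.DRIVER, "driver deactivated automation")
```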

Q: What if I don’t want to hand over responsibility?

A: In the commission, we were told by engineers that driving becomes less safe when humans intervene. However, humans have a basic right not to be obliged to submit to technology. In other words, it must be possible to deactivate automated control.

Q: There are still many cases where the human response is better, anyway.

A: It is only ethically acceptable to allow automated driving if it will cause less damage than a human being behind the wheel. We assume that this will be possible in the near future – to such an extent that it will lead to a significant ethical improvement in driving. Our aim is to contribute to this development through these guidelines.

