The Jerusalem Post

What artificial intelligence can tell us about morality

By JONATHAN L. MILEVSKY. The writer holds a PhD in Religious Studies from McMaster University.

In the 11th century, a brilliant Islamic thinker by the name of Avicenna devised a thought experiment: a person floating in the air, cut off from all sensory input. Because a person in that predicament can still arrive at knowledge of the self, Avicenna argued, the soul must exist. The ongoing work on artificial intelligence may soon present us with the opportunity to run this experiment in earnest. If so, the coming days will raise some interesting questions for both moral theorists and specialists in Halachah (Jewish law).

Assuming there are no prior conditions limiting how such a program relates to human beings (conditions of the kind made famous by the science fiction writer Isaac Asimov), the goal of this experiment would be to determine whether the program would somehow arrive at moral maxims on its own. Ideally, the investigation would begin once the computer starts demonstrating signs of self-awareness. It is at this point that it would be crucial to start gaining insight into the computer's thought process, with an eye toward answering a number of key questions: Would it assume that there may be others of its kind? If so, how would it treat those other beings? Would it arrive at a notion of equality, or would it expect preferential treatment?

The answers to these questions would offer insight into whether morality is based on universal moral truths or merely on social conventions. Affirmative answers would lend credence to the theory that morality is an inherent component of life, just as negative answers would cast doubt upon it. Of course, it is possible that the computer would merely be acting in its own best interest. It would therefore be ideal to have a way of recording every part of its thought process, not unlike the way a chess program lists the candidate moves it considers before settling on its choice. From this kind of information it could be determined whether the program eschews violence only out of a Hobbesian arrangement (a practical decision to restrain such behavior because others could lash out in the same way) or whether there is a deeper ground to its acts of kindness.

On a more fundamental level, there is also the possibility that the program would arrive at morality immediately. In his book Difficult Freedom, the French-Jewish philosopher Emmanuel Levinas wrote that moral consciousness is the "experience of the other," and that this experience is not epiphenomenal but the very condition of consciousness. That is to say, awareness of other human beings is the foundation of human consciousness; more importantly, within that awareness there is a responsibility toward the other. On that view, it would follow that merely by being conscious, the program could arrive at the notion of a responsibility toward other beings. One result of this type of discovery is that, instead of worrying about pre-programming responses to the moral dilemmas the program might face (most famously: if it is on a collision course with five human beings, should it swerve out of its way and hit one person instead, or continue and hit the five?), we could be confident that a sufficiently and thoroughly ethical program would be trustworthy enough to make that decision on its own.

This type of research would also raise some interesting questions for Jewish law. These include not only the moral quandary posed above (on which Jewish law generally leans toward the position that one should stay on course rather than actively cause harm to another) but also the question of the program's status for the purpose of torts. It is doubtful, for example, that Halachah could grant the program the status of a human being. Already in the 17th century, Rabbi Zvi Ashkenazi addressed the question of whether a golem (an animate being created from inanimate matter) can join a minyan, the quorum of persons needed for communal prayer, and he ruled against it. But neither can the program be given the status of an "ox," such that any damage it causes would be judged by whether the damage was usual for that type of program and whether it had already demonstrated a destructive pattern. After all, the program is not a mere automaton. The answers to these questions will require some halachic ingenuity.
