AI pioneer will
The developers of Artificial Intelligence are ignoring the warnings of one of its pioneers, writes Jane Bradley
In the 1980s, an MS-DOS computer program called Eliza came automatically installed on our home PC.
Originally created to attempt the Turing Test in 1966, she was coded to mimic the conversation a psychotherapist might have with a patient.
Purportedly able to have a real-time – and realistic – conversation with the user, Eliza offered to discuss your problems and replied to whatever you told her with a seemingly relevant response.
“Come, come, elucidate your thoughts,” she told my friends and me, soothingly, whether we told her that we didn’t like our dinner that night or that we thought our teacher was actually a vampire.
Of course, even as a fairly young child, I knew the program was just that: programmed. And once I had used it a few times, I discovered that the same responses came out time and time again, no matter what we put in. In short, Eliza was clearly not real.
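That repetitiveness is no accident: chatbots of Eliza’s vintage worked by scanning the input for a known keyword and returning a canned template, falling back on a stock phrase when nothing matched. A toy sketch of that idea (hypothetical keywords and replies for illustration – not Weizenbaum’s original code, which was written in MAD-SLIP) might look like this:

```python
# Toy Eliza-style responder: keyword matching with canned replies.
# The keywords and replies here are invented for illustration.
RULES = {
    "dinner": "Tell me more about your dinner.",
    "teacher": "Why do you say that about your teacher?",
}
DEFAULT = "Come, come, elucidate your thoughts."

def eliza_reply(user_input: str) -> str:
    """Return the first matching canned reply, or a stock fallback."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return DEFAULT

print(eliza_reply("I didn't like our dinner tonight"))
print(eliza_reply("I think my teacher is a vampire"))
print(eliza_reply("Hello"))
```

Feed it the same sort of input twice and you get the same reply twice – exactly the giveaway the author noticed as a child.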
Yet, it turns out that we cynical children were fairly unusual. The program’s creator, German-American computer scientist Joseph Weizenbaum, who regarded Eliza as a way to show the superficiality of communication between man and machine, ended up surprised by the number of people who attributed human-like feelings to the inanimate creation.
In an interview, Weizenbaum said: “My secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.”
Weizenbaum’s secretary, who logically knew that Eliza was an inanimate creation, found her connection to the artificial woman to be so hugely personal that she wanted to