We have to get AI under control — before AI controls us
When the Mindfest Conference, sponsored by FAU’s Center for the Future Mind, opened, organizers were busy setting up Sophia the robot and mobilizing a robot dog to prance around the floor before the speaking began.
These machines were designed to interact with humans and gave an eerie air to the event, where speakers and attendees had come from around the world to discuss the interaction between mind and machine.
In 1950, British mathematician Alan Turing famously proposed a test for machine intelligence. Under this test, now referred to as the Turing Test, an AI is judged intelligent if it can convince human judges in a computer chat that they are conversing with a human.
The 2014 movie “The Imitation Game,” starring Benedict Cumberbatch, popularized that idea. The latest chatbots, such as OpenAI’s ChatGPT, backed by Microsoft, and Google’s Bard, are being used and discussed by a wide range of commentators.
These chatbots, arguably, are capable of passing the Turing Test. This rapidly advancing technology has sparked the public’s imagination by making clear the myriad applications and risks of these tools, and it has intensified an already fierce, and possibly destructive, competition among AI providers.
AI already is being used in a vast number of human enterprises, from medicine, agriculture, energy, science, art, literature and cyber security to military applications where life-and-death decisions must be made in short time windows.
But AI also has the potential to cause rising unemployment and the spread of disinformation; unleash a new generation of scams; direct harmful and biased information to targeted groups; and amplify hate and massive political destabilization.
What do the rapidly increasing applications of AI systems that surpass classes of human abilities mean for the future of humanity?
The possibility of a cultural misunderstanding of what the chatbots really are was of particular concern at the Mindfest Conference. Google engineer Blake Lemoine, who subsequently left the company, claimed a case can be made that such systems are sentient or conscious, that is, that they may actually have feelings. Such claims indicate the question of machine sentience should be approached with care. Intelligence is not the same thing as sentience: AI could surpass human intelligence in many endeavors while lacking any kind of feeling or empathy.
All this points to the need for an international organization to encourage and promote the beneficial use and understanding of AI, one that champions AI that augments human productivity rather than replacing workers, helps prevent AI’s misuse, and provides a clearinghouse of trusted information.
Such an organization would develop policy proposals, create and enforce standards, and be an association in which members would benefit from good behavior.
The need is urgent. Not only are companies competing over AI applications; our nation’s potential adversaries also are in a mad rush to out-compete the United States and other democracies by developing AI applications that will dominate international commerce and provide military superiority.
We must develop a clear understanding of the advantages and risks of AI and identify appropriate guardrails and standards that will benefit everyone.
It would be a grave mistake for humanity to stumble into a situation driven by competition that we can neither understand nor control, or a situation in which we face grave moral questions about AI without thought, preparation or consensus.
Some technologists recently issued a proclamation calling for a moratorium on AI development. This may sound appealing, but it is unrealistic.
The creation of an international AI trade organization that develops policies and standards to look out for human interests is a realistic approach to ensure that AI serves humanity — and not the other way around.