Las Vegas Review-Journal

Haven’t we seen how AI ends already?

- Adelia Ladson

Adelia Ladson is a reporter with The Moultrie (Ga.) Observer.

When it comes to artificial intelligence, those of us who are members of Generation X have seen this movie already and know how it ends.

We grew up with Hollywood portraying AI as the adversary of mankind: “2001: A Space Odyssey” set the precedent in 1968, and “The Terminator” and “The Matrix” launched it to a new level of scary. Now, do I believe that the human race is going to be literally hunted to virtual extinction by AI-driven robots? No, of course not.

But seriously, in light of OpenAI’s program ChatGPT and its meteoric rise to the top of the must-have list for companies over the past year — and, now, with the company’s leadership in flux after the firing of its CEO, Sam Altman — yes, maybe I am a little concerned about the monster being unleashed. Especially when the news coming out is that the leadership issues were caused by differences of opinion on the safety of AI: growing quickly for profit versus growing slowly and responsibly. And, folks, OpenAI is just one company in the AI arms race.

Although I don’t believe in the apocalyptic movie theme, I do believe in the theme of AI running amok. It’s a definite probability in today’s environment of people wanting instant gratification and immediate answers to everything without much effort. I also believe in the danger of people who are already too lazy to think for themselves relying on an AI program to think for them and accepting what it tells them as true. People already let social media platforms tell them what to think, and this is yet another step beyond that.

So, are we heading down a road where the human brain slowly atrophies for lack of complex thinking while artificial intelligence quickly develops past our ability to understand its actual limits? Or will we set its limits so it can be used as a tool to assist in problem-solving and not as a replacement for human intellect itself? Who sets the limits?

Don’t expect those limits to be set by the computer scientists. A scientist will push a button just to see what happens, regardless of the consequences. They can’t help themselves. Curiosity is in their nature. Don’t count on company CEOs to set those limits either. They’ll make a dollar or die trying. Greed is in their nature.

So who holds AI accountable when it has “hallucinations” — which is what they call the wrong answers and information it gives out, more frequently than company CEOs would have the public believe? (I can personally attest to that from my exploration of ChatGPT.)

Who holds it accountable when it makes a calculated decision that is logically right but morally wrong? It can’t distinguish between the two because it has no “dog in the fight” and, therefore, nothing to lose.

The answer is: you.

It’s up to you to remember that AI is just a tool. It’s not a friend, it’s not an educator, it’s not a qualified expert and it’s certainly not a replacement for good old-fashioned critical thinking. So, continue to use your own brains before they leak out of your ears.
