Daily News (Los Angeles)

How worried should we be about AI?

No Skynet in sight, experts say, but some white collar jobs will be terminated

By Beau Yarbrough » byarbrough@scng.com

Here's an easy prediction about how artificial intelligence will affect Southern California over the next 25 years: It won't look anything like Skynet.

Although references to “The Terminator” movie franchise's world-conquering and human-hating AI are everywhere in the discussion of such programs as ChatGPT or Midjourney, self-aware computer programs are squarely in the realm of fiction.

“(Artificial intelligence) doesn't have any agency,” said Anima Anandkumar, a professor of computing and mathematical sciences at Caltech. “We are controlling it and changing the algorithms all the time.”

The artificial intelligence technologies available today — and into the future, barring an unforeseen sudden breakthrough — are programs that predict what to generate based on the patterns in their existing data sets.

They're essentially much more sophisticated versions of the software that suggests words while typing a text message on a smartphone. As anyone who's ever allowed their smartphone to suggest whole sentences that way knows, the results can sometimes seem eerily human, but are more likely to produce nonsense.
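
For readers curious what “predicting based on patterns” looks like in practice, here is a deliberately tiny sketch in Python, written for this article rather than drawn from any real chatbot: it counts which words follow which in a scrap of text, then strings words together by picking likely successors. The same basic idea drives a phone's autocomplete and, at enormously greater scale and sophistication, the systems described above.

    # A toy illustration, not any production system: it learns which word tends
    # to follow which in a tiny sample of text, then generates new text by
    # chaining those predictions together.
    import random
    from collections import defaultdict

    training_text = (
        "the rover drove across the red plain and the rover sent photos "
        "back to earth and the team cheered"
    )

    # Count which word follows which in the training text.
    follows = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

    def generate(start_word, length=8):
        """Generate text by repeatedly predicting a likely next word."""
        output = [start_word]
        for _ in range(length):
            candidates = follows.get(output[-1])
            if not candidates:
                break
            output.append(random.choice(candidates))
        return " ".join(output)

    print(generate("the"))
    # Possible output: "the rover sent photos back to earth and the"
    # It can sound vaguely sensible, but the model has no idea what a rover is.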

“Because we are human, we have a tendency of looking at the world that anthropomorphizes everything,” said Rep. Jay Obernolte, R-Hesperia, who put his doctorate in artificial intelligence on hold when a video game he created became a surprise hit and he went into business for himself instead.

“Some of the people who have been most alarmed by the things that ChatGPT does, they're thinking of it as a person at the other end of the data stream. But there isn't — it's just an algorithm.”

AI doesn't know anything, can't think of anything and isn't any more sentient than the code that runs a smartphone's calculator function.

It seems intelligent because if its output isn't sufficiently believable — whether it's a chatbot such as ChatGPT, an AI art program like Midjourney or the AI that creates deepfake videos — it's rejected during the development process, effectively teaching the AI to create content that satisfies the humans consuming it.
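
As a loose, hypothetical sketch of that filtering idea in Python (an assumption-heavy toy, not OpenAI's or anyone else's actual training method), imagine a generator whose candidate outputs are kept only when a stand-in “reviewer” judges them believable; what survives is what a later version of the system would be nudged to imitate.

    # Hypothetical accept/reject loop illustrating the filtering described above.
    # The "reviewer" is a stand-in for human judgment; real systems use far more
    # sophisticated feedback, but the gist is the same: output that fails review
    # never makes it into the next round.
    import random

    def generate_candidate():
        """Pretend generator: produces sentences of varying quality."""
        return random.choice([
            "The Mars rover paused to recharge its batteries.",
            "The new accountant filed the forms on time.",
            "Rover battery the paused recharge to its.",
        ])

    def reviewer_score(sentence):
        """Stand-in for a human judging believability (higher is better)."""
        # Toy heuristic: penalize the obviously scrambled sentence.
        return -1.0 if sentence.startswith("Rover battery") else 1.0

    accepted = []
    for _ in range(20):
        candidate = generate_candidate()
        if reviewer_score(candidate) > 0:  # keep only what passes review
            accepted.append(candidate)

    # 'accepted' is the material a future version would be steered to imitate.
    print(len(accepted), "of 20 candidates survived review")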

“(People) think if text sounds very humanlike it has intelligen­ce or agency,” Anandkumar said. “It's so easy to fool humans.”

And that includes when AI produces such things as term papers or legal documents. The program simply looks at what term papers on “The Great Gatsby” or a no-contest divorce filing typically look like, and assembles the text along those lines.

“But that's not the same as being factual,” Anandkumar said.

Asking an AI to tell you about yourself almost inevitably leads to what researchers call “hallucinations,” as it generates fictitious biographies and accomplishments by predicting what words to include based on actual biographies.

AI will get more factual over time, experts say, but it's not yet capable of consistently producing factual information when requested.

“The ultimate goal of AI is to have learning agents that can learn from the environment, that are autonomous,” Anandkumar said. “All of those new developments are going toward achieving that.”

That autonomy will be valuable in such fields as the exploration of Mars. Instructions sent from Earth can take anywhere from five to 20 minutes to reach Mars, depending on the distance between the two planets. Having a rover more capable of acting on its own, based on what's happening in its environment, could mean the difference between a successful mission and one where a Mars rover worth hundreds of millions of dollars is catastrophically damaged before humans back on Earth are able to issue commands to get it out of trouble.

“I think there are still deep challenges to be overcome for AI to be fully autonomous, especially in safety-critical systems,” Anandkumar said. “And I think, humans will still be in the loop.”

Each improvement in making AI more accurate is harder than the last, Anandkumar said. Humans are still better at handling uncertainty than even the most advanced AI models, and they are needed to fact-check AI to help improve it.

But the limitations of AI don't mean it won't help reshape the world over the next 25 years. Those changes will just be less dramatic than in “The Terminator” movies, experts say.

Obernolte expects the widespread adoption of AI to cause displacement of white collar jobs, many in sectors where workers aren't used to being displaced by technological change.

He pointed to automation being used to find tumors in CT scans earlier than humans can detect them, ultimately providing cheaper, faster and better health care for patients.

“If you are a patient, this is a hugely beneficial thing,” Obernolte said. But “if you are a radiologist, the picture is not so rosy.”

Radiologists won't be the only ones affected in the coming decades.

“No one is going to pay a lawyer for a basic will any more,” Obernolte said. “No one is going to pay an entry level accountant any more.”

Repetitive tasks are likely to be done largely by AI in the future, including white collar work such as processing forms or manning customer service lines. Meanwhile, just as with monitoring the activities of a future Mars rover, humans will be needed to keep an eye on automated data processing and the like — just not as many of them as today.

“We'll still need experts in those professions,” Obernolte said. “To have a career in a white collar job, you're going to have to be very, very good.”

As for where the displaced workers will go, he predicts new jobs will spring up, “sometimes in fields that we aren't even aware of right now.”

With AI largely automating many jobs, white collar services should also become more widely available in the future.

“I think it's going to accelerate a phenomenon that's already occurring, the flight from urban areas into rural areas,” Obernolte said. “I think it's going to enhance the attractiveness of places like the Inland Empire with lower cost of living.”

Like Anandkumar, Obernolte isn't worried about Skynet. But he does stay up at night worrying about how AI is going to lead to more personal data being siphoned up by the tech industry, and he's concerned about preventing future monopolies in the industry as well as foreign interference in domestic affairs using AI technologies.

Obernolte would like to see Congress create data privacy protections, along with a regulatory framework for AI that protects the public without choking off beneficial impacts. As one of the state legislators involved in crafting California's version of such a law, he's optimistic that a federal digital privacy act will be passed.

Last month, as the CEO of OpenAI, the company that created ChatGPT, spoke at a Senate hearing, The Hill published an op-ed by Obernolte, in which he wrote that “digital guardrails” are necessary for AI.

“I'm trying to create a federal privacy standard that prevents a patchwork of data standards, which would be devastating to commerce,” he wrote.

Big tech companies can afford the lawyers and other manpower needed to deal with 50 different standards, but small tech companies, like his, could be put out of business trying to comply.

Anandkumar agreed regulation is needed, but she said she wants it to be crafted by people who understand what they're dealing with.

“We should have all the experts in the room,” she said. “It should not just be the machine learning people, but it should also not be only lawyers.”

In March, an open letter signed by more than 1,100 people, including tech pioneers, urged AI laboratories to pause their work for six months. The letter doesn't seem to have caused anyone to do so.

Obernolte doesn't think it's possible or advisable to stop work on AI.

“I don't see how a pause on the development of AI will be beneficial,” he said.

For one thing, it'd be hard to enforce.

“That's not going to prevent bad actors in our own society that continue to develop AI in ways that benefit them financially and certainly isn't going to hamper our foreign adversaries,” he added.

There's a role for the government in subsidizing more research by those without a profit motive, unlike the big Silicon Valley firms currently spearheading AI development, Anandkumar said.

Safety nets and regulations around AI are needed, Obernolte said, but he thinks the growing pains will ultimately be worth it.

“I think it is going to have a revolutionary impact on our economy, almost overwhelmingly in ways that are beneficial to human society,” he said. “But the incorporation of AI into our economy will be extremely disruptive, as innovations always are.”

ILLUSTRATION BY JEFF GOERTZEN — SCNG: By 2048, artificial intelligence will have transformed many white collar jobs in Southern California, eliminating some and partially automating others. But the transformation, experts say, won't be as scary and drastic as in “The Terminator” movies.

PATRICK SEMANSKY — THE ASSOCIATED PRESS: OpenAI CEO Sam Altman speaks at a hearing on artificial intelligence before a Senate Judiciary Subcommittee on Privacy, Technology and the Law last month on Capitol Hill in Washington.

VIA YOUTUBE: Rep. Jay Obernolte, R-Hesperia, expects the use of AI to displace some white collar jobs, many in sectors where workers aren't used to being displaced by technological change.
