Miami Herald (Sunday)

OpenAI CEO says artificial intelligence is ‘most important step yet’ for humans and tech

BY PRIYA ANAND AND EMILY CHANG

Sam Altman, chief executive officer of artificial intelligence startup OpenAI Inc., said there are many ways that rapidly progressing AI technology “could go wrong.” But he argued that the benefits outweigh the costs: “We work with dangerous technology that could be used in dangerous ways very frequently.”

Altman addressed growing concern about the rapid progress of AI in an interview onstage at the Bloomberg Technology Summit in San Francisco. He has also publicly pushed for increased regulation of artificial intelligence in recent months, speaking frequently with officials around the world about responsible stewardship of AI.

Despite the potential dangers of what he called an exponential technological shift, Altman spoke about several areas where AI could be beneficial, including medicine, science and education.

“I think it’d be good to end poverty,” he said. “But we’re going to have to manage the risk to get there.”

OpenAI has been valued at more than $27 billion, putting it at the forefront of the booming field of venture-backed AI companies. Addressing whether he would financially benefit from OpenAI’s success, Altman said, “I have enough money,” and stressed that his motivations were not financial.

“This concept of having enough money is not something that is easy to get across to other people,” he said, adding that it’s human nature to want to be useful and work on “something that matters.”

“I think this will be the most important step yet that humanity has to get through with technology,” Altman said. “And I really care about that.”

Elon Musk, who helped Altman start OpenAI, has subsequently been critical of the organization and its potential to do harm. Altman said that Musk “really cares about AI safety a lot,” and that his criticism was “coming from a good place.” Asked about the theoretical “cage match” between Musk and his fellow billionaire Mark Zuckerberg, Altman joked: “I would go watch if he and Zuck actually did that.”

OpenAI’s products — including the chatbot ChatGPT and image generator Dall-E — have dazzled audiences. They’ve also helped spark a multibillion-dollar frenzy among venture capital investors and entrepreneurs who are vying to help lay the foundation of a new era of technology.

To generate revenue, OpenAI is giving companies access to the application programming interfaces needed to create their own software that makes use of its AI models. The company is also selling access to a premium version of its chatbot, called ChatGPT Plus. OpenAI doesn’t release information about total sales.

Microsoft Corp. has invested a total of $13 billion in the company, people familiar with the matter have said. Much of that will be used to pay Microsoft back for using its Azure cloud network to train and run OpenAI’s models.

The speed and power of the fast-growing AI industry have spurred governments and regulators to try to set guardrails around its development. Altman was among the artificial intelligence experts who met with President Joe Biden last week in San Francisco. The CEO has been traveling widely and speaking about AI, including in Washington, where he told U.S. senators that “if this technology goes wrong, it can go quite wrong.”

Major AI companies, including Microsoft and Alphabet Inc.’s Google, have committed to participating in an independent public evaluation of their systems. But the U.S. is also seeking a broader regulatory push. The Commerce Department said earlier this year that it was considering rules that could require AI models to go through a certification process before being released.

Last month, Altman signed onto a brief statement that included support from more than 350 executives and researchers saying “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Despite dire warnings from technology leaders, some AI researchers contend that artificial intelligence isn’t advanced enough to justify fears that it will destroy humanity, and that focusing on doomsday scenarios is only a distraction from issues like algorithmic bias, racism and the risk of rampant disinformation.

OpenAI’s ChatGPT and Dall-E, both released last year, have inspired startups to incorporate AI into a vast array of fields, including financial services, consumer goods, health care and entertainment.

Bloomberg Intelligence analyst Mandeep Singh estimates the generative AI market could grow 42% a year to reach $1.3 trillion by 2032.

Samuel Altman, right, the CEO of OpenAI, testified before the Senate Committee on the Judiciary Subcommittee on Privacy, Technology and the Law at a hearing on artificial intelligence in Washington, D.C. JACK GRUBER/USA TODAY
