Arkansas Democrat-Gazette

Discussing AI with Arati Prabhakar

White House science adviser calls for more safeguards against artificial intelligence risks

- MATT O’BRIEN

When President Joe Biden has questions about artificial intelligence, one expert he turns to is his science adviser Arati Prabhakar, director of the White House Office of Science and Technology Policy.

Prabhakar is helping to guide the U.S. approach to safeguarding AI technology, relying in part on cooperation from big American tech firms like Amazon, Google, Microsoft and Meta.

The India-born, Texas-raised engineer and applied physicist is coming at the problem from a career bridging work in government — including leading the Defense Department’s advanced technology research arm — and the private sector as a former Silicon Valley executive and venture capitalist.

She spoke with The Associated Press earlier this month ahead of a White House-organized test of AI systems at the DefCon hacker convention in Las Vegas. The interview has been edited for length and clarity.

Q: Does the president come to ask you about AI?

A: I’ve had the great privilege of talking with him several times about artificial intelligence. Those are great conversations because he’s laser-focused on understanding what it is and how people are using it. Then immediately he just goes to the consequences and deep implications. Those have been some very good conversations. Very exploratory, but also very focused on action.

Q: Senate Majority Leader Chuck Schumer (who’s pushing for AI regulations) says making AI models explainable is a priority. How realistic is that?

A: It’s a technical feature of these deep-learning, machine-learning systems that they are opaque. They are black box in nature. But most of the risks we deal with as human beings come from things that are not explainable. As an example, I take a medicine every single day. While I can’t actually predict exactly how it’s going to interact with the cells in my body, we have found ways to make pharmaceuticals safe enough. Think about drugs before we had clinical trials. You could hawk some powder or syrup, and it might make you better, or it might kill you. But once we had clinical trials and a process in place, we started having the technical means to know enough to start harnessing the value of pharmaceuticals. This is the journey we have to be on now for artificial intelligence. We’re not going to have perfect measures, but I think we can get to the point where we know enough about the safety and effectiveness of these systems to really use them and to get the value that they can offer.

Q: What are some specific AI applications you’re concerned about?

A: Some of the things we see are big and obvious. If you break the guardrails of a chatbot, which people do routinely, and coax it to tell you how to build a weapon, well, clearly that’s concerning. Some of the harms are much more subtle. When these systems are trained off human data, they sometimes distill the bias that’s in that data. There’s now a fairly substantial, distressing history of facial recognition systems being used inappropriately and leading to wrongful arrests of Black people. And then privacy concerns. All of our data that’s out in the world, each individual piece may not reveal much about us, but when you put it all together it tells you rather a lot about each of us.

Q: Seven companies (including Google, Microsoft and ChatGPT-maker OpenAI) agreed in July to meet voluntary AI safety standards set by the White House. Were any of those commitments harder to get? Where’s the friction?

A: I want to start by just acknowledging how fortunate we are that so many of the companies that are driving AI technology today are American companies. It reflects a long history of valuing innovation in this country. That’s a tremendous advantage. We also just have to be very, very clear that with every good intention in the world, the realities of operating in the market are, by definition, a constraint on how far these individual companies can go. We hope many more will join them and voluntary commitments will grow. We just have to be clear that’s only one part of it. That’s companies stepping up to their responsibilities. But we in government need to step up to ours, in both the executive branch and the legislative branch.

Q: Do you have a timeline for future actions (such as a planned Biden executive order)? Will it include enforceable accountability measures for AI developers?

A: Many measures are under consideration. I don’t have a timeline for you. I will just say fast. And that comes directly from the top. The president has been clear that this is an urgent issue.

(File Photo/AP/Jacquelyn Martin) White House Office of Science and Technology Director Arati Prabhakar speaks March 30 at the Democracy Summit at the Convention Center in Washington.

(File Photo/AP/Alex Brandon) President Joe Biden speaks Nov. 14 during a news conference on the sidelines of the G20 summit meeting in Bali, Indonesia.

(File Photo/AP/Michael Dwyer) The OpenAI logo is seen on a mobile phone in front of a computer screen, which displays output from ChatGPT, on March 21 in Boston.
