
Who watches the guardrails of AI?

- Arati Prabhakar, Science Adviser, Biden Administration. Interviewed by Matt O’Brien. Edited for clarity and length.

When President Joe Biden has questions about artificial intelligence, one expert he turns to is his science adviser Arati Prabhakar, director of the White House Office of Science and Technology Policy.

Prabhakar is helping guide the U.S. approach to safeguarding AI technology, relying in part on cooperation from big American tech firms like Amazon, Google, Microsoft and Meta, which have made commitments to voluntary standards.

The engineer and applied physicist is coming at the problem from a career bridging work in government — including leading the Defense Department’s advanced technology research arm — and the private sector as a former Silicon Valley executive and venture capitalist. She spoke with The Associated Press ahead of a White House-organized test of AI systems at the Defcon hacker convention in Las Vegas.

Does the president come to ask you about AI?

I’ve had the great privilege of talking with him several times about artificial intelligence. Those are great conversations because he’s laser-focused on understanding what it is and how people are using it. Then immediately he just goes to the consequences and deep implications. Those have been some very good conversations. Very exploratory, but also very focused on action.

Senate Majority Leader Chuck Schumer (who’s pushing for AI regulations) says making AI models explainable is a priority. How realistic is that?

It’s a technical feature of these deep-learning, machine-learning systems that they are opaque. They are black box in nature. But most of the risks that we deal with as human beings come from things that are not explainable. As an example, I take a medicine every single day. While I can’t actually predict exactly how it’s going to interact with the cells in my body, we have found ways to make pharmaceuticals safe enough. Think about drugs before we had clinical trials. You could hawk some powder or syrup and it might make you better or it might kill you. But once we had clinical trials and a process in place, we started having the technical means to know enough to start harnessing the value of pharmaceuticals. This is the journey we have to be on now for artificial intelligence. We’re not going to have perfect measures, but I think we can get to the point where we know enough about the safety and effectiveness of these systems to really use them and to get the value that they can offer.

Do you have a timeline for future actions (such as a planned Biden executive order)? Will it include enforceable accountability measures for AI developers?

Many measures are under consideration. I don’t have a timeline for you. I will just say fast. And that comes directly from the top. The president has been clear that this is an urgent issue.
