Austin American-Statesman

Standards for making AI safe

- Elham Tabassi, Chief AI Advisor, NIST. Interviewed by Frank Bajak.

No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it’s paramount AI systems are safe, secure, trustworthy and socially responsible.

But unlike the atom bomb, this paradigm shift has been almost completely driven by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration’s task of setting AI safety standards a major challenge.

For that, it has tapped a small federal agency, the National Institute of Standards and Technology. Helming the effort there is Elham Tabassi, its chief AI advisor. She shepherded the AI Risk Management Framework that laid vital groundwork for President Biden’s Oct. 30 AI safety executive order. The framework was voluntary. The executive order is not.

This interview has been edited for length and clarity.

The executive order gives you until July to create a set of tools for AI safety and trustworthiness. I understand you’ve called that “an almost impossible deadline.”

It’s not as if we are starting from scratch. For the AI risk framework we heard from more than 240 organizations and got something like 660 sets of public comments. And in June we put together a public working group that is setting guidelines in four key areas, including how to flag AI-generated, or synthetic, content.

How many people on staff at NIST are working on this?

More than a dozen and expanding.

The executive order mandates some obligations for developers, including submitting large language models to red-teaming (testing for risks and vulnerabilities) at a certain threshold. Will NIST be in charge of determining which models get red-teamed?

NIST is not going to do any of those things. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency.

How AIs are trained and the guardrails placed on them can vary widely. Sometimes, features like cybersecurity have been an afterthought. How do we guarantee risk is accurately assessed and identified?

It’s important that trustworthiness and safety are addressed during design, development and deployment, and include regular monitoring. We can’t afford to try to fix AI systems after they are out in use. It has to be done as early as possible.

Members of Congress said in a December letter that they learned NIST intends to make grants or awards through a new AI safety institute — suggesting a lack of transparency.

Indeed, we are exploring options for a competitive process to support cooperative research opportunities. Openness and transparency are a hallmark for us.
