The Atlanta Journal-Constitution

Small federal agency tasked with making AI safe, secure

Leader of effort seeks input from many professions.

- By Frank Bajak

BOSTON — No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it’s paramount AI systems are safe, secure, trustworthy and socially responsible.

But unlike the atom bomb, this paradigm shift has been almost completely driven by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration’s task of setting standards for AI safety a major challenge.

To define the parameters, it has tapped a small federal agency, the National Institute of Standards and Technology. NIST’s tools and measures define products and services from atomic clocks to election security tech and nanomaterials.

At the helm of the agency’s AI efforts is Elham Tabassi, who shepherded the AI Risk Management Framework published 12 months ago that laid groundwork for Biden’s Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.

Iranian-born, Tabassi came to the U.S. in 1994 for her master’s in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.

This interview was edited for length and clarity.

Q: Emergent AI technologies have capabilities their creators don’t even understand. There isn’t even an agreed-upon vocabulary, the technology is so new. You’ve stressed the importance of creating a lexicon on AI. Why?

A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared lexicon to avoid quickly devolving into disagreement. A single term can mean different things to different people.

Q: You’ve said that for your work to succeed, you need input not just from computer scientists and engineers but also from attorneys, psychologists and philosophers.

A: AI systems are inherently sociotechnical, influenced by environments and conditions of use. They must be tested in real-world conditions to understand risks and impacts. So we need cognitive scientists, social scientists and, yes, philosophers.

Q: This task is a tall order for a small agency, under the Commerce Department, that the Washington Post called “notoriously underfunded and understaffed.” How many people at NIST are working on this?

A: First, I’d like to say that we at NIST have a spectacular history of engaging with broad communities. In putting together the AI risk framework, we heard from more than 240 distinct organizations and got something like 660 sets of public comments. In quality of output and impact, we don’t seem small. We have more than a dozen people on the team and are expanding.

Q: Will NIST’s budget grow from the current $1.6 billion in view of the AI mission?

A: Congress writes the checks for us and we are grateful for its support.

Q: The executive order gives you until July to create a toolset for guaranteeing AI safety and trustworthiness. I understand you called that “an almost impossible deadline” at a conference.

A: Yes, but I quickly added that this is not the first time we have faced this type of challenge. As for the deadline, it’s not like we are starting from scratch. In June, we put together a public working group focused on four different sets of guidelines including for authenticating synthetic content.

Q: Members of the House Committee on Science and Technology said in a letter last month that they learned that NIST intends to make grants or awards through a new AI safety institute — suggesting a lack of transparency.

A: Indeed, we are exploring options for a competitive process to support cooperative research opportunities. Our scientific independence is really important to us. While we are running a massive engagement process, we are the ultimate authors of whatever we produce. We never delegate to somebody else.

Q: A consortium created to assist the AI safety institute is apt to spark controversy due to industry involvement. What do consortium members have to agree to?

A: We posted a template for that agreement on our website at the end of December. Openness and transparency are a hallmark for us. The template is out there.

Q: The AI risk framework was voluntary but the executive order mandates some obligations for developers. That includes submitting large language models for government red-teaming (testing for risks and vulnerabilities) once they reach a certain threshold in size and computing power. Will NIST be in charge of determining which models get red-teamed?

A: Our job is to advance the measurement science and standards needed for this work. That will include some evaluations. This is something we have done for face recognition algorithms. As for tasking (the red-teaming), NIST is not going to do any of those things. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective.

Q: How AIs are trained and how the guardrails are placed on them can vary widely. And sometimes features like cybersecurity have been an afterthought. How do we guarantee risk is accurately assessed and identified — especially when we may not know what publicly released models have been trained on?

A: In the AI risk management framework, we came up with a taxonomy of sorts for trustworthiness, stressing the importance of addressing it during design, development and deployment — including regular monitoring and evaluations during AI systems’ life cycles. Everyone has learned we can’t afford to try to fix AI systems after they are out in use. It has to be done as early as possible.

And yes, much depends on the use case. Take facial recognition. It’s one thing if I’m using it to unlock my phone. A totally different set of security, privacy and accuracy requirements come into play when, say, law enforcement uses it to try to solve a crime. Tradeoffs between convenience and security, bias and privacy — all depend on context of use.
