The Post

A CHAT WITH... DR JADE LEUNG


Dr Jade Leung works with artificial intelligence (AI) and is extremely clever, which makes thinking of questions to ask her a formidable undertaking. The 28-year-old New Zealander lives in San Francisco and is the governance lead at OpenAI, a research and deployment company focused on building advanced AI models. A Rhodes Scholar with a doctorate in international relations from the University of Oxford, she features in Prime’s new show Brave New Zealand World, a series looking at the greatest risks to humanity, including a future where artificial intelligence could rule the planet – and us. Leung talks to Felicity Monk about where AI is headed, what we can do to ensure it’s helpful – and not destructive – to humanity in the future, and the brain-boggling complexities of her job.

What do you say when someone asks you to describe your line of work succinctly?

I usually don’t do very well. The way I start is by communicating why I think AI could be a really big deal. A lot of technologies in our past have caused huge step changes in our welfare and the way we structure our economy; things such as computing, the printing press and steam engines. Those are general purpose technologies that raise the waterline of economic productivity. They speed things up and change the nature of how we do things. And AI seems very clearly to me like it’s going to be the same.

Tell me about OpenAI?

OpenAI’s stated mission and goal is to build AGI (artificial general intelligence) for the benefit of humanity. And so that comes with a huge technical goal, which is to build increasingly powerful and generally capable AI systems that are aligned with human interests and values.

Who determines what those interests and values are?

There are obviously really big questions, such as: how do you weigh and aggregate people’s beliefs and preferences across the world? Who do you essentially align systems to, once you’ve figured out a way to actually align them robustly to anything? And that is incredibly difficult, because we don’t have answers.

This also comes with a societal goal, which is really where my work is focused. I lead a team whose core mission is to figure out, if we do end up getting to a point where we have such powerful and generally capable systems, how do we get there safely? And how do we do that work responsibly, democratically, and in a way that genuinely optimises for the benefit of as much of the population as possible, rather than a small minority?

Oooof, that’s no small thing!

What occupies most of my time is thinking about what happens if we progress beyond AI being a purely economic technology. What kinds of systems could we build that could plausibly be competitive with the human species in terms of intelligence and capabilities? And there’s no particular reason to think that we can’t build systems like this. And that gets us into interesting, pretty scary territory. If you imagine that you’ve created a technology that is essentially the first competitor to the human species on Earth, then that has a lot of different implications in terms of what could go wrong. And those are all really big, unprecedented questions.

And what could go wrong?

It used to be the case that people thought AI might replace truck drivers first, and that intellectual workers would be last. But AI progress in the last couple of years has not panned out that way. In fact, it’s tracking to look like tasks associated with knowledge work will be automated sooner than physical tasks. Obviously this comes with risk in terms of inequality and disenfranchisement.

Another risk is the impact of these types of technologies on national security, and international stability more broadly. It could become a game changer in terms of what kinds of weapons systems nation states can design and how competent they can become in military strategy and the like. That’s a really big part of the governance intervention – how do you steer away from developing systems like that?

What is your vision for a positive AI future?

I think a positive future for us would look like being as cautious as we can be in developing the technology and deploying it. It’s really this notion of retaining control, and being very cognisant of how much control humanity is devolving and delegating to which types of systems across time. A good friend of mine, author Toby Ord, frames it as: humanity is much better at pushing technological progress than we are at pushing our level of wisdom about what to do with that technology. And so one way in which you can think about how things go well is that our wisdom catches up, essentially. And that we have the capability to actually make decisions at a societal scale about how we want this all to look, and that ends up actually steering the way this technology is deployed over time. Caution, wisdom, just being clear-eyed the entire way.

Watch: Brave New Zealand World screens on Prime, Thursday at 8.30pm
