The Guardian (USA)

AI companies aren’t afraid of regulation – we want it to be international and inclusive

- Dorothy Chou

AI is advancing at a rapid pace, bringing with it potentially transformative benefits for society. With discoveries such as AlphaFold, for example, we’re starting to improve our understanding of some long-neglected diseases, with 200m protein structures made available at once – a feat that previously would have required four years of doctorate-level research for each protein and prohibitively expensive equipment. If developed responsibly, AI can be a powerful tool to help us deliver a better, more equitable future.

However, AI also presents challenges. From bias in machine learning used in sentencing algorithms to misinformation, the irresponsible development and deployment of AI systems poses the risk of great harm. How can we navigate these incredibly complex issues to ensure AI technology serves our society and not the other way around?

First, it requires all those involved in building AI to adopt and adhere to principles that prioritise safety while also pushing the frontiers of innovation. But it also requires that we build new institutions with the expertise and authority to responsibly steward the development of this technology.

The technology sector often likes straightforward solutions, and institution-building may seem like one of the hardest and most nebulous paths to go down. But if our industry is to avoid superficial ethics-washing, we need concrete solutions that engage with the reality of the problems we face and bring historically excluded communities into the conversation.

To ensure the market seeds responsible innovation, we need the labs building innovative AI systems to establish proper checks and balances to inform their decision-making. When language models first burst on to the scene, it was Google DeepMind’s institutional review committee – an interdisciplinary panel of internal experts tasked with pioneering responsibly – that decided to delay the release of our new paper until we could pair it with a taxonomy of risks that should be used to assess models, despite industry-wide pressure to be “on top” of the latest developments.

These same principles should extend to investors funding newer entrants. Instead of bankrolling companies that prioritise novelty over safety and ethics, venture capitalists (VCs) and others need to incentivise bold and responsible product development. For example, the VC firm Atomico, at which I am an angel investor, insists on including diversity, equality and inclusion, and environmental, social and governance requirements in the term sheets for every investment it makes. These are the types of behaviours we want those leading the field to model.

We are also starting to see convergence across the industry around important practices such as impact assessments and involving diverse communities in development, evaluation and testing. Of course, there is still a long way to go. As a woman of colour, I’m acutely aware of what this means for a sector where people like me are underrepresented. But we can learn from the cybersecurity community.

Decades ago, they started offering “bug bounties” – a financial reward – to researchers who could identify a vulnerability or “bug” in a product. Once a bug was reported, the companies had an agreed time period during which they would address it and then publicly disclose it, crediting the “bounty hunters”. Over time, this has developed into an industry norm called “responsible disclosure”. AI labs are now borrowing from this playbook to tackle the issue of bias in datasets and model outputs.

Lastly, advancements in AI present a challenge to multinational governance. Guidance at the local level is one part of the equation, but so too is international policy alignment, given that the opportunities and risks of AI won’t be limited to any one country. The proliferation and misuse of AI have woken everyone up to the fact that global coordination will play a crucial role in preventing harm and ensuring common accountability.

Laws are only effective, however, if they are future-proof. That’s why it’s crucial for regulators to consider not only how to regulate chatbots today, but also how to foster an ecosystem where innovation and scientific acceleration can benefit people, providing outcome-driven frameworks for tech companies to work within.

AI is more general and broadly applicable than technologies such as nuclear power, so building institutions for it will require access to a broad set of skills, diversity of background and new forms of collaboration – including scientific expertise, socio-technical knowledge, and multinational public-private partnerships. The recent Atlantic declaration between the UK and US is a promising start toward ensuring that standards in the industry have a chance of scaling into multinational law.

In a world that is politically trending toward nostalgia and isolationism, multilayered approaches to good governance that involve government, tech companies and civil society will never be the headline-grabbing or popular path to solving the challenges of AI. But the hard, unglamorous work of building institutions is critical for enabling technologists to build toward a better future together.

Dorothy Chou is head of public affairs at Google DeepMind

‘DeepMind AlphaFold is starting to improve our understanding of some long-neglected diseases.’ Photograph: AlphaFold/DeepMind
Illustration: Deena So’Oteh/The Guardian
