The Pak Banker

Learning from mistakes made in AI policies

- Michael Depp

The use of lethal robots for law enforcement has turned from a science fiction concept into news headlines, thanks to recent high-profile debates in San Francisco and Oakland, Calif., as well as their actual use in Dallas. The San Francisco Board of Supervisors voted 8-3 to grant police the ability to use ground-based robots for lethal force "when risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or other de-escalation tactics." Following immediate public outcry, the board reversed course a week later and unanimously voted to ban the lethal use of robots. Oakland underwent a less public but similar process, and in January the Dallas Police Department used a robot to end a standoff.

All of these events illustrate major pitfalls in the way that police currently use or plan to use lethal robots. Processes are rushed or nonexistent, conducted haphazardly, exclude the public and civil society, and fail to create adequate oversight. These problems must be fixed in future processes that authorize artificial intelligence (AI) use in order to avoid controversy, collateral damage and even international destabilization.

The chief sin a process can commit is moving too quickly. Decisions about how to use AI systems require careful deliberation and informed discussion, especially with something as high-stakes as the use of lethal force. A counterexample here is Department of Defense (DOD) Directive 3000.09, which covers the development and deployment of lethal autonomous systems. Because it lacks clarity on new technology and terminology, this decade-old policy is undergoing a lengthy, but deliberate, update. For San Francisco and Oakland, the impetus for speed was a California law requiring an audit of military equipment, but San Francisco's debate was already too far along before public input could get started, and Oakland's was conducted in an entirely impromptu fashion.

This was reinforced by the fact that police in both cities already had robots (albeit unarmed ones in San Francisco) in their inventories; if the use of robots was not approved, they argued, the equipment would have to be divested, creating an "authorize it or lose it" mentality. Procurement should be covered by the policy on autonomous systems, not become an afterthought driven by the desire to avoid losing equipment. In a functional process, procurement should follow authorization, not vice versa.

Slowing down and avoiding sunk costs alone is not enough, however; the process itself must be improved. In San Francisco's case, the debate involved only the board of supervisors and the police department (which had a hand in drafting the authorization under discussion). Oakland's process began with the council discussing robots alongside staples of police equipment such as stun grenades. When considering the deployment of AI, it is important to solicit the viewpoints of civil society representatives, whose expertise in technology policy, law, human rights and artificial intelligence could prove invaluable to producing nuanced policies.

Additionally, and especially on law enforcement issues, the public needs more involvement in these processes; otherwise, it will be forced to turn to protest to make its voice heard. In contrast, consider the robust public discussion among citizens, civil society and companies over law enforcement use of facial recognition software, or over California's Bot Bill. The ideal process fosters such discussion. Finally, it is critical to consider oversight mechanisms from the start. San Francisco neatly sidestepped this with a nebulous mandate.
