Rotorua Daily Post

Automated systems must put the human factor first

- Mark Rickerby

The incident of a woman misidentified by facial recognition technology at a Rotorua supermarket should have come as no surprise.

When Foodstuffs North Island announced its intention to trial this technology in February, as part of a strategy to combat retail crime, technology and privacy experts immediately raised concerns.

In particular, the risk of Māori women and women of colour being discriminated against was raised, and has now been borne out by what happened in early April to Te Ani Solomon.

Speaking to media this week, Solomon said she thought ethnicity was a “huge factor” in her wrongful identification. “Unfortunately, it will be the experience of many Kiwis if we don’t have some rules and regulations around this.”

The supermarket company’s response that this was a “genuine case of human error” fails to address the deeper questions about such use of AI and automated systems.

Automated decisions and human actions

Automated facial recognition is often discussed in the abstract, as pure algorithmic pattern matching, with emphasis on assessing correctness and accuracy.

These are rightfully important priorities for systems that deal with biometric data and security. But with so much focus on the results of automated decisions, it is easy to overlook how those decisions are applied in practice.

Designers use the term “context of use” to describe the everyday working conditions, tasks and goals of a product. With facial recognition technology in supermarkets, the context of use goes far beyond traditional design concerns such as ergonomics or usability. It requires consideration of how automated trespass notifications trigger in-store responses, the protocols for managing those responses, and what happens when things go wrong.

This perspective helps us understand and balance the impact of engineering and design interventions at different levels of a system. Investing in improving prediction accuracy seems an obvious priority. But this has to be seen in a broader context of use, where the harm done by a small number of wrong predictions can outweigh marginal performance improvements elsewhere. Even a highly accurate system scanning thousands of shoppers a day will still produce the occasional false match; what matters is what then happens to the person wrongly flagged.

Responding to retail crime

New Zealand is not alone in reported increases in shoplifting and violent behaviour in stores. In the UK, it has been described as a “crisis”, with assaulting a retail worker now a standalone criminal offence.

Canadian police are funnelling extra resources into “shoplifting crackdowns”. And in California, retail giants Walmart and Target are pushing for increased penalties for retail crime.

While these problems have been linked to the rising cost of living, industry group Retail NZ has pointed to profit-seeking organised crime as the major factor.

Sensationalised coverage using security footage of brazen thefts and assaults in stores is undoubtedly influencing public perception. But a trend is difficult to measure due to a lack of consistent, impartial data on shoplifting and offenders.

It is estimated that 15-20 per cent of people in New Zealand are affected by food insecurity, a problem found to be strongly associated with ethnicity and socioeconomic position. The links between cost of living, food insecurity and black market distribution of stolen groceries are likely to be complex.

Caution is therefore needed when assessing cause and effect, given the risks of harm and implications for civil society of a shift towards constant surveillance in retail spaces.

Commendably, Foodstuffs has engaged with the Privacy Commissioner, and has been transparent about safeguards in biometric data collection and deletion protocols. What is missing is more clarity around protocols for the security response in stores.

This is about more than customers consenting to facial recognition cameras. Customers also need to know what happens when a trespass notification is issued, and what the dispute resolution process is should a misidentification occur.

Research suggests human decision makers can inherit biases from AI decisions. In situations of heightened stress and risk of violence, combining automated facial recognition with ad hoc human judgment is potentially dangerous.

Rather than isolating and blaming individual workers or technology components as single points of failure, there needs to be more emphasis on resilience and tolerance for error across the whole system.


Shopping and surveillance

Australian supermarkets have responded to retail crime with overt technological surveillance: body cameras issued to staff (also now adopted by Woolworths in New Zealand), digital tracking of customer movement through stores, automated trolley locks, and exit gates that prevent people leaving without paying.

New Zealand product designers, software engineers and data scientists will be paying close attention to the outcome of the Privacy Commissioner’s review of the Foodstuffs facial recognition trial.

Theft and violence are urgent problems for supermarkets to address. But supermarkets now need to show that digital surveillance systems are a more responsible, ethical and effective solution than possible alternative approaches. This means acknowledging that technology requires human-centred design to avoid misuse, bias and harm. In turn, this can help guide regulatory frameworks and standards.

— This story was originally published by The Conversation.

Mark Rickerby is a lecturer in the School of Product Design at the University of Canterbury.

