New Straits Times

SMART SECURITY: BALANCING EFFECTIVENESS AND ETHICS

The application of smart technologies for security offers better defences against evolving threats

- The writer is a research fellow with the Homeland Defence Programme at the Centre of Excellence for National Security, a unit of the S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore

TECHNOLOGICAL advances are driving the law enforcement and private security sectors to adopt smart technologies for better defences against evolving terrorist and criminal threats.

Two key considerations could determine how well the full potential of big data analytics and artificial intelligence (AI), which underpin smart technologies, is harnessed.

First, social research suggests that technology adoption is not only about continuing current operational practices with greater efficiency. More importantly, it is also about re-imagining these practices so as to stay resilient in the face of evolving demands.

Second, technology adoption is not only an operational decision and technological leap; it is also a multifaceted process that includes contemplating the associated ethical issues.

The private security sector, which supports law enforcement, adopts smart technologies (such as closed-circuit television-based patrolling systems and drones) to protect public places and large-scale events. Human limitations in patrolling are overcome through automation to better detect potential threats. This first step towards technology adoption makes current operational practices more efficient through cost and productivity improvements.

The next step should re-imagine these operational practices by seeking new opportunities to better support law enforcement intelligence collection, to prevent potential threats from materialising. For example, law enforcement efforts work well to preempt threats from known terrorists. However, lone wolves constitute a growing threat as they often do not arouse the suspicion of the authorities until their attacks unfold. Moreover, their unsophisticated tactics (such as knife attacks and vehicle ramming) can be discreet yet impactful, as surveillance technologies may lack the capability to stop threats upon detection.

To this end, smart technologies deployed by the private security sector should, over time, develop more capacity to promptly channel information on possible terrorist pre-attack activities to the law enforcement sector for timely intelligence analysis. The law enforcement sector would need wider real-time access to private security systems, either on a voluntary or mandatory basis, to reduce blind spots in surveillance and enhance information-sharing between the two sectors. The commercial market is already developing products that offer to integrate police and private security systems.

However, this next step could raise important ethical issues concerning augmented surveillance, which essentially uses AI for threat prediction (terrorist and criminal) and suspect profiling. The risk of AI perpetuating human biases, known as “automated discrimination”, could be of concern to certain segments of the community.

Automated discrimination is a nascent issue that needs to be better understood. Its importance would grow as augmented surveillance becomes more common. It could evoke fears of wrongful targeting of law-abiding persons, thus eroding public trust and confidence in the law enforcement sector and, by extension, the state.

It is more than just a policy challenge; it intersects with the technical issue of unintended biases in algorithms and big data, which could skew the analyses generated by AI systems. Algorithms are step-by-step procedures that tell a computer precisely what to do to solve a given problem.

First, the problem of algorithmic bias (AI algorithms reflecting the biases of their programmers) may give rise to the risk of false alerts by AI surveillance systems, resulting in wrongful profiling and arrest. For example, this concern was raised in media reports about the Guangzhou-based Chinese startup CloudWalk Technology Co Ltd. The firm had developed an AI system that could alert the police to take preemptive action against a person after computing his predilection for crime based on facial features, behaviour and movements. The ethical (and legal) issue of interdicting persons for future crimes, based on predictions, also comes into play.

Second, AI profiling systems use historical data to generate lists of suspects for predicting or solving crimes. However, the data may only partially represent the current crime situation; more importantly, it may unknowingly contain human biases along the lines of race, neighbourhood or prior criminal records (even of reformed offenders). For example, the reported use of an AI profiling system (Beware) by police in Fresno, California, has raised ethical concerns over racial discrimination towards people of colour.

Essentially, research suggests that AI systems, even those with complex algorithms, are only as good as the data sets they are trained on and work with. Such systems could thus generate analyses (predictions and profiles) and outcomes that reinforce existing human biases, biases that may already be straining police-community relations in certain cities.
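To illustrate this point, consider a minimal sketch (in Python, using entirely hypothetical data and names, not any deployed system): a naive risk model that learns only from historical arrest counts will simply reproduce whatever skew those records already contain.

```python
from collections import Counter

# Hypothetical historical records: (neighbourhood, was_arrested).
# Heavier policing of one area inflates its arrest counts regardless
# of the underlying rate of offending.
historical_records = [
    ("north", True), ("north", True), ("north", True), ("north", False),
    ("south", True), ("south", False), ("south", False), ("south", False),
]

# "Train" a naive risk model: risk score = historical arrest rate per area.
arrests = Counter(area for area, arrested in historical_records if arrested)
totals = Counter(area for area, _ in historical_records)
risk_score = {area: arrests[area] / totals[area] for area in totals}

# The model now rates anyone from the over-policed area as higher risk,
# reproducing the skew in the data rather than measuring behaviour.
for area, score in sorted(risk_score.items()):
    print(f"{area}: predicted risk {score:.2f}")  # north 0.75, south 0.25
```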

In sum, the burgeoning use of smart technologies by the law enforcement and private security sectors is premised on the objective of augmenting surveillance (and intelligence) powers to better prevent threats. While this objective necessitates re-imagining current operational practices, it could also give rise to ethical issues of automated discrimination.

The ethical issues are expected to grow in significance. This is because, with machine learning (ML), the algorithms underpinning smart technologies would become more powerful and play a more integral role in decision-making. Moreover, the challenges in addressing these issues would also evolve, as ML could lead to the “black box” effect: how the algorithms “think” may be incomprehensible to the humans affected.

For smart security to work well, there has to be an acceptable balance between augmented surveillance and ethics. First, the risk of false alerts could be reduced if the process of adopting smart technologies incorporates efforts to determine how the underlying algorithms work; this could also support fairness in AI-driven decision-making (a simple illustration of such a check follows below).

Second, how data is collated and used must be re-imagined to reduce the risk of unintended biases being introduced into AI systems. Finally, how AI-generated analyses are used (such as for crime prevention through enforcement or through social development) must be re-imagined to reduce the risk of negative implications for the community.
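As a minimal, hypothetical illustration of the kind of check the first and final points imply, one could compare false-alert rates across community groups before acting on a system's outputs; the groups, log format and figures below are assumptions made for the sketch, not any real deployment.

```python
from collections import defaultdict

# Hypothetical alert log: (community_group, alert_raised, actual_threat).
alert_log = [
    ("group_a", True, False), ("group_a", True, True),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

false_alerts = defaultdict(int)  # alerted although not a threat
non_threats = defaultdict(int)   # all non-threat cases seen per group

for group, alerted, threat in alert_log:
    if not threat:
        non_threats[group] += 1
        if alerted:
            false_alerts[group] += 1

# False-alert rate per group: how often law-abiding people trigger alerts.
for group in sorted(non_threats):
    rate = false_alerts[group] / non_threats[group]
    print(f"{group}: false-alert rate {rate:.2f}")
# A large gap between groups (here 0.33 vs 0.67) is a signal to re-examine
# the data and the algorithm before acting on the alerts.
```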

AFP PIC: CloudWalk Technology Co Ltd in Guangzhou, China, has developed an AI system that can alert police after computing an individual’s predilection for crime.
