Vancouver Sun

MEETING THE CHALLENGE OF AI WITH SAFETY IN MIND

Embracing technology is the natural next step, Catherine Roome writes.


Artificial intelligence and predictive algorithms will shape the future of regulatory work. But if the machines behave badly, is the upside worth it?

Artificial intelligence (AI) and predictive algorithms used to be the stuff of science fiction. Now they’re prevalent in our everyday lives, transforming the way we live, whether we know it or not.

With applications in health care, finance, transportation, security, market research and social media, AI is a wide-ranging tool that often works in the background, allowing us to process, translate and apply information to improve the way we do things.

The promise of the next generation of digital tools is powerful. Indeed, for areas of social good, like health and public safety, it has the potential to be the ultimate social-equality tool. If we can predict where and when we might be harmed, taking care of people becomes a whole lot easier.

Technical Safety B.C. is using a proprietary predictive algorithm that supports clients and safety officers in reducing the number of safety incidents in the province.

As a regulator overseeing the safe installation and operation of equipment and systems, Technical Safety B.C. assesses technologies that over four million British Columbians use daily: from SkyTrain to electrical systems in condo buildings, hot-water boilers in schools, elevators in malls and more. Five years ago, we questioned how we could improve our processes to reduce the number of incidents and to better locate hazards before they become harmful.

We quickly realized that the relationships between our clients and our safety officers are crucial to informing better safety behaviour, whether that means educating clients on best practices, being a resource to answer questions or working with them to correct the hazards we find.

To better locate safety hazards, we developed an in-house computer algorithm known as the Resource Allocation Program (RAP). The algorithm uses permit and inspection data from our own safety officers and a simple model to prioritize work for safety officers, focusing on the areas where the highest potential risks lie.

As part of RAP’s machine-learning process, every time our safety officers assess the work and equipment on a site, they record a data point on their findings, which the algorithm then uses to adjust or confirm the accuracy of its safety prediction. This is how AI works: through a continuous feedback loop, just as we as people continuously adapt based on the latest information. That’s why it’s called machine learning.
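RAP’s internals are proprietary, but the loop described above can be sketched in a few lines of Python: predict each site’s hazard risk, send officers to the riskiest sites first, then fold each inspection’s findings back into the model. Everything in the sketch, from the feature values to the simple logistic model, is a hypothetical illustration, not Technical Safety B.C.’s actual system.

# A minimal sketch, assuming a simple logistic model; RAP's real
# features, model and data are proprietary and not shown here.
import math
import random

# Toy "permit and inspection" records: (site_id, feature values).
# Real features might encode things like equipment age or past violations.
sites = [(site_id, [random.random(), random.random()]) for site_id in range(20)]

weights = [0.0, 0.0]  # model parameters, updated as findings arrive
bias = 0.0
LEARNING_RATE = 0.1

def predict_hazard(features):
    # Predicted probability that a site holds a high hazard.
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def record_finding(features, hazard_found):
    # Feedback step: nudge the model toward the officer's finding.
    global bias, weights
    error = (1.0 if hazard_found else 0.0) - predict_hazard(features)
    bias += LEARNING_RATE * error
    weights = [w + LEARNING_RATE * error * x for w, x in zip(weights, features)]

for day in range(5):
    # Prioritize: officers visit the sites predicted to be riskiest.
    queue = sorted(sites, key=lambda site: predict_hazard(site[1]), reverse=True)
    for site_id, features in queue[:3]:
        # In practice the finding comes from the safety officer on site;
        # here it is simulated so the example runs end to end.
        hazard_found = random.random() < 0.5
        record_finding(features, hazard_found)

Each pass through the loop mirrors the process described above: the model’s predictions set the day’s priorities, and the officers’ recorded findings recalibrate those predictions.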

Machines are tools to help humans. And we use machine learning in exactly that way. As a tool.

We have since developed new models for RAP using the latest machine-learning technology, and have seen it adapt even more quickly to reflect emerging risks. Our teams continually work on improvements through testing to see how machine learning could enhance our approach.

Our tests showed that, through machine learning, the algorithm’s ability to predict high-hazard electrical sites improved by 80 per cent.

But what if machines behave badly?

Recent reports in the media of rogue algorithms show us that, left unmonitored, machine learning can recreate and reinforce biases and cause undue harm.

We think algorithms working for the public good should meet an extremely high ethical standard: they must meet the highest test of accuracy, be free from undue bias, and give those using them as tools a role in designing protections that reflect society’s values. In other words, there should be a “human-centred” approach.

When we introduced the concept of using algorithms and machine learning to help with our decision-making, our own employees voiced concerns that the new automated prediction algorithm could create privacy issues, that it might miss or misjudge risks, or that reliance on algorithms might displace human jobs.

To address these issues, Technical Safety B.C. employees worked with a local AI ethics consultancy, Generation R, to introduce an ethics roadmap that lays out a framework for using data and advanced algorithms to expand the safety system in B.C. This is leading-edge work, and it has even raised the interest of a UN committee on the use of machine learning.

As humans, we are all challenged to work in a world of changing complexities. For Technical Safety B.C., our clients and stakeholders are moving forward, adopting digital solutions in their own businesses and facilities. To match their rigour, we must be proactive in seeking solutions that reduce the number of technical safety hazards and improve how we mitigate risks.

Embracing technology and leveraging the latest AI tools to better assist our safety officers and clients is the natural next step in helping us deliver safety services.

Artificial intelligence is often met with hesitation. Certainly, its implementation requires special consideration to ensure its adoption into an organization’s workflow goes smoothly. But when applied with care, this technology can give an organization the tools to better connect to its purpose: with accuracy, efficiency and a moral code.

Catherine Roome is president and CEO of Technical Safety B.C.


Technical Safety B.C. has teamed up with Generation R to develop an ethics roadmap for safety officers’ use of artificial intelligence-powered software in their work.
