Vancouver Sun

Humans have ultimate control over use of AI

Job losses shouldn’t be feared, Fred Popowich says.

- Fred Popowich is a professor of computing science and the executive director of KEY, SFU's Big Data Initiative. In 2017, the Canadian Artificial Intelligence Association recognized him for his outstanding service to the AI community.

The introduction and increased sophistication of automation have always had a significant impact on the workplace and workforce. We find ourselves once again pondering the impact of even more sophisticated automation, this time considering the role of artificial intelligence in the future of work. Should we be worried? What should we be worried about?

My first introduction to how AI might affect my future came in 1968, through Stanley Kubrick's film 2001: A Space Odyssey. Back then, AI was just science fiction for the vast majority of people.

Intelligent machines were often shown performing tasks or solving problems that were too difficult or too risky for humans, with tensions and anxieties arising when machines took control. What happens when a machine performs a task better than a human? Well, in the 1968 film, the fictional AI computer HAL, when threatened by humans, ended up killing a few of them before "he" himself was "killed." This powerful and futuristic film definitely gave me nightmares as a youngster. Do we need to worry about AI not only taking our jobs, but also our lives?

We are now 50 years beyond 2001: A Space Odyssey, and approaching 20 years beyond the "future" in which that film was set. Today, even more human work involves automation, and an increasing amount involves what people from 50 years ago would have considered to be AI. While automation has replaced many jobs, it is more common for jobs to evolve, or for new types of jobs to emerge. We have seen the growth of the service and knowledge economies.

Essential to this growth has been the growth of data: data that can be used for many purposes (often well beyond what was originally anticipated). Data is the fuel for AI applications. As with the atomic energy advances of the last century, AI can be used for either good or evil, and with unanticipated consequences. We need to have an open discussion about the risks and benefits, and unfortunately, we are working with incomplete data.

Here in British Columbia, we have already seen the impact of increased automation on traditional resource industries such as fishing, forestry and mining. With the increasing amount and variety of data available to workers in these traditional industries, there is an opportunity to improve the management of natural resources and to provide additional knowledge-based and technology-based employment. AI has been used to detect illegal fishing, to assist in mineral exploration, and to create self-driving trucks that can navigate the dangerous underground tunnels associated with mining operations. Likewise, health care and social services continue to be major employers within B.C.'s service-producing sectors.

Personalized medicine research, for example, has benefited greatly from AI advances. The continued growth of these sectors reflects a wide spectrum of knowledge-based jobs in which more intelligent machines and systems interact with workers rather than simply replace them. The education of new-economy workers continues to draw people to the province to learn the skills and knowledge needed for the increasing number of technology-related positions. The recently announced federal funding for the B.C.-led Digital Technology Supercluster, an initiative in which data is described as "the prize resource of the 21st century," will clearly support this continued growth. While AI applications may feed on data, humans can have the ultimate control, since they are the ones who control the data.

We should not worry about AI itself. We should worry about the humans who control the AI and data.

In looking to our future, I once again draw upon insights from science fiction, this time from Isaac Asimov's 1956 short story The Last Question, which begins in 2061 (within our next 50 years). A theme throughout the story is that there is "insufficient data for a meaningful answer."

As a society, we will always need to make decisions using incomplete information. But this is where we can make use of human intelligence augmented by AI. What is the answer? "Let there be light!"
