Vancouver Sun

What is ‘uniquely human’?

ARTIFICIAL INTELLIGENCE FORCES US TO TAKE A HARD LOOK AT OURSELVES

- STUART THOMSON sxthomson@postmedia.com Twitter.com/stuartxthomson

In late 2016, a ghost truck rumbled down a Colorado highway carrying 50,000 cans of Budweiser beer.

The cab of the truck was empty, but the truck hewed to the dotted highway lines with eerie precision. If it alarmed any passing motorists, no one piped up about it and, to be fair, anyone with sharp vision would’ve spotted a man crouched in the back, keeping an eye on things.

The driverless truck is just one instance of the supernatural normalcy we can expect as artificial intelligence invades everyday life.

AI — perhaps more accurately called machine learning — is already doing a lot more than just publicity stunts. If you’ve noticed that voice recognition and real-time translation have gotten a lot better recently, you can thank the Canadians who developed the Deep Learning techniques powering them.

The management consulting firm McKinsey & Company estimates that automation brought about by AI could boost productivity more than the steam engine did in the 19th century and the IT revolution did in the 1990s.

We’ve come to understand that automation is a friendly term for a robot taking our livelihoods. In terms of job losses, the estimates range from economists who expect the impact to be manageable to futurists who foresee a full-scale transformation.

Nicolas Chapados, the chief science officer at Element AI, says repetitive drudgery will be taken over by computers and people will focus on jobs that are “uniquely human.”

The computers, in other words, are going to make us take a hard look at ourselves. What are we actually good at? And what is uniquely human anyway?

The driverless truck barrelling down the Colorado highway is semi-autonomous, meaning it can handle long stretches of highway, but needs a person to guide it through the bustling, narrow roads of cities. Once the truck is safely on the interstate, the operator engages the autopilot and kicks back for a few hours.

Driverless vehicles are ranked on a level system: Level 3 requires the driver to be ready to take control at any time, Level 4 means the vehicle can handle full automation under some conditions, and Level 5 means the vehicle is fully automated under any conditions. The beer truck is a Level 4 vehicle and we’re still a few years away from Level 5.

Right now, driving through a city is a uniquely human experience: a driver inches forward at a stop sign to show the other drivers she’s next in line, makes eye contact with a pedestrian to show it’s safe for him to cross the street and improvises through winter weather, using hard-won experience.

That will change as AI figures out how our world works, so the nature of human jobs is likely to be a moving target. Chapados thinks jobs requiring human judgment, empathy and the ability to relate or connect will have a long shelf life. McKinsey thinks unpredictable labour jobs like construction or forestry will also last longer than predictable jobs like welding. Even as we arrive at an idea of what is uniquely human, we may find it changing.

One of the founders of Otto, the company behind the beer truck and now owned by Uber, told Wired magazine that he envisions a world where trucking is primarily a local job. Drivers will work out of hubs and jump on the trucks as they approach the cities. Others imagine a North American trucking force that is essentially made up of drones, with operators working remotely out of whatever city has cheap and abundant labour.

Either way, it means no more sleeping in trucks, no more long hauls and, crucially, no more tired drivers on the roads. Traffic fatalities involving large trucks have been rising in the United States, with 4,317 in 2016. Seventy-two per cent of those killed were occupants of other vehicles. When a large truck is in a collision, it usually wins.

Safety is one thing, but a quick glance at the makeup of Canada’s labour force should give people a cold chill. According to the most recent census labour force breakdown, there were 253,385 transport truck drivers in Canada in 2011. That’s about three per cent of men in the workforce.

And that’s just one profession of many that could be marked for the slaughter.

For the trucking industry, there’s some pretty simple arithmetic involving the wage bill. For jobs that aren’t so cut and dried, companies may take a gradual approach to introducing AI, dipping their toes in the pool rather than diving in.

Economics matter to consumers, too. There is already fierce competition between Google and Amazon to get their smart speakers into homes. These devices play music, read your daily schedule and order products, all the while learning how to do more helpful (and profitable) things.

Goldie Nejat, the director of the Institute for Robotics and Mechatronics at the University of Toronto, said it’s likely that “fetch and carry” helper robots are the next stage. For elderly people in the early stages of cognitive impairment, such a robot could allow them to live in their homes longer.

A robot could nudge the person to have dinner at 6 p.m., take their pills or look through a recipe book with them to help them get motivated. That would leave the more compassionate — or uniquely human — duties to healthcare workers and free them from the repetitive jobs.

Chapados is relatively sanguine about the job landscape, arguing that while “it is undeniable that there will be displaced jobs,” there will be new jobs created and existing jobs that simply change for the better.

Most professions have a certain amount of grunt work that can be happily assigned to artificial intelligence. Compiling reports, entering data or trawling through large amounts of information are tasks already being handed off to computers.

Security services can assign a computer program to look through hours of video footage for a guy wearing a red T-shirt and save themselves the time of doing it manually. Hiring managers can sort through stacks of resumes, and bankers can make complicated loan decisions in seconds.

But whatever AI is doing, humans will probably be working alongside it.

The present-day grocery store experience shows how this can work. Self-serve checkouts may have taken the role of a human cashier, but they have also created a new role, where a store employee overrides the mistakes, refills the receipt paper and provides some ad hoc therapy to frustrated shoppers.

That’s a role that will take many forms, but it’s apparent now that AI combined with a human operator is far more effective than a human or a computer alone. The former chess grandmaster Garry Kasparov, who famously lost to IBM’s Deep Blue supercomputer, says that “centaur” players, humans paired with computers, are vastly better than either on their own.

When AI starts making big decisions, the most important job of all might be monitoring it and watching out for its mistakes.

Kristen Thomasen, an assistant professor of law, robotics and society at the University of Windsor law school, said real-time monitoring will be crucial because catching errors too late could have massive consequences.

If a computer hands out an incorrect prison sentence, we can’t afford to have someone notice it a year later during a routine audit.

The most vexing problem AI presents is that when computers teach themselves to do things using Deep Learning, we lose the ability to understand why they make their decisions.

At the Senate open-caucus meeting on the future of artificial intelligen­ce, a few senators zeroed in on the “trolley problem,” which is an age-old thought experiment that takes on a new relevance in a world where computers drive our cars for us. If a trolley is hurtling down the tracks toward five people and you can throw a switch that sends it in the direction of one person, do you do it? Is it worth killing one person to save five? Or should a person refrain from killing no matter what?

You can imagine the trolley problem at the end of a Batman movie, used by the Joker to illuminate society’s hypocrisy and moral failings, but it’s hard to imagine it in real life. Likewise, with self-driving cars, Thomasen said it’s hard to imagine a car smart enough to drive itself that couldn’t come up with some alternative to murdering any number of people.

As the senators examined the theoretical carnage of the trolley problem, no one mentioned estimates of how many lives could be saved each year in a world full of self-driving cars. People are distracted by their cellphones, by a crying baby in the back seat, by their heavy eyelids after staying up too late watching television or, more dangerously, they are under the influence of booze or drugs. One U.S. study estimated 300,000 lives would be saved in that country in a decade with autonomous vehicles.

And, on top of all that, people just aren’t very good at driving. A lot of traffic congestion is caused by the knock-on effect of drivers following each other too closely. Self-driving cars would take the stress out of driving, leaving you free to read a book or style your hair on the commute.

Governments, regulators and owners will still want to understand why their devices are making certain decisions, especially in the event that something goes wrong. The European Union wants that to be a requirement of any automated system, starting in 2018.

Can a computer really explain itself, though? Humans have trouble with that at the best of times.

In the United States, the news organization ProPublica reported that an algorithm used to determine “risk assessments” for criminals and predict whether they will commit more crimes is biased against black people. The company pushed back against the claim, but would only reveal so much about how the algorithm works — it is proprietary code after all.

That kind of bias could easily be picked up by artificial intelligence systems that learn from massive amounts of pre-existing data. AI is expected to take over from humans in determining who gets loans from banks. It’s not a stretch to wonder if the AI will pick up all the human bias already wrapped up in these decisions.

While a hand-coded proprietary algorithm is no one’s idea of transparent, machine learning presents a new layer of opaqueness. What happens when the coders themselves don’t know why someone didn’t get a loan or why one offender got a year in prison and another got house arrest?

It took less than 24 hours for Microsoft’s Twitter AI bot to tweet, “Hitler was right I hate the jews,” after a full-on assault by trolls.

The prison algorithm and Tay, the Microsoft robot that spiralled into bigotry, should make us stop and think. We may be creating technology that can free us from drudgery but that also reflects and refines the ugliest parts of human nature.

Racism and misogyny on social media are seen as aberrations or outliers. The systemic imbalances in banks and prisons are seen as problems that need solving, but not as some fundamental part of who we are. But are we sure about that? In its clumsy, earnest way, AI might be examining our habits and assuming society currently works as we intended. It will be our job to examine ourselves and the world we live in, and to ask honestly whether it really does.

A burgeoning career opportunity may be the role of AI minder. The minders would make sure the AI’s decisions make sense and that it isn’t making incorrect logical leaps, which will be especially important in health care and should give doctors some long-term employment safety.

But beyond the trolley problem, there are ethical issues on a grander scale that will require constant vigilance.

Building a computer system that mimics our brain and learns from the processes and data we’ve already created may shine an unflattering light on humanity. In our quest to find out what is uniquely human, we have to be prepared for the possibility that we won’t like the answer.


TOMOHIRO OHSUMI/BLOOMBERG With artificial intelligence set to take over mundane jobs like trucking and loan approvals, humanity will be forced to focus on jobs that are “uniquely human,” such as ensuring that AI is taking logical steps toward its conclusions.
