Idealog

Friends with benefits?

For the techno-optimists, artificial intelligence may well be as close as we get to a super power. But, for the techno-pessimists, the rise of artificial intelligence could be hastening our own demise. So is this burgeoning ‘super power’ a blessing or a curse?

- Maya Breen.

We’ve all dreamed of having a super power at some point in our lives. As a child you may have longed to fly out of your bedroom window into the night sky like Peter Pan. Perhaps you wanted to read other people's minds, live forever, or turn the clock back to reverse a regret or to save a life.

That’s not going to happen. But with the rise of artificial intelligence, some believe we finally have an opportunity to augment our human experience and create a true super power.

As a report from Chapman Tripp and the Institute of Directors called 'Determining our future: Artificial Intelligence' says: “The goal of much AI research is to push forward the boundary of machine intelligence with the eventual goal of creating artificial general intelligence – a machine that could successfully perform any intellectual task in any domain that a human can.”

For many, the idea of a machine performing a task as well as or, worse still, better than a human is a chilling proposition. But even if you’re in this concerned camp, the spread of artificial intelligence as it seeps deeper into all of our lives is, as Kevin Kelly’s book puts it, inevitable. There is too much economic incentive, but, as history has shown, technological advances are not without their dangers. So can we get the balance between man and machine right?

BOUNDLESS OPPORTUNITY

So what exactly is artificial intelligence? You’ve probably heard terms like AI, machine learning and deep learning spouted every way you turn these days. And while they are all intertwined, they are not the same.

In short, deep learning is part of machine learning, which is part of AI. Intel’s Nidhi Chappell, head of machine learning, puts it succinctly when she says: “AI is basically the intelligence – how we make machines intelligent – while machine learning is the implementation of the computing methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter.”

AI and machine learning already influence many aspects of our lives – from facial recognition to automated trading to voice activated assistants to recommendation engines – and it’s set to impact many more in the coming years. New Zealand aims to be a keen surfer on this technological wave, and Science and Innovation Minister Paul Goldsmith launched The AI Forum of New Zealand (AIFNZ) in Wellington in June.

The chair of the new AI Forum, an initiative by NZTech, is Stu Christie, who is also an investment manager at NZ Venture Investment Fund with close to 30 years of industry experience behind him. So why launch the organisation now? He puts it down to a few things: the collection of massive amounts of data, the ability to process data at that scale, advances in machine and deep learning, and advances in sensory tech.

“So there’s been a whole bunch of different technologies and the capacity to be able to process that technology which is now bringing that to the fore,” Christie says. “So all those components are coming together to be able to make [AI] happen.”

SCIENCE NON-FICTION

Ian Watson, an Associate Professor in Computer Science at the University of Auckland, has over 20 years’ expertise in AI. He says he initially got into the field through an interest in science fiction as a kid.

“When I went into computer science the only real area of computer science that interested me was AI,” he says.

He predicts New Zealand will see a lot of applications for AI in agriculture.

“We are now at the point where we can see that there will be robots, for example, that could run a whole milking shed and you wouldn’t need the milker there. We can see robots now that would be capable of picking fruit, which of course would have a lot of impact on seasonal work.” Before too long, he says, it’ll be drones inspecting the fence lines and monitoring stock rather than farmers.

Unlike Watson, Chris Auld, the director of Developer Experience at Microsoft NZ, says he’s a data guy – but he’s also a technologist, business strategist and a Microsoft Most Valuable Professional (MVP) who happens to have trained as a lawyer. And for those of you who have ever been caught in Auckland traffic, he’s got some good news: Microsoft is in the early stages of a project with Auckland Transport to try to alleviate the gridlock.

“We’re talking with them about these sorts of technologies and their potential to help with congestion monitoring, congestion modelling, congestion alleviation – the ability to look and see through this image or video analysis where congestion might be and then to make intelligent decisions about how we change traffic light timings and work to reroute the network to ease that congestion.

“So there are huge opportunities in that sort of simulation and modelling. We have an initiative that we’re running around the world focused on traffic management and also traffic safety, driven by artificial intelligence and machine learning.”

SIZE MATTERS


Christie says the world is waking up to AI, but, because New Zealand is small and agile, we’re less encumbered by structural issues in terms of our economy and more able to embrace the changes.

“We have an open labour force; we are easy to do business with; we’re a heavily connected first world country but small enough also to be able to collaborate very closely together.”

He points out New Zealand is not at the leading edge of AI, as the deep research and development is largely being done by the tech giants offshore.

“So we’ve got to recognise our position in the market and actually leverage sustainable competitive advantages that we may have,” he says, explaining the opportunities are biggest in agriculture, manufacturing, infrastructure and transportation.

However, he does give special mention to Soul Machines (see profile page 97), which is developing remarkably lifelike avatars that display emotional intelligence.

“They are a standout for New Zealand right now. It’s just incredible what they are doing, revolutionising that particular touch point, that customer interface. It’s also enlightening people in terms of what a digital employee may be.”

ARTIFICIAL SWEETENERS

Mark Rees, general manager of product – small business at accounting software giant Xero, says what is so exciting about AI is “often you don’t have the ability to look at everything apart from the averages, but with some of these tools you can really see what is the underlying structure in the data, which is really fascinating. It’s like discovery; it’s revealing, like archaeology.”

While there is plenty of chatter about the potential for automation to take jobs, he says AI is set to change the accounting process for the better and, in around five to ten years, low-value, commoditised data entry for accountants will be low-friction, perhaps even completely automated, and will allow them to do more productive things.

“We provide really smart alerting recommendations that help business advisers optimise the performance of their business customers. That’s what they focus on, not the mechanical side of data entry or tax preparation, but the machines are really helping the business advisers give really smart advice to their customers and the businesses are run better because of that.”

Although building AI into the business offering will help Xero’s advisers, it’s a disruption to them too.

“Our strategy is that we want to help the accountants change their business into more high value services – it is a disruption and with any disruption, people have to make choices about how they respond to that, but I think it does provide a real opportunity for them to adapt their businesses and focus on business advice … I think the misconception is that it’s something radically new when it’s progressively been baked into our experiences.”

BE CAREFUL WHAT YOU WISH FOR

Sarah Hindle is general manager of Tech Futures Lab, whose founder, Frances Valintine, also sits on the AIFNZ board. Hindle has advised CEOs throughout her career on how to stay ahead of the competitive curve when rapid change is imminent. She also studied philosophy at university, so she takes a slightly different, more holistic view of this shift and the impact it may have on our human existence.

“I think what is becoming really clear now is actually we are computers and AI is showing us that the space between our ears is actually not that much different from something that we can create with a machine.”

Because of the rapid developments in this area, she says it is vital that we start having a conversation “of a nature that we have never had at any other point in history, which is how do we really want to live our lives? Do we need to be working 9-5? What does the purpose of life look like? How might we survive without getting an income five days a week? What other options does that open up for us as a civilisation? I think that’s the most exciting thing – just as a trigger for reconsidering our whole existence.”

Tech Futures Lab launched in July 2016 and, with many of its partners also very involved in AI development, Hindle says it has worked with 3,000 people and 250 companies across every sector to ‘agitate’ that conversation.

“Of course you want to give people the security that it’ll all be fine, but I really think it’s in our hands as to whether we really make this the greatest thing that humans have ever done by really having a chance to recast that social contract and what it looks like for us as humans and eliminate poverty and solve diseases and have a life where we do what we want. Or, we could really muck it up and have a very split society.”

Personally, she doesn’t believe that everyone will slide into a new job once they have been booted out of their old one by a cheaper, more efficient machine.

“I think actually what we are going to need to do is figure out a way whereby we don’t all have to be employed 40 hours a week to survive as dignified human beings. We need to have a very different conversation about what it means to be a valuable member of society and to be a human, so I think that my greatest reservation is our ability and knowledge and willingness to have those conversations and to have them quickly enough so that people can live a good life.”

The report by Chapman Tripp and the Institute of Directors also indicated that lower socioeconomic communities would be the ones most likely to feel the effects of AI development, with low-skilled and repetitive jobs at the highest risk of being taken over by technology.

Another recent report by The Royal Society stated 35 percent of jobs in the United Kingdom could have more than a 66 percent chance of succumbing to automation in the next few decades. But it also said “common ground on the nature, scale, and timing of potential changes to the world of work as a result of machine learning is hard to find”, so, at present, there are only guesses.

Jeremy Howard, founder of fast.ai and a deep learning researcher, has 25 years of machine learning study behind him. And, in a TED talk in December 2014, he explored “the wonderful and terrifying implications of computers that can learn”.

“The machine learning revolution will be very different. The better computers get at intellectual activities, the more they can build better computers to get better at intellectual activities. So this is going to be a change the world has never seen before, so your previous understanding of what’s possible is different.

“Computers right now can do the things that humans spend most of their time being paid to do, so now’s the time to start thinking about how we’re going to adjust our social structures and our economic structures to be aware of this new reality."

FIND AND REPLACE

Associate Professor Watson says a major threat is the wider societal impact resulting from advances in AI.

“It’s all very well for an individual company to decide to lay off a third of its workforce – but then if every company in that sector decides to lay off a third of their workforce then suddenly you’ve got an awful lot of people who don’t have jobs to go to and that is potentially catastrophic.

“Of course, if it’s left to individual companies to make decisions then they have to make decisions based on their bottom line – on their return to shareholders, that’s their responsibility. So really society as a whole needs to think about this and think about the impacts.”

Many believe one of America’s most common jobs – driving trucks – could soon be extinct due to the rise of autonomous vehicles. So what will those millions do? To address this, the Forum’s Christie says we need to make sure to have “a reiterative education system so that people can retrain in their lives and do that in ways which can get them up to speed and adaptable and an accepting society which does accept that essentially people are going through that process – the investment will also have to come from businesses, not just the individuals to carry the burden of that retraining”.

“The real opportunities here aren’t removing people from the loop, they are giving people better tools to make person to person interactions better,” adds Microsoft’s Auld.

Auld agrees with Christie that New Zealand is well-positioned to navigate this shift due to its close relationship between citizens and government. “We had the Minister presenting [at the Forum launch]; you can bump into the Prime Minister at the airport. We don’t have many countries in the world that are like that. We have a country that is really amenable to flexible, adaptable, smart regulation, so I think that’s going to be key.”

EXISTENTIAL THREATS

In Seattle earlier this year, the annual three-day Microsoft Build conference took place. Curiously, Microsoft CEO Satya Nadella opened it with a frank warning to the deep technologists in attendance against creating a dystopian reality not dissimilar to George Orwell’s 1984. Entrepreneur billionaire and SpaceX/Tesla CEO Elon Musk has gone as far as to say AI is “our greatest existential threat”, while Professor Stephen Hawking has warned humans will be helpless to compete with AI and will ultimately be ‘superseded’.

During Techweek this year, Watson gave a lecture exploring the questionable impacts and ethical implications of AI and says there is one area that AI should be forbidden to enter.

“I think probably the only area that one would definitely say you don’t want AI is in terms of autonomous weapons systems – definitely not. It’s perfectly feasible now that those systems could acquire their own target and be allowed to fire rockets but there’s a large number of people who think that shouldn’t be permitted, that there should always be a person in the loop, who can be held responsible for making the decision,” he says. “Why would we want to release weapons out there that can make their own decisions as to whether or not they should shoot us?”

Microsoft’s Auld also says autonomous weapons systems are a great example because “there’s something unique about going to war. It requires a human, someone to make moral decisions. I think that we should avoid putting machines into places where they have to make moral decisions, because they can’t make moral decisions.”

To Auld, that’s something to be taken advantage of.

“Machines lack the capacity to be racist. Machines lack the capacity to be misogynistic or sexist. Machines just lack the ability to be an arsehole. So we should celebrate that fact. We need to be careful about how we build these machines so that they don't make biased decisions accidentally. But artificial intelligence is not like humans; it doesn’t have the innate tendency to cast judgement.”

CHECKS AND BALANCES

Auld also attended the Seattle conference but says the bleak future some are worried about is a long way off.

“The thing about dystopian futures is they’re an extremely long way away,” he says, pointing out that there has been technological disruption and tech-driven unemployment for a long time.

“I think the disruption to people’s lives is probably going to occur less quickly than it has in the past. I think we’ll see the positive benefits accrued far more quickly than we find the negative consequences. But that’s not to say there won’t be negative consequences.”

The AI community here and around the world is working on putting controls in place for its creations. Google’s DeepMind, a world leader in AI research and a company Musk himself invested in, has developed an AI ‘off-switch’. New Zealander Dr. Shane Legg is a co-founder of the London startup, which was established in 2010 and was snapped up by Google four years on for about £400 million.

The Future of Life Institute launched a programme in 2015 to research AI safety, funded largely by a donation from Musk. Partnership on AI was formed to explore best practices on AI technologies and as an open platform to discuss the impacts of AI. Non-profit OpenAI is an AI research company, furthering a safe path to artificial general intelligence.

Watson mentions Bill Gates’ suggestion that robots should be taxed if they are doing work, just as humans are.

“That tax revenue could obviously be used for social security but it could also be used as a lever to control how fast automation is rolled out – if the tax is quite high then the AIs are not as economically efficient, they’re not as attractive. And if the tax is super low then they are very attractive, so policy makers could play with that tax to control how fast or slowly AIs are deployed. I’ve got no idea how governments would tax something like that, but they seem to be perfectly capable of taxing anything they feel like. I’m sure they would be able to think of a way of doing it.”

GREAT POWER, GREAT RESPONSIBILITY

But is it likely that AI will ever reach human-level intelligence? A report from the Obama administration late last year said we won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans” in the next 20 years, but Google’s director of engineering Ray Kurzweil certainly thinks we will.

“By 2029, computers will have human-level intelligen­ce,” Kurzweil said in an interview early this year, during the SXSW Conference in Texas.

And more technologists and visionaries agree with him. IEEE Spectrum asked a number of them, including Rodney Brooks and Nick Bostrom, when we will have computers as capable as the brain and nearly all said it would happen, but the time frame ranged from ‘soon’ to hundreds of years away.

Microsoft’s Auld says it’s a deeply epistemological question, but adds, “I don’t think we’ll ever get there, and that’s probably a good thing”.

Although Hindle shares the concerns of the likes of Musk and Hawking, she says AI will redefine what it means to be human and what our lives will look like.

“There are lots of scary things about AI and I would be lying if I would try to deny that, but I think what is exciting about it is it almost gives humans a super power – it doesn’t just improve what we’re doing but it kind of gives us this extra capability by being able to access information at speed that we’ve never had before.”

Geoff Colvin, the author of Humans are Underrated: What High Achievers Know That Brilliant Machines Never Will, is confident humans and AI will live alongside each other. He says the greatest advantage we have over technology is that which we already possess and are hardwired to want only from each other – things like empathy, creativity and humour – and that we must develop those abilities. Whether you are dreading a Terminator-style future, or dreaming of the way AI will improve our lives, one thing is certain: AI is already here and only gaining momentum. So, as Hindle says, “we’ve got to move with the machines, not against them, because we can’t stop it”.


