Sunday Independent (Ireland)

Humanity fears nothing more than irrelevance

Intel anthropologist Genevieve Bell talks to Ian Tucker about the rise of the machines and our Frankenstein anxiety that they might end up killing us

- Genevieve Bell is the keynote speaker at the Ireland’s Edge conference, part of the Other Voices festival in Dingle, Co Kerry, from December 2-4

GENEVIEVE BELL is an Australian anthropologist who for 18 years has worked at tech company Intel, where she is currently head of sensing and insights. She has given numerous TED talks and in 2012 was inducted into the Women in Technology hall of fame. Between 2008 and 2010, she was also South Australia’s thinker in residence.

Why does a company such as Intel need an anthropologist?

That is a question I’ve spent 18 years asking myself. It’s not a contradiction in terms, but it is a puzzle. When they hired me, I think they understood something that not everyone in the tech industry understood, which was that technology was about to undergo a rapid transformation. Computers went from being on an office desk spewing out Excel to inhabiting our homes and lives, and we needed to have a point of view about what that was going to look like. It was incredibly important to understand the human questions, such as: what on earth are people going to do with that computational power? If we could anticipate just a little bit, that would give us a business edge and the ability to make better technical decisions. But as an anthropologist that’s a weird place to be. We tend to be rooted in the present — what are people doing now and why? — rather than long-term strategic stuff.

A criticism that is often made of tech companies is that they are dominated by a narrow demographic of white, male engineers and as a result the code and hardware they produce have a narrow set of values built into them. Do you see your team as a counterbalance to that culture?

Absolutely. I suspect people must think I’m a monumental pain. I used to think my job was to bring as many other human experiences into the building as possible. Being a woman, being Australian and not being an engineer — those were all valuable assets because they gave me a very different point of view.

We are building the engines, so the question is not will AI rise up and kill us, but will we give it the tools to do so?

The leadership of Intel is now around 25pc female, which is about what market availability is in the tech sector. We are conscious of what it means to have a company whose workforce doesn’t reflect the general population. Repeated studies show that the more diverse your teams are, the richer the outcomes. You have to tolerate a bit of static, but that’s preferable to the self-perpetuating bubble where everyone agrees with you.

You are often described as a futurologist. A lot of people are worried about the future. Are they right to be concerned?

That technology is accompanied by anxiety is not a new thing. We have anxieties about certain types of technology and there are reasons for that. We’re coming up to the 200th anniversary of Mary Shelley’s Frankenstein and the images in it have persisted.

Shelley’s story worked because it tapped into a set of cultural anxieties. The Frankenstein anxiety is not the reason we worried about the motor car or electricity, but if you think about how some people write about robotics, AI and big data, those concerns have profound echoes going back to the Frankenstein anxieties 200 years ago.

What is the Frankenstein anxiety?

Western culture has some anxieties about what happens when humans try to bring something to life, whether it’s the Judeo-Christian stories of the golem or James Cameron’s The Terminator.

So what is the anxiety about? My suspicion is that it’s not about the life-making, it’s about how we feel about being human. What we are seeing now isn’t an anxiety about artificial intelligence per se, it’s about what it says about us. That if you can make something like us, where does it leave us? And that concern isn’t universal, as other cultures have very different responses to AI, to big data. The most obvious one to me would be the Japanese robotic tradition, where people are willing to imagine the role of robots as far more expansive than you find in the West. For example, the Japanese roboticist Masahiro Mori published a book called The Buddha in the Robot, where he suggests that robots would be better Buddhists than humans because they are capable of infinite invocations. So are you suggesting that robots could have religion? It’s an extraordinary provocation.

So you don’t agree with Stephen Hawking when he says that AI is likely “either the best or the worst thing ever to happen to humanity”?

Mori’s argument was that we project our own anxieties, and when we ask: “Will the robots kill us?”, what we are really asking is: “Will we kill us?” Coming from a Japanese man who lived through the 20th century, that might not be an unreasonable question. He wonders what would happen if we were to take as our starting point that technology could be our best angels, not our worst — it’s an interesting thought exercise. When I see some of the big thinkers of our day contemplating the arc of artificial intelligence, what I see is not necessarily a critique of the technology itself but a critique of us. We are building the engines, so what we build into them is what they will be. The question is not will AI rise up and kill us, rather, will we give it the tools to do so?

Is there a movie that you think creates a convincing picture of the future? The Matrix, Her, Planet of the Apes?

In terms of capturing the current anxiety and ambivalence we have about the role of technology, it’s two movies: Her and Ex Machina. Not because they are visions of the future, but because they underline a particular set of concerns, which is not that the machines will kill us but that we will become irrelevant. The Terminator promised death, but in Spike Jonze’s Her the anxiety is that the machine will become bored with you. It’s the same in Ex Machina — you build the perfect machine and it abandons you. In both instances, there’s a notion that the technology is self-determining and its decision is to leave us; in both movies, there is a conversation about gender — the machines are women that are leaving men. The machines’ voices are female, which isn’t what they were 40 or 50 years ago, like Hal [in 2001: A Space Odyssey].

A lot of the work you do examines the intersection between the intended use of a device and how people actually use it, and the disconnection between the two. Could you talk about something you’re researching at the moment?

I’m interested in how animals are connected to the internet and how we might be able to see the world from an animal’s point of view. There’s something very interesting in someone else’s vantage point, which might have a truth to it. For instance, the tagging of cows for automatic milking machines, so that the cows can choose when to milk themselves. Cows went from being milked twice a day to being milked three to six times a day, which is great for the farm’s productivity and results in happier cows, but it’s also faintly disquieting that the technology makes clear to us the desires of cows — making them visible in ways they weren’t before. So what does one do with that knowledge? One of the unintended consequences of big data and the internet of things is that some things will become visible and compel us to confront them.

Why is your Twitter handle ‘feraldata’?

About 10 years ago, I was castigating an Australian colleague about how we talked about technology using British idioms. For example, we kept talking about the digital commons, yet Australia does not have an enclosure act.

So what are the Australian experiences we could use to talk about technology? I began to think about camels, goats and cats — lots of animals jumped the boats in Australia and created havoc by becoming feral. Would feral be an interesting way for thinking about how technology had unintended consequences? It occurred to me that of all the things that were most likely to go feral in the technological landscape it was data. It gets created in one context, is married with a third thing and finds itself in another.

‘We are building the engines, so what we build into them is what they will be’

FUTURE DAZE: The film Ex Machina plays on our sense of dread that if we create the perfect machine it will only abandon us, says Genevieve Bell, right, Intel’s head of sensing and insights
