Culture shock
What if bots started developing their own social behaviours?
Everyone is talking about artificial intelligence as if it is “the year of AI”. Just as last year was “the year of virtual reality” and the year before was “the year of programmatic”. Of course, every year is “the year of mobile”. It’s strange, because AI’s possibilities and consequences are so far-reaching that it would be more accurate to talk of “the century of AI”. This is a technology, or suite of technologies, that deserves long-term thinking.
As usual, we have snaffled great technologies and used them to make the most mundane of everyday events more efficient (so visionary is the human race!). Bots became the new apps last year when Facebook and others took the concept to the masses by opening up their platforms to third-party bots. “By 2020, the average person will have more conversations with bots than with their spouse,” Gartner predicted, trying to convey its scale while also normalising it.
Other service sectors are going further, moving into real-world robotics. The Henn na Hotel (or “Strange Hotel”) in Japan is almost exclusively staffed by robots – there are only seven humans employed there. The hotel group plans 100 more just like it. In time, our connected living spaces will be more immersed in this kind of technology, creating a rich world of conversations that will include, as Microsoft’s Satya Nadella said, “people to people, people to personal assistants, people to bots, even personal digital assistants calling bots on your behalf”. To the extent that brands are working out how to create messages that appeal to bots, bots will become the new consumer, with plenty of autonomous purchasing power.
We seem to have no problem forming intimate and emotional relationships with AI. More than a quarter of 18- to 34-year-olds in the UK said they would happily date a humanoid robot, according to Nesta, and Jeff Bezos revealed that Alexa has received more than 250,000 marriage proposals. Yuval Noah Harari recently wrote: “In a Dataist society, I will ask Google to choose. ‘Listen, Google,’ I will say, ‘both John and Paul are courting me. I like both of them, but in a different way, and it’s so hard to make up my mind. Given everything you know, what do you advise me to do?’” The bot becomes your BFF.
AI is being used to create editorial, to recruit new employees, to sit on boards, even to diagnose terminal illness.
Is there nothing AI cannot do?
Yes, we all say, AI cannot create. It cannot ideate. AI might be the best at making decisions from rational information, but it cannot beat humans when it comes to creating culture.
Well, I listened to an interesting interview the other day with Alan Winfield, professor of robot ethics at the University of the West of England. His project “The Emergence of Artificial Culture in Robot Societies” demonstrated that new traditions can emerge among a multitude of robots. In this experiment, the robots had the ability to imitate each other. And because they would do so imperfectly, this would give rise to innovation – to new ways of doing things. Winfield said: “Under the right circumstances with the right dynamics, completely new behaviour takes over, like fashion.” Now that sounds a lot like culture to me. And OK, we might be talking about robot culture, rather than human culture or a mix of the two, but it’s still culture of sorts.
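Winfield’s experiments used real robots, but the core dynamic he describes – imitation plus copying error producing novel behaviours – can be sketched in a few lines of code. The sketch below is purely illustrative: the numeric “behaviour” values, the Gaussian copy noise and all the parameters are my assumptions, not his experimental setup.

```python
import random

def simulate_imitation(n_robots=20, rounds=200, noise=0.05, seed=42):
    """Toy model of cultural drift: each robot holds a numeric 'behaviour';
    every round one robot imitates another, but imperfectly (with Gaussian
    copy noise). Imperfect copying is what seeds the novelty."""
    rng = random.Random(seed)
    behaviours = [rng.random() for _ in range(n_robots)]
    initial = list(behaviours)
    for _ in range(rounds):
        learner, model = rng.sample(range(n_robots), 2)
        # Imperfect imitation: copy the model's behaviour, plus noise.
        behaviours[learner] = behaviours[model] + rng.gauss(0, noise)
    return initial, behaviours

initial, final = simulate_imitation()
# Behaviours that did not exist in the starting repertoire count as novel.
novel = [b for b in final if all(abs(b - i) > 1e-9 for i in initial)]
print(f"{len(novel)} of {len(final)} final behaviours are novel")
```

Run repeatedly, the population ends up clustered around behaviours nobody started with – a crude analogue of Winfield’s “completely new behaviour takes over, like fashion”.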
So while we are paying attention to the ways in which AI can help us with short-term decisions, making us more efficient and productive in the process, the long term remains underinvestigated – certainly within the media and advertising industry.
I would start with the language. We have become accustomed to talking about AI in the context of “superintelligence”. This leads us to depict the possibilities of AI as cognitive only. As Winfield’s study suggests, more is possible. And so I would encourage us all to start talking not of “superintelligence” but of “social intelligence”, to reclaim the AI conversation as a cultural one rather than a purely cognitive one. And maybe, just maybe, that will help engage us all in the long-term ethical debates around AI, not just the short-term functional, financial ones about how AI makes things quicker and cheaper.
You think you are safe, that your job is safe, that the industry is safe because you can stop robots from creating culture? I wouldn’t be so sure. Perhaps 100 years from now, artificially intelligent beings will be talking about “the year of the human”. Wouldn’t that be strange?