Machine Learning in the Attention Economy
Early in 2017, the School of Media & Public Affairs at George Washington University (GWU) conducted a study on the rise of “attention metrics” in publishing and its use in both editorial and advertising.
Attention metrics, according to the study, “refers to measures of website visitors’ engaged time, determined by concrete evidence of their presence on a page, such as cursor movement, keystrokes, and scrolling.” Two things struck me when reading this paper:
1. Attention metrics were limited to measuring websites only.
2. The winners in the Attention Economy were predicted to be top-tier publishers (mainstream and digital) who already have reach and resources. Smaller and local publishers were not seen as likely candidates for this type of technology due to their lack of both. So where does that leave mobile news apps, which are on the rise, and the majority of newspaper and magazine publishers around the world?
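To ground the definition of attention metrics above: engaged time is typically computed from a stream of activity events. Here is a minimal sketch (my illustration, not the GWU study's methodology) that credits the time between consecutive events, capped at a timeout so an idle, open tab doesn't inflate the number:

```python
def engaged_seconds(event_times, timeout=5.0):
    """Estimate engaged time from timestamps of activity events
    (cursor moves, keystrokes, scrolls), in seconds.

    Each gap between consecutive events is credited up to `timeout`
    seconds; anything longer is assumed to be idle time."""
    times = sorted(event_times)
    total = 0.0
    for prev, curr in zip(times, times[1:]):
        total += min(curr - prev, timeout)
    return total

# A visitor active at t=0, 2, 4, then idle until t=30, 31:
# credited 2 + 2 + 5 (capped) + 1 = 10 seconds of engagement.
print(engaged_seconds([0, 2, 4, 30, 31]))  # → 10.0
```

The timeout value is the judgment call here: too short and you undercount slow readers, too long and you count abandoned tabs.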
Now, I’m not saying I’m against attention metrics at all, having written about “smart data” and the power of behavioral analytics many times. I just believe they need to be inclusive in terms of platforms and publishers, and they need to work in real time to ensure that the metrics are being used for the right reasons — to create a more engaging experience for readers.
Attention Economy — the global digital currency
We live in a world that bombards us with content every minute of every day. There are far too many things competing for our attention (news, social media, advertising, entertainment, etc.), which require us to be more and more diligent in separating the wheat from the chaff; but we can’t do it alone.
That’s why platforms that truly serve the needs of users, by using technology to help turn content chaos into content control, will win in the battle for the big bucks.
Recently the CEO of Netflix said its biggest competitor wasn’t HBO or Amazon as one might expect; it was sleep. This is an excellent example of a company that understands its customers and focuses on them rather than real or imagined corporate challengers. It’s no wonder Netflix has over 100 million subscribers worldwide.
I just wish more publishers would follow suit. Sadly, too many of them still view readers in a negative light because they’re not willing to pay. So they spend too much time and money worrying about Facebook, Google, and other platforms stealing money from their pockets — money to which they feel entitled despite how they’ve treated their audiences over the past two decades with rubbish-ridden websites and quality-deprived content.
Attention is the most valuable commodity on the planet for consumer-centric companies — a global currency that most publishers don’t pay enough “attention” to.
“Optimizing a site to attract and keep a visitor’s attention requires more than measurement. It takes expertise and resources — human, technological, and financial — that most news publishers simply don’t have. Tech giants, however, have created a virtuous cycle of measuring attention, analyzing the massive data intake, and using it to optimize their sites, thereby getting even more attention.”
Matthew Hindman, Associate Professor School of Media and Public Affairs, George Washington University
Hindman is right. It takes a lot of technological and behavioral analytics expertise to build a platform that can attract and retain a user’s attention and keep them coming back for more. It takes Artificial Intelligence (AI) in the form of unsupervised machine learning algorithms to review massive amounts of data and determine the optimal content to present to a reader to maximize engagement — when, where, and how it should be presented.
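To make that concrete, one common unsupervised approach is to cluster readers by engagement features and then tailor what is surfaced to each cluster. The sketch below runs a toy k-means over hypothetical (engaged minutes per day, articles per week) vectors — an illustration of the idea, not any publisher's actual system:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def kmeans(points, k, iters=20, seed=0):
    """Cluster points into k groups; returns (centers, groups)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # pick k initial centers at random
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:              # assign each point to its nearest center
            nearest = min(range(k), key=lambda c: dist2(p, centers[c]))
            groups[nearest].append(p)
        # move each center to the mean of its group (keep it if the group is empty)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical readers: (engaged minutes/day, articles/week)
readers = [(2, 1), (3, 2), (2, 2),        # casual skimmers
           (45, 20), (50, 22), (48, 18)]  # heavy readers
centers, groups = kmeans(readers, k=2)
```

A real system would use far richer features (time of day, device, topic affinity) and a less brittle algorithm, but the shape of the pipeline — measure, cluster, adapt — is the same.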
Human complexities and machine learning
In March 2017, Netflix announced that it would replace its five-star review system with a binary “thumbs up/thumbs down” ranking. On the surface, one would question how a binary rating could possibly be better than a five-star scale.
There are two reasons. Through its massive amount of user data, Netflix found that viewers…
1. Tend to volunteer 200% more ratings when given the choice of thumbs up/down versus five stars
2. Often rank more respected content (e.g. a notable documentary) with five stars and more frivolous content (e.g. a comedy) with a single star, despite being far more likely to actually watch the comedy
The first discovery probably comes as no surprise since a less complex choice is easier for viewers to make. And since more ratings means more data for the recommendation engine, it’s natural that Netflix would switch to a binary rating system to gather more information.
The second finding, however, is a bit more complex and can have fundamental implications on the design of learning machines. This conflicting behavior is a good example of “cognitive dissonance” in action. Let me explain…
Back in 1959, what is now recognized as the classic experiment of cognitive dissonance was reported in the Journal of Abnormal and Social Psychology by researchers Leon Festinger and James M. Carlsmith.
Undergraduate students of Introductory Psychology at Stanford University were asked to perform a boring task and then tell another subject that the task was exciting.
Half of the subjects were paid $20 to do that — the other half only $1. Behaviorists theorized that the $20 recipients would like the task more because of the monetary value they would associate with their role in “selling it”.
Cognitive dissonance theorists believed that those paid only $1 would feel the most inner conflict between the belief that they were not evil or stupid with the action of carrying out a boring task and then lying to another person — all for only a dollar.
So they predicted that those in the $1 group would be more motivated to resolve their dissonance by re-conceptualizing or rationalizing their actions. They would form the belief that the boring task was, in fact, fun — which is exactly what happened.
In the case of Netflix viewers, cognitive dissonance occurs when a viewer experiences conflict over what they thought they should watch (the documentary) with what they actually watched (a frivolous comedy).
If one were to subscribe to Festinger’s theory, these Netflix viewers would resort to “selective exposure.”
Selective exposure refers to an individual’s tendency to seek out information that reinforces their opinions, while avoiding content that conflicts with them.
According to Festinger, when people encounter ideas that don’t map to their pre-existing beliefs, selective exposure helps produce harmony between them.
So in the case of the $1 subjects, they would search for ways to support the belief that the boring task was actually fun.
In the case of Netflix viewers, what would they do? Would they force themselves to watch more documentaries to support their ranking? Would they ask people they admire/trust (e.g. friends, family or influencers on social media) their opinions about the documentary to justify their rating of the documentary?
There’s no data to provide those insights, but it is fascinating stuff, don’t you think? That said, regardless of what those viewers did, there’s little doubt that human beings often obfuscate their true preferences even from themselves.
This makes the design of a learning machine even more challenging because it needs to examine and analyze closely what consumers do (e.g. give a documentary five stars without watching it) versus what they prefer to do (e.g. watch Dumb and Dumber, but still give it only one star).
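One common way for a learning machine to handle this gap between stated and revealed preference — a hypothetical sketch, not Netflix's actual model — is to weight observed behavior more heavily than the explicit rating when scoring a title:

```python
def preference_score(stars, minutes_watched, w_behavior=0.7):
    """Blend an explicit rating with implicit watch behavior.

    stars: explicit rating, 1-5
    minutes_watched: implicit signal, capped at 120 minutes
    w_behavior: weight on behavior (the remainder goes to the rating)
    """
    explicit = (stars - 1) / 4                 # normalize 1-5 stars to 0-1
    implicit = min(minutes_watched / 120, 1.0) # normalize watch time to 0-1
    return w_behavior * implicit + (1 - w_behavior) * explicit

# The five-star documentary the viewer never actually watches...
documentary = preference_score(stars=5, minutes_watched=0)   # ≈ 0.3
# ...versus the one-star comedy they binged.
comedy = preference_score(stars=1, minutes_watched=200)      # ≈ 0.7
```

With behavior weighted at 0.7, the binged comedy outscores the unwatched documentary — the machine recommends what the viewer does, not what they say.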
There are many scholarly articles on why this behavioral dichotomy exists and I invite you to check them out if you’re suffering from insomnia. The opinions of these experts may conflict with every theory before them, but that doesn’t change the fact that all humans suffer from intrapersonal conflict that has yet to be fully understood.
So, in this increasingly algorithm-driven world, we must be careful not to treat all data as “truth,” as our friend Esther Dyson warns…
“We need to be very aware of the influence of the data used by the algorithms in our lives. People make the decisions; AI just makes us more efficient in reapplying the criteria and biases of people’s decisions in new but similar situations. The most important thing to consider is that much of what happens is under our control, but ‘our’ is an ambiguous concept.”
She knows of which she speaks, having worked in AI since the 1980s. Algorithms are driven by data that comes from millions of people — flawed human beings who are not as predictable, consistent or even reliable as we would like. We’ve seen this far too often lately in the rampant rise of fake news and the editors and duped readers who help spread misinformation and propaganda. It makes it hard to trust anyone these days.
Trust in Algorithms
When I first started researching this article, I couldn’t help but recall a Reuters Institute report, Brand and trust in a fragmented news environment, which discovered that most people (more so the younger or tech-savvy ones) prefer algorithms over human editors when it comes to news curation.
News consumers value the “independence of algorithms” — believing them to be less biased or swayed by editorial and political agendas. They also like the fact that content is selected based on their personal reading habits.
But, and this is quite interesting, some participants — particularly those using news aggregators — thought that algorithms helped introduce them to a broader range of content and brands based on their interests and preferences. As one participant (aged 20–34, US) put it: “It gets a variety of things like I’m interested in certain topics that I probably wouldn’t find or I’d have to search for it myself so it’s like a one stop shop of things that interest me.”
But as trusted as algorithms are with readers, they are not all created equal. All of us have seen evidence of that with echo chambers being spawned out of the implementation of inferior recommendation algorithms.
So consumers trust algorithms, but what do publishers think of them?
According to a recent survey done with 100 newspaper publishers, increasing digital audiences was not only their biggest challenge, it was a higher priority than both subscription and advertising revenues.
However, despite users preferring algorithms to curate their news, fewer than a third of publishers were using the content aggregators most popular with readers. That seems odd given that they are trying to increase reach. How can they walk away from two billion Facebook users? Here again, we see an example of cognitive dissonance at play. The majority of publishers say they want to increase their audience, but then refuse to work with those offering access to powerful recommendation algorithms that could help them reach millions.
This is something I’ve never understood, having been a strong advocate for “be everywhere your readers are”, even when that everywhere is on one of our so-called competitor’s platforms.
But, like Netflix, PressReader doesn’t consider Magzter, Texture, Readly, or Blendle competitors. It’s actually the publishers who don’t recognize that they need help from experts outside their walled gardens. They still believe they can make it on their own.
They don’t see value in technology companies, even though those companies have the talent and expertise in AI, learning machines, and algorithms to deliver not just what readers think they want, but a broader range of relevant content that, based on behavioral analytics, will attract and engage them longer.
So how are these publishers bringing harmony to their inner conflict created by wanting a bigger audience, but refusing help from those who know how to deliver it?
I guess you have to ask them, but my first thought would be that they are selectively exposing themselves to only things and beliefs that live within their tightly protected publishing ecosystem, where outsiders are not welcome.
I may be wrong, but whatever they’re doing, it’s painfully obvious that they’re not helping themselves, their readers or, in most cases, shareholders.
Stanford psychologist Stephen Kull observed in a study of nuclear planners that “The instinct to survive is strong, but the instinct to alleviate fear is stronger.”
Is fear of aggregation, platforms, algorithms, and tech companies what’s paralyzing publishers?
Are they so entrenched in their selective behaviors that they fail to see that they’re moving away from survival rather than towards it?
Is there any hope? Absolutely!
The passion for quality content has never been stronger, and I believe that, as an industry, we can work together for a better future for all. It all starts with trust and the willingness to collaborate. Publishers may not trust anyone but themselves, but if we, as technology and platform providers, do, we just might help pull our industry out of the grave it’s digging for itself.
If you think it’s worth the effort like I do,
"It's not about denial. I'm just very selective about the reality I accept." - Bill Watterson, American cartoonist