
HOW TO READ YOUR CUSTOMERS' MINDS

BY TOM FOSTER

THESE ENTREPRENEURS ARE RACING TO CLAIM A NEW TECH FRONTIER: YOUR EMOTIONS

DEEP IN THE BOWELS of Houston’s 72,000-seat NRG Stadium, in a curtained-off makeshift room near the court where the Villanova Wildcats and the University of North Carolina Tar Heels are playing for the NCAA basketball championship, a small team of engineers and data scientists from a company called Lightwave huddles over laptops watching a stream of real-time data. But the engineers aren’t looking at shooting percentages. The millions of data points show how excited the fans are every 10th of a second—whether they’re clapping, screaming, jumping up and down, or sitting sullenly.

Throughout the stadium, fans wear custom-built wristbands that send real-time biometric data to the engineers, while dozens of hidden sensors record decibel levels and other intel. When something big happens, another Lightwave team in New York City races to design and tweet slick infographics. For almost 30 seconds before Villanova made its game-winning buzzer beater, fans of both teams sat motionless and quiet, utterly transfixed. Lightwave’s hard data showed an audience at peak engagement—information that marketers live for.

Lightwave, which calls itself an “applied neuroscience platform,” is the creation of a 29-year-old named Rana June—a former professional DJ fond of blue-dyed hair and vintage heavy metal T-shirts, whose appearance contrasts starkly with her tendency to talk tech jargon. Since it launched in 2012, the 10-person startup has parsed people’s biometrics for Google, Pepsi, 20th Century Fox, iHeartRadio, and Jaguar, among others. For the NCAA championship, tournament sponsor Degree antiperspirant—owned by the $140 billion conglomerate Unilever—hired Lightwave to study fan excitement.

Lightwave is one of several companies furiously at work creating a new field—let’s call it the emotion economy—focused on sensing and analyzing consumers’ mental states. In January, Apple bought a San Diego startup called Emotient, which uses facial-tracking technology to identify people’s feelings. A few months earlier, the consumer-research giant Nielsen bought Boston-based Innerscope, which combines facial-cue recognition with Lightwave-style wearables data.

And 2,700 miles from June’s office in San Francisco, in a low-rise building wedged between a strip mall and railroad tracks outside of Boston, another emotion-measuring pioneer named Rana—Rana el Kaliouby—has spent the past year and a half strategizing to make the facial-cue recognition company she co-founded, Affectiva, the essential hub of the emerging emotion economy. She wants to make it the platform any business—from an app maker to a car company—can use to add emotion sensing to its products.

Thus far, most of the emotion economy’s high-profile projects have been small scale. In Houston, only 150 students wore Lightwave’s wristbands. The data was used to create entertaining tweets, not to help Degree sneak into people’s wallets by targeting their emotional states. Nor are stores sending you mood-targeted offers as you wander the aisles. But Paul Zak, director of the Center for Neuroeconomics Studies at Claremont Graduate University, in Claremont, California, says that emotion-optimized products and services will be standard fare “very, very soon. I want to say ‘today.’ ” As June puts it, “Any business that has a customer is going to be affected by the ability to measure the emotional reaction of the customer.”

JUNE, WHOSE given name is Rana June Sobhany, stumbled onto the idea for Lightwave when she was an electronic dance music DJ in her early 20s. After dropping out of college, she’d helped found a mobile ad measurement company called Medialets in 2008, which was sold to global ad giant WPP in 2015. (She left before that sale.)

When the first iPad came out in 2010, June sensed what seemed like a wholly different opportunity: The tablet could be a musical instrument. EDM was exploding, and yet, during live performances, star DJs couldn’t leave their banks of computers and turntables; the best they could manage for onstage theatrics was to periodically throw their arms in the air to strike the much-mocked “Jesus pose.” June, who grew up playing in punk bands around Washington, D.C., started DJ’ing with a system she’d rigged together that included six iPads and an exoskeleton of sorts that let her attach iPads to her arms and roam the stage like a lead guitarist.

She played 100 shows a year, for thousands of dollars each—Vegas one night, New York a few nights later, then L.A. But as the shows got bigger, she realized she had little insight into how the crowd was responding while she performed; in larger venues, bright stage lights often prevent artists from seeing past the first few rows of the crowd. Were they dancing wildly, or idly standing around? She didn’t know. “Every night, I’d get off the stage and check what people were saying on Twitter. But you don’t know who they are. And if they were tweeting during a show, were they really engaged?” She shakes her head. “It’s such an incomplete data set.”

She decided to try something new while DJ’ing the People and Time party celebrating the 2012 White House Correspondents’ Dinner: She used Microsoft’s gesture-control technology, Kinect, to create a perimeter of motion detectors around the room and let the resulting heat map of crowd density guide her performance. If part of the crowd seemed thinly populated or bored, she’d head in that direction. It was, she says, her eureka moment. She started to shape a business that would be the real-time crowd-analytics brain for events.

She’s wearing a ripped AC/DC T-shirt and metal skull-tipped Alexander McQueen stilettos as she tells me this, and she’s perched on a sofa in the bar on the first floor of her office in San Francisco’s SoMa neighborhood. The three-story townhouse was a music venue before Lightwave moved in, and the company left the bar and stage intact, for holding parties and testing its technology. June seeded the company with the proceeds from DJ’ing and funding from friends. Since then, it has run on its own revenue, and June has invested its earnings back into the company.

For its first partnership, Lightwave teamed with Pepsi to create a “bioreactive” concert at Austin’s South by Southwest Interactive conference in 2014. Attendees wore wristbands and “unlocked” prizes, like a round of drinks, by getting hot and sweaty while dancing. That event garnered good press, and other high-profile clients came calling. The global ad agency Mindshare hired Lightwave to measure attendees at the Cannes advertising festival, and connected the company to Jaguar to analyze the crowd at the Wimbledon tennis championship. There was a Google-sponsored concert in Singapore featuring star DJ Paul Oakenfold, a Cisco event at which Lightwave data determined the winner of a pitch competition, and a TED conference at which Lightwave compared attendees’ self-perceptions with their responses to video scenes meant to evoke feelings like fear and compassion. (Lightwave found people often underrated their reactions.)

Perhaps most intriguingly, last year 20th Century Fox employed Lightwave to measure viewers’ reactions to prerelease screenings of The Revenant, the Oscar-winning Leonardo DiCaprio epic. Typically, movie screenings are followed by a survey, but such feedback can be unreliable. Viewers can be swayed by others, or report what they think they should think. Survey data also does a poor job of evaluating specific moments, because it is gathered after the fact. For The Revenant screenings, audience members wore wristbands that measured physiological responses—heart rate variability, skin conductance (sweat, basically), body temperature, movement, and noise—throughout the film. Among other things, the study identified 15 moments when the audience experienced the fight-or-flight response (as determined by a specific heart-rate pattern) and 4,716 seconds during which viewers were motionless, signaling peak filmgoer engagement.

By mapping those emotional responses to the corresponding plot points, the studio gleaned objective data about the film—something that would otherwise be judged subjectively. The Revenant was finished and locked when the screenings took place, but the project captured the attention of many in Hollywood. June says Lightwave now works with several other studios, “much earlier in the creative process”—during the making of the film as well as in the formation of marketing plans. It’s not hard to see why. If a studio learns that women and men respond differently to various scenes, it might cut separate trailers depending on which audience it’s targeting. If data were to show diminishing engagement in the latter parts of a film, it might be reedited. In The Revenant’s case, the moment of highest overall emotional intensity came right at the end, suggesting that the film, at two hours 36 minutes, was not, in fact, too long.

Jeff Malmad, Mindshare North America’s head of mobile, sees that as a model for how emotional data will be used—to understand consumers’ “moments of receptivity,” not only to target those moments with ads but also to use them to create better products. “What are the things that get you excited in a store, or really stress you out when you board an airplane?” he says. “Those are very positive things to learn.”

ASIDE FROM THE SHARED name and profession, Rana June and Rana el Kaliouby could hardly be more different. While June is an artist at heart, el Kaliouby, who’s 37, is pure scientist, steeped in the academic literature of computer science and psychology. She was a college student in Cairo in the late ’90s when she first learned of some of the pioneering work on emotion-sensing computing being done by MIT professor Rosalind Picard. Several years later, when el Kaliouby was finishing her PhD at Cambridge, she managed to meet Picard when the professor visited the U.K. The two hit it off, and soon after they teamed up at MIT’s Media Lab, armed with a near-million-dollar National Science Foundation grant, to prototype a sort of emotional hearing aid for autistic people—essentially a wearable camera that scanned people’s facial expressions to interpret social cues, in real time, for the person wearing the device.

A very cool and noble idea, but not one targeting the biggest market. In 2008, el Kaliouby posted a demo of the software—called MindReader—to a section of the Media Lab site where sponsoring companies test the latest inventions. The number of inquiries—from Toyota, Microsoft, Fox, Hallmark, and many others—changed everything. The companies wanted to test TV ads, detect sleepy drivers, spot possible security threats—there were dozens of other uses for MindReader. The lab’s director suggested hiring a CEO and spinning out as a startup.

Affectiva was born in 2009, and the first CEO el Kaliouby hired zeroed in on the most immediate opportunity: ad testing. The company created a program called Affdex that works with standard webcams to scan people’s faces as they watch a computer or TV screen. Affectiva went on to raise more than $30 million from investors including the Silicon Valley venture capital powerhouse Kleiner Perkins, and grew to 20-some employees in the U.S. and another 20 in Cairo, who manually code facial expressions to feed into the company’s machine-vision algorithms.

And yet, el Kaliouby was restless. “I had this moment one day in the late summer of 2014,” she recalls, “when I woke up and said, ‘What are we doing here?’ ” Advertising was never her dream. So the company created a version of its tech for mobile devices, released a developer’s kit to allow other companies to use its facial-cue recognition system, and began to reposition itself to serve a broader array of clients, in fields as diverse as health care, education, and automotive. “Advertising and media continues to be a big chunk of our revenue,” she says. “But you can do the thing in front of you that’s very low risk, or you can do the thing that’s potentially huge.”

El Kaliouby argues that, as we spend ever more time with our mobile devices, and more products around us are connected via the internet of things, they will need to get better at adapting to our moods. “Studies tell us that humans with high emotional intelligence are more likable, more persuasive, and more successful,” she says. “Our thesis is that digital devices and services need emotional intelligence as well,” so they can realize the same benefits and serve us better.

Perhaps the best example of how this will work is the growing category of so-called “social robots,” like Amazon’s Echo, a cylinder that sits in your living room and responds to voice commands to control services ranging from ordering groceries to playing music to managing your schedule. “For better or worse, people develop intimate relationships with these digital assistants,” el Kaliouby says. “People confide in Siri that they’re sexually abused, or depressed. Right now, if you do that, it will just Google the phrase. It should really show empathy. It should say, ‘Oh, my goodness. How awful. Can I get you some help?’ ”

Since she expanded Affectiva’s horizons, el Kaliouby has again found herself surprised by what people dream up for her technology. A human resources startup uses it to screen video interviews. An education company uses it to create professional training scenarios. A Middle Eastern country wants to use it to study the public mood. Especially since Apple’s acquisition of Emotient this year essentially validated the space, she says, the volume of new ideas has increased rapidly. “We don’t know what Apple is going to do with Emotient,” she says—though it’s not hard to imagine its usefulness to Siri, or the much-rumored Apple car. But as a closed system, Apple leaves room for another company to become an open system: think Google taking on the iPhone with Android. “The opportunity for us,” she says, “is to become the platform that powers all these other creative scenarios.”

BACK IN SAN FRANCISCO, Rana June outlines a similar vision. “We’re building an empathy brain for technology,” she explains, “because right now, technology does not understand the human experience. You have companies that are really good at search, or good at social, or hardware. This is a new planet in the tech solar system: I think you’re going to have a company emerge that’s really good at emotions that remains independent and provides a tool set or operating system for other companies to incorporate.”

So which Rana wins? “The face remains the best window we have on moment-to-moment changes in emotional response,” says Paul Ekman, a psychologist who, in 1978, co-published the Facial Action Coding System, a seminal, 527-page reference tome of every possible facial muscle movement and how it maps to seven fundamental emotions (happiness, sadness, surprise, fear, anger, disgust, and contempt). His work is the foundation on which all efforts to algorithmically read faces—including Affectiva’s—are built. Ekman believes that Lightwave-esque efforts to track emotion using physiological cues are scientifically shaky. “There is no consensus among researchers,” he says, about whether involuntary functions like heart rate can signal emotion accurately.

But Lightwave can collect data in almost any environment, no matter how chaotic. “We’re saying, ‘Don’t worry. Just do whatever you would be doing, and we’ll take the data from there,’ ” June says. Facial tracking can’t quite do that. “The conditions you need to do face tracking are very specific,” she says. “If you’re watching a film, you need enough light to illuminate the face. It just puts you in these unnatural environments. Let’s say you’re at a sporting event—are you going to put hundreds or thousands of facial-tracking inputs around the stadium?”

Or maybe, as the internet of things stitches itself together and more devices and products speak to one another, they’ll simply share data from different emotion sensors, el Kaliouby predicts. “The way I see it, it doesn’t matter that your Fitbit doesn’t have a camera, because your phone does, and your laptop does, and your TV will. All that data gets fused with biometrics from your wearable device and builds an emotional profile for you.” Affectiva, she adds, is exploring ways to measure emotion using the sound of your voice.

Lightwave, meanwhile, is exploring ways to make its sensors ever more invisible to wearers, and easier for the company to activate. For the NCAA championship at NRG Stadium, June and her team had to convince students from each school to be outfitted with wristbands; afterward, they hustled to rendezvous with the students to recover the devices. She envisions people getting stamped with temporary conductive skin tattoos when they enter a Lightwave event; the stamps will take measurements and transmit data. (A Boston company called MC10 already makes stickerlike “biostamp” sensors for the health care market.)

One thing is certain: Privacy battles will erupt as our inner lives become a currency. El Kaliouby says she’s repeatedly turned away clients who want to use her technology for any kind of surveillance: “We want to support the uses where people want to share their emotions, not uses that try to suck information out of you that you have not decided to share.” But, she predicts, in three to five years, most of our devices will be emotion-aware. And just as location tracking went from creepy to standard in a few years, emotion will simply be a standard part of business.

El Kaliouby became Affectiva’s CEO in May and is pursuing a fourth round of funding. June is mulling taking on investors after three years of bootstrapping. Both sense a very big game is under way. “Think about it,” June says. “I tweet to my airline that my bag got lost, and I expect a response. What if I’m in your store and something happens that makes me start to feel angry? You as a business are going to have to learn to respond to that, in the same way that you had to learn to monitor social media.” Which is to say: Get ready.

Rana el Kaliouby’s facial-reading startup, Affectiva, grew out of a fascination she developed in college and subsequent work she did at MIT’s Media Lab.
In Lightwave founder Rana June’s previous career as a DJ, she chafed at being unable to read a crowd as she performed. So she built a company to do just that.
WHAT DO MOVIEGOERS REALLY THINK? LIGHTWAVE’S DATA FROM SCREENINGS OF THE REVENANT FOUND THEM TRANSFIXED, NOT BORED, DESPITE ITS EPIC LENGTH. (Chart labels: BORED, ECSTATIC, EXHILARATED, CONTENTED, FRUSTRATED, DISTRACTED, AFRAID, WORRIED, ANGRY, TRANSFIXED.)
