The Phnom Penh Post

Silicon Valley’s blind spots and biases ruining tech for rest of us

- Sara Wachter-Boettcher

IT WAS Christmas Eve 2014 when Eric Meyer logged onto Facebook, expecting the usual holiday photos and well-wishes from friends and family. Instead, Facebook showed him an ad for its new Year in Review feature.

Year in Review allowed Facebook users to create albums of their highlights from the year – top posts, photos from vacations, that sort of thing – and share them with their friends. But Meyer wasn’t keen on reliving 2014, the year his daughter Rebecca died of aggressive brain cancer. She was 6.

Facebook didn’t give him a choice. Instead, it created a sample Year in Review album for him, and posted it to his page to encourage him to share it. “Here’s what your year looked like!” the copy read. Below it was a picture of Rebecca. And surrounding her smiling face and curly hair were illustrations, made by Facebook, of partyers dancing amid balloons and streamers.

Meyer, a friend of mine who is also one of the Web’s early programmers and bloggers, was gutted. “Yes, my year looked like that,” he wrote in Slate. “True enough. My year looked like the now-absent face of my Little Spark. It was still unkind to remind me so tactlessly, and without any consent on my part.”

When I started working in tech in 2007, I could never have imagined a blunder like this. Facebook had just begun transforming from a college-centric site to the behemoth it’s since become. Google had just bought YouTube. The iPhone hadn’t even launched yet. People were still writing “click here” on their links (and I was trying to get them to stop). But seven years later, something had started to feel off.

Despite all the improvements in technology, my peers and I weren’t getting better at serving people. And Meyer’s story really drove that home. Facebook had designed an experience that worked well for people who’d had a good year, people who had vacations or weddings or parties to remember. But because the design team focused only on positive experiences, it hadn’t thought enough about what would happen for everyone else – for people whose years were marred by grief, illness, heartbreak or disaster.

It’s not just Facebook, and it’s not just grief or trauma. The more I started paying attention to how tech products are designed, the more I started noticing how often they’re full of blind spots, biases and outright ethical blunders – and how often those oversights can exacerbate unfairness and leave vulnerable people out.

Take the spring of 2015, when Louise Selby, a pediatrician in Cambridge, England, joined PureGym, a British chain. Every time she tried to swipe her membership card to access the women’s locker room, she was denied; the system simply wouldn’t authorise her. Finally, PureGym got to the bottom of things: The third-party software it used to manage its membership data – software used at all 90 locations across England – was relying on members’ titles to determine which locker room they could access. And the title “doctor” was coded as male.

In 2016, JAMA published a study showing that the artificial intelligence built into smartphones from Apple, Samsung, Google and Microsoft wasn’t programmed to help during a crisis. The phones’ personal assistants didn’t understand words like “rape” or “I was beaten up by my husband.” In fact, instead of doing even a simple Web search, Apple’s Siri cracked jokes and mocked users.

It wasn’t the first time. In 2011, if you told Siri you were thinking of shooting yourself, it gave you directions to a gun store. After getting bad press, Apple partnered with the National Suicide Prevention Lifeline to offer users help when they said something that Siri identified as suicidal. But five years later, no one had looked beyond that one fix. Apple had no problem investing in building jokes and clever comebacks into the interface from the start. But investing in crisis support or the safety of its users? Just not a priority.

The examples go on and on. In August 2016, Snapchat launched a new face-morphing filter – one it said was “inspired by anime”. In reality, the effect had a lot more in common with Mickey Rooney playing Mr. Yunioshi in Breakfast at Tiffany’s than a character from Akira. The filter morphed users’ selfies into bucktoothed, squinty-eyed caricatures – the hallmarks of “yellowface”, the term for white people donning makeup and masquerading as Asian stereotypes. Snapchat said that this particular filter wouldn’t be coming back, but insisted it hadn’t done anything wrong, even as Asian users mounted a campaign to delete the app.

Individually, it’s easy to write each of these off as a simple slip-up, an oversight, a shame. We all make mistakes, right? But when we start looking at them together, a clear pattern emerges of an industry that is willing to invest plenty of resources in chasing “delight” and “disruption” but one that hasn’t stopped to think about who’s being served by its products and who’s being left behind, alienated or insulted.

There’s a running joke in the HBO comedy Silicon Valley: Every would-be entrepreneur, almost always a 20-something man, at some point announces that his product will “make the world a better place” – and then describes something absurdly useless or technically trivial (“constructing elegant hierarchies for maximum code reuse and extensibility”, for example).

I’m sure it’s funny, but I don’t watch the show regularly. It’s too real. It brings me back to too many terrible conversations at tech conferences, where some guy who’s never held a job is backing me into a corner at cocktail hour and droning on about his idea to “disrupt” some industry or other, while I desperately scan the room for a way out.

What Silicon Valley gets right is that tech is an insular industry: a world of mostly white guys who’ve been told they’re special, the best and brightest. It’s a story that tech loves to tell about itself, because the more everyone on the outside sees technology as magic and programmers as geniuses, the more the industry can keep doing whatever it wants. And with gobs of money and little public scrutiny, far too many people in tech have started to believe that they’re truly saving the world. Even when they’re just making another ride-hailing app or restaurant algorithm. Even when their products actually harm more people than they help.

We can’t afford that anymore. Ten years ago, tech was still, in many ways, a discrete industry – easy to count and quantify. Today, it’s more accurate to call it a core underpinning of every industry. As tech entrepreneur and activist Anil Dash writes, “Every industry and every sector of society is powered by technology today, and being transformed by the choices made by technologists.”

Tech is only going to become more fundamental to the way we understand and interact with our communities and governments. Courts are using software algorithms to influence criminal sentencing. Detailed medical records are being stored in databases. And, as information studies scholar Safiya Noble puts it, “People are using search engines rather than libraries or teachers to make sense of the world we’re inhabiting.”

The more technology becomes embedded in all aspects of life, the more it matters whether that technology is biased, alienating or harmful. The more it matters whether it works for real people facing real-life stress. And the more it matters that we stop allowing tech to make us feel like we’re not important enough to design for. Because there’s nothing wrong with you. There’s something wrong with tech.

KEVORK DJANSEZIAN/GETTY IMAGES NORTH AMERICA/AFP – A logo of Snapchat is seen at the front entrance of the company’s new headquarters on November 14, 2013, in Venice, California.
