Baltimore Sun Sunday

Did 2018 reveal tech dystopia?

Surveillance, data mining, AI came to forefront of issues


We may remember 2018 as the year when technology’s dystopian potential became clear, from Facebook’s role enabling the harvesting of our personal data for election interference to a seemingly unending series of revelations about the dark side of Silicon Valley’s connect-everything ethos.

The list is long: High-tech tools for immigration crackdowns. Fears of smartphone addiction. YouTube algorithms that steer youths into extremism. An experiment in gene-edited babies.

Doorbells and concert venues that can pinpoint individual faces and alert police. Repurposing genealogy websites to hunt for crime suspects based on a relative’s DNA. Automated systems that keep tabs on workers’ movements and habits. Electric cars in Shanghai transmitting their every movement to the government.

It’s been enough to exhaust even the most imaginative sci-fi visionaries.

“It doesn’t so much feel like we’re living in the future now, as that we’re living in a retro-future,” novelist William Gibson wrote last month on Twitter. “A dark, goofy ’90s retro-future.”

More awaits us in 2019, as surveillance and data-collection efforts ramp up and artificial intelligence systems start sounding more human, reading facial expressions and generating fake video images so realistic that it will be harder to detect malicious distortions of the truth.

But there are also countermeasures afoot in Congress and state government — and even among tech-firm employees who are more active in ensuring their work is put to positive ends.

“Something that was heartening this year was that accompanying this parade of scandals was a growing public awareness that there’s an accountability crisis in tech,” said Meredith Whittaker, a co-founder of New York University’s AI Now Institute for studying the social implications of artificial intelligence.

The group has compiled a long list of what made 2018 so ominous, though many are examples of the public simply becoming newly aware of problems that have built up for years.

Among the most troubling cases was the revelation in March that political data-mining firm Cambridge Analytica swept up personal information of millions of Facebook users for the purpose of manipulating national elections.

“It really helped wake up people to the fact that these systems are actually touching the core of our lives and shaping our social institutions,” Whittaker said.

That was on top of other Facebook disasters, including its role in fomenting violence in Myanmar, major data breaches and ongoing concerns about its hosting of fake accounts for Russian propaganda.

It wasn’t just Facebook. Google attracted concern about its continuous surveillance of users after the Associated Press reported that it was tracking people’s movements whether they like it or not.

It also faced internal dissent over its collaboration with the U.S. military to create drones with “computer vision” to help find battlefield targets and a secret proposal to launch a censored search engine in China.

And it unveiled a remarkably human-like voice assistant that sounded so real that people on the other end of the phone didn’t know they were talking to a computer.

Internet pioneer Vint Cerf said he and other engineers never imagined their vision of a worldwide network of connected computers would morph 45 years later into a surveillance system that collects personal information or a propaganda machine that could sway elections.

“We were just trying to get it to work,” recalled Cerf, who is now Google’s chief internet evangelist. “But now that it’s in the hands of the general public, there are people who want it to work in a way that obviously does harm, or benefits themselves, or disrupts the political system. So we are going to have to deal with that.”

Part of experts’ concern about the leap to connecting every home device to the internet and letting computers do our work is that the technology is still buggy and shaped by human error and prejudice.

Uber and Tesla were investigated for fatal self-driving car crashes in March, IBM came under scrutiny for working with New York City police to build a facial recognition system that can detect ethnicity, and Amazon took heat for supplying its own flawed facial recognition service to law enforcement agencies.

At the same time, even some titans of technology have been sounding alarms. Prominent engineers and designers have increasingly spoken out about shielding children from the habit-forming tech products they helped create.

And then there’s Microsoft President Brad Smith, who in December called for regulating facial recognition technology so that the “year 2024 doesn’t look like a page” from George Orwell’s “1984.”

In a blog post and a Washington speech, Smith painted a bleak vision of all-seeing government surveillan­ce systems forcing dissidents to hide in darkened rooms “to tap in code with hand signals on each other’s arms.”

To avoid such an Orwellian scenario, Smith advocates regulating technology so that anyone about to subject themselves to surveillan­ce is properly notified. But privacy advocates argue that’s not enough.

Such debates are already happening in states such as Illinois, where a strict facial recognition law has faced tech industry challenges, and California, which in 2018 passed the nation’s most far-reaching law to give consumers more control over their personal data. It takes effect in 2020.

The issue could find new attention in Congress next year as more Republicans warm up to the idea of basic online privacy regulations and the incoming Democratic House majority takes a more skeptical approach to tech firms that many liberal politicians once viewed as allies — and prolific campaign donors.

