The Star Malaysia - Star2

Misinformation in the metaverse

Experts are worried that the content moderation challenges already besieging social media could be even worse in the new virtual- and augmented reality-powered worlds of the metaverse.


In their version of the metaverse, creators of the startup Sensorium Corp envision a fun-filled environment where your likeness can take a virtual tour of an abandoned undersea world, watch a livestreamed concert with French DJ Jean-Michel Jarre or chat with bots, such as leather-jacket-clad Kate, who enjoys white wine with her friends.

But at a demo of this virtual world at a tech conference in Lisbon earlier in 2021, things got weird. While attendees chatted with these virtual personas, some were introduced to a bald-headed bot named David who, when simply asked what he thought of vaccines, began spewing health misinformation. Vaccines, he claimed in one demo, are sometimes more dangerous than the diseases they try to prevent.

After their creation’s embarrassing display, David’s developers at Sensorium said they plan to add filters to limit what he can say about sensitive topics. But the moment illustrated how easy it might be for people to encounter offensive or misleading content in the metaverse – and how difficult it will be to control it.

Companies including Apple Inc, Microsoft Corp and Facebook parent Meta Platforms Inc are racing to build out the metaverse, an immersive digital world that evangelists say will eventually replace some in-person interactions. The technology is in its infancy, but industry watchers are raising alarms about whether the nightmarish content moderation challenges already plaguing social media could be even worse in these new virtual- and augmented reality-powered worlds.

Tech companies’ mostly dismal track record on policing offensive content has come under renewed scrutiny in recent months following the release of a cache of thousands of Meta’s internal documents to US regulators by former Facebook product manager Frances Haugen. The documents, which were provided to Congress and obtained by news organisations in redacted form, surfaced new details about how Meta’s algorithms spread harmful information such as conspiracy theories, hateful language and violence, and led to dozens of critical stories by the Wall Street Journal and a consortium of news organisations.

The reports naturally prompted questions about how Meta and others intend to patrol the burgeoning virtual world for offensive behaviour and misleading material.

“Despite the name change, Meta still allows purveyors of dangerous misinformation to thrive on its existing apps,” said Alex Cadier, managing director of Newsguard in the UK. “If the company hasn’t been able to effectively tackle misinformation on more simple platforms like Facebook and Instagram, it seems unlikely they’ll be able to do so in the much more complex metaverse.”

Meta executives haven’t ignored the criticism. As they build up hype about the metaverse, they’ve pledged to take into account the privacy and well-being of their users as they develop the platform.

The company also argues that these next-generation virtual worlds won’t be owned exclusively by Meta, but will come from a collection of engineers, creators and tech companies whose environments and products work together.

Those innovators, and regulators around the world, can start debating policies now to keep the metaverse safe, even before the underlying technology has been fully developed, executives say.

“In the past, the speed at which new technologies arrived sometimes left policy makers and regulators playing catchup,” said Nick Clegg, vice president of global affairs, in October 2021 at Meta’s annual Connect conference.

“It doesn’t have to be the case this time around because we have years before the metaverse we envision is fully realised.”

Meta also says it plans to work with human rights groups and government experts to responsibly develop the virtual world, and it’s investing US$50mil (RM209.32mil) to that end.

Sci-fi becomes real

To its evangelists, virtual and augmented reality will unlock the ability to experience the world in ways that previously existed only in the dreams of sci-fi novelists. Companies will be able to hold meetings in digital boardrooms, where employees in disparate locations can feel as if they are really together in one place. Friends will choose their own avatars and teleport together into concerts, exercise classes and 3D video games. Artists will be able to host creative experiences tailored to geographic locations in augmented reality, for any device holder to enjoy. Entrepreneurs will create virtual stores where digital and physical goods can be purchased.

But digital watchdogs say the same qualities that make the metaverse a tantalising innovation may also open the door even wider to harmful content. The realistic feeling of virtual reality-powered experiences could be a dangerous weapon in the hands of bad actors seeking to stoke hate, violence and terrorism.

“The Facebook Papers showed that the platform can function almost like a turn-key system for extremist recruiters and the metaverse would make it even easier to perpetrate that violence,” said Karen Kornbluh, director of the German Marshall Fund’s Digital Innovation and Democracy Initiative and former US ambassador to the Organisation for Economic Cooperation and Development.

Though the far-reaching, interconnected metaverse is still theoretical, existing virtual reality and gaming platforms offer a window into what kinds of problematic content could flourish there. The Facebook Papers revealed that the company already has evidence that offensive content is likely to make the jump from social to virtual. In one example, a Facebook employee describes experiencing a brush with racism while playing the virtual reality game Rec Room on an Oculus Quest headset.

After entering one of the most popular virtual worlds in the game, the staffer was greeted with “continuous chants of: ‘N***** N***** N*****’”. According to the documents, the employee wrote in an internal discussion forum that he or she tried to figure out who was yelling and how to report them, but couldn’t. Rec Room said it provides several controls to identify speakers even when that person isn’t visible, and in this case it banned the offending user’s account.

“I eventually gave up and left the world feeling defeated,” wrote the employee, whose name was redacted in the documents.

Bad VR behaviour

The abuse has also already reached other VR products. People on VRChat, a platform where users can explore worlds dressed as different avatars, describe an almost transformative experience where they’ve built a virtual community unparalleled in the real world. On a Reddit thread about VRChat, they also describe nearly unbearable amounts of racism, homophobia, transphobia – and “don’t forget the dumb Nazis”, as one VRChat user wrote. It’s not uncommon for players to walk around repeating the N-word, while some virtual worlds get raided by Hitler and KKK avatars. VRChat wrote in 2018 that it was working to address the “percentage of users that choose to engage in disrespectful or harmful behaviour” with a moderation team that “monitors VRChat constantly”. But, years later, players are still reporting harmful users, and say that “nothing is seemingly ever done”. Others try muting or blocking problematic users’ voices or avatars, but the frequency of abuse can be overwhelming.

People also describe racism on popular video games like Second Life and Fortnite; some women have described being sexually harassed or assaulted on virtual reality platforms; and parents have raised concerns that their children were being groomed on the seemingly innocuous Roblox gaming platform for kids.

Social media companies like Meta, Twitter Inc and Google’s YouTube have detailed policies that prohibit users from spreading offensive or dangerous content. To moderate their networks, most lean heavily on artificial intelligence systems to scan for images, text and videos that look like they could violate rules against hate speech or inciting violence. Sometimes those systems automatically remove the offensive posts. Other times the platforms apply special labels to the content or limit its visibility.

The degree to which the metaverse remains a safe space will depend partially on how companies train their AI systems to moderate the platforms, said Andrea-Emilio Rizzoli, the director of Switzerland’s Dalle Molle Institute for Artificial Intelligence. AI can be trained to detect and take down hate speech and misinformation, but systems can also inadvertently amplify it.

The level of problematic content in the metaverse will also depend on whether tech companies design digital environments to function like small invitation-only private groups or wide-open public squares. Whistle-blower Haugen has been openly critical of Facebook’s metaverse plans, but recently told European lawmakers that hate speech and misinformation in virtual worlds might not travel as far or as quickly as it does on social media, because most people would be interacting in small numbers.

But it’s also just as likely that Meta would integrate its current networks, including Facebook, Instagram and WhatsApp, into the metaverse, said Brent Mittelstadt, a data ethics research fellow at the Oxford Internet Institute.

“If they keep the same tools that have contributed to the spread of misinformation on their current platforms, it’s hard to say the metaverse is going to help,” said Mittelstadt, who is also a member of the Data Ethics Group at the Alan Turing Institute.

Considering that a great deal of the misinformation and hate speech could also arise during private interactions in the metaverse, Rizzoli added, platforms will face the same debates over free speech and censorship when deciding whether to take down harmful content. Do platforms want virtual beings to approach people and tell them their conversation is not fact-based, or to prevent them from having the conversation at all?

“This is a debatable issue,” Rizzoli said, “the type of control that you will be subjected to in this new metaverse.”

Defining and determining authenticity in the metaverse could also become more complicated. Tech companies could face tricky questions about how much freedom people should have to portray themselves as a member of a different race or gender, said Erick Ramirez, an associate professor at Santa Clara University. Deepfakes – videos or audio that use artificial intelligence to make someone appear to do or say something they didn’t – could evolve to become even more realistic and interactive in a metaverse world.

“There’s more room for deception,” said Ramirez, who recently participated in a roundtable discussion with Clegg about the policy implications of the metaverse. That kind of deceit “takes advantage of a lot of in-built psychology about how we interact with people and how we identify people”.

Virtual privacy

The metaverse could also compromise user privacy, advocates and researchers said. For instance, people who wear the augmented reality-powered glasses that are currently being developed by Snap Inc and Meta could end up recording information about other people around them without their knowledge or consent. Users exploring purely virtual worlds could also face digital harassment or stalking from bad actors.

“In the physical world, often you have to do some extra work in order to track somebody, for example, but the online world makes it much easier,” said Neil Chilson, a senior research fellow for technology and innovation at the right-leaning Charles Koch Institute, who also participat­ed in Meta’s roundtable.

Bill Stillwell, Meta product manager for VR privacy and integrity, said in a statement that developers have tools to moderate the experiences they create on Oculus, but the tools can always improve. “We want everyone to feel like they’re in control of their VR experience and to feel safe on our platform.”

Even metaverse supporters such as Chilson and Jarre, the French DJ who will soon hold virtual reality concerts, say regulators around the world will have to draft new rules around privacy, content moderation and other issues to make these digital spaces safe. That might be a tall order for governments that have been struggling for years to pass regulations to govern social media.

“Every technology has a dark side,” said Jarre. “So we need urgently to create regulations.”

Jonathan Victor, a product manager at the open-source developer Protocol Labs, also sees a potential bright side. In his vision of the metaverse, anyone will be able to own a digital 3D version of themselves, exchange cryptocurrency or make a career selling virtual goods they created.

“There’s incredible upside,” Victor said. “The question is, what’s the right way to build it?”

Experts say the qualities that make the metaverse a tantalising innovation may open the door wider to harmful content. Photo: Freepik.com
