The Day

How to correct online misinformation

By Madelyn Sanfilippo, The Fulcrum

Deepfakes of celebrities and misinformation about public figures might not be new in 2024, but they are more common, and many people seem increasingly resigned to treating them as inevitable.

The problems posed by false online content extend far beyond public figures, impacting everyone, including youth.

In a recent press conference, New York Mayor Eric Adams emphasized that many depend on platforms to fix these problems, but that parents, voters and policymakers need to take action. “These companies are well aware that negative, frightening and outrageous content generates continued engagement and greater revenue,” Adams said.

Recent efforts by Taylor Swift’s fans, coordinated via #ProtectTaylorSwift, to take down, bury and correct fake and obscene content about her offer a welcome and hopeful example of the ability to do something about false and problematic content online.

Still, deepfakes (videos, photos and audio manipulated by artificial intelligence to make something look or sound real) and misinformation have drastically changed social media over the past decade, highlighting the challenges of content moderation and carrying serious implications for consumers, politics and public health.

At the same time, generative AI, with ChatGPT at the forefront, changes the scale of these problems, challenges the digital literacy skills recommended for scrutinizing online content, and radically reshapes content on social media.

The transition from Twitter to X, which has 1.3 billion users, and the rise of TikTok, with 232 million downloads in 2023, highlight how social media experiences have evolved as a result.

From colleagues at conferences discussing why they have left LinkedIn to students asking whether they really need to use it, people recognize the decline in the quality of content on that platform (and others) due to bots, AI and the incentives to produce more content.

LinkedIn has established itself as key to career development, yet some say it is failing to preserve the expectations of trustworthiness and legitimacy associated with professional networks, or to protect contributors.

In some ways, the reverse is true: User data is being used to train LinkedIn Learning’s AI coaching with an expert lens, which is already being monetized as a “professional development” opportunity for paid LinkedIn Premium users.

Regulation of AI is needed, as well as enhanced consumer protection around technology. Users cannot meaningfully consent to use platforms and their ever-changing terms of service without transparency about what will happen with an individual’s engagement data and content.

Not everything can be solved by users. Market-driven regulation is failing us.

There need to be meaningful alternatives and the ability to opt out. Action can be as simple as individuals reporting content for moderation. For example, when multiple people flag content for review, it is more likely to reach a human moderator, whose involvement research shows is key to effective content moderation, including removal and appropriate labeling.

Collective action is also needed. Communities can address problems of false information by working together to report concerns and, through their engagement, collaboratively steer recommendation systems to deprioritize false and damaging content.

Professionals must also build trust with the communities they serve, so that they can promote reliable sources and develop digital literacy around sources of misinformation and the ways AI promotes and generates it. Policymakers must also regulate social media more carefully.

Truth matters: for an informed electorate, for the safety of online spaces for children and of professional networks, and for mental health. We cannot leave it to the companies that caused the problem to fix it.
