Chicago Sun-Times

Teens find support on social media, but need protection from harmful content. Can AI help bridge the gap?

- BY AFSANEH RAZI Afsaneh Razi is assistant professor of Information Science at Drexel University. This article was originally published on theconversation.com.

Meta announced on Jan. 9 that it will protect teen users by blocking them from viewing content on Instagram and Facebook that the company deems to be harmful, including content related to suicide and eating disorders. The move comes as federal and state governments have increased pressure on social media companies to provide safety measures for teens.

At the same time, teens turn to their peers on social media for support that they can’t get elsewhere. Efforts to protect teens could inadvertently make it harder for them to get that help.

Congress has held numerous hearings in recent years about social media and the risks to young people. The CEOs of Meta, X (formerly known as Twitter), TikTok, Snap and Discord testified before the Senate Judiciary Committee on Jan. 31 about their efforts to protect minors from sexual exploitation.

The tech companies “finally are being forced to acknowledge their failures when it comes to protecting kids,” according to a statement in advance of the hearing from committee chair Sen. Dick Durbin, D-Ill., and ranking member Sen. Lindsey Graham, R-S.C.

I’m a researcher who studies online safety. My colleagues and I have been studying teen social media interactions and the effectiveness of platforms’ efforts to protect users. Research shows that while teens do face danger on social media, they also find peer support, particularly via direct messaging. We have identified a set of steps that social media platforms could take to protect users while also protecting their privacy and autonomy online.

What kids are facing

The prevalence of risks for teens on social media is well established, from harassment and bullying to poor mental health and sexual exploitation. Investigations have shown that companies such as Meta have known that their platforms exacerbate mental health issues, helping make youth mental health one of the U.S. Surgeon General’s priorities.

Much of adolescent online safety research is from self-reported data such as surveys. There’s a need for more investigation of young people’s real-world private interactions and their perspectives on online risks. My colleagues and I collected a large dataset of young people’s Instagram activity, including more than 7 million direct messages. We asked young people to annotate their own conversations and identify the messages that made them feel uncomfortable or unsafe.

We found that direct interactions can be crucial for young people seeking support. Because these private settings are built on mutual trust, teens felt safe asking for help there.

Research suggests that privacy of online discourse plays an important role in the online safety of young people, and at the same time a considerable amount of harmful interactions comes via private messages. Unsafe messages flagged by users in our dataset included harassment, sexual messages, sexual solicitation, nudity, pornography, hate speech and sale or promotion of illegal activities.

However, it has become more difficult for platforms to use automated technology to detect and prevent online risks for teens because the platforms have been pressured to protect user privacy. For example, Meta has implemented end-to-end encryption for all messages on its platforms to ensure message content is secure and only accessible by participants in conversations.

Also, the steps Meta has taken to block suicide and eating disorder content remove that content from public posts and search results even when a teen’s friend has posted it. That means the teen who shared the content would be cut off from the support of their friends and peers. In addition, Meta’s content strategy doesn’t address the unsafe interactions in the private conversations teens have online.

Striking a balance

We conducted a study to find out whether we could use minimal data to detect unsafe messages without invading user privacy. We wanted to understand how various features, or metadata, of risky conversations — such as length of the conversation and average response time — can help machine learning programs detect these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances.

We found that our machine learning program was able to identify unsafe conversati­ons 87% of the time using only metadata.

These results could be used as a guideline for platforms to design artificial intelligence risk identification. The platforms could use high-level features such as metadata to block harmful content without scanning that content and thereby violating users’ privacy. For example, a persistent harasser whom a young person wants to avoid would produce metadata — repeated, short, one-sided communications between unconnected users — that an AI system could use to block the harasser.
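To make the idea concrete, here is a minimal sketch, in Python with scikit-learn, of how a metadata-only classifier of this kind could be trained. The feature names and the synthetic data below are illustrative assumptions for this article, not the actual features, data or model from the study described above.

```python
# A minimal sketch of metadata-only risk detection. Feature names
# (message_count, avg_response_time_s, sender_ratio, users_connected)
# and the synthetic data are illustrative assumptions, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000

# Synthetic conversation-level metadata: risky chats are assumed to be
# shorter, more one-sided and more often between unconnected users,
# mirroring the patterns described in the article.
risky = rng.integers(0, 2, n)
message_count = np.where(risky, rng.poisson(6, n), rng.poisson(40, n))
avg_response_time_s = np.where(risky, rng.exponential(300, n), rng.exponential(60, n))
sender_ratio = np.where(risky, rng.uniform(0.75, 1.0, n), rng.uniform(0.4, 0.7, n))
users_connected = np.where(risky, rng.binomial(1, 0.2, n), rng.binomial(1, 0.9, n))

X = np.column_stack([message_count, avg_response_time_s, sender_ratio, users_connected])
y = risky

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train on metadata only; message content is never read by the model.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["safe", "unsafe"]))
```

The key design choice is that the classifier only ever sees conversation-level statistics, so message content can remain end-to-end encrypted.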

Ideally, young people and their caregivers would have the option, by design, to turn on encryption, risk detection or both, so they can decide on the trade-offs between privacy and safety for themselves.

The views and opinions expressed by contributors are their own and do not necessarily reflect those of the Chicago Sun-Times or any of its affiliates.

Photo: DIGITALVISION VIA GETTY. Teens can get help on social media, but it can also harm their mental health.
