Los Angeles Times

A toll on children, ignored by Big Tech

Companies defending addictive platforms to the justices don’t address the youth mental health crisis.

By Ed Howard. Ed Howard is senior counsel at the Children’s Advocacy Institute of the University of San Diego School of Law.

In legal disputes as in life, sometimes what isn’t said reveals more than what is. Consider the briefs filed with the Supreme Court in defense of a law granting Google and other tech companies limited immunity from lawsuits.

Gonzalez vs. Google, slated for oral argument before the Supreme Court on Tuesday, concerns Section 230 of the Communications Decency Act, a law enacted in 1996 to regulate the then-new internet. Child advocacy groups that filed friend-of-the-court briefs in the case note that social media platforms are knowingly hurting children by delivering dangerous content in an addictive manner. Tellingly, none of the scores of briefs filed on the side of the tech companies address those harms.

One of Congress’ primary purposes in enacting Section 230 was to provide, as some senators put it, “much-needed protection for children,” not just from explicit content but also from abuse. Ironically, the platforms are now arguing that Congress actually intended to offer them immunity for business decisions that they know will hurt children.

The Gonzalez case was brought by the family of an American murdered by the Islamic State group in the 2015 Paris terrorist attacks. The family alleges that as a foreseeable consequence of efforts to keep as many eyes on Google’s YouTube as possible, terrorist recruitment videos are delivered to people who are likely to be interested in terrorism. In a similar case to be argued Wednesday, Twitter vs. Taamneh, the court will weigh whether the platforms’ alleged failure to take “meaningful steps” to remove terrorist content violates federal antiterrorism law.

The repercussions of social media’s rise go well beyond increased access to terrorist content. During the years Instagram exploded from a million to a billion users, the U.S. saw an astonishing 146% spike in firearm suicides among children ages 10 to 14. The number of suicides overall for young people rose an unprecedented 57%.

Although the correlation between the platforms’ growth and the youth mental illness crisis does not prove causation, Facebook’s leaked internal research noted that 6% of American teen Instagram users “trace their desire to kill themselves” to the platform.

Researchers and clinicians have likewise repeatedly documented widespread social media-related mental health and physical harms to children. Last Monday, the U.S. Centers for Disease Control and Prevention reported that teen girls are suffering from record levels of sadness and suicide risk, which some experts attribute partly to the rise of social media. And on Tuesday, a U.S. Senate committee heard gut-wrenching stories about the dangers of, as one grieving parent described it, the “unchecked power of the social media industry.”

Social media platforms make money by selling advertising. More time spent on a platform means more eyes on its ads, which means it can charge more for those ads. Plus, the more time a user spends on the platform, the more data the platform develops on the user, which it can in turn use to keep the user on the platform longer.

Humans aren’t personally sorting who sees what on these platforms. Rather, humans give artificial intelligence technologies the instruction to maximize what platforms call “user engagement.” AI does this at fantastic speeds by testing what recommendations work best across billions of users. Then it delivers content based on not just what a child says she wants but also what is statistically most likely to keep children like her glued to the screen. Too often, the answer is whatever exploits her fears and anxieties.

This means that with disturbing frequency, depressed teens are offered suicide tips, body-image-anxious girls get content promoting eating disorders and drug-curious youths get opportunities to buy pills laced with lethal fentanyl. Moreover, platforms use neuroscientifically tailored gimmicks such as auto-scrolling, constant reminders to return to the platform and dopamine-firing “likes” that can be addictive to children. Often, children who earnestly want to turn off the platform can’t; their brains just aren’t old enough to resist addictions to the same degree as adults’.

To maintain growth every quarter, platforms have to find ways to attract and keep more users longer. If platforms are allowed to continue profiting from technology they know will harm great numbers of children without fear of financial consequences, they will continue to perfect their techniques, and more children will be hurt. The child suicide and mental health crisis we are experiencing now will get worse with no end in sight.

It doesn’t have to be this way. The Google search engine’s method of prioritizing content for viewers, designed to be based on websites’ expertise, authoritativeness and trustworthiness, shows there are ways of deciding who sees what that are far less risky to children — and everyone else.

The court’s decision won’t end the debate over Section 230, but it could begin to restore the law to its original purpose of protecting young people. But it should not be a matter of debate that it should be illegal to knowingly weaponize children’s vulnerabilities against them.

And if we can’t agree on that, anyone who believes that the unprecedented harm children are enduring is the price society has to pay for freedom on the internet should at least acknowledge that harm.
