The Guardian (USA)

TechScape: How the UK forced global shift in child safety policies

- Alex Hern

I bring good news: regulation works.

The last month has brought a flurry of changes to major tech platforms related to child safety online, and specifically to the use and protection of children's personal data.

First, there was Instagram. In late July, Facebook announced some sweeping changes to the platform, billed as "giving young people a safer, more private experience". The company began giving those under 16 private accounts by default, ensuring that kids only share content publicly if they actively dive into settings and change their privacy preferences accordingly.

It also introduced a new set of restrictions for people with "potentially suspicious accounts" – "accounts belonging to adults that may have recently been blocked or reported by a young person for example." In other words, if you're a creep who goes around messaging kids, you'll soon find that young people don't show up in your algorithmic recommendations; you won't be able to add them as friends; you won't be able to comment on their posts; and you won't be able to read comments others have left.

Finally, the platform announced "changes to how advertisers can reach young people with ads". People under 18 can now only be targeted on Instagram by "their age, gender and location": the vast surveillance apparatus that Facebook has built will not be made available to advertisers. Instagram's rationale for this is that, while the platform "already [gives] people ways to tell us that they would rather not see ads based on their interests or on their activities on other websites and apps … young people may not be well equipped to make these decisions."

At the time, I found that last change the most interesting one by far, because of the implicit claim it was making: that it's bad to target people with adverts if you're not absolutely certain that's what they want. Facebook would hardly accept that targeted advertising can be harmful, so why, I wondered, was it suddenly so keen to make sure that young people weren't hit by it?

Along came Google

Then YouTube announced a surprisingly similar set of changes, and everything started to make a bit more sense. Again, the default privacy settings were updated for teen users: now, videos they upload will be private by default, with users under 18 having to manually dig into settings to publish their posts to the world.

Again, advertising is being limited, with the company stepping in to remove "overly commercial content" from YouTube Kids, an algorithmically curated selection of videos that are supposedly more child-friendly than the main YouTube catalogue. On YouTube proper, it's updated the disclosures that appear on "made for kids" content containing paid promotions. (Paid promotions are banned on YouTube Kids, so content that contains them – despite being officially "made for kids" – isn't allowed on the platform explicitly built for kids. Such is the way of YouTube.)

And YouTube also introduced a third change, adding and updating its "digital wellbeing" features. "We'll be turning take a break and bedtime reminders on by default for all users ages 13-17 on YouTube," the company said. "We'll also be turning autoplay off by default for these users." Both these settings can again be overruled by users who want to change them, but they will provide a markedly different experience by default for kids on the platform.

And TikTok makes three

A couple of days behind Google came TikTok, and everything clicked into place. From our story:

It's probably not a coincidence that three of the largest social networks in the world all announced a raft of child-safety features in the summer of 2021. So what could have prompted the changes?

Age appropriate

Well, in just over two weeks' time, the UK is going to begin enforcing the age appropriate design code, one of the world's most wide-ranging regulations controlling the use of children's data. We've talked about it before on the newsletter, in one of the B-stories in July, and I covered it in this Observer story:

I asked the platforms whether the changes were indeed motivated by the age appropriate design code. A Facebook spokesperson said: "This update wasn't based on any specific regulation, but rather on what's best for the safety and privacy of our community. It's the latest in a series of things we've introduced over recent months and years to keep young people safe on our platforms (which have been global changes, not just UK)."

TikTok declined to comment on whether the changes were prompted by the code, but I understand that they were – though the company is rolling them out globally because, once it built the features, it felt it was the right thing to do. And according to Google, the updates were core to the company’s compliance with the AADC, and the company said it was aiming beyond any single regulation – but also wouldn’t comment on the record.

I also called up Andy Burrows, the head of child safety online policy at the NSPCC, who shared my scepticism at claims that the timing of these launches could be coincidental. "It is no coincidence that the flurry of announcements that we've seen comes just weeks before the age appropriate design code comes into effect," he said, "and I think it's a very clear demonstration that regulation works."

The lack of public acknowledgment from the companies that regulation has influenced their actions is in stark contrast to the response to GDPR three years ago, when even Facebook had to acknowledge that it didn't suddenly introduce a whole array of privacy options out of the goodness of its heart. And the silence has correspondingly led to an odd gap at the heart of coverage of these changes: they've had widespread coverage in the tech press, as well as in many mainstream American papers, with barely a whisper of acknowledgment that they are almost certainly down to a regulatory limitation in a mid-sized European market.

That, of course, is exactly how the tech companies would want it. Recognising that even a country as comparatively minor as the UK can still pass regulations that affect how platforms work globally is a shift in the power relationships between multinational companies and national governments, and one that might spark other nations to reassess their own ability to force changes upon tech companies.

Not that everyone is fully compliant with the age appropriate design code. The big unanswered question is around verification, Burrows points out: "The code is going to require age assurance, and so far we haven't seen publicly many, or indeed any, of the big players set out how they're going to comply with that, which clearly is a significant challenge." In everything I've written above – every single restriction on teen accounts – the platforms are fundamentally relying on children to be honest as part of the sign-up process. It's hard to verify someone's age online, but very soon UK law isn't going to take "it's hard" as a sufficient excuse. The next few weeks are going to be interesting.

If you want to read the complete version of this newsletter please subscribe to receive TechScape in your inbox every Wednesday.

Instagram now gives those under 16 private accounts by default. Photograph: Nikada/Getty Images
