4 things to know about YouTube’s new children’s privacy practices
IN SEPTEMBER, Google agreed to pay a $170 million fine and make privacy changes as regulators said its YouTube platform had illegally harvested children’s personal information and used it to profit by targeting them with ads.
The penalty and changes were part of an agreement with the Federal Trade Commission and the attorney-general of New York.
Last week, YouTube said it was beginning to introduce changes to address regulators’ concerns and better protect children. Here is what you need to know about the changes:
Limited collection of digital data
YouTube said it would limit the collection and use of personal information from people who watched children’s videos, no matter the age of the viewer.
YouTube said it had also turned off or limited some features on children’s videos that are tied to personal information, including comments and live chat.
Changes to ads and recommendations
YouTube will no longer show ads on children’s videos that are targeted at viewers based on their web browsing or other online activity data. Instead, the company said, it might show ads based on the context of what people were viewing.
It said viewers who watched a video made for children on its platform would be more likely to see recommendations for other children’s videos.
New requirement for YouTube producers
YouTube said it would require all video producers on its platform to designate their videos as made for children or not made for children.
In November, it introduced a new setting to help producers flag children’s content, a designation that prompts YouTube to limit data collection on those videos. It said it was also using artificial intelligence to help identify children’s content, and that it could override a video producer’s categorisation if its system detected a mistake.
Why the YouTube changes matter
YouTube is one of the most popular platforms for children. Some animated videos on its channels aimed at younger children have been viewed more than a billion times.