Button to hide harmful content considered for online harm law
Social media users would be able to make tech platforms filter out abuse under plans for Bill
HARMFUL online content will be hidden with the click of a button under proposals being considered by ministers.
Social media users will be able to get tech platforms to filter out abuse, harassment, sexism or racist comments under plans put forward for the Online Safety Bill.
“What they want platforms to do is to provide a button to turn off racism or harassment. They want a user to be empowered to not see things they don’t want to,” said a source.
The move comes as ministers plan to scrap the provision in the Bill that would have required social media firms to offer blanket protection for adults from “legal but harmful” content.
This is set to be ditched after a backlash by senior Tories and free speech campaigners who complained it would have allowed “woke” tech firms to censor controversial comments that upset people but are legal.
The filtering plan – billed by sources as an “adult safety” or “user empowerment” duty – is seen as a final line of defence to tackle online abuse in the same way as users can choose to only see content from verified accounts.
It comes on top of a legally enforced requirement for social media companies to abide by their terms and conditions, under which most pledge to protect users from hate speech such as antisemitism, harassment and bullying.
The Bill, which was delayed after being pulled from the Parliamentary schedule before the summer, will make Ofcom the online regulator with powers to fine tech firms up to 10 per cent of their global turnover if they fail to prevent and remove illegal content such as child abuse and terrorism.
It is, however, the row over legal but harmful content for adults and its impact on free speech that has threatened to derail the Bill and which Michelle Donelan, the Culture Secretary, has been working to solve.
She has also pledged to increase protections for children, and it is understood ministers plan to retain a list of “primary priority” content which children must be prevented from encountering. This is likely to include content promoting self-harm and eating disorders, legal suicide content and pornography.
Under a second list of “priority” content, companies would be expected to ensure material is appropriate for the age of the child. This could include online abuse, cyberbullying, harassment, misinformation about health or vaccines, and material depicting or encouraging violence.
Ministers are also considering tougher age verification requirements for social media platforms to ensure children only see content appropriate to their age.
In an online article for The Telegraph today, Lord Bethell, a former health minister, calls for mandatory “hard” age verification for all porn sites – a measure previously approved by Parliament but then shelved.
This would mean no one under 18 would be able to access porn sites, as users would have to prove, with ID via a secure third party, that they were adults.