How the White Paper has all tech firms, from Facebook to Fortnite, in its sights
Social media, games and messaging apps face tough sanctions drawn up to protect users from harm
TENS of thousands of tech companies are to be subject to a statutory duty of care to protect their users. The new rules will cover social media giants such as Facebook, search engines such as Google, interactive games such as Fortnite, messaging apps such as Snapchat and online forums such as Tripadvisor.
A White Paper, published today, says any tech company that allows the sharing of content or user interaction will be bound by the duty of care and face draconian sanctions if they breach it. The sanctions range from fines based on the company’s size to criminal prosecutions of directors, removal of their sites from search engines and a block on access to users.

The regulator

The duty of care, which requires firms to take “all reasonable steps” to keep users safe and tackle illegal and harmful content, will be policed by an independent regulator.
The regulator will be funded by the tech industry in the medium term, with the prospect of a levy on firms to ensure long-term sustainability.
The regulator will be able to demand information from the companies, including details of controversial algorithms that have been blamed for driving harmful content to users, as is thought to have happened with Molly Russell, 14, who took her own life after viewing self-harm material on Instagram.
It will also draw up legally enforced codes of practice spelling out what tech companies must do to protect users from harms such as terrorist content, child sexual abuse, illegal drugs or weapons sales, cyberbullying, self-harm, harassment, disinformation, violence and pornography.

Enforcement

The regulator will serve notices on errant firms to force them to put right breaches. It will also have “name and shame” powers, requiring companies to publish public notices on their sites about how they have failed to comply with standards. Every company will have to produce annual reports for the regulator on the scale of harms and what they are doing to combat them.
The fines are expected to mirror those under data protection law, which run up to 4 per cent of global turnover. Ministers are consulting on additional powers, including disrupting businesses by removing them from search results and app stores and removing links to their sites.
Senior staff could face civil or criminal liability for breaches and, as a last resort, the regulator could require internet service providers to block firms’ access to UK users.

Codes of practice

The codes, covering three areas of concern, say firms’ platforms must be “safe by design”, explaining clearly to users what counts as harmful content, how they can complain, how quickly they will get a response and what action they can take, such as blocking, muting or staying hidden from other users.
1. Terror and child abuse: The regulator will decide the time frame within which platforms should remove content, such as the live-streamed New Zealand mosque massacre, and require them to identify and act on paedophiles’ grooming of children. NSPCC statistics show there were 5,161 offences of grooming children online in 18 months. Breck Bednar, 14, was murdered after being groomed by a man he met via an online forum.
These two codes – which have to be signed off by the Home Secretary – will require tech firms to develop technology to stop terrorist or child abuse content getting online, and to prevent would-be extremists or paedophiles from searching for such material on their sites.
Although Facebook says it took down 14 million pieces of terrorist content last year, the Government says the Islamic State of Iraq and the Levant used more than 100 platforms to spread terrorist propaganda.
2. Disinformation: Firms will be required to have dedicated fact-checking, especially during elections, and to promote authoritative news sources. Platforms will also be required to ensure their algorithms do not drive “extreme and unreliable” information to users to keep them online, a tactic that boosts advertising revenues.
3. Self-harm and suicide: Algorithms that promote self-harm or suicide to users will be banned, while firms will have to remove “illegal” or “unacceptable” content “rapidly”. Other harms include hate crime, harassment and serious violence.
Age verification

Tech firms will be under a legal duty to protect children from inappropriate content and to enforce their terms and conditions, which set out age limits. This means they could face sanctions if they lack robust age verification. With up to 40 per cent of under-13s on social media, it will mean millions of children being removed or barred.
Platforms may have to impose different settings for children and filters to block inappropriate content.

Complaints

Users will have a legal right to an “effective and easy-to-access” complaints function, with a set timescale for responses and an internal appeals system. It will be backed by either an independent redress mechanism or a super-complaints system.