AI tips off regulators to likely EU data issues
Some of the world’s largest technology firms might be breaking the European Union’s data privacy law, according to an analysis of their policies by artificial intelligence software.
Researchers from the European University Institute in Florence worked with an EU consumer organisation to create the software. They then used the program to examine the privacy policies of 14 major technology businesses, including Alphabet, Amazon.com, and Facebook. They found that a third of the clauses in those policies were “potentially problematic” or contained “insufficient information.” Another 11 per cent of the policies’ sentences used unclear language, the academics said. The researchers didn’t make public which companies’ policies violated which provisions of the law, publishing only aggregate findings for all of the companies in the study.
Clear and comprehensive explanations of what data a company collects, how it uses the data, and who it shares the information with are key requirements of Europe’s new General Data Protection Regulation (GDPR), a sweeping privacy law that took effect on May 25. In many cases, companies must get explicit consent from customers to hold and process their data. Companies that violate the new rules can face fines as high as four per cent of global sales.
Among the problems found by the AI software — which is called “Claudette” — were policies that did not identify the third parties a company might share personal data with, policies that stated users would be deemed to have agreed to a policy simply by using the company’s website, and others that used vague and confusing language. Monique Goyens, director general of BEUC, the Brussels-based European consumer organisation, urged EU regulators to look at the possible violations the researchers spotted.