Tech groups cannot be allowed to hide from scrutiny
TECHNOLOGY companies have governments over a barrel. Whether they are maximising traffic flow, matching pupils with their school preferences or anticipating drought from satellite and soil data, most governments rely heavily on critical infrastructure and artificial intelligence (AI) developed by the private sector. This growing dependence has profound implications for democracy.
An unprecedented information asymmetry is growing between companies and governments. We can see this in the long-running investigation into interference in the 2016 US presidential election. Companies build voter registries, voting machines and tallying tools, while social media companies sell precisely targeted advertisements using information gleaned by linking data on friends, interests, location, shopping and search. This has big privacy and competition implications, yet oversight is minimal. Governments, researchers and citizens risk being blindsided by the machine room that powers our lives and vital aspects of our democracies.
Governments and companies have fundamentally different incentives on transparency and accountability. While openness is the default and secrecy the exception for democratic governments, companies resist providing transparency about their algorithms and business models. Many of them actively prevent accountability, citing rules that protect trade secrets.
We must revisit these protections when they shield companies from oversight. There is a place for protecting proprietary information from commercial competitors, but the scope and context of such protections need to be clarified and balanced when they have an impact on democracy and the rule of law. Regulators must act to ensure that those designing and running algorithmic processes do not abuse trade secret protections.
Tech groups also use the EU's General Data Protection Regulation to deny access to company information. Although the regulation was enacted to protect citizens against the mishandling of personal data, it is now being wielded cynically to deny scientists access to data sets for research. The European Data Protection Supervisor has intervened, but problems could recur.
To mitigate concerns about the power of AI, provider companies routinely promise that the applications will be understandable, explainable, accountable, reliable, contestable, fair and - don't forget - ethical. Yet there is no way to test these subjective notions without access to the underlying data and information. Without clear benchmarks and information to match, proper scrutiny of the way that vital data is processed and used will be impossible.
In general, laws should apply online in the same way they do in the physical world. But digitalisation creates specific new contexts. Algorithms are not just a secret sauce allowing technology companies to compete; they provide the architecture for all online information and affect economic processes and fundamental rights. To assess whether principles such as fair competition, non-discrimination or free-speech rights are being upheld, authorities need to be able to look under the algorithmic hood. Governments, as major technology customers, should also be responsible buyers and write public accountability into their tendering requirements.
None of this is revolutionary. Coca-Cola protects its formula from competitors, but regulators can still measure its safety, health impact and compliance with food standards. The same principle must apply to intangible algorithms, especially those used for government services. We must rebalance the information asymmetry between tech companies and democratic governments by making oversight proportionate to the power technology companies already have. FT