'Regulation stifles innovation' is a misguided myth
We've heard the argument that regulation stifles innovation too many times. It's not only tiresome; it's a misguided myth.
In reality, regulation is less about stifling innovation and more about channelling it responsibly, before it's too late and irreversible harm is caused. Regulation keeps big tech in check; without it, we end up with endless apologies from tech executives who say, "Just trust us".
Two years ago, the European Commission proposed the first EU regulatory framework for AI, to ensure that AI systems can be analysed and classified according to the risk they pose to users. Now, with the EU AI Act in play, progress has been made, but there is still a lot to be done to prevent those irreversible harms.
Upcoming regulation needs to move further upstream. Rather than classifying and regulating end outputs, it should weigh in earlier, identifying and addressing the root challenges that can lead to harmful outputs, so that innovation can happen responsibly.
However, this is not common ground for everyone, and that age-old argument still needs battling.
There are three clear arguments for evolving the regulation vs innovation debate, all of which come down to shifting the power dynamics at play in setting the agenda. With that in mind, it's first worth briefly touching on the 'Brussels effect' and the EU's role in this.
Understanding ‘The Brussels Effect’
'The Brussels Effect' refers to the phenomenon whereby the EU ends up de facto regulating global markets by setting rules and standards with which companies elsewhere must comply if they want to access the European market.
We see this in areas like environmental regulation, data privacy and competition law. Regulations like GDPR largely become global standards, with other markets "copy-pasting" regulatory policies for their own markets.
This effect can be positive in that lots of advanced regulation created by the EU influences other markets and spurs regulatory enforcement across the globe.
That said, the EU is not perfect, and by taking the lead in regulating these big macro areas, it also takes on an added responsibility: its rules will likely set the tone for the rest of the world too.
Policymakers across the globe also have an added responsibility to scrutinise how these policies apply to their unique markets and contexts, and what adjustments or additional considerations are needed.
Which leads neatly to how the regulatory landscape needs to change.
Regulation is not a single source action
From dangerous cheapfakes to sophisticated election deepfakes to AI bias, platforms and systems have become weaponised in ways that erode the integrity of information and democratic values worldwide.
The dangers of allowing private companies to self-regulate are now readily apparent thanks to social media.
We also see the subsequent challenges of retrospective policymaking and trying to fix ubiquitous technologies that are already in the hands of users and a part of daily life.
Considering technology's major impact on the whole of society, passively deferring to tech companies to dictate and shape narratives around regulation is not a solution. Collaboration on a level playing field is essential.
With AI, the responsibility of regulation falls on multiple shoulders - tech companies, civil society, academics, and governments and policymakers.
The reality is that these problems were first and foremost created by the tech companies and platforms themselves - either unintentionally or through benign neglect.
However, citizens and governments must now also be mindful of the role they play in maintain