Financial Mirror (Cyprus)

Governments must shape AI's future

By Mariana Mazzucato and Fausto Gernone © Project Syndicate, 2024. www.project-syndicate.org

Last December, the European Union set a global precedent by finalizing the Artificial Intelligence Act, one of the world's most comprehensive sets of AI rules. Europe's landmark legislation could signal a broader trend toward more responsive AI policies. But while regulation is necessary, it is insufficient. Beyond imposing restrictions on private AI companies, governments must assume an active role in AI development by designing systems and shaping markets for the common good.

To be sure, AI models are evolving rapidly. When EU regulators released the first draft of the AI Act in April 2021, they hailed it as "future-proof," only to be left scrambling to update the text in response to the release of ChatGPT a year and a half later. But regulatory efforts are not in vain. For example, the law's ban on AI in biometric policing will likely remain pertinent, regardless of advances in the technology. Moreover, the risk frameworks contained in the AI Act will help policymakers guard against some of the technology's most dangerous uses. While AI will develop faster than policy, the law's fundamental principles will not need to change – though more flexible regulatory tools will be needed to tweak and update rules.

But thinking of the state as only a regulator misses the larger point. Innovation is not just some serendipitous market phenomenon. It has a direction that depends on the conditions in which it emerges, and public policymakers can influence these conditions. The rise of a dominant technological design or business model is the result of a power struggle between various actors – corporations, governmental bodies, academic institutions – with conflicting interests and divergent priorities. Reflecting this struggle, the resulting technology may be more or less centralized, more or less proprietary, and so forth.

The markets that form around new technologies follow the same pattern, with important distributive implications. As the software pioneer Mitch Kapor puts it, "Architecture is politics." More than regulation, a technology's design and surrounding infrastructure dictate who can do what with it, and who benefits. For governments, ensuring that transformational innovations produce inclusive and sustainable growth is less about fixing markets, and more about shaping and co-creating them. When governments contribute to innovation through bold, strategic, mission-oriented investments, they can create new markets and crowd in the private sector.

In the case of AI, the task of directing innovation is currently dominated by large private corporations, leading to an infrastructure that serves insiders' interests and exacerbates economic inequality. This reflects a longstanding problem. Some of the technology firms that have benefited the most from public support – such as Apple and Google – have also been among those accused of using their international operations to avoid paying taxes. These unbalanced, parasitic relationships between big firms and the state now risk being further entrenched by AI, which promises to reward capital while reducing the returns to labor.

The companies developing generative AI are already at the center of debates about extractive behaviors, owing to their unfettered use of copyrighted text, audio, and images to train their models. By centralizing value within their own services, they will reduce value flows to the artists on whom they rely.

As with social media, the incentives are aligned for rent extraction, whereby dominant intermediaries amass profits at others' expense. Today's dominant platforms, such as Amazon and Google, exploited their position as gatekeepers by using their algorithms to extract ever larger fees ("algorithmic attention rents") for access to users. Once Google and Amazon became one big "payola" scheme, information quality deteriorated, and value was extracted from the ecosystem of websites, producers, and app developers the platforms relied on. Today's AI systems could take a similar route: value extraction, insidious monetization, and deteriorating information quality.

Governing generative AI models for the common good will require mutually beneficial partnerships, oriented around shared goals and the creation of public, rather than only private, value. This will not be possible with redistributive and regulatory states that act only after the fact; we need entrepreneurial states capable of establishing pre-distributive structures that will share risks and rewards ex ante. Policymakers should focus on understanding how platforms, algorithms, and generative AI create and extract value, so that they can create the conditions – such as equitable design rules – for a digital economy that rewards value creation.

Mind Your History

The internet is a good example of a technology that has been designed around principles of openness and neutrality. Consider the principle of "end-to-end," which ensures that the internet operates like a neutral network responsible for data delivery. While the content being delivered from computer to computer may be private, the code is managed publicly. And while the physical infrastructure needed to access the internet is private, the original design ensured that, once online, the resources for innovation on the network are freely available.

This design choice, coordinated through the early work of the Defense Advanced Research Projects Agency (among other organizations), became a guiding principle for the development of the internet, allowing for flexibility and extraordinary innovation in the public and private sector. By envisioning and shaping new domains, the state can establish markets and direct growth, rather than just incentivizing or stabilizing it.

It is hard to imagine that private enterprises developing the internet in the absence of government involvement would have adhered to equally inclusive principles. Consider the history of telephone technology. The government's role was predominantly regulatory, leaving innovation largely in the hands of private monopolies. Centralization not only hampered the pace of innovation but also limited the broader societal benefits that could have emerged.

For example, in 1955, AT&T persuaded the Federal Communications Commission to ban a device designed to reduce noise on telephone receivers, claiming exclusive rights to network enhancements. The same kind of monopolistic control could have relegated the internet to being merely a niche instrument for a select group of researchers, rather than the universally accessible and transformative technology it has become.

Likewise, the transformation of GPS from a military tool to a universally beneficial technology highlights the need to govern innovation for the common good. GPS was initially designed by the US Department of Defense to coordinate military assets, and public access to its signals was deliberately degraded on national-security grounds. But as civilian use surpassed that of the military, the US government, under President Bill Clinton, made GPS more responsive to civil and commercial users worldwide.

That move not only democratized access to precise geolocation technology; it also spurred a wave of innovation across many sectors, including navigation, logistics, and location-based services. A policy shift toward maximizing public benefit had a far-reaching, transformational impact on technological innovation. But this example also shows that governing for the common good is a conscious choice that requires continuous investment, high coordination, and a capacity to deliver.

To apply this choice to AI innovation, we will need inclusive, mission-oriented governance structures with the means to co-invest with partners that recognize the potential of government-led innovation. To coordinate inter-sectoral responses to ambitious objectives, policymakers should attach conditions to public funding so that risks and rewards are shared more equitably. That means clear goals to which businesses are held accountable; high labor, social, and environmental standards; and profit sharing with the public. Conditionalities can, and should, require Big Tech to be more open and transparent. We must insist on nothing less if we are serious about the idea of stakeholder capitalism.

Ultimately, addressing the perils of AI demands that governments extend their role beyond regulation. Yes, different governments have different capacities, and some are highly dependent on the broader global political economy of AI. The best strategy for the United States may not be the best one for the United Kingdom, the EU, or any other country. But everyone should avoid the fallacy of presuming that governing AI for the common good is in conflict with creating a robust and competitive AI industry. On the contrary, innovation flourishes when access to opportunities is open and the rewards are broadly shared.

Mariana Mazzucato, Founding Director of the UCL Institute for Innovation and Public Purpose, is Chair of the World Health Organization's Council on the Economics of Health for All. Fausto Gernone, a PhD student at the UCL Institute for Innovation and Public Purpose, is on a research visit at the Haas School of Business at the University of California, Berkeley.
