Morning Sun

Navigating the murky waters of AI regulation

By Tim Heffernan, chief growth officer at T3 Expo.

Regulation is trust. The problem is that “trust” has different meanings in government, technology and marketing. While all build on pyramids that start the same, they diverge in their destination.

The destination for technology is utility for the user. Think physics: get it wrong, as on a spacecraft, and things get destroyed and explode.

In government, trust is the exchange of individual sovereignty to the government to protect one’s liberties and property. Here, consider Thomas Hobbes’ “Leviathan,” written during the English Civil War (1642–1651). It argues for a social contract and rule by an absolute sovereign.

In marketing, trust is a promise by the brand to the buyer to curate a good or service and, most important, an identity through that purchase. Consider taking a TikTok or Instagram selfie with a new purchase, or an executive touting the new technology implementation transforming her organization.

There has never been a successful technology that has waited for regulation before launching. Nor has there been a successful technology that hasn’t eventually been regulated.

Now that artificial intelligence is a success, and the race for applications to become platforms has begun, the test of market power versus anti-competitive behavior will go through the natural cycle of all new technologies.

What makes AI different is that it has the potential to interfere with the government’s ability to protect liberty and property, and with the marketing and advertising industries’ core role as creators, curators, gatekeepers and distributors of identity.

That means the technology platform companies will vie to make their own regulatory framework the one that is adopted. Against that business backdrop, AI regulation is coming very quickly out of the European Union around privacy and data sovereignty, primarily through a host of cumbersome rules from the EU AI Act and the June 2023 G7 meeting.

The EU AI Act, in its current form, would give significant power to closed platforms and place an undue burden on open-source ones, thus stifling innovation. Ironically, this would give more market power to the established players and effectively regulate new market entrants. The act states: “(A) provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licenses, as a service, as well as other distribution channels.”

This means that an open-source provider would have to have risk mitigation strategies, data governance measures and a 10-year documentation requirement, among others. Take an open-source platform like Hugging Face, whose models more than 50,000 organizations have used. The rule would force these organizations to a closed-source provider merely to comply.

Another issue will be cross-border compliance, a sticky subject since the General Data Protection Regulation that will become stickier as where data is trained, and where it comes from, falls under new regulations.

Governments should focus on end-use case applications rather than prescribing specific technological mandates around the underlying models. Policymakers would do better by protecting liberties and property and not focusing on regulating the bits that don’t byte.
