Mint Hyderabad

Regulating AI in finance looks like a wild goose chase

SRINATH SRIDHARAN is a policy researcher and corporate advisor. @ssmumbai

We humans are a sceptical bunch. When new technologies emerge, we view them with apprehension, worrying about their potential negative impact on our lives and future. However, as they reveal their wonders, we often embrace them without question, placing our trust in their ‘capabilities’ without fully considering the consequences. Perceptions of artificial intelligence (AI) vary greatly. Some view AI as a threat to the future of humanity, while others see it as a transformative force with the capacity to resolve pressing human problems. While there is no single notion of what AI is, it is useful to think of it as a set of computer algorithms that can perform tasks otherwise done by humans.

AI attracts human trust through its adept execution of tasks suited to its capabilities, particularly those characterized by clear-cut rules and abundant data inputs. However, this trust can sometimes lead to its deployment in critical functions ill-suited to its strengths, driven by cost-saving motives, a problem often compounded by inadequate, outdated or irrelevant data. This is why we fear AI ‘hallucinations.’ AI bots are programmed to provide guidance even when the accuracy of their answers should inspire minimal confidence. They may even fabricate facts or present arguments that, while plausible, are deemed flawed or incorrect by experts. The danger lies in AI tools making false or harmful recommendations.

The widespread adoption of AI has sparked concerns about transparency, accountability and ethics. The worries include patchy compliance with data protection and privacy rules when these AI tools source and process data, as well as the representativeness of their data samples, which can introduce biases in their output. Challenges also arise over the accuracy and interpretability of what they generate, exacerbated by the opacity of many algorithms.

The deliberate misuse of AI presents a significant threat, especially within the financial sector, which has a large presence of profit-driven entities, many of which operate without much concern for the societal ramifications of their actions. Some of them also have the capability to circumvent controls and manipulate the system to their advantage, making detection by competitors and regulators a challenge. In some cases, they may use AI engines to exploit regulatory loopholes, capitalizing on the inherent complexity of the financial system.

AI relies on computing power, human expertise and data. A financial company that commands a significant quantum of each resource could establish dominance in the use of AI for business ends. And if regulatory authorities also rely on the same AI tools for analytical purposes, they may overlook potential vulnerabilities until it’s too late. If regulators and regulated entities use the same tools, they share the same understanding of the stochastic processes underlying the financial system, making it less likely that a potential fragility gets identified in time.

Financial regulators emphasize the importance of exercising caution in regulating AI, acknowledging that its implications are yet to be comprehensively understood. However, they must also acknowledge the dual nature of AI regulations: they have the potential to mitigate market risks but can also inadvertently contribute to them. Relying on AI without human clarity on what’s going on can allow hidden risks to escalate. Financial markets have a record of being hit by data-driven algorithms that made extrapolations humans might have called untenable. AI may make these risks even more complex.

Financial regulators are understandably torn between recognizing their limitations in regulating AI and maintaining confidence in the financial sector. Admitting to constraints would risk undermining trust in their oversight, while banning AI usage would stifle innovation and disadvantage financial players. Yet, the intricacies of AI systems make it nearly impossible for regulators to keep abreast of every development. Regulators are right to worry about what AI adoption implies for financial stability. As AI tools are increasingly integrated with financial operations, the challenge will only multiply.

Financial regulators must confront the reality that their traditional methods of supervision are falling short. While they possess invaluable expertise, the rapid evolution of technology has created a widening gap between regulatory capabilities and the pace of innovation. As a result, supervision often lags behind the regulatory responses we need. To address this challenge, regulators must embrace real-time digital supervision techniques, leveraging activity-based supervision and algorithmic data analytics proactively rather than reactively. Just as one cannot bring a bow and arrow to a gun battle, regulators must equip themselves with the tools necessary to effectively monitor and regulate financial activities in the digital age. This is the only way to ensure the stability and integrity of the financial system in the face of rapid technological change.

The challenge has always been steep. Despite our ability to instruct AI to mimic human behaviour, there’s no guarantee it will adhere to our desired standards. Given that financial regulators still grapple with the task of regulating human behaviour for ethical conduct, achieving similar control over AI is a distant prospect.
