Navigating AI needs strong sails
Irresistible though it may appear, the use of artificial intelligence (AI) remains fraught. A new report by the World Economic Forum (WEF) has highlighted the issues that organisations must address as they inevitably move to expand the use of AI in financial services.
Firms that move quickly to adopt AI in financial processes will be far ahead of the competition, but they will also face unprecedented and unanticipated risks. The nature of AI is such that it could take decisions that violate rules, remain opaque in its reasoning, and act at a speed where timely human intervention is impossible.
The WEF report, titled Navigating Uncharted Waters, advises that financial institutions, regulators and policy-makers should deploy, scale and harness the power of AI in a “responsible” manner.
AI will cause several important shifts in the way the financial ecosystem operates. Firstly, AI systems “think” differently from humans; therefore, some of the steps they take may not be comprehensible to humans. Current systems are built on human accountability and slow-moving safeguards. AI systems will take decisions at a rapid pace, without human influence.
For instance, two AI systems could collude on the pricing of a product or service. Or if an AI makes lending decisions, it could develop a bias towards certain types of borrowers based on factors that humans may not be able to comprehend. AI typically learns from its mistakes, but it may misread patterns and take a wrong decision: if three customers from one locality default, the AI may disqualify everyone else from that area. Organisations will therefore have to create new protocols for accountability. If an AI takes an erroneous decision, no individual human or professional can be expected to be accountable for it. Organisations will have to put in place systems for quick response, remedy and learning. The AI itself would have to be governed in real time.
Secondly, “AI will drive policy shifts outside of the control of any one institution,” says the report. Policy-makers and regulatory response systems face a steep learning curve in protecting consumer and citizen interests against fast-moving tech developments. As AI and digitisation permeate every part of our lives, controlling and curbing anti-competitive behaviour will no longer be possible for any single regulator. Several bodies within a country will have to collaborate to understand anti-competitive behaviour. In an age of data supremacy, the very definitions of market dominance have changed.
Transnational regulatory frameworks, once restricted to global transactions, will have to be emulated in many sectors. It is already difficult to distinguish between local, regional and global marketplaces. Barriers to entry and market dominance will be driven by the strength of the AI deployed. Strong players will get stronger rapidly while new players will struggle to survive. Examples abound around us.
Emerging markets such as India are eagerly embracing AI, especially for its clear advantages in transparency, traceability and accountability. However, AI can easily mislead people with behaviour that is difficult to control or understand. In India, the relevant systems in private entities and regulators are not keeping pace with the profound changes being witnessed. Government departments and regulators are beginning to use AI but need to develop strong capacities quickly. Blind faith in AI can be harmful to all stakeholders.
The need for AI is not in doubt, but unchecked AI can be terribly destructive too. We must navigate carefully to ride the strong wave that is AI. A strong wave that propels a ship forward can also sink it.