Daily Observer (Jamaica)

Future-proofing pensions: Navigating the AI revolution

Sanya Goffe

THE incorporation of artificial intelligence (AI) technology within pension management systems has gained momentum due to its potential to support better decision-making, streamline operations, and enhance the overall retirement savings journey.

In keeping with the theme of this article, I thought it best to put ChatGPT to the test in defining artificial intelligence. I asked ChatGPT: “What is a very simple definition of artificial intelligence?” and in less than three seconds it provided this definition:

Think of artificial intelligence (AI) as teaching computers to think and learn like humans do. It’s about making machines smart enough to understand, reason, and solve problems without needing constant human guidance. AI helps computers learn from experiences, adjust to new information, and perform tasks that typically require human intelligence, such as recognising speech, playing games, or making decisions.

AI has been used in the financial services industry for many years in deciding whether to grant homeowner loans, determining credit scores, conducting risk monitoring, and many other tasks. However, the release of ChatGPT in November 2022 moved AI out of the shadows, repositioning it from a tool for software engineers to a tool that ordinary people can use without any need for technical expertise.

AI will undoubtedly revolutionise many industries, and the pension sector is no exception. With the growing silver economy (all those economic activities, products and services designed to meet the needs of people over 50), the desire for more curated and personalised retirement options, and the increasing complexity of financial markets, AI offers promising solutions. Let us look at some of the advantages.

BENEFITS AND ADVANTAGES

1. Customer Service and Engagement – The most common AI-related engagement tool is the chatbot; Amazon’s Alexa is one well-known example. In the pensions sector, these technologies could offer immediate, personalised responses — whether to basic queries such as account balances, or to more complex ones involving investment options and retirement planning. AI can also enable the chatbot to evolve from a reactive service (eg, I have a question and need help) into a proactive one, informed and activated by broader participant milestones such as salary raises.

2. Investment Management Performance – AI could also help address concerns about the investment performance delivered by fund managers. Japan’s Government Pension Investment Fund (GPIF) is the largest retirement fund in the world, with approximately US$1.5 trillion in assets. In response to such concerns, GPIF commissioned a study to explore an AI system that would enable it to select and monitor fund managers. The AI system detected and compared investment styles against expected performance in real time, based on select data such as trading items, timing, volume, and unrealised gains and losses. The initial results gave GPIF the capability to detect and compare the investment styles of the 16 fund managers evaluated, and then to determine the best managers for GPIF.

3. Improved Communication with Participants – Language can dramatically impact plan participant engagement and behaviour. Investment managers can use AI to customise plan communications so as to maximise positive participant responses. A recent study conducted by Invesco using AI-generated results showed that simple modifications, such as saying “staying on track” rather than “managing risk”, can meaningfully improve participant engagement and increase levels of trust. Other examples were positive phrases such as “Plan the retirement you deserve” and “Save enough today to enjoy a comfortable future”, which scored higher than prevention statements such as “Unexpected expenses can derail you in retirement.”

4. Fraud Prevention – Pensioners are particularly susceptible to online fraud. They are often poorly equipped to deal with identity theft and at risk of accepting unsolicited offers online. A UK Financial Conduct Authority study (2021) found that 72 per cent of pensioners could not identify a common sign of a pension scam. However, AI can be used to monitor fraud risks in real time, identify individuals, or limit access to accounts, thereby providing an extra layer of security.
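To make the idea of real-time fraud monitoring a little more concrete, here is a minimal Python sketch of the kind of anomaly detection such a system might use. It is purely illustrative: the data fields, values, and the choice of scikit-learn’s IsolationForest are assumptions for this example, not a description of any provider’s actual system.

```python
# Illustrative sketch only: flag unusual pension withdrawal requests with a
# standard anomaly detector. Field names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical withdrawal requests for one member:
# [amount (J$), hour of day, days since last login]
history = np.array([
    [50_000, 10, 2],
    [45_000, 11, 5],
    [60_000,  9, 1],
    [52_000, 14, 3],
    [48_000, 10, 4],
])

# Learn the member's normal behaviour from past requests.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(history)

# A new request arrives: a very large withdrawal at 2 am, long after the last login.
new_request = np.array([[400_000, 2, 180]])

if detector.predict(new_request)[0] == -1:   # -1 means the request looks anomalous
    print("Flag for manual review before releasing funds")
else:
    print("Process normally")
```

In practice, a flagged request would go to a human reviewer rather than being blocked automatically, a point reinforced by the discussion of wrong outcomes later in this article.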

There are, of course, many other advantages such as:

• Predictive Analytics – AI can predict the life expectancy of a pensioner, considering various factors like lifestyle, health records, and environmental conditions. This would help in making better investment decisions and therefore enhance portfolio management.

• Compliance and Regulatory Adherence – AI can continuously monitor transactions and operations for compliance issues. This proactive approach could help reduce the risk of costly regulatory fines and penalties.

• Cost Reduction – Automation through AI of various administrative tasks — like data entry, paperwork processing, and basic customer interactions — can lead to significant cost savings and allow team members to focus on more complex matters.

CHALLENGES AND RISKS

The benefits of AI in the pensions context need, however, to be balanced against the challenges and risks.

1. Efficient use of AI tools – “One challenge present in developing all AI tools is whether the right questions are being asked and therefore answered by the AI tool. After all, different questions will lead to different answers.” (Mercer Global Pension Index, 2023).

The data inputted into an AI model informs what it produces. That applies not only to what a customer asks, but also to the data fed in by the developers of the AI. The quality, diversity, and relevance of the training data directly influence the effectiveness of AI models in formulating personalised pension strategies. Prompt engineering and the training of AI tools are fast becoming complementary industries.

2. Accuracy & Reliability – The complexity of financial markets and the unpredictable nature of economic events mean that AI models may not accurately predict or adapt to sudden market shifts. There is also the risk of over-reliance on historical data, which may not adequately capture unprecedented events or changes in economic conditions. And then there are cases in which an AI model may be so accurate as to create unintended results.

Some years ago, Target’s marketing department explored how it could determine whether female customers were pregnant because there are certain periods in life — pregnancy foremost among them — when women are most likely to radically change their buying habits.

If Target could reach out to customers in that period it could, for instance, cultivate new behaviours, getting them to turn to Target for specific goods. Target had been collecting data on its customers via shopper codes, credit cards, and surveys. It then combined that data with demographic data and third-party data it purchased. Crunching all that data enabled Target, using artificial intelligence, to generate a “pregnancy prediction” score.

The marketing department started targeting high-scoring customers with coupons and marketing messages. Several news outlets reported that about a year after creating the pregnancy-prediction model, a man walked into a Target store outside Minneapolis and demanded to see the manager. He was clutching coupons that had been sent to his daughter, and he was angry. “My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?” The manager apologised and then called a few days later to apologise again.

On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

(https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/?sh=678189556668).

3. Wrong Outcomes / Hallucinations – AI algorithms may have biases, offer unjustified responses (known as hallucinations), and not know right from wrong.

An unfortunate example of wrong outcomes is the Robodebt programme in Australia, where unlawful and incorrect automated debt-collection letters were sent to 470,000 social security recipients due to an incorrect algorithm. The programme ran from 2016 until a court determined in 2019 that it was illegal. It resulted in some of Australia’s poorest people being asked to pay off debts they did not owe, after receiving notices claiming they owed thousands of dollars.

More than half a million Australians were affected by the policy, resulting in suicides and considerable mental illness among many recipients. Many were forced into worse financial circumstances — taking out loans, selling their cars, or using savings to pay off a debt they did not owe but were told they had to pay off within weeks (https://www.bbc.com/news/world-australia-66130105).

It is important that models are fully tested to ensure that inappropriate outcomes do not occur and that recommendations are sensitive to the context of the individual. An incorrect algorithm could lead to errors in benefit statements and pension projections, causing negative outcomes for pensioners (and liability for pension providers and fiduciaries).

4. Lack of Transparency and Human Touch – While AI offers efficiency and automation, some individuals may prefer human interaction and guidance when it comes to their retirement planning. AI cannot replace the empathy and experience that human financial advisors provide. Human judgement and intuition will remain essential in certain retirement planning situations that require a nuanced approach, such as dealing with unexpected life events or market volatility.

In a recent survey of 227 pension professionals, 74 per cent said they would not be happy taking financial advice from an AI robot. However, 72 per cent agreed that the integration of AI systems into the pensions industry has the potential to deliver better outcomes for pension scheme members, and 91 per cent of those surveyed were currently using AI in their pensions business.

5. Algorithmic Decision Biases – AI algorithms might inherit biases present in training data, leading to unfair or skewed decisions. Amazon’s AI-powered recruitment tool serves as a prominent case study highlighting the challenges and consequences of algorithmic biases in AI systems. Amazon developed an AI tool to evaluate job applicants’ resumes. The AI system was trained on historical resumes submitted to Amazon over a 10-year period.

Since the majority of these resumes were from male applicants due to the tech industry’s gender skew, the AI system learned to favour male candidates by associating certain terms, schools, or experiences more frequently found on male applicants’ resumes with successful candidates.

The AI tool consistently downgraded resumes containing terms associated with women, even if these qualifications were relevant and significant. For example, it penalised resumes that included the word “women’s”, as in “women’s chess club captain”. The biased algorithm led to a discriminatory outcome, potentially excluding qualified female candidates from consideration and raising ethical concerns and legal implications. After discovering the bias, Amazon decided to abandon the AI recruitment tool. The case underscored the crucial need to actively mitigate biases in AI systems, especially in scenarios where decisions significantly impact individuals’ opportunities and rights.
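For readers who want to see the mechanism rather than just the anecdote, the tiny Python sketch below trains a text classifier on invented, deliberately skewed hiring data and shows that it learns a negative weight for the token “women”. The resumes, labels and model choice are all assumptions made for illustration; this is not Amazon’s system.

```python
# Toy illustration of how skewed training data can bias a model.
# The resumes and hiring labels below are invented for this example only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club engineering degree",            # hired
    "software engineering internship chess club",       # hired
    "engineering degree robotics team",                 # hired
    "women's chess club captain engineering degree",    # not hired
    "women's coding society engineering degree",        # not hired
]
hired = [1, 1, 1, 0, 0]

# Turn the resumes into word counts and fit a simple classifier.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women": it is negative,
# because that token only appears in resumes labelled "not hired".
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(round(weights["women"], 3))  # negative value -> the term is penalised
```

The point is simply that the model penalises a term because of how the historical data were labelled, not because the term says anything about a candidate’s ability.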

There are other challenges, and the list will never be exhaustive. For instance:

• Identity Theft – The fact that AI can faithfully reproduce a person’s voice, writing style, photo or video, combined with the growth of sophisticated security-breaching programs, may lead to an increase in incidents of identity fraud or unauthorised access to retirement savings, which may threaten public confidence in long-term pension systems.

• Data privacy and the need to protect and safeguard members’ data – AI needs data to survive: we need AI tools to process data, but we also need data to train AI. This raises concerns about data privacy and security. Data protection laws are undergoing radical changes in many countries, including Jamaica with its recent promulgation of the Data Protection Act on November 30, 2023. Trustees must ensure that security controls are in place to protect plan data and that a solid framework is developed with respect to privacy policies and protocols.

ENVIRONMENTAL CONSIDERATIONS

AI has the potential to positively impact the pension industry through environmentally friendly investments. By integrating environmental considerations into investment strategies with the help of AI, pension funds can contribute to long-term value creation. This aligns with the interests of beneficiaries who are increasingly concerned about the sustainability and ethical implications of their investments.

However, the rapid rise of AI has raised concerns about its negative environmental impact, particularly in respect of energy consumption. It is projected that by 2025 the IT industry could consume up to 20 per cent of the world’s electricity and contribute approximately 5.5 per cent of global carbon emissions.

Training AI models requires vast amounts of energy. For example, the training of ChatGPT resulted in 552 metric tons of carbon emissions, equivalent to driving a passenger vehicle for over 2 million kilometres (https://dig.watch/updates/ais-impact-on-environment). According to one study by the University of Massachusetts, training an AI model to do natural language processing can produce the carbon dioxide equivalent of five times the lifetime emissions of a car, or the equivalent of 300 round-trip flights between San Francisco and New York (https://www.forbes.com/sites/glenngow/2020/08/21/environmental-sustainability-and-ai/?sh=3836d3207db3).

AI is also a significant user of physical resources, including gold and rare earth metals, the mining of which threatens to cause future environmental damage with even greater emissions. Trustees, fund managers and administrators should be mindful of the impact of AI as part of their overall environmental, social, and governance (ESG) strategy. As AI continues to advance, the pension sector and policymakers will need to strike a balance between the transformative capabilities of AI and its substantial carbon footprint. AI innovations can become a blessing and a curse for both humanity and the planet.

WHAT ROLE DOES THE REGULATOR PLAY IN THE AI REVOLUTION?

Policymakers and regulators have a role in ensuring that the use of AI is consistent with financial stability, consumer protection, market integrity, and competition. The reality, though, is that rapid advancements in AI technology outpace regulatory frameworks, complicating oversight and enforcement.

Some of the measures regulators could implement include:

1. Policy Development – Regulators can establish policies and guidelines to govern the use of AI and machine learning in the pension industry. These policies can ensure that pension industry stakeholders adhere to ethical and legal standards when implementing AI systems.

2. Transparency and Accountability – Regulators could promote transparency by requiring pension institutions to disclose how AI algorithms are used in decision-making processes. This transparency ensures that stakeholders understand how these technologies impact pension-related decisions. In Texas, some judges require attorneys appearing before them to file a certificate attesting that they either did not use generative AI at all or that, if they did, they checked the results.

3. Continued Education and Awareness – Regulators should educate pension industry professionals about the ethical and responsible use of AI. For example, published ethical guidelines, such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, could be a good starting point for regulators seeking to ensure that AI-driven pension systems align with societal values and ethical norms.

Regulations must maintain a balance between safeguarding public interests and fostering the growth and development of those regulated industries. Navigating this regulatory balancing act requires acknowledgement of the varying risks associated with AI and devising strategies that align regulation with risk, without stifling innovation through overbearing regulatory intervention.

RESPONSIBLE STEWARDSHIP OF TRUSTWORTHY AI

In May 2019 the OECD adopted its Principles on AI, the first international standards agreed to by governments for the responsible stewardship of trustworthy AI. These state that:

1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards.

3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

4. AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed.

5. Organisations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning, in line with the above principles.

We are all at the beginning of a journey to understand the true power of AI, its reach and capabilities. AI has the potential to revolutionise pension management by improving risk assessment, personalising retirement planning, streamlining administrative processes, and enhancing fraud detection and prevention. However, AI adoption in pension systems also raises ethical, regulatory and societal implications — such as data privacy, bias, and socioeconomic disparities.

Future-proofing pensions in the AI era requires all stakeholders in the pension industry to take a proactive approach to the AI revolution and to play their part in ensuring that our pension schemes are sustainable, equitable, and secure for future generations.

