
Unmasking the hidden threat of shadow AI

By Vivek Behl, digital transformation officer at WalkMe

Picture this scenario: You want to email the board of directors to report on your latest project, and you want it to sound as professional as possible. With generative AI platforms like ChatGPT or JasperAI, even someone not entirely confident in their writing can instantly produce sentences phrased seemingly just right, articulating the point efficiently. It is oh-so tempting. The problem, however, is that the platform can also absorb the project's data to train its model, possibly exposing it to unknown parties.

This is not an uncommon scenario across workplaces in Singapore, where artificial intelligence (AI) is increasingly being adopted. Microsoft's third annual Work Trend Index report found that 81% of employees are likely to delegate much of their work to AI.

A survey commissioned by global tech giant Salesforce and conducted by YouGov found that 40% of Singaporean full-time office workers use generative AI. Of these, 76% said they have used it to complete a task and pass the work off as their own, while 53% admitted to having done so multiple times.

Additionally, 48% of employees said they had used a generative AI platform their company had banned. This is cause for serious concern, as employees risk leaking sensitive company information or even intellectual property.

You can’t secure what you can’t see

AI programmes or solutions that operate beyond the visibility and control of the organisation's IT team, otherwise known as shadow AI, range from chatbots like ChatGPT and Bard to more advanced platforms like AlphaCode and SecondBrain.

The concept of shadow AI originates from shadow IT, which refers to systems and devices not managed by the company’s IT department.

While generative AI applications can enhance workers' speed and efficiency, not to mention creativity, they also raise the spectre of data privacy violations. Employees may not understand that feeding sensitive, proprietary information to a generative AI platform is not the same as saving it in a Word document or even on the company's cloud-based systems.

Large language models (LLMs) can memorise and later reproduce the exact details of customer data fed to them. Similarly, using generative AI to verify code or plan unique strategies can inadvertently disclose proprietary information to the AI provider.
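To make the risk concrete, consider a minimal, purely hypothetical sketch in Python, assuming an OpenAI-style chat API and an invented customer record. The moment the request is sent, the record leaves the organisation's control, whatever the employee intended.

```python
# Hypothetical example: an employee asks a generative AI service to
# polish a board update that embeds a real customer record. The
# provider's servers receive the record in full, and depending on the
# service's terms it may be retained or used for model training.
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

customer_record = (
    "Customer: Tan Ah Kow, NRIC S1234567D, "
    "outstanding balance $48,200, account frozen since 2023-05-01"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Rewrite this update for the board: {customer_record}",
    }],
)

# The polished text comes back, but the sensitive record has already
# left the company's network.
print(response.choices[0].message.content)
```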

All eyes on the data

Enforcing AI governance is crucial to preventing leaks that let data fall into the wrong hands. With complete visibility and control, organisations can stop attackers from stealing company or personal data and from targeting key devices and systems.

As mentioned, data leaks can happen simply by exposing sensitive data to certain AI applications. Securing these valuable assets has a knock-on effect that bolsters customer confidence and trust. While blocking generative AI applications outright may seem logical, this approach only leads to organisations losing out on benefits that could drive their business forward.

Given the many advantages to productivity and creativity, banning generative AI applications altogether could become a competitive disadvantage.

Organisations need to implement the right technologies and guidelines to manage AI usage effectively, lowering the risk of data privacy compromise without missing out on all that generative AI offers.

Relying on traditional data loss prevention (DLP) solutions might help detect data leakage through specific patterns, such as strings shaped like identity card or payment card numbers. However, these solutions usually work at the network level and give employees no contextual information about what they did wrong and how to avoid future incidents, leaving staff likely to continue behaviours that put company data in danger.
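To illustrate what pattern-based detection means in practice, here is a simplified, hypothetical sketch in Python; real DLP products inspect live network traffic with far richer rules, but the principle is the same.

```python
import re

# Simplified illustration of pattern-based DLP. The patterns and the
# scan function are hypothetical, not any vendor's implementation.
PATTERNS = {
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),    # Singapore NRIC/FIN shape
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough payment-card shape
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Please proofread: customer S1234567D owes $48,200."
hits = scan_outbound(prompt)
if hits:
    # A network-level DLP can block the request, but it cannot explain
    # to the employee, in context, why the action was risky.
    print(f"Blocked: matched {', '.join(hits)}")
```

Note how the match is mechanical: the tool can stop the request, but the explanation the employee needs has to come from somewhere else.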

DLP solutions also require constant maintenance, and even then, they may not catch every case. A more effective solution informs employees on how to use AI safely and responsibly.

Encouraging more responsible AI

Digital adoption platforms (DAPs) sit as a glass layer on top of software to provide customised guidance and automation to end users. They can be user-friendly guardrails for staff as they leverage generative AI platforms.

Through segmented pop-up alerts and automation, employees receive relevant instructions and policies detailing what they should or should not do when engaging with specific AI applications.

DAPs can redirect employees from risky applications to more secure alternatives, and can even hide specific functionality within an application, as the sketch below illustrates. At the leadership level, DAPs shine a light on shadow AI usage by granting full visibility into how employees use all business applications, including the ever-growing list of generative AI tools, down to the click level.
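Purely as an illustration, and without implying any particular DAP product's API, such guardrails can be thought of as a policy mapping each application to an action: allow, warn, or redirect. A hypothetical Python sketch:

```python
# Hypothetical guardrail policy for generative AI applications.
# Domains, actions and messages are all invented for illustration.
POLICY = {
    "chat.openai.com": {
        "action": "warn",
        "message": "Do not paste customer data or source code into ChatGPT.",
    },
    "bard.google.com": {
        "action": "redirect",
        "target": "ai.internal.example.com",  # hypothetical approved tool
        "message": "Please use the company-approved assistant instead.",
    },
}

def apply_policy(domain: str) -> dict:
    """Look up the guardrail action for an application; the default is allow."""
    return POLICY.get(domain, {"action": "allow"})

print(apply_policy("chat.openai.com"))  # warn, with in-context guidance
```

The value of the pop-up over a silent network block is precisely the message: the employee learns, at the moment of use, why the action is risky and what to do instead.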

While DAPs are an effective way to take on the growing risk of shadow AI, leadership teams also need to be educated on the ever-evolving AI landscape and its accompanying dangers and possible rewards.

With this knowledge, they will be better equipped to optimise resource allocation and align policies with business needs and compliance regulations. As a result, AI adoption becomes safer and smarter.

Organisations should also host discussions and workshops to facilitate knowledge sharing, promoting transparency and strategic alignment while mitigating fears associated with new technologies like AI.

Generative AI applications can greatly help organisations create and innovate ahead of competitors. However, they also pose risks, such as replicating sensitive information or exposing it to outside parties.

Both can lead to privacy breaches and a loss of trust from customers and stakeholders. Addressing these challenges requires organisations to implement effective solutions such as DAPs and to educate employees on using generative AI while protecting company data.

PHOTO: UNSPLASH. Shadow AI is derived from the concept of shadow IT, which encompasses systems and devices not under the control of the company's IT department.
