The Manila Times

Can AI make CEOs more relatable?


WHEN Lech Mintowt-Czyz wrote speeches for Ben van Beurden, part of his job was helping the then chief executive of Shell emotionally connect with audiences in a way he was not used to.

The oil and gas company was under pressure from activist investors and campaigners for its role in enabling climate change, and whenever van Beurden spoke, it was “a very fine series of delicate judgment calls”, says Mintowt-Czyz. It took a long time to build a relationship with van Beurden and gain his trust.

Speechwriters such as Mintowt-Czyz are now facing a new set of pressures as generative artificial intelligence tools such as ChatGPT and GPT-4 promise to create compelling speeches based on simple subject and style prompts and the input of some key quotes. Other tools have sprung up to help with delivery, such as analysing voice recordings or live speeches to detect emotion and assess their impact.

As the views of business leaders come under more scrutiny, particularly from younger employees, communications — from staff update emails on geopolitical events to investor presentations — are taking on greater importance. A misfired quote can easily prompt a backlash from staff or customers.

AI tools can help produce speeches quickly — churning out thousands of words in a matter of minutes — and aid executives seeking to engage with staff or stakeholders more effectively. But there are clear downsides: their content can lack character and authenticity and potentially be packed with errors.

“It’s a live discussion in the speechwriting community,” says Mintowt-Czyz. As yet, AI tools “don’t understand what the words mean. Just how they relate to each other and how words are put together. It’s very good at it, but it does not fundamentally understand what it’s writing, what its effect on people might be . . . It can’t tell the difference at this stage between good, bad and mediocre.”

While Mintowt-Czyz has “certainly had a good play with AI tools to get a sense of their capabilities”, he says he has not used them to help him write anything professionally.

So far, corporate communications advisers and speechwriters are tending to use the technology as a kind of digital helper, rather than taking its content verbatim. It is one aspect of a new AI-assisted leadership style taking shape across business: other tools are on hand to help chief executives with tasks such as scheduling and time management, and to act as a sparring partner to stress-test key decisions.

AI can generate data to inform and enrich the content of speeches or presentations. Mintowt-Czyz says he might use it “as a sort of semi-intelligent thesaurus”. Other communications specialists use tools to redraft speeches — something they can do in a fraction of the time it would take a person — omit clichés or make the speaker sound more energetic or positive.

Public relations firm Kekst CNC, for example, has worked with Oxford University on technology that enables its clients to see a snapshot of their own linguistic tendencies and suggests improvements. It provides an “impact score” based on how an executive uses pauses, pace, emotive language and storytelling, among other metrics, after analysing audio recordings — from earnings calls and internal town halls to media interviews. Data from one executive can be compared with a wider set to see how that individual performs versus peers.

Shelagh Paul, the top corporate affairs executive at Canadian pension fund Omers, says the delivery of speeches is as important as the narrative. “Delivery matters. It is what makes a message memorable,” she says. “A data-driven perspective can provide an objective check against bias. We’d never rely on a tool alone to generate advice. But where even slight nuances in delivery can have a meaningful impact, I am going to top up our expertise with the resources that can give us a competitive advantage.”

Cheril Clarke, an executive ghostwriter and speechwriter, treats ChatGPT like a junior writing assistant, tasking it with summarising long documents, engaging in collaborative idea generation and general research.

“A speechwriter in the healthcare field may be writing a keynote on emerging technologies,” Clarke says. “Using ChatGPT, they can quickly gather and summarise recent advancements and relevant statistics. The AI’s ability to sift through massive amounts of data can save hours of research time . . . and help the writer spot key trends they have missed.”

Clarke notes that she wants to “harness the speed and processing power” of an AI tool to do rudimentary tasks, but will continue to create the “passionate, compelling, and original story” herself as a bot cannot yet “feel and experience the way we do”.

She adds: “People who rely on [AI tools] to replace human talent will quickly find their outputs are generic . . . Tools like ChatGPT are just that — tools. They do not replace human creativity.”

Clarke uses PlayHT, an AI company that builds conversational voice models capable of cloning any voice or accent and generating speech in real time. The audio tool can help anyone “who needs a professional voice but doesn’t have a big budget to go in a studio with a human”.

Most advisers to senior executives warn against using AI tools for sensitive communications. They say that if employees discover a highly emotive announcement — such as a note to staff about redundancies — has been created by a robot, it would convey a lack of respect and empathy.

AI tools might be used for feedback on an important decision, says Euro Beinat, global head for AI and data science at Prosus, the tech investor. “You might think, ‘I hadn’t thought of that’,” he says. But, he cautions, “I would be very careful not to use these tools for anything material for a company.” He adds: “I would always have control and I wouldn’t automate anything.”

Tera Allas, director of research and economics at consultancy McKinsey’s UK and Ireland office, agrees that for now anyone using these tools needs a human “in the loop” to ensure there are no errors or biases. “With language there are a lot of things AIs get wrong,” she says.

Executives also need to be aware that AI-generated content can be detectable: technology to spot AI output is being rapidly developed.

And then there is an authenticity point to consider. “If you think forward into a world where all communication is created by AI, it will be read by AI too,” Allas adds. “Employees may not be reading their own email. There is a loss of meaning unless you put it back into a real life context.”

Mintowt-Czyz agrees: “What is communication? It’s about human connection. One human reaching out to another one. Either to inform or persuade or whatever . . . When you delegate an element of that to a computer, it’s no longer human-to-human contact,” he says.

For now, he is betting on himself. His work, he adds, is “more than words on a page”. “People have to trust those words. How can you trust that the AI knows what it is doing?”

A photo taken on February 26, 2024 shows the logo of the ChatGPT application, developed by US artificial intelligence research organization OpenAI, on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany. (Photo by Kirill KUDRYAVTSEV / AFP)
