The Edge Singapore

Blackbird.ai swoops in to the rescue as disinformation wars hit business world

BY NG QI SIANG qisiang.ng@bizedge.com

Debates on disinformation today tend to be associated with the intrigue of psy-wars and culture wars. Public attention has been seized by accusations of Russian interference in the 2016 US elections and far-right misinformation techniques on social media. Conspiracy theories like QAnon and anti-vax narratives, designed to spread falsehoods for political advantage, have become household names, as the culture wars increasingly tear societies and even families asunder.

But less attention has been paid to the costs that disinformation can impose on businesses. A study by the University of Baltimore found that disinformation costs companies US$78 billion ($106.3 billion) in annual losses in the US alone, with financial disinformation in particular knocking US$17 billion off market values. Consumer brands lose US$235 million annually from advertising next to fake news items, even as they spend over US$9 billion to repair the damage from disinformation attacks such as boycotts.

Not even established brands are spared. Tesla’s stock price took a hit in 2019 due to fake videos of a self-driving Tesla catching fire, while a fake Bloomberg report claiming a US$31 billion takeover bid for Twitter caused its share price to jump 5% before the hoax was discovered. Semiconductor firm Broadcom also saw its share price fall when a fake memo circulated claiming that the US Department of Defense was investigating national security risks posed by its actual US$19 billion bid for CA Technologies.

According to Kroll, 84% of businesses feel threatened by market manipulation through the spread of fake news, most commonly fuelled by social media. “Additionally, brand ambassadors and influencers present a new challenge for due diligence procedures; 78% of survey respondents use them to some extent,” adds the US corporate investigations and risk consulting firm.

Disinformation threats are growing ever more sophisticated and targeted, says Brice Chambraud, APAC managing director of US cybersecurity firm Blackbird.ai. State-backed actors, “disinformation for hire” and “black PR” outfits offering to run smear campaigns for clients are increasingly prevalent in what has become a “messy” space for businesses. “If you want to make up a lie about a company that is burning fossil fuels, you just need to target an echo chamber of environmentalists and you know you will be able to get a huge engagement,” he tells The Edge Singapore in an interview.

But corporations and business leaders often remain blissfully unaware of the threat that misinformation poses until it is too late, says Mike Paul, president of public affairs at Reputation Doctor. “Corporations spend hundreds of millions — even billions — to develop their brands, but they often devote an almost infinitesimal percentage of that amount to protect them,” he comments in a Harvard Business Review (HBR) white paper. Harlan Loeb, global chair of risk and reputation management at public relations firm Edelman, observes that many firms already have their hands full with conventional cyberattacks, leaving little room to even think about fake news.

The lack of attention on business-related disinformation also means that social media platforms do not typically give firms sufficient protection against fake news. “Online media and platform companies are more concerned about content that incites violence or harms elections,” says fake news researcher Aviv Ovadya, founder of the Thoughtful Technology Project, who was also cited in the HBR white paper. Without sufficient public policy to protect businesses from online falsehoods, they are left as little more than sitting ducks.

Amplification of fake news

The way Chambraud sees it, the growing complexity of the digital landscape makes it tough for businesses today, especially high-profile Fortune 500 firms, to stay on top of the narratives that involve them. But he believes that the “crisis management” approach of responding to fake news only after it begins to pose a threat is insufficient, given the growing speed and complexity of today’s disinformation threats. “[Businesses] are forced to be reactive because this is a blind spot. They don’t really have the tools to look into these manipulation signs,” he says.

Dealing with fake news only after it has entered the public sphere, says Chambraud, is often a case of too little, too late. “It’s extremely easy to amplify ... stories that you piggyback off. These amplified stories get picked up organically and they start to compound in influence through volume,” he elaborates. Although such falsehoods often begin within parochial echo chambers, such as special interest groups, their proliferation can snowball very quickly, eventually entering mainstream discourse and inciting public outrage.

“The moment that a disinformation campaign comes out and it starts to amplify ... there is an inflexion point that happens. The moment that it gets some organic activity, it surges very quickly and it is very hard to reverse that,” says Chambraud. Negative first impressions that proliferate online are difficult to undo, even with fact-checking. He believes that time is the best ally of disinformation: a slow response lets the campaign win more converts as it spreads, creating a critical mass of believers to pressure firms.

According to Chambraud, his company takes a more proactive approach to detecting and preventing disinformation. Most organisations he speaks to rely on social listening tools and Excel sheets manned by human fact-checkers to identify disinformation; Blackbird.ai instead draws on the power of AI to identify and nip fake news in the bud. Such manual techniques, Chambraud suggests, usually prove insufficient to keep up with ever-evolving bad actors while also exposing the process to human bias from the fact-checkers.

At Blackbird.ai, AI is relied upon to develop a faster and more objective means of handling disinformation. Chambraud says that while the company currently uses an API to run this process on social media channels like Twitter, 4chan and Reddit, it is also developing a software-as-a-service (SaaS) platform to monitor firms’ brand assets (for example, social media accounts). This bypasses potential bias from human open-source fact-checkers while allowing organisations to analyse more data at greater speed.

Blackbird.ai also claims to have a more comprehensive measure for disinformation. Via its in-house Blackbird risk index, the company measures a weighted set of factors that affect the potency of disinformation, including toxicity, amplification, hyperpartisanship, communities of spread and volume. This allows for a fuller understanding of the nature of disinformation, which can better inform strategies to counter untrue narratives.
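As a rough illustration of how such a weighted index might combine these factors, consider the minimal Python sketch below. The factor names come from the article, but the weights, the 0-to-1 scaling and the risk_score function are hypothetical assumptions for illustration only, not Blackbird.ai’s actual formula.

# Hypothetical sketch of a weighted disinformation risk score.
# Factor names follow the article; the weights and 0-1 scaling are
# illustrative only, not Blackbird.ai's proprietary risk index.

FACTOR_WEIGHTS = {
    "toxicity": 0.25,
    "amplification": 0.25,
    "hyperpartisanship": 0.20,
    "community_spread": 0.15,
    "volume": 0.15,
}

def risk_score(factors: dict[str, float]) -> float:
    """Combine per-factor scores (each normalised to 0-1) into one 0-1 risk score."""
    return sum(FACTOR_WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in factors.items()
               if name in FACTOR_WEIGHTS)

# Example: a narrative that is heavily amplified but only mildly toxic.
print(risk_score({
    "toxicity": 0.3,
    "amplification": 0.9,
    "hyperpartisanship": 0.6,
    "community_spread": 0.7,
    "volume": 0.8,
}))  # -> about 0.65 on a 0-1 scale

Weighting the signals separately means a narrative that is merely voluminous, but neither toxic nor hyperpartisan, would score lower than one that combines all five factors.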

Based on this index, Blackbird.ai’s AI surfaces risky patterns of discussion on these channels and provides push alerts to clients should a threat emerge. Analytical tools built into Blackbird.ai’s software then provide comprehensive intelligence about these threats, such as the narratives that firms are implicated in, the identity of the main threat actors, and the peaks and dips of such narratives. Reports can then be produced for public relations teams to develop a proactive strategy ahead of time to deal with the fallout of disinformation around the corner.

“Today we are at a slightly under 24-hour [response] cycle, but when we launched our platform, we were targeting to go near real-time at the very least,” notes Chambraud. Such reports can be tailored according to the bespoke needs of particular clients. Professional human analysts with intelligence experience train the AI to ensure peak performance and effective recognition of emerging and culturally-specific threats that have yet to be recognised by previous algorithms.

But ultimately, Blackbird.ai’s role is limited to monitoring and risk reporting; it remains up to the client and their public relations team to develop a response to the disinformation risks identified. “We highlight risk. We don’t tell you what is real or what is fake. We leave the subject owner to decide,” explains Chambraud, recognising that different firms will likely have their own specific policies and needs vis-à-vis handling disinformation. The system is set up to complement rather than substitute for fact-checking, with Chambraud saying that fact-checkers could potentially gain deeper insights from using Blackbird.ai’s proprietary technology.

“It’s really tough if you don’t have the intervention of technology, especially if you are a firm that does not have a massive team or pool of resources to monitor social media,” he says. With attacks often emerging suddenly, in large volume and from unexpected places, it helps to have technology that can detect and decipher patterns of disinformation. Blackbird.ai’s ability to identify patterns of propagation through network and time-series analysis gives analysts an edge in risk monitoring that not even a hundred-person team could match.
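To give a sense of the kind of time-series pattern detection described here, the sketch below flags hours in which mention volume surges well above its recent baseline. The 24-hour rolling window, the three-times-baseline threshold and the detect_surge function are illustrative assumptions, not Blackbird.ai’s actual detection logic.

# Hypothetical sketch: flag sudden surges in mentions of a brand narrative.
# The rolling baseline and 3x-spike threshold are illustrative assumptions.
from statistics import mean

def detect_surge(hourly_mentions: list[int], window: int = 24, spike_factor: float = 3.0) -> list[int]:
    """Return indices of hours where volume exceeds spike_factor times
    the average of the preceding `window` hours."""
    alerts = []
    for i in range(window, len(hourly_mentions)):
        baseline = mean(hourly_mentions[i - window:i])
        if baseline > 0 and hourly_mentions[i] > spike_factor * baseline:
            alerts.append(i)
    return alerts

# Example: a quiet narrative that suddenly gets amplified after hour 24.
mentions = [12, 9, 15, 11, 10, 8, 14, 13, 9, 11, 10, 12,
            11, 13, 9, 10, 12, 14, 11, 10, 9, 13, 12, 11,
            48, 120, 260]
print(detect_surge(mentions))  # -> [24, 25, 26]

An alert fired at the first spike, rather than after the narrative has gone mainstream, is what allows a response before the inflexion point Chambraud describes.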

Spreading the word in Asia

So far, Blackbird.ai’s operations are centred largely in the US, but Chambraud sees Asia Pacific, with its increasingly connected population, as a growing market. Interestingly, it was the mindset of Asian firms, rather than any particular vulnerability to disinformation per se, that saw Blackbird.ai establish its first overseas presence in APAC. “For Asia, it ultimately leads down to organisations being confident with making strategic decisions for the future,” he explains. The significant growth potential of Asian markets makes it essential for multinational firms to obtain useful intelligence on disinformation in APAC.

Due to Singapore’s central role in Asia Pacific and its government’s uncompromising stance against online falsehoods, the city-state was the natural choice for Blackbird.ai’s APAC operations. “Singapore has been proactive in addressing disinformation through policy and education. With Singapore as our Asian hub, we aim to build on these efforts with technology, expand our presence, and help neutralise the threat of disinformation in the region,” said group CEO Wasim Khaled in a press release at the firm’s APAC launch last year. Blackbird.ai has spoken with a few ministries and is working on pilots to combat fake news.

Chambraud is particularly excited about working with commercial clients to measure the extent to which news is manipulated by bad actors — something he says nobody has yet undertaken. Strengthening intelligence on news manipulation, he says, will help benchmark incidents to assess the extent of the fake news threat faced by a given sector and the implications such disinformation can have for financial markets.

For now, however, Blackbird.ai is looking to exercise thought leadership in the fake news space and promote online literacy against disinformation in APAC. Media engagement plays a significant role in Chambraud’s strategy to reach and educate a critical mass of the population. “Narratives are very influential, and being able to provide as much context in this space as possible is a very huge first step for us,” he remarks.

PICTURE AND CHART: BLACKBIRD.AI. 38.7% of tweets related to Covid-19 are inorganic content* (*inorganic content refers to manipulated content)
Chambraud: Disinformation threats are growing ever more sophisticated and targeted
