Call to build responsible AI to tackle cyberattacks
The use of generative artificial intelligence (AI), the powerful tool behind OpenAI’s ChatGPT, could push the capabilities of cyberattacks to new heights while also offering new defence mechanisms, but most organisations are still learning to harness the tool, according to one of Microsoft’s leading AI experts.
“AI is an incredibly powerful technology, and so it’s unfortunately a very exciting tool, for example, in cybersecurity for threat actors,” Sarah Bird, Microsoft’s chief product officer of responsible AI, said at the HSBC Global Investment Summit in Hong Kong.
Amid a frenzy of AI development worldwide, technology firms are trying to speed up research and development as they push to develop their own large language models in the highly competitive field. But Bird warned it was also crucial to think “how to build with the technology responsibly and safely”.
“Like any new technology … [AI] has some limitations,” she said.
AI can generate harmful content and code, according to Bird, possibly making systems more susceptible to new types of attacks, such as prompt injection attacks and jailbreaking, which allow attackers to bypass software restrictions.
Bird noted, though, that AI could be both the cause and solution to tackling these new cybersecurity challenges.
Microsoft was using AI to help security analysts assess the number of threat signals in an attack to help the company respond more effectively, Bird said.
“So we’re [going to] see a new level of attack and defence because of this technology,” she said.
Another challenge in adopting generative AI tools was varying regulations across different industries and countries, said Mark McDonald, head of data science and analytics for the global research arm of HSBC.
“We have seen multiple regulations focus on the area,” McDonald said, adding it had become very difficult for global organisations with businesses across multiple regions to comply with these disparate rules.
The tech community is calling for more clarity and consistency in the regulation of emerging technologies.
Bird said regulators should think about the whole ecosystem when formulating new rules, as generative AI could be applied in many sectors, including highly regulated ones such as financial services and healthcare, each with their own requirements.
“One of the challenges is the regulations are moving quickly,” Bird said. “They’re all taking different approaches.”
Educating regulators in fields in which they may not have first-hand knowledge is important, according to Bird.
“Frankly, a lot of them just don’t have the experience with the technology or the complex practices required for that,” she said.
“So I have an enormous urgency to go and educate around this space if people don’t understand what actually works and what doesn’t work.”
We’re [going to] see a new level of attack and defence because of this technology
SARAH BIRD, MICROSOFT