New library to secure AI systems
IBM has released a security library to the open source community that is designed to help protect artificial intelligence (AI) systems. The company's aim is for the toolbox to become a repository and source of information on threats to current and future AI solutions.
Certain weaknesses in AI systems leave them open to exploitation. For example, attackers can craft near-undetectable alterations to content such as images, video, and audio recordings, and doing so does not require deep knowledge of AI.
These changes can be tiny yet lead to serious security breaches. They degrade the performance of AI systems, for instance by steering a model toward a decision that serves the attacker.
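To make the idea concrete, the sketch below shows a fast-gradient-sign-style perturbation against a hypothetical toy linear classifier built with NumPy. This is an illustration of the general technique, not IBM's library or its API; the model, weights, and labels are all invented for the example.

```python
import numpy as np

# Hypothetical toy model: a linear "classifier" over 64 pixel values.
# score = w . x; a positive score means the input is labelled "benign".
rng = np.random.default_rng(0)
w = rng.normal(size=64)              # fixed model weights
x = 0.1 * w / np.linalg.norm(w)      # an input the model classifies as benign

def classify(v):
    return "benign" if w @ v > 0 else "malicious"

# FGSM-style attack: nudge every pixel a tiny, equal amount against the model.
# The gradient of the score with respect to x is just w, so step along -sign(w).
margin = w @ x                          # distance of x from the decision boundary
eps = 1.5 * margin / np.abs(w).sum()    # smallest flipping budget, with 50% slack
x_adv = x - eps * np.sign(w)

print(classify(x))                      # benign
print(classify(x_adv))                  # malicious
print(f"max per-pixel change: {np.abs(x_adv - x).max():.4f}")
```

The perturbation budget `eps` works out to a few hundredths per pixel, far below what a human reviewer would notice in a real image, yet it is enough to flip the model's decision. This is exactly the class of "small change, large consequence" attack the article describes.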
Aimed at combating so-called 'adversarial AI', the toolbox records threat data and assists developers in creating, benchmarking, and deploying practical defence systems for real-world AI. IBM shared that this research looks at the best ways to defend AI systems before attackers strike.
By introducing the toolkit to the open source community, IBM hopes others will be inspired to create solutions before adversarial AI becomes a widespread threat. The toolbox also includes a library, interfaces, and metrics to help developers begin building cyber security solutions for this emerging field.
“Considering existing tools didn’t provide the defences needed to protect AI systems, this is the first and only AI library that contains attacks, defences, and benchmarks to implement improved security,” company sources said.