European Parliament passes first AI Act
The European Parliament on March 13, 2024, approved the world’s first comprehensive framework for regulating artificial intelligence (AI), known as the AI Act.
The law aims to address the rapid growth of AI technology, which has led to concerns about bias, privacy issues, and potential societal impacts. The AI Act classifies AI products based on their level of risk and imposes varying degrees of scrutiny accordingly.
“The rapid proliferation of AI, particularly generative AI, has brought immense opportunity as well as significant risks,” said Peter Sandkuijl, vice president, EMEA Engineering, and evangelist at Check Point.
“The new EU AI Act aims to establish controls and gradations for AI usage; the prospect of automatically recognizing every face in a room and analyzing emotions, facial expressions and descent is a worry. It is not about stifling innovation but rather about creating a legal framework that aligns with democratic values while safeguarding the rights of EU citizens.”
What ripple effect will this cause? What are the cybersecurity implications?
“This is the first global law attempting to address the risk that AI may introduce and to mitigate the risk of AI applications infringing upon human rights or perpetuating biases,” said Sandkuijl.
“Whether it is CV scanning with inherent gender bias, pervasive surveillance of public spaces with AI-powered cameras, or invasive medical data analysis affecting your health insurance, the EU AI Act seeks to set clear boundaries for AI deployment, so that vendors and developers have guidelines and guardrails. With that in place, the ‘good guys’ will be able to see the demarcation line, and there will be tools to prosecute the ones who cross it.”
The EU AI Act has several cybersecurity implications, both directly and indirectly affecting the landscape:
Stricter development and deployment guidelines. AI developers and deployers will need to adhere to strict guidelines, ensuring that AI systems are built with security by design. This means incorporating cybersecurity measures from the ground up, focusing on secure coding practices, and ensuring AI systems are resilient against attacks.
Increased transparency. The Act mandates transparency in AI operations, especially for high-risk AI applications. This can mean more detailed disclosures about the data used for training AI systems, the decision-making processes of AI, and the measures taken to ensure privacy and security. Transparency aids in identifying vulnerabilities and mitigating potential threats.
Enhanced data protection. Given that AI systems often rely on vast datasets, the Act’s emphasis on data governance will necessitate enhanced data protection measures. This includes ensuring the integrity and confidentiality of personal data, a core aspect of cybersecurity.
Accountability for AI security incidents. The Act’s provisions likely extend to holding organizations accountable for security breaches involving AI systems. This can mean more rigorous incident response protocols and the necessity for AI systems to have robust mechanisms to detect and respond to cybersecurity incidents.
Mitigation of bias and discrimination. By addressing the risks of bias and discrimination in AI systems, the Act indirectly contributes to cybersecurity. Systems that are fair and unbiased are less likely to be exploited through their vulnerabilities. Ensuring AI systems are trained on diverse, representative datasets can reduce the risk of attacks that exploit biased decision-making processes.
Certification and compliance audits. High-risk AI systems will need to undergo rigorous testing and certification, ensuring they meet the EU’s standards for safety, including cybersecurity. Compliance audits will further ensure that AI systems continuously adhere to these standards throughout their lifecycle.
Prevention of malicious AI use. The Act aims to prevent the use of AI for malicious purposes, such as creating deepfakes or automating cyberattacks. By regulating certain uses of AI, the Act contributes to a broader cybersecurity strategy that mitigates the risk of AI being used as a tool in cyberwarfare and crime.
Research and collaboration. The Act can spur research and collaboration in the field of AI and cybersecurity, encouraging the development of new technologies and strategies to secure AI systems against emerging threats.
Transparency is a central tenet of the EU’s approach, especially concerning generative AI. By mandating transparency in the AI training process, the legislation aims to expose potential bias and AI errors before they are accepted as truth.
“Let us not forget that AI is not always correct; on the contrary, it makes more mistakes than we would tolerate from virtually any other technology today, and thus transparency becomes a critical tool in mitigating its shortcomings,” Sandkuijl commented.
“The initial attention will fall on the hefty fines imposed; however, that should not be the main focus. As laws are accepted, they will still be tested and tried in courts of law, setting precedents for future offenders,” Sandkuijl added. “We need to understand that this will take time to materialize, which may actually be more helpful, though it is not an end goal.”
The rapid pace of AI adoption demonstrates that legislation alone cannot keep up, and the technology is so powerful that it may gravely affect industries, economies and governments.
Sandkuijl’s hope for the EU AI law is that it will serve as a catalyst for broader societal discussions, prompting stakeholders to consider not only what the technology can achieve but also what the effects may be.
“By establishing clear guidelines and fostering ongoing dialogue, it paves the way for a future where AI serves as a force more for good, underpinned by ethical considerations and societal consensus,” Sandkuijl concluded.