Who do you trust?
This is going to feel like a cautionary tale. That’s because it is.
The nation’s top cybersecurity agency has launched a program aimed at boosting election security in the states, shoring up support for local offices and hoping to reassure voters that this year’s presidential election will be safe and accurate.
Vermont’s own former director of elections in the Secretary of State’s office, Will Senning, started there this week. Vermont has been a national leader in safeguarding elections. It is work that started under former Secretary of State Jim Condos and has carried over to Secretary of State Sarah Copeland Hanzas. Condos and now Copeland Hanzas seem confident Vermont is ready for the 2024 election cycle, which begins March 5 with the presidential primary.
Officials with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) are introducing the program to the National Association of State Election Directors and the National Association of Secretaries of State.
For state and local election officials, the list of security challenges keeps growing. Among them: potential cyberattacks waged by foreign governments, criminal ransomware gangs attacking computer systems and the persistence of election misinformation that has led to harassment of election officials and undermined public confidence.
You don’t have to look far to find recent examples that are cause for concern. Just in the past few weeks, AI-generated robocalls surfaced in New Hampshire before the state’s presidential primary, and a cyberattack affecting the local government in Fulton County, Georgia, has created challenges for its election office.
The prospect of hostile governments abroad attacking election systems has been a particular concern this year for the agency. In an interview, Eric Goldstein, CISA’s executive assistant director for cybersecurity, described “a really difficult cybersecurity environment” that includes “extraordinary advances by nation-state adversaries China, Russia, Iran, North Korea.”
CISA was formed in the aftermath of the 2016 election, when Russia sought to interfere with a multipronged effort that included accessing and releasing campaign emails and scanning state voter registration systems for vulnerabilities. Election systems were designated as critical infrastructure, alongside the nation’s banks, dams and nuclear power plants, opening them up to receiving additional support from the federal government.
The program announced this week includes 10 new hires, including Senning, all of whom join the federal agency with extensive election experience. They will be based throughout the country and join other staff already in place who have been conducting cyber and physical security reviews for election offices that request them.
Here’s the cautionary part of the AI story unfolding around us.
Microsoft confirmed this week that U.S. adversaries — chiefly Iran and North Korea and to a lesser extent Russia and China — are beginning to use generative artificial intelligence to mount or organize offensive cyber operations.
According to The Associated Press, Microsoft said it detected and disrupted, in collaboration with business partner OpenAI, threats that used or attempted to exploit AI technology they had developed.
In a blog post, the Redmond, Washington, company said the techniques were “early-stage” and neither “particularly novel or unique,” but that it was important to expose them publicly as U.S. rivals leverage large language models to expand their ability to breach networks and conduct influence operations.
Microsoft has invested billions of dollars in OpenAI, and the announcement coincided with its release of a report noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That poses a threat to democracy in a year when more than 50 countries will conduct elections, magnifying disinformation that is already occurring, the AP reported.
Last April, the director of CISA, Jen Easterly, told Congress that “there are two epoch-defining threats and challenges. One is China, and the other is artificial intelligence.”
It is also worth mentioning that the CEO of OpenAI said this week that the dangers that keep him awake at night regarding artificial intelligence are the “very subtle societal misalignments” that could cause the systems to wreak havoc.
Sam Altman told the World Governments Summit in Dubai that AI needs oversight. “There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”
It comes down to trust. AI provides tools that can make anyone doubt the information they are seeing, hearing or sharing. As thrilling as it is to see the potential of AI becoming a reality, we are glad CISA is creating another layer of protection. Now we just need to create a board of overseers for AI … before AI does?