El Dorado News-Times

Who do you trust?

— Barre-Montpelier Times Argus, February 14, 2024.

This is going to feel like a cautionary tale. That’s because it is.

The nation’s top cybersecurity agency has launched a program aimed at boosting election security in the states, shoring up support for local offices and hoping to provide reassurance to voters that this year’s presidential elections will be safe and accurate.

Vermont’s own former director of elections in the Secretary of State’s office, Will Senning, started there this week. Vermont has been a leader in the nation when it comes to safeguarding elections. It is work that started under former Secretary of State Jim Condos and has been carried over to Secretary of State Sarah Copeland Hanzas. Condos and now Copeland Hanzas seem assured Vermont is ready for the 2024 election cycle, which begins March 5 with the presidential primary.

Officials with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) are introducing the program to the National Association of State Election Directors and the National Association of Secretaries of State.

For state and local election officials, the list of security challenges keeps growing. Among them: potential cyberattacks waged by foreign governments, criminal ransomware gangs attacking computer systems and the persistence of election misinformation that has led to harassment of election officials and undermined public confidence.

You don’t have to look far to find recent examples that are cause for concern. Just in the past few weeks, AI-generated robocalls surfaced in New Hampshire before the state’s presidential primary, and a cyberattack affecting the local government in Fulton County, Georgia, has created challenges for its election office.

The prospect of hostile governments abroad attacking election systems has been a particular concern this year for the agency. In an interview, Eric Goldstein, CISA’s executive assistant director for cybersecurity, described “a really difficult cybersecurity environment” that includes “extraordinary advances by nation-state adversaries China, Russia, Iran, North Korea.”

CISA was formed in the aftermath of the 2016 election, when Russia sought to interfere through a multipronged effort that included accessing and releasing campaign emails and scanning state voter registration systems for vulnerabilities. Election systems were designated as critical infrastructure, alongside the nation’s banks, dams and nuclear power plants, opening them up to additional support from the federal government.

The program announced this week includes 10 new hires, including Senning, all of whom join the federal agency with extensive election experience. They will be based throughout the country and join other staff already in place who have been conducting cyber and physical security reviews for election offices that request them.

Here’s the cautionary part of the AI story unfolding around us.

Microsoft confirmed this week that U.S. adversaries — chiefly Iran and North Korea and, to a lesser extent, Russia and China — are beginning to use generative artificial intelligence to mount or organize offensive cyber operations.

According to The Associated Press, Microsoft said it detected and disrupted, in collaboration with business partner OpenAI, threats that used or attempted to exploit AI technology they had developed.

In a blog post, the Redmond, Washington, company said the techniques were “early-stage” and not “particularly novel or unique,” but that it was important to expose them publicly as U.S. rivals leverage large-language models to expand their ability to breach networks and conduct influence operations.

Microsoft has invested billions of dollars in OpenAI, and the announcement coincided with its release of a report noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That is a threat to democracy in a year when more than 50 countries will conduct elections, magnifying disinformation that is already occurring, the AP reported.

Last April, the director of CISA, Jen Easterly, told Congress that “there are two epoch-defining threats and challenges. One is China, and the other is artificial intelligen­ce.”

It is also worth mentioning that the CEO of OpenAI said this week that the dangers that keep him awake at night regarding artificial intelligence are the “very subtle societal misalignments” that could make the systems wreak havoc.

Sam Altman told the World Governments Summit in Dubai that AI needs oversight. “There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

It comes down to trust. AI puts into anyone’s hands the tools to cast doubt on the information we are seeing, hearing or sharing. As thrilling as it is to see the potential of AI becoming a reality, we are glad CISA is creating another layer of protection. Now we just need to create a board of overseers for AI … before AI does?
