The Korea Times

How Sora could affect politics

By Chyung Eun-ju and Joel Cho

OpenAI’s recent introduction of Sora, a generative AI tool, marks a significant leap into an era where detailed, minute-long videos can materialize from basic text prompts.

Sam Altman demonstrated Sora’s ability to generate videos from text prompts on X, formerly known as Twitter. This cutting-edge technology, however, has raised eyebrows in a world already grappling with manipulated media.

We already live in a world awash in manipulated media, and with advancements like Sora, the potential for manipulation has only grown. For now, Sora is accessible exclusively to experts tasked with identifying potential issues within the model.

The sheer quality of this tool is truly remarkable, pushing the boundaries of what we thought was achievable.

But in a landscape where fake media often gains viral traction, the potential for influence is undeniable.

In Slovakia, there was a designated quiet period during which media coverage of the election ceased, intended to allow people to think independently and make informed decisions without undue influence.

But an audio snippet that appeared to capture a progressive party figure discussing buying votes from the Roma minority emerged and spread rapidly on social media, and the party’s candidate ended up losing the election. Elections are influenced by many factors, and it is difficult to measure how much impact the audio had on the candidate’s loss, but the potential implications of generative AI for elections are deeply concerning. In the face of these advancements, ethical and societal concerns take center stage.

Elections, an integral part of the democratic process, become susceptible to manipulation facilitated by generative AI tools such as DALL·E and Sora. OpenAI has acknowledged this potential for misuse, as evidenced by its rules explicitly prohibiting the creation of custom GPTs for political campaigning or lobbying.

The ban also extends to the development of chatbots posing as real individuals, a preventive measure against the dissemination of deceptive information.

However, the effectiveness of these rules depends on strict enforcement and oversight. Given the potency of generative AI, especially in creating lifelike videos of the kind Sora has shown it can produce, there is an urgent need for comprehensive regulations to safeguard the integrity of democratic processes, particularly with the Korean general elections approaching.

The upcoming parliamentary elections are even more unpredictable than previous ones given the current political disarray. Voters have lost faith in President Yoon Suk Yeol and the opposition Democratic Party of Korea (DPK) for their respective shortcomings. Yoon and his party are facing backlash for ineffective and authoritarian governance, while the DPK is being criticized for its alleged abuse of power.

A couple of days ago, a deepfake video titled “President Yoon’s virtual confession of conscience” circulated on social media. In the video, Yoon discusses his incompetence and how he has ruined Korea. In response, the Korea Communications Standards Commission convened an emergency communication review subcommittee to address the issue, as such a deepfake could cause social unrest.

Starting with this election, the use of deepfake technology for election campaigns has been prohibited, and the National Election Commission has cracked down on over 100 cases.

The use of deepfakes for election campaigns is prohibited during the 90 days leading up to election day.

Violations can result in imprisonment for up to seven years or a fine ranging from 10 million won ($7,500) to 50 million won. Even if the subject of a deepfake has consented, its use will be punished, regardless of whether its content is factual. This amendment was a response to a fabricated video last year depicting President Yoon appearing to endorse then-Namhae County mayoral candidate Park Young-il.

Naver, Kakao and other platforms intend to attach “labels” to AI-generated content to prevent harm from AI-driven false information ahead of the elections. Kakao plans to introduce invisible watermarking technology in its AI generation model, Karlo. The watermark will not be visible to ordinary users, but it will make it possible to determine which content was generated by Karlo.
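Kakao has not published the details of its watermarking scheme, but the general idea behind an invisible watermark can be sketched with a toy example: a fixed bit pattern is hidden in the least significant bits of an image’s pixels, imperceptible to viewers but detectable by software that knows the pattern. Everything below, including the bit pattern and function names, is a hypothetical illustration, not Kakao’s actual method.

```python
import numpy as np

# Hypothetical signature bits identifying AI-generated content.
WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide `bits` in the least significant bits of the first pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)
    # Clear each target pixel's lowest bit, then write the watermark bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return marked

def detect_watermark(image: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the expected bit pattern is present in the LSBs."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: bits.size] & 1, bits))

# Usage: mark a dummy 4x4 grayscale "image" and verify detection.
img = np.full((4, 4), 128, dtype=np.uint8)
marked = embed_watermark(img, WATERMARK_BITS)
print(detect_watermark(marked, WATERMARK_BITS))  # True
print(detect_watermark(img, WATERMARK_BITS))     # False: unmarked image
```

Changing a pixel value by at most one intensity level is invisible to the eye, which is why such marks are called invisible; production schemes are far more robust to cropping and re-encoding than this sketch.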

However, how far people will go with generative AI to swing elections is unknown. Despite regulations, some platforms and messaging services may be difficult to police. For instance, Google and Meta have policies on AI-generated content, but X enforces comparatively little. Telegram, which has a loose content moderation policy, may be heavily used during the election, and people could share deepfakes through the messaging service. We will have to see how much elections will be influenced by synthetic media.

Beyond the realm of politics, the implications of such technology extend to the individual level as well, with potential threats ranging from misinformation to harassment. Sora’s capability to generate realistic videos from textual input adds a new dimension to the existing, and already extremely problematic, challenges posed by manipulated media. Concerns arise over the potential use of generative AI for malicious purposes.

As most of us already know, deepfakes have been a tool for harassing individuals, and with more advanced AI tools becoming broadly available, this form of personal violation has proliferated on the internet. So although OpenAI’s commitment to preventing misuse is commendable, constant safeguarding and adaptive regulatory frameworks are needed to address these inherent risks and emerging threats.

During a congressional hearing in May 2023 at which OpenAI CEO Sam Altman testified, Senator Richard Blumenthal showcased the potential risks of AI systems with a demonstration of his own AI-generated voice, prompting skepticism among U.S. politicians about the ability of tech companies to control their powerful AI systems.

Chyung Eun-ju (ejchyung@snu.ac.kr) is a marketing analyst at Career Step. She received a bachelor’s degree in business and a master’s in marketing from Seoul National University. Joel Cho (joelywcho@gmail.com) is a practicing lawyer specializing in IP and digital law.

