The Pak Banker

AI’s Double-Edged Sword: The dangers and implications of deepfakes

- Student of computer science at COMSATS University Islamabad, Lahore Campus.

Imagine a world of uncertainty where no one believes anyone and distinguishing the real from the fake is impossible. A doctored video clip has enough power to sway public opinion, manipulate young minds, target politicians and famous personalities, endanger democracy, and thus cause societal chaos on a huge scale.

The challenges brought by technological advancement have become a curse for humanity and a blessing for bad actors. Gaps left untackled by the judicial system permit anyone to misuse anyone else's digital property. This imagined world of chaos is not far from our reality, where the existing problem of deepfakes is only the tip of the iceberg.

With advancements in artificial intelligence, the potential harm from fabricated videos is higher than ever. Such fake videos are created using deepfake technology. You may have come across fabricated content that seemed real yet was hard to believe; if so, you may have been exposed to a deepfake. A deepfake is a video, audio clip, or picture manipulated using deep learning, a branch of AI. The word "deepfake" was first used in 2017 by a Reddit user who superimposed celebrities' faces onto pornographic content using deep learning.

As computing power has grown over the years, machine learning algorithms have become increasingly sophisticated, raising the quality of deepfakes. In particular, generative adversarial networks (GANs), in which one neural network generates fake samples while another tries to detect them, have fueled the evolution of deepfakes.
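The adversarial idea behind GANs can be illustrated with a deliberately tiny sketch. The toy below, which assumes only numpy, learns a one-dimensional "real" distribution rather than images, and uses hand-derived gradients; every hyperparameter and variable name here is an illustrative assumption, not any real deepfake pipeline.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only, not a real deepfake system):
# a generator learns to mimic real data while a discriminator learns
# to tell real from fake, and each improves against the other.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 4.0          # "real" data: samples from N(4, 1)
b = 0.0                  # generator: fake = z + b, with z ~ N(0, 1)
w, c = 0.1, 0.0          # discriminator: D(x) = sigmoid(w*x + c)
lr, batch = 0.01, 64

for step in range(3000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) -- shift b to fool D.
    d_fake = sigmoid(w * fake + c)
    b += lr * np.mean((1 - d_fake) * w)

samples = rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

After training, the generator's output distribution drifts toward the real one, which is the same pressure that, at vastly larger scale, makes deepfake faces hard to distinguish from real footage.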

Like any technology, deepfakes can be used for both good and evil purposes. On the beneficial side, deepfake technology has use cases in industries such as healthcare and entertainment. During the coronavirus pandemic, it was difficult to diagnose the diseases arising from coronavirus infection because of the shortage of X-ray, CT, and MRI images and of the resources needed to diagnose patients. Here deepfake technology found a use: computer scientists first produced synthetic deepfake images with the help of artificial intelligence and used them to train AI models. The trained models could then compare those images with a patient's scans to help diagnose whether he had the disease.

Moreover, training AI models on people's data can create privacy concerns and accuracy problems.

To tackle these challenges, realistic synthetic data is produced using deepfake technology. In 2019, Canny AI, an Israeli startup, created a doctored video of Facebook's CEO Mark Zuckerberg saying, "Imagine a man controlling billions of people's data and thus their lives and future." Shockingly, the video was indistinguishable from a real one. It was made by applying deepfake technology to 2017 footage of Zuckerberg, and its purpose was to raise awareness of the harm deepfakes can cause in society.

Deepfakes can have a severe impact on the public. Through such content, bad actors can spread misinformation to fulfill their ambitions, whether illegal financial gain, generating more clicks, or igniting social unrest by misleading the masses. One such incident occurred recently when a manipulated video featuring Elon Musk was spread on social media for someone's monetary interests.

In the video, Musk appeared to promote a new cryptocurrency; many people, particularly in Europe, invested heavily, causing a major swing in the crypto price. In countries like Pakistan, on the other hand, where more than half the population is illiterate and only a small share of the literate have technical knowledge, the odds of havoc are even higher. Politically doctored deepfakes also pose an ominous danger to today's democratic landscape, as malicious actors can use them to sway public opinion about a specific politician. For instance, a fake video of US politician Nancy Pelosi went viral on social media, in which she appeared to be drunk.

Also, earlier this year, nearly 25,000 robocalls were made to residents of New Hampshire. A fake voice of Joe Biden told them not to vote in the primary election but to save their vote for the general election. Just as such videos can tarnish a politician's reputation, a politician can also dismiss an authentic video of his own illegal act as a deepfake. In this way, even the greatest democracies can be targeted by deepfake technology.

Recent innovations in artificial intelligence have added fuel to the fire. Producing a close-to-real deepfake usually took a few days, until OpenAI's text-to-video model, Sora, was launched.

Sora is a highly capable tool that can create realistic videos which are almost indistinguishable from real footage. Another problem is the open availability of many such AI tools: anyone, anywhere in the world, can generate content with them. The technology is becoming more sophisticated with time.

So the present chaos caused by deepfakes may be just the beginning.

Identifying deepfakes is a daunting task for those without technological understanding.

Nonetheless, various methods exist for identifying these deceptive digital creations. Foremost, developing a zero-trust mindset is important: never trust content without verifying it. Fake videos often show telltale signs, including inconsistencies in skin texture and body parts, poor synchronization between lip movement and voice, abnormal blinking patterns, and unusual facial expressions. As generative AI models grow more sophisticated, however, discerning deepfakes by eye is becoming increasingly difficult, so technological detection systems are also necessary.

Governments and big tech companies can both play their roles in preventing the proliferation of deceptive content. Governments can implement legislation against the dissemination of malicious content on social media, and through social awareness and public education the harms of deepfakes can be greatly reduced. Meanwhile, tech companies can develop robust machine learning algorithms for the detection and elimination of such content on their platforms.

Additionally, investing in research and development of advanced detection algorithms and forensic tools can augment the capacity to identify and mitigate the impact of deepfakes.

Muhammad Saad
