CYBERSECURITY:
Deepfake is a danger to democracy, writes Andile Ngcaba
The Boston University College of Communication traces the history of human communication from oral form to simple symbols and graphics, to the age of literacy with handwritten symbols, the alphabet and the printing press. The Gutenberg press resulted in massive disruptions that made information available to more people, decentralising and democratising knowledge and information.
The electronic communications era stretches from the telegraph, radio and television to the internet. Today, people spend more time on Twitter, WhatsApp, Facebook, YouTube or WeChat than on traditional media. According to US peace activist Jessica Mathews, people write 500-million tweets, send 65-billion WhatsApp messages, and post four petabytes of material on Facebook every day. Social media allows people to create, share, exchange and comment on content in virtual networks.
The widespread use of digital platforms and the internet has created opportunities for entrepreneurs and digital media. However, it has also given rise to misinformed amateur opinion crowding out reliable, quality information.
Deepfake (from deep learning and fake) is raising serious concerns globally. With the rapid development of artificial intelligence (AI) technology, experts predict that deepfakes may soon be indistinguishable from original videos, eroding trust in public institutions.
New institutions for fact-checking deepfakes are required. To protect ethics, data scientist Cathy O’Neil argues that algorithms must be auditable.
The emergence of deepfakes in the period around the 2016 US presidential election completely changed the public's perception of how technology can spread misinformation globally. Marco Rubio, a Republican senator, even went so far as to call deepfakes the modern equivalent of nuclear weapons. This demonstrates how dangerous the technology is to citizens globally.
Deepfake makes use of generative adversarial networks (GANs) to create doctored videos as a way of spreading misinformation. A GAN pits two machine learning (ML) models against each other: a generator, trained on a data set, produces video "forgeries", while a discriminator attempts to detect them. The two models train against each other in this way until the discriminator can no longer tell a forged video from a real one. Creating a realistic video requires large amounts of data.
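The adversarial loop described above can be sketched in miniature. This is not a deepfake system; it is a toy GAN in plain NumPy where the "videos" are just 1-D numbers, the generator is a linear map and the discriminator a logistic classifier. All names, values and model shapes are illustrative assumptions, but the structure — forger and detector improving against each other until the detector is fooled — is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # The "forger": turns random noise z into fake samples.
    return w[0] * z + w[1]

def discriminator(x, v):
    # The "detector": logistic probability that sample x is real.
    return 1.0 / (1.0 + np.exp(-(v[0] * x + v[1])))

w = np.array([0.1, 0.0])   # generator parameters
v = np.array([0.1, 0.0])   # discriminator parameters
lr = 0.05                  # learning rate for both players

for step in range(2000):
    real = rng.normal(4.0, 1.25, size=64)   # "authentic" data
    z = rng.normal(size=64)
    fake = generator(z, w)

    # Detector update: gradient ascent on log p(real) + log(1 - p(fake)),
    # i.e. push real samples toward "real" and fakes toward "forged".
    p_real, p_fake = discriminator(real, v), discriminator(fake, v)
    grad_v0 = np.mean((1 - p_real) * real) - np.mean(p_fake * fake)
    grad_v1 = np.mean(1 - p_real) - np.mean(p_fake)
    v = v + lr * np.array([grad_v0, grad_v1])

    # Forger update: gradient ascent on log p(fake), i.e. nudge the
    # fakes so the detector is more likely to call them real.
    p_fake = discriminator(fake, v)
    dg = (1 - p_fake) * v[0]   # d log p(fake) / d fake
    w = w + lr * np.array([np.mean(dg * z), np.mean(dg)])

# After training, the generator's output has drifted toward the real
# data distribution, and the detector struggles to separate the two.
fakes = generator(rng.normal(size=1000), w)
```

The key design point is that neither model is trained in isolation: each update uses the other model's current output, which is why GAN forgeries improve in lockstep with detection — and why purely technical detectors tend to lag behind.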
The World Economic Forum recently released an article analysing how deepfakes could be used to influence elections. One example it gave was how in Gabon a deepfake video was circulated claiming the president was not healthy and could no longer hold office. This demonstrates the risk deepfake poses to democracy globally and how we need to be more aware of this new reality. Deepfake does not only run the risk of spreading misinformation, it could also erode trust in public institutions.
During cybersecurity awareness month, one cannot view the rise of deepfake in isolation from the broader dangers of the internet. Cybersecurity threats continue to wreak havoc around the world. In 2019 alone, 8.5-billion records were breached, giving attackers access to more stolen credentials. This demonstrates how important it is to secure credentials and access controls.
IBM X-Force data picked up a 2,000% increase in incidents targeting operational technology in 2019. Large organisations and nation states are struggling to ensure their critical infrastructure is protected.
Threat intelligence platforms allow organisations to further understand data security and collaborate on analysis with other organisations. Advanced persistent threats also continue to attack intellectual property, conduct total site takeovers, sabotage critical infrastructure and steal sensitive information.
The recently reported hack of the office of the chief justice is an indication that we in SA are not immune to these threats. It is imperative that we limit the threat vectors that create vulnerabilities in both public and private organisations. Advanced persistent threats often target smaller organisations in the value chain of large ones to gain access to their networks.
The dark web also continues to be an issue. After the Experian data breach, a number of South Africans' personal information ended up on the dark web. Even if you have never entered the dark web yourself, it is highly likely that your data has ended up there through one of the many data breaches.
Transparency reporting by social media companies is one of the steps being taken to protect human rights online. With social media becoming the centre point of information, it is critical that the digital rights of all are protected, in the same way that the press ombud in SA provides impartial adjudication to settle disputes between the media and members of the public.
As we mark cybersecurity awareness month, it is clear that as the technology, content and medium of communicating news all change, the institutions that hold everyone accountable will also need to adapt. Like the Gutenberg press before them, new technologies such as AI, ML and deep learning will transform digital media life as we know it.
Unesco has pointed out that social media platforms have become fertile ground for computational propaganda in the form of “trolling” and “troll armies”. The Computational Propaganda Research Project at the Oxford Internet Institute researches the widespread technology-enabled weaponisation of information against persons, organisations and governments.
In his book Lie Machines, Philip Howard paints a global picture of how machines (bots) interplay with politicians, scammers, authoritarian governments and more to control the information narrative.
Therefore, as the relationship shifts from human and machine to machine and machine, our interpretation of rights in the digital world needs to be redefined. Similarly, the institutions and technologies guarding those rights need to be redefined in light of this paradigm shift.