The Rise of 'Deepfakes' and Misinformation

Introduction

Deepfake audio has recently made headlines after a recording supposedly featuring Sir Keir Starmer, the Labour Party leader, verbally abusing staff during the party conference circulated online. Experts have struggled to determine whether the audio is genuine. Among the signs suggesting the recording is in fact fake were the repetition of a particular phrase with identical intonation and minor glitches in the background noise.

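One of the tells mentioned above, a phrase repeated with identical intonation, can in principle be checked mechanically: genuine speech almost never repeats sample-for-sample, so near-duplicate stretches of a waveform are suspicious. The sketch below is illustrative only, using synthetic data, a fixed window size, and an assumed similarity threshold rather than any real forensic tool.

```python
import numpy as np

def find_repeats(signal, window, threshold=0.999):
    """Return pairs of window start offsets whose contents are
    near-identical (normalised correlation above threshold).
    Genuine speech rarely repeats sample-for-sample, so such
    pairs can flag spliced or machine-generated audio."""
    chunks = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window].astype(float)
        seg = seg - seg.mean()          # remove DC offset
        chunks.append((start, seg, np.linalg.norm(seg)))
    pairs = []
    for i in range(len(chunks)):
        s_i, seg_i, n_i = chunks[i]
        for j in range(i + 1, len(chunks)):
            s_j, seg_j, n_j = chunks[j]
            if n_i == 0 or n_j == 0:
                continue
            corr = float(seg_i @ seg_j) / (n_i * n_j)
            if corr >= threshold:
                pairs.append((s_i, s_j))
    return pairs

# Synthetic demo: random samples stand in for a voice recording,
# with one segment copied verbatim later on (the suspicious repeat).
rng = np.random.default_rng(42)
audio = rng.normal(size=8000)
audio[6000:7000] = audio[1000:2000]      # paste an exact repeat

print(find_repeats(audio, window=1000))  # → [(1000, 6000)]
```

Real detection systems are far more involved (they compare spectral features, not raw samples, and tolerate small timing shifts), but the underlying idea of flagging implausibly similar segments is the same.
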
What is a ‘Deepfake’?

The term deepfake refers to content (video, audio, or images) that has been digitally manipulated to reproduce a person's features or characteristics convincingly enough to pass for the real thing. This is achieved primarily through Artificial Intelligence (AI): deepfake software relies on deep learning algorithms trained on expansive datasets to produce fake media. As the algorithms become more sophisticated, the generated results become more convincing.

Deepfake technology is not a modern concept: the practice dates back to the 1990s. However, the most significant advances have been made in the last six years, and many industries have adopted the technology for a range of purposes. For example, AI software can scan millions of photos of people with similar features to create fake models for advertising. Another use is age-progression photos, which may help the police find missing people or track down criminals who have been on the run for prolonged periods.

Increased Accessibility to Deepfake Technology

As AI software improves, deepfake technology is becoming more widely accessible. The emergence of publicly available AI programs, such as DALL·E 2, has placed deepfake technology within reach of anyone online, resulting in a growth of fake content. Moreover, improvements in the technology make it significantly harder to differentiate between real and fake content.

Ultimately, the combination of greater reach and more persuasive output has produced an undesirable result: deception. Deepfakes can be used to circulate misinformation; in the past, this has included targeting topics such as vaccination. The problem is especially serious for older generations, who often lack awareness of how easily online content can be faked.

Another abuse of deepfake technology is the creation of non-consensual pornography. In 2017, deepfakes of celebrities in explicit videos began to appear online, and the practice has since become more frequent. A study conducted by Deeptrace Labs in 2019 concluded that 96% of deepfakes are sexualised images or videos of women.

Superimposing another person's face onto such videos without explicit consent is set to be criminalised in England and Wales through the Online Safety Bill. Once the law is introduced, the maximum sentence for revenge pornography and deepfakes shared with intent will be two years' imprisonment. If intent is not established, the maximum sentence will be a six-month jail term.

The Impact of Deepfakes in Politics

Deepfakes have also begun to create problems in politics. Given how difficult it is to discern whether a recording is genuine, the technology poses a serious threat to the democratic process in the UK, especially during election periods.

The threat posed by deepfakes has already prompted the Californian legislature to act. In 2019, California passed a law criminalising the publication and distribution of materially deceptive videos and audio that misrepresent a politician's image or actions, applying to any candidate within 60 days of an election. However, the law contained a sunset provision and expired on 1st January 2023.

Consequently, fake videos and audio recordings intended to sway people's beliefs may circulate online for hours, or even days, before they are debunked. This leaves the public more sceptical of the information they read or hear online, undermining political debate.

Following the audio recording purporting to feature Sir Keir Starmer, fake audio has been condemned across the political spectrum. Conservative MP Matt Warman described deepfakes as “a new low, supercharged by AI and social media”. Warman went on to warn that “democracy is under real threat” and that “technology to verify content is essential”.

Conclusion

Deepfake technology, like most developments powered by AI, has grown at an unprecedented rate, and the quality of such fictitious media will only improve as technological advances continue.

Deepfake technology is not inherently malicious. It has the potential to offer societal benefits, such as age-progression photos, but it has also been used for a range of harmful purposes. This creates a need for legislatures to regulate the use and distribution of deepfake content, not simply to mitigate the spread of misinformation but also to protect the reputations of those affected. The UK seems set to offer a strong example through the promises of the Online Safety Bill.

By Alexander McLean