Can Deepfake Technology be Regulated? A Look Into a Sinister Aspect of Artificial Intelligence
In the above picture, Jim Carrey’s face has replaced Jack Nicholson’s in the classic horror movie The Shining. The realism of the edit shows viewers how far the technology behind it has progressed in the past few years, and highlights the inherent dangers that come with it. Not everything you see on the internet can be believed, and the deepfake industry only serves to prove this point.
What is a Deepfake?
At its core, deepfake technology is artificial intelligence that scans thousands of pictures of an individual’s face in order to superimpose their features onto another person in a separate video. The result is “video proof” of someone doing something they did not actually do. Previously, video manipulation was largely reserved for those with impressive technical expertise, but those barriers are being eroded with the advent of AI. The technology takes a large chunk of the work out of deciding which parts of a video to edit, making doctored videos easier and cheaper to produce. The face-swapping applications found on a phone’s App Store today are rudimentary versions of the tools used to produce deepfake videos.
The Depth of Fakery
The political implications of this technology are scary enough. Doctored videos of Labour leader Jeremy Corbyn punching a homeless man, or of Conservative leader Boris Johnson doing the same, demonstrate how deepfake technology could cause irreparable political damage. Even worse, less reputable political leaders could claim that genuine video evidence of unsavoury behaviour was falsified. Deeptrace’s report on such claims in Malaysia and Gabon shows how they further destabilise an already shaky political arena. In Malaysia’s case, it was determined that evidence of politicians acting inappropriately had been falsely declared fake in an attempt to subvert due judicial process.
Perhaps even more chilling, deepfake technology could also be used to produce “revenge porn” videos, irreversibly ruining the reputation of the subject of the video. Deeptrace, a cyber-security company, recently reported that 96% of deepfake videos are pornographic in nature, and it is very likely that the subjects of these videos did not consent to their image being used for such purposes. For as little as US$2.99 and some pictures of an individual, companies can produce explicit videos of them, indistinguishable from reality, within approximately two days. And unlike politicians or other high-profile victims of deepfake smear campaigns, these individuals are unlikely to have the resources to deal with such harassment. Furthermore, before the advent of the Internet, doctored media never had the opportunity to reach such a wide audience. Having a technologically savvy arbiter declare such videos fake would be too little, too late: a compromising video leaked onto the Internet, fake or not, has an immediate impact on one’s reputation in today’s hyper-connected society. Combined with the public’s strong tendency to believe what it sees on the Internet and to look for supporting evidence only afterwards, it is no surprise that misuse of deepfake technology gives governments cause for alarm.
Regtech and its Limitations
Companies are already looking into developing preventive technology. As reported by the Financial Times, Facebook and Microsoft are teaming up with AI researchers from leading institutions such as Oxford and Berkeley to develop technology that can detect manipulated videos. Google is taking similar steps by building its own detection tools. Yet experts are limited by the videos available to train their detection algorithms: one Canadian company specialising in AI failed to determine the veracity of random deepfake videos found on the internet 40% of the time. Ultimately, there is a risk that countermeasures simply cannot keep up with altered videos that are increasingly indistinguishable from reality.
Is it possible for governments to regulate such technology? The immediate political impact of deepfake technology alone should be impetus enough for governments worldwide to take action, and its social impact should not be understated either. According to a BBC article on the matter, the immediate impact of such videos would fall on vulnerable individuals rather than governments or corporations. In the United States, Republican Senator Ben Sasse has already introduced a bill criminalising the malicious creation and distribution of deepfakes, which is a step in the right direction. However, this is an unusual case in which the limits of current technology constrain what regulation can achieve, rather than the other way around. And if the development of tools that can detect doctored videos is outpaced by that of the videos themselves, society has a lot more to worry about than Jim Carrey’s face appearing where it shouldn’t.
Ronald Poh