Defeating Disinformation: Microsoft’s Deepfake Detector

Getty Images, by Rick_Jo

Microsoft has launched its own video analysis tool, Video Authenticator, designed to identify when media has been artificially altered. The tool can pinpoint telltale signs of a manipulated image or video that may not be detectable by the human eye. This is a response to the growing threat that synthetic media, commonly referred to as deepfakes, poses to cybersecurity.

Deepfakes are created using AI software, which is fed still images of one person and video footage of another to generate artificial images and videos. Over the years this process has been simplified and consequently made more accessible, with some apps requiring only a single photo to achieve an unnervingly real result. The danger of this technology is that it can create content which misleads viewers into thinking a person, more often than not a prominent figure, said something they did not. Disinformation is widespread and comes in countless forms; though deepfakes are uncommon for now, the potential of this technology to spread disinformation is unknown.

“The only really widespread use we’ve seen so far is in non-consensual pornography against women. But synthetic media is expected to become ubiquitous in about three to five years, so we need to develop these tools going forward,” explains Nina Schick, author of Deep Fakes and the Infocalypse.

The tool, developed by Microsoft's R&D division, gives a confidence score on the likelihood that a clip is a deepfake by spotting giveaway signs that an image or video has been manipulated, from subtle fading or greyscale pixels to the blending boundary around the subject. When analysing videos, the tool assesses each frame and provides a score in real time as the video plays. It has been tested successfully against Facebook's face-swap database, which holds over 100,000 deepfake clips.
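To make the per-frame scoring idea concrete: Video Authenticator's model and API are not public, so the sketch below is purely illustrative. It mimics the reported behaviour (a confidence score emitted for each frame as a video plays) using a hypothetical stand-in heuristic, `frame_score`, that flags unusually low local contrast as a crude proxy for the fading and blending artefacts described above.

```python
from typing import Iterable, Iterator, List

def frame_score(frame: List[List[int]]) -> float:
    """Hypothetical stand-in heuristic, NOT Microsoft's method: treat
    unusually low local contrast as a sign of fading/blending artefacts."""
    diffs = [
        abs(row[x] - row[x + 1])
        for row in frame
        for x in range(len(row) - 1)
    ]
    mean_diff = sum(diffs) / len(diffs) if diffs else 0.0
    # Map low contrast to a high "manipulated" confidence in [0, 1].
    return max(0.0, min(1.0, 1.0 - mean_diff / 64.0))

def score_video(frames: Iterable[List[List[int]]]) -> Iterator[float]:
    """Yield one confidence score per frame, the way a real-time
    analyser would report as the video plays."""
    for frame in frames:
        yield frame_score(frame)

# A flat (low-contrast) frame scores as more suspicious than a noisy one.
flat = [[128, 128, 128], [128, 128, 128]]
noisy = [[0, 255, 0], [255, 0, 255]]
scores = list(score_video(noisy_and_flat := [flat, noisy]))
```

The streaming generator pattern is the point here: scoring frame by frame is what lets a tool report in real time rather than waiting for the whole clip.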

With the US presidential election nearing, Microsoft hopes that Video Authenticator can help campaigns and journalists identify synthetic media intended to mislead the public, helping users spot deepfakes and combating the spread of misleading statements.

However, Video Authenticator is only one part of Microsoft’s efforts to reinforce the authenticity of online content. “Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate,” states Microsoft. The company has also outlined a two-part process: an internet tool that certifies authentic media, and a reader that checks content for third-party changes.

Media manipulation has proven effective in spreading misleading or incorrect information on the internet. Though deepfake technology is not yet in widespread use and so does not pose an imminent threat, Microsoft has made a valuable contribution to the fight against online disinformation, one that should only become more reliable and accurate over time.


By: Nicole Woo