Emergence of Taylor Swift Deepfakes Increases Pressure for Legal Intervention
Introduction
An unfortunate consequence of the evolution of technology is the increased potential for misuse. As software becomes more sophisticated and more widely accessible, more individuals are able to exploit it for nefarious purposes.
Deepfake media is a perfect illustration of such misuse. Deepfakes are created using artificial intelligence (AI) to manipulate audio, images, or videos.
Women are disproportionately affected by such media – the State of Deepfakes report published in 2023 revealed that a staggering 98% of all deepfake videos online are pornographic, with 99% of those targeted being women.
Explicit deepfake images of Taylor Swift recently circulated on the social media platform X (formerly known as Twitter). One post gained 47 million views before being taken down. In the wake of this controversy, calls for legal action against deepfakes in the US have increased significantly.
Different Legislative Frameworks
The US has yet to implement federal laws preventing the creation and distribution of deepfake media. In contrast, following the introduction of the Online Safety Act in 2023, the sharing of deepfake pornography is now illegal in the UK.
The Demand for Action
Whilst deepfake videos of celebrities attract the most public attention, it is not only famous people who are affected. Anyone unfortunate enough to be the subject of a deepfake may suffer serious emotional, reputational and financial harm.
The Taylor Swift deepfakes have brought greater attention to the issue. Representatives from both ends of the American political spectrum have condemned the content depicting Swift. Democrat Yvette Clarke acknowledged that the creation of deepfakes is "easier and cheaper" than ever before due to advancements in AI. Republican Tom Kean Jr expressed his support for "safeguards to combat this alarming trend".
Pressure has also been applied to social media sites, with users demanding improvements to the monitoring of unsavoury content. The overwhelming majority of the complaints have been made against X – a site notorious for the distribution of distasteful media. Since Elon Musk's takeover of the company in 2022, its content moderation team has shrunk by approximately 80%.
To combat the spread of Taylor Swift deepfakes, X introduced blocks on searches such as "Taylor Swift" and "Taylor Swift AI". This workaround proved imperfect, as results could still be surfaced by simply rearranging the search terms. Given that it is unrealistic to catch every lewd post before it spreads across social media, a more robust solution is necessary.
Conclusion
Deepfake pornography is a significant issue that must be tackled as software becomes more accessible and advanced. The absence of federal laws regulating deepfake media in the US leaves victims with little protection. Furthermore, there is no deterrent for perpetrators, who are able to create harmful content without sanction.
Commenting on the Taylor Swift situation, Marcel Wendt of the Dutch identity company Digidentity argued that "high-profile cases like these should serve as a wake-up call to lawmakers". There is growing support for closing this legal lacuna. With increased attention on the debate following the Taylor Swift deepfakes, reform may be imminent.
By Alexander McLean