Why Cambridge Analytica Isn’t Really Dead


Facebook’s refusal to police political content ahead of the 2020 American presidential election is worrying. Mark Zuckerberg and Sheryl Sandberg, Facebook’s founder and chief operating officer respectively, are both defying congressional pressure to fact-check political advertisements. The problem, then, is how voters and regulators can safely navigate the complex landscape of election campaigns in the social media era. Do large social media companies bear any responsibility for regulating what appears on their platforms? And why do such companies remain accountable to no one but themselves?

A brief run-up to 2016

Three years ago, Financial Times editor Gillian Tett paid a visit to then-presidential candidate Hillary Clinton’s campaign office in Brooklyn. The assumption at the time was that the Democratic candidate’s campaign was miles ahead, digitally speaking, of its Republican counterpart. That assumption was flawed. In the years since, news broke that Cambridge Analytica, a UK-based political consulting firm, had harvested personal data belonging to millions of Facebook users without their consent. That data was then used to create advertisements targeting the very users whose data had been compromised. This breach of privacy and subsequent misappropriation of data likely contributed to the election of Republican candidate Donald Trump. With the 2020 election looming, Tett believes the Democrats have a decision to make: whether to beat the Republicans at their own game or to stay the course.

A global issue

This is not to say that Cambridge Analytica’s actions affected the US election alone; in the run-up to the company’s apparent shutdown, it was revealed that it had been involved in some 60 other countries, with campaigns featuring deliberate misinformation and voter suppression, amongst other shady tactics. For example, the firm was reportedly hired by the Leave campaign in the 2016 Brexit referendum to sway public opinion on Great Britain’s continued membership of the European Union. Nor does this give Facebook a free pass. With over 2 billion monthly active users, Facebook stands at the top of the social media industry, and that standing affords it the opportunity to effect wide-scale change in how misinformation is disseminated. Its continued refusal to do so sets it apart from other social media companies. Ahead of the elections, Twitter announced that it was banning all political advertisements on its platform in the name of “civic discourse”. Google followed suit shortly after, restricting political adverts across its platforms. It remains to be seen whether Facebook is willing to do anything about the blatant misinformation campaigns waged on its platform.

Public responsibility? Or private company? What is Emerdata?

That being said, do private companies like Facebook have a duty to police public discourse? Unless legislation is put in place that directly combats misinformation campaigns, any such duty is unlikely to be enforced. Such legislation is also risky to implement in the first place: the question of who determines what is fake and what is real remains open, and the answer is ripe for abuse by governments with less-than-savoury attitudes towards their critics. Furthermore, companies like Cambridge Analytica are becoming ever easier to set up. Soon after the company’s shutdown, a new firm calling itself Emerdata emerged, run by many of the same executives. It seems evident that Cambridge Analytica has merely undergone a rebranding. With the increased connectivity that the Internet affords, misinformation grows ever easier to spread, by private and government actors alike. Zuckerberg’s seemingly neutral stance on policing the truth of political adverts on his platform will thus keep him and his company safe, for now.


by Ronald Poh