Regulation in the Age of Online Hate – The French Government’s Defeat and Its Implications
This week the French government suffered a significant setback to its agenda after the Constitutional Council struck down key elements of a law designed to combat hate speech online. The legislation, passed only earlier this month, would have made tech giants more accountable for failing to remove illicit content within a set 24-hour window, with fines of up to €1.25 million (over £1.13 million) for non-compliance.
France’s highest constitutional authority ruled that the proposed time frame was disproportionate to the penalties at stake. The decisive argument was that online platforms would not be given enough time to consult legal experts and establish whether flagged content was in fact unlawful before being forced to remove it. Many digital rights lawyers support the judgement, as the law had provoked debate over undesired state control and potential censorship. A member of the opposition Les Républicains party voiced his concerns, stating that the ‘law is an attack on freedom of expression’, while the European Commission urged France to halt its efforts. The Commission’s Work Programme for 2020 sets out its own Digital Services Act under the slogan ‘A Europe fit for the digital age’, and the body was concerned that the French law would contradict the forthcoming EU-wide rules.
France is not the first European country to attempt a law altering the conduct of Silicon Valley giants. Since 2018, firms operating in Germany have been responsible for deleting harmful comments within the same 24-hour time frame (extendable to seven days in cases where the content’s illegality is less clear-cut). A company that systematically fails to remove such content can face fines of up to €50 million (£45 million). The 2018 Network Enforcement Act has prompted Twitter to hire more German speakers to improve the way content is monitored and so avoid additional costs. Reports on the German law’s first year of operation present a mixed picture. With only 27.1% of YouTube’s flagged reports permanently deleted, and Twitter declaring an even lower rate of 10.8%, one could infer careful case-by-case consideration and an absence of ‘overblocking’. However, individual cases are difficult to compare, and the legislation continues to face backlash, with repeated apprehension over the censoring of harmless content.
In Britain, talks of similar proposals began under Theresa May’s government. Unlike the German and French legislation, which fixates primarily on online hate speech, the UK is determined to introduce a law covering a far greater scope of the industry. This broader approach was set out in the Online Harms White Paper, to which the government published its initial response in February 2020. Firms like Facebook have declared their support, with representatives stating that a “standardised approach” across the sector is necessary.
However, the UK can read the French defeat as a warning of the compromises that may have to be made to avoid similar setbacks.
Others believe that the tech industry has proved its self-regulation policies sufficient. The latest domestic example saw the permanent suspension of the controversial commentator Katie Hopkins from Twitter, after the firm found that the right-wing columnist had repeatedly violated its ‘hateful conduct policy’. One could therefore say the platform is determined to bring an end to comments attacking race, gender or religion. Nonetheless, until the removal on the 19th of June 2020, Hopkins posted to an audience of 1.1 million followers, and some argue this highlights Twitter’s lethargy in taking action. Facebook has faced similar criticism, as its AI technology designed to detect hateful commentary operates in only 40 languages (primarily ones used in developed nations, such as English or Mandarin). Consequently, minorities using less common languages are left at greater risk, as they must rely on hate speech being flagged by individual users.
The power of social media to influence critical events such as elections, and to enable the growth of criminal activity, continues to attract heavy criticism. Governments face mounting pressure to regulate after atrocities such as the 2019 Christchurch mosque shooting, which was live-streamed on Facebook. As nations grapple with the challenges of this rapidly growing sector, advocates warn that new laws must walk the fine line between blocking illegitimate content and being exploited to censor legitimate expression.
by Zuzanna Potocka