The Online Safety Bill
The Online Safety Bill aims to make the UK the safest place in the world to be online by ensuring public safety and protecting children, whilst safeguarding freedom of expression and innovation. It applies to online platforms where users upload and share content, such as social media services (Pt. 3, Ch. 2) and search engines (Pt. 3, Ch. 3).
The Regime
Ensuring public safety and protecting children
The Bill aims to increase tech companies’ accountability for their users’ safety on their platforms by imposing numerous duties, including the duty to protect children’s online safety (s.11), duties to protect adults’ online safety (s.13) and, most controversially, the duty to remove legal but harmful content (ss. 53 & 54). These duties oblige companies to put measures in place to protect users from harmful and/or illegal content online.
Furthermore, the Bill grants Ofcom, the UK’s communications regulator, the power to scrutinise tech companies’ compliance with their obligations under the Bill by allowing it to enter a company’s premises, interview employees, and examine a company’s data and algorithms (Pt. 7; Sch. 11). If Ofcom finds a company to be in breach of its obligations, it can impose a fine of up to 10% of the company’s annual turnover and, in severe cases, block access to its service (Sch. 12). Moreover, to prevent senior employees from escaping liability, the Bill introduces new information-related offences, which criminalise, amongst other things, failing to attend an interview, providing false information at an interview, and failing to comply with, obstructing or delaying Ofcom in the exercise of its powers of entry, audit and inspection (ss. 80-95). These offences are punishable by a fine or up to two years’ imprisonment (s.96). Additionally, executives of these companies can be subject to up to two months’ imprisonment (s.96).
Freedom of expression and innovation
At the Bill’s initial stages, there were concerns that it would inhibit freedom of expression. For instance, MP David Davis worried that it would have a “chilling effect.” One illustration of this risk occurred last year, when YouTube forced TalkRadio offline without making clear which of its terms and conditions TalkRadio had violated. However, the Bill takes measures to safeguard freedom of expression. These include users’ right to appeal when their posts are taken down (s.14(c)-(e)) and an exemption for news content that safeguards press freedom, journalism and political debate. Furthermore, companies are under duties that safeguard users’ freedom of expression, including duties about freedom of expression and privacy (s.19(5)-(7)), duties to protect content of democratic importance (s.15) and duties to protect journalistic content (s.16).
Moreover, the Bill is careful not to hamper innovation, distinguishing between category 1 companies (the largest, highest-impact platforms) and category 2 companies (smaller platforms with a significant number of users). For instance, category 1 companies are under more stringent duties about fraudulent advertising than category 2 companies (s.34 vs s.35). The Bill’s obligations are thus proportionate to a platform’s size and reach. Additionally, Ofcom has a responsibility to ensure that its powers are exercised proportionately (s.77(4)(d)).
The main concern: the duty to protect users from legal but harmful content
Parliament will determine what constitutes “legal but harmful content” in secondary legislation that is yet to be enacted. MP Lucy Powell opposes this, arguing that it constitutes “government overreach.” However, such a determination provides legal certainty and safeguards freedom of expression by preventing companies from over-censoring their platforms. This is an improvement on the initial draft Bill, which left the definition of “legal but harmful content” to the companies themselves, risking legal uncertainty and over-censorship.
Furthermore, Higson-Bliss argues that the government’s definition of harmful content as content that would cause “physical or psychological harm” (s.187(2)) is too broad, since it may encompass content promoting drinking and gambling and lead to such content being banned. For instance, the government considers content promoting self-harm to be “legal but harmful.” This may lead platforms to ban not only content that actively encourages users to self-harm, but also content that supports people who self-harm. Julie Bentley, chief executive of Samaritans, consequently argues that this may inhibit safe spaces for those struggling with self-harm. Hence, Higson-Bliss argues that the government needs to rethink what is considered “harmful” or give the definition of “harm” a more precise meaning.
The future
The Bill is currently being amended by Parliament and has yet to be approved by both the House of Commons and the House of Lords. It will then receive Royal Assent and pass into law, likely by the end of the year, with companies given a further year to put measures in place to comply with the new law.
By: Maiya Dario