Jerry Richardson | 15 December 2021

Free speech is under threat

Opinion | 7 min read

Draft Online Safety Bill: The Latest

The Joint Committee’s report on the Draft Online Safety Bill, published yesterday, makes significant recommendations, including a number of new offences designed to protect children and vulnerable people online. It also calls for the removal of the “Duty of Care” from the bill, a provision that would require social networking companies to take down broadly defined ‘harmful’ content and could lead to excessive and unwarranted censorship.

The Committee’s recommendations are in line with criticism from charities and campaigners concerned that the bill is not fit for purpose and does not go far enough to achieve one of its core objectives: protecting children from harmful and inappropriate content. But the disproportionate state surveillance measures introduced to combat terrorism and child sexual abuse material (CSAM), measures that ultimately put our online safety at risk, go largely unacknowledged.

Whilst the Committee has sought to address concerns regarding children’s online safety, platform liability and broader cyber-crime, it has not recognised the threat that this bill poses to free speech, its potential to censor Black voices or its potential to restrain investigative journalism.

The government now has two months to respond to the recommendations laid out by the Committee before the bill is put to Parliament for approval in early 2022.

Restraining Potential

On 9 November, the Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation discussed protections for journalism within the Online Harms Bill. It focused on the potential for the “overzealous takedown of content”, but made no mention of Part 4, Chapter 4, Clauses 63-69 of the bill, which would weaken the encryption of private messages so that bulk scanning software, powered by AI, can scan them for terrorism content or child sexual exploitation and abuse content.
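
The bill does not prescribe a particular scanning technique, but the approach most often discussed for encrypted services is client-side scanning: a message is checked on the user’s device against a database of fingerprints (hashes) of known illegal material before encryption is applied. The sketch below is purely illustrative; the fingerprint list, function names and matching logic are hypothetical and are not drawn from the bill or from any real implementation.

```python
import hashlib

# Hypothetical fingerprint database. Real proposals use perceptual hashes of
# known CSAM or terrorism content supplied by a third party (for example a
# shared industry database); a SHA-256 of an example string is used here only
# so the sketch runs end to end.
BLOCKED_FINGERPRINTS = {
    hashlib.sha256(b"example of known prohibited content").hexdigest(),
}


def fingerprint(payload: bytes) -> str:
    """Toy stand-in for a perceptual/content hash."""
    return hashlib.sha256(payload).hexdigest()


def scan_before_encrypting(message: bytes) -> str:
    """Decide what happens to a message before it is end-to-end encrypted.

    The scan sees the plaintext, which is the crux of the criticism: the
    encryption applied afterwards no longer protects the message from the
    scanning layer, or from whoever controls the fingerprint list.
    """
    if fingerprint(message) in BLOCKED_FINGERPRINTS:
        return "flagged for review"  # in practice this could mean reporting, not just blocking
    return "encrypted and sent"


if __name__ == "__main__":
    print(scan_before_encrypting(b"example of known prohibited content"))  # flagged for review
    print(scan_before_encrypting(b"an ordinary private message"))          # encrypted and sent
```

Nothing in such a scheme constrains what goes into the fingerprint list; that decision sits entirely with whoever curates the database, which is why the question of external oversight matters so much in what follows.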

Encryption is an essential tool for the protection of millions of people across the world. Journalists, activists, whistleblowers and truth-tellers of all kinds rely on this technology to exercise their right to free speech. Source protection is fundamental to press freedom, and the lack of adequate safeguards may deter whistleblowers from coming forward with information in the public interest. Given the government’s history of using legislation to conceal state wrongdoing, spy on journalists and punish those who blow the whistle, it is not unreasonable to have concerns that new legislation could be subject to the same abuse.

Though the application of Clauses 63-69 is restricted to ‘terrorism content and child sexual exploitation and abuse content’, there is nothing within this bill to prevent future governments from extending the use of bulk scanning software to a longer list of offences. It also sets a dangerous precedent for overseas regimes, which may be inspired to use similar software to further stifle dissenters.

This was one of the concerns raised earlier this year when Apple proposed using its newly developed photo-scanning technology to scan Apple devices for child sexual abuse material (CSAM). The New York Times reported that “More than a dozen prominent cybersecurity experts on Thursday criticized plans by Apple to monitor people’s phones for illicit material, calling the efforts ineffective and dangerous strategies that would embolden government surveillance.”

Apple ultimately decided to delay implementing its technology due to the widespread backlash: 90 global privacy groups wrote to Apple urging it not to pursue the CSAM surveillance measures, and an open letter signed by roughly 9,000 security experts described Apple’s technology as “well-intentioned” but claimed that it created a “backdoor for abuse”. The letter quotes the Electronic Frontier Foundation as saying:

“One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society.”

The Snowden revelations confirmed in 2013 that GCHQ regards investigative journalists, particularly those whose work relates to UK intelligence agencies or the armed forces, as posing the same level of threat as terrorists; last year, Extinction Rebellion were labelled ‘extremists’ by counter-terrorism police. Against that record, the lack of external oversight in the application of the new counter-terror surveillance measures proposed in this bill is incredibly worrying.

There is also ample evidence that artificial intelligence technologies are themselves often flawed. Multiple researchers have determined that artificial intelligence is often innately racist and sexist; the content-scanning technologies that would be implemented on social networking sites to remove content deemed ‘harmful’ are more likely to flag content from Black social media users.

In a post published to Facebook, Index on Censorship states: “There is a gap in the Committee report on the safeguards needed to prevent algorithms removing legal content unintentionally. Tweets by platform users who are Black are up to two times more likely to be labelled as offensive, than tweets by others, while research has shown posts written in language primarily associated with the Black community are disproportionately flagged as “rude” or “toxic”.”
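
To see how that kind of disparity turns into takedowns under a duty to remove ‘harmful’ content, consider a toy moderation pipeline, sketched below: a classifier assigns each post a toxicity score, and anything above a fixed threshold is removed automatically. The dialect labels and scores are invented purely for illustration; the point is that if a classifier systematically over-scores one group’s ordinary speech, a single global threshold converts that bias directly into disproportionately higher removal rates for that group.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    dialect: str      # hypothetical group label, used only to illustrate the disparity
    toxicity: float   # score a classifier might assign; these values are invented


# Invented scores: the imagined classifier systematically over-scores ordinary
# posts written in a dialect it handles poorly, mirroring the research cited
# above. None of these numbers come from a real model or dataset.
POSTS = [
    Post("ordinary post", "A", 0.20),
    Post("ordinary post", "A", 0.35),
    Post("ordinary post", "B", 0.55),
    Post("ordinary post", "B", 0.70),
]

REMOVAL_THRESHOLD = 0.5  # one global cut-off, as automated moderation pipelines tend to use


def removal_rate(posts, dialect):
    group = [p for p in posts if p.dialect == dialect]
    removed = [p for p in group if p.toxicity >= REMOVAL_THRESHOLD]
    return len(removed) / len(group)


if __name__ == "__main__":
    # Identical, harmless posts; only the biased scores differ by group.
    print("dialect A removal rate:", removal_rate(POSTS, "A"))  # 0.0
    print("dialect B removal rate:", removal_rate(POSTS, "B"))  # 1.0
```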

In May 2021, the landmark ruling on mass surveillance in Big Brother Watch v UK found that the UK’s bulk surveillance regime is incompatible with the European Convention on Human Rights (ECHR). It demonstrated that scanning the online communications of an entire population, as though they are all potential suspects, is not a proportionate way to deal with threats online.
