Online Safety Bill won’t force removal of ‘legal but harmful’ content

The government’s Online Safety Bill has seen certain measures against dangerous speech dropped ahead of its return to parliament next month. 

Digital, Culture, Media and Sport Secretary Michelle Donelan has defended the changes after many criticised the decision to drop a measure that would have forced large technology platforms to take down ‘legal but harmful’ content.

Newly introduced language will instead see the largest platforms, such as Twitter, Facebook and YouTube, told to have clear terms of service against harmful content and to bring in systems that allow users to protect themselves from seeing certain content.

Some, including the father of Molly Russell, the teenager who took her own life after viewing self-harm content on social media, have said that the new language amounts to a watering down of the legislation. However, Ms Donelan told the BBC: ‘These are massive, massive corporations that have the money, the knowhow and the tech to be able to adhere to this.’

Earlier this year, the original bill had been criticised by some Conservative MPs who said the ‘legal but harmful’ provision could be used by a future Labour government to ‘censor’ free speech. The bill has also been criticised by Labour politicians who said that the power given to future Culture Secretaries to add new clauses to the code of conduct was ‘not a normal approach to regulation.’ 

The changes have also been criticised by mental health support charity Samaritans, which earlier this month published a report that found 83% of social media users were recommended self-harm content without searching for it. 

A tweet from the charity said: ‘Instead of doing the right thing by forcing sites to take accountability for legal but harmful suicide and self-harm content by law, Government will be asking for sites to give users more control of the content they see, shifting responsibility onto them. This is a cop-out.’ 

Ms Donelan claims users will be sufficiently protected by a ‘triple shield of protection’ that requires platforms to remove illegal content, remove content that violates the platform’s terms and conditions, and give users controls to limit what they see.

Aside from allowing for exemptions around legitimate debate, the legislation could allow users to opt out of seeing content that incites hate based on race, sexual orientation, ethnicity and gender identity.

Photo by Paul Silvan