Social Media Bill: UK To Set New Regulations For Social Media Companies

British Prime Minister Boris Johnson leaves 10 Downing Street for PMQs at the House of Commons on 25 March 2020 in London, England. The month-long parliamentary Easter recess begins today as the UK is under a lockdown imposed to slow the spread of the coronavirus. (Photo by Wiktor Szymanowicz/NurPhoto via Getty Images)

The UK’s telecommunications regulator, Ofcom, will have the power to hold tech firms responsible for protecting people from harmful content under a new bill to be presented to Parliament next year, an official said.

Harmful content relating to violence, terrorism, suicide, cyber-bullying, and child abuse will be regulated, the UK government said on Tuesday.

Under the new Online Safety Bill, social media sites, websites, apps, and other services that host user-generated content or allow people to talk to others online will face fines of up to £18 million ($24 million) or ten per cent of their annual global turnover if they fail to remove and limit the spread of such harmful content.

“We are giving internet users the protection they deserve and are working with companies to tackle some of the abuses happening on the web.

“We will not allow child sexual abuse, terrorist material, and other harmful content to fester on online platforms.

“Tech companies must put public safety first or face the consequences,” Home Secretary Priti Patel said, as quoted in the official statement.

Digital Secretary Oliver Dowden said that, with the introduction of the new regulations, the UK is setting the global standard for online safety.

“I’m unashamedly pro-tech, but that can’t mean a tech free-for-all,” he said.

The new regulations will apply to any company that hosts user-generated content accessible to people in the UK, or that enables them to interact with others online. They will establish different responsibilities for each tech company under a categorised approach.

Social media platforms such as Facebook, TikTok, Instagram, and Twitter, for example, will be in Category 1.

Category 1 platforms will need to assess the risk of legal content or activity on their services that poses “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”.

They will also be required to ensure users are able to easily report harmful content or activity and will need to publish transparency reports about the steps they are taking to tackle online harms.

Category 2 will be reserved for platforms hosting dating services or pornography, as well as private messaging apps. Financial harms such as fraud and the sale of unsafe goods will be excluded from the framework to avoid duplicating existing regulations.

