UK leads the way in a ‘new age of accountability’ for social media
Digital Secretary Oliver Dowden and Home Secretary Priti Patel have today announced the government’s final decisions on new laws to make the UK a safer place to be online
• New rules to be introduced for tech firms that allow users to post their own content or interact
• Firms failing to protect people face fines of up to ten per cent of turnover or the blocking of their sites, and the government will reserve the power for senior managers to be held liable
• Popular platforms to be held responsible for tackling both legal and illegal harms
• All platforms will have a duty of care to protect children using their services
• Laws will not affect articles and comments sections on news websites, and there will be additional measures to protect free speech
The full government response to the Online Harms White Paper consultation sets out how the proposed legal duty of care on online companies will work in practice and gives them new responsibilities towards their users. The safety of children is at the heart of the measures.
Social media sites, websites, apps and other services which host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal content such as child sexual abuse, terrorist material and suicide content. The government is also progressing work with the Law Commission on whether the promotion of self harm should be made illegal.
Tech platforms will need to do far more to protect children from being exposed to harmful content or activity such as grooming, bullying and pornography. This will help make sure future generations enjoy the full benefits of the internet with better protections in place to reduce the risk of harm.
The most popular social media sites, with the largest audiences and high-risk features, will need to go further by setting and enforcing clear terms and conditions which explicitly state how they will handle content which is legal but could cause significant physical or psychological harm to adults. This includes dangerous disinformation and misinformation about coronavirus vaccines, and will help bridge the gap between what companies say they do and what happens in practice.
Ofcom is now confirmed as the regulator with the power to fine companies failing in their duty of care up to £18 million or ten per cent of annual global turnover, whichever is higher. It will have the power to block non-compliant services from being accessed in the UK.
The legislation includes provisions to impose criminal sanctions on senior managers. The government will not hesitate to bring these powers into force should companies fail to take the new rules seriously - for example, if they do not respond fully, accurately and in a timely manner to information requests from Ofcom. This power would be introduced by Parliament via secondary legislation. Reserving the power to compel compliance follows a similar approach to that taken in other sectors, such as financial services regulation.
Digital Secretary Oliver Dowden said:
"I’m unashamedly pro tech but that can’t mean a tech free-for-all. Today Britain is setting the global standard for safety online with the most comprehensive approach yet to online regulation. We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech.
"This proportionate new framework will ensure we don’t put unnecessary burdens on small businesses but give large digital businesses robust rules of the road to follow so we can seize the brilliance of modern technology to improve our lives."
Home Secretary Priti Patel said:
"We are giving internet users the protection they deserve and are working with companies to tackle some of the abuses happening on the web.
"We will not allow child sexual abuse, terrorist material and other harmful content to fester on online platforms. Tech companies must put public safety first or face the consequences."
Dame Melanie Dawes, Ofcom’s Chief Executive, said:
"We’re really pleased to take on this new role, which will build on our experience as a media regulator. Being online brings huge benefits, but four in five people have concerns about it. That shows the need for sensible, balanced rules that protect users from serious harm, but also recognise the great things about online, including free expression. We’re gearing up for the task by acquiring new technology and data skills, and we’ll work with Parliament as it finalises the plans."
Richard Pursey, Group CEO & Co-Founder of safety technology company SafeToNet, said:
"Online safety is a fundamental human right. That is why we are so proud to support the UK Government, who are leading the way in tackling online harm. The forthcoming legislation marks a pivotal moment for online safety, one that we hope will mean social platforms are made safe by design. This action can’t come soon enough: as our lives continue to become more digital, we and our children are increasingly exposed to online threats. The UK safety-tech industry is leading the way, with SafeToNet playing its part to make online harms a thing of the past."
The government plans to bring the laws forward in an Online Safety Bill next year and set the global standard for proportionate yet effective regulation. This will safeguard people’s rights online and empower adult users to keep themselves safe while preventing companies from arbitrarily removing content. It will defend freedom of expression and the invaluable role of a free press, while driving a new wave of digital growth by building trust in technology businesses.
The new regulations will apply to any company in the world hosting user-generated content online accessible by people in the UK or enabling them to privately or publicly interact with others online.
It includes social media, video sharing and instant messaging platforms, online forums, dating apps, commercial pornography websites, as well as online marketplaces, peer-to-peer services, consumer cloud storage sites and video games which allow online interaction. Search engines will also be subject to the new regulations.
The legislation will include safeguards for freedom of expression and pluralism online - protecting people’s rights to participate in society and engage in robust debate.
Online journalism from news publishers’ websites will be exempt, as will reader comments on such sites. Specific measures will be included in the legislation to make sure journalistic content is still protected when it is reshared on social media platforms.
Companies will have different responsibilities for different categories of content and activity, under an approach focused on the sites, apps and platforms where the risk of harm is greatest.
All companies will need to take appropriate steps to address illegal content and activity such as terrorism and child sexual abuse. They will also be required to assess the likelihood of children accessing their services and, if so, provide additional protections for them. This could mean, for example, using age assurance tools to ensure children are not accessing platforms which are not suitable for them.
The government will make clear in the legislation the harmful content and activity that the regulations will cover and Ofcom will set out how companies can fulfil their duty of care in codes of practice.
A small group of companies with the largest online presences and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1.
These companies will need to assess the risk of legal content or activity on their services with “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”. They will then need to make clear what type of “legal but harmful” content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently.
All companies will need mechanisms so people can easily report harmful content or activity while also being able to appeal the takedown of content. Category 1 companies will be required to publish transparency reports about the steps they are taking to tackle online harms.
Examples of Category 2 services are platforms which host dating services or pornography, and private messaging apps. Fewer than three per cent of UK businesses will fall within the scope of the legislation, and the vast majority of companies in scope will be Category 2 services.
Financial harms, including fraud and the sale of unsafe goods, will be excluded from this framework. This will mean the regulations are clear and manageable for businesses, focus action where it will have most impact, and avoid duplicating existing regulation.
Where appropriate, lower-risk services will be exempt from the duty of care to avoid putting disproportionate demands on businesses. This includes exemptions for retailers who only offer product and service reviews and software used internally by businesses. Email services will also be exempt.
Some types of advertising, including organic and influencer adverts that appear on social media platforms, will be in scope. Adverts placed on an in-scope service through a direct contract between an advertiser and an advertising service, such as Facebook or Google Ads, will be exempt because this is covered by existing regulation.
The response will set out how the regulations will apply to communication channels and services where users expect a greater degree of privacy - for example, online instant messaging services and closed social media groups, which are still in scope.
Companies will need to consider the impact on user privacy and ensure they understand how their systems and processes affect people’s privacy. Firms could, for example, be required to make services safer by design by limiting the ability of anonymous adults to contact children.
Given the severity of the threat on these services, the legislation will enable Ofcom to require companies to use technology to monitor, identify and remove tightly defined categories of illegal material relating to child sexual exploitation and abuse. Recognising the potential impact on user privacy, the government will ensure this is only used as a last resort where alternative measures are not working. It will be subject to stringent legal safeguards to protect user rights.