On 8th May 2024, OFCOM – the UK's communications regulator – set out a series of steps aimed at online content providers, predominantly social media operators, to help protect children when using internet services.

The steps allow OFCOM to enforce the regulations set out in the Online Safety Act 2023, which became law in the UK on 26th October 2023.

The Act covers a number of issues including:

  1. Illegal content and activity, including fraudulent advertising
  2. Content and activity that is harmful to children
  3. User-to-user services & search services
  4. Pornographic content
  5. Fees imposed by service providers
  6. Appeals and complaints
  7. OFCOM's powers
  8. Communications offences

The Act makes platforms – including social media, search, and pornography services – legally responsible for keeping people, especially children, safe online. These services have new duties to protect users in the UK by assessing risks of harm and taking steps to address them.

OFCOM's proposals to the online industry

The proposed steps are the first in a number of new actions from OFCOM in line with its new powers, and are divided into the following categories:

  • Governance & Accountability
  • Age Assurance
  • Content Moderation
  • User Reporting & Complaints
  • Terms of Service / Publicly Available Statements
  • Recommender Systems
  • User Support

OFCOM has produced a document detailing all the proposed steps, but in brief they are aimed at the following five areas of online activity:

  • Robust age checks. OFCOM expect much greater use of age assurance, so services know which of their users are children. All services which do not ban harmful content, and those at higher risk of it being shared on their service, should implement highly effective age checks to prevent children from seeing it.

  • Safer algorithms. Recommender systems – algorithms which provide personalised recommendations to users – are children's main pathway to harm online. Under the new proposals, any service which operates a recommender system and is at higher risk of harmful content should identify which of its users are children and configure its algorithms to filter out the most harmful content from children's feeds and reduce the visibility of other harmful content (a simplified illustration of this idea follows this list).
  • Effective moderation. All user-to-user services should have content moderation systems and processes that ensure swift action is taken against content harmful to children. Search services should also have appropriate moderation systems and, where large search services believe a user to be a child, a ‘safe search’ setting which children should not be able to turn off should filter out the most harmful content.
  • Strong governance and accountability. Proposed measures here include having a named person accountable for compliance with the children's safety duties; an annual senior-body review of all risk management activities relating to children's safety; and an employee Code of Conduct that sets standards for employees around protecting children.
  • More choice and support for children. This includes ensuring clear and accessible information for children and carers, with easy-to-use reporting and complaints processes, and giving children tools and support to help them stay safe.
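
To make the "safer algorithms" point more concrete, below is a minimal, hypothetical sketch of how a recommender feed might be post-filtered for a user identified as a child through age assurance. OFCOM's proposals do not prescribe any particular implementation; the field names, harm labels, and demotion factor used here are purely illustrative assumptions.

```python
# Hypothetical sketch only: not an OFCOM-specified mechanism. Assumes each feed
# item carries a harm label: "primary" for the most harmful content,
# "non_designated" for other harmful content, or None for everything else.

from dataclasses import dataclass
from typing import Optional


@dataclass
class FeedItem:
    item_id: str
    score: float               # relevance score from the recommender
    harm_level: Optional[str]  # "primary", "non_designated", or None


def filter_feed_for_child(items: list[FeedItem],
                          demote_factor: float = 0.2) -> list[FeedItem]:
    """Remove the most harmful items and reduce the visibility of other
    harmful items for a user identified as a child."""
    adjusted = []
    for item in items:
        if item.harm_level == "primary":
            continue  # exclude the most harmful content entirely
        if item.harm_level == "non_designated":
            # reduce visibility rather than removing outright
            item = FeedItem(item.item_id, item.score * demote_factor,
                            item.harm_level)
        adjusted.append(item)
    # re-rank so demoted items fall down the child's feed
    return sorted(adjusted, key=lambda i: i.score, reverse=True)


# Example usage with illustrative data
feed = [
    FeedItem("a", 0.9, None),
    FeedItem("b", 0.8, "primary"),
    FeedItem("c", 0.7, "non_designated"),
]
print([i.item_id for i in filter_feed_for_child(feed)])  # ['a', 'c'], with 'c' demoted
```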

At present, the steps outlined by OFCOM are simply proposals, and key stakeholders are invited to submit any comments, objections, or suggested changes to OFCOM by 17th July 2024. This feedback will be taken into account, along with comments from children.

OFCOM expect to finalise the proposals and publish the final statement and documents in the spring of 2025.