
Campaigners and advocacy groups have raised concerns about Meta’s reported plans to rely heavily on artificial intelligence to conduct risk assessments, a key requirement under the UK’s Online Safety Act. The law requires social media platforms such as Facebook, Instagram, and WhatsApp to evaluate potential harms on their services and set out mitigation strategies, with a strong emphasis on protecting child users and preventing illegal content.
Organisations including the NSPCC, the Molly Rose Foundation, and the Internet Watch Foundation have called on Ofcom, the UK’s communications regulator, to take a stance against the mass use of AI in delivering these critical assessments. In a letter addressed to Melanie Dawes, Ofcom’s chief executive, they warned that AI-driven risk evaluations would be a “retrograde and highly alarming step.” The signatories urged Ofcom to reject risk assessments conducted predominantly through automation and stressed the need for thorough human oversight if such processes are to meet the legal standards of the act.
An Ofcom spokesperson confirmed receipt of the letter, saying the regulator is reviewing the concerns raised and will issue a formal response in due course. Ofcom has emphasised that platforms are required to disclose who handled, reviewed, and approved their risk assessments.
A spokesperson for Meta has responded to these claims, defending the company’s approach and describing the concerns around automated assessments as unfounded. “We are not using AI to make decisions about risk,” Meta said, explaining that the company has developed a tool that helps teams identify legal and policy requirements relevant to specific products. Meta affirmed that human oversight remains integral to the process and that its advancements in AI are aimed at improving safety outcomes.
However, these reassurances have not allayed campaigners’ fears. Reports have suggested that Meta is leaning towards automating moderation in sensitive areas such as youth risk and misinformation. A former Meta executive, speaking anonymously to US media, warned that this automation could increase risks for end users. An AI-led system, they pointed out, allows features and updates to be rolled out at a faster pace but may compromise the ability to prevent problems proactively.
The debate places Meta at the centre of broader conversations around corporate responsibility and effective regulation of online platforms. With regulators like Ofcom under pressure to enforce safety requirements rigorously, the reliance on AI for risk management has become a flashpoint for both industry leaders and safety advocates.






