
Proposals to regulate artificial intelligence (AI) in the UK have been delayed by at least a year, as ministers aim to introduce a more comprehensive legislative framework encompassing key issues such as safety and copyright. Technology Secretary Peter Kyle has confirmed that a bumper AI bill is expected to be brought forward in the next parliamentary session. However, this decision has sparked concerns regarding the extended lack of regulatory oversight for this rapidly advancing sector.
The originally planned bill, intended to focus on large language models like ChatGPT, was expected within Labour’s first few months in office. It would have required companies to hand over their models for testing by the UK’s AI Security Institute, amid growing fears that such systems could advance to a level that poses risks to humanity. Nevertheless, ministers decided to delay these measures, seeking alignment with the Donald Trump administration in the United States amid concerns that premature regulation could undermine Britain’s appeal to AI innovators.
Kyle has revealed that the government plans to include copyright regulation for AI companies in the forthcoming legislation, aiming to resolve disputes surrounding the use of copyrighted material in training datasets. A source indicated that discussions with creators and technology leaders have begun, with more robust work to commence after the passage of the data bill currently awaiting approval. That bill, however, has caused significant controversy in the creative sector, with figures such as Elton John, Paul McCartney and Kate Bush opposing provisions that would enable AI companies to train on copyrighted works unless rights holders opt out.
Peers in the House of Lords have backed amendments that would require AI companies to declare their use of copyrighted material, signalling staunch resistance to the government’s approach. Still, ministers have held their ground, insisting the issue requires separate consideration under the planned comprehensive legislation. Critics, including filmmaker and campaigner Beeban Kidron, believe the government risks harming the UK’s creative industries, which represent the country’s second-largest industrial sector.
Public sentiment on AI regulation contrasts sharply with governmental hesitancy. Recent surveys show 88 per cent of the population believes the government should retain powers to restrict AI products deemed dangerous, and over 75 per cent favour state or regulatory oversight rather than leaving safety enforcement to private firms. Experts suggest the UK is attempting to balance fostering innovation with ensuring consumer protection, positioning itself strategically between the US and EU frameworks.
Despite reassurances of an economic impact assessment and technical reports to explore copyright-related concerns, frustrations remain high among stakeholders. As the next King’s Speech, anticipated for May 2026, approaches, all eyes will be on whether the proposed bill succeeds in addressing the complexities of AI regulation without sacrificing industry competitiveness or public safety.
The following content has been published by Stockmark.IT. All information utilised in the creation of this communication has been gathered from publicly available sources that we consider reliable. Nevertheless, we cannot guarantee the accuracy or completeness of this communication.
This communication is intended solely for informational purposes and should not be construed as an offer, recommendation, solicitation, inducement, or invitation by or on behalf of the Company or any affiliates to engage in any investment activities. The opinions and views expressed by the authors are their own and do not necessarily reflect those of the Company, its affiliates, or any other third party.
The services and products mentioned in this communication may not be suitable for all recipients. By continuing to read this website and its content, you agree to the terms of this disclaimer.






