OpenAI warns that a crackdown on copyright could lead to the demise of ChatGPT

The creator of ChatGPT has warned that a ban on the use of news articles and books for training chatbots would doom the development of artificial intelligence.

OpenAI told peers that it would “be impossible” to create services like ChatGPT if it were prevented from using copyrighted materials, as it seeks to influence potential laws on the subject.

The news comes as the company prepares to defend itself against lawsuits filed by book publishers and the New York Times, which claim that it has used their content illegally to “train” ChatGPT.

OpenAI stated in evidence presented to the House of Lords Communications and Digital Committee: “Because today copyright covers virtually all forms of human expression, including blog posts and photographs, forum postings, bits of software code and government documents, it would be impossible for today’s AI models to be trained without copyrighted material.”

Limiting training data to public-domain books and drawings created more than 100 years ago could be an interesting experiment, the company added, but would not provide AI systems that meet the needs of modern citizens.

OpenAI said that it adheres to all copyright laws when training its models, and that it “believe[s] that copyright law doesn’t forbid training”.

OpenAI, along with rival AI companies, has been accused of stealing the work of authors and artists.

The New York Times filed a lawsuit against the company last month, claiming that it “profit[ed] from massive copyright violations, commercial exploitation, and misappropriation” of The Times’ intellectual property.

The company has also been sued by authors including John Grisham and George RR Martin for using their books as training material.

The use of copyrighted material to train AI models is a grey area of law that has not yet been fully tested by the courts. The cases come at a time when ministers are considering new laws on copyright and AI.

In an effort to attract AI developers, ministers had proposed updating copyright law to exempt text and data mining, but the plans were dropped last year after a backlash from artists.

The Government had attempted to broker a voluntary deal between tech companies and the creative industries, but the negotiations failed to progress. It recently admitted that it may have to legislate to break the impasse.

The New York Times has demanded that OpenAI destroy all systems trained on its work.

OpenAI has signed deals with publishers such as the Associated Press and Axel Springer, the German media giant that owns Politico and Business Insider, to gain access to their content.

It said in its evidence that it wanted to sign more agreements with publishers, and that it plans to create tools allowing rights holders to opt out of having their work used to train AI.

The venture capital firm Andreessen Horowitz echoed calls for looser regulation, saying the UK should embrace AI to avoid China’s “authoritarian dominance” of the technology.

The US investor stated in its submission to the Lords inquiry that the race to implement the new technology has “significant economic implications and ideological ramifications.”

The firm said the UK’s AI efforts are closely tied to its democratic fabric, emphasising individual freedoms, privacy and an open innovation ethos. China’s AI, in stark contrast, is heavily shaped by the state’s control and surveillance priorities.

Andreessen Horowitz warned that too much regulation could cause the West to fall behind in areas like cybersecurity, intelligence, and warfare.

The UK, it argued, can lead the West in AI by promoting democratic values.

Overbearing regulation, it warned, could cede Western leadership to China, reshaping global technology in a less transparent, more authoritarian direction. The ripple effects could redefine the DNA of the internet for the next twenty years.