Researchers say AI models do not meet draft EU rules

US research warns that companies developing artificial intelligence models, such as ChatGPT creator OpenAI and Facebook owner Meta, as well as Google, could be in violation of draft EU rules.

Stanford University’s paper highlights a conflict between global regulators and companies that are spending billions of dollars on AI models, often with the backing of politicians who see the technology as crucial to national security.

Rishi Bommasani, an AI researcher at the Stanford Center for Research on Foundation Models, said that companies are not meeting the proposed rules, especially on the issue of copyright.

Bommasani said that “if foundation models are creating content, then they must summarise which of the data they used to train on is copyrighted”. At the moment, he added, most providers do a particularly poor job on this.

The launch of ChatGPT in November prompted the release of a wave of generative AI tools – software trained on large data sets to create humanlike text and images.

The rapid pace of AI development prompted EU legislators to adopt a set of strict rules. Under the AI Act proposals, developers of AI tools such as ChatGPT and Bard would be required to publish summaries of the copyrighted data used for training and to disclose when content is generated by AI.

Stanford’s study, led by Bommasani, ranked 10 AI models against the EU’s draft rules on describing and summarising data sources, disclosing the technology’s computing and energy requirements, and reporting on evaluations, testing and associated risks.

Six of the 10 providers scored less than 50%. Researchers found that closed models such as OpenAI’s ChatGPT and Google’s PaLM 2 suffered from a lack of transparency around copyrighted material, while open-source or publicly accessible rivals were more transparent but harder to control. On the study’s 48-point scale, Germany’s Aleph Alpha and California’s Anthropic ranked at the bottom, with the open-source BLOOM model ranking first.

On Thursday, Rumman Chowdhury of Harvard University told a hearing on AI held by the US Congress Science, Space and Technology Committee that “AI is neither neutral, trustworthy, nor beneficial”.

“Conscious and directed efforts are needed to ensure that this technology is used appropriately,” she added. “Building a robust AI industry doesn’t only involve processors and chips. Trustworthiness is the real competitive edge.”

The findings of Bommasani’s research, which was cited during Thursday’s hearing, will help regulators around the world as they grapple with a technology expected to disrupt industries such as professional and financial services, pharmaceuticals, and media.

They also highlight the tension between developing AI rapidly and developing it responsibly.

Frank Lucas, the committee’s Republican chair, said on Thursday that “our adversaries are catching up” in AI. “We should not and cannot copy China’s playbook. But we can continue to be a leader in AI and ensure its development by embracing our values of fairness, transparency, and trustworthiness.”

The US is preparing to introduce AI legislation in the coming months, but the EU’s draft AI Act is further along in setting out specific rules.

Bommasani said that greater transparency in the AI sector would allow policymakers to regulate the technology more effectively than they have in the past.

He said: “From social media, it was clear that we didn’t understand how the platforms were used. This compromised our ability” to govern them.

Companies’ failure to comply with the draft AI Act suggests the laws will be hard to enforce.

Bommasani said it is “immediately unclear” how to summarise the copyrighted portion of these massive data sets. He expects lobbying to intensify in Washington and Brussels as the regulations are finalised.