The UK’s competition watchdog has warned that people should not assume a positive outcome from the artificial-intelligence boom, citing risks such as a proliferation of false information, fraud and high prices for using the technology.
The Competition and Markets Authority said that while people and businesses stand to benefit from new AI systems, the dominance of entrenched players and disregard for consumer protection law pose a number of potential threats.
The CMA issued the warning in an initial review of foundation models, the technology that underpins AI tools such as the ChatGPT chatbot and image generators such as Stable Diffusion.
ChatGPT in particular has sparked debate over the economic impact of generative AI, the term for tools that produce convincing text, images and voices from human prompts. Concerns range from the elimination of white-collar jobs in law, media and IT to the mass production of misinformation aimed at consumers and voters.
Sarah Cardell, the CMA’s chief executive, said the speed at which AI is becoming part of the everyday lives of people and businesses has been “dramatic”. AI could make millions of everyday tasks easier and boost productivity, a measure of economic efficiency reflecting how much an employee produces for every hour worked.
However, Cardell cautioned that a positive outcome cannot be taken for granted. There remains a risk, she said, that AI is used in ways that undermine consumer trust, or that the market becomes dominated by a few players whose market power prevents the full benefits from being felt across the economy.
The CMA defines foundation models as “large, general machine learning models that have been trained on vast quantities of data and are adaptable to a variety of tasks and operations”, including powering chatbots, image generators and Microsoft’s Office 365 products.
The watchdog estimates that about 160 foundation models have been released by firms including Google, Meta (the owner of Facebook) and Microsoft, as well as by other AI companies such as the ChatGPT developer OpenAI and the UK-based Stability AI, which funded the Stable Diffusion image generator.
The CMA noted that many of these companies already hold positions in two or more key parts of the AI ecosystem. Google, Microsoft and Amazon, for example, are major AI developers that also own vital infrastructure such as servers, datacentres and data repositories, and have a strong presence in markets including online shopping, software and search.
The regulator said it would also closely monitor the impact of big tech investments in AI developers, such as Microsoft’s investment in OpenAI and the stake taken by Google’s parent company, Alphabet, in Anthropic. Both deals include the provision of cloud computing services, an important resource for the sector.
The CMA said it is “essential” that the AI market does not end up monopolised by a few companies. In the short term, the risks include consumers being exposed to significant levels of false information, AI-enabled fraud and fake reviews; in the longer term, dominant firms could charge high prices for the use of the technology.
According to the report, a lack of access to data and computing power, the two key inputs for building an AI model, could lead to higher prices. It also warns that “closed-source” models, such as OpenAI’s GPT-4, which underpins ChatGPT and cannot be accessed by the public, could leave the development of advanced models in the hands of a small number of firms.
According to the report, the remaining firms could develop positions of power giving them “the incentive and ability” to provide only closed-source models and to impose unfair terms and prices.
The CMA also proposed a set of principles to guide the development of AI models. These include: ensuring that foundation model developers have access to data and computing power, and that early movers do not gain an entrenched advantage; allowing both “closed-source” models, such as OpenAI’s GPT-4, and publicly accessible “open-source” models, which can be adapted by outside developers, to continue to develop; giving businesses a range of options for accessing AI, including developing their own models; and ensuring that consumers can choose between multiple AI providers.
The CMA said it would publish an update on its principles, and on the feedback it has received, in 2024. In early November, the UK government will host a global AI safety summit.
Post Disclaimer
The following content has been published by Stockmark.IT. All information utilised in the creation of this communication has been gathered from publicly available sources that we consider reliable. Nevertheless, we cannot guarantee the accuracy or completeness of this communication.
This communication is intended solely for informational purposes and should not be construed as an offer, recommendation, solicitation, inducement, or invitation by or on behalf of the Company or any affiliates to engage in any investment activities. The opinions and views expressed by the authors are their own and do not necessarily reflect those of the Company, its affiliates, or any other third party.
The services and products mentioned in this communication may not be suitable for all recipients. By continuing to read this website and its content, you agree to the terms of this disclaimer.