There is a “real risk” that the artificial intelligence industry could develop in a way that leaves only a few firms dominating the market, while consumers are bombarded with harmful information, according to the United Kingdom’s competition watchdog.
In a report published Sept. 18, the Competition and Markets Authority looked into AI Foundation Models, concluding that while AI has the potential to change how people live and work, “these changes may happen quickly and have a significant impact on competition and consumers.”
The competition regulator cautioned that in the short term, if competition is weak or developers fail to comply with consumer protection law, consumers may be exposed to significant levels of false information or AI-enabled fraud.
In the long term, there’s a chance that a handful of firms could gain or entrench positions of market power, which could mean they offer worse products and services or charge higher prices, it said.
“It is essential that these outcomes do not arise,” said the CMA, with CEO Sarah Cardell adding:
“There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”
To remedy this, the watchdog proposed several “guiding principles” intended to ensure consumer protection and healthy competition while allowing the full economic benefits of AI to be realized.
These guiding principles focus on increasing access and transparency — particularly on preventing firms from gaining unfair advantages through their use of AI models.
The U.K. competition regulator said it will publish an update on the principles and their adoption in early 2024, along with insights into further developments in the AI ecosystem. It said it has already engaged with AI developers and businesses deploying the technology.
It is not the first time the U.K. has cautioned over rapid advances in AI. In June, the British prime minister’s AI task force adviser, Matt Clifford, said the technology would need regulation and control within the next two years to curb major existential risks.
Also in June, Japan’s privacy watchdog warned OpenAI, the company behind ChatGPT, about its data collection methods.