OpenAI says it could ‘go out of business’ in the EU if it can’t comply with future regulation
OpenAI CEO Sam Altman has warned that the company could withdraw its services from the European market in response to AI regulation being developed by the EU.
Speaking to reporters after a talk in London, Altman said he had “a lot of concerns” about the EU’s AI Act, which lawmakers are currently finalizing. The terms of the Act have been expanded in recent months to include new obligations for makers of so-called “foundation models”: large-scale artificial intelligence systems that power services like OpenAI’s ChatGPT and DALL-E.
“The details really matter,” Altman said, according to a report from the Financial Times. “We will try to comply, but if we can’t comply, we will cease operating.”
In comments reported by Time, Altman said the concern was that systems like ChatGPT would be designated as “high risk” under EU law. This would mean that OpenAI would have to meet a series of safety and transparency requirements. “Either we’ll be able to figure out those requirements or not,” Altman said. “[T]here are technical limits to what’s possible here.”
In addition to the technical challenges, the disclosures required by the EU AI Act also present potential business threats to OpenAI. A provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”) and provide “summaries of copyrighted data used for training.”
OpenAI used to share this type of information but has stopped doing so as its tools have become increasingly commercially valuable. In March, OpenAI co-founder Ilya Sutskever said the company had been wrong to reveal so much in the past, and that information such as training methods and data sources needed to be kept secret to prevent rivals from copying its work.
Beyond the commercial threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained on vast amounts of data scraped from the web, much of it protected by copyright. When companies disclose these data sources, they open themselves up to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train an AI image generator.
Altman’s recent comments help fill out a more nuanced picture of the company’s appetite for regulation. Altman has told US politicians that regulation should mainly apply to the most powerful future artificial intelligence systems. By contrast, the EU AI Act focuses much more on the current capabilities of AI software.