ChatGPT faces shutdown in EU as OpenAI warns against overregulation of AI

Sam Altman, the head of ChatGPT provider OpenAI, has called in the US Senate for the regulation of large language models such as his company's GPT-4. While Altman supports regulation in principle, he considers what the EU is planning with its upcoming AI regulation to be overregulation. His concern is that the current draft law would require high-risk systems, a category that could include ChatGPT and GPT-4, to meet numerous requirements and undergo a risk assessment. Altman instead suggests a regulatory approach that combines the EU's traditional precautionary approach with the US's laissez-faire principle.

However, two weeks ago, committees of the EU Parliament agreed on new rules for AI. New services such as ChatGPT will not be classified as high-risk from the outset, but they will still be regulated particularly strictly. Operators of AI foundation models will have to examine foreseeable risks and, where necessary, mitigate them. Makers of generative AI models, such as OpenAI, will also have to document their use of copyrighted training data.

At a panel discussion at University College London, Altman emphasized that he opposes regulations that restrict user access to the technology. He believes there should be an authority that tests foundational AI models. The European Data Protection Board (EDPB) has already set up a task force on ChatGPT. Its objective is to ensure uniform enforcement of the law across the EU and to prevent individual supervisory authorities in member states from imposing sanctions on their own in response to complaints.

Most recently, the G7 leaders agreed to launch the "Hiroshima AI Process", which is intended to set minimum standards for AI for a transitional period before new laws come into effect. Overall, there is a push for rules that protect users without inhibiting the growth and development of AI technology.
