Deep Dive: Strategies for the EU’s Success in AI Regulation

The EU Parliament is moving forward with the AI Act, its framework for regulating artificial intelligence (AI). One aspect of the Act that drew particular anticipation was how it would classify new services such as OpenAI's ChatGPT. Sandra Wachter, a lawyer at the Oxford Internet Institute, points out that the Act was originally intended to regulate classification systems, since these are often used in decision-making processes such as hiring, loan approvals, or university admissions. The introduction of ChatGPT, however, changed the landscape entirely.

Wachter, who specializes in AI ethics and explainable AI, highlights the bias problems in large AI models and discusses the ambiguities that remain in the draft of the AI Act. She welcomes the fact that generative AI has been included in the Act without abandoning its risk-based approach. However, she argues that systems like ChatGPT should be strictly regulated, even though they are not initially classified as high-risk technology. She stresses the importance of transparency, suggesting that training data be made largely open so that biases and prejudices can be identified and corrected.

While Wachter appreciates certain aspects of the AI Act, she criticizes the self-assessment process under which manufacturers certify that their products comply with the Act, arguing that merely making an effort to address bias is not sufficient. The episode also covers the law's implications for users and businesses, as well as the influence of lobbyists on the regulation.

For a comprehensive discussion of these topics, the entire episode is available through TR's podcast formats, including the weekly news podcast "Weekly" and the monthly podcasts "Unscripted" and "Deep Dive".
