The Limitations of AI: Exploring Fairness in Three Questions and Answers

Three questions and answers: There can be no complete fairness with AI

Lobbyists, companies, and politicians often create smokescreens around data protection and bias in AI systems. The AI Act aims to address the safety of AI systems in relation to humans, but public understanding of the topic has so far been poor. Health Minister Karl Lauterbach's comparison of AI systems to heating systems, and his suggestion to "flush them with synthetic data," illustrates how muddled the public debate is. In an interview, Boris Ruf, a data scientist at AXA, explains how biases arise in AI systems, what role data plays, and how fairness can be achieved technically.

Biases can enter an AI system at various stages of its lifecycle. Training data plays a crucial role because it describes the world to be modeled: if data is missing or certain groups are underrepresented, biases can result. Biases can also reflect existing societal inequalities that are themselves perceived as unfair. Users can introduce biases as well, either by interpreting the system's output in their own way or by using the AI application for unintended purposes. To realize the potential of AI and gain public trust, responsible operation is paramount, which includes fairness, transparency, reliability, and data protection.
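The underrepresentation problem mentioned above can be checked mechanically before training. Below is a minimal sketch (the group labels, the 10% threshold, and the function name are illustrative assumptions, not from the interview) that reports each group's share of a dataset and flags groups falling below the threshold:

```python
from collections import Counter

def representation_report(groups, threshold=0.1):
    """Return {group: (share, underrepresented?)} for a list of
    group labels. The threshold is an assumed, application-specific
    choice; it only flags imbalance, it does not prove bias."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: (n / total, n / total < threshold)
            for g, n in counts.items()}

# Toy data: group "C" makes up only 5% of the sample
groups = ["A"] * 60 + ["B"] * 35 + ["C"] * 5
print(representation_report(groups))
# "C" is flagged as underrepresented relative to the 10% threshold
```

Such a report only surfaces one symptom; as Ruf notes, bias can still enter at other stages of the lifecycle even when the data looks balanced.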

Boris Ruf stresses that fairness, although sometimes dismissed as a vague concept, can be defined and implemented technically through mathematical formulas. Different definitions of fairness exist, such as demographic parity or equalized odds, but they can conflict with one another, so it must be decided what counts as fair for a given AI system in a specific application. Combating bias in ever-growing AI systems is challenging, and the conflicting notions of fairness force compromises. While increasing the volume and variety of training data can improve a system's performance, it does not address the causes of bias that can arise throughout the AI lifecycle.
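The conflict between the two fairness definitions named above can be made concrete with a few lines of code. The sketch below (toy data and function names are my own, not from the article) measures both criteria for a binary classifier over two groups: a classifier that perfectly satisfies equalized odds can still violate demographic parity whenever the groups' base rates differ.

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between
    groups "A" and "B" (0 means demographic parity holds)."""
    rate = lambda g: (sum(p for p, gr in zip(y_pred, group) if gr == g)
                      / group.count(g))
    return abs(rate("A") - rate("B"))

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates between
    groups "A" and "B" (0 means equalized odds holds)."""
    def rate(g, label):
        preds = [p for t, p, gr in zip(y_true, y_pred, group)
                 if gr == g and t == label]
        return sum(preds) / len(preds)
    return max(abs(rate("A", 1) - rate("B", 1)),   # TPR gap
               abs(rate("A", 0) - rate("B", 0)))   # FPR gap

# Toy data: group A's base rate is 75%, group B's is 25%
group  = ["A"] * 4 + ["B"] * 4
y_true = [1, 1, 1, 0, 1, 0, 0, 0]
y_pred = list(y_true)  # a perfect classifier

print(equalized_odds_gap(y_true, y_pred, group))   # 0.0 -> fair by one metric
print(demographic_parity_gap(y_pred, group))       # 0.5 -> unfair by the other
```

Even a perfect classifier flags 75% of group A but only 25% of group B here, which is exactly the kind of trade-off Ruf says must be resolved per application rather than in general.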

The fairness of language models, such as ChatGPT, is a relatively new area of research compared to classification and regression problems. Ruf concludes by emphasizing the importance of understanding and controlling biases in AI systems: while it may be impossible to eliminate bias completely, operators must strive to mitigate its impact. In a detailed article in the iX Special, Boris Ruf provides further insights into combating bias in AI systems.

iX's "Three Questions and Answers" series explores IT challenges from various perspectives, whether those of users, managers, or administrators. The publication encourages readers to share their suggestions and experiences related to AI and invites feedback through comments or the forum.