Aleph Alpha Calls for European Values in Trustworthy AI


Discussions about the opportunities and risks of AI are currently omnipresent. In an interview, Hans-Jörg Schäuble, Vice President Customer at the Heidelberg AI start-up Aleph Alpha, a competitor of OpenAI, talks about the competition with American AI providers, differing ethical standards, and the struggle over the EU's AI Act.

heise online: How will artificial intelligence transform value creation in the future?

Hans-Jörg Schäuble: The transformation will happen in many different ways. Generalizing AI is not bound to a single purpose and can therefore fulfill many tasks. Improving maintenance cycles on the basis of machine data is one use case. And we optimize administrative work, such as writing invoices. AI can optimize, accelerate, and thus revolutionize any processing of information. In large manufacturing plants, every minute of production downtime counts. If we can use machine data to match error messages against the machine manufacturer's knowledge bases and thus shorten the interruption, that is enormous added value.

Left behind by the USA and China

What role do German players play in the AI revolution? And how big is the risk that we will be left behind by the USA and China?

Of course, there is a risk of dependency, and it can have unpleasant consequences for us: value creation is lost and we have to buy it in. But there is also an opportunity: we have excellent basic research in Germany and Europe. In Europe, however, only a few companies are close to that research. In generative AI, only Aleph Alpha has a commercial offering. We have some catching up to do in Europe. AI models carry an understanding of the world: How do I look at the world? How do I evaluate everything I see? The models learn this during training, and it depends very much on the data they are trained on. That is why it is important that we in Europe preserve and reflect our view of the world.

What are the main differences between your software Luminous and the market leader ChatGPT?

The basis, the models, algorithms, and methods are the same: both are Transformer-based models and share the same architecture. ChatGPT is very focused on being usable by everyone; it is a B2C use case. We differ in that ChatGPT trained knowledge into the model itself, with the aim of making it as easy as possible for all users. This means you cannot connect any company knowledge. We take a different approach: the knowledge should remain in connected knowledge databases and only be used by the model to complete the task.
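The separation Schäuble describes, a model reasoning over externally stored knowledge rather than facts memorized in its weights, is commonly known as retrieval-augmented generation. A minimal sketch of that pattern (all names, data, and the keyword-based retrieval are invented for illustration and are not Aleph Alpha's actual API):

```python
# Sketch: company knowledge stays in a connected knowledge base and is looked
# up at query time; only the retrieved facts are handed to the model inside
# the prompt. Retrieval here is naive key matching, purely illustrative.

knowledge_base = {
    "E-4012": "Error E-4012: coolant pressure low; check pump valve P-7.",
    "E-2230": "Error E-2230: spindle overheating; reduce feed rate.",
}

def retrieve(query: str) -> list[str]:
    """Return knowledge-base entries whose error code appears in the query."""
    q = query.lower()
    return [text for code, text in knowledge_base.items() if code.lower() in q]

def build_prompt(query: str) -> str:
    """Assemble a prompt: retrieved facts as context, then the actual task."""
    context = "\n".join(retrieve(query)) or "(no matching entries)"
    return f"Context:\n{context}\n\nTask: {query}\nAnswer:"

# The resulting prompt would then be sent to the language model.
print(build_prompt("Machine reports E-4012, what should the operator do?"))
```

Because the facts live in the database rather than the model, updating company knowledge means editing the database, not retraining the model.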

What are the fields of application of Luminous?

They are very diverse: in principle, any task that can be expressed in words. That can be a poem, a text summary, or finding an error code in machine data. The industry is irrelevant to the technology because the processes are the same. Being based in Europe and able to meet different data protection requirements makes us particularly attractive to sectors with sensitive use cases: medicine, public administration, insurance companies, and banks, but also industrial companies, for example because of their research data.

What ethical standards do we have to apply in order to spark innovation on the one hand and ensure responsible action on the other?

The discussion is multifaceted. On the one hand, we have to be able to transfer our European values to the technology; on the other hand, we have to ensure that its use does not become uncontrollable and that we can understand how the AI arrived at its decisions. We sum this up with the term trust, or trustworthiness. The cultural perspective of Europeans on, for example, images and artistic depictions of nude people differs from that of Americans, who prevent such images from being uploaded to social media. We must reflect on these cultural differences and preserve our European culture. At Aleph Alpha, we have a strong focus on research and development in explainable AI to make AI-generated content understandable for humans. This creates the necessary basis for using generative AI in safety-critical applications and, at the same time, builds trust among users.

What regulatory benchmarks should we apply?

Rules should not only restrict but also open up possibilities. We need a social consensus. In the current regulatory debate, which will also feed into the upcoming EU AI Act, we are discussing technical details such as training data. It would be more important, however, to first find a social consensus on the necessity of this technology, on which appropriate regulation can then be based.

At an AI panel at BDI Industry Day, you said you were looking forward to guidelines, but that rules should not only restrict but also shape. How should AI be regulated?

We shouldn't refer only to AI; it is already an elementary part of our lives. As a society, we must fundamentally discuss and agree on where we want to go in the long term. As technical specialists, we then have to implement that. The technology, however, must be built in such a way that it can conform to the rules by which society wants to live.
