ChatGPT & Co: The Solution to Saving Lives Amidst Moratorium on Large AI Models


Scientists are reacting with caution to the Future of Life Institute’s appeal for a six-month moratorium on training AI systems more powerful than OpenAI’s GPT-4 language model. The call has been criticized for fueling misunderstandings and misperceptions about AI while distracting from the actual problems. However, scientists do acknowledge the risks that advanced AI may pose, such as discrimination, malicious use, and the spread of misinformation.

Thilo Hagendorff, head of the Interchange Forum for Reflecting on Intelligent Systems at the University of Stuttgart, argues that AI language models can be used productively, even offering the potential to save lives. The use of such models in medicine could improve care and reduce suffering. However, since technical innovations generally outpace policies and regulation, the risks must be addressed proactively.

Jessica Heesen, head of the research focus on media ethics and information technology at the University of Tübingen, emphasizes the importance of designing technology according to ethical principles. She encourages transparency and participation in development so that society can better judge the veracity and content of AI-generated communication. At the very least, she argues, there should be a labeling obligation for texts, images, and videos produced by AI, to counter propaganda, misinformation, and loss of control.

Silja Vöneky, a professor of legal ethics in Freiburg, points out the limits of the mostly static regulation being discussed by the EU for AI systems such as GPT-4, arguing that it is not enough for chatbots to identify themselves as such to users. She highlights how difficult it is to regulate for risks that are still emerging. Even so, she says, it is important to point out the risks of AI and to engage in a broad democratic discourse on how to tackle an uncontrollable flood of propaganda and untruths, job losses, and loss of control.

Ute Schmid, head of the Cognitive Systems working group at the University of Bamberg, sees the appeal as an opportunity to foster a broad democratic discourse on the use of large language models and other emerging AI technologies. She emphasizes how important this is in light of the risks posed by propaganda, untruths, and loss of control.

Lastly, Florian Gallwitz, a professor of media informatics in Nuremberg, comments that since the technical details and capabilities of GPT-4 are not publicly known, OpenAI’s competitors can continue to work on the technology regardless. It is therefore important, he argues, to develop verifiable specifications and tests rooted in fairness and anchored in reality. Such tests and labels should be developed in Europe so that products can be certified and regulated.
