Youth protection officers in Bavaria have strengthened their work with the help of artificial intelligence. The Bavarian State Center for New Media (BLM) announced that an AI tool detected around 1,400 suspected cases on the internet last year. The tool, which has been in use since the beginning of 2022, automates the search for problematic content online; once the AI flags a suspicious case, BLM experts evaluate it.
In addition to the AI-identified cases, BLM experts investigated a further 800 cases based on inquiries, complaints, and their own searches, 560 of which were located online. The majority of cases involved pornography and violations in the areas of right-wing extremism and anti-Semitism.
Protecting minors and internet users is becoming increasingly important in the digital world, according to BLM President Thorsten Schmiege. The sheer volume of problematic content online means that manual supervision is no longer effective. “In the digital world, modern supervision must also work with the help of AI,” said Schmiege.
Media Council Chairman Walter Keilbart emphasized the social relevance of combating illegal media offerings, especially those involving right-wing extremism, anti-Semitism, hatred, and hate speech. The BLM is particularly committed to this goal.
Using AI to detect problematic content online is a powerful way to strengthen youth protection and to counter hate speech and extremism on the internet.