Amnesty International recently faced criticism after using stylized images generated by artificial intelligence software to promote a report on human rights abuses by the Colombian police. The report covered the unrest in Colombia in 2021, when the government’s proposed tax reform triggered massive protests that were met with violent responses from security forces. The use of AI imagery sparked concern among photojournalists and media scholars, who warned it could fuel conspiracy theories and undermine the organization’s work.
The AI-generated images contained telltale errors: one showed a woman in police custody holding a flag whose colors were correct but whose three stripes appeared in the wrong order. Amnesty International initially defended its use of AI images, explaining that it aimed to protect the subjects from possible government persecution.
The organization’s use of the technology also raised questions about copyright, since the AI software does not identify the original sources of the images it was trained on. Amnesty International has since apologized for any negative effects of using AI and removed the images from its social media platforms.
Amnesty International has recognized the negative impact of using artificial intelligence and stated that it must be more careful when employing such technologies. While the organization maintained that its intent was to protect protesters rather than to cause harm, it also acknowledged the risk of spreading false information.
In conclusion, Amnesty International’s use of AI imagery has sparked concern, with critics questioning its potential to undermine the organization’s work. Amnesty has since apologized to those who raised objections and stated that it will take extra care when employing this technology in the future.