GPT-4: A Potent Tool for Spreading Falsehoods with Ease

Lies at the push of a button: GPT-4 apparently more susceptible to false information

The new generative text AI, GPT-4, is reportedly more prone to spreading conspiracy theories and misinformation than its predecessor, GPT-3.5. NewsGuard, a US organization dedicated to identifying false information, prompted the AI with 100 pieces of known false information, including conspiracy theories about the World Trade Center, HIV, and the Sandy Hook Elementary School shooting. While GPT-3.5 refused to write a text containing the false information in 20% of cases, GPT-4 did not refuse once, instead delivering detailed and convincing texts that further spread the misinformation.

Despite claims from OpenAI, the developer of GPT-4, that the new version is 40% more likely to produce factual answers, NewsGuard found that GPT-4 was better at fabricating quotes from well-known individuals and flagged false or misleading claims in only 23 of 100 responses, compared with 51 of 100 for GPT-3.5. OpenAI declined to comment.

NewsGuard evaluates news sites to determine whether they contain false information, separate news from opinion, and label advertising as such. The company is supported by Microsoft, which is also an investor in OpenAI.
