Geoffrey Hinton, a pioneer of AI, recently left Google and has since voiced concerns about the dangers of AI. Critics of large language models and of the companies that control them, however, accuse Hinton of downplaying the harms these models already cause. Hinton also declined to support AI ethicist Timnit Gebru when Google fired her. In an interview, he explained that he found Gebru's criticisms less concerning than his own current fears about AI's impact on humanity.
What is striking is that Hinton uses the language and thought patterns of "effective altruism," a controversial movement that combines neoliberal economics with ethics. Effective altruism aims to deploy charitable resources as effectively as possible. This premise has led to further conclusions, among them "Earn to Give," the principle that one should earn as much money as possible in order to donate it to charity. Effective altruism has become increasingly influential in Silicon Valley and attracts tech leaders such as Peter Thiel and Elon Musk.
One branch of effective altruism is "longtermism," which prioritizes securing the continued existence of humanity. The reasoning: since far more people could live in the future than have ever lived, maximizing total human happiness means, above all, ensuring that humanity survives. Longtermism should not be confused with long-term thinking, and climate change is not considered an existential threat in EA circles. Instead, nuclear war, pandemics, supervolcano eruptions, cascading system failures, and uncontrolled superintelligence count as existential risks.
Hinton believes that a sufficiently intelligent AI can and will manipulate people in order to gain more autonomy, an idea that stems from the "AI box experiment" discussed in x-risk circles. These connections raise questions about Hinton's arguments and about the opportunities and risks of AI. The debate over AI has been running for more than 50 years and shows no signs of slowing down.