GPT Development Continues: Sam Altman’s Insights

Sam Altman: The development of GPT is far from over

OpenAI’s ChatGPT has taken the IT world by storm, with some arguing that it needs to be tightly regulated, while others consider artificial intelligence and language models far less critical and life-changing. In an interview with “Zeit”, OpenAI CEO Sam Altman talks about data protection, the AI Act, the dangers of over-regulation, and staying in Europe.

Privacy and Acceptance of AI

Sam Altman describes users’ “conversations” with ChatGPT as “very personal”. Yet they are neither ordinary data covered by existing data protection regulations nor information deserving special protection comparable to attorney-client privilege. Altman therefore emphasizes the need for a new category of data protection, one that also regulates exceptions, such as statements about suicide. People need time to get used to AI technology, and he believes it is essential that they understand what is in store for them; the same applies to politicians and businesses. It was therefore more helpful to release an “imperfect” version 3.5 of ChatGPT than to roll out GPT-4 directly, which would have been an earthquake.

Fear of Over-Regulation

The head of OpenAI recently acknowledged the risks associated with the development of AI at a US Senate hearing. According to Altman, it took many years to understand how to make this technology safe and to grasp its possible impact on society’s future. OpenAI was in no hurry to release ChatGPT, but Altman feared that others would rush to replicate its language model. After development was complete, his team spent another eight months safeguarding ChatGPT against potential threats before its release.

Withdrawal from Europe not Planned

OpenAI does not want to withdraw from the EU, as it had previously threatened. The company loves Europe and intends to try to adhere to the guidelines. However, its systems would have to be technically capable of complying with the requirements of the AI Act. One point of contention in the planned European law is the question of responsibility.

Future Developments

Altman considers data protection essential but is concerned that over-regulation will destroy the advantages of AI. Other dangers he cites are the spread of disinformation and the potential for cyberattacks and biological weapons. Many scientists likewise believe that the development of GPT is far from over. OpenAI is drawing on more data for training, such as purchased books and images, and Altman predicts that the next big step in AI development will be a combination of GPT and a self-taught system like AlphaGo Zero.

Conclusion

In his interview with “Zeit”, OpenAI CEO Sam Altman discussed the regulation of AI and language models. He believes it is essential that people understand what is in store for them, and that politicians and businesses need time to get used to AI technology. He also emphasized the need for a new category of data protection that regulates exceptions. Altman fears that over-regulation could squander the advantages of AI, and he is concerned about the potential for cyberattacks and biological weapons. In his view, the next big step in AI development will be a combination of GPT and a self-taught system like AlphaGo Zero.