An open letter signed by more than 2,600 tech industry leaders and researchers calls for a temporary halt to further artificial intelligence (AI) development, citing “deep hazards to society and mankind.” According to the letter’s writers, the development of AI could mark a profound shift in the history of life on Earth, for better or ill.
Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a number of CEOs, CTOs, and researchers in the field of artificial intelligence were among the signatories of the letter, which was published on March 22 by the United States think tank Future of Life Institute (FOLI).
The institute expressed fears that, among other things, “human-competitive intelligence might pose serious hazards to society and mankind,” and urged all AI companies to “immediately cease” training AI systems more powerful than GPT-4 for at least six months.
📢 We're calling on AI labs to temporarily pause training powerful models!— Future of Life Institute (@FLIxrisk) March 29, 2023
Join FLI's call alongside Yoshua Bengio, @stevewoz, @harari_yuval, @elonmusk, @GaryMarcus & over a 1000 others who've signed: https://t.co/3rJBjDXapc
A short 🧵on why we're calling for this – (1/8)
Advanced AI could represent a profound change in the history of life on Earth, the letter argues, and should be planned for and managed with commensurate care and resources. Sadly, according to the institute, this level of planning and management is not happening.
GPT-4, the most recent version of OpenAI’s AI-powered chatbot, was released on March 14. So far, it has scored in the 90th percentile on some of the most difficult high school and bar exams in the United States, and it is understood to be ten times more advanced than the original version of ChatGPT.
There is an “out-of-control race” between AI firms to develop ever more powerful AI that “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.
BREAKING: A petition is circulating to PAUSE all major AI developments.— Lorenzo Green 〰️ (@mrgreen) March 29, 2023
e.g. No more ChatGPT upgrades & many others.
Signed by Elon Musk, Steve Wozniak, Stability AI CEO & 1000s of other tech leaders.
Here's the breakdown: 👇 pic.twitter.com/jR4Z3sNdDw
Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth,” and whether they would “automate away” all employment opportunities.
FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
The letter said, “Such choices must not be handed off to unelected tech executives.”
Having a bit of AI existential angst today— Elon Musk (@elonmusk) February 26, 2023
The institute also agreed with OpenAI co-founder and CEO Sam Altman, who recently suggested that independent review should be required before the development of new AI systems.
In a February 24 blog post, Altman emphasized the need to prepare for artificial general intelligence (AGI) and, eventually, artificial superintelligence (ASI).
Yet not every AI expert has rushed to sign the petition. In a March 29 tweet exchange with Rebooting AI author Gary Marcus, SingularityNET CEO Ben Goertzel argued that large language models (LLMs) will not evolve into artificial general intelligence (AGI), of which there have been few developments to date.