Tech executives and Elon Musk call for a halt to AI development

Mar 30, 2023 | CryptoNews

According to the letter’s authors, the development of artificial intelligence could profoundly alter the history of life on Earth, for better or worse. An open letter signed by more than 2,600 technology industry leaders and researchers calls for a temporary halt to further artificial intelligence (AI) development, citing “deep hazards to society and mankind.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a number of CEOs, CTOs, and researchers in the field of artificial intelligence were among the signatories of the letter, which was published on March 22 by the US think tank Future of Life Institute (FOLI).

The institute expressed fears that “human-competitive intelligence might pose serious hazards to society and mankind,” among other concerns. It urged all AI companies to “immediately cease” training AI systems more powerful than GPT-4 for at least six months.

Advanced AI could represent a profound shift in the history of life on Earth, the letter argues, and should be planned for and managed with commensurate care and resources. Unfortunately, according to the institute, this level of planning and management is absent.

The most recent version of OpenAI’s AI-powered chatbot, GPT-4, was released on March 14. It has so far scored in the 90th percentile on some of the most difficult high school and legal exams in the United States, and is understood to be ten times more advanced than the initial version of ChatGPT.


FOLI claimed there is an “out-of-control race” between AI firms to develop ever more powerful AI that “no one — not even their creators — can understand, predict, or reliably control.”

Among the letter’s top concerns were whether machines could flood information channels with “propaganda and untruth,” and whether they could “automate away” all jobs.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The letter said, “Such choices must not be handed off to unelected tech executives.”

The institute also agreed with OpenAI CEO and co-founder Sam Altman, who recently said that independent, third-party review should be required before developing new AI systems.

In a February 24 blog post, Altman emphasized the need to prepare for systems with artificial general intelligence (AGI) and artificial superintelligence (ASI).

Still, not every AI expert has rushed to sign the petition. In a March 29 Twitter exchange with Rebooting AI author Gary Marcus, SingularityNET CEO Ben Goertzel argued that large language models (LLMs) will not evolve into artificial general intelligence (AGI), whose development remains in its early stages.

