Billionaire Elon Musk and the world's leading AI experts call for a pause in the development of next-generation AI systems
(Baonghean.vn) - In an open letter citing potential risks to society and humanity, billionaire Elon Musk and a group of leading artificial intelligence experts and technology executives called for a six-month pause in the development of next-generation AI systems.
The letter, released by the nonprofit Future of Life Institute and signed by more than 1,000 people including Tesla CEO Elon Musk, calls for a pause in the development of advanced AI systems until shared safety protocols for such designs have been developed, implemented and audited by independent experts.
“Powerful AI systems should only be developed when we are confident that their impacts will be positive and their risks will be manageable. The current race to develop AI systems is dangerous and calls for the establishment of independent regulators to ensure future systems are deployed safely,” the letter said.
The letter details the potential risks that human-competitive AI systems pose to society and civilization, including economic and political disruption, and calls on developers to work with policymakers and regulators.
“We call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public and verifiable. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter said.
“AI labs and independent experts should use this pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts,” the letter added.
Co-signers of the letter include Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, Stability AI CEO Emad Mostaque, researchers at the Alphabet-owned AI lab DeepMind, and prominent AI researchers such as Stuart Russell, Yoshua Bengio, and Gary Marcus.
Earlier, on March 27, the European Union's law enforcement agency Europol also raised concerns about the ethical and legal issues posed by advanced systems such as ChatGPT, warning that AI could be exploited to commit fraud, spread disinformation, and carry out other cybercrimes.
Since its release late last year, Microsoft-backed OpenAI's ChatGPT has sparked a race to develop large language models and integrate AI into products and services.
A Future of Life Institute representative said that Sam Altman, CEO of OpenAI, did not sign the letter.
Commenting on the issue, Gary Marcus, a professor at New York University (USA) who signed the letter, said: “The letter is not perfect, but it captures the right spirit: people need to slow down until the impacts of AI are better understood. These systems can cause serious harm, and the large AI companies are becoming increasingly secretive about what they are developing, which makes it difficult for society to defend against whatever harm may arise.”
According to:
1. https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
2. https://www.theverge.com/2023/3/29/23661374/elon-musk-ai-researchers-pause-research-open-letter