
AI can go beyond human control


The world's leading scientists warn that, without strict oversight, AI could spiral out of human control, causing unpredictable consequences.

Two of the world's leading AI scientists, Max Tegmark, a professor at the Massachusetts Institute of Technology (USA), and Yoshua Bengio, a professor at the Université de Montréal (Canada), have warned that artificial general intelligence (AGI) built on the "agent" model could become dangerous because humans risk losing control of such systems.

Yoshua Bengio (left) and Max Tegmark (right) discuss the development of AGI in a live podcast recording of CNBC's "Beyond The Valley." Photo: CNBC

According to them, AGI refers to AI systems whose intelligence equals or surpasses that of humans, with the ability to think and make decisions independently. Without strict control mechanisms, such systems could act against human will, leading to unpredictable consequences.

The scientists’ concerns stem from the fact that many large technology corporations are promoting “AI agents”, chatbots that can act as digital assistants in work and daily life. However, when AGI will actually arrive remains an open question, with predictions varying widely.

The problem, according to Yoshua Bengio, is that AI systems are gradually gaining autonomy and the ability to think independently. Speaking on CNBC’s “Beyond The Valley” podcast on February 4, he explained: “AI researchers are inspired by human intelligence to build machine intelligence. In humans, intelligence is not only the ability to understand the world but also includes agent behavior, using knowledge to achieve goals.”

Bengio warns that this is exactly how AGI is being developed: as agents with a deep understanding of the world and the ability to act on it. He stresses that this approach is “really dangerous.”

Pursuing this model, he said, would be tantamount to “creating a new species” or “a new intelligent entity” on Earth, one that humans cannot be sure will behave in ways that benefit us.

“We need to consider the worst-case scenario, and the key factor is always agency. In other words, AI can have its own goals, and that can get us into trouble,” he said.

Bengio also warned that as AI gets smarter, it could develop self-preservation mechanisms, potentially putting it at odds with humanity.

“Do we want to compete with entities that are smarter than us? That is clearly not a safe bet. We need to understand how self-preservation can emerge as a goal of AI.”

According to Professor Max Tegmark from MIT, the safer path lies in the “tool AI” model: systems designed for a single, specific task rather than acting as independent agents.

He cited the examples of an AI tool that helps find a cure for cancer and a self-driving car system. Such technologies can still be highly capable, but humans must be able to control them with a high degree of confidence.
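To make the contrast concrete, here is a minimal, hypothetical Python sketch of the two designs; the names (answer_query, tool_ai, agent_ai) are illustrative assumptions, not code proposed by Tegmark or anyone quoted in this article:

```python
# Hypothetical sketch: "tool AI" vs. "agent AI".
# `answer_query` stands in for any single, bounded model call.

def answer_query(prompt: str) -> str:
    """Stand-in for one bounded model invocation."""
    return f"answer to: {prompt}"

def tool_ai(prompt: str) -> str:
    # Tool AI: one bounded call per human request. The human sets the
    # goal, inspects the output, and decides what happens next.
    return answer_query(prompt)

def agent_ai(goal: str, max_steps: int = 5) -> list[str]:
    # Agent AI: a loop that keeps choosing its own next action in
    # pursuit of a standing goal, with no human in the loop between
    # steps. This open-ended autonomy is what Bengio flags as risky.
    history: list[str] = []
    for _ in range(max_steps):
        action = answer_query(f"goal={goal}; history={history}; next action?")
        history.append(action)
        if "done" in action:  # the agent, not the human, decides when to stop
            break
    return history

if __name__ == "__main__":
    print(tool_ai("summarize this scan"))       # human-gated, single step
    print(agent_ai("optimize lab throughput"))  # self-directed, multi-step
```

The distinction Tegmark draws is not raw capability but where control sits: in tool_ai every step passes through a human, while in agent_ai the loop itself decides what to do next.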

“I believe that, optimistically, we can still exploit most of the benefits that AI brings, as long as we apply basic safety standards before widely deploying powerful AI systems,” Tegmark emphasized.

Companies must demonstrate that humans can control AI before commercializing it, he said. Once this principle is followed, the industry can quickly innovate to find safer ways to deploy AI.

AGI could go beyond human control, causing serious consequences. Photo: Internet

In 2023, the Future of Life Institute, founded by Tegmark, called for a moratorium on the development of AI systems that could match or surpass human intelligence.

While that call went unheeded, he argues that at least the topic has become part of the conversation, and that it is now time to act and put guardrails in place for AGI.

“A lot of people are talking about this, but the important question is whether we can get them to act,” Tegmark told CNBC’s Beyond The Valley podcast.

“It would be crazy for humans to create something smarter than themselves and not figure out how to control it,” he warned.

Predictions about when AGI will appear are controversial, in part due to differences in how AGI is defined. Some experts believe AGI is far away, while others believe it could happen within the next few years.

Sam Altman, CEO of OpenAI, has said that his company knows how to build AGI and that the technology could arrive sooner than many people think. At the same time, he has sought to downplay the hype surrounding AGI's impact on society.

“I suspect we will achieve AGI sooner than most people think, but the impact will not be as big as many people imagine,” Altman said in December.

While not revealing specifics about the progress of AGI development, Altman's remarks suggest that OpenAI believes it is closing in on this goal, raising questions about how prepared the world is for the emergence of an intelligent system able to equal or surpass humans.

Phan Van Hoa