Artificial intelligence more dangerous than atomic bombs?
(Baonghean.vn) - Artificial intelligence (AI) experts are issuing dire warnings about the dangers that uncontrolled AI development will pose to society, or even to the survival of humanity itself.
The astonishing performance of the advanced conversational AI model, developed by OpenAI and launched in late November 2022 under the name ChatGPT, has raised expectations that systems capable of matching human cognitive abilities, or even possessing “superhuman” intelligence, may soon become a reality.
*Illustration photo.*
The ability to understand human language and generate text that reads as if written by a human has long been a goal of AI research. With the advent of Large Language Models (LLMs), that goal is closer than ever.
At its core, ChatGPT is a powerful conversational AI model that leverages the strengths of large language models to deliver human-like responses, answer questions, and make suggestions in real time. Unlike traditional chatbots limited to pre-programmed responses, ChatGPT can understand context and generate a wide variety of creative text, although its output is sometimes inaccurate.
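As an illustration, a program can obtain such generated responses from a model of this kind in a few lines of code. The sketch below uses OpenAI's Python SDK; the model name and prompt are illustrative choices, and the call assumes an API key is configured in the environment:

```python
# Minimal sketch: asking a conversational LLM a question through an API.
# Assumes the `openai` Python package is installed and the OPENAI_API_KEY
# environment variable is set; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what a large language model is."},
    ],
)

# The reply is generated text, not a pre-programmed response.
print(response.choices[0].message.content)
```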
Many warnings have been issued about the dangers of AI.
On March 22, billionaire Elon Musk and more than 1,000 of the world's leading technology experts began signing an open letter warning about the risks of uncontrolled AI development. The letter ends with the statement: "Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
To justify the need to pause work on models more powerful than GPT-4, the open letter argues:
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control. We should ask ourselves: Should we automate away all the jobs? Should we develop nonhuman minds that might eventually outsmart and replace us? Should we risk loss of control of our civilization?”
In fact, this is not the first time scientists have warned that artificial intelligence could become dangerous to humans. In 2014, the famous British theoretical physicist Stephen Hawking said: "The development of full artificial intelligence could spell the end of the human race." He also warned that humans could be completely replaced by artificial intelligence within the next 500 years if they are not careful in researching and developing this dangerous technology.
More recently, Eliezer Yudkowsky, an American computer scientist widely considered one of the founders of the field of AI alignment, went further, publishing an article in Time magazine titled "Pausing AI Developments Isn't Enough. We Need to Shut it All Down."
In it, Yudkowsky argued that an AI model with superhuman intelligence is very likely to emerge, and that when it does, everyone on Earth will be at risk of literally dying. In his view, this is not "maybe possibly some remote chance" but "the obvious thing that would happen."
The potential risks of artificial intelligence depend on how it is used. AI was initially developed to assist humans in their work, especially with boring and repetitive tasks such as classifying images and reviewing information.
However, with AI's rapid recent development, scientists and technology experts worry that, if developed without control or programmed for the wrong purposes, artificial intelligence could cause serious consequences for people and society.
AI development needs to be controlled as tightly as thermonuclear bombs.
The sight of AI scientists calling for a pause in, or even an end to, the rapidly advancing work in their field cannot help but recall the history of nuclear weapons.
The terrible destructive power of the atomic bomb is often cited as a cautionary example of what the research and development of a new technology can unleash.
In 1949, several leading nuclear physicists and other veterans of the atomic bomb project protested against participating in the development of a thermonuclear weapon (“hydrogen bomb”), because the energy released by such a bomb could be 1,000 times greater than that of a fission atomic bomb.
Most of those scientists shared the view that thermonuclear bombs threatened the very future of humanity, and that humanity would be better off without such super-bombs.
However, aside from the military applications that pose a threat to humanity, atomic energy, in the form of fission reactors, has brought great benefits to mankind. Thermonuclear energy, first released in an uncontrolled form in a thermonuclear bomb, promises even greater benefits.
When AI develops to the point where it can make its own decisions, responding to changes in its environment, seeking alternative targets, or expanding its range of targets, humans may no longer be safe.
For example, AI may come to support critical systems such as electricity, transportation, healthcare, and finance, mastering and controlling them and making and executing decisions in emergency situations. If such AI were given more "ambitious" goals, the consequences could be serious: disabling traffic lights and throwing road networks into chaos, or cutting power to urban rail systems, triggering chains of accidents and widespread blackouts.
Hollywood studios have produced many films based on this scenario. With current AI technology, however, it is no longer a distant prospect; it could become reality. Elon Musk believes that if AI is allowed to develop unchecked, to the point where it can automate decisions without human intervention, it could threaten human survival.
That is why he and thousands of technology experts signed the letter calling for a temporary halt and for strict, transparent control of AI development. According to Musk, artificial intelligence systems are so complex that understanding and controlling them is difficult; without transparency, the use of AI for unethical purposes that harm people is certain to happen.
Can AI modify human behavior?
Suppose that, in the future, AI systems based on Deep Learning gradually gain the ability to manipulate people's psychological states and modify their behavior. Deep Learning is a subfield of Machine Learning in which computers learn and improve through algorithms; it is built on more complex concepts, chiefly artificial neural networks, which mimic the human brain's ability to think and reason.
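To make the idea of "learning through algorithms" concrete, here is a minimal sketch, in Python with NumPy, of a tiny artificial neural network that teaches itself the XOR function by repeatedly adjusting its internal weights. The network size, training data, and learning rate are arbitrary illustrative choices; real deep-learning systems scale this same principle to billions of parameters:

```python
# A toy artificial neural network: one hidden layer, trained by
# gradient descent to reproduce XOR. Illustrative values throughout.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR inputs and target outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: compute the network's current predictions
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. each weight
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: the "learning" step that adjusts the weights
    W2 -= learning_rate * h.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0)
    W1 -= learning_rate * X.T @ d_h
    b1 -= learning_rate * d_h.sum(axis=0)

print(np.round(out, 2))  # typically approaches [0, 1, 1, 0]
```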
Such systems could, in effect, take over society. Given the unpredictable behavior of deep learning-based AI systems, failing to control them tightly could have catastrophic consequences for humanity.
In such a scenario, countries deploy networks of behavior-modifying AI systems across the media, education systems, and other areas to "optimize" society. The process may appear effective at first, but it quickly spirals out of human control and ends in social chaos.
Many AI applications are already optimized to modify human behavior, including chatbots used in psychotherapy. In many other cases, such as child education, AI applications have powerful behavior-modifying effects.
Like any other technology, each AI application has its benefits as well as its potential dangers. For now, the operation of AI systems remains under human control.
However, the development of AI has opened up a whole new dimension. A recent article in Forbes by the AI expert Lance Eliot detailed a number of ways that chatbots and other AI applications can manipulate human psychology even without intending to.
On the other hand, the intentional manipulation of behavior and minds using AI systems is a rapidly evolving field, already applied in a variety of contexts.
For example, advanced AI-based E-Learning systems can also be considered a form of behavioral modification. Indeed, AI applications in education tend to be based on behavioral models of human learning. Advanced AI teaching systems are designed to optimize children’s responses and performance, profile each child individually, assess their progress in real time, and adjust their activities accordingly.
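As a purely hypothetical illustration of what such an adaptive loop might look like, the sketch below profiles a learner, updates a mastery estimate after each answer, and adjusts the difficulty of the next exercise. All names and the difficulty rule are invented for this example, not taken from any real product:

```python
# Hypothetical adaptive-tutoring loop: assess progress, adjust activities.
# The update rule and constants are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    skill: float = 0.5       # estimated mastery, 0..1
    difficulty: float = 0.5  # difficulty of the next exercise, 0..1

def update(profile: LearnerProfile, answered_correctly: bool) -> LearnerProfile:
    # Assess progress: nudge the mastery estimate toward the observed result
    target = 1.0 if answered_correctly else 0.0
    profile.skill += 0.2 * (target - profile.skill)
    # Adjust activities: keep exercises slightly above the current skill level
    profile.difficulty = min(1.0, max(0.0, profile.skill + 0.1))
    return profile

profile = LearnerProfile()
for result in [True, True, False, True]:  # one learner's answer history
    profile = update(profile, result)
    print(f"skill={profile.skill:.2f}, next difficulty={profile.difficulty:.2f}")
```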
Another example is the popularity of AI chatbots that help people quit smoking or drugs, exercise properly, and adopt healthier habits.
There is no doubt that AI systems do indeed perform better than humans in many specific contexts. Moreover, AI is constantly improving. But where will the ongoing expansion and integration of AI systems take us, especially as they acquire ever more comprehensive and powerful capabilities to shape human thinking and behavior?
Throughout human history, attempts to completely optimize a society as a supersystem operating according to strict criteria have often led to disaster.
According to the open letter quoted above, most experts in the field agree that AI applications must always operate under human supervision, and that the development and application of AI must be governed by human intelligence.
Deep learning-based AI is penetrating ever more areas of human activity, and the trend of integrating such systems into social hierarchies will pose enormous risks to society.
The question here is, in the event of an AI supersystem malfunctioning, threatening to cause catastrophic consequences, who or what will intervene to stop it?
In Stanley Kubrick’s famous sci-fi film, “2001: A Space Odyssey,” the surviving astronaut intervenes at the last minute to shut down the AI system. But would the astronaut have been able to do so if the AI system had previously conditioned his behavior so that he would not do so?
In fact, trying to halt the development of AI outright would be unreasonable, even harmful and counterproductive. But we must recognize that the dangers arising from the rapid spread of AI systems into almost every area of human activity need to be contained by appropriate regulation and human oversight.
Like any other technology, AI can greatly improve the quality of work and of human life, and it can be used for good or ill. It is therefore extremely important to control this technology strictly, so that it does not spiral out of control and end up replacing humans./.