Hackers Are Using ChatGPT to Create Sophisticated Malware
The U.S.-based artificial intelligence research company OpenAI recently confirmed that hackers are exploiting its ChatGPT model to create malware and carry out cyberattacks.
The company's latest report exposes the alarming reality of ChatGPT being exploited for cyberattacks: more than 20 incidents have been recorded since the beginning of 2024 in which hackers used ChatGPT to conduct phishing attacks, steal data, and disrupt systems, causing serious damage to many organizations and individuals.
The report also reveals that state-linked hacking groups, particularly from China and Iran, are taking full advantage of ChatGPT's capabilities to enhance their cyber operations. From refining complex malware to running malicious social media campaigns, ChatGPT has become a useful tool in large-scale attack operations.

One such actor, the Chinese hacking group 'SweetSpecter', has been found abusing ChatGPT to collect sensitive information, research security vulnerabilities in target systems, and develop new types of malware. The group even targeted OpenAI itself, sending spear-phishing emails to the company's employees in an attempt to gain access and steal data.
Another serious threat comes from the Iranian hacking group 'CyberAv3ngers', which is closely linked to the Islamic Revolutionary Guard Corps. The group has used ChatGPT to research weaknesses in the industrial control systems of factories and critical infrastructure, with the goal of writing malicious code to attack and paralyze those systems, with potentially serious consequences.
A third group, the Iranian 'STORM-0817', has been discovered using the AI to create malware for Android phones. This malware is capable of stealing users' personal information, such as phone numbers, call history, and location data.
While hacking groups have experimented with using ChatGPT to create malware, OpenAI says this has not produced any breakthrough in offensive capability. ChatGPT is only a supporting tool; building effective cyberattacks still requires human expertise and experience.
The company asserts that what it observed when hackers used GPT-4 remained within the range of capabilities it had previously predicted. In other words, GPT-4 did not enable attackers to create more sophisticated or effective cyberattacks than existing tools.
The report did find, however, that the rise of generative AI is lowering the barrier to entry: even people with little technical knowledge can now use AI to build attack tools, raising the overall risk of cyberattacks.
In response to these threats, OpenAI has implemented measures to prevent malicious activity, including banning accounts associated with the identified operations. The company is also collaborating with industry partners and stakeholders to share threat intelligence and improve collective cybersecurity defenses.
Security experts are concerned that the situation will only worsen as AI technology grows more capable. They argue that companies developing AI need strong safeguards and robust detection systems to prevent bad actors from abusing the technology for malicious purposes.
The revelations from OpenAI serve as a wake-up call for the tech industry and policymakers to address the potential risks associated with advanced AI systems.
AI is becoming increasingly prevalent in our lives. To maximize its benefits while ensuring safety, we need to balance the development of new technology with the protection of personal information, making it harder for bad actors to exploit AI for malicious purposes.
OpenAI has pledged to do everything it can to prevent bad actors from abusing its AI tools. The company will continue to detect and disrupt abusive behavior, and share its findings with other researchers to build stronger security systems together. OpenAI also plans to strengthen its defenses against professional hackers, especially state-sponsored groups.
AI will play an ever more important role in the years ahead. To manage the risks that come with it, AI developers, cybersecurity experts, and governments must work closely together to develop AI safely and securely, and to keep it out of the hands of those who would use it for harm.