Cybercriminals are using artificial intelligence to commit fraud and create malware

Phan Van Hoa (According to Tom's Hardware, PCMag)

(Baonghean.vn) - A new warning issued by the US Federal Bureau of Investigation (FBI) shows that cybercriminals are using artificial intelligence (AI) programs for fraud schemes and to help them create malware.

The FBI recently issued a warning about the rise of cyberattacks powered by AI programs. According to the agency, cybercriminals are exploiting open-source AI programs for all sorts of malicious activity, including developing malware and mounting phishing attacks. The number of people turning to AI technology for phishing campaigns or malware development is growing at an alarming rate, and so is the damage this activity causes.

Illustration photo.

The FBI also believes that even AI models with built-in safeguards, such as ChatGPT, have been exploited to develop malware capable of bypassing the latest security systems.

Some IT experts have recently discovered the dangerous potential of ChatGPT and its ability to create malware that is nearly undetectable by Endpoint Detection & Response (EDR) systems.

EDR is a set of cybersecurity tools designed to detect and respond to malware and other suspicious activity on endpoints. However, experts say these traditional defenses may be no match for the malware that ChatGPT can help create.

Malicious activity ranges from using AI programs to refine and launch scams, to terrorists consulting the technology for help in devising more potent chemical attacks.

Most large language models (LLMs), including ChatGPT, are designed with built-in filters to avoid generating content their creators deem inappropriate, from specific topics to, in this case, malicious code. However, users quickly find ways to bypass these filters, and it is precisely this tactic that leaves ChatGPT vulnerable to individuals intent on creating malicious scripts.

“We expect that over time as the adoption and democratization of AI models continue, these trends will increase,” said a senior FBI official.

While the FBI did not disclose the specific AI models that criminals are using, the agency warned that hackers are looking for free, customizable open-source models, along with AI programs that hackers develop themselves.

FBI officials added that seasoned cybercriminals are exploiting AI technology to develop new malware attacks, including using AI-generated websites as phishing pages to secretly distribute malicious computer code. The same technology is helping hackers develop new malware that can evade antivirus software.

The FBI has previously warned that scammers are using AI image generators to create realistic, sexually themed fake images of victims in order to extort money. The majority of the cases the agency is investigating involve criminals using AI models to augment their traditional schemes. These include attempts to trick victims' relatives or the elderly with fraudulent phone calls that use AI voice-cloning technology.

In response, the world's leading technology companies, including Amazon, Google, Meta, and OpenAI, have signed a commitment to put safeguards around their AI tools, to have the AI programs they create independently audited before public release, and to develop ways to mark AI-generated content so users can guard against misinformation and fraud.

Tech companies have also vowed to invest in cybersecurity to protect their proprietary AI code from being stolen or leaked to the public. The commitment rests on three foundational principles for the future of AI: safety, security, and trust.
