FBI Warns Cybercriminals Are Already Using AI for Almost Every Kind of Fraud
The US Federal Bureau of Investigation (FBI) has warned that cybercriminals are increasingly using artificial intelligence (AI) to create fake text, images, audio and video, increasing the scale and sophistication of their scams.
Fraudsters are leveraging advances in AI to create large volumes of highly convincing fake content and automate their scams to scale their operations. The FBI has outlined dozens of ways cybercriminals are misusing AI to further their fraudulent schemes.
“Generative AI, in particular, reduces the time and effort required for criminals to execute scams,” the FBI noted in a public notice. “These tools not only assist in content creation, but can also correct for human errors that can be a red flag for fraudulent activity.”
Using AI-generated content is not always illegal. However, when it is used for nefarious purposes such as fraud or extortion, it becomes extremely difficult to trace the perpetrator or even to detect that the content was generated by AI.
The FBI believes that providing concrete examples of how AI is being used in cybercrime will help expose scams and raise public awareness of the threat.
Cybercriminals Use AI-Generated Text to Fool Readers
AI-generated text is often highly convincing and easily fools readers, and criminals are exploiting it for social engineering, email phishing, romance scams, investment scams, and many other schemes. This content can be abused in situations such as:
- Creating multiple fake social media profiles with fabricated information, images, and identities to gain victims' trust.
- Using rapid-response tactics to reach and engage a wide range of targets online more efficiently.
- Filling scam websites with polished, professional-looking content that lures users into opaque investment schemes.
- Deploying AI-powered chatbots on scam websites that are programmed to interact with victims in a natural, convincing way.
Cybercriminals use AI-generated fake images to commit fraud
Cybercriminals are rapidly leveraging AI to create believable fake images for use as profile pictures, identification cards, and fake documents. The FBI warns that these images are being used for a variety of scams, including:
- Fake social media profiles are a tool in social engineering attacks, used to carry out email scams, romance scams, trust scams, and lure victims into fake investments.
- Forged documents, such as driver's licenses or identification cards, are sophisticated imitations of official documents, allowing criminals to impersonate and steal identities to carry out complex scams.
- Using sophisticatedly edited images in private chats to lend credibility to a fake identity and build the victim's trust.
- Exploiting the image of celebrities or influencers to promote fake products or fraudulent programs, exploiting the reputation of public figures to increase persuasiveness and attract victims.
- Using images of natural disasters and tragic accidents to evoke compassion, call for donations to fake charities, and trick victims into transferring money.
- Creating sensitive images of victims to blackmail, threaten their reputation and force them to pay money or meet other demands.
Cybercriminals clone voices using AI to fool victims
Cybercriminals have impersonated celebrities, public figures, or even people in their victims’ personal relationships to scam them out of money. The FBI has documented several cases where AI-generated audio has been used as a sophisticated tool, including:
- Using AI to clone the voices of loved ones and simulate emergency situations, exploiting compassion to trick victims into sending money or paying a ransom.
- Exploiting AI voice-generation technology to bypass voice authentication, gain access to bank accounts, and conduct unauthorized transactions, causing serious damage to victims.
Cybercriminals use AI-generated videos in scam video calls
The FBI warns that cybercriminals are increasingly leveraging AI technology to create fake videos that appear authentic and believable, including content like:
- Using AI-generated fake videos to conduct live video chats, impersonating company leaders, law enforcement officers, or other authorized individuals to scam and gain the trust of victims.
- Using fake videos to conduct private conversations, in order to “prove” that the person being contacted online is a “real” and trustworthy individual, thereby manipulating the victim more effectively.
- Using fake videos created to promote fraudulent investment programs, often containing false or fictional content to attract and deceive investors. These videos are designed to create a sense of professionalism and trust, thereby convincing victims to invest money in projects that do not exist.
The FBI's tips for protecting yourself from cybercrime
Identifying AI-generated scams is becoming increasingly difficult, as they are designed to fool even experienced observers. In light of this, the FBI offers some tips to help you protect yourself from potential threats.
- Establish a secret word or phrase within your family, to use as a method of identity verification in emergency situations.
- AI-generated images or videos often contain flaws that can give away a fake. These include misshapen hands or feet, unusual-looking teeth or eyes, asymmetrical or blurry faces, and accessories like glasses or jewelry that look unrealistic. Other anomalies may also appear, such as inaccurate shadows, unnatural watermarks, lag during real-time interaction, out-of-sync audio, and movement that lacks smoothness or fidelity.
- Pay close attention to tone of voice and word choice, as these can be important clues to distinguish between a real call from a loved one and a fake voice generated by AI.
- Be cautious about posting content online that includes your image or voice. Set your social media accounts to private and limit your followers to people you actually know. These measures reduce the risk of your personal information being used for fraud.
- Never share sensitive information with people you have only met online or over the phone. Also, never transfer money, gift cards, cryptocurrency, or any assets to people you don't know well or have only met online or over the phone.
An important rule is to never trust the identity of someone who calls you. Instead, look up the official contact information of the bank or relevant institution from a reliable source and call the number listed on its official website directly.