Meta Will Reuse Facial Recognition Technology to Fight Fraud and Account Hijacking
Meta recently decided to reintroduce facial recognition technology to fight increasingly sophisticated scammers and protect the accounts of billions of users on social networks such as Facebook and Instagram.
The company's primary goal is to shield users from these scams: with facial recognition, Facebook and Instagram can more effectively detect and block fake ads that exploit celebrity images.
The feature is also meant to help users recover their accounts more quickly and securely if they are hijacked. Meta wants to make the most of facial recognition technology while complying strictly with data protection regulations to safeguard user privacy.
Meta's ad review systems already work to screen out and remove fraudulent ads before they reach users. However, distinguishing legitimate celebrity endorsements from fake ads remains a major challenge.

"Many legitimate ads rely on celebrities to gain traction, which makes it difficult for our algorithms to tell them apart from scams," said Monika Bickert, Meta's vice president of content policy.
To address this, Meta plans to use facial recognition to compare the faces that appear in celebrity-related ads with those public figures' official profile photos on Facebook and Instagram.
“Initial testing with a small group of celebrities and public figures has shown that the tool is very effective at detecting and removing fake ads,” Bickert said. “We have seen a significant improvement in the speed and effectiveness of stopping this type of fraud.”
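As a rough illustration of the matching step (Meta's internal models are not public), the sketch below uses the open-source face_recognition package as a stand-in: it checks whether any face detected in an ad image sits close to a public figure's verified profile photo in embedding space. The distance threshold is an assumption.

```python
# Illustrative sketch only: flag an ad if any face it contains is close to a
# public figure's verified profile photo. face_recognition is a stand-in for
# Meta's internal models; the tolerance value is an assumption.
import face_recognition

MATCH_TOLERANCE = 0.6  # assumed threshold; lower values are stricter

def ad_uses_celebrity_face(ad_image_path: str, profile_photo_path: str) -> bool:
    """Return True if any face in the ad plausibly matches the reference photo."""
    profile_image = face_recognition.load_image_file(profile_photo_path)
    profile_encodings = face_recognition.face_encodings(profile_image)
    if not profile_encodings:
        return False  # no usable reference face found

    ad_image = face_recognition.load_image_file(ad_image_path)
    ad_encodings = face_recognition.face_encodings(ad_image)

    # Flag the ad for further review if any detected face is near the reference.
    return any(
        face_recognition.face_distance(profile_encodings, encoding).min() <= MATCH_TOLERANCE
        for encoding in ad_encodings
    )
```

Since legitimate endorsements also feature the celebrity's face, a match like this would only be one signal that routes the ad to further scam checks rather than grounds for automatic removal.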
Beyond combating fraud, Meta also plans to use facial recognition to help users recover their accounts in emergency situations. If an account is hacked and its security information changed, or if a user loses a device and can no longer access their two-factor authentication codes, the system can verify the user's identity by face and help them regain control of the account.
In the event of an account hijacking, Meta offers a number of methods for users to restore access. One common method is to ask users to upload a copy of their ID or passport.
However, the company is testing a new, more convenient verification method that asks users to record a short selfie video. The video is compared with the user's previously verified profile photo, and if the system confirms a match, the user immediately regains control of the account.
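In very rough terms, the selfie check described above can be pictured as sampling frames from the uploaded video and comparing each detected face with the previously verified profile photo. The sketch below uses OpenCV and the face_recognition package as stand-ins; the frame budget, distance tolerance, and majority rule are all assumptions, not Meta's actual pipeline.

```python
# Rough sketch of a selfie-video check: sample faces from the video and
# require most of them to match the verified profile photo. Libraries and
# thresholds here are stand-ins/assumptions, not Meta's actual system.
import cv2
import face_recognition

def selfie_video_matches(video_path: str, profile_photo_path: str,
                         max_faces: int = 10, tolerance: float = 0.5) -> bool:
    profile_encodings = face_recognition.face_encodings(
        face_recognition.load_image_file(profile_photo_path))
    if not profile_encodings:
        return False  # no usable reference face

    capture = cv2.VideoCapture(video_path)
    matched = checked = 0
    while checked < max_faces:
        ok, frame = capture.read()
        if not ok:
            break  # end of video
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # face_recognition expects RGB
        encodings = face_recognition.face_encodings(rgb_frame)
        if not encodings:
            continue  # skip frames with no detectable face
        checked += 1
        if face_recognition.face_distance(profile_encodings, encodings[0]).min() <= tolerance:
            matched += 1
    capture.release()

    # Require a clear majority of the sampled faces to match the reference photo.
    return checked > 0 and matched / checked >= 0.8
```

Frame-by-frame matching alone says nothing about whether the video shows a live person, which is exactly where the deepfake concern discussed next comes in.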
Still, there are concerns that bad actors could exploit deepfake technology to fool identity verification systems. A Meta spokesperson shared: "We are closely monitoring to see if there are any vulnerabilities in our system that deepfake creators could exploit. While the initial results of the test are promising, we still need more time to evaluate the true effectiveness of this method against sophisticated fake videos."
Meta’s decision to use facial recognition technology to authenticate users’ identities has come as a surprise. Just a few years ago, the company decided to stop using automated facial recognition systems in photos and videos, citing privacy concerns.
At the time, Meta used the technology to suggest tagging friends in photos. However, the collection of users' biometric information without explicit consent landed the company in lawsuits in the US states of Illinois and Texas, resulting in large settlements.
While facial recognition technology has been controversial in the past, Meta believes it can now be used safely and beneficially. For this round of testing, however, the company is initially excluding jurisdictions with strict biometric and data protection rules, including the EU, the UK, and the US states of Illinois and Texas.
At the same time, Meta pledged to use the technology solely for securing user accounts. To protect privacy, the company will immediately delete any facial data generated during the verification process, regardless of whether a match is found, Bickert affirmed.
Before implementing facial recognition technology, Meta says it conducted a thorough assessment of the potential risks and privacy implications, a process that included consultations with legal, technical, and policy experts.
To ensure transparency, Meta will run a public test of the system in December, focused on stopping fake celebrity ads. Celebrities will be able to decide whether or not to participate. The selfie-video identity verification feature will also be rolled out to more users over time.