Deepfake scams will increase in 2024

Phan Van Hoa (According to CNBC)

(Baonghean.vn) - According to identity-verification company Sumsub (UK), the number of deepfakes worldwide increased tenfold between 2022 and 2023, with the Asia-Pacific (APAC) region recording a 1,530% rise.

What is Deepfake?

Deepfake is a portmanteau of “deep learning” and “fake”: a technology that simulates a person’s face and fabricates audio, images, or even entire videos. Deepfake creators can manipulate media so that it takes on the appearance of a real person.


In essence, deepfake technology is built on open-source machine-learning software, such as Google’s. Deepfake tools scan a person’s videos and portrait photos, then use artificial intelligence (AI) to merge them into a separate video, replacing facial details such as the eyes, mouth, and nose while mimicking the person’s real facial movements and voice.

The more source images are available, the more data the AI has to learn from, and the more realistic the resulting fake images or videos become. Deepfakes can superimpose one person’s face onto another’s in a video with uncanny realism.
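The face-swap principle described above is commonly implemented with a shared encoder and two person-specific decoders. The following Python sketch is purely illustrative: the weight matrices are random stand-ins for trained networks, and no real model or library API is involved — it only shows the structural trick of decoding one person’s pose with another person’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for trained networks: a shared encoder compresses any face
# image into a small latent vector; each decoder reconstructs one
# specific person's face from that latent vector.
IMG_DIM, LATENT_DIM = 64 * 64, 32
W_enc = rng.normal(size=(LATENT_DIM, IMG_DIM))    # shared encoder weights
W_dec_a = rng.normal(size=(IMG_DIM, LATENT_DIM))  # decoder "trained" on person A
W_dec_b = rng.normal(size=(IMG_DIM, LATENT_DIM))  # decoder "trained" on person B

def encode(face):
    # Face image -> latent code capturing pose and expression.
    return W_enc @ face

def decode(latent, W_dec):
    # Latent code -> reconstructed face image for one specific person.
    return W_dec @ latent

# The swap: encode a frame of person A, but decode it with person B's
# decoder, so B's appearance is rendered with A's pose and expression.
frame_of_a = rng.normal(size=IMG_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)  # (4096,) -- one synthetic frame per input frame
```

Repeating this frame by frame over a whole video is what yields a moving deepfake; the more training images of each person, the better each decoder learns that person’s appearance.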

Deepfakes will increase in the 2024 election year

The use of deepfakes to impersonate politicians is becoming increasingly common, especially as 2024 is the biggest global election year in history.

According to reports, at least 60 countries and more than 4 billion people around the world will vote for their leaders and representatives this year, making the issue of deepfakes a serious concern.

According to the Sumsub report, online media, including social media platforms and digital advertising, saw the highest increase in identity fraud between 2021 and 2023, at 274%. Industries such as healthcare, transportation, and gaming are also among those affected by identity fraud.

In addition, in its 2024 global threat report, the US cybersecurity company CrowdStrike warned that, with so many elections taking place this year, disinformation and misinformation are likely to be deployed to sow instability.

Ahead of Indonesia’s election on February 14, a video circulated on social media showing the late President Suharto endorsing the political party he once chaired. The video was identified as an AI-generated deepfake replicating his face and voice, and it attracted 4.7 million views on the social network X (formerly Twitter) alone.

This isn’t the first time deepfakes have surfaced. In Pakistan, a deepfake of former Prime Minister Imran Khan circulated during the election, suggesting his party would boycott the vote. Meanwhile, in the US, voters in New Hampshire received a fake robocall impersonating President Joe Biden, urging them not to vote in the primary.

Experts say most deepfakes are created by actors within a country. Carol Soon, a researcher and head of the society and culture department at the Singapore Institute of Policy Studies, said domestic actors could include political opponents and rivals, as well as far-right and far-left actors.

Meanwhile, Simon Chesterman, Senior Director of AI Governance at Singapore’s national AI programme, said that Asia is not ready to tackle election deepfakes on three fronts: regulation, technology, and education.

What are the dangers of deepfakes?

Commenting on the dangers posed by deepfakes, Carol Soon said that at a minimum, deepfakes pollute the information ecosystem and make it difficult for people to find accurate information and form informed opinions about a party or candidate.

Voters may also turn away from a particular candidate if they see content about a scandalous issue spread quickly on social media before it is debunked, Simon Chesterman said. While some governments have tools to combat online misinformation, the concern is that the truth will be distorted before it can be corrected.

“We’ve seen how quickly X (formerly Twitter) can be taken over by deepfake porn involving Taylor Swift. These things can spread incredibly quickly,” Chesterman said, adding that the measures are often inadequate and extremely difficult to enforce. “Often it’s too late.”

Adam Meyers of cybersecurity firm CrowdStrike says deepfakes can also exploit confirmation bias, a quirk of how people process information. “Even if they know deep down that it’s not true, if it’s the message they want and something they want to believe, they’ll still accept it,” he explains.

Adding to the concerns, Chesterman said that a fake video depicting election misconduct, such as ballot stuffing, could cause voters to lose confidence in the validity of an election.

According to Soon, candidates may also deny true reports about themselves by claiming that any negative information is fake.

Top tech companies pledge efforts to fight deepfakes

In February, 20 leading tech companies, including Microsoft, Meta, Google, Amazon, and IBM, along with artificial intelligence company OpenAI and social media companies such as Snap, TikTok, and X, signed a joint commitment to combat the deceptive use of artificial intelligence in this year’s elections.

The joint commitment is an important first step, but its effectiveness will depend on implementation and enforcement, said Soon: tech companies will need a multi-pronged approach, adopting different measures across their platforms, and must be transparent about the decisions they make and the processes they follow.

But Simon Chesterman cautioned that it is too much to expect private companies to perform what are essentially public functions. Deciding what content to allow on social media is a hard call, and some companies can take months to reach a decision.

To that end, the Coalition for Content Provenance and Authenticity (C2PA) has introduced an open standard for certifying digital content. It lets viewers see verified information such as who created the content, where and when it originated, and whether it was generated by AI.

C2PA member companies include Adobe, Microsoft, Google, and Intel. OpenAI announced earlier this year that it would deploy C2PA content credentials for images generated with its DALL·E 3 model.
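The provenance idea behind C2PA can be illustrated in outline: a manifest records the content’s origin and is cryptographically bound to the file, so any later tampering breaks verification. The Python sketch below is a simplification, using an HMAC as a stand-in for C2PA’s real certificate-based signatures; the field names are illustrative and not the actual C2PA schema.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate

def attach_manifest(content: bytes, creator: str, generated_by_ai: bool) -> dict:
    """Bind provenance metadata to the content via a hash and a signature."""
    manifest = {
        "creator": creator,
        "generated_by_ai": generated_by_ai,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Re-compute the hash and signature; any edit to the content fails."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claims["content_sha256"]:
        return False
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...image bytes..."
m = attach_manifest(image, creator="example-generator", generated_by_ai=True)
print(verify_manifest(image, m))                 # True: untouched content
print(verify_manifest(image + b"tampered", m))   # False: content was altered
```

The key property is the binding: the manifest alone proves nothing, but a valid signature over a hash of the exact bytes means the provenance claims travel with the content and fail loudly if either is modified.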

In an interview with Bloomberg House at the World Economic Forum in January, OpenAI founder and CEO Sam Altman said the company is focused on ensuring its technology is not used to manipulate elections.

Adam Meyers, of cybersecurity firm CrowdStrike, has proposed creating a non-partisan, non-profit engineering organization that would analyze and identify deepfakes. “The public could send them content that they suspect has been tampered with. It’s not an easy thing to communicate, but at least there’s some mechanism that people can trust,” Meyers said.

Ultimately, though, while technology is part of the solution, much of the burden falls on users. The public needs to be more vigilant: beyond fact-checking anything that looks suspicious, users should also verify important information before sharing it with others.
