The following breakthroughs in artificial intelligence technology could threaten the future of humanity.

Phan Van Hoa (According to Livescience)

(Baonghean.vn) - Artificial intelligence (AI) is developing at a dizzying speed, and 2024 is predicted to be an explosive year with many new breakthroughs. Among them, three are considered the most frightening because of the harm they could bring.

While 2023 saw some groundbreaking advances in AI, it was just the beginning of something bigger. 2024 is set to usher in terrifying breakthroughs that could include artificial general intelligence (AGI), the proliferation of AI-powered killer robots, and deepfakes so realistic they are nearly indistinguishable from reality.

AI has been around for decades, but 2023 was the year this unsettling technology truly hit its stride. OpenAI's ChatGPT made AI accessible and practical for the masses. But AI has had a rocky history, and today's technology is built on the foundations of past failed experiments.

Illustration photo.

Most of the innovations in AI are aimed at improving areas like medical diagnosis and scientific discovery. For example, an AI model could analyze an X-ray to determine whether you are at high risk of lung cancer.

During the COVID-19 pandemic, scientists also built an algorithm that could diagnose the virus by listening for subtle differences in a patient's cough. AI is also being used to design quantum physics experiments beyond human imagination.

However, not all technological breakthroughs are for the benefit of humanity. Here are the three scariest AI breakthroughs that could happen in 2024:

Artificial intelligence could have terrifying consequences

Artificial General Intelligence (AGI) is a form of AI whose intelligence matches or exceeds that of humans: it can think, learn on its own, solve complex problems, and adapt to new situations.

If AGI is successfully developed, it could lead to a revolution in many areas, from economics to the military. However, AGI could also have dire consequences, such as causing mass unemployment, robot wars, or even the extinction of humanity.

Meanwhile, Q* is the name of a new AI model reportedly developed by the US company OpenAI, which is said to have the potential to achieve AGI.

Information about the Q* model is still quite limited, but according to sources, it can solve problems and reason about them far better than current AI models. In addition, Q* is said to be trained on a huge dataset spanning text, images, and code, allowing the model to learn and adapt to new situations quickly.

We still don't know for sure why OpenAI CEO Sam Altman was fired and then reinstated in late 2023. However, some inside information suggests that it has to do with advanced technology that could threaten the future of humanity.

According to Reuters, OpenAI's Q* system could represent a groundbreaking breakthrough in the field of AGI. Although information about the mysterious system is still limited, if the rumors are true, Q* could take AI capabilities to a whole new level.

AGI represents a hypothetical tipping point, sometimes called the "Singularity," at which AI becomes smarter than humans. Current generations of AI still lag in areas where humans excel, such as contextual reasoning and genuine creativity; most AI-generated content simply regurgitates its training data.

But scientists say AGI could potentially outperform humans at certain tasks. It could also be weaponized and used to create more dangerous pathogens, launch large-scale cyberattacks, or conduct mass manipulation.

The quest for AGI has long been the stuff of science fiction, and many scientists believe we'll never get there. OpenAI reaching this milestone would certainly be a shock, but it's not impossible.

We know that Sam Altman laid the groundwork for AGI in a February 2023 blog post outlining OpenAI's approach. In addition, experts have begun predicting an imminent breakthrough, including Jensen Huang, CEO of US chipmaker Nvidia, who said in November 2023 that AGI could be achieved within the next five years.

Will 2024 be the breakthrough year for AGI? Only time will tell.

AI-powered killer robots are on the rise

Governments around the world are increasingly using AI as a tool of war. On November 22, the US government announced that 47 countries had endorsed a declaration on the responsible use of AI in the military. Why such a declaration? Because "irresponsible" use is a disturbing and frightening prospect. For example, AI drones have allegedly hunted down soldiers in Libya without human intervention.

AI is playing an increasingly important role in the military: it can recognize patterns, learn on its own, and make predictions or recommendations in a military context, and the AI arms race has already begun. By 2024, AI may be used not only in weapon systems but also in logistics, decision support, and research and development.

For example, in 2022, an AI model generated 40,000 hypothetical chemical weapon candidates. Various branches of the US military have ordered drones that can identify targets and monitor the battlefield better than humans can. According to NPR, in the most recent war between Israel and Hamas, Israel used AI to identify targets at least 50 times faster than humans could.

But one of the most worrying developments is Lethal Autonomous Weapon Systems (LAWS) – or killer robots. Many leading scientists and technologists have warned about the dangers of killer robots, including theoretical physicist Stephen Hawking in 2015 and billionaire Elon Musk in 2017, but the technology has yet to become widespread.

While killer robots aren't widespread yet, several worrying developments suggest that 2024 could be their breakthrough year.

During the war in Ukraine, Russia was accused of deploying Zala KYB-UAV drones that are capable of automatically identifying and attacking targets, according to a report by The Bulletin of the Atomic Scientists.

Meanwhile, according to the Australian Financial Review, Australia is also developing Ghost Shark, an autonomous submarine system, and preparing for mass production.

The level of AI spending by countries around the world is also a sign. According to Reuters, China's AI spending increased from $11.6 million in 2010 to $141 million in 2019. Part of the reason is that China is competing with the US in implementing LAWS.

These examples show that AI is bringing significant changes to warfare. The race to develop military AI is increasingly fierce, requiring many countries to consider the responsible use of AI to avoid unintended consequences.

Using AI to create incredibly realistic deepfake videos to manipulate voters in elections

Deepfake technology allows the face, voice, and gestures of one person in a video to be swapped for someone else's, making the result look and sound indistinguishable from reality.

In an election context, these hyper-realistic deepfakes could be used in a number of ways to manipulate voters, including:

Discrediting a candidate: creating deepfake videos of candidates making crude, racist, or offensive statements to discredit them in the eyes of voters.

Boosting another candidate: creating deepfake videos of celebrities or influential people endorsing a particular candidate, even if they never actually did so.

Creating false rumors: creating deepfake videos of candidates engaging in illegal or improper activity to sow doubt and ruin their reputation.

Undermining confidence in elections: creating deepfake videos of election fraud to erode public confidence in the accuracy of election results.

The danger of using deepfake videos to manipulate voters lies in their sheer realism. These videos can trick viewers into believing they are witnessing a real event, even when it is completely staged. This poses serious problems for democracy, as it can influence voter turnout and the outcome of elections.

Detecting and preventing the use of deepfakes in elections is a major challenge. Deepfake detection technology is developing, but it is far from perfect. Countries are starting to enact laws to combat the misuse of deepfakes, but it is unclear how effective these laws will be.

Hyper-realistic deepfakes would make it nearly impossible to distinguish fact from fiction with the naked eye. While some tools can help detect deepfakes, they are not yet widespread. Intel, for example, has developed a real-time deepfake detection tool called FakeCatcher that uses AI to analyze blood flow in a subject's face. However, according to the BBC, the tool has so far produced inconsistent results.

The most important thing is to raise public awareness about the existence of deepfakes. People need to know that the videos they see or hear on social media may not be real, and they need to be cautious about information that seems suspicious. By educating the public, we can hope to reduce the impact of deepfakes on future elections.
