Countries and international organizations push for regulation of artificial intelligence tools

Phan Van Hoa (According to Reuters)

(Baonghean.vn) - Rapid advances in artificial intelligence (AI) such as ChatGPT by Microsoft-backed OpenAI are complicating governments' efforts to unify regulations to govern the use of this technology.

Here are the latest steps national and international regulators are taking to regulate AI tools:

United Kingdom

On September 18, the UK Competition and Markets Authority (CMA) published seven principles designed to hold developers accountable, prevent Big Tech from locking up the technology to dominate the market, and stop anti-competitive conduct.

The principles, which come six weeks before the UK government hosts a global AI safety summit, will underpin the country's approach to AI as it takes on new powers in the coming months to oversee digital markets.

In May, the CMA said it would begin examining the impact of AI on consumers, businesses and the economy, and whether new controls were needed.

China

China has issued a series of temporary measures to regulate artificial intelligence, which took effect on August 15. Accordingly, China requires service providers to submit to security assessments and obtain licenses before releasing AI products on the mass market.

After gaining approval from the Chinese government, major Chinese tech companies including Baidu Inc and SenseTime Group launched AI chatbots on August 31.

France

France's privacy watchdog CNIL is investigating several complaints about ChatGPT after the app was temporarily banned in Italy earlier this year over suspected privacy violations. Meanwhile, in March, the French parliament approved the use of AI-powered video surveillance technology at the 2024 Paris Olympics despite warnings from human rights groups.

Japan

The Japanese government is expected to introduce AI regulations by the end of 2023. According to sources familiar with the matter, Japan's stance on the technology is likely to be closer to the U.S. approach than to the strict regulations planned in the European Union (EU), as Japan looks to the technology to spur economic growth and to make the country a leader in advanced chip manufacturing.

Japan's privacy watchdog also warned OpenAI in June not to collect sensitive data without users' permission and to minimize the amount of sensitive data the company collects.

Ireland

AI needs to be regulated, but regulators must work out how to do it properly before rushing into bans, Ireland's data protection chief said.

Israel

In June, the Israel Innovation Authority said the Israeli government had been studying AI regulations to strike the right balance between innovation and the protection of human rights.

Israel published a 115-page draft AI policy in October 2022 and is considering public feedback before making a final decision.

Spain

Spain's data protection authority said it is conducting a preliminary investigation into potential data breaches by ChatGPT. It has also asked the EU's privacy watchdog to assess privacy concerns around ChatGPT.

Italy

Italy's data protection agency Garante said it plans to conduct a large-scale review of active generative AI and machine learning applications to determine whether these new tools comply with privacy and data protection laws.

The agency is also looking for technology experts to support the data security field while AI tools are developing rapidly.

Earlier, in March, Garante temporarily banned OpenAI's ChatGPT and opened an investigation into the application over suspected violations of privacy rules.

Australia

The Australian government has said AI needs to be governed by tough laws. It also plans to regulate the technology, including a potential ban on deepfakes and other realistic-looking fake content, amid concerns that it could be misused by bad actors.

At the same time, Australia will also require search engines to draft new codes to prevent the sharing of AI-generated child sexual abuse material and the production of deepfakes.

United States

The U.S. Congress held three hearings on AI on September 11, 12 and 13 to explore legislation to mitigate the dangers of the emerging technology. They included testimony from Microsoft President Brad Smith and Nvidia Chief Scientist William Dally, an AI forum featuring Meta Platforms CEO Mark Zuckerberg and Tesla CEO Elon Musk, and meetings with various House and Senate subcommittees.

The US Federal Trade Commission (FTC) has also opened a wide-ranging investigation into OpenAI over allegations that the company violated consumer protection laws by putting personal reputations and data at risk.

Senator Michael Bennet also called on leading tech companies to label AI-generated content and limit the spread of material that could mislead users. He introduced a bill in April to create a task force to review US AI policies.

European Union

European Commission President Ursula von der Leyen on September 13 called for the creation of a global panel to assess the risks and benefits of AI, similar to the Intergovernmental Panel on Climate Change (IPCC), which informs policymakers on climate science.

EU lawmakers agreed on changes to the bloc’s draft AI Act in June. Lawmakers will now have to discuss the details with EU countries before the draft regulation becomes law.

The biggest issue is expected to be facial recognition and biometric surveillance, where some lawmakers want an outright ban while EU countries want an exception for national security, defense and military purposes.

G7

G7 leaders meeting in Hiroshima, Japan, this past May acknowledged the need for governance of AI and immersive technologies, and agreed to ask ministers to discuss the technology under the name “Hiroshima AI process” and report on the results by the end of 2023.

G7 digital ministers said after the meeting that G7 countries should adopt risk-based regulation on AI.

United Nations

The United Nations Security Council also held its first formal discussion on AI in New York City in July. UN Secretary-General Antonio Guterres said the council addressed AI applications in both military and non-military domains, stressing that they “could have very serious implications for global peace and security.”

Mr. Guterres also backed a proposal by some AI industry executives to establish an AI watchdog operating similarly to the International Atomic Energy Agency (IAEA). But he noted that "only member states can create it, not the UN Secretariat."

The UN secretary-general has also announced plans to begin work later this year with a high-level AI advisory body to review AI governance arrangements.
