Countries and regions around the world are promoting the development of regulations on artificial intelligence.

Phan Van Hoa June 23, 2023 10:08

(Baonghean.vn) - The rapid development of artificial intelligence (AI) in general, and of generative AI such as ChatGPT in particular, brings many great benefits. At the same time, this emerging technology carries potential risks, prompting countries and regions to accelerate the development of regulations to control it.


What is generative artificial intelligence?

Generative artificial intelligence (generative AI) is a branch of artificial intelligence describing a class of algorithms capable of generating new content, including text, images, video, and audio. Built on advances in deep learning, generative AI produces results similar to those created by humans. ChatGPT is the latest development in the world of generative AI.

Generative AI allows computers to create all kinds of new and exciting content, from music and art to entire virtual worlds. Beyond entertainment, generative AI has a wide range of practical applications, such as creating new product designs and optimizing business processes.

Although still in its early stages, generative AI has exploded in use and application. In the very near future, this powerful, disruptive technology will be used by existing and new businesses to reduce costs, deliver new services better and faster, and create new manufacturing capabilities.

Generative AI offers positive prospects for the economy and society, but it also poses risks. Therefore, taking appropriate countermeasures will play an important role.

Discussions about regulating generative AI have been heating up globally in recent times, as data science leaders and technology stakeholders raise concerns about societal risks, including misinformation and job displacement.

On May 30, scientists and technology leaders around the world signed an open letter publicly warning of the dangers of artificial intelligence, including the risk of human extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter said.

Sam Altman, CEO of OpenAI, the company behind ChatGPT, and Geoffrey Hinton, the computer scientist known as the “godfather” of artificial intelligence, signed the letter along with hundreds of other prominent figures. It was posted on the website of the Center for AI Safety, a non-profit organization based in San Francisco (USA).

Concerns are growing about unregulated AI systems that could become more intelligent than humans, such as new-generation AI chatbots that could potentially replace humans in the future. The emergence of chatbots like ChatGPT has prompted countries around the world to push for regulations that mitigate the risks of this emerging technology.

Australia

The Australian Government recently announced the 2023 Federal Budget, which includes a $26.9 million investment in the Responsible AI Network to deploy a range of responsible AI technologies across the country. Australian regulators are pushing for rules aimed at allaying concerns shared by Australian Human Rights Commissioner Lorraine Finlay, who has said that AI chatbots such as OpenAI's ChatGPT (backed by Microsoft) and Google's Bard could be harmful to society.

Additionally, discussions are ongoing among regulators about possible changes to the Privacy Act to address the lack of transparency that can come with training AI models without human oversight. There are also discussions around the use of data and biometrics to train models, which may require additional privacy rules.

Victor Dominello, a member of the Australian Technology Council, has called for the establishment of an “oversight committee within the technology regulator” to monitor the development of generative AI and advise government agencies on the potential risks of this emerging technology.

Brazil

Brazil's AI-related legislative process was spurred by a legal framework approved by the government in September 2022, but that framework has drawn criticism for the vagueness of its regulations.

Following the launch of ChatGPT last November, discussions were held and a report was sent to the Brazilian Government detailing proposals for regulating AI. The study, conducted by legal experts, scientists, business leaders, and members of the national data protection authority (ANPD), addressed three main areas: citizens' rights, risk classification, and administrative measures and sanctions related to the management and use of AI.

This document is currently being discussed within the Brazilian Government and a specific publication date has not yet been given.

United States

The United States, a hotbed of AI innovators and developers including ChatGPT creator OpenAI, has stepped up regulation of AI tools. The head of the Federal Trade Commission (FTC) said on May 3 that the agency is committed to using existing laws to curb the dangers of artificial intelligence, such as the expansion of large companies' influence and fraudulent behavior.

Canada

Canada's Artificial Intelligence and Data Act (AIDA) is expected to come into force as early as 2025. In the meantime, legal regulations are being studied to manage the risks and pitfalls posed by AI and to encourage responsible adoption of the technology.

According to the Government of Canada, this risk-based approach is consistent with similar regulations in the United States and the European Union. The plan is to build on Canada's existing human rights and consumer protection laws, requiring “high-impact” AI systems to comply with human rights and information security requirements.

Canadian authorities will be responsible for ensuring that new regulations keep pace with technological developments and limit the use of these technologies for malicious purposes.

China

The Cyberspace Administration of China (CAC) is currently drafting regulations for AI-powered services offered to citizens across China, including chatbots. A proposal has been put forward that would require Chinese tech companies to register their generative AI models with the CAC before releasing their products to the public.

Under the draft law, evaluations of these new AI products must verify the “legality of the data source before training,” and developers must demonstrate that their products align with the “core values of socialism.” Products will be restricted from using personal data for training, and providers will be required to verify users' real identities. Additionally, AI models that share extremist, violent, or pornographic content, or messages calling for “overthrowing the government,” will be in violation of the regulations.

Under the draft, violators would face fines of between 10,000 yuan (about $1,454) and 100,000 yuan (about $14,545), along with service suspension and possible criminal investigation. Additionally, providers found to be sharing content deemed inappropriate would be required to update their systems within three months to ensure the offense is not repeated. The bill is scheduled to be finalized by the end of this year.

European Union

The European Union is seen as leading the way, with its AI bill expected to be approved later this year. The bill marks a landmark moment in the race among authorities to regulate AI, which has been developing at breakneck speed in recent years. Dubbed the European AI Act, it would also be the first comprehensive law governing AI systems in the West.

The EU AI Act is currently making its way through the European Parliament, with a plenary vote expected in June this year. AI regulation in the region has been in the works for several years, with the European Commission submitting its proposal in April 2021.

The EU AI bill classifies AI applications into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.

Unacceptable risk: Any system that poses a clear threat to the safety and rights of citizens, such as systems that score citizens based on their behavior and lifestyle, and voice assistants that pose a risk of harm.

High risk: The use of artificial intelligence in critical infrastructure such as transportation, as well as in education, law enforcement, recruitment, and healthcare robots.

Limited risk: Includes the use of chatbots; users must be clearly informed from the outset that they are interacting with an AI system.

Minimal or no risk: Systems such as AI-powered video games or spam filters that involve generative AI.

India

In March 2021, the Indian government announced that it would take a cautious approach to AI regulation in an effort to foster innovation across the country. India's Ministry of Electronics and IT has identified AI-related technologies as “critical and strategic,” but the agency said it would introduce policies and infrastructure measures to help combat bias, discrimination, and ethical concerns.

The Indian government has proposed voluntary frameworks for regulating AI. The 2018 National Strategy for Artificial Intelligence focused on five key areas of AI development: agriculture, education, healthcare, smart cities, and smart mobility. Then, in 2020, the draft National Artificial Intelligence Strategy detailed the ethical use of AI, calling for all systems to be transparent, accountable, and unbiased.

South Korea

South Korea’s AI bill is currently being drafted and is expected to be put to a vote in the National Assembly in the near future. It will specify which new AI models can be created without government approval and which must be approved by the government before being brought to market.

This new AI bill also focuses on establishing legal regulations related to the responsible development of AI systems.

Additionally, the country's Personal Information Protection Commission has announced plans to establish a task force dedicated to reviewing biometric data protection in response to the development of generative AI.

United Kingdom

In the UK, in addition to publishing a White Paper that includes specific guidance on implementing AI-related regulations, the Government announced on May 9 that it will begin assessing the impact of artificial intelligence on consumers, businesses, and the economy, and will consider whether new measures are needed to control technologies such as OpenAI's ChatGPT.

Currently, regulation of generative AI in the UK is left to the regulators of the industries in which AI is used, with no overarching law planned beyond the General Data Protection Regulation (GDPR) as retained in UK law. The government has taken a “pro-innovation approach” in its official statements on the subject, aiming to position the UK as a leader in the global AI race.

Meanwhile, the UK Competition and Markets Authority (CMA) has launched a review of AI foundation models, examining the development of tools including ChatGPT from a competition and consumer protection perspective. The agency said the results of the review will be published in early September 2023.

Japan

According to Japanese officials, generative AI brings positive prospects for the economy and society but also poses risks, so taking countermeasures that address both aspects will play an important role.

Japanese Prime Minister Kishida Fumio said he will direct the relevant agencies to promptly promote discussions across many areas in order to maximize the benefits of artificial intelligence while responding to its risks.

Japanese officials also said they would work to develop international rules for artificial intelligence.

According to Information Age