Digital transformation

How will the world's first artificial intelligence law affect tech giants?

Phan Van Hoa | August 11, 2024, 8:00

August 2024 marks an important milestone: the European Union's Artificial Intelligence Act has officially come into force, opening a new chapter of stricter regulation, especially for tech giants.

Earlier, in May, the European Union (EU) set a global precedent by giving final approval to the Artificial Intelligence Act (AI Act). The consensus of member states and legislative bodies has created a comprehensive legal framework that promises to reshape the future of AI in Europe and around the world.

Illustration photo.

What is the AI Act?

The AI Act, considered one of the most advanced AI laws in the world, has officially come into effect. By protecting privacy, preventing discrimination and ensuring the transparency of AI systems, the act not only protects citizens' "fundamental rights" but also creates a healthy business environment that encourages innovation and investment in AI.

At the same time, the law is expected to serve as a roadmap for other countries building toward a safe and trustworthy AI future. First proposed by the European Commission in 2020, the Act aims to address the potential harms of AI, and its obligations fall most heavily on the large US technology companies that currently build and develop the most advanced AI systems.

The AI Act is extremely broad in scope, covering any product or service that uses artificial intelligence and is offered on the EU market, regardless of its origin. From global tech giants to local startups, everyone must comply with the new rules.

“Europe’s approach to technology puts people first and ensures that people’s rights are protected. With the AI Act, the EU has taken an important step to ensure that AI technology is applied in Europe in accordance with EU rules,” said European Commission Executive Vice President Margrethe Vestager.

Meanwhile, Mr. Tanguy Van Overstraeten, head of technology, media and telecommunications at law firm Linklaters in Brussels (Belgium), affirmed that the EU AI Act is a “historic turning point” in the regulation of artificial intelligence globally. This is a pioneering law, opening a new era in the management of AI technology while setting comprehensive standards for businesses, from technology giants to innovative startups. He emphasized: “This law will have a profound impact on all organizations, especially those developing, deploying or simply using AI systems.”

The EU AI Act takes a flexible, risk-based approach, regulating AI according to the actual risks it poses to society. Different AI applications are therefore regulated in proportion to their risk, encouraging innovation while protecting citizens' rights.

For AI applications considered “high risk,” the law sets strict requirements to minimize negative impacts. Specifically, businesses must conduct comprehensive risk assessments, build effective risk-mitigation systems, train models on high-quality data, and make their operations transparent by logging activity and sharing information with regulators.

From self-driving cars to medical devices to critical decision-making systems, AI has become an integral part of modern life. However, along with its enormous benefits, AI also poses many ethical and legal challenges.

Recognizing these risks, the Act puts strict rules in place to manage high-risk AI applications. Systems such as self-driving cars and medical devices must meet exacting standards of safety and effectiveness.

At the same time, AI applications deemed to pose unacceptable risks to society, such as social scoring systems, predictive policing and emotion recognition in workplaces and schools, are banned outright.
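To make this tiering concrete, here is a minimal Python sketch, assuming a simplified mapping of the example applications above to the Act's risk categories; the names and categories are illustrative only, not the Act's actual annex definitions.

```python
# Simplified, hypothetical mapping of the Act's risk tiers to the example
# applications mentioned in the article; the real annexes are far more
# detailed, and classification depends on the context of use.
RISK_TIERS = {
    "social scoring": "unacceptable",
    "predictive policing": "unacceptable",
    "emotion recognition at work or school": "unacceptable",
    "medical device AI": "high risk",
    "self-driving vehicle AI": "high risk",
}

def obligations(use_case: str) -> str:
    """Return the (simplified) regulatory consequence for a use case."""
    tier = RISK_TIERS.get(use_case, "limited or minimal risk")
    if tier == "unacceptable":
        return f"{use_case}: banned outright"
    if tier == "high risk":
        return (f"{use_case}: risk assessment, risk mitigation, high-quality "
                "training data, logging and transparency toward regulators")
    return f"{use_case}: lighter transparency duties"

print(obligations("social scoring"))     # banned outright
print(obligations("medical device AI"))  # high-risk obligations
```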

How does the AI Act impact the world's major tech companies?

US giants such as Microsoft, Google, Amazon, Apple and Meta have been actively collaborating with, and investing billions of dollars in, companies they believe can lead in AI, as the technology develops rapidly worldwide.

Cloud platforms such as Microsoft Azure, Amazon Web Services and Google Cloud also play a key role in supporting AI development, as massive computing infrastructure is required to train and run AI models.

In this respect, big tech companies will certainly be the most targeted under the new regulations.

The AI Act's reach is not limited to the EU; it has global implications. Any organization whose activities touch the EU market, directly or indirectly, may fall within its scope.

As such, major tech companies will face increased scrutiny over their business operations in the EU market, particularly regarding the collection and use of users' personal data.

Due to concerns about compliance with the EU's General Data Protection Regulation (GDPR), Meta has decided to temporarily suspend the availability of its AI model LLaMa in the European market. Although the EU AI Act also imposes new requirements, GDPR remains the main factor behind Meta's caution in this decision.

The company was previously ordered to stop training its models on posts from Facebook and Instagram in the EU over concerns it could be in breach of GDPR.

“Europe’s risk-based regulatory framework encourages innovation while prioritizing the safe development and deployment of technology,” said Eric Loeb, executive vice president of government affairs at enterprise technology giant Salesforce. “Other governments should consider these rules when developing their own policy frameworks.”

How does the law handle generative artificial intelligence?

The EU AI Act singles out generative artificial intelligence (generative AI), typified by advanced models such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude, as a category requiring strict regulation. These models are designed to perform a wide variety of tasks, from generating text content to solving complex problems, and can even surpass human capabilities in some areas.

To ensure the safe and trustworthy development and application of AI, the EU AI Act sets strict requirements for general-purpose AI models, including compliance with copyright laws, transparency in the training process, and ensuring cybersecurity.

However, AI developers worry that overly restrictive regulations could limit the development of open-source models, which play a key role in fostering innovation and democratizing AI.

Notable names in the open-source AI community today include Meta's LLaMa, Stability AI's Stable Diffusion, and Mistral AI's Mistral 7B. The EU AI Act carves out a number of exemptions that grant open-source AI models more favorable treatment.

To qualify for these exemptions, however, providers must be fully transparent about a model's architecture and operation, and must allow the community to freely study and build on it. Open-source models that pose significant risks to society will not enjoy this favorable treatment.

What are the penalties for companies that violate the provisions of the AI Act?

Violations of the EU AI Act can draw hefty fines: up to €35 million ($41 million) or 7% of annual global turnover, whichever is higher, depending on the severity of the violation and the size of the company.

Notably, these ceilings exceed even those of the GDPR, under which companies face fines of up to €20 million ($23.4 million) or 4% of annual global turnover, a sign of how strictly the EU intends to police the AI sector. To ensure these regulations are enforced effectively, the EU has established a European AI Office responsible for comprehensive oversight of AI-related activities.
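For a concrete sense of the gap between the two regimes, here is a minimal Python sketch of the "fixed ceiling or share of turnover, whichever is higher" structure both laws use; the €2 billion turnover figure is purely hypothetical.

```python
# Both regimes cap fines at a fixed amount or a share of annual worldwide
# turnover, whichever is higher. The turnover figure below is hypothetical,
# chosen only to make the comparison concrete.
def fine_ceiling(fixed_cap_eur: float, turnover_share: float,
                 annual_turnover_eur: float) -> float:
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical company: €2 billion annual turnover

ai_act_cap = fine_ceiling(35_000_000, 0.07, turnover)  # AI Act: €35M or 7%
gdpr_cap = fine_ceiling(20_000_000, 0.04, turnover)    # GDPR: €20M or 4%

print(f"AI Act ceiling: €{ai_act_cap:,.0f}")  # €140,000,000
print(f"GDPR ceiling:   €{gdpr_cap:,.0f}")    # €80,000,000
```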

As Jamil Jiva, global head of asset management at French fintech company Linedata, said, these heavy penalties will be a strong deterrent to illegal acts in the AI field.

Just as the GDPR set a global standard for data protection, the EU AI Act is shaping the future of AI. The EU is taking a leading role in establishing a coherent and comprehensive legal framework for AI, ensuring that the technology is developed and applied safely, reliably and responsibly.

However, the full implementation of the AI Act will take some time, with many important provisions not coming into effect until 2026, meaning businesses should start preparing now to adapt to the big changes ahead.

Restrictions on general-purpose AI systems will start to apply 12 months after the AI Act enters into force, while generative AI systems already on the market, such as OpenAI's ChatGPT and Google's Gemini, will have a 36-month transition period to bring their systems into compliance with the AI Act's regulations.
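Assuming 1 August 2024 as the entry-into-force date (when the Act formally took effect), a short Python sketch can derive these milestones; the milestone labels paraphrase the article rather than official terminology.

```python
from datetime import date

# The AI Act entered into force on 1 August 2024; the labels below
# paraphrase the article's deadlines, not official terms.
ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here: day 1 exists in every month)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

milestones = {
    "general-purpose AI obligations (+12 months)": add_months(ENTRY_INTO_FORCE, 12),
    "transition deadline for existing generative AI (+36 months)": add_months(ENTRY_INTO_FORCE, 36),
}

for label, when in milestones.items():
    print(f"{label}: {when:%d %B %Y}")
# general-purpose AI obligations (+12 months): 01 August 2025
# transition deadline for existing generative AI (+36 months): 01 August 2027
```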

Phan Van Hoa