How might AI regulations change in 2025?
2025 could mark a major turning point for global artificial intelligence (AI) regulation, from the rollout of the EU's groundbreaking AI Act to the policies the incoming Trump administration may set in the US.
The year 2025 also brings a major shift in the US political landscape that promises to have a profound impact on AI governance. President-elect Donald Trump will take office on January 20, bringing with him a team of prominent business advisors, including Elon Musk and Vivek Ramaswamy, who are expected to shape policy thinking on frontier technologies such as AI and cryptocurrencies.

Meanwhile, an interesting contrast is emerging between two neighboring jurisdictions as the UK and the European Union (EU) take different views on technology regulation. The EU has taken a stricter approach, aiming to clamp down on the Silicon Valley giants behind the most powerful AI systems, while the UK has preferred a more flexible strategy that fosters innovation while maintaining the necessary oversight.
The influence of billionaire Elon Musk on America's AI policy
Although AI was not a central focus of Donald Trump's presidential campaign, it is expected to become a priority under the new administration. One significant early signal was Trump's appointment of Elon Musk, CEO of Tesla and a co-founder of OpenAI, and biotech entrepreneur Vivek Ramaswamy to lead the Department of Government Efficiency.
Matt Calkins, CEO of the US technology company Appian, said the close relationship between Trump and Musk could give the US an advantage in AI. He pointed to Musk's experience founding xAI and argued that his deep understanding of the technology would help shape effective policy. "Finally, we have someone in the administration who really understands AI," Calkins told CNBC.

While there have been no official announcements of AI-related directives or executive orders, Calkins predicts Musk will push for safeguards to control risks and ensure AI does not threaten civilization, a concern Musk has emphasized for years.
Currently, the US has no comprehensive federal law on AI, only a patchwork of regulations at the state and local levels. More than 45 states, along with Washington, DC, Puerto Rico, and the Virgin Islands, have introduced their own bills to regulate the field, underscoring the need for a unified legal framework.
EU AI Act
The EU is currently the only region in the world to have a comprehensive legal framework for AI. Earlier this year, the groundbreaking EU AI Act was officially adopted, marking a major step forward in AI governance.
Although the law is not yet fully in effect, it has already raised concern in the US tech community, with major companies such as Amazon, Google, and Meta warning that strict regulations could stifle innovation.

In December, the EU's AI Office published a second draft of its code of practice for general-purpose AI (GPAI) models, such as OpenAI's GPT. The draft includes exemptions for some open-source models and requires GPAI developers to conduct rigorous risk assessments.
However, the Computer & Communications Industry Association said some provisions in the draft went beyond the original scope of the Act, particularly the measures related to copyright, which have proved controversial among stakeholders in the technology industry.
European tech leaders also worry that EU penalties against major US tech companies could provoke a backlash from Trump, potentially forcing the EU to adjust its policies.
The United Kingdom and a more flexible approach to AI
Unlike the EU, the UK has been cautious about introducing legal obligations for AI model makers, concerned that new regulations could be too restrictive and stifle innovation.
The government led by Prime Minister Keir Starmer recently announced plans to develop its own AI legislation. The UK is expected to adopt a more flexible approach, focusing on fundamental principles rather than the strict risk-based framework of the EU.
Last month, the government gave its first indication of where regulation is headed, launching a consultation on how to manage the use of copyrighted content in training AI models. This is a particularly important issue for generative AI and large language models (LLMs), which rely heavily on copyrighted data.
Most LLMs today are trained on publicly available data scraped from the open web, which often includes artwork and other copyrighted material. Artists and publishers are concerned that these systems copy their valuable content without consent.
To address this issue, the UK government is considering an exception in copyright law that would allow AI models to be trained on copyrighted works while preserving the right of copyright owners to refuse permission for the use of their intellectual property.
The UK could become a "global leader" in tackling copyright infringement by AI models, Matt Calkins said, stressing that, unlike the US, the country does not face pressure from strong lobbying campaigns by domestic AI leaders.
US-China relations are a potential point of tension
As governments around the world struggle to regulate rapidly evolving AI systems, geopolitical tensions between the United States and China risk escalating under the Trump administration.
During his first term, Trump took a hard line on China, blacklisting Huawei and restricting the company from working with US technology suppliers. He also attempted to ban TikTok, owned by Chinese company ByteDance, in the US, though he has since reversed his stance on TikTok.

China is pushing to overtake the US for dominance in AI, while the US has moved to restrict China's access to key technologies, particularly the advanced chips, such as Nvidia's GPUs, that are essential for training cutting-edge AI models. In response, China has stepped up efforts to develop its own domestic chip industry.
Tech experts worry that the geopolitical divide between the US and China over AI could create serious risks, such as the possibility of either country developing a form of AI far more intelligent than humans.
Max Tegmark, founder of the nonprofit Future of Life Institute, warned that the US and China could eventually create AI capable of improving itself and designing new systems without human intervention, which could force both countries to develop their own AI safety rules to contain the risks.
“The optimistic path I hope for is for the U.S. and China to unilaterally establish national safety standards to prevent their companies from developing AI out of control, not to appease rival superpowers, but simply to protect themselves,” Tegmark told CNBC in November.
Despite these tensions, governments are also trying to work together to develop regulations and frameworks around AI. In 2023, the UK hosted a global AI safety summit, attended by both the US and Chinese governments, to discuss the risks of the technology and the challenges of developing effective AI regulatory policies.