Global AI Regulation Shifts: South Korea Passes AI Act, Trump Rescinds Biden’s AI Order, and EU AI Act Takes Effect

The global AI regulatory landscape is undergoing significant changes, with governments adopting divergent approaches to artificial intelligence governance. In Asia, North America, and Europe, policymakers are setting new standards that could reshape the future of AI development and deployment.

On 26 December 2024, South Korea’s National Assembly passed the AI Basic Act, which is set to take effect in January 2026, making South Korea the first country in Asia to introduce a comprehensive AI regulatory framework. Meanwhile, in the United States, on his first day back in office (20 January 2025), President Donald Trump rescinded Joe Biden’s AI executive order, signalling a shift towards a more industry-driven approach. At the same time, in Europe, several provisions of the EU AI Act came into force on 2 February 2025, marking a new phase in Europe’s AI governance model.

These developments highlight the growing importance of AI regulation worldwide, as nations seek to balance technological innovation with ethical concerns, security risks, and economic competitiveness. However, their approaches vary widely, raising critical questions about the future of global AI governance: will regulations converge towards harmonisation, or will they continue to fragment along regional lines?

South Korea’s AI Basic Act: A New Milestone in AI Regulation

South Korea’s AI Basic Act, passed on 26 December 2024, establishes clear ethical and safety guidelines for AI development and deployment, positioning the country as a leader in AI governance in Asia.

Set to take effect in January 2026, the Act:

  • Mandates strict safety compliance, requiring AI systems to be designed in such a way as to prevent discrimination, misinformation, and harm.
  • Enforces transparency and accountability, obligating developers to document AI decision-making processes clearly.
  • Establishes a government oversight body, tasked with monitoring compliance and enforcing penalties for violations.

Unlike the EU AI Act, which categorises AI systems based on risk levels, South Korea’s approach prioritises innovation while setting firm ethical boundaries. In contrast, the direction of U.S. regulation remains uncertain, especially following Trump’s reversal of Biden’s AI policies.

By taking this step, South Korea positions itself as a key player in AI governance, potentially influencing future international regulatory discussions and setting a precedent for other Asian nations.

Trump Rescinds Biden’s AI Executive Order: A Shift in U.S. AI Policy

On 20 January 2025, Donald Trump rescinded Executive Order 14110, signed by Joe Biden in October 2023, which had introduced a regulatory framework for artificial intelligence.

Biden’s Executive Order 14110, entitled “Safe, Secure, and Trustworthy AI,” aimed to mitigate AI risks through safety guidelines, transparency requirements, and ethical considerations. By revoking it, Trump signalled a major shift in U.S. AI policy, moving away from regulatory oversight towards an industry-led approach.

Just three days later, on 23 January 2025, Trump signed Executive Order 14179, entitled “Removing Barriers to American Leadership in Artificial Intelligence.” This new order:

  • Directs federal agencies to develop an AI action plan within 180 days to ensure U.S. leadership in AI.
  • Emphasises deregulation, aiming to eliminate barriers to AI innovation and avoiding ideological constraints on AI development.
  • Prioritises the role of AI in economic growth and national security, particularly in relation to competition with China.
  • Appoints David Sacks, a former PayPal executive and venture capitalist, as the administration’s AI and cryptocurrency czar, leading the President’s Council of Advisors on Science and Technology.

This move has created uncertainty for U.S. tech companies, many of which had begun adapting to Biden’s regulatory framework. With no other immediate regulations in place, businesses must now navigate a shifting landscape: will Trump’s administration introduce targeted AI safeguards, or will it fully embrace an unrestricted, free-market approach?

EU AI Act: First Provisions Come into Effect

On 2 February 2025, the European Union’s AI Act began its phased implementation, marking a significant step towards comprehensive AI governance.

Key Provisions Now in Effect:

  • AI Literacy (Article 4): Requires organisations deploying AI to ensure that their staff understand both the risks and opportunities of AI. Companies must implement AI governance policies and training programmes to enhance compliance and ethical awareness.
  • Prohibited AI Practices (Article 5): The Act bans AI applications that pose “unacceptable risks,” including:
    1. AI systems that manipulate individuals through subliminal or deceptive techniques.
    2. AI that exploits the vulnerabilities of children, the elderly, or people with disabilities.
    3. Social scoring by public authorities that leads to discriminatory treatment.
    4. Real-time biometric identification in public spaces, with limited exceptions for law enforcement.

Future Enforcement Phases:

  • 2 August 2025: Regulations for general-purpose AI models and penalties for non-compliance take effect.
  • 2 August 2026: Most provisions, including those governing high-risk AI systems, become fully enforceable.
  • 2 August 2027: Extended transition periods for high-risk AI embedded in regulated products conclude.

How Companies Are Preparing:

To comply with the new regulations, companies operating in the EU are:

  • Launching AI training programmes to ensure staff understand and adhere to the law.
  • Conducting system audits to identify and mitigate potential risks in existing AI models.
  • Developing internal policies to align with transparency, accountability, and ethical standards.

Failure to comply with these regulations could lead to fines of up to €35 million or 7% of a company’s annual global turnover, whichever is higher. To support compliance, the European Commission has released guidelines on prohibited AI practices, as defined by the AI Act, as well as guidelines on the definition of an artificial intelligence system under the Act.

As the AI Act’s provisions continue to roll out, its impact on global AI governance is expected to be significant, potentially setting new ethical and legal standards worldwide.

Conclusion: A Diverging Global AI Governance Landscape

The latest AI regulatory developments in South Korea, the U.S., and the EU reveal a widening gap in global AI governance.

  • South Korea’s AI Basic Act introduces a structured legal framework, balancing ethics, transparency, and innovation.
  • Trump’s rollback of Biden’s AI executive order shifts the U.S. towards a deregulated, industry-driven approach, prioritising economic and strategic leadership.
  • The EU AI Act, meanwhile, is moving forward with enforceable laws, establishing clear compliance obligations for businesses operating in the region.

These contrasting strategies will have far-reaching implications for AI businesses worldwide, shaping global AI development, compliance requirements, and technological competition.

As AI continues evolving at an unprecedented pace, governments face the challenge of fostering innovation while ensuring safety, ethics, and accountability. The choices made today will determine whether AI serves as a force for progress or a source of unchecked disruption.

P.R.
