
Global AI Governance Matrix 2025: Strategic Divergence, Convergence, and Democratic Implications


Executive Summary

The year 2025 marks a pivotal moment in the global governance of artificial intelligence (AI). As the technology’s capabilities expand at an exponential rate, the world’s major technological powers—the United States, the European Union, the People’s Republic of China, and the United Kingdom—have solidified distinct and often competing regulatory frameworks. This report provides a comprehensive analysis of these four primary governance models, mapping their strategic divergences, identifying areas of tactical convergence, and evaluating their profound implications for democratic values, geopolitical stability, and the future of the international order.

The global landscape is characterized by four core philosophies. The United States, through its 2025 “America’s AI Action Plan,” has adopted an aggressive “market-first” model. This approach prioritizes deregulation, economic competitiveness, and geopolitical dominance, strategically promoting open-source AI to establish an “American AI Technology Stack” as a global standard while simultaneously pursuing domestic ideological control through its “Preventing Woke AI” directive. In stark contrast, the European Union has operationalized a “rights-first” model with its comprehensive, risk-based AI Act. Now in its implementation phase, the Act establishes a product safety-style regime with the explicit goal of protecting fundamental rights and creating legal certainty, enforced by a powerful new centralized body, the European AI Office. This approach, however, creates a persistent tension between robust regulation and the goal of fostering innovation.

Meanwhile, China has consolidated its “control-first” model, where AI governance is an extension of its national security and information control apparatus. Through new 2025 regulations on generative AI labeling and the overarching authority of the Cyberspace Administration of China (CAC), Beijing pursues a cyclical strategy of balancing state-driven innovation with strict ideological and political alignment. Finally, the United Kingdom, charting a distinct post-Brexit course, has advanced a “flexibility-first” model. Its “pro-innovation” framework, articulated in the 2025 “AI Opportunities Action Plan,” deliberately eschews prescriptive legislation in favor of empowering existing sectoral regulators, shifting their focus from mere oversight to the active promotion of AI for economic growth.

This strategic divergence is creating a fragmented global landscape, forcing multinational organizations to navigate a complex web of compliance requirements. Yet, amidst this competition, areas of tactical convergence are emerging as all powers grapple with shared technical challenges, particularly in managing the risks of frontier, general-purpose AI models.

The democratic implications of these divergent paths are profound. The models are reshaping the balance of power between the state, the corporation, and the citizen, with significant consequences for freedom of expression, algorithmic fairness, privacy, and surveillance. The very definition of “bias” has become a new front in an ideological contest between the US and EU models. Looking ahead, the world is on a trajectory toward one of several futures: a fragmented world of competing techno-regulatory blocs, a patchwork of limited interoperability, or a bipolar AI Cold War. This report deconstructs each governance matrix, analyzes its strategic intent, and provides a forecast of the geopolitical landscape to come.

Part I: The American Model: AI for Geostrategic Dominance

1.1 Deconstructing the 2025 “America’s AI Action Plan”

The “America’s AI Action Plan,” released on July 23, 2025, represents a fundamental and decisive pivot in United States AI policy.1 It moves the nation away from the previous administration’s focus on developing “Safe, Secure, and Trustworthy” AI and toward an aggressive, unapologetic strategy aimed at securing “global dominance in AI”.1 This strategic realignment was formally initiated on January 23, 2025, with President Donald Trump’s Executive Order (EO) 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence.” This order explicitly revoked the preceding Biden-era EO 14110, setting the stage for a new policy framework built on speed, competitiveness, and market power.3

The Action Plan itself is structured around three core pillars: (1) Accelerating AI Innovation, (2) Building American AI Infrastructure, and (3) Leading in International AI Diplomacy and Security.1 This structure is not merely an organizational convenience; it reflects a clear strategic prioritization of market-led growth, rapid infrastructure deployment, and national power projection over the precautionary, risk-mitigation principles that had previously gained traction. The plan’s overarching goal is to unleash the American private sector, viewing it as the primary engine for achieving and maintaining a competitive edge over global rivals, particularly China.8

The stated rationale for this dramatic policy reversal was the belief that the prior regulatory framework “hampered the private sector’s ability to innovate” by imposing “burdensome” requirements.4 EO 14179 was designed to “clear a path for the United States to act decisively to retain global leadership,” framing regulation not as a tool for safety and trust, but as an obstacle to progress and national strength.5 This philosophy underpins every facet of the subsequent Action Plan.

1.2 The Ideology of “Winning the Race”: Deregulation and Infrastructure

The central tenet of the Action Plan is that American AI leadership can only be achieved by removing perceived obstacles to innovation. This translates into two primary lines of effort: a comprehensive regulatory rollback and an aggressive acceleration of infrastructure development.

The plan mandates that federal agencies conduct a sweeping review of existing rules to identify and subsequently eliminate or revise any regulations deemed to hinder AI development and adoption.1 This deregulatory impulse extends beyond the federal level. The plan recommends that federal agencies consider a state’s “regulatory climate” when making decisions about discretionary funding for AI-related projects.9 This creates a powerful incentive for states to align with the federal government’s deregulatory stance, effectively using federal funds as leverage to discourage the kind of “burdensome AI regulations” that have emerged in states like Texas and Utah.3 This policy move is particularly notable following the failure of a legislative effort in the U.S. Senate to impose a 10-year moratorium on states’ ability to enforce their own AI laws, which was removed from a bill in a near-unanimous vote on July 1, 2025.3

Complementing the regulatory rollback is a massive push to build the physical foundation for an AI-driven economy. Recognizing that advanced AI models are voracious consumers of energy and computational resources, the Action Plan prioritizes the rapid construction of data centers and supporting energy infrastructure.1 It directs federal agencies to streamline permitting processes, including expediting environmental reviews under the National Environmental Policy Act (NEPA) and the Clean Water Act, and explicitly makes federal land available for data center development.6 This represents a direct government intervention to lower the capital and logistical costs for private companies building the essential hardware backbone of the AI industry.

1.3 The “Unbiased AI” Mandate: Analyzing the “Preventing Woke AI” Directive

Running parallel to the theme of deregulation is a powerful, and seemingly contradictory, push for ideological regulation. This is most clearly articulated in the Executive Order on “Preventing Woke AI in the Federal Government,” which was released alongside the Action Plan.6 This order fundamentally reshapes federal procurement standards, restricting government agencies from contracting with developers of large language models (LLMs) unless those models adhere to two “Unbiased AI Principles”: “Truth-Seeking” and “Ideological Neutrality”.15

The order defines “Truth-Seeking” as prioritizing “historical accuracy, scientific inquiry, and objectivity” and acknowledging uncertainty.14 “Ideological Neutrality” is defined as ensuring LLMs are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI”.6 The explicit targeting of DEI as a “pervasive and destructive” ideology that can “distort the quality and accuracy” of AI output marks a direct government intervention into the ethical alignment of AI systems.1

This ideological project extends to the very standards that underpin AI development. The Action Plan directs the National Institute of Standards and Technology (NIST) to revise its widely respected and globally influential AI Risk Management Framework.7 The revision’s goal is to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change”.8 This is a strategic attempt to dismantle the existing consensus around “trustworthy AI”—which typically includes fairness and bias mitigation as core components—and replace it with a framework that aligns with a specific political agenda. This move seeks to redefine what constitutes a “safe” or “trustworthy” model, shifting the focus from mitigating societal harms like discrimination to ensuring alignment with a particular worldview.

1.4 Open-Source as a Geopolitical Tool: The Strategic Rationale and Inherent Contradictions

A cornerstone of the American strategy for global dominance is the strategic promotion of open-source and open-weight AI models.18 The Action Plan argues that open models are more likely to become global standards, possess significant “geostrategic value,” and help allies avoid vendor lock-in with proprietary systems.15 This policy is not merely about fostering a collaborative innovation environment; it is an instrument of foreign policy. The ultimate goal is to facilitate the export of the “American AI Technology Stack”—a complete package of hardware, data systems, models, and applications—to create a global alliance of nations built on US-developed technology and, by extension, American values.6

This strategy, however, is fraught with internal contradictions. The very nature of open-source software—its transparency and modifiability—is fundamentally at odds with the administration’s goal of controlling the ideological content of AI. Once a powerful model is released under an open-source license, the original developer and the US government lose all effective control over its subsequent use. Any user, anywhere in the world, can fine-tune the model to reintroduce the very “woke” elements the procurement order seeks to eliminate, or to serve any other purpose, rendering the “Ideological Neutrality” mandate unenforceable in the open ecosystem the plan champions.18

This presents a clear trade-off. The push for open-source AI is a calculated geopolitical move to establish a de facto global standard, creating a powerful network effect that could lock partner nations into the US technological sphere and outcompete both the EU’s regulated ecosystem and China’s state-controlled one. The “American AI Technology Stack” is the explicit packaging of this strategy, using the appeal of “openness” as a competitive tool.6 Yet, this strategy of weaponizing openness directly undermines the parallel domestic goal of ideological purity.

Furthermore, this policy creates significant challenges for the private sector it purports to liberate. While promoting innovation, the emphasis on open-source models introduces complex intellectual property risks. Companies that integrate open-source components into their proprietary systems may face legal obligations to disclose their own valuable code or extend open-source license terms to their commercial products.18 The risk of inadvertently infringing on IP or breaching complex licensing terms is magnified when models are trained on vast datasets of unknown or mixed provenance, creating a new layer of legal and compliance burdens that runs counter to the plan’s deregulatory ethos.18

The American approach, therefore, contains a fundamental and potentially self-defeating tension. It seeks to achieve global market dominance through deregulation and openness while simultaneously imposing rigid ideological controls on the domestic market, particularly the lucrative federal procurement sector. Global consumer and enterprise markets, especially in allied democracies, often demand models that are sensitive to the very issues of fairness and DEI that the US government’s policy rejects. This could force American developers into a difficult position: creating two distinct and costly product lines—a politically sanitized “Gov-AI” for federal contracts and a globally competitive “Global-AI” for the open market. This internal friction, born from the clash between market logic and ideological control, could ultimately fragment American R&D efforts and undermine the very goal of global dominance the Action Plan seeks to achieve.

Part II: The European Model: AI as a Regulated Product

2.1 The EU AI Act in 2025: From Text to Enforcement

While the United States pursues a strategy of deregulation, the European Union is moving in the opposite direction, operationalizing the world’s first comprehensive, legally binding framework for artificial intelligence. The EU AI Act (Regulation (EU) 2024/1689), which officially entered into force on August 1, 2024, is transitioning from legislative text to regulatory reality throughout 2025.22 This year marks a critical implementation phase, with key provisions becoming legally applicable on a staggered timeline.24

Two dates in 2025 are particularly significant. February 2, 2025, was the application date for some of the Act’s most crucial provisions: the prohibitions on certain “unacceptable risk” AI practices under Article 5 and the requirement for AI literacy under Article 4.22 Later in the year, on August 2, 2025, the rules governing General-Purpose AI (GPAI) models, the formal governance structures including the AI Board, and the framework for administrative fines and penalties will come into effect.23

The core philosophy of the AI Act is fundamentally different from the American approach. It treats AI not primarily as a tool for geopolitical competition, but as a product and service that must be safe for consumers and society. The Act is essentially a piece of product safety legislation, designed to protect the health, safety, and fundamental rights of EU citizens.25 Its horizontal, cross-sectoral nature is intended to create a predictable and harmonized legal environment for businesses operating across the 27 Member States, thereby fostering trust and legal certainty.28

2.2 The “Rights-First” Framework: Risk Tiers and Safeguards

The centerpiece of the EU AI Act is its tiered, risk-based approach, which tailors regulatory obligations to the level of potential harm an AI system could cause.25 AI applications are classified into one of four categories:

- Unacceptable risk: practices banned outright under Article 5, such as social scoring and certain manipulative or exploitative systems.
- High risk: systems used in sensitive domains such as employment, credit, critical infrastructure, and law enforcement, which face strict conformity obligations before and after market entry.
- Limited risk: systems subject to transparency duties, such as disclosing that a user is interacting with a chatbot or that content is synthetic.
- Minimal risk: the vast majority of applications, which remain largely unregulated.
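To make the tiered logic concrete, the following minimal Python sketch models the four categories and a compressed summary of the obligations the Act attaches to each. The tier names follow the Act; the one-line obligation summaries are simplifications for illustration, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Compressed, illustrative summaries of the obligations per tier.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "high-quality, representative training data",
        "technical documentation and logging",
        "human oversight",
        "conformity assessment before market entry",
    ],
    RiskTier.LIMITED: ["disclose AI interaction / label synthetic content"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the (simplified) duty list attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```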

A distinct set of rules applies to General-Purpose AI (GPAI) models, the powerful foundation models that underpin many AI applications. These rules, applicable from August 2, 2025, require GPAI providers to maintain technical documentation, comply with EU copyright law, and provide detailed summaries of the content used for training.23 A sub-category of GPAI models deemed to pose “systemic risk”—a designation based on factors including the computational power used for training (e.g., a presumption threshold of a cumulative training compute greater than 10²⁵ FLOPs)—faces even stricter obligations. These include conducting model evaluations, assessing and mitigating systemic risks, tracking and reporting serious incidents, and ensuring a high level of cybersecurity.32
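The compute trigger can be made tangible with the widely used rule-of-thumb estimate of roughly 6 FLOPs per parameter per training token for dense transformers. Both that heuristic and the example model size below are assumptions for illustration; only the 10²⁵ FLOP threshold comes from the Act.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training cost: ~6 FLOPs per parameter per token.
    This is a common community heuristic, not part of the AI Act."""
    return 6.0 * n_params * n_tokens

# Hypothetical model: 400B parameters trained on 15T tokens.
flops = estimated_training_flops(400e9, 15e12)
print(f"estimated training compute: {flops:.1e} FLOPs")                   # ~3.6e+25
print("presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # True
```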

2.3 The Enforcement Architecture: A Multi-Layered System

The enforcement of the AI Act is managed through a sophisticated, multi-layered governance structure that combines centralized EU oversight with national-level implementation. At the Union level, the European AI Office within the Commission directly supervises GPAI models, while the European AI Board coordinates consistent application across Member States. Day-to-day market surveillance and enforcement for most AI systems rests with national competent authorities designated by each Member State. The regime is backed by substantial administrative fines, which for violations of the Article 5 prohibitions can reach EUR 35 million or 7% of global annual turnover, whichever is higher.

2.4 The Innovation vs. Regulation Dilemma: The “Brussels Effect” and Its Trade-offs

The EU’s comprehensive and stringent regulatory model is a strategic choice with significant global implications. The intended outcome is the “Brussels Effect,” whereby the EU’s high standards become the de facto global norm because multinational companies find it easier to adopt the strictest rules across all their operations rather than creating different products for different markets.25
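A toy model makes the mechanism visible: a single global product must satisfy the union of every target market's requirements, so the strictest regime ends up setting the effective baseline. The regime names are real, but the requirement lists below are illustrative placeholders, not summaries of the actual laws.

```python
# Illustrative requirement sets per jurisdiction (placeholders, not legal text).
REQUIREMENTS: dict[str, set[str]] = {
    "US": {"export-control screening"},
    "UK": {"sectoral-regulator guidance"},
    "EU": {"risk classification", "conformity assessment",
           "technical documentation", "transparency labels"},
}

def single_product_compliance(markets: list[str]) -> set[str]:
    """A single global product must satisfy the union of all target-market
    rules, so the strictest regime sets the effective baseline."""
    requirements: set[str] = set()
    for market in markets:
        requirements |= REQUIREMENTS[market]
    return requirements

# Shipping one product to all three markets means meeting EU rules anyway.
print(sorted(single_product_compliance(["US", "UK", "EU"])))
```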

The Act is not without mechanisms to support innovation. It mandates that Member States establish AI regulatory sandboxes by August 2026, creating controlled environments where companies can test innovative AI systems under the supervision of competent authorities without fear of immediate penalties.25 The Act also contains limited exemptions for open-source AI, although these are narrow and do not apply to high-risk systems or GPAI models with systemic risk, which must still comply with most obligations.33

Despite these provisions, the AI Act has faced persistent criticism that its heavy regulatory burden will stifle innovation, deter investment, and ultimately cause Europe to fall further behind the US and China in the global AI race.39 The central trade-off is clear: does prioritizing the protection of fundamental rights and establishing legal certainty necessarily come at the cost of technological leadership and economic competitiveness?40 This question lies at the heart of the debate over the EU’s role in the digital age.

The very complexity and rigor of the AI Act are giving rise to a new and powerful “Compliance-as-a-Service” industry. The extensive requirements for risk assessments, quality management, technical documentation, conformity assessments, and continuous monitoring, combined with the severe penalties for non-compliance, create a significant burden that is too complex and costly for most small and medium-sized enterprises—and even many large corporations—to manage internally.26 This economic reality is fueling a burgeoning market for specialized legal, technical, and consulting firms dedicated to navigating the intricacies of AI Act compliance, mirroring the growth of the privacy industry in the wake of the GDPR.

This dynamic, however, may lead to an unintended and paradoxical consequence. While the AI Act is designed to protect citizens and foster a trustworthy AI ecosystem, its high compliance costs could inadvertently centralize power in the hands of the very large technology companies it seeks to regulate. Well-resourced giants, primarily based in the US, are far better positioned to absorb the financial and administrative costs of compliance than smaller European startups.25 This could create a “regulatory moat,” where only a handful of “certified” high-risk or GPAI models from major global players become widely available on the EU market. Such an outcome would not only stifle competition but could also undermine the EU’s long-term goal of strategic autonomy, making European businesses more reliant on a few non-EU technology providers.39

Part III: The Chinese Model: AI as an Instrument of State Control

3.1 The “Control-First” Doctrine: AI Governance as an Extension of National Security

China’s approach to AI governance is fundamentally distinct from the models developed in the West. It is not a standalone policy area but is deeply interwoven with the state’s comprehensive framework for cybersecurity, data security, and, most importantly, information control.42 The governing philosophy is a delicate and constantly recalibrated balancing act: to unleash the immense innovative and economic potential of AI while ensuring that the technology serves, and never challenges, the strategic goals of the state, national security, and social stability.45

This approach has resulted in a pattern of cyclical regulation, where the balance between promotion and control shifts in response to both internal and external pressures. Chinese AI policy has evolved through several distinct phases: an initial “Go-Go Era” (2017-2020) of massive investment and minimal regulation to build an industrial base; a “Crackdown Era” (2020-2022) where the Communist Party reasserted control over the tech sector; a “Catch-Up Era” (late 2022-early 2025) of pragmatic loosening in response to the launch of ChatGPT and economic headwinds; and the current “Crossroads Era”.47 This latest phase is defined by a new confidence in its domestic capabilities, exemplified by the breakthrough of models like DeepSeek-R1, set against a backdrop of persistent economic fragility and intensifying geopolitical competition.47

3.2 Dissecting the 2025 Regulations: Labeling, Content Moderation, and “Dual Filing”

The Chinese regulatory framework is characterized by a series of targeted, vertical regulations rather than a single, horizontal law like the EU’s AI Act. The foundational rules are the Interim Measures for the Administration of Generative Artificial Intelligence Services (GenAI Measures), which came into effect in August 2023.42 These measures apply to all public-facing generative AI services in China and establish the core principle that service providers are legally responsible for the content their systems generate.48

In 2025, this framework was significantly strengthened by new rules on transparency and traceability. Effective September 1, 2025, the Measures for Labeling Artificial Intelligence-Generated Content and the accompanying mandatory national standard (GB 45438-2025) impose comprehensive labeling requirements.50 These rules mandate two types of labels:

- Explicit labels: visible notices, such as on-screen text or audio prompts, that directly inform users that content is AI-generated.
- Implicit labels: machine-readable identifiers embedded in file metadata, enabling automated detection and traceability across platforms.

Furthermore, online distribution platforms like social media sites are required to implement technical mechanisms to detect these labels and categorize content as “confirmed,” “possible,” or “suspected” AI-generated, reinforcing the labeling at the point of distribution.50
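A minimal sketch of the platform-side logic these duties imply: check for a machine-readable implicit label in file metadata, fall back to the uploader's declaration, then to the platform's own detection. The metadata key, field names, and the exact mapping to the three buckets are assumptions for illustration; the binding technical details are specified in GB 45438-2025.

```python
from dataclasses import dataclass, field

AI_METADATA_KEY = "AIGC"  # hypothetical metadata key for the implicit label

@dataclass
class ContentItem:
    metadata: dict = field(default_factory=dict)  # embedded file metadata
    user_declared_ai: bool = False                # uploader's own declaration

def looks_synthetic(item: ContentItem) -> bool:
    """Stand-in for a platform's own generation-detection heuristics (assumed)."""
    return False

def classify(item: ContentItem) -> str:
    """One plausible mapping onto the three disclosure buckets in the Measures."""
    if AI_METADATA_KEY in item.metadata:
        return "confirmed"   # implicit label found in file metadata
    if item.user_declared_ai:
        return "possible"    # declared by the uploader, no implicit label found
    if looks_synthetic(item):
        return "suspected"   # flagged only by the platform's own detection
    return "unlabeled"       # no signal; outside the three buckets

print(classify(ContentItem(metadata={"AIGC": {"producer": "model-x"}})))  # confirmed
```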

A central pillar of the control framework is the rigorous system of content moderation and pre-market review. Service providers are legally obligated to prevent the generation of illegal content, defined broadly to include anything that threatens national security, undermines state power, or deviates from “socialist core values”.48 Any service deemed to have “public opinion attributes or social mobilization capabilities”—a vaguely defined but powerful category—is subject to a mandatory security assessment and must complete an algorithm filing with the Cyberspace Administration of China (CAC). This “Dual Filing” requirement gives the state deep visibility and control over the most influential AI models before they are released to the public.45

3.3 The Cyberspace Administration of China (CAC): The Central Nervous System of AI Oversight

At the heart of China’s AI governance model is the Cyberspace Administration of China (CAC). The CAC, working in concert with other powerful bodies like the Ministry of Industry and Information Technology (MIIT) and the Ministry of Public Security (MPS), functions as the primary regulator, standard-setter, and enforcer for the AI industry.45

Its role is dominant and pervasive. The CAC oversees the critical security assessment and algorithm filing processes, which effectively serve as a gatekeeping mechanism for public-facing AI services.48 This gives the agency direct insight into the technical architecture, training data, and intended purpose of new models, allowing it to enforce ideological alignment from the design phase onward. The CAC’s enforcement powers are extensive. It can issue warnings, order the suspension of services, and levy fines under the authority of China’s broader legal framework, including the Cybersecurity Law and the Personal Information Protection Law (PIPL).45

Crucially, the CAC’s authority is not confined to China’s borders. The GenAI Measures grant it extraterritorial reach, empowering it to take “technical and other necessary measures” (such as blocking access) against foreign-based AI services that are provided to the public in China but fail to comply with Chinese regulations.42 This was demonstrated in 2025 through the “Qinglang” series of special enforcement actions, which specifically targeted AI-generated misinformation as a key priority.50

3.4 The State-Innovation Symbiosis: Balancing Ambition with Discipline

Despite the tight grip of the state, China’s model is not solely about restriction. It is a symbiotic relationship where the state actively fosters innovation to achieve its national ambition of becoming the global AI leader by 2030.46 This promotion takes many forms, including massive state-led investment in R&D, national initiatives to catalog and utilize public data resources for training models, and even a surprisingly supportive judicial stance on granting copyright protection to AI-generated content, which contrasts sharply with the US position.46

To avoid stifling development, the regulations are carefully crafted to exempt internal, non-public-facing research and development from the most onerous requirements.45 However, this support is strictly conditional. The unwavering requirement for all public-facing AI to align with the “correct political direction” and reflect “socialist core values,” combined with the CAC’s deep oversight, ensures that technological advancement is always disciplined by and subordinated to the party’s agenda.46

This “agile” regulatory posture is a deliberate strategic choice. Unlike the EU’s slow, consensus-driven legislative process, China’s rapid, iterative, and targeted rule-making allows the state to react swiftly to technological breakthroughs and to selectively tighten or loosen controls based on its assessment of economic needs and geopolitical conditions.46 This creates a strategically ambiguous and unpredictable environment that keeps domestic companies closely attuned to state signaling and makes it exceedingly difficult for foreign competitors to establish a stable, long-term compliance strategy.

Ultimately, China’s control-first model is forging a distinct, self-contained AI ecosystem. The stringent requirements for content filtering, data localization, CAC security reviews, and ideological alignment are creating what can be described as a “Glass Wall” or a “Model Curtain”.45 While Chinese AI models may achieve technical parity or even superiority over their Western counterparts, they are built on a foundation of control that makes them fundamentally incompatible with and untrustworthy to democratic societies. This is leading to a profound bifurcation of the global AI landscape, not just in hardware due to semiconductor restrictions, but at the level of the foundational models themselves. Global companies will find it impossible to deploy a single AI strategy across both Chinese and Western markets, forcing a costly and inefficient duplication of effort and accelerating a global technological decoupling.54

Part IV: The British Model: A Pro-Innovation Gambit

4.1 The UK’s Post-Brexit Path: A “Flexibility-First” Philosophy

Having formally exited the European Union, the United Kingdom has deliberately charted its own course on AI governance, seeking to position itself as a nimble and attractive global hub for AI innovation. The UK’s strategy is a calculated rejection of the EU’s comprehensive, prescriptive legal framework, opting instead for a “pro-innovation” and “flexibility-first” approach.55 The explicit goal is to leverage its regulatory autonomy to become an “AI superpower,” fostering an environment that encourages investment, talent, and rapid development by minimizing upfront regulatory burdens.55

The initial foundation of this approach, outlined in a 2023 government white paper, is a non-statutory framework built upon five cross-sectoral principles intended to guide existing regulators. These principles are: (1) Safety, security, and robustness; (2) Appropriate transparency and explainability; (3) Fairness; (4) Accountability and governance; and (5) Contestability and redress.55 The core idea was to empower regulators with domain-specific expertise to apply these high-level values in a context-specific manner, avoiding a one-size-fits-all law.

4.2 From Regulation to Promotion: The 2025 “AI Opportunities Action Plan”

The UK’s strategy underwent a significant evolution with the publication of the “AI Opportunities Action Plan” in January 2025. This plan signaled a crucial shift in the government’s posture, moving beyond a light-touch approach to regulation and toward the active promotion of AI as a primary driver of national economic growth.61

This new emphasis fundamentally alters the role of the UK’s sectoral regulators, such as the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA). Under the plan, these bodies are now expected to prioritize “enabling safe AI innovation” as a core part of their statutory “Growth Duty”.61 Instead of acting primarily as enforcement-focused watchdogs, they are being tasked with becoming facilitators of AI adoption within their respective industries. To ensure accountability, regulators are now required to publish annual reports detailing, with transparent metrics, how their activities have enabled AI-driven innovation and growth.58

Most radically, the plan introduces the possibility of a central override mechanism. It suggests that if existing regulators are deemed to be insufficiently promoting innovation—perhaps due to a lower risk tolerance—the government could empower a new central body to intervene. This body could “override” existing sector-specific regulations by issuing pilot sandbox licenses for non-compliant AI products, with the government itself assuming the associated liability.61 This marks a profound change, where the ambition for economic growth through AI could formally supersede the traditional regulatory mandate of risk mitigation and rights protection.

4.3 The Tension Between Principles and Law: A Contentious Debate

The UK’s determinedly non-legislative stance has created a persistent and contentious debate within its own political and industrial landscape. The government has consistently resisted calls for a broad, statutory AI law, arguing that premature legislation would smother the sector “in bureaucracy” and stifle the very innovation it seeks to foster.1 This “light-touch” philosophy was publicly reaffirmed in a joint press conference with the US administration in February 2025, where Prime Minister Keir Starmer stated, “Instead of over-regulating these new technologies, we’re seizing the opportunities they offer”.59

However, this position faces mounting pressure from legislators, civil society groups, and even parts of the industry. Critics argue that the reliance on non-binding principles and the discretion of disparate regulators creates significant “regulatory uncertainty,” which can itself deter investment and leave citizens without clear, enforceable rights.59 This tension is vividly illustrated by the reintroduction of the “Artificial Intelligence (Regulation) Bill” in the House of Lords on March 4, 2025. Although a private member’s bill without government backing, its proposal to create a statutory central AI Authority—akin to the EU AI Office—and to codify the five principles into law reflects a strong appetite for a more robust governance framework.59

In response to this pressure, the Labour government has signaled its intent to introduce some form of binding legislation in 2025. However, any such bill is expected to be narrowly targeted at the developers of the most powerful frontier AI models rather than to take the form of a comprehensive, horizontal act, leaving the exact shape of the UK’s future legal landscape uncertain.58

4.4 Navigating the Middle Ground: Risks and Opportunities

The UK’s “flexibility-first” model is a high-stakes bet on regulatory arbitrage. The strategy is predicated on the belief that by offering a more permissive and agile regulatory environment than the EU, the UK can carve out a niche as the premier destination for AI research, development, and investment in Europe.55 The opportunity lies in becoming a global hub for regulatory sandboxing and attracting the talent and capital that might be deterred by the EU’s heavy compliance burdens or the US’s emerging ideological battles.

The primary risk, however, is that this “middle way” proves to be an unstable and ultimately untenable position. The UK lacks both the immense market power of the United States and the regulatory gravity of the European Union. The extraterritorial scope of the EU AI Act means that many UK-based businesses serving European customers will have no choice but to comply with its stringent requirements, regardless of the UK’s lighter-touch domestic regime. This could render the UK’s flexibility largely irrelevant in practice for any company with global ambitions, a phenomenon known as the “Brussels Effect”.56 This situation creates significant legal uncertainty for businesses and risks leaving UK citizens with weaker and less coherent protections than their European counterparts.61

This inherent instability suggests the UK’s current model may be a transitional phase rather than a permanent equilibrium. The country is subject to powerful external forces pulling it in opposite directions. On one hand, the practical realities of trade and market access create a strong pull toward regulatory convergence with the EU to ensure interoperability and reduce compliance friction for its businesses.58 On the other hand, its close political, strategic, and economic alliance with the US creates an equally strong pull toward the American “market-first” model, a preference clearly signaled in bilateral government statements.59 Over the next five to ten years, as global AI standards solidify and geopolitical alignments harden, the UK will likely be forced to navigate away from its ambiguous middle ground and align more definitively with one of the two major Western regulatory blocs.

Part V: The Global AI Governance Matrix: A Comparative Analysis

5.1 Mapping Strategic Divergence: Ideology, Enforcement, and Economic Objectives

The year 2025 has crystallized the strategic divergence among the world’s leading AI powers. The governance frameworks of the United States, European Union, China, and United Kingdom are not merely different sets of rules; they are manifestations of deeply held, and often conflicting, national ideologies, geopolitical ambitions, and economic objectives. The US model is explicitly geared toward maintaining geostrategic and market dominance through rapid, private-sector-led innovation, viewing deregulation as a primary tool.1 The EU’s framework is an exercise in normative power, aiming to export its values by establishing a global gold standard for rights-protecting, ethical AI, even at the potential cost of short-term innovation.25 China’s approach subordinates all technological and economic goals to the imperatives of state control, social stability, and national security, creating a tightly managed, state-centric ecosystem.43 The UK, in its post-Brexit search for a unique global role, has gambled on a model of regulatory flexibility, prioritizing economic growth and agility above all else.55 These foundational differences are reflected in every aspect of their governance architectures, from risk classification to enforcement mechanisms.

5.2 Identifying Tactical Convergence: Frontier Models, Risk Management, and Sandboxes

Despite these profound strategic and ideological divides, areas of tactical convergence are emerging. This is not the result of deliberate coordination but rather a reflection of all four jurisdictions confronting the same novel and complex technological challenges. As a result, they are independently arriving at functionally similar solutions.

One of the most significant areas of convergence is the governance of frontier or general-purpose AI (GPAI) models. Recognizing that these highly capable and adaptable models pose unique and systemic risks, the US, EU, and UK are all developing specific regulatory regimes to govern their development and deployment, separate from rules for narrower AI applications.23 China’s category of services with “public opinion attributes or social mobilization capabilities” serves a similar function, singling out the most powerful systems for heightened scrutiny.48

There is also a shared, albeit differently articulated, adoption of risk-based approaches. The EU’s four-tier system is the most explicit and formalized.25 However, the US Action Plan, while broadly deregulatory, still identifies and prioritizes specific high-risk domains for enhanced oversight, such as national security, biosecurity, and critical infrastructure.6 China’s proposed “negative list” for AI applications is another form of risk-tiering, subjecting certain activities to stricter pre-approval requirements.33 The UK tasks its sectoral regulators with performing context-specific risk assessments based on its guiding principles.55

Finally, regulatory sandboxes have emerged as a widely accepted tool for balancing innovation with safety. The EU AI Act mandates their creation, the UK’s Action Plan champions them, and the US plan calls for their use to test new AI solutions in real-world environments.12 This indicates a global consensus that this particular policy mechanism is valuable for managing the uncertainty of a rapidly evolving technology.

5.3 Table 1: Comparative AI Governance Matrix (2025)

The following table provides a systematic comparison of the four primary AI governance models as they stand in 2025, summarizing their core features across key dimensions.

| Dimension | United States (Market-First) | European Union (Rights-First) | People’s Republic of China (Control-First) | United Kingdom (Flexibility-First) |
| --- | --- | --- | --- | --- |
| Governing philosophy | Geostrategic dominance, market-led innovation, deregulation. 1 | Protection of fundamental rights, legal certainty, ethical leadership. 25 | State control, social stability, national security, technological self-sufficiency. 45 | Pro-innovation, economic growth, regulatory flexibility, sector-specific approach. 55 |
| Primary legal instrument (2025) | “America’s AI Action Plan”; Executive Orders (14179, “Woke AI” EO, etc.). 1 | The AI Act (Regulation (EU) 2024/1689). 23 | Generative AI Measures; Labeling Rules (GB 45438-2025); Data Security Law. 42 | “AI Opportunities Action Plan”; non-statutory principles; existing sectoral laws. 59 |
| Enforcement architecture | Decentralized; existing federal agencies (FTC, Commerce); OMB guidance for procurement. No central AI body. 1 | Centralized and multi-layered: European AI Office (for GPAI), AI Board, national competent authorities. 28 | Centralized: Cyberspace Administration of China (CAC) is dominant; other ministries support. 45 | Decentralized: existing sectoral regulators (ICO, FCA, Ofcom) tasked with promotion and oversight. 60 |
| Risk categorization | Informal; prioritizes national security, biosecurity, critical infrastructure. Rejects risk frameworks based on “misinformation” or “DEI”. 8 | Formal four-tier system: unacceptable (banned), high, limited, minimal. 25 | Implicit: “public opinion/social mobilization” services get special scrutiny; negative-list approach proposed. 33 | Context-specific, determined by sectoral regulators based on the five principles. 55 |
| Stance on open-source models | Strategic promotion: encouraged as a tool for geopolitical influence and to set global standards. 15 | Cautious exemption: limited exemptions from some obligations, but not for high-risk systems or GPAI with systemic risk. 33 | Encouraged (for innovation) but controlled: must still comply with content and security rules if public-facing. 33 | Generally encouraged as part of the pro-innovation stance. 55 |
| Public-sector AI rules | Strict procurement rules: “Preventing Woke AI” EO imposes ideological neutrality and truth-seeking principles on federal LLM procurement. 6 | Fully applicable: the AI Act applies equally to public- and private-sector deployers of AI systems. 53 | Primarily private-sector focus: main regulations target public-facing commercial services, leaving government use less transparently regulated. 45 | Governed by the same principles-based, sector-specific approach. 55 |
| Extraterritorial scope | Limited: primarily through export controls and influencing allies to adopt the “American AI Stack”. 7 | Extensive (“Brussels Effect”): applies to any AI system placed on the EU market or whose output is used in the EU. 31 | Extensive: applies to any service targeting the Chinese public; the CAC can act against foreign providers. 42 | Limited: primarily through influence in global standards bodies; subject to the EU’s extraterritoriality. 56 |

Part VI: Democratic Implications: Freedom, Fairness, and the Future of Governance

The divergent paths of AI governance are not merely technical or economic policy choices; they carry profound implications for the health and future of democratic societies. The design of these regulatory frameworks directly impacts fundamental rights, the integrity of the public sphere, and the delicate balance of power between the citizen, the state, and the corporation.

6.1 AI and the Public Sphere: Free Expression, Disinformation, and Censorship

Each governance model interacts with the public sphere in a way that reflects its core ideology, with significant consequences for freedom of expression.

The United States model presents a complex paradox. Its “Preventing Woke AI” directive, while framed in the language of “truth” and “neutrality,” constitutes a direct government intervention into the substance of AI-generated speech.2 By mandating the removal of certain ideological viewpoints from federally procured models and directing NIST to revise its framework to exclude concepts like “misinformation,” the policy attempts to use the power of the state to define and enforce a particular version of acceptable discourse. This raises significant First Amendment concerns and could create a chilling effect on the development of AI that engages with a wide range of social and political topics.2

The European Union, through the AI Act, seeks to empower citizens rather than control content. Its transparency requirements for deepfakes and chatbots are designed to give individuals the context they need to critically evaluate information and make informed decisions.26 While the Act’s primary focus is on mitigating risks to safety and fundamental rights, its broad principles could be interpreted by national authorities to justify content moderation measures that impact expression, creating a potential area of tension.

The Chinese model is explicitly and unapologetically designed for censorship and narrative control. The legal requirement for AI systems to adhere to “socialist core values,” combined with the CAC’s pervasive oversight, institutionalizes political control over the information environment.48 In this framework, AI is not a tool for open discourse but an instrument for enforcing ideological conformity and conducting mass surveillance, fundamentally subverting the principles of free expression.46

The United Kingdom’s flexible, regulator-led approach leaves the handling of complex issues like AI-driven disinformation largely to the discretion of different sectoral bodies.55 This creates the risk of an inconsistent and fragmented response, potentially leaving significant gaps in the protection of the public sphere from manipulation.

6.2 Algorithmic Justice: Bias, Discrimination, and the Right to Redress

The potential for AI systems to perpetuate and amplify existing societal biases is one of the most critical challenges for democratic governance. The four models address this challenge in starkly different ways.

The US Action Plan actively moves to dismantle the conceptual tools used to address algorithmic bias. By rejecting DEI-focused risk management in its procurement standards and NIST framework revisions, and by weakening consumer protection bodies, the plan risks exacerbating algorithmic discrimination in high-stakes areas like credit, housing, and employment.2 The policy’s premise of “ideological neutrality” fails to recognize the well-documented reality that historically generated data, if used without corrective measures, will inevitably reproduce historical patterns of discrimination.

The EU AI Act, in contrast, places algorithmic fairness at its core. It establishes legally binding obligations for developers of high-risk systems to use high-quality, representative datasets and to implement human oversight precisely to minimize the risk of biased and discriminatory outcomes.26 Crucially, it provides a legal framework that gives individuals a clear basis to contest and seek redress for harmful decisions made by AI systems.

China’s framework presents a troubling contradiction. While its regulations pay lip service to preventing discrimination, the state’s widespread use of AI for social scoring and mass surveillance constitutes a system of institutionalized, state-sanctioned discrimination on a massive scale.22 Here, AI becomes a tool to enforce social and political hierarchies, not to promote equity.

In the UK, the principles of “Fairness” and “Contestability and redress” are central to the government’s stated policy.55 However, their non-statutory nature and the reliance on the discretion of individual regulators mean that the enforceability of these rights is potentially weak and inconsistent across different sectors of the economy.69

This divergence reveals that the very definition of “bias” has become a key battleground in the ideological competition over AI. The US Action Plan’s effort to reframe bias as a “woke” political imposition and replace it with a narrow concept of “neutrality” is a profound attempt to delegitimize years of research and policy work on fairness and accountability.14 This creates a direct philosophical clash with the EU’s framework, which is explicitly built on the legal and ethical imperative to prevent discrimination based on protected characteristics like race and gender.26 As these competing models and their associated technologies are exported globally, the world will face a choice between fundamentally different ethical operating systems. The future of democratic and equitable AI will depend significantly on which definition of “fairness” prevails in global standards.

6.3 The State, the Corporation, and the Citizen: Shifting Power Balances

Ultimately, each AI governance model reconfigures the triangular relationship between the individual citizen, the state, and the corporation. An analysis across the models reveals an emerging “democratic deficit” in AI governance, where in most cases, the power of the individual is diminished relative to that of large institutions.

The US model, with its emphasis on deregulation and market-led innovation, clearly empowers large corporations, granting them greater freedom to develop and deploy AI with fewer constraints and less oversight.1 The Chinese model represents the opposite pole, concentrating immense power in the hands of the state and its security apparatus, using AI as a tool for social management.45 The UK’s light-touch, pro-growth approach risks leaving citizens with uncertain and potentially weaker protections, caught between powerful corporate actors and a government hesitant to legislate.59

Only the EU’s model is explicitly designed to empower the citizen by codifying a set of fundamental rights and establishing robust regulatory oversight.62 Yet, as previously discussed, this approach carries the unintended risk of entrenching the market power of the few large corporations that can afford the high costs of compliance. This suggests a troubling global trend: the governance of the most transformative technology of our era is largely happening to citizens, rather than by or for them. This deficit poses a long-term, systemic challenge to the principles of democratic accountability and self-determination.

Part VII: Geopolitical Forecast and Strategic Recommendations

7.1 Scenarios for 2030: Fragmentation, Patchwork, or Bipolarity?

The divergent trajectories of AI governance established in 2025 are setting the stage for several plausible futures for the global technological and political landscape over the next five to ten years. The interplay between strategic competition, economic incentives, and the pace of technological change will likely lead to one of the following scenarios:

- Fragmentation: the world splinters into competing techno-regulatory blocs, each exporting its own standards and technology stack, with minimal interoperability between them.
- Patchwork: jurisdictions remain distinct but negotiate limited interoperability through mutual recognition and shared technical standards, producing a workable yet complex global compliance landscape.
- Bipolarity: an AI Cold War hardens around a US-led and a China-led sphere, forcing third countries to align with one side across the full technology stack.

7.2 The Battle for the “Global South”: Exporting Regulatory Models

A key arena where these geopolitical dynamics will play out is the “Global South.” The major powers are already using their AI governance models as a form of soft power and a tool of foreign policy, competing to have their respective approaches adopted by developing nations across Asia, Africa, and Latin America.71 This is a race to set the default operating system for the next generation of digital infrastructure worldwide. The United States is actively promoting its “full-stack AI export packages,” offering a model of rapid innovation and market integration.7 China is extending its Digital Silk Road, providing technology and infrastructure often bundled with its model of state surveillance and control.80 The European Union leverages its “Brussels Effect” and development aid, offering a model built on rights, legal certainty, and ethical governance. The choices these developing nations make will not only shape their own societies but will also determine the future balance of power in the global AI ecosystem.

7.3 Strategic Recommendations for Policymakers and Industry

Navigating this complex and contested landscape requires foresight and strategic action from both public and private sector leaders.

For Policymakers:

For Industry Leaders:

Conclusion: Synthesizing the 2025 Landscape and Projecting the Path Forward

The year 2025 has laid bare the deep-seated divisions in how the world’s major powers intend to govern artificial intelligence. The strategic divergence between the American market-first, the European rights-first, the Chinese control-first, and the British flexibility-first models is not a temporary misalignment but a reflection of fundamental competition over economics, ideology, and the future international order. While tactical convergence on shared technical problems offers a glimmer of hope for a functional global ecosystem, the dominant trend is toward fragmentation and geopolitical rivalry.

The democratic implications are stark. With the notable exception of the European Union’s ambitious—if imperfect—framework, the prevailing models of AI governance tend to concentrate power in the hands of either the state or large corporations, often at the expense of individual rights and democratic accountability. The very language of AI ethics, including the definition of “fairness” and “bias,” has become a contested space.

The path forward is fraught with uncertainty. The world is at a crossroads, facing a choice between a fragmented landscape of competing techno-blocs, a functional but complex patchwork of interoperable standards, or a tense bipolar AI Cold War. The decisions made today by policymakers, industry leaders, and civil society will determine which of these futures materializes. Building a future where AI promotes human flourishing and democratic values, rather than undermining them, will require a renewed commitment to international cooperation, a clear-eyed assessment of the risks, and a steadfast defense of the fundamental rights that must guide the development of this transformative technology.
