Executive Summary
The year 2025 marks a pivotal moment in the global governance of artificial intelligence (AI). As the technology’s capabilities expand at an exponential rate, the world’s major technological powers—the United States, the European Union, the People’s Republic of China, and the United Kingdom—have solidified distinct and often competing regulatory frameworks. This report provides a comprehensive analysis of these four primary governance models, mapping their strategic divergences, identifying areas of tactical convergence, and evaluating their profound implications for democratic values, geopolitical stability, and the future of the international order.
The global landscape is characterized by four core philosophies. The United States, through its 2025 “America’s AI Action Plan,” has adopted an aggressive “market-first” model. This approach prioritizes deregulation, economic competitiveness, and geopolitical dominance, strategically promoting open-source AI to establish an “American AI Technology Stack” as a global standard while simultaneously pursuing domestic ideological control through its “Preventing Woke AI” directive. In stark contrast, the European Union has operationalized a “rights-first” model with its comprehensive, risk-based AI Act. Now in its implementation phase, the Act establishes a product safety-style regime with the explicit goal of protecting fundamental rights and creating legal certainty, enforced by a powerful new centralized body, the European AI Office. This approach, however, creates a persistent tension between robust regulation and the goal of fostering innovation.
Meanwhile, China has consolidated its “control-first” model, where AI governance is an extension of its national security and information control apparatus. Through new 2025 regulations on generative AI labeling and the overarching authority of the Cyberspace Administration of China (CAC), Beijing pursues a cyclical strategy of balancing state-driven innovation with strict ideological and political alignment. Finally, the United Kingdom, charting a distinct post-Brexit course, has advanced a “flexibility-first” model. Its “pro-innovation” framework, articulated in the 2025 “AI Opportunities Action Plan,” deliberately eschews prescriptive legislation in favor of empowering existing sectoral regulators, shifting their focus from mere oversight to the active promotion of AI for economic growth.
This strategic divergence is creating a fragmented global landscape, forcing multinational organizations to navigate a complex web of compliance requirements. Yet, amidst this competition, areas of tactical convergence are emerging as all powers grapple with shared technical challenges, particularly in managing the risks of frontier, general-purpose AI models.

The democratic implications of these divergent paths are profound. The models are reshaping the balance of power between the state, the corporation, and the citizen, with significant consequences for freedom of expression, algorithmic fairness, privacy, and surveillance. The very definition of “bias” has become a new front in an ideological contest between the US and EU models. Looking ahead, the world is on a trajectory toward one of several futures: a fragmented world of competing techno-regulatory blocs, a patchwork of limited interoperability, or a bipolar AI Cold War. This report deconstructs each governance matrix, analyzes its strategic intent, and provides a forecast of the geopolitical landscape to come.
Part I: The American Model: AI for Geostrategic Dominance
1.1 Deconstructing the 2025 “America’s AI Action Plan”
The “America’s AI Action Plan,” released on July 23, 2025, represents a fundamental and decisive pivot in United States AI policy.1 It moves the nation away from the previous administration’s focus on developing “Safe, Secure, and Trustworthy” AI and toward an aggressive, unapologetic strategy aimed at securing “global dominance in AI”.1 This strategic realignment was formally initiated on January 23, 2025, with President Donald Trump’s Executive Order (EO) 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence.” This order explicitly revoked the preceding Biden-era EO 14110, setting the stage for a new policy framework built on speed, competitiveness, and market power.3
The Action Plan itself is structured around three core pillars: (1) Accelerating AI Innovation, (2) Building American AI Infrastructure, and (3) Leading in International AI Diplomacy and Security.1 This structure is not merely an organizational convenience; it reflects a clear strategic prioritization of market-led growth, rapid infrastructure deployment, and national power projection over the precautionary, risk-mitigation principles that had previously gained traction. The plan’s overarching goal is to unleash the American private sector, viewing it as the primary engine for achieving and maintaining a competitive edge over global rivals, particularly China.8
The stated rationale for this dramatic policy reversal was the belief that the prior regulatory framework “hampered the private sector’s ability to innovate” by imposing “burdensome” requirements.4 EO 14179 was designed to “clear a path for the United States to act decisively to retain global leadership,” framing regulation not as a tool for safety and trust, but as an obstacle to progress and national strength.5 This philosophy underpins every facet of the subsequent Action Plan.
1.2 The Ideology of “Winning the Race”: Deregulation and Infrastructure
The central tenet of the Action Plan is that American AI leadership can only be achieved by removing perceived obstacles to innovation. This translates into two primary lines of effort: a comprehensive regulatory rollback and an aggressive acceleration of infrastructure development.
The plan mandates that federal agencies conduct a sweeping review of existing rules to identify and subsequently eliminate or revise any regulations deemed to hinder AI development and adoption.1 This deregulatory impulse extends beyond the federal level. The plan recommends that federal agencies consider a state’s “regulatory climate” when making decisions about discretionary funding for AI-related projects.9 This creates a powerful incentive for states to align with the federal government’s deregulatory stance, effectively using federal funds as leverage to discourage the kind of “burdensome AI regulations” that have emerged in states like Texas and Utah.3 This policy move is particularly notable following the failure of a legislative effort in the U.S. Senate to impose a 10-year moratorium on states’ ability to enforce their own AI laws, which was removed from a bill in a near-unanimous vote on July 1, 2025.3
Complementing the regulatory rollback is a massive push to build the physical foundation for an AI-driven economy. Recognizing that advanced AI models are voracious consumers of energy and computational resources, the Action Plan prioritizes the rapid construction of data centers and supporting energy infrastructure.1 It directs federal agencies to streamline permitting processes, including expediting environmental reviews under the National Environmental Policy Act (NEPA) and the Clean Water Act, and explicitly makes federal land available for data center development.6 This represents a direct government intervention to lower the capital and logistical costs for private companies building the essential hardware backbone of the AI industry.
1.3 The “Unbiased AI” Mandate: Analyzing the “Preventing Woke AI” Directive
Running parallel to the theme of deregulation is a powerful, and seemingly contradictory, push for ideological regulation. This is most clearly articulated in the Executive Order on “Preventing Woke AI in the Federal Government,” which was released alongside the Action Plan.6 This order fundamentally reshapes federal procurement standards, restricting government agencies from contracting with developers of large language models (LLMs) unless those models adhere to two “Unbiased AI Principles”: “Truth-Seeking” and “Ideological Neutrality”.15
The order defines “Truth-Seeking” as prioritizing “historical accuracy, scientific inquiry, and objectivity” and acknowledging uncertainty.14 “Ideological Neutrality” is defined as ensuring LLMs are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI”.6 The explicit targeting of DEI as a “pervasive and destructive” ideology that can “distort the quality and accuracy” of AI output marks a direct government intervention into the ethical alignment of AI systems.1
This ideological project extends to the very standards that underpin AI development. The Action Plan directs the National Institute of Standards and Technology (NIST) to revise its widely respected and globally influential AI Risk Management Framework.7 The revision’s goal is to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change”.8 This is a strategic attempt to dismantle the existing consensus around “trustworthy AI”—which typically includes fairness and bias mitigation as core components—and replace it with a framework that aligns with a specific political agenda. This move seeks to redefine what constitutes a “safe” or “trustworthy” model, shifting the focus from mitigating societal harms like discrimination to ensuring alignment with a particular worldview.
1.4 Open-Source as a Geopolitical Tool: The Strategic Rationale and Inherent Contradictions
A cornerstone of the American strategy for global dominance is the strategic promotion of open-source and open-weight AI models.18 The Action Plan argues that open models are more likely to become global standards, possess significant “geostrategic value,” and help allies avoid vendor lock-in with proprietary systems.15 This policy is not merely about fostering a collaborative innovation environment; it is an instrument of foreign policy. The ultimate goal is to facilitate the export of the “American AI Technology Stack”—a complete package of hardware, data systems, models, and applications—to create a global alliance of nations built on US-developed technology and, by extension, American values.6
This strategy, however, is fraught with internal contradictions. The very nature of open-source software—its transparency and modifiability—is fundamentally at odds with the administration’s goal of controlling the ideological content of AI. Once a powerful model is released under an open-source license, the original developer and the US government lose all effective control over its subsequent use. Any user, anywhere in the world, can fine-tune the model to reintroduce the very “woke” elements the procurement order seeks to eliminate, or to serve any other purpose, rendering the “Ideological Neutrality” mandate unenforceable in the open ecosystem the plan champions.18
This presents a clear trade-off. The push for open-source AI is a calculated geopolitical move to establish a de facto global standard, creating a powerful network effect that could lock partner nations into the US technological sphere and outcompete both the EU’s regulated ecosystem and China’s state-controlled one. The “American AI Technology Stack” is the explicit packaging of this strategy, using the appeal of “openness” as a competitive tool.6 Yet, this strategy of weaponizing openness directly undermines the parallel domestic goal of ideological purity.
Furthermore, this policy creates significant challenges for the private sector it purports to liberate. While promoting innovation, the emphasis on open-source models introduces complex intellectual property risks. Companies that integrate open-source components into their proprietary systems may face legal obligations to disclose their own valuable code or extend open-source license terms to their commercial products.18 The risk of inadvertently infringing on IP or breaching complex licensing terms is magnified when models are trained on vast datasets of unknown or mixed provenance, creating a new layer of legal and compliance burdens that runs counter to the plan’s deregulatory ethos.18
The American approach, therefore, contains a fundamental and potentially self-defeating tension. It seeks to achieve global market dominance through deregulation and openness while simultaneously imposing rigid ideological controls on the domestic market, particularly the lucrative federal procurement sector. Global consumer and enterprise markets, especially in allied democracies, often demand models that are sensitive to the very issues of fairness and DEI that the US government’s policy rejects. This could force American developers into a difficult position: creating two distinct and costly product lines—a politically sanitized “Gov-AI” for federal contracts and a globally competitive “Global-AI” for the open market. This internal friction, born from the clash between market logic and ideological control, could ultimately fragment American R&D efforts and undermine the very goal of global dominance the Action Plan seeks to achieve.
Part II: The European Model: AI as a Regulated Product
2.1 The EU AI Act in 2025: From Text to Enforcement
While the United States pursues a strategy of deregulation, the European Union is moving in the opposite direction, operationalizing the world’s first comprehensive, legally binding framework for artificial intelligence. The EU AI Act (Regulation (EU) 2024/1689), which officially entered into force on August 1, 2024, is transitioning from legislative text to regulatory reality throughout 2025.22 This year marks a critical implementation phase, with key provisions becoming legally applicable on a staggered timeline.24
Two dates in 2025 are particularly significant. February 2, 2025, was the application date for some of the Act’s most crucial provisions: the prohibitions on certain “unacceptable risk” AI practices under Article 5 and the requirement for AI literacy under Article 4.22 Later in the year, on August 2, 2025, the rules governing General-Purpose AI (GPAI) models, the formal governance structures including the AI Board, and the framework for administrative fines and penalties will come into effect.23
The core philosophy of the AI Act is fundamentally different from the American approach. It treats AI not primarily as a tool for geopolitical competition, but as a product and service that must be safe for consumers and society. The Act is essentially a piece of product safety legislation, designed to protect the health, safety, and fundamental rights of EU citizens.25 Its horizontal, cross-sectoral nature is intended to create a predictable and harmonized legal environment for businesses operating across the 27 Member States, thereby fostering trust and legal certainty.28
2.2 The “Rights-First” Framework: Risk Tiers and Safeguards
The centerpiece of the EU AI Act is its tiered, risk-based approach, which tailors regulatory obligations to the level of potential harm an AI system could cause.25 AI applications are classified into one of four categories:
- Unacceptable Risk: These systems are considered a clear threat to EU values and fundamental rights and are therefore banned outright. The prohibitions under Article 5, which became effective in February 2025, include AI systems that deploy subliminal or manipulative techniques to distort behavior, exploit the vulnerabilities of specific groups (based on age, disability, etc.), use social scoring by public authorities, or create or expand facial recognition databases through the untargeted scraping of images from the internet or CCTV footage.22 Violations of these prohibitions carry the most severe penalties under the Act, with fines reaching up to €35 million or 7% of a company’s total worldwide annual turnover, whichever is higher.22
- High Risk: This category includes AI systems used in sensitive domains where they could have a significant impact on people’s lives or safety. Examples include AI used in critical infrastructure, medical devices, recruitment software, credit scoring, and law enforcement.26 These systems are not banned but are subject to a strict set of ex-ante and ongoing compliance obligations. Before being placed on the market, they must undergo rigorous risk assessments, use high-quality and representative training data to minimize bias, ensure detailed logging for traceability, maintain comprehensive technical documentation, provide clear information to users, and be designed for appropriate human oversight. They must also meet high standards of robustness, cybersecurity, and accuracy.26
- Limited Risk: This category covers AI systems where the main risk is a lack of transparency. The primary obligation is disclosure. For instance, users interacting with chatbots must be informed that they are communicating with a machine. AI-generated content, such as deepfakes, must be clearly labeled as such.26
- Minimal Risk: The vast majority of AI applications, such as AI-enabled video games or spam filters, fall into this category and are largely left unregulated by the Act.25
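The penalty ceiling attached to the Article 5 prohibitions is simple arithmetic: the higher of a flat €35 million and 7% of total worldwide annual turnover. A minimal sketch (the function name is illustrative, not from the Act):

```python
def article5_fine_cap(worldwide_turnover_eur: float) -> float:
    """Maximum Article 5 fine under the EU AI Act: EUR 35 million or
    7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

# A firm with EUR 1bn turnover: 7% (EUR 70M) exceeds the flat cap.
print(article5_fine_cap(1_000_000_000))  # 70000000.0

# A firm with EUR 100M turnover: the flat EUR 35M cap applies instead.
print(article5_fine_cap(100_000_000))    # 35000000.0
```

The "whichever is higher" structure means the flat cap binds only for firms with worldwide turnover below €500 million.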
A distinct set of rules applies to General-Purpose AI (GPAI) models, the powerful foundation models that underpin many AI applications. These rules, applicable from August 2, 2025, require GPAI providers to maintain technical documentation, comply with EU copyright law, and provide detailed summaries of the content used for training.23 A sub-category of GPAI models deemed to pose “systemic risk”—a designation based on factors including the computational power used for training (e.g., a threshold of having been trained using a cumulative amount of compute greater than 10²⁵ FLOPs)—faces even stricter obligations. These include conducting model evaluations, assessing and mitigating systemic risks, tracking and reporting serious incidents, and ensuring a high level of cybersecurity.32
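The compute threshold can be made concrete with a back-of-the-envelope check. The sketch below uses the common 6 × parameters × tokens approximation for dense-transformer training compute—a community heuristic, not a formula from the Act—to test a hypothetical model against the 10²⁵ FLOP presumption:

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # EU AI Act presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough heuristic (not from the Act): training compute for a dense
    transformer is often approximated as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

# Hypothetical 200B-parameter model trained on 15T tokens:
flops = estimate_training_flops(200e9, 15e12)
print(flops > SYSTEMIC_RISK_FLOP_THRESHOLD)  # True (~1.8e25 FLOPs)
```

Under this heuristic, models at the scale of today's largest frontier systems plausibly cross the line, while a 1B-parameter model trained on 1T tokens (~6 × 10²¹ FLOPs) falls several orders of magnitude short.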
2.3 The Enforcement Architecture: A Multi-Layered System
The enforcement of the AI Act is managed through a sophisticated, multi-layered governance structure that combines centralized EU oversight with national-level implementation.
- The European AI Office: Established within the European Commission, the AI Office is the central pillar of the new regulatory regime.28 It holds exclusive, direct supervisory and enforcement powers over GPAI models. It can conduct model evaluations, request information from providers, and impose sanctions for non-compliance.28 The AI Office is also tasked with fostering a trustworthy AI ecosystem, developing codes of practice in cooperation with stakeholders, and representing the EU on the international stage.25
- The European Artificial Intelligence Board: This body is composed of high-level representatives from each of the 27 EU Member States. Its primary function is to advise the Commission and ensure the consistent and effective application of the AI Act across the Union.28 It acts as a coordination hub for national authorities, facilitating the exchange of expertise and best practices.35
- National Competent Authorities: The day-to-day enforcement of the Act for the majority of AI systems (i.e., those that are not GPAI models) falls to authorities at the national level. By August 2, 2025, each Member State must designate at least two types of authorities: Market Surveillance Authorities, which are responsible for monitoring the market and enforcing the rules on AI systems, and Notifying Authorities, which are responsible for assessing and monitoring the independent third-party conformity assessment bodies that will certify high-risk AI systems.34
2.4 The Innovation vs. Regulation Dilemma: The “Brussels Effect” and Its Trade-offs
The EU’s comprehensive and stringent regulatory model is a strategic choice with significant global implications. The intended outcome is the “Brussels Effect,” whereby the EU’s high standards become the de facto global norm because multinational companies find it easier to adopt the strictest rules across all their operations rather than creating different products for different markets.25
The Act is not without mechanisms to support innovation. It mandates that Member States establish AI regulatory sandboxes by August 2026, creating controlled environments where companies can test innovative AI systems under the supervision of competent authorities without fear of immediate penalties.25 The Act also contains limited exemptions for open-source AI, although these are narrow and do not apply to high-risk systems or GPAI models with systemic risk, which must still comply with most obligations.33
Despite these provisions, the AI Act has faced persistent criticism that its heavy regulatory burden will stifle innovation, deter investment, and ultimately cause Europe to fall further behind the US and China in the global AI race.39 The central trade-off is clear: does prioritizing the protection of fundamental rights and establishing legal certainty necessarily come at the cost of technological leadership and economic competitiveness?40 This question lies at the heart of the debate over the EU’s role in the digital age.
The very complexity and rigor of the AI Act are giving rise to a new and powerful “Compliance-as-a-Service” industry. The extensive requirements for risk assessments, quality management, technical documentation, conformity assessments, and continuous monitoring, combined with the severe penalties for non-compliance, create a significant burden that is too complex and costly for most small and medium-sized enterprises—and even many large corporations—to manage internally.26 This economic reality is fueling a burgeoning market for specialized legal, technical, and consulting firms dedicated to navigating the intricacies of AI Act compliance, mirroring the growth of the privacy industry in the wake of the GDPR.
This dynamic, however, may lead to an unintended and paradoxical consequence. While the AI Act is designed to protect citizens and foster a trustworthy AI ecosystem, its high compliance costs could inadvertently centralize power in the hands of the very large technology companies it seeks to regulate. Well-resourced giants, primarily based in the US, are far better positioned to absorb the financial and administrative costs of compliance than smaller European startups.25 This could create a “regulatory moat,” where only a handful of “certified” high-risk or GPAI models from major global players become widely available on the EU market. Such an outcome would not only stifle competition but could also undermine the EU’s long-term goal of strategic autonomy, making European businesses more reliant on a few non-EU technology providers.39
Part III: The Chinese Model: AI as an Instrument of State Control
3.1 The “Control-First” Doctrine: AI Governance as an Extension of National Security
China’s approach to AI governance is fundamentally distinct from the models developed in the West. It is not a standalone policy area but is deeply interwoven with the state’s comprehensive framework for cybersecurity, data security, and, most importantly, information control.42 The governing philosophy is a delicate and constantly recalibrated balancing act: to unleash the immense innovative and economic potential of AI while ensuring that the technology serves, and never challenges, the strategic goals of the state, national security, and social stability.45
This approach has resulted in a pattern of cyclical regulation, where the balance between promotion and control shifts in response to both internal and external pressures. Chinese AI policy has evolved through several distinct phases: an initial “Go-Go Era” (2017-2020) of massive investment and minimal regulation to build an industrial base; a “Crackdown Era” (2020-2022) where the Communist Party reasserted control over the tech sector; a “Catch-Up Era” (late 2022-early 2025) of pragmatic loosening in response to the launch of ChatGPT and economic headwinds; and the current “Crossroads Era”.47 This latest phase is defined by a new confidence in its domestic capabilities, exemplified by the breakthrough of models like DeepSeek-R1, set against a backdrop of persistent economic fragility and intensifying geopolitical competition.47
3.2 Dissecting the 2025 Regulations: Labeling, Content Moderation, and “Dual Filing”
The Chinese regulatory framework is characterized by a series of targeted, vertical regulations rather than a single, horizontal law like the EU’s AI Act. The foundational rules are the Interim Measures for the Administration of Generative Artificial Intelligence Services (GenAI Measures), which came into effect in August 2023.42 These measures apply to all public-facing generative AI services in China and establish the core principle that service providers are legally responsible for the content their systems generate.48
In 2025, this framework was significantly strengthened by new rules on transparency and traceability. Effective September 1, 2025, the Measures for Labeling Artificial Intelligence-Generated Content and the accompanying mandatory national standard (GB 45438-2025) impose comprehensive labeling requirements.50 These rules mandate two types of labels:
- Explicit Labels: Visible indicators (text, graphics, etc.) that must be affixed to AI-generated content, especially if it could mislead or confuse the public.
- Implicit Labels: Technical metadata embedded within the content file, containing information such as the service provider’s name and a unique content ID to ensure traceability.
Furthermore, online distribution platforms like social media sites are required to implement technical mechanisms to detect these labels and categorize content as “confirmed,” “possible,” or “suspected” AI-generated, reinforcing the labeling at the point of distribution.50
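To make the two-label scheme concrete, the sketch below mocks up an implicit metadata label and a platform-side triage into the “confirmed / possible / suspected” categories. All field names and the triage logic are illustrative assumptions, not the actual GB 45438-2025 schema:

```python
import uuid

def make_implicit_label(provider: str) -> dict:
    """Illustrative sketch of the metadata an implicit label might carry:
    a generation flag, the service provider's name, and a unique content
    ID for traceability. Field names here are hypothetical."""
    return {
        "aigc": True,
        "provider": provider,
        "content_id": str(uuid.uuid4()),
    }

def classify(has_implicit_label: bool,
             uploader_declared: bool,
             heuristics_flag: bool) -> str:
    """Platform-side triage mirroring the categories in the Measures."""
    if has_implicit_label:
        return "confirmed"   # verifiable embedded metadata found
    if uploader_declared:
        return "possible"    # uploader declared the content as AI-generated
    if heuristics_flag:
        return "suspected"   # platform's own detection heuristics fired
    return "unlabeled"

label = make_implicit_label("ExampleAI Co.")
print(classify(has_implicit_label=True,
               uploader_declared=False,
               heuristics_flag=False))  # confirmed
```

The design point the sketch captures is the division of labor: providers embed machine-readable provenance at generation time, and distribution platforms independently verify and surface it at the point of consumption.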
A central pillar of the control framework is the rigorous system of content moderation and pre-market review. Service providers are legally obligated to prevent the generation of illegal content, defined broadly to include anything that threatens national security, undermines state power, or deviates from “socialist core values”.48 Any service deemed to have “public opinion attributes or social mobilization capabilities”—a vaguely defined but powerful category—is subject to a mandatory security assessment and must complete an algorithm filing with the Cyberspace Administration of China (CAC). This “Dual Filing” requirement gives the state deep visibility and control over the most influential AI models before they are released to the public.45
3.3 The Cyberspace Administration of China (CAC): The Central Nervous System of AI Oversight
At the heart of China’s AI governance model is the Cyberspace Administration of China (CAC). The CAC, working in concert with other powerful bodies like the Ministry of Industry and Information Technology (MIIT) and the Ministry of Public Security (MPS), functions as the primary regulator, standard-setter, and enforcer for the AI industry.45
Its role is dominant and pervasive. The CAC oversees the critical security assessment and algorithm filing processes, which effectively serve as a gatekeeping mechanism for public-facing AI services.48 This gives the agency direct insight into the technical architecture, training data, and intended purpose of new models, allowing it to enforce ideological alignment from the design phase onward. The CAC’s enforcement powers are extensive. It can issue warnings, order the suspension of services, and levy fines under the authority of China’s broader legal framework, including the Cybersecurity Law and the Personal Information Protection Law (PIPL).45
Crucially, the CAC’s authority is not confined to China’s borders. The GenAI Measures grant it extraterritorial reach, empowering it to take “technical and other necessary measures” (such as blocking access) against foreign-based AI services that are provided to the public in China but fail to comply with Chinese regulations.42 This was demonstrated in 2025 through the “Qinglang” series of special enforcement actions, which specifically targeted AI-generated misinformation as a key priority.50
3.4 The State-Innovation Symbiosis: Balancing Ambition with Discipline
Despite the tight grip of the state, China’s model is not solely about restriction. It is a symbiotic relationship where the state actively fosters innovation to achieve its national ambition of becoming the global AI leader by 2030.46 This promotion takes many forms, including massive state-led investment in R&D, national initiatives to catalog and utilize public data resources for training models, and even a surprisingly supportive judicial stance on granting copyright protection to AI-generated content, which contrasts sharply with the US position.46
To avoid stifling development, the regulations are carefully crafted to exempt internal, non-public-facing research and development from the most onerous requirements.45 However, this support is strictly conditional. The unwavering requirement for all public-facing AI to align with the “correct political direction” and reflect “socialist core values,” combined with the CAC’s deep oversight, ensures that technological advancement is always disciplined by and subordinated to the party’s agenda.46
This “agile” regulatory posture is a deliberate strategic choice. Unlike the EU’s slow, consensus-driven legislative process, China’s rapid, iterative, and targeted rule-making allows the state to react swiftly to technological breakthroughs and to selectively tighten or loosen controls based on its assessment of economic needs and geopolitical conditions.46 This creates a strategically ambiguous and unpredictable environment that keeps domestic companies closely attuned to state signaling and makes it exceedingly difficult for foreign competitors to establish a stable, long-term compliance strategy.
Ultimately, China’s control-first model is forging a distinct, self-contained AI ecosystem. The stringent requirements for content filtering, data localization, CAC security reviews, and ideological alignment are creating what can be described as a “Glass Wall” or a “Model Curtain”.45 While Chinese AI models may achieve technical parity or even superiority over their Western counterparts, they are built on a foundation of control that makes them fundamentally incompatible with and untrustworthy to democratic societies. This is leading to a profound bifurcation of the global AI landscape, not just in hardware due to semiconductor restrictions, but at the level of the foundational models themselves. Global companies will find it impossible to deploy a single AI strategy across both Chinese and Western markets, forcing a costly and inefficient duplication of effort and accelerating a global technological decoupling.54
Part IV: The British Model: A Pro-Innovation Gambit
4.1 The UK’s Post-Brexit Path: A “Flexibility-First” Philosophy
Having formally exited the European Union, the United Kingdom has deliberately charted its own course on AI governance, seeking to position itself as a nimble and attractive global hub for AI innovation. The UK’s strategy is a calculated rejection of the EU’s comprehensive, prescriptive legal framework, opting instead for a “pro-innovation” and “flexibility-first” approach.55 The explicit goal is to leverage its regulatory autonomy to become an “AI superpower,” fostering an environment that encourages investment, talent, and rapid development by minimizing upfront regulatory burdens.55
The initial foundation of this approach, outlined in a 2023 government white paper, is a non-statutory framework built upon five cross-sectoral principles intended to guide existing regulators. These principles are: (1) Safety, security, and robustness; (2) Appropriate transparency and explainability; (3) Fairness; (4) Accountability and governance; and (5) Contestability and redress.55 The core idea was to empower regulators with domain-specific expertise to apply these high-level values in a context-specific manner, avoiding a one-size-fits-all law.
4.2 From Regulation to Promotion: The 2025 “AI Opportunities Action Plan”
The UK’s strategy underwent a significant evolution with the publication of the “AI Opportunities Action Plan” in January 2025. This plan signaled a crucial shift in the government’s posture, moving beyond a light-touch approach to regulation and toward the active promotion of AI as a primary driver of national economic growth.61
This new emphasis fundamentally alters the role of the UK’s sectoral regulators, such as the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA). Under the plan, these bodies are now expected to prioritize “enabling safe AI innovation” as a core part of their statutory “Growth Duty”.61 Instead of acting primarily as enforcement-focused watchdogs, they are being tasked with becoming facilitators of AI adoption within their respective industries. To ensure accountability, regulators are now required to publish annual reports detailing, with transparent metrics, how their activities have enabled AI-driven innovation and growth.58
Most radically, the plan introduces the possibility of a central override mechanism. It suggests that if existing regulators are deemed to be insufficiently promoting innovation—perhaps due to a lower risk tolerance—the government could empower a new central body to intervene. This body could “override” existing sector-specific regulations by issuing pilot sandbox licenses for non-compliant AI products, with the government itself assuming the associated liability.61 This marks a profound change, where the ambition for economic growth through AI could formally supersede the traditional regulatory mandate of risk mitigation and rights protection.
4.3 The Tension Between Principles and Law: A Contentious Debate
The UK’s determinedly non-legislative stance has created a persistent and contentious debate within its own political and industrial landscape. The government has consistently resisted calls for a broad, statutory AI law, arguing that premature legislation would “smother in bureaucracy” and stifle the very innovation it seeks to foster.1 This “light-touch” philosophy was publicly reaffirmed in a joint press conference with the US administration in February 2025, where Prime Minister Keir Starmer stated, “Instead of over-regulating these new technologies, we’re seizing the opportunities they offer”.59
However, this position faces mounting pressure from legislators, civil society groups, and even parts of the industry. Critics argue that the reliance on non-binding principles and the discretion of disparate regulators creates significant “regulatory uncertainty,” which can itself deter investment and leave citizens without clear, enforceable rights.59 This tension is vividly illustrated by the reintroduction of the “Artificial Intelligence (Regulation) Bill” in the House of Lords on March 4, 2025. Although a private member’s bill without government backing, its proposal to create a statutory central AI Authority—akin to the EU AI Office—and to codify the five principles into law reflects a strong appetite for a more robust governance framework.59
In response to this pressure, the Labour government has signaled its intent to introduce some form of binding legislation within 2025. However, it is expected to be narrowly targeted at the developers of the most powerful frontier AI models, rather than a comprehensive, horizontal act, leaving the exact shape of the UK’s future legal landscape uncertain.58
4.4 Navigating the Middle Ground: Risks and Opportunities
The UK’s “flexibility-first” model is a high-stakes bet on regulatory arbitrage. The strategy is predicated on the belief that by offering a more permissive and agile regulatory environment than the EU, the UK can carve out a niche as the premier destination for AI research, development, and investment in Europe.55 The opportunity lies in becoming a global hub for regulatory sandboxing and attracting the talent and capital that might be deterred by the EU’s heavy compliance burdens or the US’s emerging ideological battles.
The primary risk, however, is that this “middle way” proves to be an unstable and ultimately untenable position. The UK lacks both the immense market power of the United States and the regulatory gravity of the European Union. The extraterritorial scope of the EU AI Act means that many UK-based businesses serving European customers will have no choice but to comply with its stringent requirements, regardless of the UK’s lighter-touch domestic regime. This could render the UK’s flexibility largely irrelevant in practice for any company with global ambitions, a phenomenon known as the “Brussels Effect”.56 This situation creates significant legal uncertainty for businesses and risks leaving UK citizens with weaker and less coherent protections than their European counterparts.61
This inherent instability suggests the UK’s current model may be a transitional phase rather than a permanent equilibrium. The country is subject to powerful external forces pulling it in opposite directions. On one hand, the practical realities of trade and market access create a strong pull toward regulatory convergence with the EU to ensure interoperability and reduce compliance friction for its businesses.58 On the other hand, its close political, strategic, and economic alliance with the US creates an equally strong pull toward the American “market-first” model, a preference clearly signaled in bilateral government statements.59 Over the next five to ten years, as global AI standards solidify and geopolitical alignments harden, the UK will likely be forced to navigate away from its ambiguous middle ground and align more definitively with one of the two major Western regulatory blocs.
Part V: The Global AI Governance Matrix: A Comparative Analysis
5.1 Mapping Strategic Divergence: Ideology, Enforcement, and Economic Objectives
The year 2025 has crystallized the strategic divergence among the world’s leading AI powers. The governance frameworks of the United States, European Union, China, and United Kingdom are not merely different sets of rules; they are manifestations of deeply held, and often conflicting, national ideologies, geopolitical ambitions, and economic objectives. The US model is explicitly geared toward maintaining geostrategic and market dominance through rapid, private-sector-led innovation, viewing deregulation as a primary tool.1 The EU’s framework is an exercise in normative power, aiming to export its values by establishing a global gold standard for rights-protecting, ethical AI, even at the potential cost of short-term innovation.25 China’s approach subordinates all technological and economic goals to the imperatives of state control, social stability, and national security, creating a tightly managed, state-centric ecosystem.43 The UK, in its post-Brexit search for a unique global role, has gambled on a model of regulatory flexibility, prioritizing economic growth and agility above all else.55 These foundational differences are reflected in every aspect of their governance architectures, from risk classification to enforcement mechanisms.
5.2 Identifying Tactical Convergence: Frontier Models, Risk Management, and Sandboxes
Despite these profound strategic and ideological divides, areas of tactical convergence are emerging. This is not the result of deliberate coordination but rather a reflection of all four jurisdictions confronting the same novel and complex technological challenges. As a result, they are independently arriving at functionally similar solutions.
One of the most significant areas of convergence is the governance of frontier or general-purpose AI (GPAI) models. Recognizing that these highly capable and adaptable models pose unique and systemic risks, the US, EU, and UK are all developing specific regulatory regimes to govern their development and deployment, separate from rules for narrower AI applications.23 China’s category of services with “public opinion attributes or social mobilization capabilities” serves a similar function, singling out the most powerful systems for heightened scrutiny.48
There is also a shared, albeit differently articulated, adoption of risk-based approaches. The EU’s four-tier system is the most explicit and formalized.25 However, the US Action Plan, while broadly deregulatory, still identifies and prioritizes specific high-risk domains for enhanced oversight, such as national security, biosecurity, and critical infrastructure.6 China’s proposed “negative list” for AI applications is another form of risk-tiering, subjecting certain activities to stricter pre-approval requirements.33 The UK tasks its sectoral regulators with performing context-specific risk assessments based on its guiding principles.55
Finally, regulatory sandboxes have emerged as a widely accepted tool for balancing innovation with safety. The EU AI Act mandates their creation, the UK’s Action Plan champions them, and the US plan calls for their use to test new AI solutions in real-world environments.12 This indicates a global consensus that this particular policy mechanism is valuable for managing the uncertainty of a rapidly evolving technology.
5.3 Table 1: Comparative AI Governance Matrix (2025)
The following table provides a systematic comparison of the four primary AI governance models as they stand in 2025, summarizing their core features across key dimensions.
| Dimension | United States (Market-First) | European Union (Rights-First) | People’s Republic of China (Control-First) | United Kingdom (Flexibility-First) |
| --- | --- | --- | --- | --- |
| Governing Philosophy | Geostrategic dominance, market-led innovation, deregulation. 1 | Protection of fundamental rights, legal certainty, ethical leadership. 25 | State control, social stability, national security, technological self-sufficiency. 45 | Pro-innovation, economic growth, regulatory flexibility, sector-specific approach. 55 |
| Primary Legal Instrument (2025) | “America’s AI Action Plan”; Executive Orders (14179, “Woke AI” EO, etc.). 1 | The AI Act (Regulation (EU) 2024/1689). 23 | Generative AI Measures; Labeling Rules (GB 45438-2025); Data Security Law. 42 | “AI Opportunities Action Plan”; Non-statutory principles; Existing sectoral laws. 59 |
| Enforcement Architecture | Decentralized; existing federal agencies (FTC, Commerce); OMB guidance for procurement. No central AI body. 1 | Centralized & multi-layered: European AI Office (for GPAI), AI Board, National Competent Authorities. 28 | Centralized: Cyberspace Admin. of China (CAC) is dominant; other ministries support. 45 | Decentralized: Existing sectoral regulators (ICO, FCA, Ofcom) tasked with promotion and oversight. 60 |
| Risk Categorization | Informal; prioritizes national security, biosecurity, critical infrastructure. Rejects risk frameworks based on “misinformation” or “DEI”. 8 | Formal 4-tier system: Unacceptable (banned), High, Limited, Minimal. 25 | Implicit: “Public opinion/social mobilization” services get special scrutiny. Negative list approach proposed. 33 | Context-specific, determined by sectoral regulators based on the five principles. 55 |
| Stance on Open-Source Models | Strategic Promotion: Encouraged as a tool for geopolitical influence and to set global standards. 15 | Cautious Exemption: Limited exemptions from some obligations, but not for high-risk systems or GPAI with systemic risk. 33 | Encouraged (for innovation) but Controlled: Must still comply with content and security rules if public-facing. 33 | Generally encouraged as part of the pro-innovation stance. 55 |
| Public Sector AI Rules | Strict Procurement Rules: “Preventing Woke AI” EO imposes ideological neutrality and truth-seeking principles on federal LLM procurement. 6 | Fully Applicable: AI Act applies equally to public and private sector deployers of AI systems. 53 | Primarily Private Sector Focus: Main regulations target public-facing commercial services, leaving government use less transparently regulated. 45 | Governed by the same principles-based, sector-specific approach. 55 |
| Extraterritorial Scope | Limited: Primarily through export controls and influencing allies to adopt the “American AI Stack”. 7 | Extensive (“Brussels Effect”): Applies to any AI system placed on the EU market or whose output is used in the EU. 31 | Extensive: Applies to any service targeting the Chinese public; CAC can take action against foreign providers. 42 | Limited: Primarily through influence in global standards bodies. Subject to the EU’s extraterritoriality. 56 |
Part VI: Democratic Implications: Freedom, Fairness, and the Future of Governance
The divergent paths of AI governance are not merely technical or economic policy choices; they carry profound implications for the health and future of democratic societies. The design of these regulatory frameworks directly impacts fundamental rights, the integrity of the public sphere, and the delicate balance of power between the citizen, the state, and the corporation.
6.1 AI and the Public Sphere: Free Expression, Disinformation, and Censorship
Each governance model interacts with the public sphere in a way that reflects its core ideology, with significant consequences for freedom of expression.
The United States model presents a complex paradox. Its “Preventing Woke AI” directive, while framed in the language of “truth” and “neutrality,” constitutes a direct government intervention into the substance of AI-generated speech.2 By mandating the removal of certain ideological viewpoints from federally procured models and directing NIST to revise its framework to exclude concepts like “misinformation,” the policy attempts to use the power of the state to define and enforce a particular version of acceptable discourse. This raises significant First Amendment concerns and could create a chilling effect on the development of AI that engages with a wide range of social and political topics.2
The European Union, through the AI Act, seeks to empower citizens rather than control content. Its transparency requirements for deepfakes and chatbots are designed to give individuals the context they need to critically evaluate information and make informed decisions.26 While the Act’s primary focus is on mitigating risks to safety and fundamental rights, its broad principles could be interpreted by national authorities to justify content moderation measures that impact expression, creating a potential area of tension.
The Chinese model is explicitly and unapologetically designed for censorship and narrative control. The legal requirement for AI systems to adhere to “socialist core values,” combined with the CAC’s pervasive oversight, institutionalizes political control over the information environment.48 In this framework, AI is not a tool for open discourse but an instrument for enforcing ideological conformity and conducting mass surveillance, fundamentally subverting the principles of free expression.46
The United Kingdom’s flexible, regulator-led approach leaves the handling of complex issues like AI-driven disinformation largely to the discretion of different sectoral bodies.55 This creates the risk of an inconsistent and fragmented response, potentially leaving significant gaps in the protection of the public sphere from manipulation.
6.2 Algorithmic Justice: Bias, Discrimination, and the Right to Redress
The potential for AI systems to perpetuate and amplify existing societal biases is one of the most critical challenges for democratic governance. The four models address this challenge in starkly different ways.
The US Action Plan actively moves to dismantle the conceptual tools used to address algorithmic bias. By rejecting DEI-focused risk management in its procurement standards and NIST framework revisions, and by weakening consumer protection bodies, the plan risks exacerbating algorithmic discrimination in high-stakes areas like credit, housing, and employment.2 The policy’s premise of “ideological neutrality” fails to recognize the well-documented reality that historically generated data, if used without corrective measures, will inevitably reproduce historical patterns of discrimination.
The EU AI Act, in contrast, places algorithmic fairness at its core. It establishes legally binding obligations for developers of high-risk systems to use high-quality, representative datasets and to implement human oversight precisely to minimize the risk of biased and discriminatory outcomes.26 Crucially, it provides a legal framework that gives individuals a clear basis to contest and seek redress for harmful decisions made by AI systems.
China’s framework presents a troubling contradiction. While its regulations pay lip service to preventing discrimination, the state’s widespread use of AI for social scoring and mass surveillance constitutes a system of institutionalized, state-sanctioned discrimination on a massive scale.22 Here, AI becomes a tool to enforce social and political hierarchies, not to promote equity.
In the UK, the principles of “Fairness” and “Contestability and redress” are central to the government’s stated policy.55 However, their non-statutory nature and the reliance on the discretion of individual regulators mean that the enforceability of these rights is potentially weak and inconsistent across different sectors of the economy.69
This divergence reveals that the very definition of “bias” has become a key battleground in the ideological competition over AI. The US Action Plan’s effort to reframe bias as a “woke” political imposition and replace it with a narrow concept of “neutrality” is a profound attempt to delegitimize years of research and policy work on fairness and accountability.14 This creates a direct philosophical clash with the EU’s framework, which is explicitly built on the legal and ethical imperative to prevent discrimination based on protected characteristics like race and gender.26 As these competing models and their associated technologies are exported globally, the world will face a choice between fundamentally different ethical operating systems. The future of democratic and equitable AI will depend significantly on which definition of “fairness” prevails in global standards.
6.3 The State, the Corporation, and the Citizen: Shifting Power Balances
Ultimately, each AI governance model reconfigures the triangular relationship between the individual citizen, the state, and the corporation. An analysis across the models reveals an emerging “democratic deficit” in AI governance, where in most cases, the power of the individual is diminished relative to that of large institutions.
The US model, with its emphasis on deregulation and market-led innovation, clearly empowers large corporations, granting them greater freedom to develop and deploy AI with fewer constraints and less oversight.1 The Chinese model represents the opposite pole, concentrating immense power in the hands of the state and its security apparatus, using AI as a tool for social management.45 The UK’s light-touch, pro-growth approach risks leaving citizens with uncertain and potentially weaker protections, caught between powerful corporate actors and a government hesitant to legislate.59
Only the EU’s model is explicitly designed to empower the citizen by codifying a set of fundamental rights and establishing robust regulatory oversight.62 Yet, as previously discussed, this approach carries the unintended risk of entrenching the market power of the few large corporations that can afford the high costs of compliance. This suggests a troubling global trend: the governance of the most transformative technology of our era is largely happening *to* citizens, rather than *by* or *for* them. This deficit poses a long-term, systemic challenge to the principles of democratic accountability and self-determination.
Part VII: Geopolitical Forecast and Strategic Recommendations
7.1 Scenarios for 2030: Fragmentation, Patchwork, or Bipolarity?
The divergent trajectories of AI governance established in 2025 are setting the stage for several plausible futures for the global technological and political landscape over the next five to ten years. The interplay between strategic competition, economic incentives, and the pace of technological change will likely lead to one of the following scenarios.
- Scenario 1: The Fragmented World (Techno-Blocs). In this scenario, the current divergence deepens and solidifies, leading to the emergence of three distinct and largely incompatible regulatory and technological spheres of influence.54 The world would be carved into a US-led bloc championing the open, market-driven “American Stack”; an EU-led bloc operating under the “Brussels Effect” of its rights-based, comprehensive regulatory regime; and a China-led bloc expanding its state-controlled “Digital Silk Road.” In this future, technical interoperability between blocs would be extremely low, compliance costs for multinational corporations would be prohibitively high, and meaningful global collaboration on AI safety and ethics would be minimal. Geopolitical friction would be the default state.
- Scenario 2: The Patchwork of Interoperability. This more optimistic scenario posits that despite profound strategic and ideological differences, pragmatic cooperation prevails at the technical level. Nations would find common ground on specific issues, leading to a “noodle bowl” of overlapping bilateral and multilateral agreements on technical standards (through bodies like ISO and IEEE), risk management methodologies, and safety protocols for the most powerful GPAI models.73 This would not resolve the fundamental ideological divides but would create a functional patchwork of interoperability that reduces friction for global trade and allows for limited collaboration on shared risks, preventing a complete fracture of the global digital ecosystem.
- Scenario 3: A Bipolar AI Cold War. This scenario sees the intensifying US-China competition as the single overwhelming force shaping the global landscape. The nuanced positions of the EU and UK would become less relevant as they are pressured to align more closely with the United States to form a unified democratic-technology bloc against the Sino-Russian axis.76 This would lead to a stark technological decoupling, with the creation of parallel AI supply chains (from semiconductors to cloud infrastructure to foundation models), competing technical standards, and an escalating AI arms race that encompasses economic, intelligence, and military domains.54
7.2 The Battle for the “Global South”: Exporting Regulatory Models
A key arena where these geopolitical dynamics will play out is the “Global South.” The major powers are already using their AI governance models as a form of soft power and a tool of foreign policy, competing to have their respective approaches adopted by developing nations across Asia, Africa, and Latin America.71 This is a race to set the default operating system for the next generation of digital infrastructure worldwide. The United States is actively promoting its “full-stack AI export packages,” offering a model of rapid innovation and market integration.7 China is extending its Digital Silk Road, providing technology and infrastructure often bundled with its model of state surveillance and control.80 The European Union leverages its “Brussels Effect” and development aid, offering a model built on rights, legal certainty, and ethical governance. The choices these developing nations make will not only shape their own societies but will also determine the future balance of power in the global AI ecosystem.
7.3 Strategic Recommendations for Policymakers and Industry
Navigating this complex and contested landscape requires foresight and strategic action from both public and private sector leaders.
- For Policymakers:
- Prioritize Technical Interoperability: Even amidst ideological competition, governments should actively seek to build bridges on technical standards for AI safety, security, and risk management. Supporting multi-stakeholder standards development organizations can create a common language that reduces friction and prevents a complete technological fracture.
- Strengthen Democratic Alliances: The US and EU, in particular, must deepen their collaboration through forums like the Trade and Technology Council (TTC). Forging a common baseline for rights-protecting, democratic AI governance is the most effective counterweight to the spread of authoritarian technology models.
- Invest in Sovereign Capacity: All nations, especially middle powers, must invest heavily in domestic AI talent, research, and infrastructure (including compute and high-quality data sets). This is essential to avoid becoming strategically dependent on one of the major blocs and to retain a degree of policy autonomy.
- For Industry Leaders:
- Adopt a “Highest Common Denominator” Compliance Strategy: For multinational corporations, the most prudent and sustainable strategy is to align their global AI governance and product development with the strictest, most extraterritorial regime—which is currently the EU AI Act.42 Designing for compliance with EU standards from the outset will ensure market access and minimize the risk of future legal and reputational damage.
- Invest in AI Governance as a Core Business Function: Navigating the fragmented regulatory landscape is no longer a peripheral legal task. Companies must build robust, dedicated AI ethics and governance teams with the expertise and authority to oversee the entire AI lifecycle, from data acquisition and model training to deployment and post-market monitoring.
- Engage Proactively in Standards-Setting: The rules of the road for AI are being written now. Industry must actively and constructively participate in national and international standards-setting bodies to help shape practical, effective, and interoperable rules that foster both innovation and trust.
Conclusion: Synthesizing the 2025 Landscape and Projecting the Path Forward
The year 2025 has laid bare the deep-seated divisions in how the world’s major powers intend to govern artificial intelligence. The strategic divergence between the American market-first, the European rights-first, the Chinese control-first, and the British flexibility-first models is not a temporary misalignment but a reflection of fundamental competition over economics, ideology, and the future international order. While tactical convergence on shared technical problems offers a glimmer of hope for a functional global ecosystem, the dominant trend is toward fragmentation and geopolitical rivalry.
The democratic implications are stark. With the notable exception of the European Union’s ambitious—if imperfect—framework, the prevailing models of AI governance tend to concentrate power in the hands of either the state or large corporations, often at the expense of individual rights and democratic accountability. The very language of AI ethics, including the definition of “fairness” and “bias,” has become a contested space.
The path forward is fraught with uncertainty. The world is at a crossroads, facing a choice between a fragmented landscape of competing techno-blocs, a functional but complex patchwork of interoperable standards, or a tense bipolar AI Cold War. The decisions made today by policymakers, industry leaders, and civil society will determine which of these futures materializes. Building a future where AI promotes human flourishing and democratic values, rather than undermining them, will require a renewed commitment to international cooperation, a clear-eyed assessment of the risks, and a steadfast defense of the fundamental rights that must guide the development of this transformative technology.
Works Cited
- White House Issues Action Plan, Three Executive Orders on Artificial …, accessed August 2, 2025, https://www.maynardnexsen.com/publication-white-house-issues-action-plan-three-executive-orders-on-artificial-intelligence
- What to make of the Trump administration’s AI Action Plan – Brookings Institution, accessed August 2, 2025, https://www.brookings.edu/articles/what-to-make-of-the-trump-administrations-ai-action-plan/
- AI Under the Spotlight: Key Insights Ahead of the White House Action Plan, accessed August 2, 2025, https://www.workforcebulletin.com/ai-under-the-spotlight-key-insights-ahead-of-the-white-house-action-plan
- AI Action Plan (OSTP 2025) – Center for AI and Digital Policy, accessed August 2, 2025, https://www.caidp.org/public-voice/ai-action-plan-ostp-2025/
- Removing Barriers to American Leadership in Artificial Intelligence – The White House, accessed August 2, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
- White House Releases America’s AI Action Plan and Accompanying AI Executive Orders, accessed August 2, 2025, https://www.quarles.com/newsroom/publications/white-house-releases-americas-ai-action-plan-and-accompanying-ai-executive-orders
- The Trump Administration’s 2025 AI Action Plan – Winning the Race: America’s AI Action Plan – and Related Executive Orders | Data Matters Privacy Blog, accessed August 2, 2025, https://datamatters.sidley.com/2025/07/30/the-trump-administrations-2025-ai-action-plan-winning-the-race-americas-ai-action-plan-and-related-executive-orders/
- Trump Administration Issues AI Action Plan and Series of AI Executive Orders, accessed August 2, 2025, https://www.insideprivacy.com/artificial-intelligence/trump-administration-issues-ai-action-plan-and-series-of-ai-executive-orders/
- President Trump AI Action Plan Key Insights – Latham & Watkins LLP, accessed August 2, 2025, https://www.lw.com/en/insights/president-trump-ai-action-plan-key-insights
- America charts ‘fundamentally divergent path’ from EU on AI with new action plan, accessed August 2, 2025, https://www.pinsentmasons.com/out-law/news/ai-action-plan-white-house
- A New Era for U.S. AI Policy: How America’s AI Action Plan Will Shape Industry and Government | Consumer Finance Monitor, accessed August 2, 2025, https://www.consumerfinancemonitor.com/2025/07/28/a-new-era-for-u-s-ai-policy-how-americas-ai-action-plan-will-shape-industry-and-government/
- Trump Administration Issues AI Action Plan and Series of AI Executive Orders, accessed August 2, 2025, https://www.insidegovernmentcontracts.com/2025/07/trump-administration-issues-ai-action-plan-and-series-of-ai-executive-orders/
- America’s AI Action Plan: What’s In, What’s Out, What’s Next | Insights | Holland & Knight, accessed August 2, 2025, https://www.hklaw.com/en/insights/publications/2025/07/americas-ai-action-plan-whats-in-whats-out-whats-next
- What the New US AI Law Means for Real Deployments | NeuralTrust, accessed August 2, 2025, https://neuraltrust.ai/blog/what-the-new-us-ai-law-means
- Trump Administration Releases AI Action Plan and Issues Executive Orders to Promote Innovation – O’Melveny, accessed August 2, 2025, https://www.omm.com/insights/alerts-publications/trump-administration-releases-ai-action-plan-and-issues-executive-orders-to-promote-innovation/
- Trump Administration Releases Sweeping AI Action Plan | Fenwick, accessed August 2, 2025, https://www.fenwick.com/insights/publications/trump-administration-releases-sweeping-ai-action-plan
- White House Releases AI Action Plan: Key Legal and Strategic Takeaways for Industry, accessed August 2, 2025, https://www.skadden.com/insights/publications/2025/07/the-white-house-releases-ai-action-plan
- “Winning the Race: America’s AI Action Plan” – Key Pillars, Policy …, accessed August 2, 2025, https://www.ropesgray.com/en/insights/alerts/2025/07/winning-the-race-americas-ai-action-plan-key-pillars-policy-actions-and-future-implications
- New Federal AI Action Plan Prioritizes Deregulation, Infrastructure, and Global Leadership, accessed August 2, 2025, https://www.mofo.com/resources/insights/250728-new-federal-ai-action-plan-prioritizes-deregulation
- The Trump Administration’s 2025 AI Action Plan – Winning the Race: America’s AI Action Plan – and Related Executive Orders | Insights | Sidley Austin LLP, accessed August 2, 2025, https://www.sidley.com/en/insights/newsupdates/2025/07/the-trump-administrations-2025-ai-action-plan
- US: What the New White House AI Action Plan and Executive Order Mean for Export Controls, accessed August 2, 2025, https://sanctionsnews.bakermckenzie.com/us-what-the-new-white-house-ai-action-plan-and-executive-order-mean-for-export-controls/
- EU AI Act: Ban on certain AI practices and requirements for AI literacy come into effect, accessed August 2, 2025, https://www.mayerbrown.com/en/insights/publications/2025/01/eu-ai-act-ban-on-certain-ai-practices-and-requirements-for-ai-literacy-come-into-effect
- Implementation Timeline | EU Artificial Intelligence Act, accessed August 2, 2025, https://artificialintelligenceact.eu/implementation-timeline/
- AI Act implementation timeline | Think Tank – European Parliament, accessed August 2, 2025, https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)772906
- EU Artificial Intelligence Act | Up-to-date developments and …, accessed August 2, 2025, https://artificialintelligenceact.eu/
- AI Act | Shaping Europe’s digital future – European Union, accessed August 2, 2025, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- The European Union AI Act: premature or precocious regulation? – Bruegel, accessed August 2, 2025, https://www.bruegel.org/analysis/european-union-ai-act-premature-or-precocious-regulation
- European AI Office | Shaping Europe’s digital future, accessed August 2, 2025, https://digital-strategy.ec.europa.eu/en/policies/ai-office
- AI governance: EU and US converge on risk-based approach amid stark differences, accessed August 2, 2025, https://www.hertie-school.org/en/digital-governance/research/blog/detail/content/ai-governance-eu-and-us-converge-on-risk-based-approach-amid-stark-differences
- EU AI Act: Summary & Compliance Requirements – ModelOp, geopend op augustus 2, 2025, https://www.modelop.com/ai-governance/ai-regulations-standards/eu-ai-act
- The EU AI Act – Ten key things to know – TLT LLP, geopend op augustus 2, 2025, https://www.tlt.com/insights-and-events/insight/the-eu-ai-act—ten-key-things-to-know/
- EU AI Act News: Rules on General-Purpose AI Start Applying, Guidelines and Template for Summary of Training Data Finalized | Mayer Brown – JDSupra, geopend op augustus 2, 2025, https://www.jdsupra.com/legalnews/eu-ai-act-news-rules-on-general-purpose-2407805/
- Navigating AI’s uncharted waters: Insights from China’s Model AI Law, EU AI Act | IAPP, geopend op augustus 2, 2025, https://iapp.org/news/a/navigating-ai-s-uncharted-waters-insights-from-china-s-model-ai-law-eu-ai-act
- Governance and enforcement of the AI Act | Shaping Europe’s digital future, geopend op augustus 2, 2025, https://digital-strategy.ec.europa.eu/en/policies/ai-act-governance-and-enforcement
- The AI Office – What You Need to Know – WILLIAM FRY, geopend op augustus 2, 2025, https://www.williamfry.com/knowledge/the-ai-office-what-you-need-to-know/
- Overview of all AI Act National Implementation Plans | EU Artificial Intelligence Act, geopend op augustus 2, 2025, https://artificialintelligenceact.eu/national-implementation-plans/
- Transatlantic AI Governance – Strategic Implications for U.S. — EU Compliance – Kslaw.com, geopend op augustus 2, 2025, https://www.kslaw.com/news-and-insights/transatlantic-ai-governance-strategic-implications-for-us-eu-compliance
- The EU AI Act: Application to Open-Source Projects – Orrick, geopend op augustus 2, 2025, https://www.orrick.com/en/Insights/2024/09/The-EU-AI-Act-Application-to-Open-Source-Projects
- The EU’s AI Power Play: Between Deregulation and Innovation, geopend op augustus 2, 2025, https://carnegieendowment.org/research/2025/05/the-eus-ai-power-play-between-deregulation-and-innovation?lang=en
- The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist | Lawfare, geopend op augustus 2, 2025, https://www.lawfaremedia.org/article/eus-ai-act-barreling-toward-ai-standards-do-not-exist
- The False Choice Between Digital Regulation and Innovation | Bradford – Scholarship Archive, geopend op augustus 2, 2025, https://scholarship.law.columbia.edu/context/faculty_scholarship/article/5567/viewcontent/Bradford_The_False_Choice_Between_Digital_Regulation_and_Innovation.pdf
- AI Watch: Global regulatory tracker – China | White & Case LLP, geopend op augustus 2, 2025, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china
- A Comparative Analysis of AI Governance Frameworks | Washington Journal of Law, Technology & Arts, geopend op augustus 2, 2025, https://wjlta.com/2024/07/09/a-comparative-analysis-of-ai-governance-frameworks/
- Balancing Innovation and Regulation: Comparing China’s AI Regulations with the EU AI Act, geopend op augustus 2, 2025, https://awapoint.com/balancing-innovation-and-regulation-comparing-chinas-ai-regulations-with-the-eu-ai-act/
- Data Protection Laws and Regulations Report 2025 AI Regulatory …, geopend op augustus 2, 2025, https://iclg.com/practice-areas/data-protection-laws-and-regulations/02-ai-regulatory-landscape-and-development-trends-in-china
- AI Dilemma: Regulation in China, EU & US – Comparative Analysis, geopend op augustus 2, 2025, https://pernot-leplay.com/ai-regulation-china-eu-us-comparison/
- China’s AI Policy at the Crossroads: Balancing Development and …, geopend op augustus 2, 2025, https://carnegieendowment.org/research/2025/07/chinas-ai-policy-in-the-deepseek-era?lang=en
- China finalizes generative AI regulation – Hogan Lovells, geopend op augustus 2, 2025, https://www.hoganlovells.com/en/publications/china-finalizes-generative-ai-regulation
- China finalises its Generative AI Regulation – Data Protection Report, geopend op augustus 2, 2025, https://www.dataprotectionreport.com/2023/07/china-finalises-its-generative-ai-regulation/
- China Releases New Labeling Requirements for AI-Generated Content – Inside Privacy, geopend op augustus 2, 2025, https://www.insideprivacy.com/international/china/china-releases-new-labeling-requirements-for-ai-generated-content/
- China’s Cyberspace Administration Releases “Interim” Rules Regulating the Use of Generative AI | Davis Wright Tremaine, geopend op augustus 2, 2025, https://www.dwt.com/blogs/artificial-intelligence-law-advisor/2023/07/china-issues-generative-ai-regulations
- China Cybersecurity and Data Protection: Monthly Update – March 2025 Issue – Bird & Bird, geopend op augustus 2, 2025, https://www.twobirds.com/en/insights/2025/china/china-cybersecurity-and-data-protection-monthly-update-march-2025-issue
- The EU and China Are Taking the Lead on AI Regulation – The U.S. …, geopend op augustus 2, 2025, https://www.citizen.org/article/the-eu-and-china-are-taking-the-lead-on-ai-regulation-the-u-s-must-not-be-left-behind/
- The Global AI Race: The Geopolitics of DeepSeek | Geopolitical …, geopend op augustus 2, 2025, https://www.geopoliticalmonitor.com/the-global-ai-race-the-geopolitics-of-deepseek/
- A pro-innovation approach to AI regulation – GOV.UK, geopend op augustus 2, 2025, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
- What does a pro-innovation approach to AI mean for UK regulation? – Macfarlanes, geopend op augustus 2, 2025, https://www.macfarlanes.com/what-we-think/102eli5/what-does-a-pro-innovation-approach-to-ai-mean-for-uk-regulation-102ic7o/
- AI regulation: a pro-innovation approach – GOV.UK, geopend op augustus 2, 2025, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
- AI regulation in 2025 – BCLP Perspectives, geopend op augustus 2, 2025, https://perspectives.bclplaw.com/emerging-themes/creating-connections/technology/ai-in-2025-will-the-UKs-regulation-keep-up-or-be-left-behind/
- The Artificial Intelligence (Regulation) Bill: Closing the UK’s AI Regulation Gap?, geopend op augustus 2, 2025, https://kennedyslaw.com/en/thought-leadership/article/2025/the-artificial-intelligence-regulation-bill-closing-the-uks-ai-regulation-gap/
- Regulation of AI in UK | Entertainment and Media Guide to AI – Reed Smith LLP, geopend op augustus 2, 2025, https://www.reedsmith.com/en/perspectives/ai-in-entertainment-and-media/2024/02/regulation-of-ai-in-uk
- What is the impact of the new “AI Opportunities Action Plan” on UK …, geopend op augustus 2, 2025, https://www.twobirds.com/en/insights/2025/uk/what-is-the-impact-of-the-new-ai-opportunities-action-plan-on-uk-ai-regulation
- Comparative Analysis of AI Development Strategies: A Study of China’s Ambitions and the EU’s Regulatory Framework – EuroHub4Sino, geopend op augustus 2, 2025, https://eh4s.eu/publication/comparative-analysis-of-ai-development-strategies-a-study-of-chinas-ambitions-and-the-e-us-regulatory-framework
- Barreling Towards “Significant Misalignment”: A Comparative Examination of AI Regulation In The European Union And United States, geopend op augustus 2, 2025, https://pur.pitt.edu/pur/article/view/111
- Entity-Based Regulation in Frontier AI Governance | Carnegie Endowment for International Peace, geopend op augustus 2, 2025, https://carnegieendowment.org/research/2025/06/artificial-intelligence-regulation-united-states?lang=en
- The UK’s framework for AI regulation – Deloitte, geopend op augustus 2, 2025, https://www.deloitte.com/uk/en/Industries/financial-services/blogs/the-uks-framework-for-ai-regulation.html
- AI regulation in the UK – where are we now?, geopend op augustus 2, 2025, https://www.twobirds.com/en/insights/2024/uk/ai-regulation-in-the-uk-where-are-we-now
- AI Watch: Global regulatory tracker – United Kingdom | White & Case LLP, geopend op augustus 2, 2025, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-kingdom
- Artificial Intelligence Governance: A Comparative Analysis of China, the European Union, and the United States – University Digital Conservancy, geopend op augustus 2, 2025, https://conservancy.umn.edu/bitstreams/67b2719b-58a9-499c-b4fd-66eccc091515/download
- The United Kingdom Diverges from the European Union in Its Proposed “Pro-Innovation” Approach to Regulating Artificial Intelligence – Covington & Burling LLP, geopend op augustus 2, 2025, https://www.cov.com/-/media/files/corporate/publications/2023/11/the-united-kingdom-diverges-from-the-european-union-in-its-proposed-proinnovation-approach-to-regulating-artificial-intelligence–the-journal-of-robotics-artificia.pdf
- Democratic Governance of Digital Platforms and Artificial Intelligence? Exploring Governance Models of China, the US, the EU an, geopend op augustus 2, 2025, https://jedem.org/index.php/jedem/article/view/604/487
- AI Governance and Geopolitical Challenges: What’s Next after Italy’s G7 Presidency?, geopend op augustus 2, 2025, https://www.iai.it/en/pubblicazioni/c03/ai-governance-and-geopolitical-challenges-whats-next-after-italys-g7-presidency
- AI Governance and Geopolitical Challenges: What’s Next after Italy’s G7 Presidency? – Istituto Affari Internazionali (IAI), geopend op augustus 2, 2025, https://www.iai.it/sites/default/files/iaip2501.pdf
- Global AI governance: barriers and pathways forward – Oxford Academic, geopend op augustus 2, 2025, https://academic.oup.com/ia/article/100/3/1275/7641064
- Interoperability of Data Governance Regimes: Challenges for Digital Trade Policy | CITP, geopend op augustus 2, 2025, https://citp.ac.uk/publications/interoperability-of-data-governance-regimes-challenges-for-digital-trade-policy
- The Need for and Pathways to AI Regulatory and Technical Interoperability | TechPolicy.Press, geopend op augustus 2, 2025, https://www.techpolicy.press/the-need-for-and-pathways-to-ai-regulatory-and-technical-interoperability/
- The Geopolitics of Digital Regulation – Chicago Unbound, geopend op augustus 2, 2025, https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=6430&context=uclrev
- AI and Geopolitics: How Might AI Affect the Rise and Fall of Nations? | RAND, geopend op augustus 2, 2025, https://www.rand.org/pubs/perspectives/PEA3034-1.html
- Artificial Intelligence, the new frontier in geopolitical competition – Atalayar, geopend op augustus 2, 2025, https://www.atalayar.com/en/articulo/politics/artificial-intelligence-the-new-frontier-in-geopolitical-competition/20250801100000217131.html
- The AI Governance Arms Race: From Summit Pageantry to Progress …, geopend op augustus 2, 2025, https://carnegieendowment.org/research/2024/10/the-ai-governance-arms-race-from-summit-pageantry-to-progress?lang=en
- The US–China AI race is forcing countries to reconsider who owns their digital infrastructure, geopend op augustus 2, 2025, https://www.chathamhouse.org/2025/05/us-china-ai-race-forcing-countries-reconsider-who-owns-their-digital-infrastructure