The International Scientific Report on the Safety of Advanced AI (2025)
By Dennis Landman
The International Scientific Report on the Safety of Advanced AI (2025) is a landmark attempt to establish a shared global scientific understanding of the risks, capabilities, and governance challenges posed by general-purpose AI (GPAI). It was produced by a multidisciplinary team of 96 experts and endorsed by 30 governments and major international institutions such as the UN, the OECD, and the EU.

Here’s a breakdown of its key contributions, limitations, and critical openings for further inquiry and intervention:
I. Beyond the “Given”: Reframing What This Report Is
Not a policy document: The report is explicitly non-prescriptive. It does not recommend concrete policies but instead synthesizes current scientific knowledge. This opens space for translating scientific insights into governance action.
Focuses exclusively on risks: While acknowledging AI's benefits, it sidesteps a full societal impact assessment. This imbalance skews the discourse toward technocratic risk over broader structural transformation.
GPAI as a category: The choice to focus on GPAI is sensible yet problematically fuzzy, as it collapses a vast diversity of models and use cases into one monolithic framing.
II. Methodological Innovation & Transgression
Deconstructing dominant metrics: The report critiques existing benchmarks (e.g., MMLU, SWE-bench, GPQA) for failing to measure real-world generalization and reliability. However, it still relies heavily on them. Opening: A call for post-benchmark epistemologies—qualitative, adversarial, and systemic evaluations beyond narrow benchmarks.
Model capabilities are emergent but poorly understood: Capabilities like chain-of-thought reasoning or multi-modal manipulation emerge unpredictably. Challenge: Prediction and control are still grounded in scaling laws, which may reify techno-determinism and obscure structural risks.
Systemic risk lens is underdeveloped: It flags labor market shifts, environmental costs, market concentration, and geopolitical divides, but fails to offer a systems-theoretic or political-economic synthesis.
III. Findings as Seeds for Theoretical Reconceptualization
Loss of control is reframed as a gradual risk, not an AGI singularity scenario. This moves the discourse away from science fiction toward governance-relevant framings—but still lacks clarity on what "control" means and for whom.
Labor automation, cyber threats, and biological misuse are highlighted, but the report avoids theorizing their entanglement with extractive data regimes, techno-capitalism, or global inequality.
IV. Transformative Implications & Activist Potential
Evidence dilemma for policymakers: Policymakers must act under conditions of scientific uncertainty. This validates precautionary governance but is left untheorized.
Risk management frameworks remain industry-dependent: There is no structural critique of the information asymmetry between private developers and public institutions. Opportunity: Build public interest infrastructures for red-teaming, interpretability research, and international safety assessments.
Open-weight models debate: While it acknowledges both the democratizing and misuse risks of openness, the report refuses to take a stance, leaving a power vacuum in policy discussions.
V. Generative Critique & Counter-Narrative Construction
From mitigation to transformation: The report still operates within a risk mitigation paradigm rather than one of democratic transformation, decolonization, or data justice. It centers “accidents” and “misuse” while minimizing the structural harms of systemic inequality, surveillance, and epistemic injustice that GPAI can reinforce.
VI. Toward a Scholarly and Strategic Vision
You can position this report as a launchpad for more ambitious interventions. Consider:
Epistemological alternatives: What if we redefined safety not as technical controllability but as democratic legitimacy, transparency, and accountability?
Methodological futures: What if safety evaluations included lived experiences of impacted communities, not just benchmarks?
Governance horizons: What if the question is not just how to govern GPAI, but who governs it—and whose values are encoded?
Link:
https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf