by Dennis Landman

Reframing AI Failure and Success in Government

Artificial intelligence (AI) is increasingly integrated into public sector decision-making, influencing everything from welfare systems to law enforcement. While AI’s failures—especially concerning bias and discrimination—have been widely discussed, Kempeneer, Ranchordás, and van de Wetering argue for a more nuanced framework. Their study introduces a layered typology of AI failure and success, challenging the oversimplified “AI is biased” narrative. Instead, they categorize AI failures as arising from negligence, recklessness, or intent, and propose that AI success manifests as care, caution, and trust.

By shifting the discourse beyond Western-centric perspectives, the paper highlights how AI governance shapes power dynamics globally, particularly in low-income countries. This analysis provides a critical lens through which policymakers, researchers, and technologists can assess AI’s real-world impact.

The Three Layers of AI Failure: Beyond Bias

1. AI Failure as Negligence

Negligence represents unintentional but harmful errors in AI systems. These include poor data management, flawed algorithms, or inadequate oversight that produce real-world consequences for citizens. The Parivar Pehchan Patra (PPP) algorithm in India illustrates this: it mistakenly declared thousands of elderly pensioners dead, cutting off their benefits. The harm was not intentional, but the bureaucratic failure to correct the errors compounded its impact.

2. AI Failure as Recklessness

Recklessness arises when known risks are ignored in AI deployment. A striking example comes from the UK Department for Work and Pensions, which admitted to deploying AI fraud detection models despite recognizing the lack of adequate data safeguards. Similarly, Danish healthcare algorithms assessing depression risk failed to account for disparities in diagnostic data, reinforcing existing inequalities.

3. AI Failure as Intent

At its most severe, AI failure results from deliberate misuse, often driven by systemic discrimination. The Dutch Childcare Benefits Scandal exemplifies this: AI was weaponized to disproportionately target ethnic minorities under the guise of fraud detection. While AI itself does not possess intent, its deployment reflects the biases of those in power, reinforcing structural inequalities.

This framework reframes AI failures as more than technical glitches; they are deeply embedded in governance structures and political priorities.

The Three Layers of AI Success: A Constructive Vision for AI Governance

While AI failures dominate discussions, the authors advocate for an equally rigorous exploration of AI success. They outline three layers of AI success that counterbalance failure narratives:

1. AI Success as Care

Care-based AI governance ensures that failures are swiftly identified and corrected. The EU’s human-in-the-loop requirements in AI regulation illustrate this principle, mandating oversight to prevent unchecked automation. Another example is the US Census Bureau’s error-checking algorithms, which proactively reduce AI-driven misclassifications.
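To make the care layer more tangible, here is a minimal sketch of a human-in-the-loop gate for a hypothetical benefits system: automated decisions that are high-impact or low-confidence are routed to a human reviewer instead of being executed automatically. The threshold, data model, and field names are illustrative assumptions, not requirements drawn from the paper or from EU regulation.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    citizen_id: str
    action: str          # e.g. "suspend_benefit"
    confidence: float    # model confidence in [0, 1]
    high_impact: bool    # does the action materially affect the citizen?


def route_decision(decision: Decision, confidence_threshold: float = 0.95) -> str:
    """Return 'auto' only for low-impact, high-confidence decisions;
    everything else goes to a human reviewer (the 'care' safeguard)."""
    if decision.high_impact or decision.confidence < confidence_threshold:
        return "human_review"
    return "auto"


# Example: a benefit suspension is always escalated to a caseworker,
# no matter how confident the model is.
print(route_decision(Decision("NL-001", "suspend_benefit", 0.99, high_impact=True)))
# -> human_review
```

The point of the sketch is the routing rule, not the specifics: care means the system is built so that consequential or uncertain decisions reach a human who can correct errors quickly.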

2. AI Success as Caution

Caution-based success focuses on designing AI with built-in ethical safeguards. The Dutch MONOcam initiative, an AI-driven traffic enforcement system, exemplifies best practices by integrating extensive testing, impact assessments, and accountability mechanisms before deployment. Such approaches contrast sharply with reckless AI rollouts seen elsewhere.

3. AI Success as Trust

Trust-driven AI explicitly aims to empower marginalized groups rather than control them. The city of Ghent’s Proactive Service Delivery project automatically grants social benefits to vulnerable residents without requiring them to navigate complex bureaucratic hurdles. This model demonstrates AI’s potential to support citizens rather than surveil them.
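As a rough illustration of how proactive service delivery inverts the usual burden of proof, the sketch below applies a hypothetical eligibility rule to registry data the administration already holds and flags residents for an automatic grant, rather than waiting for an application. The rule, threshold, and field names are invented for illustration and are not taken from the Ghent project.

```python
from dataclasses import dataclass


@dataclass
class Resident:
    resident_id: str
    annual_income: float
    household_size: int
    already_receiving: bool


def eligible_for_supplement(r: Resident, income_ceiling_per_person: float = 12_000.0) -> bool:
    """Hypothetical rule: grant the supplement when household income per person
    falls below a ceiling and the resident is not already receiving it."""
    per_person = r.annual_income / max(r.household_size, 1)
    return per_person < income_ceiling_per_person and not r.already_receiving


# Instead of requiring an application, the administration scans its own registry
# and grants the benefit proactively.
registry = [
    Resident("GH-17", annual_income=18_000, household_size=3, already_receiving=False),
    Resident("GH-42", annual_income=40_000, household_size=2, already_receiving=False),
]
print([r.resident_id for r in registry if eligible_for_supplement(r)])  # -> ['GH-17']
```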

By structuring AI success in layers, the authors offer a counterpoint to the prevailing focus on AI’s dangers, proposing pathways for responsible AI adoption.

Power Dynamics and AI Colonialism: A Global Perspective

The study goes beyond Eurocentric AI debates, addressing how AI systems reinforce geopolitical inequalities. Many low-income countries rely on AI tools developed by high-income nations, creating dependencies that mirror historical colonial structures. For instance:

• Surveillance Exports: AI-driven predictive policing, developed in Western nations, is now deployed in African countries with minimal regulation.

• Data Exploitation: Global AI models are often trained on data extracted from developing regions without consent or compensation.

• Labor Exploitation: Tech giants outsource data annotation and moderation work to low-wage workers in countries like Kenya, exposing them to psychological harm.

The authors term this phenomenon AI colonialism, emphasizing the need for local AI governance frameworks that prioritize autonomy and data sovereignty.

Challenges and Gaps: The Need for Further Research

While the study provides a strong theoretical framework, some areas require further exploration:

1. Operationalizing AI Success: The paper outlines AI success conceptually but lacks concrete metrics for evaluation. How do we measure “trust” in AI systems? What benchmarks define AI care? A sketch of possible proxy indicators follows this list.

2. Corporate Power in AI Colonialism: The discussion primarily focuses on government-led AI deployment. However, private sector actors (Google, OpenAI, Microsoft) exert significant influence over AI systems used in low-income countries. A deeper analysis of corporate responsibility is needed.

3. AI’s Structural Limitations: The paper examines AI’s failures as power-driven but does not sufficiently address the technical constraints of AI. Many AI failures stem not just from human biases but from fundamental limitations in machine learning models.
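Returning to the first gap, here is a minimal sketch of what operationalizing AI success might look like: two hypothetical proxy indicators computed from decision records, namely the median time to correct a wrongful automated decision (a proxy for care) and the share of appeals that are upheld (a rough inverse proxy for warranted trust). Both indicators and the record format are assumptions made for illustration; the paper does not define them.

```python
from statistics import median

# Hypothetical decision records: days until a wrongful decision was corrected
# (None = never corrected), and whether an appeal against it was upheld.
records = [
    {"days_to_correction": 4,    "appealed": True,  "appeal_upheld": False},
    {"days_to_correction": 120,  "appealed": True,  "appeal_upheld": True},
    {"days_to_correction": None, "appealed": False, "appeal_upheld": False},
    {"days_to_correction": 15,   "appealed": True,  "appeal_upheld": True},
]


def median_days_to_correction(recs) -> float:
    """Proxy for 'care': how quickly acknowledged errors are fixed."""
    days = [r["days_to_correction"] for r in recs if r["days_to_correction"] is not None]
    return median(days) if days else float("inf")


def appeal_overturn_rate(recs) -> float:
    """Rough inverse proxy for 'trust': share of appealed decisions overturned."""
    appealed = [r for r in recs if r["appealed"]]
    if not appealed:
        return 0.0
    return sum(r["appeal_upheld"] for r in appealed) / len(appealed)


print(median_days_to_correction(records))          # -> 15
print(round(appeal_overturn_rate(records), 2))     # -> 0.67
```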

Addressing these gaps could enhance the study’s practical applicability, providing policymakers with clearer guidelines for AI regulation.

Conclusion: AI as a Reflection of Power

Kempeneer, Ranchordás, and van de Wetering successfully move the AI ethics debate beyond the binary of success versus failure. Their layered model provides a more sophisticated framework for understanding AI’s societal impact, emphasizing that AI does not exist in isolation—it reflects and amplifies existing power structures.

For policymakers, the study underscores the importance of:

• Holding governments accountable for AI failures based on negligence, recklessness, or intent.

• Developing layered AI governance models that emphasize care, caution, and trust.

• Addressing AI colonialism by promoting local AI expertise and data sovereignty.

Ultimately, the success or failure of AI is not determined by technology alone—it is shaped by those who wield it. If AI governance is to be equitable, it must be designed with an acute awareness of power, responsibility, and justice.

This analysis highlights the depth of the paper’s contributions while identifying areas for further research and action. AI governance is not just a technical challenge but a political and ethical one—one that requires vigilance, accountability, and global collaboration.

Link to paper:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4983622

