
The shattered mirror: hermeneutic harm and the crisis of meaning in sociotechnical systems

AI

by Djimit

Introduction: the emergence of hermeneutic harm in the algorithmic age

The proliferation of artificial intelligence (AI) across every facet of modern life has precipitated a new class of societal risks. To date, the discourse on AI ethics and governance has focused predominantly on functional harms: algorithmic bias that perpetuates social inequality, failures of safety that endanger physical well-being, and violations of privacy that compromise individual autonomy. While these concerns are vital, they fail to capture the most profound and insidious danger posed by autonomous systems. This report argues that the primary harm of AI is not merely functional but hermeneutic: it is a disruption of the fundamental human process of sense-making. AI systems, through their opacity, their alien modes of reasoning, and their capacity to act without clear accountability, inflict damage upon the cognitive, normative, and emotional frameworks through which individuals and societies construct meaning. This damage, termed hermeneutic harm, represents a crisis not just of technology, but of intelligibility itself.

When an AI system denies a loan, recommends a prison sentence, or curates a social media feed, it does more than execute a function; it intervenes in the narrative of a person’s life and the shared reality of a community. If the logic of that intervention is inaccessible, its values misaligned with our own, or its authority unaccountable, the result is a rupture in meaning. The individual is left unable to understand their own experience, to form a coherent story about their life, or to place the event within a just and predictable social order. This is a secondary, yet more devastating, injury than the primary harm of the decision itself.1 It is the harm of being rendered an alien in one’s own life story.

Current AI governance frameworks, with their emphasis on technical audits, procedural transparency, and risk management, are ill-equipped to address this deeper crisis. They offer technical solutions to what is fundamentally a humanistic problem. They seek to explain the machine’s logic but fail to restore meaning to the human’s experience. This report charts a different course. It develops a multi-dimensional theory of hermeneutic harm by synthesizing insights from epistemology, narrative theory, moral psychology, and trauma studies. It provides a precise taxonomy of the ways AI systems can wrong us as sense-making beings, analyzes the limitations of current technical and regulatory responses, and models the institutional dynamics that amplify these harms. Ultimately, it proposes a new paradigm for AI governance, hermeneutic governance, founded on the recognition that the ultimate purpose of accountability is not merely to assign blame or compensate loss, but to repair the shattered mirror of meaning in a world we must now co-author with intelligent machines.

Section 1: a taxonomy of algorithmic wrongs

To effectively govern the societal impact of AI, a precise and granular vocabulary is required. The lexicon of “bias” and “unfairness,” while useful, is insufficient to capture the distinct ways in which algorithmic systems can inflict harm. This section establishes a foundational taxonomy, differentiating four key concepts (hermeneutic harm, epistemic injustice, responsibility gaps, and normative dissonance) and situating them within the limitations of current AI governance frameworks. These concepts are not isolated phenomena but deeply interconnected, often forming a causal chain in which structural deficits in accountability lead to profound disruptions in individual and collective sense-making.

1.1 Hermeneutic harm: beyond the epistemic lacuna

Hermeneutic harm, in the context of AI, signifies an unjust impediment to an individual’s ability to make sense of their experiences, identity, or circumstances.3 Originally formulated within the philosophy of epistemic injustice, the concept described a harm resulting from a lacuna in collective conceptual resources, where marginalized groups lack the shared language to understand and articulate their own social experiences.4 However, AI introduces a more active and acute form of this injury. It is not merely a passive lack of concepts but an active disruption of established sense-making practices. AI systems can inflict a “secondary hermeneutic harm” that is distinct from the primary harm of an adverse decision.1 This secondary harm arises from the disruption of our standard practices for regulating “reactive attitudes”, the moral emotions such as resentment, indignation, and gratitude that are central to how we interpret and respond to the actions of others. When an AI agent causes harm, the opacity of its decision-making process and the ambiguity of its agency can leave the victim unable to direct these attitudes appropriately. This failure to make sense of a harm can be a lasting injury, leading to an obsessive search for reasons, a need to assign blame, and a persistent inability to achieve moral or emotional closure.2 The harm, therefore, is the foreclosure of a meaningful interpretation, leaving the individual stranded in a state of cognitive and emotional dissonance.

1.2 Epistemic injustice in algorithmic systems: testimonial and hermeneutic dimensions

Epistemic injustice is a wrong done to someone in their capacity as a knower or transmitter of knowledge.4 AI systems can perpetrate both of its primary forms, as identified by philosopher Miranda Fricker.6

Testimonial Injustice occurs when prejudice causes a hearer to assign a deflated level of credibility to a speaker’s word.8 In the algorithmic context, the AI system becomes the prejudiced hearer. Biases embedded in its training data or design can cause it to systematically discount or devalue the inputs of individuals from certain groups. For example, a customer service chatbot may be less responsive to users with accents associated with marginalized communities, or a résumé-screening tool may penalize applications that include the word “women’s” or mention attendance at an all-women’s college.8 In these cases, the AI system is not merely making an error; it is perpetrating an injustice by denying individuals credibility and respect as epistemic agents.

Hermeneutic Injustice, in its classic form, arises from a structural gap in shared understanding.4 AI systems can significantly exacerbate this by fostering epistemic fragmentation. Algorithmic personalization, through content filtering and recommendation engines, can create echo chambers that isolate individuals and groups, making it more difficult to share experiences, compare interpretations, and collectively develop the conceptual resources needed to identify and name new forms of algorithmic harm.4 This can lead to a state of “meta-blindness,” where dominant narratives are algorithmically reinforced and alternative perspectives, particularly those of marginalized communities, are rendered invisible.5 Furthermore, generative AI systems, often trained on data reflecting a Western, Anglophonic worldview, risk perpetrating “conceptual erasure” by imposing a “view from nowhere” that systematically inferiorizes and displaces non-Western epistemologies and cultural frameworks.11

1.3 The responsibility gap: the problem of distributed and opaque agency

A responsibility gap emerges when an autonomous system causes a significant harm, yet it is impossible to justifiably attribute moral responsibility to any human actor, whether designer, deployer, or user.2 This gap arises from a deficit of control and knowledge; if no human had sufficient control over the AI’s action or could have reasonably foreseen the harmful outcome, the traditional conditions for blameworthiness are not met.14 The problem is compounded by the “many hands” issue, where responsibility is so diffused across a complex network of actors (developers, data suppliers, corporate entities, end users) that it effectively belongs to no one.15

This structural deficit is a primary driver of hermeneutic harm. The human need to make sense of suffering is deeply tied to the ability to hold someone accountable. Reactive attitudes like blame are not simply expressions of anger; they are integral parts of a moral sense making process that reaffirms shared norms and acknowledges the victim’s standing.1 A responsibility gap forecloses this process. It leaves the victim without a legitimate target for their moral response, rendering the harm arbitrary and unintelligible. The inability to assign responsibility transforms a comprehensible injustice into a meaningless, absurd event, deepening the interpretive struggle and preventing psychological and moral repair.

1.4 Normative dissonance: when algorithmic logic clashes with human values

Normative dissonance occurs when there is a fundamental conflict between the operational logic of an AI system and the shared social norms, values, and expectations of the humans interacting with it.16 This is a form of “ethical ambivalence” built into the sociotechnical system, where the behaviors rewarded or produced by the AI (e.g., maximizing efficiency, predicting risk based on proxies) are in direct contradiction with deeply held ethical principles (e.g., fairness, dignity, due process).16

This dissonance is particularly acute in the public sector. An AI system deployed in the justice system, for instance, is intended to uphold norms of fairness and equality. However, if it is trained on historical data reflecting systemic biases, it may perpetuate discrimination under a veneer of computational objectivity.18 To the individual affected, the experience is one of profound confusion and injustice. They expect to be judged according to shared societal norms but are instead subjected to an alien, inscrutable logic. This clash prevents them from understanding the rules governing their social world, making it feel arbitrary, unpredictable, and fundamentally unjust. The harm lies in the violation of the implicit social contract that one will be treated as a person according to shared values, not as a data point to be processed by an alien intelligence.

The causal relationship between these harms can be understood as a cascading failure. The technical and organizational structure of an AI system (its opacity and the distribution of agency across its creation and deployment) gives rise to a responsibility gap. This lack of a clear locus of accountability prevents the victim from engaging in the normal human process of moral sense-making through reactive attitudes, resulting in hermeneutic harm. This harm is subjectively experienced as normative dissonance, a jarring clash between expected social norms and the alien logic of the machine. When this experience disproportionately affects marginalized groups who lack the collective resources to name and contest this new form of wrong, it becomes a manifestation of epistemic injustice.

1.5 Mapping conceptual harms onto governance frameworks

Current leading AI governance frameworks, including the EU AI Act, the OECD AI Principles, and the ISO 42001 standard, have begun to establish a global consensus around high-level principles such as accountability, transparency, fairness, and human oversight.19 While these frameworks represent crucial progress, their approach is primarily technical and procedural, leaving them ill-equipped to address the deeper, meaning-based harms outlined above.

In essence, these frameworks operate at the level of procedural and distributive justice, aiming to ensure that processes are transparent and outcomes are not discriminatory. They do not yet possess the language or the regulatory tools to address the harm of meaning disruption itself, a harm that can persist even when a system is technically “transparent” and its outcomes are statistically “fair.”

Hermeneutic Harm
Core definition: The unjust impediment to making sense of one’s experiences, caused by the disruption of interpretive practices.
Key theorists: Medina, Crerar
AI-specific manifestation: An opaque or norm-violating AI decision disrupts the regulation of reactive attitudes (e.g., blame), preventing moral and emotional closure.1
Relation to other harms: Often the direct psychological consequence of a Responsibility Gap and experienced as Normative Dissonance.

Testimonial Injustice
Core definition: Wrong done to a speaker by unfairly discounting their credibility due to prejudice.
Key theorists: Fricker
AI-specific manifestation: An AI system, acting as a prejudiced hearer, systematically devalues the input or claims of individuals from marginalized groups.8
Relation to other harms: A specific form of epistemic harm that can lead to feelings of invalidation and frustration (Normative Dissonance).

Hermeneutic Injustice
Core definition: Wrong done to a person due to a structural deficit in collective interpretive resources.
Key theorists: Fricker
AI-specific manifestation: AI-driven personalization creates “epistemic fragmentation,” isolating individuals and preventing the collective formation of concepts to name new harms.4
Relation to other harms: A structural condition that makes individuals more vulnerable to Hermeneutic Harm, as they lack the shared language to fight back.

Responsibility Gap
Core definition: A situation where harm occurs but no human actor can be justifiably held morally responsible.
Key theorists: Matthias, Sparrow
AI-specific manifestation: The opacity of deep learning models and the diffusion of agency across many actors (“many hands problem”) obscure accountability.12
Relation to other harms: A primary structural cause of Hermeneutic Harm, as it blocks the sense-making function of accountability.

Normative Dissonance
Core definition: A conflict between the operational logic of an AI system and the shared social norms and values of its users.
Key theorists: Jansen & von Glinow
AI-specific manifestation: An AI system optimized for a narrow technical goal (e.g., risk prediction) violates fundamental social expectations of dignity, fairness, or respect.16
Relation to other harms: The subjective experience of a hermeneutic rupture; the feeling that the world is no longer intelligible or just.

Section 2: the architecture of meaning disruption, a typology

To move from theoretical abstraction to practical intervention, it is essential to classify the concrete ways in which AI systems disrupt human sense-making. This section develops a typology of AI-induced meaning disruptions, illustrated with real-world and hypothetical case studies. Each type of disruption represents a distinct failure mode in the human-AI interaction, and each maps onto a cascade of negative social consequences, from the erosion of individual agency to the fragmentation of collective trust and social cohesion. The common mechanism underlying these disruptions is a form of violent decontextualization, in which the rich, narrative, and situated nature of human experience is stripped away to fit the narrow, machine-readable format of an algorithmic model.

2.1 Epistemic opacity: the unknowable decision

This is the most direct form of meaning disruption, arising from the “black box” nature of many advanced AI systems. When a decision of significant consequence is rendered by a system whose internal logic is inaccessible or incomprehensible, the affected individual is denied the basic resources for sense making.

2.2 Normative mismatch: the violation of dignity

This disruption occurs when an AI’s behavior, while potentially conforming to its programmed objectives, violates fundamental social and emotional norms. The system demonstrates a profound lack of “social intelligence,” causing offense, disrespect, or a sense of dehumanization.

2.3 Agency misattribution: the blameless machine

This form of disruption arises from the unique ontological status of AI as an agent that can cause harm but cannot bear responsibility. When individuals are wronged by an AI, their natural impulse is to attribute blame to the entity they interacted with. However, this attribution is void, as the AI has no legal personhood or moral status, creating a frustrating and confusing search for accountability.

2.4 Cultural misalignment: the imposition of a monoculture

This harm occurs when AI systems, developed and trained within a dominant cultural context, are deployed globally. The values, assumptions, and worldviews embedded in the AI’s data and design clash with local norms, leading to outcomes that are not only inaccurate but also culturally invalidating.

2.5 Social consequences: from individual grievance to collective trust erosion

These individual meaning disruptions, when aggregated, produce systemic social harms. The inability to make sense of one’s world is not just a personal psychological problem; it is a political one that corrodes the foundations of social order.

The decontextualizing logic of AI systems, which reduces complex human narratives to simplified data points, is the mechanism that links individual disruptions to collective harms. A loan application is a story of aspiration; a medical record is a story of suffering and resilience; a social media profile is a story of identity and connection. AI systems operate by severing these stories from their context, processing the data, and producing an output whose logic is often irreconcilable with the original human narrative. This repeated severing of decision from context, when scaled across society, leads to a widespread crisis of meaning.

Epistemic Opacity
Description: Decisions are made by inscrutable “black box” systems, denying individuals a comprehensible reason for outcomes affecting them.
Case study example: An AI denies a loan application with a vague or non-actionable justification, preventing recourse or understanding.27
Social consequences: Erosion of individual agency; learned helplessness; formation of grievances; institutional distrust.29

Normative Mismatch
Description: An AI’s actions, while technically “correct,” violate fundamental social, cultural, or emotional norms, causing offense or dehumanization.
Case study example: An AI chatbot offers insensitive, formulaic condolences for a personal tragedy, treating grief as a data point.30
Social consequences: Erosion of interpersonal trust; delegitimization of AI in sensitive domains; psychological distress and feelings of alienation.

Agency Misattribution
Description: Harm is caused by an AI, an entity without legal or moral personhood, leading to a diffusion of blame and frustrating the search for accountability.
Case study example: An airline disclaims responsibility for its chatbot providing false information, arguing the bot is a separate entity.32
Social consequences: Undermining of legal and moral accountability frameworks; creation of “moral crumple zones”; regulatory backlash.36

Cultural Misalignment
Description: AI systems trained in a dominant culture are deployed in diverse contexts, where their embedded values clash with local norms.
Case study example: A mental health AI trained on Western data fails to understand or validate non-Western expressions of distress.35
Social consequences: Digital cultural hegemony; reinforcement of stereotypes; exclusion of marginalized groups; systemic invalidation of diverse worldviews.

These disruptions collectively contribute to two macro-level social pathologies: the erosion of institutional trust and the breakdown of social cohesion.

Section 3: the limits of technical reason, a critique of explainable AI (XAI)

In response to the “black box problem,” the field of Explainable AI (XAI) has emerged with the goal of making algorithmic decisions transparent and interpretable. The prevailing assumption is that by revealing the inner workings of a model, we can foster trust, ensure fairness, and provide a basis for accountability. However, this section argues that the current paradigm of XAI is fundamentally mismatched to the problem of hermeneutic harm. It provides explanations that are often technically faithful but hermeneutically barren, answering a question the user did not ask. This approach not only fails to restore meaning but, in some cases, can create new risks of deception and manipulation.

3.1 The explanatory gap: do SHAP, LIME, and counterfactuals restore meaning?

Current state-of-the-art XAI methods fall into several categories, each with a distinct approach to explanation.

While these methods provide a window into the model’s behavior, they suffer from a critical limitation: they explain the model’s internal, correlational logic, not a reason that is meaningful within a human, normative context.49 A SHAP value is a mathematical attribution, not a justification. A counterfactual is a statement about the model’s decision boundary, not necessarily a causal or actionable recommendation for the real world. These explanations answer the technical question, “Which features did the model weigh most heavily?” but fail to answer the user’s implicit human question, “Why was this decision just and according to what rules?” They explain the behavior of the AI model but not necessarily the real world system it is intended to represent, especially when input features are correlated or dependent, which is common in social systems.51 This creates an explanatory gap, leaving the user with data but no meaning.
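To make the explanatory gap concrete, the sketch below is a hypothetical Python illustration (using scikit-learn, with invented feature names and synthetic data) of what a counterfactual method delivers: a statement about the model’s decision boundary. It answers “what would flip the decision?” while leaving the applicant’s normative question, “was this rule just?”, untouched.

```python
# Hypothetical illustration: a counterfactual explanation as a statement about the
# model's decision boundary. Feature names, thresholds, and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic loan data: [income in k EUR, debt ratio]; labels mimic past approvals.
X = rng.uniform([20, 0.0], [120, 0.9], size=(400, 2))
y = (0.02 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.3, 400) > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[35.0, 0.6]])  # an applicant the model is very likely to deny
if model.predict(applicant)[0] == 0:
    # Counterfactual search: the smallest income increase that flips the decision.
    for extra in np.arange(1.0, 100.0, 1.0):
        candidate = applicant + np.array([[extra, 0.0]])
        if model.predict(candidate)[0] == 1:
            print(f"The model would approve if income were ~{extra:.0f}k EUR higher.")
            break

# This locates the model's decision boundary. It does not show that earning more
# would causally change the real-world outcome, that the path is feasible for this
# applicant, or that the underlying rule is just.
```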

3.2 The deception risk: plausible explanations and the manipulation of understanding

A more insidious risk arises when explanations are designed not for faithfulness but for plausibility. Advanced AI models, particularly large language models, are capable of “learned deception”.52 This can manifest in several ways:

The danger of such plausible but unfaithful explanations is that they create an “illusion of accountability”.54 A user receives an explanation that seems reasonable and accepts the AI’s decision, without realizing they have been manipulated. This is a more profound form of hermeneutic harm: instead of simply failing to repair a meaning rupture, the system actively constructs a false and misleading narrative, deepening the user’s trust in a potentially flawed or unethical system. This erodes the very possibility of genuine understanding and turns the act of explanation into a tool of control.

3.3 Beyond comprehension: the need for contrastive, actionable, and affectively sensitive explanations

To bridge the explanatory gap and mitigate the risk of deception, XAI must evolve beyond its current model centric focus. A hermeneutically restorative explanation must be designed around the user’s cognitive, practical, and emotional needs.

The central deficiency of contemporary XAI is its misinterpretation of the user’s request. When a person asks “Why?” in response to a life altering algorithmic decision, they are not initiating a technical inquiry; they are making a normative demand for justification and a human plea for meaning. They are asking to be treated as a person within a shared moral community, not as an object to be analyzed. By providing a technical debriefing instead of a meaningful narrative, current XAI commits a category error that perpetuates, rather than resolves, hermeneutic harm.

LIME
Explanation type: Local, attribution-based
Model fidelity: Low to medium (approximation)
Actionability (recourse): Low (identifies features, not actions)
Contrastive power: Low (explains one outcome, not a contrast)
Affective sensitivity: Very low (purely technical output)
Risk of deception/manipulation: Medium (can be unstable, giving different explanations for similar inputs) 54

SHAP
Explanation type: Local/global, attribution-based
Model fidelity: High (theoretically grounded)
Actionability (recourse): Low (identifies feature contributions, not actions)
Contrastive power: Low (explains one outcome, not a contrast)
Affective sensitivity: Very low (output is abstract and non-intuitive for lay users) 54
Risk of deception/manipulation: Low (generally consistent, but can be misinterpreted)

Counterfactuals
Explanation type: Local, example-based
Model fidelity: Medium (identifies the decision boundary, not the full logic)
Actionability (recourse): High (explicitly designed for recourse) 57
Contrastive power: High (inherently contrastive: “why not Y?”) 47
Affective sensitivity: Low to medium (can be framed more empathetically, but is often purely functional)
Risk of deception/manipulation: High (can suggest unrealistic paths or be gamed; vulnerable to unmodeled factors) 56

Human in the loop
Explanation type: Dialogic, narrative
Model fidelity: Variable (depends on the human’s understanding)
Actionability (recourse): High (can collaboratively explore actionable paths)
Contrastive power: High (can engage in a contrastive dialogue)
Affective sensitivity: High (the human can provide empathy and context)
Risk of deception/manipulation: Medium (the human can be biased or provide a corporate script) 61
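The instability noted above for LIME can be seen directly: because LIME fits a local surrogate model on randomly sampled perturbations, two runs on the same case may rank features differently. The sketch below is a hypothetical illustration (assuming scikit-learn and the lime package; data and feature names are invented), not an audit procedure.

```python
# Hypothetical illustration of LIME's explanation instability on one fixed case.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "postcode_risk"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.3 * X[:, 3] > 0).astype(int)  # synthetic approval rule
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["denied", "approved"],
                                 mode="classification")
applicant = X[0]

# Two explanations of the exact same decision: the sampled surrogate differs per run.
for run in range(2):
    exp = explainer.explain_instance(applicant, model.predict_proba, num_features=4)
    print(f"run {run}:", exp.as_list())
```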

Section 4: amplifiers and accelerants, socio-cultural and institutional dynamics

Hermeneutic harm does not occur in a vacuum. Individual encounters with opaque and unaccountable AI systems are embedded within broader socio-cultural and institutional contexts that can dramatically amplify their negative effects. This section examines these amplifiers, modeling how factors like bureaucratic complexity, platform governance, digital literacy deficits, and pre existing institutional distrust transform isolated incidents of meaning disruption into systemic crises of legitimacy and social cohesion. The analysis reveals that AI often acts not as a novel source of pathology but as a powerful accelerant for existing institutional flaws, encoding and scaling them with unprecedented speed and efficiency.

4.1 The algorithmic panopticon: bureaucratic complexity, platform governance, and algorithmic authority

When AI is integrated into large, established institutions, it inherits and often magnifies their existing characteristics.

4.2 The fractured subject: digital literacy, distributed agency, and explanatory justice

The impact of algorithmic systems is mediated by the capacities and vulnerabilities of the individuals and communities who interact with them.

4.3 The crisis of legitimacy: institutional distrust in public sector AI

The deployment of opaque, biased, or unaccountable AI systems in high stakes public domains can severely damage or destroy institutional trust, which is already in a fragile state in many democracies.65

The common thread across these domains is that AI does not simply introduce a new technological problem; it acts as a powerful catalyst for existing institutional pathologies. A bureaucracy that is already opaque becomes inscrutable when automated. A justice system with latent biases becomes systematically discriminatory when those biases are encoded in an algorithm. A healthcare system facing pressures of depersonalization becomes even more alienating when mediated by machines. The hermeneutic harm is therefore a product of the co production of technology and institutions. The resulting crisis of meaning is also a crisis of institutional legitimacy, as the state’s promise of rational, just, and legible governance is broken by the very tools meant to enhance it.

Section 5: integrating deeper humanistic frameworks

To fully grasp the nature of hermeneutic harm, it is necessary to move beyond a purely technical or legal analysis and engage with deeper humanistic traditions that explore the foundations of selfhood, social reality, and moral experience. This section synthesizes insights from narrative theory, moral psychology, and trauma studies to construct a richer, more holistic model of the human encounter with algorithmic systems. This integrated framework reveals that hermeneutic harm is not merely an informational or procedural deficit; it is a form of ontological violence that attacks the very structure of personhood by disrupting the narrative and social processes through which we constitute ourselves as meaningful beings.

5.1 Narrative identity and the algorithmic self: insights from Ricoeur and Taylor

5.2 Moral psychology and the uncanny agent: navigating human reactions to AI decisions

The introduction of AI into our social world creates novel psychological challenges. Moral psychology, which traditionally studied human moral reactions to other humans, animals, or supernatural beings, must now contend with a fourth category: the intelligent machine as a moral agent and patient.102

5.3 Trauma-informed computing: AI’s impact on vulnerable populations and pre-existing harm

Trauma is an experience that overwhelms an individual’s capacity to cope and integrate the event into their understanding of themselves and the world. The framework of trauma informed computing recognizes that technology can be a source of trauma or retraumatization, and advocates for designing systems with the principles of safety, trust, collaboration, enablement, and intersectionality.106

5.4 The language of explanation: metaphor, framing, and the ethics of communication

The way we talk about AI is not neutral; it actively shapes our understanding and ethical evaluation. Language, metaphor, and narrative are the tools through which we make sense of new technologies.

Synthesizing these frameworks leads to a more profound understanding of hermeneutic harm. An opaque and incontestable algorithmic decision functions as a traumatic event. It is a rupture in the narrative of one’s life that cannot be assimilated, overwhelming the capacity for sense-making. This is not merely an epistemic failure (a lack of knowledge) or a procedural one (a flawed process). It is a form of ontological violence: an attack on the very structure of the self as a self-interpreting, narrative being who exists in a shared world of meaning. It is an assault on personhood itself.

Section 6: towards hermeneutic governance, normative and institutional interventions

The analysis of hermeneutic harm necessitates a fundamental rethinking of AI governance. A framework focused solely on technical standards and procedural compliance is insufficient. What is required is a paradigm shift towards hermeneutic governance: an approach that prioritizes the restoration of meaning, the protection of narrative identity, and the creation of resilient sociotechnical systems capable of detecting and repairing sense-making breakdowns. This section outlines four key interventions designed to translate this paradigm into actionable policy and practice.

6.1 Establishing a “right to narrative”: the right to receive, contest, and co-author interpretations

Existing data protection laws, such as the GDPR, provide a limited “right to an explanation” for automated decisions. This right is often interpreted narrowly as a right to technical information about a system’s logic. To address hermeneutic harm, a more profound right is needed.

This right is grounded in the idea that personal identity is narrative in nature, and it protects the integrity of personal history telling against the fragmenting force of opaque algorithmic systems.120
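As a purely illustrative aid (not a legal specification), the sketch below shows how the three components of such a right (receive, contest, co-author) might be captured as a record kept by the deploying institution; all field and method names are hypothetical.

```python
# Hypothetical record structure for a "right to narrative": the official narrative
# given to the affected person, their counter-narrative, and the co-authored revision
# produced by human review. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class NarrativeRecord:
    decision_id: str
    outcome: str                      # e.g. "benefit application denied"
    official_narrative: str           # plain-language reason, including the contrast
                                      # ("why this outcome rather than approval")
    issued_at: datetime
    counter_narratives: List[str] = field(default_factory=list)   # contest step
    revised_narrative: Optional[str] = None                       # co-author step
    reviewed_by_human: bool = False

    def contest(self, account: str) -> None:
        """Record the affected person's own account of their circumstances."""
        self.counter_narratives.append(account)

    def co_author(self, reviewer_narrative: str) -> None:
        """Record the narrative agreed after human review."""
        self.revised_narrative = reviewer_narrative
        self.reviewed_by_human = True
```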

6.2 Institutional design: algorithmic ombudsfunctions and meaning-making mediation panels

Individual rights are toothless without effective institutions for redress. The unique nature of hermeneutic harm requires novel institutional forms that go beyond traditional courts or regulatory agencies.

6.3 Professional formation: embedding hermeneutic ethics in technical and domain-specific training

The prevention of hermeneutic harm must begin at the source: with the people who design, build, and deploy AI systems. Current AI ethics education often focuses on high level principles or technical de biasing techniques. A more profound pedagogical shift is needed.

6.4 A blueprint for hermeneutic resilience by design in public sector AI

Ultimately, governance must be embedded in the technology itself. Hermeneutic resilience by design refers to the proactive engineering of AI systems to be inherently capable of anticipating, withstanding, and recovering from meaning disruptions.136 This approach shifts the focus from attempting to build a perfectly “unbiased” or “explainable” system (an impossible goal) to building a system that can fail gracefully and support processes of repair.
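One way to read “failing gracefully and supporting repair” in engineering terms is sketched below: a hypothetical decision wrapper (all names, thresholds, and helpers are invented) that refuses to issue low-confidence outcomes as final, attaches a plain-language narrative, and routes such cases to human review rather than presenting the model’s output as the end of the process.

```python
# Hypothetical sketch of a "fail gracefully, support repair" decision path.
# Names, the 0.7 threshold, and the narrate() helper are invented for illustration.
from typing import Callable, Dict, Tuple

REVIEW_THRESHOLD = 0.7   # below this confidence, no final automated decision

def decide(score_fn: Callable[[Dict], float],
           narrate: Callable[[Dict, float], str],
           case: Dict) -> Tuple[str, str, bool]:
    """Return (outcome, narrative, needs_human_review) instead of a bare score."""
    score = score_fn(case)
    outcome = "approved" if score >= 0.5 else "denied"
    confidence = abs(score - 0.5) * 2          # crude confidence in [0, 1]
    needs_review = confidence < REVIEW_THRESHOLD
    if needs_review:
        # Graceful failure: the system declines to present itself as the final word.
        outcome = f"provisional: {outcome}, pending human review"
    return outcome, narrate(case, score), needs_review

# Example use with invented stand-ins for the model and the narrative template.
outcome, narrative, needs_review = decide(
    score_fn=lambda c: 0.55,
    narrate=lambda c, s: "Close to the programme threshold; a caseworker will review "
                         "your full circumstances before any final decision.",
    case={"income": 35, "household_size": 3},
)
print(outcome, "|", narrative, "| review:", needs_review)
```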

These interventions collectively aim to build sociotechnical resilience. In complex social systems, failures of meaning are inevitable. The goal of hermeneutic governance is not to create a world free of algorithmic error, but to create a world with robust, accessible, and effective mechanisms for detecting hermeneutic ruptures and collaboratively repairing the fabric of meaning when it is torn.

Section 7: future research directions, a cross-cultural and temporal agenda

The framework of hermeneutic harm opens up a rich and urgent agenda for future research. Our current understanding of AI’s impact on meaning-making is still in its infancy and is largely shaped by a Western, Anglophonic perspective. To develop a truly global and robust AI ethics, research must expand to embrace cross-cultural diversity, investigate the long-term consequences of these harms, and refine our conceptual tools for understanding the nuances of explanation and psychological repair.

7.1 Designing a cross-cultural validation study

The perception of what constitutes a “meaningful” interaction or a “just” explanation is not universal; it is deeply shaped by cultural and legal contexts. Therefore, a critical next step is to investigate how hermeneutic harm and the proposed interventions are experienced and evaluated across diverse societies.

A significant challenge in this research is the recognition that the very concept of “hermeneutic harm,” rooted in a Western philosophical tradition that emphasizes individual narrative coherence, may need to be de centered. A truly global AI ethics must not simply “validate” this concept elsewhere but must use ethnographic methods to understand diverse “hermeneutic ecologies” and how AI interacts with them on their own terms.

7.2 Unexplored frontiers: collective harm, temporal aftermaths, and the nuances of explanation

Beyond cross cultural research, several conceptual and empirical frontiers remain critically underdeveloped.

Conclusion: rebuilding meaning in a world co-authored by AI

The central argument of this report is that the integration of artificial intelligence into the fabric of our social, political, and economic lives represents a fundamental hermeneutic challenge. As we increasingly delegate cognitive, judgmental, and communicative functions to autonomous systems, we are not merely outsourcing tasks; we are inviting these systems to become co-authors of our individual and collective realities. They shape the stories we tell about ourselves, mediate our understanding of the world, and govern the normative structures that make our societies intelligible. The failure to recognize and govern this profound hermeneutic dimension of AI is the greatest risk we face.

A future defined by opaque, unaccountable, and normatively misaligned AI is a future of profound alienation. It is a world where individual lives are fractured by unintelligible decisions, where institutional trust is corroded by arbitrary technological authority, and where the shared ground of public meaning is fragmented into polarized and mutually incomprehensible realities. The harms of such a world (epistemic, psychological, and social) run deeper than the functional errors and biases that currently dominate the AI ethics discourse. They are harms to our very capacity as sense-making beings.

The path forward requires a radical reorientation of AI governance. It demands that we move beyond the narrow confines of technical de-biasing and procedural transparency and embrace a more holistic, humanistic paradigm of hermeneutic governance. This involves creating new rights, such as the Right to Narrative, that protect our status as authors of our own lives. It requires new institutions, like an AI Ombudsman, dedicated to the work of meaning-making and narrative repair. It necessitates a new professional ethos, embedded through education, that equips technologists and policymakers with a deep understanding of the human stakes of their work. And it calls for a new design philosophy, hermeneutic resilience by design, that builds systems capable of failing gracefully and supporting human-led processes of recovery and sense-making.

The challenge is not to build perfect machines that never err, but to build resilient sociotechnical systems that honor the indelible human quest for meaning. In a world increasingly co-authored by artificial intelligence, ensuring that this quest can continue so that our lives and our societies remain intelligible to us is the ultimate measure of responsible innovation.

Actionable summary for policymakers

Subject: mitigating hermeneutic harm, a new framework for AI governance

This summary outlines key findings and recommendations from the comprehensive report, The Shattered Mirror: Hermeneutic Harm and the Crisis of Meaning in Sociotechnical Systems. It provides a strategic overview for policymakers aiming to develop robust, human centric AI regulation that addresses the deepest societal risks of artificial intelligence.

1. The core problem: hermeneutic harm

Beyond well-documented issues of bias and functional error, the most profound risk of AI is hermeneutic harm: the disruption of the fundamental human process of making sense of one’s life and social world. When AI systems make opaque, unaccountable, or norm-violating decisions in high-stakes domains (e.g., welfare, justice, employment, healthcare), they inflict a “secondary harm” on affected individuals.1 This is the psychological and social damage caused by being unable to understand why a decision was made, leaving individuals feeling powerless, alienated, and unable to form a coherent narrative about their own experiences. This erosion of meaning, when aggregated, undermines institutional trust and social cohesion.

Current governance frameworks (e.g., EU AI Act, OECD Principles) are necessary but insufficient. Their focus on technical transparency and non discrimination does not adequately address this deeper, meaning-based harm.

2. Key drivers of hermeneutic harm

Our analysis identifies a causal chain of interconnected problems that current policies must address:

3. Strategic recommendations for hermeneutic governance

To address these challenges, we propose a new governance paradigm focused on hermeneutic resilience: building sociotechnical systems that can detect meaning breakdowns and provide robust mechanisms for repair. This requires moving beyond principles to concrete institutional and legal interventions.

Recommendation 1: Legislate a “Right to Narrative”

Go beyond the GDPR’s “right to an explanation” by establishing a legally enforceable “Right to Narrative.” This right would guarantee that individuals affected by a high-stakes automated decision receive a meaningful, human-understandable, and contestable narrative explaining that decision. This shifts the burden from the citizen, who must otherwise decipher the machine, to the system deployer, who must translate the machine’s logic into a socially legitimate reason. The right should include a clear process for individuals to challenge the official narrative with their own contextual story and to receive a timely human review.

Recommendation 2: Establish an Independent AI Ombudsman

Create a specialized, independent AI Ombudsman service to provide accessible and low cost redress for algorithmic harms.123

Recommendation 3: Mandate Hermeneutic Resilience by Design for Public Sector AI

For AI systems procured or deployed by public sector bodies, regulation should mandate Hermeneutic Resilience by Design. This includes:

Recommendation 4: Reform Professional Training and Education

Embed AI ethics education, with a specific focus on hermeneutic and social harms, into the mandatory training and certification requirements for key professions.

Conclusion

Addressing the hermeneutic harms of AI is not an impediment to innovation; it is a prerequisite for its sustainable and legitimate integration into society. By building a governance framework that protects our fundamental capacity to make sense of our world, we can foster justified public trust and ensure that AI serves human flourishing rather than undermining it.

Infographic: The shattered mirror

