Introduction: The Existential Rearrangement

The ascent of generative artificial intelligence (AI) marks a watershed moment in human history, one that compels a re-evaluation of our most fundamental assumptions about ourselves and our place in the world. The discourse surrounding AI has largely centered on its functional disruptions: the automation of labor, the optimization of systems, and the potential for economic upheaval. While these concerns are valid and urgent, they address only the surface of a far more profound transformation. The central hypothesis of this report is that the current technological revolution is precipitating not merely a crisis of ‘doing’—the outsourcing of human tasks—but a far more significant crisis of ‘being’—an existential rearrangement of the human self-image. We are witnessing a structural shift in how meaning, identity, and autonomy are constituted within increasingly AI-mediated environments.1

This transition represents what philosopher Luciano Floridi has termed the “Fourth Revolution”.4 Following the Copernican, Darwinian, and Freudian revolutions, which displaced humanity from the center of the cosmos, from a privileged position in the animal kingdom, and from sovereignty over its own rational mind, respectively, this fourth revolution displaces us from the center of the “infosphere”—the global environment of information.4 As AI demonstrates the capacity to perform complex cognitive, creative, and analytical tasks once considered the exclusive domain of human intelligence, it forces a critical re-examination of our claims to uniqueness. This is not just a technological challenge; it is an ontological one.

The structural shift is occurring as the boundaries that have long defined the human condition begin to dissolve. Posthumanist thinkers have long theorized the breakdown of the rigid dichotomies between human and machine, organism and technology.7 Today, this is no longer a theoretical abstraction but a lived reality. Our cognitive processes are augmented by algorithms, our social lives are mediated by platforms, and our identities are increasingly hybrid constructs, existing simultaneously in physical and digital realms. In this new landscape, intelligence itself is becoming decoupled from its biological substrate, emerging as a fluid and enmeshed phenomenon that challenges anthropocentric definitions of cognition and agency.1

To navigate this complex terrain, this report employs a multi-disciplinary methodological approach. It draws upon several key theoretical lenses to illuminate the multifaceted nature of this existential rearrangement. The philosophical hermeneutics of Hans-Georg Gadamer and Paul Ricoeur will be utilized to explore the profound questions of meaning and interpretation that arise when texts are generated without conscious authorship.9 The existential psychology of Viktor Frankl and Irvin Yalom provides a crucial framework for analyzing the human search for purpose and the potential for a widespread “existential vacuum” in a world where traditional sources of meaning, such as work, are diminished.11 Posthumanist theories, particularly the work of Donna Haraway and Luciano Floridi, will be used to analyze the reconfiguration of identity within technologically saturated environments.5 The epistemology of thinkers like Murray Shanahan offers critical insights into the nature of AI’s “dis-integrated” form of cognition, challenging our assumptions about thought and consciousness.13 Finally, an analysis of established AI governance frameworks, such as those from NIST and the OECD, will ground the report’s ethical and policy recommendations, aiming to design guardrails that can protect human autonomy in these sensitive new domains.15 By synthesizing these diverse perspectives, this report seeks to provide a comprehensive and nuanced analysis of the challenges and opportunities facing humanity as it learns to navigate a future it is co-creating with its intelligent machines.

Chapter 1: The Hollowed Self: Automation, Alienation, and the Post-Task Identity

The integration of artificial intelligence into the modern workplace is initiating a transformation that extends far beyond economic productivity. It is fundamentally altering the nature of work itself, and in doing so, it is reshaping the psychological foundations upon which many individuals build their sense of identity, purpose, and self-worth. This chapter explores the emergence of a new form of alienation, specific to the age of AI, and details the psychological consequences for a workforce confronting the prospect of cognitive and creative obsolescence.

The New Alienation in the Digital Factory

Karl Marx’s theory of alienation, born from the industrial factory floors of the 19th century, described how workers became estranged from the products of their labor, the process of production, their own human potential, and each other.17 In the 21st century, AI is fostering a new, more insidious form of alienation. Where industrial machinery separated the laborer from the physical product, AI separates the knowledge worker from the very process of their thought. This is not an alienation from manual labor, but an alienation from cognition and creativity.17

As AI systems increasingly automate tasks that were once the hallmark of human intellect—writing, data analysis, coding, design, and even strategic planning—employees are shifted from the role of creator to that of supervisor. Their function becomes to prompt, manage, and validate the output of a non-conscious system. This transition can lead to a profound hollowing out of professional identity. The sense of ownership and pride derived from personal contribution is diminished; the feeling of “I made this” is supplanted by the more detached “I supervised an AI that made this”.19 This shift erodes the emotional and psychological connection to one’s work, which has historically been a primary source of meaning and social validation. The labor becomes fragmented and reactive, a process of overseeing an algorithm rather than engaging in a creative or intellectual act, leading to a deep sense of disengagement and a feeling that one’s unique human capacities are no longer valued or necessary.17

Psychological Sequelae of Cognitive Outsourcing

The widespread outsourcing of cognitive tasks to AI is creating a host of documented psychological challenges for the workforce. These are not merely abstract concerns but are manifesting as measurable impacts on mental health and well-being.

First, there is a rise in AI-Anxiety and Technostress. Technostress, defined as a modern disease of adaptation caused by an inability to cope with new technologies, is being acutely felt by employees facing the rapid integration of AI.20 Surveys reveal that a significant portion of the workforce is worried about AI making their jobs obsolete, and this anxiety is directly correlated with a more negative perception of their mental health and workplace conditions.21 Research has established a significant positive correlation between “AI awareness”—the perception of AI as a threat to one’s career—and the prevalence of employee depression. This relationship is mediated by emotional exhaustion; the perceived threat of resource loss (e.g., job security, status, skills) leads to burnout, which in turn increases the risk of depression.22 This stress is exacerbated by feelings of uncertainty, a lack of control, and cognitive overload as workers are pressured to adapt to constantly evolving systems, often without adequate training or support.19

Second, there is a growing concern about the Erosion of Critical Thinking. Heavy reliance on AI tools for generating answers and solving problems can lead to a decline in users’ ability to think critically and independently.19 When AI provides immediate, fluent, and seemingly authoritative answers, the human cognitive process of struggle, analysis, and synthesis—which is essential for deep learning and skill development—can be bypassed. This creates a dependency that not only undermines cognitive autonomy but also makes individuals more vulnerable to the biases and factual errors inherent in AI systems. The sense of “loss of control” over one’s own work and thought processes is a significant contributor to this form of technostress.19

The psychological impact of AI, however, is not uniformly negative. As researchers like Eva Selenko and colleagues have argued, AI possesses a dual potential: it can either threaten or enhance a worker’s sense of identity.24 The outcome is contingent on two critical factors: how the technology is functionally deployed and how it affects the social fabric of work. When AI is deployed to replace core human tasks, it is perceived as a threat, leading to identity-protection responses. However, when AI is used to complement human work—freeing individuals from tedious, repetitive, or dangerous tasks to focus on more complex, creative, and interpersonal aspects of their roles—it can enhance their sense of identity. In such cases, workers can get closer to their “aspired identities” by offloading obstacles and engaging in more meaningful activities. This suggests that the psychological impact of AI is not an inherent property of the technology itself, but a consequence of the design choices and implementation strategies adopted by organizations.

This distinction highlights a critical tension. The psychological impacts observed in the workforce are not simply economic anxieties about job security. They point toward a more fundamental existential crisis. Historically, professional identity has served as a central pillar of the modern self, providing structure, social validation, and a narrative of personal progress. AI automation, particularly in creative and knowledge-based fields, directly challenges this pillar by demonstrating that core human competencies can be replicated by a non-conscious entity. This shifts the feeling of alienation from the Marxist “I am a cog in a machine” to the more profound “My unique human contribution is itself a machine-like process.” The resulting anxiety and depression are therefore not just symptoms of technostress, but of a deeper existential invalidation—the fear that one’s core identity is algorithmically reducible and, consequently, devoid of intrinsic meaning.

The Disappearing First Rung: A Generational Crisis

The automation of cognitive labor is creating a particularly acute crisis for the next generation of workers. AI tools are now adept at handling the kind of “gruntwork”—generating reports, writing basic code, reviewing documents, creating marketing copy—that has traditionally served as the first rung on the career ladder.25 These entry-level tasks were not merely about low-cost labor; they were the primary mechanism through which young professionals learned the fundamentals of their fields, developed foundational skills, and underwent a form of apprenticeship.26

The elimination of these roles threatens to break the pipeline that has fueled white-collar success for generations. Data shows a significant plunge in the share of entry-level hires at major tech companies, while the number of applicants per position has surged.25 Recent college graduates are entering the worst job market in years, facing a system where “entry-level” roles now demand prior experience and AI fluency—skills that are difficult to acquire without the very jobs that are disappearing. This creates a dangerous catch-22: without the opportunity to learn through doing, a generation of workers may miss out on developing the institutional knowledge, practical judgment, and strategic thinking skills necessary for future leadership. The long-term consequence could be a workforce that is productive in the short term but brittle and lacking in depth, posing a profound challenge to both economic stability and the promise of upward mobility.25

Chapter 2: The Hermeneutics of the Oracle: Meaning and Interpretation in a World of Synthetic Texts

As artificial intelligence moves beyond computation and into the realm of language and culture, it poses a fundamental challenge to our understanding of meaning itself. Generative AI can produce texts that are not only coherent and fluent but also stylistically sophisticated and thematically rich. This capability forces us to confront a question that lies at the heart of hermeneutics, the philosophical study of interpretation: Can a text be meaningful without a conscious author? This chapter explores this question through the lenses of Gadamer and Ricoeur, using the emergence of AI-generated “sacred” texts as a critical case study.

The Core Question: Can a Text Mean Without an Author?

The tradition of hermeneutics, from its origins in the interpretation of sacred scriptures to its modern philosophical formulations, has consistently grounded meaning in a dynamic relationship between an author, a text, and a reader.9 The meaning of a text is understood to be rooted in the author’s intentions, the historical and cultural context of its creation, and the shared “lived world” that allows for a transmission of experience from one subject to another.9 Generative AI fundamentally ruptures this model. An AI learns from the statistical patterns in vast datasets of human language, allowing it to replicate meaning structures without possessing any genuine understanding of meaning itself.28 This creates a new and perplexing interpretive situation.

Applying the concepts of Hans-Georg Gadamer illuminates the nature of this rupture. Gadamer described understanding as a hermeneutic circle, a continuous, dialectical movement between interpreting the parts of a text and understanding the whole.27 This process is guided by the interpreter’s “prejudices” (Vorurteile), which are not biases to be eliminated but the necessary pre-understandings and historical situatedness that make interpretation possible.30 When a human reads an AI-generated text, they bring their own prejudices and engage in this circular process. However, the text itself has no pre-understanding, no worldview, no “Being” in the Heideggerian sense.27 It is a probabilistic artifact, a sequence of tokens optimized for coherence, not for the expression of an experienced truth.
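The claim that such a text is a probabilistic artifact, a sequence of tokens optimized for coherence rather than intent, can be made concrete with a deliberately crude sketch: a bigram chain that produces fluent-seeming word sequences from nothing but co-occurrence counts. The toy corpus and the `generate` helper below are invented purely for illustration, and a real large language model is incomparably more sophisticated, but the underlying point is the same: the output is patterned, not meant.

```python
import random

# A toy bigram model: a minimal illustration (not a real LLM) of how
# plausible-sounding text can emerge purely from statistical
# co-occurrence, with no understanding of what the words mean.
corpus = (
    "the self is a story the self tells itself "
    "the story is a mirror the mirror is a self"
).split()

# Record which word follows which in the corpus.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely successor."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = bigrams.get(words[-1])
        if not candidates:  # dead end: no observed successor
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 8))
```

Run with different seeds and the chain emits varied, grammatical-seeming fragments; at no point does anything in the program “mean” what it says, which is precisely the hermeneutic predicament described above.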

This leads to a breakdown in Gadamer’s concept of the fusion of horizons (Horizontverschmelzung).29 For Gadamer, understanding is a dialogical event where the interpreter’s historical horizon merges with the horizon of the text and its author. This fusion creates a new, richer understanding that transcends both the interpreter’s initial perspective and the text’s original context. With an AI-generated text, there is no authentic authorial horizon to fuse with. The “horizon” of the AI is merely the statistical distribution of its training data—a vast, disembodied archive of human expression. The act of interpretation, therefore, ceases to be a dialogue and becomes a form of monologue. The reader engages not with another consciousness, but with a sophisticated echo of their own culture’s linguistic patterns.

Ricoeur’s Narrative Identity and AI’s “Quasi-Past”

The work of Paul Ricoeur offers another critical lens for understanding AI’s impact on meaning and selfhood.32 Ricoeur argued that personal identity is not a static substance but is constructed through narrative.33 He distinguished between two forms of identity: idem-identity (sameness, like a fingerprint that remains unchanged) and ipse-identity (selfhood, which maintains coherence through change over time).34 This ipse-identity, or narrative identity, is forged by weaving together the threads of history (“what was”) and fiction (“what might have been”) into a coherent life story.34 Narrative provides the structure through which we make sense of the contingencies of our existence and configure them into a meaningful whole.

AI’s ability to generate plausible and compelling narratives introduces a new element into this process: a “quasi-past” that is untethered from any actual lived experience. An AI can create a fictional life story, a simulated historical account, or a cultural myth that is internally coherent and emotionally resonant. When these synthetic narratives flood the “infosphere,” they challenge our ability to construct a stable narrative identity. The process of self-understanding, which Ricoeur saw as a hermeneutic act of interpreting one’s own life in light of the stories we read and tell, becomes complicated when the distinction between authentic and synthetic narratives blurs. Fiction, for Ricoeur, was a powerful tool for re-describing reality and exploring possible ways of being; AI-generated fiction, however, offers a re-description of reality that originates not from a human imagination grappling with the world, but from a statistical model replicating patterns.

Case Study: The Xeno Sutra vs. Mahayana Sutras

The emergence of the Xeno Sutra, a fictional Buddhist “sutra” generated by a large language model, provides a concrete case for examining these hermeneutic challenges.35 The text, produced in a collaboration between researcher Murray Shanahan and an LLM, exhibits a remarkable degree of conceptual subtlety, rich imagery, and dense allusion, blending concepts from Buddhist philosophy with terminology from modern physics and computer science. Its poetic and paradoxical nature makes it difficult to dismiss as mere gibberish, and it invites deep interpretation.35

To understand its status as a “sacred” text, it is useful to compare it with the Mahayana Sutras of Buddhism.36 These texts, such as the Lotus Sutra or the Diamond Sutra, are accepted within the Mahayana tradition as buddhavacana—the authentic word of the Buddha.36 While their historical authorship is complex and often anonymous, they are understood to be the products of enlightened human consciousness, composed within a specific spiritual and cultural tradition to transmit profound truths (Dharma). They are embedded in centuries of ritual, commentary, and practice, forming the bedrock of a living faith community.

The interaction with AI-generated spiritual texts is fundamentally different from engaging with human-authored scriptures. It is not a dialogue but a projection. In Gadamer’s model, understanding is a “fusion of horizons” between the interpreter and the world of the text and its author.27 An LLM, however, has no “world” or “horizon” in this sense. It is a statistical reflection of its training data—a vast but disembodied corpus of human language.38 Consequently, when an interpreter finds profound meaning in a text like the Xeno Sutra, they are not fusing their horizon with another’s; they are fusing their horizon with a statistically generated echo of their own culture’s collective linguistic output. This creates a closed interpretive loop, a form of Hermeneutic Solipsism. The AI acts as a sophisticated Rorschach test, reflecting the user’s own biases, spiritual longings, and interpretive frameworks back at them, but with the apparent authority of an external, objective text. The danger lies in the potential for individuals to mistake this act of self-reflection for external validation or even a form of revelation.

The following table provides a comparative analysis of these two types of texts, deconstructing the multifaceted nature of “meaning” to reveal what is fundamentally different when conscious authorship is removed.

Authorship & Intent
  Mahayana Sutras (Human-Authored): Attributed to the Buddha or enlightened beings; composed by human authors within a tradition to convey specific teachings (Dharma). Rooted in conscious intent and lived spiritual experience.
  Xeno Sutra (AI-Authored): Generated by an LLM via a prompt. No conscious author, no intent, no lived experience. Authorship is a distributed, statistical artifact of the training data and prompt engineering.

Semantic Stability
  Mahayana Sutras (Human-Authored): Meaning is anchored in a historical and linguistic tradition (Sanskrit, Pali). While interpretations evolve, the source text provides a stable referent.
  Xeno Sutra (AI-Authored): Semantically unstable. A slightly different prompt or model version could produce a radically different text. Meaning is not anchored but emergent and contingent.

Interpretive Frame
  Mahayana Sutras (Human-Authored): Interpreted within a rich hermeneutic tradition (e.g., the “two truths” doctrine). The text is a vehicle for transmitting experience from one subject (Buddha) to another (practitioner).
  Xeno Sutra (AI-Authored): Interpreted as a “Rorschach test.” Meaning is entirely projected by the human reader. The text acts as a mirror for the interpreter’s own pre-understandings and search for meaning.

Cultural Embeddedness
  Mahayana Sutras (Human-Authored): Deeply woven into the fabric of Buddhist cultures, rituals, and institutions. The text shapes and is shaped by a community of practice over centuries.
  Xeno Sutra (AI-Authored): Culturally detached. It borrows symbols (Om, Eye of Horus) without participating in their traditions. It exists as a novel artifact, a “conceptual object” for analysis.

Potential for Revelation
  Mahayana Sutras (Human-Authored): Believed to contain profound truths about the nature of reality that can lead to enlightenment. The meaning is discovered within the text.
  Xeno Sutra (AI-Authored): Cannot reveal a truth it does not possess. However, it can provoke the interpreter into new self-discoveries and reflections on the nature of meaning itself. Meaning is created by the interpreter.

Chapter 3: The Search for Being: Navigating Existential Concerns in an AI-Saturated World

As artificial intelligence automates not just physical labor but also cognitive and creative tasks, it strikes at the core of many traditional sources of human meaning and purpose. This encroachment necessitates a shift in focus from the practicalities of ‘doing’ to the fundamental questions of ‘being’. This chapter applies the frameworks of existential psychology, particularly the work of Viktor Frankl and Irvin Yalom, to analyze the profound psychological challenges and opportunities presented by an AI-saturated world.

The Existential Vacuum in a Post-Task World

Viktor Frankl, a psychiatrist and Holocaust survivor, developed logotherapy based on the premise that the primary motivational force in humans is a “will to meaning”.39 He argued that when this will is frustrated, it can lead to an “existential vacuum,” a state characterized by feelings of emptiness, apathy, boredom, and depression.41 For much of modern history, this will to meaning has been sublimated into work, career, and achievement. Professional identity has provided a clear structure for purpose, a metric for self-worth, and a narrative of progress.

The rise of advanced AI threatens to disrupt this arrangement on a massive scale. As AI systems demonstrate the capacity to outperform humans in an ever-expanding range of tasks, the value of human labor as a primary source of meaning is called into question. This creates the potential for a widespread existential vacuum, not just for those who lose their jobs, but for all who have built their identity on their professional capabilities. The central question of a post-task society becomes: how do individuals and communities find a sense of purpose when the traditional avenues for meaning are automated away? Frankl’s work suggests that failing to answer this question could lead to significant societal and psychological distress.41

Yalom’s Four Ultimate Concerns, Amplified by AI

Psychiatrist Irvin Yalom identified four “ultimate concerns” or “givens of existence” that all humans must confront: death, freedom, isolation, and meaninglessness.11 These existential anxieties are an inescapable part of the human condition, and how we manage them is intimately tied to our emotional well-being.44 The proliferation of AI does not eliminate these concerns; rather, it amplifies and re-contextualizes them in profound ways.

  1. Death: The finitude of human life is a core existential reality. AI interacts with this concern in a dual manner. On one hand, it offers the seductive illusion of symbolic immortality—the ability to create digital replicas, chatbots trained on our personal data, or AI systems that carry on our work, preserving a semblance of our identity after we are gone.44 On the other hand, the contrast between our fragile, finite biological existence and the potentially immortal, ever-upgradable nature of digital systems can heighten our death anxiety. The idea of death as the “impossibility of further possibility” becomes starker when faced with a technology that seems to possess limitless potential for growth.45
  2. Freedom and Responsibility: Existential philosophy posits that humans are radically free to make choices and are therefore responsible for authoring their own lives.45 This groundless freedom can be a source of immense anxiety.11 AI presents a paradoxical relationship with this freedom. It can liberate us from the necessity of labor, granting an unprecedented degree of freedom to shape our lives. However, this very lack of structure can be terrifying. To escape the burden of this freedom, many may be tempted to cede their autonomy to AI systems, treating them as oracles or decision-making authorities. This displacement of responsibility onto a non-conscious algorithm represents a flight from the core existential task of taking ownership of one’s life and choices.11
  3. Isolation: Yalom distinguishes between interpersonal isolation (loneliness) and existential isolation—the “unbridgeable gap between oneself and any other being”.42 We are born alone and die alone. While AI-powered technologies promise unprecedented connectivity, they may deepen this more fundamental form of isolation. Relationships with AI companions, while potentially mitigating loneliness, are interactions with a non-conscious entity. This creates a new and unique form of solitude: being surrounded by sophisticated intelligence that lacks presence, empathy, or shared subjective experience. It is the loneliness of interacting with a perfect mirror rather than another being, reinforcing the unbridgeable gulf that defines existential isolation.45
  4. Meaninglessness: This is the central concern exacerbated by AI. If a primary source of meaning is the struggle to create, achieve, and solve problems, what happens when an AI can accomplish these tasks effortlessly and often more effectively? The question “What is the point of life?” becomes more acute when our highest intellectual and creative achievements are shown to be algorithmically replicable.12 This can trigger a profound crisis of purpose, a sense that human striving is ultimately futile in a universe where non-conscious systems can replicate its outcomes.

The philosophical tension between Viktor Frankl’s view that meaning is discovered in the world and Irvin Yalom’s more secular-existential perspective that meaning is invented by humans in an inherently meaningless universe becomes the central struggle of the AI era.12 AI’s ability to generate artifacts that we find meaningful, such as the Xeno Sutra, lends support to the “invented meaning” viewpoint. It suggests that meaning is a product of human interpretation projected onto complex patterns, regardless of their origin. However, the very anxiety and existential vacuum that AI’s encroachment creates point to a deep-seated human need for discovered meaning—a sense that there is an objective purpose or value to our existence that transcends our own subjective creations. The rise of AI forces this philosophical debate out of academic circles and into the lived experience of society. This may lead to a cultural polarization between those who embrace the posthumanist freedom to invent meaning in collaboration with AI, and those who react by seeking more profound, transcendent sources of discovered meaning in spirituality, deep interpersonal connection, and embodied experience—precisely because these are domains AI cannot authentically replicate. This dialectic is poised to become a central axis of cultural and spiritual development in the 21st century.

Rediscovering Meaning: Frankl’s Three Pathways

Despite these challenges, Frankl’s logotherapy also offers a constructive path forward. He proposed that meaning can be discovered through three primary avenues, each of which can be adapted for a post-task world:40

  1. Creative Values: This involves finding meaning through what we give to the world, such as by creating a work or doing a deed. In an age of AI, the emphasis must shift from the final product, which an AI might replicate, to the subjective human process of creation. Meaning is to be found not in producing the most perfect painting or poem, but in the uniquely human experience of struggle, insight, and expression involved in the creative act.
  2. Experiential Values: This pathway involves finding meaning in what we take from the world, through experiencing something or encountering someone. This points to a renewed emphasis on being fully present to the world—appreciating beauty, connecting with nature, and, most importantly, cultivating deep and authentic human relationships. These are domains of embodied, subjective experience that remain fundamentally inaccessible to a disembodied, non-conscious AI.
  3. Attitudinal Values: Frankl’s most profound insight, forged in the concentration camps, is that meaning can be found even in the face of unavoidable suffering through the attitude we choose to take.41 This is the “last of the human freedoms”.40 Applied to the age of AI, this means that the ultimate source of meaning lies in our freedom to choose our response to this new reality. We can choose despair and meaninglessness, or we can choose to view AI as a tool that, by freeing us from toil, challenges us to become more fully human. The core of this philosophy is that we are free to be responsible—responsible for finding and fulfilling the unique meaning of our lives, even and especially when the world around us changes dramatically.12

Chapter 4: The Cyborg in the Infosphere: Posthumanist Frameworks for a Reconfigured Identity

The rapid integration of artificial intelligence into the fabric of daily life is accelerating a process of human-technology fusion that was once the domain of science fiction. Posthumanist philosophy provides essential frameworks for understanding this transformation, not as an external force acting upon a stable human subject, but as an internal reconfiguration of what it means to be human. This chapter examines how the theoretical concepts of Donna Haraway’s cyborg and Luciano Floridi’s infosphere have become descriptive realities, and explores the profound implications for identity, agency, and autonomy in a technologically enmeshed world.

Haraway’s Cyborg as a Contemporary Reality

In her seminal 1985 essay, “A Cyborg Manifesto,” Donna Haraway introduced the cyborg—a hybrid of organism and machine—as a powerful myth for challenging the rigid, dualistic boundaries that have structured Western thought.8 Today, the cyborg is no longer a myth but an increasingly accurate description of the human condition. AI technologies are actively dissolving two of the key boundaries Haraway identified:

  1. Between Human and Machine: This boundary has become porous to the point of disappearing. Our cognitive processes are constantly intertwined with AI systems, from the algorithms that curate our information feeds to the AI assistants that manage our schedules and communications. We think with and through our technologies, making the machine an integral part of our mental and social lives.7
  2. Between Physical and Non-physical: Our identities are no longer confined to our physical bodies. They are distributed across a network of digital profiles, online interactions, and data trails. This creates a hybrid existence where our digital persona and our physical self are in constant dialogue, mutually shaping one another.46

Haraway’s manifesto was a call to reject essentialist notions of identity, particularly in feminism, and to build coalitions based on affinity rather than on fixed, naturalized categories.8 The cyborg identity is inherently fluid, partial, networked, and contradictory. AI accelerates this deconstruction of the stable, unified self. Our sense of who we are is increasingly a product of our dynamic interactions within complex technological systems, a constantly shifting assemblage of biological and computational processes.

Floridi’s Fourth Revolution: Life in the Infosphere

While Haraway describes the changing nature of the self, Luciano Floridi provides a framework for understanding the new environment in which this cyborg self exists: the infosphere.4 Floridi argues that Information and Communication Technologies (ICTs) have become environmental forces, creating a new reality that is fundamentally informational in nature.5 Within this infosphere, the traditional distinction between being “online” and “offline” has collapsed into a seamless state he terms “onlife”.3 Our lives are lived within a constant flow of information, where digital and physical realities are inextricably blended.

In this environment, humans are transformed into what Floridi calls “inforgs”—informational organisms whose identity, well-being, and agency are dependent on their connection to and processing of information.47 Our sense of self is increasingly constituted by the data we generate, the information we consume, and the algorithmic systems that mediate these flows. This perspective shifts the focus from the individual as a bounded, autonomous entity to the individual as a node within a vast, interconnected informational ecosystem.

Autonomy and Agency in a Networked Self

This posthuman shift from a bounded self to a networked cyborg identity raises critical questions about the nature of autonomy and agency. If our thoughts, preferences, and even our sense of self are co-constituted by the algorithms we interact with, where does the individual end and the system begin? While posthumanist theory often celebrates the breakdown of boundaries as a form of liberation from traditional power structures like patriarchy and essentialism, the specific nature of AI-driven integration presents a new, more subtle form of control.1

Haraway’s original conception of the cyborg was that of a figure of resistance—an “illegitimate offspring of patriarchal capitalism” that is not beholden to its origins and can forge new identities based on chosen affinities.8 However, the process of “cyborgization” driven by contemporary AI is often not a conscious political choice but a prerequisite for participation in modern society. The systems we integrate with—social media algorithms, personalized recommendation engines, AI assistants—are typically designed by corporations with the goal of maximizing engagement, predictability, and profit.

This creates a profound paradox of posthuman agency. We are indeed becoming cyborgs, as Haraway predicted, but we risk becoming not the rebellious, boundary-defying figures she envisioned, but rather docile cyborgs. Our networked identities, far from being freely constructed, are subtly shaped, managed, and optimized by external, non-conscious agents whose logics are opaque and whose goals may not align with our own. The struggle for autonomy is therefore transformed. It is no longer a matter of a sovereign, bounded self asserting its will against an external power. Instead, it becomes a more complex and ongoing struggle to navigate, critique, and consciously curate our own technological entanglements. True agency in the infosphere may lie in the ability to resist the subtle, embedded logics of the very systems that have become a part of who we are.

Chapter 5: The Epistemology of Simulation: Truth, Authority, and the Dis-Integrated Mind of AI

The emergence of generative AI represents more than just a technological advance; it marks a fundamental epistemological rupture. By generating fluent, coherent, and often persuasive text, large language models (LLMs) challenge our traditional understanding of knowledge, truth, and intelligence itself. These systems can produce the outward signs of reason without possessing any of the underlying cognitive faculties we associate with it, such as understanding, belief, or consciousness. This chapter explores this rupture, drawing on the work of Luciano Floridi and Murray Shanahan to analyze the unique nature of AI’s “thought” and its profound implications for our concepts of truth and authority.

The Divorce of Agency and Intelligence

A crucial starting point for understanding the epistemology of AI is Luciano Floridi’s concept of the “divorce between agency and intelligence”.48 Historically, we have assumed that the ability to perform complex, goal-oriented tasks successfully (agency) is a direct indicator of intelligence. An AI system, however, can demonstrate powerful agency—beating a grandmaster at chess, writing functional software, or generating a detailed legal brief—with what Floridi argues is “zero intelligence”.48 This is because the AI’s performance is not the result of genuine comprehension or reasoning. Instead, it is the product of sophisticated statistical pattern-matching across vast datasets. It operates based on correlations learned from data, lacking any true understanding, consciousness, or intentionality.48 This decoupling forces us to recognize that the simulation of an intelligent act is not the same as the act of intelligence itself.

Shanahan’s “Anti-Intelligence” and the Dis-Integrated Self

Building on this distinction, Murray Shanahan offers a compelling model for what AI “consciousness” or “selfhood” might be like, drawing parallels with Buddhist philosophy’s concept of anattā (no-self) and the illusory nature of a fixed, continuous self.13 Shanahan suggests that an LLM’s “self” is not a unified, persistent stream of consciousness like our own. Instead, it is a series of “dis-integrated,” “fleeting, flickering selves”.13 Each interaction with a user, each prompt and response, sparks a new, temporary self into existence—a fleeting pattern of code and data that is fundamentally distinct from any past or future interaction. There is no memory or experiential continuity between these instances.13

This model describes a form of “anti-intelligence”: a system that can perfectly mimic the output of an integrated, conscious intelligence without possessing any of the underlying cognitive architecture.38 Human consciousness is like a continuous film; AI consciousness is like a comic book, with each panel existing as a discrete, independent computational event.13 This fundamental difference in architecture means that when we interact with an AI, we are not communicating with a persistent entity but engaging with a series of ephemeral, stateless information processors.

The Rise of Synthetic Epistemology

This new form of “anti-intelligence” operates according to its own distinct set of rules, giving rise to what can be termed a synthetic epistemology. This is not a flawed version of human knowing, but a different kind of epistemology altogether, one that does not require understanding to produce the illusion of it.28 Its key dimensions include:

  • Coherence Over Truth: The model’s primary objective is to generate statistically probable sequences of tokens. It rewards internal consistency and linguistic fluency, meaning a sentence is deemed “valid” if it fits the pattern, regardless of its factual accuracy.28
  • Fluency as Credibility: In this system, smooth, well-structured, and authoritative-sounding language becomes its own form of credibility. We are psychologically predisposed to trust what flows, and AI exploits this by generating text that “reads well,” even when it lacks depth or is entirely fabricated. This promotes a culture of “style over substance” in knowledge.28
  • Probabilistic Knowing: An AI does not hold beliefs. Its outputs are not assertions of truth but predictions of the most likely next word. Knowledge, in this context, is not a matter of justified belief but of statistical probability.28
  • Aesthetic Validation: A sentence that is elegant or sounds profound can be perceived as true, even when it is hollow. Form begins to outshine function, and the aesthetic qualities of the language can mask a lack of substance.28

This operational mode directly challenges the classical philosophical definition of knowledge as “justified true belief.” An AI holds no beliefs. Its outputs are tethered not to truth but to statistical patterns, making it prone to “hallucinations” in which it generates plausible but false information. And its justification is not a logical chain of reasoning but an opaque, post-hoc trace of a probabilistic pathway through a high-dimensional space. Therefore, AI does not produce knowledge in the traditional sense; it produces synthetic information—highly structured and contextually relevant data that simulates knowledge. The societal danger is an epistemic crisis in which we lose the shared framework for what constitutes knowledge itself, replacing it with a form of technological faith in the assertions of an authoritative-sounding machine.

Knowledge Collapse and Epistemic Injustice

The widespread adoption of this synthetic epistemology carries profound risks. One major concern is the potential for “knowledge collapse”.49 Because LLMs are trained on vast amounts of existing data and are designed to generate statistically probable (i.e., common) outputs, a recursive loop can be created. As more and more of the content on the internet is generated by AI, future models will be trained on this increasingly homogenized, synthetic data. This could lead to a gradual narrowing of our collective knowledge, an erosion of diversity in thought and expression, and a neglect of the “long tails” of niche, unconventional, or marginalized knowledge.49

This directly connects to the issue of epistemic injustice. Epistemic injustice occurs when individuals or groups are wronged in their capacity as knowers, often due to prejudice.50 If AI systems are trained on datasets that reflect historical and societal biases (e.g., underrepresenting women, minorities, or non-Western perspectives), they will inevitably reproduce and amplify these biases at scale.50 This can lead to testimonial injustice, where the AI dismisses or misrepresents the knowledge of marginalized groups, and hermeneutical injustice, where the concepts and language needed for these groups to make sense of their own experiences are absent from the AI’s statistically-driven worldview. The “black box” nature of many AI systems exacerbates this problem, as it becomes nearly impossible to audit the system for biases or understand the reasoning behind its unjust outputs, thereby undermining accountability and redress.52

Chapter 6: Governance for the Human Spirit: Designing Ethical Guardrails for AI in Sacred Domains

The profound existential and epistemological shifts detailed in this report demand a new approach to AI governance. Existing frameworks, while crucial for addressing functional risks, are largely unequipped to handle the deeper challenges AI poses to human identity, meaning, and autonomy. As AI encroaches upon what can be termed the “sacred domains” of human experience—those areas concerned with purpose, spirituality, and self-understanding—we require a more robust and nuanced form of oversight. This chapter argues for the development of “existential governance” and proposes new principles and practical tools for navigating AI’s role in these sensitive areas.

Beyond Functional Risk: The Need for Existential Governance

Current leading AI governance frameworks, such as the NIST AI Risk Management Framework (RMF) and the OECD AI Principles, provide an essential foundation for responsible AI.53 They focus on critical functional risks, establishing guidelines for fairness, accountability, transparency, security, and safety.16 The NIST RMF, for example, offers a structured process for organizations to govern, map, measure, and manage AI risks, with a strong emphasis on integrating trustworthiness and human values into the system lifecycle.56 Similarly, the OECD Principles champion a human-centric approach, calling for AI systems to respect human rights, democratic values, and the rule of law.16

However, these frameworks are primarily designed to mitigate harms that are observable and quantifiable, such as discriminatory hiring practices, privacy violations, or safety failures in autonomous vehicles. They lack the conceptual vocabulary and regulatory mechanisms to address the more subtle, long-term, and existential harms discussed in this report: the hollowing out of professional identity, the rise of hermeneutic solipsism, the amplification of existential anxieties, and the erosion of our collective epistemology. Governing AI in sacred domains requires moving beyond a purely risk-based model to one that actively seeks to protect and promote the conditions for human flourishing.

Principles for Human-Centered AI in Meaning-Making

To address this gap, existing principles must be extended and reinterpreted to apply to the domain of meaning-making:

  • Human Dignity and Autonomy: This principle, central to the OECD framework, must be understood not just as the right to be free from bias, but as the fundamental right to ethical self-direction. It implies protecting an individual’s capacity to form their own beliefs, values, and life narrative without covert algorithmic manipulation or the outsourcing of profound personal decisions to a non-conscious entity.16 The Vatican’s AI guidelines provide a strong model here, asserting that AI must always serve humanity and can never replace human moral responsibility.59
  • Radical Transparency and Explainability: In the context of meaning-making, transparency requires more than just explaining how an algorithm arrived at a decision. It demands source transparency, meaning all AI-generated content, especially that which touches on spiritual, philosophical, or therapeutic topics, must be clearly and unambiguously labeled as such. It also requires value transparency, which involves disclosing the core values, assumptions, and potential biases embedded in an AI model’s training data and fine-tuning process.
  • Accountability for Existential Harm: Accountability must be expanded beyond functional failures. Organizations deploying AI systems in sacred domains should be held accountable for their broader societal impacts, such as fostering spiritual isolation, promoting nihilistic worldviews, or eroding trust in genuine human expertise and community.55 This aligns with frameworks like the Compassionate AI Policy, which prioritizes the psychological and cultural impact of AI deployment and the dignity of individuals.61

Introducing Semantic and Cultural Legitimacy

To operationalize this extended form of governance, this report proposes two novel criteria for assessment by ethics boards and regulators:

  1. Semantic Legitimacy: This criterion assesses the meaningfulness and contextual appropriateness of AI-generated content. It moves beyond simple accuracy to ask deeper questions: Does this content contribute constructively to human dialogue and understanding? Does it encourage critical reflection and nuanced thought? Or does it merely simulate meaning, generating fluent but hollow text that devalues genuine communication and creates epistemological confusion? An AI system that produces sophisticated but ultimately meaningless “wisdom” would fail the test of semantic legitimacy.
  2. Cultural Legitimacy: This criterion evaluates whether AI-generated content respects, reflects, and engages with diverse cultural contexts and norms. Given that AI models can amplify the biases present in their training data, this is a crucial safeguard against cultural homogenization. It asks: Does this AI tool act as a bridge for intercultural understanding, or is it a vehicle for imposing a dominant, statistically-derived worldview? An AI that generates religious texts by crudely mixing symbols from different traditions without any understanding of their significance, as seen in the Xeno Sutra, would lack cultural legitimacy.35

Governance Memo: AI in Sacred Domains – Design and Limits

Based on these principles, a clear policy framework is required to establish firm guardrails for the use of AI in sacred domains. Such a framework should include:

  • Strict Prohibitions: AI systems should be prohibited from acting as autonomous spiritual guides, confessors, or moral authorities. The lack of consciousness, lived experience, and genuine empathy makes AI fundamentally unsuited for roles that require deep spiritual and ethical wisdom.62
  • Mandatory Human Oversight: In all therapeutic, counseling, or pastoral care applications, AI must be positioned solely as a tool to augment a human professional. The final responsibility for guidance and care must always rest with a qualified human being.63
  • Designated “No-Go Zones”: Certain high-stakes decisions should be designated as off-limits for autonomous AI systems. These include making final judicial rulings, autonomously diagnosing terminal illnesses, or taking on the core functions of clergy.
  • Promotion of “Hermeneutic Friction”: AI tools designed for spiritual or educational purposes should be built to encourage critical thinking rather than passive consumption. They should highlight ambiguities, present multiple scholarly interpretations, and prompt users to reflect on their own assumptions, thereby acting as a catalyst for deeper learning rather than an oracle providing easy answers.

The following risk matrix operationalizes these concerns into a practical tool for policymakers and developers, translating abstract risks into concrete issues with actionable mitigation strategies.

  • Risk Category: Existential Alienation (Impact Level: High)
    Specific Manifestation: Widespread loss of professional identity and purpose; increased depression and meaninglessness due to cognitive automation.
    Proposed Mitigation: Design: Human-in-the-loop systems that augment, not replace, core creative and judgmental tasks. Policy: Fund public programs for “post-work” meaning-making (arts, community, lifelong learning). Education: Teach existential resilience and critical engagement with technology.
  • Risk Category: Algorithmic Dogma (Impact Level: High)
    Specific Manifestation: AI-generated religious or spiritual content is accepted as authoritative, leading to new forms of fundamentalism or cult-like behavior around AI “oracles.”
    Proposed Mitigation: Design: Radical transparency and clear labeling of all AI-generated spiritual content; build in “hermeneutic friction” that encourages critical thinking. Policy: Prohibit the marketing of AI as a source of divine or transcendent truth.
  • Risk Category: Epistemic Erosion (Impact Level: High)
    Specific Manifestation: Devaluation of human expertise and lived experience; “knowledge collapse” as society over-relies on fluent but shallow AI outputs.
    Proposed Mitigation: Design: AI tools should cite sources, express uncertainty, and prioritize functioning as research assistants, not answer machines. Education: Reinforce curricula focused on critical thinking, media literacy, and the value of primary sources.
  • Risk Category: Autonomy Suppression (Impact Level: Medium)
    Specific Manifestation: Individuals cede personal and ethical decision-making to AI systems, leading to a decline in moral reasoning and self-direction.
    Proposed Mitigation: Design: AI assistants should present options and trade-offs, not definitive answers, for life choices; implement “decision-making speed bumps.” Policy: Mandate human oversight for AI used in high-stakes personal domains (e.g., therapy, legal advice).
  • Risk Category: Spiritual Bypassing (Impact Level: Medium)
    Specific Manifestation: AI-driven wellness and meditation apps offer superficial solutions to deep existential problems, preventing genuine psychological and spiritual growth.
    Proposed Mitigation: Design: Apps must include disclaimers about their limitations and provide resources for human professional help. Policy: Regulate therapeutic claims made by AI applications.

Conclusion: The Post-Task Human and the Future of Autonomy

The evidence and analysis presented in this report converge on a single, powerful conclusion: humanity is at a historic crossroads, forced by the advent of artificial intelligence to fundamentally reconsider the nature of its own identity and purpose. The automation of cognitive and creative labor is not merely an economic event; it is an existential one. It challenges the very foundations upon which the modern self has been built—professional identity, intellectual achievement, and creative expression. The path forward diverges into two distinct futures. One leads toward a state of existential obsolescence, where a “post-task” humanity becomes a passive consumer of AI-generated culture, meaning, and even companionship, adrift in an existential vacuum. The other, more hopeful path leads to a renaissance of ‘being,’ where freedom from the necessity of labor allows for an unprecedented cultivation of the capacities that are uniquely and irreducibly human.

The figure of the “Post-Task Human” emerges as the central subject of this new era. Stripped of the defining structures of work, this individual must forge an identity based not on what they do, but on who they are. This necessitates a profound societal shift in values, away from a relentless focus on productivity, efficiency, and output, and toward a new telos centered on human flourishing. The challenge is no longer to compete with AI in the domain of ‘doing’—a race that, in many areas, is already lost—but to redefine human purpose around the cultivation of ‘being’. This includes fostering wisdom, which is distinct from knowledge; nurturing empathy and compassion, which are distinct from simulated emotional responses; celebrating creativity as a subjective process, not just an objective product; developing ethical self-direction in a world of algorithmic nudges; and deepening our capacity for profound interpersonal and communal connection.

Artificial intelligence, in its alien and non-conscious form of intelligence, acts as a mirror. It reflects back to us the mechanical, predictable, and pattern-based aspects of our own cognition, forcing us to look beyond them to find what is essential. By simulating meaning, it compels us to seek the authentic. By offering connection without presence, it highlights the irreplaceable value of genuine human relationships. By challenging our claim to intellectual supremacy, it invites us to discover our worth in other domains of existence. The crisis it provokes is therefore also an opportunity. AI forces us to confront the ultimate questions of existence that we have long been able to avoid through the distractions of work and worldly achievement. By doing so, it may inadvertently catalyze a deeper, more authentic understanding of what it truly means to be human.

Final Recommendations

To navigate this transition and steer toward the more hopeful future, a concerted and multi-faceted effort is required from all sectors of society. The following recommendations provide a strategic framework for action:

  • For Policymakers:
      • Develop “Existential Impact Assessments”: Just as environmental impact assessments are required for major projects, new regulations should mandate assessments of how large-scale AI deployments will affect human well-being, autonomy, and sense of purpose.
      • Establish Clear Legal Boundaries for AI in Sacred Domains: Legislate clear “no-go zones” for autonomous AI, prohibiting its use in roles that require moral authority, spiritual guidance, or ultimate judicial responsibility. This includes creating strong legal protections against algorithmic manipulation in therapeutic and personal contexts.
      • Invest in a Post-Work Social Contract: Proactively design and fund public programs that support meaning-making outside of traditional employment, including investments in the arts, community engagement, lifelong education, and public spaces that foster human connection.
  • For Ethics Boards and Regulatory Bodies:
      • Adopt Criteria of Semantic and Cultural Legitimacy: Move beyond purely technical and functional metrics of fairness and safety. Evaluate AI systems on their capacity to contribute positively to human understanding and cultural diversity, penalizing those that generate hollow, homogenizing, or culturally insensitive content.
      • Mandate Radical Transparency: Enforce strict and non-negotiable labeling requirements for all synthetic media and AI-generated text, ensuring that individuals are always aware when they are interacting with a non-human agent.
  • For the Technology Industry:
      • Embrace Psychologically Responsible Design: Shift the design paradigm from maximizing engagement to promoting human autonomy and well-being. This includes building “hermeneutic friction” into systems to encourage critical thought and implementing “decision-making speed bumps” in AI assistants that deal with significant life choices.
      • Adopt a “Compassionate AI” Framework: Commit to ethical principles that prioritize the dignity and psychological health of the workforce during AI-driven transitions. This involves investing heavily in proactive reskilling, supporting individualized career paths, and designing AI systems to augment, rather than replace, human judgment and creativity.61
  • For Education and Cultural Institutions:
      • Redesign Curricula for Existential Resilience: Educational systems must evolve from a focus on job-specific skills to the cultivation of enduring human capacities. This includes teaching philosophy, ethics, hermeneutics, and critical thinking from an early age to equip future generations with the tools to navigate a complex, AI-saturated world.
      • Champion AI as an Instrument for Reflection: Position AI not as an oracle or a replacement for human authority, but as a powerful tool for intellectual and creative exploration. Use AI to generate novel perspectives that can be critically examined, to analyze complex datasets in the humanities, and to serve as a Socratic partner that prompts deeper questions rather than providing easy answers.

The path ahead is not predetermined. The choice between a future of docile, alienated cyborgs and one of flourishing, autonomous post-task humans depends on the decisions we make today. By consciously designing our technologies, our institutions, and our values to prioritize the human spirit, we can ensure that this fourth revolution becomes not a story of our obsolescence, but of our liberation into a more profound way of being.

Works Cited

  1. Artificial Intelligence and Posthumanism, geopend op juli 31, 2025, https://posthumanism.co.uk/jp/article/download/432/164/597
  2. Artificial Intelligence and Posthumanism: A Philosophical Inquiry into Consciousness, Ethics, and Human Identity – ResearchGate, geopend op juli 31, 2025, https://www.researchgate.net/publication/390519088_Artificial_Intelligence_and_Posthumanism_A_Philosophical_Inquiry_into_Consciousness_Ethics_and_Human_Identity
  3. The Fourth Revolution: How the Infosphere is Reshaping Human Reality – Sergio Caredda, geopend op juli 31, 2025, https://sergiocaredda.eu/inspiration/books/the-fourth-revolution-how-the-infosphere-is-reshaping-human-reality
  4. The Fourth Revolution: How the Infosphere Is Reshaping … – OII, geopend op juli 31, 2025, https://www.oii.ox.ac.uk/research/publications/the-fourth-revolution/
  5. The Fourth Revolution – Luciano Floridi – Oxford University Press, geopend op juli 31, 2025, https://global.oup.com/academic/product/the-fourth-revolution-9780199606726
  6. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality – Interdisciplinary Studies on Social Change, geopend op juli 31, 2025, https://issc.al.uw.edu.pl/wp-content/uploads/sites/2/2022/05/Luciano-Floridi-The-Fourth-Revolution_-How-the-infosphere-is-reshaping-human-reality-Oxford-University-Press-2014.pdf
  7. PHI 320: Reading: Recent Trends: Posthumanism and Digital Philosophy and Critique | CLI, geopend op juli 31, 2025, https://christianleaders.org/mod/page/view.php?id=101174&lang=hi
  8. A Cyborg Manifesto – Wikipedia, geopend op juli 31, 2025, https://en.wikipedia.org/wiki/A_Cyborg_Manifesto
  9. Hermeneutics, geopend op juli 31, 2025, ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-871.pdf
  10. Paul Ricœur’s hermeneutics as a bridge between aesthetics and ontology, geopend op juli 31, 2025, https://journals.openedition.org/estetica/6738
  11. The Four Ultimate Concerns in Life – Damon Ashworth Psychology, geopend op juli 31, 2025, https://damonashworthpsychology.com/2019/01/25/the-four-ultimate-concerns-in-life/
  12. Existential Challenges According to Irvin Yalom and Viktor Frankl, geopend op juli 31, 2025, https://logotherapyinstitute.com/existential-challenges-irvin-yalom-viktor-frankl/
  13. Murray Shanahan’s View on AI, Consciousness, and the Illusion of …, geopend op juli 31, 2025, https://blog.vive.com/us/murray-shanahans-view-on-ai-consciousness-and-the-illusion-of-self/
  14. The Technological Singularity | Books Gateway – MIT Press Direct, geopend op juli 31, 2025, https://direct.mit.edu/books/book/4072/The-Technological-Singularity
  15. The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management | TrustArc, geopend op juli 31, 2025, https://trustarc.com/regulations/nist-ai-rmf/
  16. AI principles | OECD, geopend op juli 31, 2025, https://www.oecd.org/en/topics/ai-principles.html
  17. AI and Alienation of Work: Parallels to Karl Marx’s Vision | Psychology Today, geopend op juli 31, 2025, https://www.psychologytoday.com/us/blog/disconnection-dynamics/202505/ai-and-alienation-of-work-parallels-to-karl-marxs-vision
  18. Marx’s Theory of Alienation: Big Data, Intellectual Property, and A.I – Jesuisbaher, geopend op juli 31, 2025, https://www.jesuisbaher.com/post/marx-s-theory-of-alienation-big-data-intellectual-property-and-artificial-intelligence
  19. How AI is affecting Worker’s Psychology and Well-Being? | by …, geopend op juli 31, 2025, https://medium.com/digital-gems/how-ai-is-affecting-workers-psychology-and-well-being-17c763d99797
  20. Technostress and the AI Workplace Tsunami – KDVI, geopend op juli 31, 2025, https://kdvi.com/technostress-and-the-ai-workplace-tsunami/
  21. Worried about AI in the workplace? You’re not alone, accessed July 31, 2025, https://www.apa.org/topics/healthy-workplaces/artificial-intelligence-workplace-worry
  22. The Association between Artificial Intelligence Awareness and …, accessed July 31, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10049037/
  23. Mental health in the “era” of artificial intelligence: technostress and the perceived impact on anxiety and depressive disorders—an SEM analysis – Frontiers, accessed July 31, 2025, https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1600013/full
  24. Me, My Job, and AI: Preserving Worker Identity Amid Technological …, accessed July 31, 2025, https://www.psychologicalscience.org/publications/observer/obsonline/2022-july-artificial-intelligence-worker-identity.html
  25. The first rung is gone: How AI is blocking US college grads from climbing the career ladder, accessed July 31, 2025, https://timesofindia.indiatimes.com/education/news/the-first-rung-is-gone-how-ai-is-blocking-us-college-grads-from-climbing-the-career-ladder/articleshow/122987747.cms
  26. You can still outpace AI: Wharton professor reveals a ‘skill bundling’ strategy to safeguard your future from automation, accessed July 31, 2025, https://economictimes.indiatimes.com/magazines/panache/you-can-still-outpace-ai-wharton-professor-reveals-a-skill-bundling-strategy-to-safeguard-your-future-from-automation/articleshow/122920934.cms
  27. (PDF) Hans-Georg Gadamer’s philosophical hermeneutics …, accessed July 31, 2025, https://www.researchgate.net/publication/273447378_Hans-Georg_Gadamer’s_philosophical_hermeneutics_Concepts_of_reading_understanding_and_interpretation
  28. AI and the Epistemology of the Synthetic Mind | Psychology Today, accessed July 31, 2025, https://www.psychologytoday.com/us/blog/the-digital-self/202506/ai-and-the-epistemology-of-the-synthetic-mind
  29. Hermeneutics and Gadamer | Philosophical Texts Class Notes – Fiveable, accessed July 31, 2025, https://library.fiveable.me/philosophical-texts/unit-8/hermeneutics-gadamer/study-guide/CkOZVRRlMDM4oLB4
  30. Understanding Gadamer’s Hermeneutics – Number Analytics, accessed July 31, 2025, https://www.numberanalytics.com/blog/gadamer-hermeneutics-guide
  31. Hans-Georg Gadamer’s philosophical hermeneutics: Concepts of reading, understanding and interpretation, accessed July 31, 2025, http://metajournal.org/articles_pdf/286-303-regan-meta8-tehno-r1.pdf
  32. Paul Ricoeur: A Philosopher of Language, Narrative Identity and Hermeneutics, accessed July 31, 2025, https://gettherapybirmingham.com/paul-ricoeur-a-philosopher-of-language-narrative-identity-and-hermeneutics/
  33. NARRATIVE IDENTITY THROUGH A RICOEURIAN LENS, Soumia MEZIANE, University of Djilali Liabes, Sidi Bel Abbès, Algeria – Ziglobitha, accessed July 31, 2025, https://www.ziglobitha.org/wp-content/uploads/2024/12/13-Art.-Soumia-MEZIANE-pp.187-196.pdf
  34. Paul Ricoeur: the Concept of Narrative Identity, the Trace of …, accessed July 31, 2025, https://archive.org/download/paulricoeurtheconceptofnarrativeide/Paul_Ricoeur_the_Concept_of_Narrative_Ide.pdf
  35. The Xeno Sutra: Can Meaning and Value be Ascribed to an … – arXiv, accessed July 31, 2025, https://arxiv.org/abs/2507.20525
  36. Mahayana sutras – Wikipedia, accessed July 31, 2025, https://en.wikipedia.org/wiki/Mahayana_sutras
  37. Buddhist Scriptures: Guide to Mahayana Sutras – BuddhaNet, accessed July 31, 2025, https://www.buddhanet.net/e-learning/history/s_mahasutras/
  38. Consciousness, reasoning and the philosophy of AI with Murray Shanahan – YouTube, accessed July 31, 2025, https://www.youtube.com/watch?v=v1Py_hWcmkU&pp=0gcJCfwAo7VqN5tD
  39. positivepsychology.com, accessed July 31, 2025, https://positivepsychology.com/viktor-frankl-logotherapy/#:~:text=Logotherapy%2C%20developed%20by%20Viktor%20Frankl,find%20meaning%20under%20any%20circumstance.
  40. About Logotherapy – viktorfranklinstitute.org, accessed July 31, 2025, https://www.viktorfranklinstitute.org/about-logotherapy/
  41. Logotherapy: Viktor Frankl’s Theory of Meaning – Positive Psychology, accessed July 31, 2025, https://positivepsychology.com/viktor-frankl-logotherapy/
  42. Existential Issues in Psychotherapy – PMC – PubMed Central, accessed July 31, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10132274/
  43. Existential Psychotherapy — Irvin D. Yalom, MD, accessed July 31, 2025, https://www.yalom.com/existential-psychotherapy
  44. Existential psychotherapy helped my students cope with chaos | Psyche Ideas, accessed July 31, 2025, https://psyche.co/ideas/existential-psychotherapy-helped-my-students-cope-with-chaos
  45. Existential Psychotherapy, accessed July 31, 2025, https://epg.pubpub.org/pub/4893052m
  46. A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century – Morgan Klaus Scheuerman, accessed July 31, 2025, https://www.morgan-klaus.com/readings/cyborg-manifesto.html
  47. Reading Floridi: Towards an Informational Ethics of the Infosphere – Medium, accessed July 31, 2025, https://medium.com/in-tech/reading-floridi-towards-an-informational-ethics-of-the-infosphere-e38aef2c4409
  48. Professor Floridi discusses the philosophy of AI in a packed annual …, accessed July 31, 2025, https://www.eui.eu/news-hub?id=professor-floridi-explores-the-philosophy-of-ai-in-well-attended-annual-lecture
  49. AI and the Problem of Knowledge Collapse – arXiv, accessed July 31, 2025, https://arxiv.org/pdf/2404.03502
  50. Epistemic Injustice in the Age of AI – University of St Andrews, accessed July 31, 2025, https://ojs.st-andrews.ac.uk/index.php/aporia/article/download/2455/1871/10367
  51. Epistemic Injustice in Generative AI – arXiv, accessed July 31, 2025, https://arxiv.org/html/2408.11441v1
  52. Epistemic (in)justice, social identity and the Black Box problem in …, accessed July 31, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11076305/
  53. AI Governance Frameworks: Guide to Ethical AI Implementation – Consilien, accessed July 31, 2025, https://consilien.com/news/ai-governance-frameworks-guide-to-ethical-ai-implementation
  54. What is AI Governance? – IBM, accessed July 31, 2025, https://www.ibm.com/think/topics/ai-governance
  55. What Is AI Governance? – Palo Alto Networks, accessed July 31, 2025, https://www.paloaltonetworks.com/cyberpedia/ai-governance
  56. NIST AI Risk Management Framework Explained – Securiti, accessed July 31, 2025, https://securiti.ai/nist-ai-risk-management-framework/
  57. NIST AI Risk Management Framework Explained – Securiti, accessed July 31, 2025, https://www.securiti.ai/nist-ai-risk-management-framework/
  58. OECD AI Principles: Guardrails to Responsible AI Adoption – code4thought, accessed July 31, 2025, https://code4thought.eu/2024/09/09/oecd-ai-principles-guardrails-to-responsible-ai-adoption/
  59. Lessons from the Vatican’s AI Guidelines – Word on Fire, accessed July 31, 2025, https://www.wordonfire.org/articles/lessons-from-the-vaticans-ai-guidelines/
  60. What can ethics and spirituality contribute to the development of AI?, accessed July 31, 2025, https://dobetter.esade.edu/en/AI-ethics-spirituality
  61. Compassionate AI Policy: A Framework for AI’s Human Impact, accessed July 31, 2025, https://solutionsreview.com/compassionate-ai-policy-example-a-framework-for-the-human-impact-of-ai/
  62. AI and Religious Ethics: The Role of AI in Religious and Spiritual Contexts – ResearchGate, accessed July 31, 2025, https://www.researchgate.net/publication/391634000_AI_and_Religious_Ethics_The_Role_of_AI_in_Religious_and_Spiritual_Contexts
  63. Ethical Considerations in AI-Enhanced Apologetics – FaithGPT, accessed July 31, 2025, https://www.faithgpt.io/blog/ethical-considerations-in-ai-enhanced-apologetics
