The architecture of contemporary artificial intelligence governance faces increasing strain, revealing inherent limitations in its foundational assumptions 6. Despite numerous regulatory endeavors aimed at ensuring the ethical and responsible development and deployment of AI, these initiatives frequently remain constrained by a perspective that treats artificial intelligence as a self-contained technological entity. This viewpoint fails to recognize that AI is the product of intricate sociotechnical processes deeply interwoven with asymmetrical power dynamics. This article posits a fundamental reorientation of AI governance, asserting that data sovereignty should serve as the bedrock for enforceable ethics and genuine legitimacy in this rapidly evolving field 12. This is not merely a critique of existing paradigms; it is a proposal for a comprehensive reconceptualization of how we regulate artificial intelligence.

I. From model-centricity to input ontology

The prevailing understanding that underpins AI regulation largely assumes that ethical considerations primarily arise and are addressed within the AI model itself 2. Consequently, the focus of regulatory efforts tends to gravitate towards the point at which decisions are made by AI systems. This model-centric epistemology manifests in a preoccupation with concepts such as explainability, fairness auditing of algorithms, and the deployment of transparency dashboards intended to illuminate the inner workings of AI. However, this article challenges this dominant assumption by shifting the analytical lens to the genesis of ethical concerns. It argues that the origins of ethical failures in AI systems are more accurately located within the data pipeline that feeds these models, rather than solely within the model’s parameters or decision-making processes.

The regulatory fallacy of model-centrism

Current regulatory approaches predominantly foreground the AI model as the primary target for intervention and oversight. Within this framework, harms generated by AI are conceptualized as statistical anomalies to be smoothed out through technical adjustments to the model, rather than as systemic reflections of deeper issues embedded in the data on which these models are trained. The result is an oversupply of tools and techniques for explaining the decisions AI makes, with little fundamental questioning of the legitimacy or ethical implications of the data that underlies those decisions. While model-centric approaches may offer a semblance of accountability at the level of outputs, they rarely scrutinize the biases, representational imbalances, and consent problems present in the training data 23. The focus on the model can thus divert attention from the more fundamental ethical questions about the data that shapes its behavior 1. Data science teams’ own accounts of AI governance reflect this tendency to treat the model as the central element requiring governance 2. Yet the approach has inherent limits: performance improvements achieved solely through model adjustments eventually plateau, and biases and other critical issues in the data itself go overlooked 23. Moreover, AI governance tools cannot ensure full accuracy precisely because of biases in data and algorithms 7.

Data sovereignty as foundational construct

This article calls for a fundamental shift in perspective, a Kuhnian paradigm shift, in how we approach AI governance. This shift entails moving away from a primary focus on downstream accountability, which polices the decisions AI makes, towards an emphasis on upstream legitimacy, which governs the inputs that shape AI behavior. The core of this proposed reframing is the principle of data sovereignty. Sovereignty over data—encompassing its origins, the conditions under which it is captured, the modes of consent obtained for its use, and the embedded rights associated with it—must become the epistemic core around which regulatory knowledge and governance frameworks are organized. Data sovereignty, in this context, signifies that data should be governed by the laws and regulations of the jurisdiction where it originates 12. This concept extends beyond mere compliance with legal frameworks to encompass the rights of individuals and communities to control their data, including how it is collected, stored, processed, and used 16. In the realm of AI, embracing data sovereignty as the foundational construct for regulation offers the potential to decentralize innovation, ensuring that AI systems are trained on diverse and representative datasets that reflect the populations they serve 13. This epistemological reframing is not simply a matter of adding nuance to existing approaches; it fundamentally alters the structural assumptions upon which current AI governance regimes are built.

II. Methodological innovation and strategic transgression

A genuine transformation in AI governance necessitates more than incremental updates to existing regulations; it demands a methodological rupture with the prevailing orthodoxies of AI oversight. This article disrupts the conventional approach by demonstrating that the current methodological canon of AI oversight, which includes explainability, risk-tiering, and fairness metrics, rests on an unsustainable distinction between inputs (data) and outputs (model decisions).

Deconstructing the myth of contained ethics

The methodologies currently employed in AI governance often operate under the implicit assumption that ethical considerations can be contained and managed primarily at the level of the AI model’s outputs. However, this perspective overlooks the critical role of the data that serves as the foundation for these models. For instance, while explainability seeks to illuminate how an AI model arrives at a particular decision, it typically does not challenge or even inquire into the provenance or the ethical characteristics of the data upon which that decision is based. Similarly, risk-tiering frameworks often treat potential harms arising from AI as probabilistic inevitabilities to be categorized and managed, rather than as sociotechnical constructs that are deeply rooted in the way data is collected, processed, and used. These methods, while seemingly providing a structured approach to oversight, offer bureaucratic closure rather than genuine epistemic justice. They function as tools for containing perceived risks within the existing paradigm, rather than as instruments for fundamentally challenging and transforming the underlying assumptions and practices that lead to ethical concerns in the first place. The limitations of AI governance tools in ensuring full accuracy due to biases in data and algorithms further underscore the inadequacy of a purely output-focused approach 7. The narrow focus on models in current regulations also neglects the critical role of data in shaping AI capabilities and potential harms 1.

Speculative but actionable governance frameworks

To move beyond the limitations of current methodologies, this article ventures into the speculative realm, proposing governance frameworks that are not only feasible but also have the potential to be truly transformative. These proposals are intended as provocations towards a governance system that is more aligned with the realities of technological power and the fundamental importance of data in shaping AI outcomes.

AI input certification

Drawing inspiration from the well-established practice of nutritional labeling for food products, this framework proposes the implementation of “AI Input Certification”. These certifications would function as informative labels for AI datasets, providing crucial details about their origin, the consent under which they were collected, and an assessment of their representational balance across different demographic groups. Such a system would aim to bring a level of transparency to the often-opaque data supply chain of AI, enabling developers, regulators, and the public to make more informed decisions about the datasets being used to train AI models. By making the characteristics of training data visible and comparable, AI Input Certification could potentially foster a market demand for ethically sourced and well-governed data 29. Initiatives like the Data Provenance Initiative and the development of Data Provenance Standards by organizations such as the Data & Trust Alliance already represent steps in this direction, aiming to ensure metadata about the sourcing, quality, and permissions of datasets are provided in a consistent manner 31. These efforts underscore the growing recognition of the need for greater transparency in AI training data. Frameworks like IEEE CertifAIEd also aim to provide ethical certification for AI systems, potentially extending to data inputs 35.
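As a minimal sketch, such a certification label could be a machine-readable record attached to a dataset. The field names and dataset below are illustrative assumptions, not drawn from the Data Provenance Standards or any published schema:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetLabel:
    """Illustrative 'nutrition label' for an AI training dataset."""
    name: str
    origin: str            # jurisdiction or source of collection
    consent_basis: str     # e.g. "opt-in", "contractual", "unknown"
    demographic_shares: dict = field(default_factory=dict)  # group -> fraction of records

    def representation_gaps(self, threshold: float = 0.05) -> list:
        """Return groups whose share of the dataset falls below the threshold."""
        return [g for g, share in self.demographic_shares.items() if share < threshold]

label = DatasetLabel(
    name="clinical-notes-v2",   # hypothetical dataset
    origin="EU",
    consent_basis="opt-in",
    demographic_shares={"group_a": 0.62, "group_b": 0.35, "group_c": 0.03},
)
print(label.representation_gaps())  # lists under-represented groups
```

Because the label is structured data rather than prose, regulators or procurement teams could compare candidate datasets programmatically before any training run.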

Self-governing data capsules

Another proposed framework involves the concept of “Self-Governing Data Capsules”: portable data units embedded with smart contracts that specify the allowable uses of the data, set expiration dates for its use, and outline protocols for revoking access rights. This approach envisions data as an active entity, imbued with embedded rights and usage rules that are automatically enforced through smart contracts 3. The idea of data capsules aligns with broader trends towards user-centric data ownership and control, potentially leveraging blockchain technology to create secure and transparent data assets 41. Smart contracts, self-executing contracts with their terms written directly into code, offer a mechanism to automate and enforce these data usage agreements 3. Challenges in implementing such a system include establishing security and protection, managing quality and consistency, and defining ownership 42. This framework could empower individuals and communities to maintain control over their data, even after it has been shared for purposes such as AI model training.
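Setting the blockchain machinery aside, the enforcement logic such a capsule would embed can be sketched in ordinary code. The class, purposes, and dates below are illustrative assumptions, not an actual smart-contract implementation:

```python
from datetime import datetime, timezone

class DataCapsule:
    """Portable data unit that enforces its own usage rules (illustrative sketch)."""

    def __init__(self, payload, allowed_uses, expires):
        self.payload = payload
        self.allowed_uses = set(allowed_uses)   # purposes the data subject permitted
        self.expires = expires                  # datetime after which access is denied
        self.revoked = False

    def revoke(self):
        """Data subject withdraws access; every subsequent read is denied."""
        self.revoked = True

    def read(self, purpose, now=None):
        """Release the payload only if the capsule is live and the purpose is permitted."""
        now = now or datetime.now(timezone.utc)
        if self.revoked:
            raise PermissionError("access revoked by data subject")
        if now > self.expires:
            raise PermissionError("capsule expired")
        if purpose not in self.allowed_uses:
            raise PermissionError(f"purpose '{purpose}' not permitted")
        return self.payload

capsule = DataCapsule("medical record", ["research"],
                      expires=datetime(2100, 1, 1, tzinfo=timezone.utc))
capsule.read("research")   # allowed until expiry or revocation
capsule.revoke()           # from here on, every read raises PermissionError
```

An on-chain version would replace the in-process checks with contract code executed by the network, which is precisely where the security, consistency, and ownership challenges cited above arise.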

Real-time consent infrastructure

Recognizing the limitations of static, one-time consent models in the dynamic context of AI development and deployment, this article proposes the creation of a “Real-Time Consent Infrastructure”. Such systems would allow users to dynamically update, revoke, or extend their data sharing rights at any point in time, even after their data has been used for model training 47. This approach acknowledges that consent is not a fixed state but rather an ongoing and evolving relationship between data subjects and data users 53. The development of advanced Consent Management Platforms (CMPs) that incorporate AI capabilities to personalize privacy experiences and adapt to regulatory changes represents a technological foundation for such a system 47. Furthermore, regulatory initiatives like the EU AI Act, which emphasizes informed, explicit, and freely given consent, and legislative proposals such as the AI CONSENT Act in the US, which aims to require companies to obtain consent for using consumer data to train AI systems, highlight the growing importance of robust consent management in the age of AI 57. However, implementing a real-time consent infrastructure faces challenges such as ensuring transparency, managing consent across multiple channels, and adapting to changing regulations 56. Real-time consent infrastructure could significantly enhance user autonomy and build trust in AI systems by providing individuals with continuous control over their personal information.
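The core bookkeeping such an infrastructure requires can be sketched as an append-only consent log in which the most recent event wins. The simplifying assumption here, that consent is tracked per subject and purpose, is illustrative, not a claim about how any existing CMP works:

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Append-only log of consent events; the latest event per (subject, purpose) wins."""

    def __init__(self):
        self._events = []  # (timestamp, subject, purpose, granted)

    def grant(self, subject, purpose):
        self._events.append((datetime.now(timezone.utc), subject, purpose, True))

    def revoke(self, subject, purpose):
        self._events.append((datetime.now(timezone.utc), subject, purpose, False))

    def is_granted(self, subject, purpose):
        """Current consent state: the most recent grant or revoke decides."""
        state = False  # no recorded consent means no consent
        for _, s, p, granted in self._events:
            if s == subject and p == purpose:
                state = granted
        return state
```

Keeping the full event history rather than a single flag is deliberate: it gives auditors a verifiable record of when consent was held, which matters when asking whether a given training run was legitimate at the time it occurred.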

These proposed frameworks are not intended as utopian ideals but rather as concrete provocations towards the development of a governance system that is better aligned with the realities of technological power and the fundamental rights of data subjects.

III. Ingestion, not cognition

The most consequential shift proposed by this article is theoretical in nature. It urges a fundamental redefinition of AI ethics, framing it as a problem of ingestion, rather than cognition. The underlying premise of this reconceptualization is that AI models do not spontaneously invent patterns; instead, they ingest, compress, and ultimately regurgitate social logics that are already present within the data they are trained on. Therefore, to regulate AI as if it possesses independent cognitive abilities, rather than as a reflection of the data it consumes, is to fundamentally misunderstand its nature and to pursue ineffective regulatory strategies.

Bias reimagined as upstream negligence

Within this theoretical reframing, the concept of bias in AI is not viewed as an emergent and somewhat unpredictable behavior of the model itself. Instead, bias is understood as a sedimented artifact, a consequence of systemic neglect that occurs upstream in the data pipeline 5. When datasets are collected without obtaining proper consent, when they are stripped of crucial context, or when they are unrepresentative of the populations that will be affected by the AI system, bias is not merely a glitch in the system; it is a design feature, an inherent characteristic baked into the very foundation of the model 66. This perspective aligns with the understanding that bias in AI, especially in machine learning models, often originates from training data that is unrepresentative or incomplete, leading to skewed outputs 66. Furthermore, biases can be introduced at various stages of the AI pipeline, starting with data collection and extending through data labeling and even model training 67. Recognizing bias as a form of upstream negligence underscores the responsibility of all actors involved in the data pipeline to ensure the ethical sourcing, preparation, and curation of data used to train AI models 74.
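Treating bias as upstream negligence implies audits that run on the data before any training occurs. A minimal, hypothetical representativeness check, comparing a dataset's demographic composition against the population the system will affect, might look like this (all figures invented for illustration):

```python
def representation_audit(dataset_counts, population_shares, tolerance=0.10):
    """Flag groups whose share of the dataset deviates from their share of the
    affected population by more than `tolerance`."""
    total = sum(dataset_counts.values())
    flags = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        deviation = data_share - pop_share
        if abs(deviation) > tolerance:
            flags[group] = round(deviation, 3)   # positive: over-, negative: under-represented
    return flags

# A dataset skewed towards one group relative to the population it will serve:
print(representation_audit(
    dataset_counts={"a": 800, "b": 150, "c": 50},
    population_shares={"a": 0.5, "b": 0.3, "c": 0.2},
))
```

A check this simple obviously cannot capture consent or context, but it makes the point concrete: the skew is measurable before a single model parameter exists.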

AI as epistemic mirror

This theoretical shift encourages us to move away from solely interrogating what an AI system is “thinking” or how it is making decisions. Instead, a more critical and revealing question to ask is: who gets to feed it, and under what conditions? The AI model, in this view, is not an autonomous decision-making entity but rather a distorted mirror, reflecting the infrastructural injustices, the economic practices of data extractivism, and the legal permissiveness that characterize the environment from which its training data is drawn 75. This perspective highlights how AI systems can internalize implicit biases from their training data, which often reflects existing societal prejudices 66. The bias observed in AI outputs is not solely an AI problem but a reflection of human biases present in the data 75. Therefore, to understand and address bias in AI, we must critically examine the data itself and the social and political contexts in which it is generated and used.

False dichotomies exposed

This reconceptualization exposes profound contradictions that lie at the heart of modern AI ethics. For instance, the emphasis on explainability seeks to demystify the decisions made by AI systems without necessarily questioning the fundamental right of those decisions to exist, especially if they are based on data that is ethically compromised or inherently biased. Similarly, the focus on compliance often legitimizes the collection of consent at a single point in time, while largely ignoring the ongoing need for continuous data governance and the evolving nature of AI systems and their use cases. These approaches create a false sense of progress by focusing on transparency and adherence to formal requirements without addressing the deeper issues of data legitimacy and the ethical implications of the data itself. In essence, modern AI ethics often seeks clarity in the form of explainability without ensuring the underlying legitimacy of the data that fuels these systems.

IV. Transformative implications and the politics of sovereignty

This article is not concerned with incremental adjustments to the status quo; it advances a constructively insurgent agenda, one that seeks to move beyond mere harm mitigation towards a fundamental redesign of the structural underpinnings of AI governance.

From guardrails to gatekeeping

Current risk-based governance frameworks typically focus on providing “guardrails” for AI systems that are already in deployment, aiming to manage and mitigate potential harms after they have emerged. In contrast, this article argues for a proactive approach that demands we move upstream in the AI development lifecycle—to establish effective “gatekeeping” mechanisms that block the ingestion of unconsented and ethically problematic data before it can even be used to train AI models. This represents a fundamental shift from a reactive stance to a preventative one, encapsulated by the analogy: “stop the arsonist, not just build fire exits”. This approach aligns with the understanding that addressing bias and ensuring ethical considerations are integrated early in the AI development process is crucial 5.
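At its simplest, a gatekeeping mechanism of this kind is a filter applied before any record reaches the training pipeline. The record shape and consent store below are assumptions made for illustration:

```python
def ingestion_gate(records, consent_check):
    """Partition candidate training records: admit only those whose subject has
    verifiable consent for the stated purpose; reject the rest before training."""
    admitted, rejected = [], []
    for record in records:
        if consent_check(record["subject_id"], record["purpose"]):
            admitted.append(record)
        else:
            rejected.append(record)
    return admitted, rejected

# Hypothetical consent store: only u1 has consented to model training.
consented = {("u1", "training")}
records = [
    {"subject_id": "u1", "purpose": "training"},
    {"subject_id": "u2", "purpose": "training"},
]
admitted, rejected = ingestion_gate(records, lambda s, p: (s, p) in consented)
```

The point of the sketch is architectural rather than algorithmic: the consent check sits in front of the training pipeline, so unconsented data is excluded by construction instead of being mitigated after deployment.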

Enforceable digital sovereignty

Envisioning a future where data is truly sovereign requires imagining a world where data itself carries its rights wherever it travels, effectively self-governing through embedded protocols 41. In this vision, consent is no longer a static checkbox marked at a single point in time but rather a continuously negotiable protocol, allowing individuals to maintain ongoing control over their information 53. Furthermore, the right to be forgotten would persist even beyond the training epochs of AI models, ensuring that individuals can effectively withdraw their data and have it removed from AI systems. Such a vision fundamentally transforms data from a passive raw material to an active, right-bearing agent within the digital ecosystem. This aligns with the growing recognition of digital sovereignty as the right to control one’s own data 12.

Democratizing the data economy

Embedding the principle of sovereignty directly at the data layer has profound implications for democratizing the data economy 84. It enables the establishment of community governance structures for culturally sensitive and Indigenous data, empowering these communities to control the collection, use, and interpretation of their information 12. It also fosters greater corporate accountability by enabling lineage audits that can trace the origins and usage of data, as well as mechanisms for retroactive deletion of data when necessary 42. Moreover, it enhances scientific reproducibility by ensuring that the training documentation for AI models includes detailed and traceable information about the data used 31. Perhaps the most radical proposal stemming from this perspective is to fundamentally flip the current value proposition, creating market mechanisms that make ethically sourced and sovereign data more valuable than data that has been scraped without consent or proper governance 86. This could involve introducing pricing mechanisms, taxation policies, or even tokenization strategies that reward organizations for adhering to high standards of data ethics and sovereignty 90.

V. Toward relational AI

This article not only offers a critique of the prevailing AI governance paradigm but also constructs a compelling counter-narrative. It actively resists the dominant portrayal of AI regulation as a purely technical endeavor focused on a race to keep pace with the ever-increasing capabilities of AI models.

Narrative inversion

This counter-narrative proposes a fundamental inversion of the dominant framing:

  • The core problem is not the often-cited opacity of AI models but rather the underlying opacity of the data upon which they are trained 74.
  • The primary solution is not simply to strive for greater interpretability of model decisions but to actively interdict the use of problematic data in the first place.
  • The ultimate outcome we should aim for is not just “responsible AI,” a term that can be vaguely defined and inconsistently applied, but “relational AI”—systems that are intentionally built to respect the specific contexts, the diverse communities, and the inherent sovereignties within which they operate 13.

This shift in perspective calls for a move away from viewing AI as an abstract, autonomous entity and towards understanding it as a technology that is deeply embedded in social and ethical relationships 13. It emphasizes the importance of building AI systems in a manner that is fundamentally relational, grounded in respect for human rights, cultural values, and individual autonomy, rather than in isolation from these critical considerations.

VI. The road ahead

To truly actualize the reframing of AI governance advanced in this article, an ambitious and cross-sectoral research agenda must be pursued. This agenda requires collaborative efforts across various disciplines to address the complex challenges and opportunities presented by a data sovereignty-centric approach to AI regulation.

Technological research

Several key technological research questions need to be explored to support this shift:

  • Can AI models be designed to retain comprehensive data lineage information throughout each stage of the training process, allowing for greater transparency and accountability regarding the data used 108?
  • How can AI system architectures be developed to enable the selective and complete removal of specific data points or categories—retroactive forgetting—from trained models without causing catastrophic architectural collapse or significant performance degradation?
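At the pipeline level, the lineage question has a simple sketch; the far harder problem of removing a source's influence from already-trained model weights (machine unlearning) is deliberately not addressed here. Source names and transformations are hypothetical:

```python
def with_lineage(records, source):
    """Tag raw records with their origin so provenance survives every stage."""
    return [{"value": r, "lineage": [source]} for r in records]

def transform(records, stage, fn):
    """Apply a pipeline stage while appending its name to each record's lineage."""
    return [{"value": fn(r["value"]), "lineage": r["lineage"] + [stage]} for r in records]

def forget(records, source):
    """Retroactive removal at the pipeline level: drop every record whose
    lineage includes the named source."""
    return [r for r in records if source not in r["lineage"]]

surveys = with_lineage([1, 2], "survey_2024")   # consented source (hypothetical)
scraped = with_lineage([3], "web_scrape")       # contested source (hypothetical)
data = transform(surveys + scraped, "normalize", lambda x: x * 2)
clean = forget(data, "web_scrape")              # only survey-derived records remain
```

If lineage of this kind were preserved up to the training boundary, the research question reduces to whether the model itself can honor a `forget` call, which is exactly the retroactive-forgetting problem posed above.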

Legal research

Legal scholars and practitioners must grapple with critical questions to translate data sovereignty into effective regulation:

  • What are the most effective legal mechanisms for codifying the principles of data sovereignty into AI regulation at both national and international levels, considering the diverse legal traditions and jurisdictional complexities 16?
  • At the intersection of intellectual property law, data protection regulations, and emerging AI-specific legislation, where do these legal domains converge, and where do they create potential conflicts that need to be resolved to facilitate data sovereignty in AI 18?

Economic modeling

Economic research is needed to explore sustainable models for a data sovereignty-centric AI ecosystem:

  • What innovative incentive mechanisms can be designed and implemented to foster the creation and growth of markets for ethically sourced and certified data, making it economically viable for organizations to prioritize data governance over unchecked data extraction 121?
  • Can the concept of ethical data provenance be effectively priced, taxed, or integrated into tokenized data economies to incentivize responsible data practices 94?

Interdisciplinary ethics

Ethical considerations must be at the forefront of this research agenda:

  • In the context of AI systems that increasingly generalize beyond their original intended use cases, what constitutes truly valid and ongoing consent for data utilization, and how can this be practically implemented and enforced 47?
  • How do different cultures and societies around the world conceive of the concept of sovereignty, particularly in the digital realm, and how should AI governance frameworks be designed to respect and accommodate these diverse understandings 113?

These are not merely theoretical luxuries. They are governance imperatives.

VII. Toward a new social contract

Ultimately, this article’s thesis is not technical—it is philosophical.

Data is not inert. It is political, relational, and alive with rights 16.

To govern AI is not merely to discipline outputs, but to recognize the dignity of inputs. The true subject of AI governance is not the model—it is the human being whose digital traces animate its capabilities 84. This is not a call for more transparency. It is a call for a new social contract—between those who build AI and those whose knowledge, identity, and agency fuel it 86. Until governance begins at the point of ingestion, every ethical safeguard is downstream of the original injustice.

Works cited

  1. Data-Centric AI Governance: Addressing the Limitations of Model …, accessed March 25, 2025, https://openreview.net/forum?id=iuqprf3GuR
  2. AI governance versus model management: What’s the difference? – Collibra, accessed March 25, 2025, https://www.collibra.com/blog/ai-governance-versus-model-management-whats-the-difference
  3. Best Practices for Smart Contract Testing & how to – Metana, accessed March 25, 2025, https://metana.io/blog/best-practices-for-smart-contract-testing-how-to/
  4. 77+ Smart Contract Use Cases Enabled by Chainlink, accessed March 25, 2025, https://blog.chain.link/smart-contract-use-cases/
  5. Combating Bias in AI | 10x – Solutions for Better Civic Tech, accessed March 25, 2025, https://10x.gsa.gov/news/combating-bias-ai/
  6. Shortcomings in Model-Centric Systems | Download Scientific Diagram – ResearchGate, accessed March 25, 2025, https://www.researchgate.net/figure/Shortcomings-in-Model-Centric-Systems_fig2_378449783
  7. What AI Governance tools can’t do – limitations of AI Governance software – 10 Senses, accessed March 25, 2025, https://10senses.com/blog/what-ai-governance-tools-cant-do-limitations-of-ai-governance-software/
  8. What are the limitations of AI in linguistic analysis, and how can they be addressed? | ResearchGate, accessed March 25, 2025, https://www.researchgate.net/post/What_are_the_limitations_of_AI_in_linguistic_analysis_and_how_can_they_be_addressed
  9. What is AI Governance? | IBM, accessed March 25, 2025, https://www.ibm.com/think/topics/ai-governance
  10. Introducing the AI Governance and Regulatory Archive (AGORA): An Analytic Infrastructure for Navigating the Emerging AI Governan – AAAI Publications, accessed March 25, 2025, https://ojs.aaai.org/index.php/AIES/article/download/31615/33782/35679
  11. Advanced AI governance: a literature review of problems, options, and proposals, accessed March 25, 2025, https://law-ai.org/advanced-ai-gov-litrev/
  12. What Is Data Sovereignty? | Digital Realty, accessed March 25, 2025, https://www.digitalrealty.com/resources/articles/what-is-data-sovereignty
  13. Amplifying AI Use Cases Through Data Sovereignty: A Strategic Approach to Global Innovation – insideAI News, accessed March 25, 2025, https://insideainews.com/2024/09/24/amplifying-ai-use-cases-through-data-sovereignty-a-strategic-approach-to-global-innovation/
  14. What is data sovereignty? – Cloudflare, accessed March 25, 2025, https://www.cloudflare.com/learning/privacy/what-is-data-sovereignty/
  15. What is data sovereignty? | IBM, accessed March 25, 2025, https://www.ibm.com/think/topics/data-sovereignty
  16. Data Rights and Data Sovereignty in a Connected, AI-Driven World – TecEx, accessed March 25, 2025, https://tecex.com/data-rights-and-data-sovereignty/
  17. Overview of data sovereignty laws by country – InCountry, accessed March 25, 2025, https://incountry.com/blog/overview-of-data-sovereignty-laws-by-country/
  18. Data Sovereignty in the AI Era – insideAI News, accessed March 25, 2025, https://insideainews.com/2024/08/27/data-sovereignty-in-the-ai-era/
  19. Model AI Governance Framework – BSA Artificial Intelligence, accessed March 25, 2025, https://ai.bsa.org/wp-content/uploads/2019/09/Model-AI-Framework-First-Edition.pdf
  20. What Is AI Governance? – Palo Alto Networks, accessed March 25, 2025, https://www.paloaltonetworks.com/cyberpedia/ai-governance
  21. Model-Centric AI Benefits for Enterprises in 2025 – XenonStack, accessed March 25, 2025, https://www.xenonstack.com/blog/model-centric-ai-benefits
  22. Solving the data- vs. model-centric AI Governance debate | Collibra, accessed March 25, 2025, https://www.collibra.com/blog/ai-governance-solving-the-data-centric-versus-model-centric-debate
  23. Data-Centric AI Vs. Model-Centric AI – Everything You Need Know …, accessed March 25, 2025, https://www.artiba.org/blog/data-centric-ai-vs-model-centric-ai-everything-you-need-know
  24. Understanding The Limitations Of AI (Artificial Intelligence) | by Mark Levis | Medium, accessed March 25, 2025, https://medium.com/@marklevisebook/understanding-the-limitations-of-ai-artificial-intelligence-a264c1e0b8ab
  25. What artificial intelligence can’t do | Understanding the limitations of AI – Lumenalta, accessed March 25, 2025, https://lumenalta.com/insights/ai-limitations-what-artificial-intelligence-can-t-do
  26. Ensuring Data Sovereignty with AI Governance Solutions – EDB, accessed March 25, 2025, https://www.enterprisedb.com/sovereign-ai-data-management-and-governance
  27. From Compliance to Competitive Edge: Why Data Sovereignty Is the New Business Imperative, accessed March 25, 2025, https://www.datadynamicsinc.com/blog-from-compliance-to-competitive-edge-why-data-sovereignty-is-the-new-business-imperative/
  28. AI Era Challenges: The Role of Data Sovereignty | P&C Global, accessed March 25, 2025, https://www.pandcglobal.com/research-insights/the-strategic-imperative-of-data-sovereignty-in-the-ai-era/
  29. Using ‘Ethically Sourced’ Social Media Data to Study Health Marketing Algorithms | NORC at the University of Chicago, accessed March 25, 2025, https://www.norc.org/research/library/using-ethically-sourced-social-media-data-study-health-marketing-algorithms.html
  30. What It Means to Have Ethically Sourced Data – Multifamily Blogs, accessed March 25, 2025, https://www.multifamilyinsiders.com/multifamily-blogs/what-it-means-to-have-ethically-sourced-data
  31. Bringing transparency to the data used to train artificial intelligence – MIT Sloan, accessed March 25, 2025, https://mitsloan.mit.edu/ideas-made-to-matter/bringing-transparency-to-data-used-to-train-artificial-intelligence
  32. Work – The Data & Trust Alliance, accessed March 25, 2025, https://dataandtrustalliance.org/work/data-provenance-standards
  33. US state-by-state AI legislation snapshot | BCLP, accessed March 25, 2025, https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html
  34. Summary Artificial Intelligence 2024 Legislation – National Conference of State Legislatures, accessed March 25, 2025, https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation
  35. IEEE CertifAIEd – IEEE SA, accessed March 25, 2025, https://standards.ieee.org/products-programs/icap/ieee-certifaied/
  36. Trustworthy AI Framework Training & Certification – Cognilytica Courses, accessed March 25, 2025, https://courses.cognilytica.com/courses/trustworthy-ai-framework-training-certification/
  37. AI Ethics Certification – IEEE CertifAIEd, accessed March 25, 2025, https://engagestandards.ieee.org/ieeecertifaied.html
  38. What is Smart Contract Storage Layout? – Alchemy Docs, accessed March 25, 2025, https://docs.alchemy.com/docs/smart-contract-storage-layout
  39. Canister smart contracts | Internet Computer, accessed March 25, 2025, https://internetcomputer.org/how-it-works/canister-lifecycle
  40. Smart Contract Portability Across Chains – t3rn, accessed March 25, 2025, https://www.t3rn.io/blog/smart-contract-portability
  41. Data Capsules – LayerAI, accessed March 25, 2025, https://www.layerai.org/data-capsules
  42. Overcoming Common Data Governance Challenges – CTG, accessed March 25, 2025, https://www.ctg.com/blogs/overcoming-common-data-governance-challenges
  43. Main Data Governance Challenges & Ways for Solving Them | Intellectsoft, accessed March 25, 2025, https://www.intellectsoft.net/blog/main-data-governance-challenges-and-ways-of-solving-them/
  44. 10 Data Governance Challenges & How to Address Them in 2025 – Atlan, accessed March 25, 2025, https://atlan.com/data-governance-challenges/
  45. 7 Data Governance Challenges & How to Beat Them | Immuta, accessed March 25, 2025, https://www.immuta.com/guides/data-security-101/data-governance-challenges/
  46. 6 Data Governance Challenges and their Solutions – Semarchy, accessed March 25, 2025, https://semarchy.com/blog/data-governance-challenges/
  47. The Impact of AI on Consent Management Practices: The Ultimate Guide – Secure Privacy, accessed March 25, 2025, https://secureprivacy.ai/blog/ai-consent-management
  48. The case for consent in the AI data gold rush – Brookings Institution, accessed March 25, 2025, https://www.brookings.edu/articles/the-case-for-consent-in-the-ai-data-gold-rush/
  49. Dynamic Consent Management: Leveraging Automation for GDPR and CCPA Compliance | by Akitra | Medium, accessed March 25, 2025, https://medium.com/@akitrablog/dynamic-consent-management-leveraging-automation-for-gdpr-and-ccpa-compliance-1f5ae77e13fd
  50. Enhancing Data Protection in Dynamic Consent Management Systems: Formalizing Privacy and Security Definitions with Differential Privacy, Decentralization, and Zero-Knowledge Proofs – PubMed Central, accessed March 25, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10490780/
  51. Dynamic Consent: A New GDPR Standard for Clinical Trials, accessed March 25, 2025, https://www.clinicaltrialvanguard.com/article/dynamic-consent-a-new-gdpr-standard-for-clinical-trials/
  52. The Definitive Guide to Consent Management – Privado.ai, accessed March 25, 2025, https://www.privado.ai/post/the-definitive-guide-to-consent-management
  53. Data Ethics in AI: 6 Key Principles for Responsible Machine Learning – Alation, accessed March 25, 2025, https://www.alation.com/blog/data-ethics-in-ai-6-key-principles-for-responsible-machine-learning/
  54. Good digital public infrastructure relies on effective consent mechanisms. Here’s how they work., accessed March 25, 2025, https://dial.global/effective-consent-within-dpi/
  55. 10 AI Consent Management Best Practices 2024 – Dialzara, geopend op maart 25, 2025, https://dialzara.com/blog/10-ai-consent-management-best-practices-2024/
  56. Consent Management Challenges & How to Overcome Them – PossibleNOW, geopend op maart 25, 2025, https://www.possiblenow.com/resources/consent-management-platform/consent-management-challenges-how-to-overcome-them/
  57. Text – S.3975 – 118th Congress (2023-2024): AI CONSENT Act, geopend op maart 25, 2025, https://www.congress.gov/bill/118th-congress/senate-bill/3975/text
  58. Building realtime infrastructure: Costs and challenges, geopend op maart 25, 2025, https://ably.com/blog/building-realtime-infrastructure-costs-and-challenges
  59. Challenges for Real-Time Consent Management – 4Comply, geopend op maart 25, 2025, https://4comply.io/articles/challenges-for-real-time-consent-management/
  60. Grid Operations Challenges: Solving the Complexities of Public Infrastructure Management, geopend op maart 25, 2025, https://tryve.eu/grid-operations-challenges-solving-the-complexities-of-public-infrastructure-management/
  61. Consent Management Challenges in IoT Devices – Secure Privacy, geopend op maart 25, 2025, https://secureprivacy.ai/blog/iot-consent-management
  62. Considerations for addressing bias in artificial intelligence for health equity – PMC, geopend op maart 25, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10497548/
  63. NEGLIGENCE AND AI’S HUMAN USERS – Boston University, geopend op maart 25, 2025, https://www.bu.edu/bulawreview/files/2020/09/SELBST.pdf
  64. Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review – PMC, geopend op maart 25, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10711067/
  65. Report on Artificial Intelligence and Civil Liability – British Columbia Law Institute, geopend op maart 25, 2025, https://www.bcli.org/wp-content/uploads/Report-AI-and-civil-liability-final.pdf
  66. The Impact of Unrepresentative Data on AI Model Biases – Anolytics, geopend op maart 25, 2025, https://www.anolytics.ai/blog/the-impact-of-unrepresentative-data-on-ai-model-biases/
  67. Bias in AI – Chapman University, geopend op maart 25, 2025, https://www.chapman.edu/ai/bias-in-ai.aspx
  68. Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies – MDPI, geopend op maart 25, 2025, https://www.mdpi.com/2413-4155/6/1/3
  69. Ethical Use of Training Data: Ensuring Fairness & Data Protection in AI – Lamarr Institute, geopend op maart 25, 2025, https://lamarr-institute.org/blog/ai-training-data-bias/
  70. When AI Gets It Wrong: Addressing AI Hallucinations and Bias, geopend op maart 25, 2025, https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
  71. 7 Reasons For Bias In AI and What To Do About It – insideAI News, geopend op maart 25, 2025, https://insideainews.com/2022/02/09/7-reasons-for-bias-in-ai-and-what-to-do-about-it/
  72. What is Data Bias? – IBM, geopend op maart 25, 2025, https://www.ibm.com/think/topics/data-bias
  73. What is AI bias? Causes, effects, and mitigation strategies – SAP, geopend op maart 25, 2025, https://www.sap.com/resources/what-is-ai-bias
  74. Addressing bias in generative AI starts with training data explainability – RWS, geopend op maart 25, 2025, https://www.rws.com/artificial-intelligence/train-ai-data-services/blog/address-bias-with-generative-ai-data-explainability/
  75. Confronting the Mirror: Reflecting on Our Biases Through AI in Health Care, geopend op maart 25, 2025, https://postgraduateeducation.hms.harvard.edu/trends-medicine/confronting-mirror-reflecting-our-biases-through-ai-health-care
  76. Epistemic Injustice in Generative AI – arXiv, geopend op maart 25, 2025, https://arxiv.org/html/2408.11441v1
  77. Reflection on AI and Human Bias in Research – Qeludra Blog, geopend op maart 25, 2025, https://qeludra.com/blog/ai-bias-qualitative-research
  78. A Sketch of AI-Driven Epistemic Lock-In – Effective Altruism Forum, geopend op maart 25, 2025, https://forum.effectivealtruism.org/posts/G7KnxZ3Jpuy6hQ4Ew/a-sketch-of-ai-driven-epistemic-lock-in
  79. Algorithmic emergence? Epistemic in/justice in AI-directed transformations of healthcare – PMC – PubMed Central, geopend op maart 25, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11843219/
  80. AI Bias: 8 Shocking Examples and How to Avoid Them – Prolific, geopend op maart 25, 2025, https://www.prolific.com/resources/shocking-ai-bias
  81. AI Bias – What Is It and How to Avoid It?, geopend op maart 25, 2025, https://levity.ai/blog/ai-bias-how-to-avoid
  82. Digital Sovereignty: Protect Your Data in the AI Boom | Salesforce, geopend op maart 25, 2025, https://www.salesforce.com/blog/digital-sovereignty/
  83. What is digital sovereignty and how are countries approaching it? | World Economic Forum, geopend op maart 25, 2025, https://www.weforum.org/stories/2025/01/europe-digital-sovereignty/
  84. How personal data sovereignty could save us from AI’s darkest risks – Diplomatic Courier, geopend op maart 25, 2025, https://www.diplomaticourier.com/posts/personal-data-sovereignty-save-us-from-ais-darkest-risks
  85. Navigating Data Governance and Sovereignty in the Age of AI, geopend op maart 25, 2025, https://seerdata.ai/data-governance-and-sovereignty-in-the-age-of-ai/
  86. The Only Ethical Model for AI is Socialism – Current Affairs, geopend op maart 25, 2025, https://www.currentaffairs.org/news/the-only-ethical-model-for-ai-is-socialism
  87. The economy and ethics of AI training data – Marketplace.org, geopend op maart 25, 2025, https://www.marketplace.org/2024/01/31/the-economy-and-ethics-of-ai-training-data/
  88. How does data become powerful? Definition of ethical data market – SocietyByte, geopend op maart 25, 2025, https://www.societybyte.swiss/en/2024/02/14/how-does-data-become-powerful-definition-of-ethical-data-market/
  89. Data in Advertising: What It Means to Be an Ethical Marketer – IAB Tech Lab, geopend op maart 25, 2025, https://iabtechlab.com/data-in-advertising-what-it-means-to-be-an-ethical-marketer/
  90. Tokenization vs. Encryption: Best Practices for Protecting Your Data | McAfee, geopend op maart 25, 2025, https://www.mcafee.com/learn/tokenization-vs-encryption/
  91. What is Data Tokenization and How Does It Differ from Encryption? – Pragmatic Coders, geopend op maart 25, 2025, https://www.pragmaticcoders.com/blog/what-is-data-tokenization
  92. What Is Data Tokenization? Key Concepts and Benefits, geopend op maart 25, 2025, https://www.digitalguardian.com/blog/what-data-tokenization-key-concepts-and-benefits
  93. Data Tokenization Best Practices: A Guide to Protect Sensitive Data – Fortanix, geopend op maart 25, 2025, https://www.fortanix.com/blog/data-tokenization-best-practices-guide
  94. Data Sovereignty: Definition, Requirements and How to Ensure It, geopend op maart 25, 2025, https://www.spanning.com/blog/data-sovereignty/
  95. Sovereign Intelligence – Pricing, Reviews, Data & APIs – Datarade, geopend op maart 25, 2025, https://datarade.ai/data-providers/sovereign-intelligence/profile
  96. Sovereign Cloud Solutions | Google Cloud, geopend op maart 25, 2025, https://cloud.google.com/sovereign-cloud
  97. Data privacy and AI: ethical considerations and best practices – TrustCommunity, geopend op maart 25, 2025, https://community.trustcloud.ai/docs/grc-launchpad/grc-101/governance/data-privacy-and-ai-ethical-considerations-and-best-practices/
  98. Ethical Implications of the Use of Legal Technologies by Innovative M&A Lawyers, including Special Considerations for Use of AI in M&A Transactions – American Bar Association, geopend op maart 25, 2025, https://www.americanbar.org/groups/business_law/resources/business-law-today/2025-january/ethical-implications-use-legal-technologies-innovative-m-a-lawyers/
  99. AI Ethics: What It Is, Why It Matters, and More – Coursera, geopend op maart 25, 2025, https://www.coursera.org/articles/ai-ethics
  100. Unlocking the value of AI ethics – IBM, geopend op maart 25, 2025, https://www.ibm.com/thought-leadership/institute-business-value/blog/value-ai-ethics
  101. Ethics of Artificial Intelligence | UNESCO, geopend op maart 25, 2025, https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
  102. Responsible AI: Maximizing Value through AI Transparency – Slalom Consulting, geopend op maart 25, 2025, https://www.slalom.com/us/en/insights/responsible-ai-value
  103. What is AI Ethics? | IBM, geopend op maart 25, 2025, https://www.ibm.com/think/topics/ai-ethics
  104. Ethical considerations of AI: Fairness, transparency, and frameworks | Future of responsible AI | Lumenalta, geopend op maart 25, 2025, https://lumenalta.com/insights/ethical-considerations-of-ai
  105. The ethical dilemmas of AI | USC Annenberg School for Communication and Journalism, geopend op maart 25, 2025, https://annenberg.usc.edu/research/center-public-relations/usc-annenberg-relevance-report/ethical-dilemmas-ai
  106. Data Ethics & Responsible AI – EDM Council, geopend op maart 25, 2025, https://edmcouncil.org/training/data-ethics-responsibility/
  107. The Ethical Path to AI: Navigating Strategies for Innovation and Integrity | Emory University, geopend op maart 25, 2025, https://ece.emory.edu/areas-of-study/technology/ethics-ai.php
  108. AI and Data Governance: A Symbiotic Relationship – Semarchy, geopend op maart 25, 2025, https://semarchy.com/blog/ai-data-governance/
  109. Navigating AI regulations in North America: balancing innovation and data sovereignty, geopend op maart 25, 2025, https://gcore.com/blog/ai-regulations-2024-north-america/
  110. Data Sovereignty vs. Data Residency: 3 Key Differences – Oracle, geopend op maart 25, 2025, https://www.oracle.com/security/saas-security/data-sovereignty/data-sovereignty-data-residency/
  111. Data Sovereignty vs. Data Residency – IBM, geopend op maart 25, 2025, https://www.ibm.com/think/topics/data-sovereignty-vs-data-residency
  112. The AI Security Dilemma: Protecting Data, Sovereignty and Internal Use – Techstrong.ai, geopend op maart 25, 2025, https://techstrong.ai/articles/the-ai-security-dilemma-protecting-data-sovereignty-and-internal-use/
  113. Sovereign AI: meaning, advantages, and challenges – InCountry, geopend op maart 25, 2025, https://incountry.com/blog/sovereign-ai-meaning-advantages-and-challenges/
  114. Rethinking Data Sovereignty: From Regulating to Facilitating Utilisation – Taiwan Insight, geopend op maart 25, 2025, https://taiwaninsight.org/2024/10/09/rethinking-data-sovereignty-from-regulating-to-facilitating-utilisation/
  115. Do Existing Laws Apply to AI? The AI Applications Most at Risk – Holistic AI, geopend op maart 25, 2025, https://www.holisticai.com/blog/existing-laws-apply-ai
  116. How Is AI Regulated? Examples, Benefits, & Drawbacks | Britannica Money, geopend op maart 25, 2025, https://www.britannica.com/money/ai-rules-and-regulations
  117. Sovereign AI: Defining the Future of National Digital Security | Macquarie Data Centres, geopend op maart 25, 2025, https://macquariedatacentres.com/blog/sovereign-ai-defining-the-future-of-national-digital-security/
  118. Demystifying data sovereignty – Kearney, geopend op maart 25, 2025, https://www.kearney.com/service/digital-analytics/article/demystifying-data-sovereignty
  119. Securing Intellectual Property: Data Privacy and AI. – Evalueserve, geopend op maart 25, 2025, https://www.evalueserve.com/blog/data-privacy-and-ai-securing-intellectual-property-in-the-modern-digital-landscape/
  120. Data Sovereignty in AI: Five Key Considerations – Compare the Cloud, geopend op maart 25, 2025, https://www.comparethecloud.net/articles/data-sovereignty-in-ai-five-key-considerations/
  121. Considerations regarding Sovereign AI and National AI Policy, geopend op maart 25, 2025, https://sovereign-ai.org/media/papers/Considerations_regarding_Sovereign_AI_C_Sovereign_AI__Imperial_College.pdf
  122. Sovereign AI in a Hybrid World: National Strategies and Policy Responses – Lawfare, geopend op maart 25, 2025, https://www.lawfaremedia.org/article/sovereign-ai-in-a-hybrid-world–national-strategies-and-policy-responses
  123. Sovereign AI – Zadara, geopend op maart 25, 2025, https://www.zadara.com/sovereign-ai/
  124. The Essential Guide to Quality Training Data for Machine Learning – CloudFactory, geopend op maart 25, 2025, https://www.cloudfactory.com/training-data-guide
  125. Session 3: Public-Private Partnership Innovation Model from Data Sovereignty Perspective, geopend op maart 25, 2025, https://aifod.org/event_agendas/public-private-partnership-innovation-model-from-data-sovereignty-perspective/
  126. Pricing Data — Bonds – S&P Global, geopend op maart 25, 2025, https://cdn.ihsmarkit.com/www/pdf/1023/MI_DVA_2863701_Pricing-Data-Bonds-Factsheet_210x275mm_FD.pdf
  127. Corporate and Sovereign Bond Pricing Data – S&P Global, geopend op maart 25, 2025, https://www.spglobal.com/market-intelligence/en/solutions/products/pricing-data-bonds-corporate-sovereign
  128. Sovereign AI: What it is, and 6 ways states are building it | World Economic Forum, geopend op maart 25, 2025, https://www.weforum.org/stories/2024/04/sovereign-ai-what-is-ways-states-building/
