KT4D Releases Final Policy Brief: The Evolving Role of Trust in AI-Mediated Societies

As the KT4D project approaches its conclusion after three years of research, the consortium has released its second policy brief examining one of democracy's most fundamental challenges: how artificial intelligence is redefining trust between citizens, institutions, and technology itself. Published in December 2025, "The Evolving Role of Trust in AI-Mediated Societies" synthesises extensive research findings from across the project's lifespan, offering eight targeted policy recommendations to help safeguard democratic values in an increasingly AI-mediated world.

Trust as Democracy's Moving Target in the AI Age

The policy brief addresses a critical reality facing European democracies today. Trust, which has always served as a precondition for citizens to rely on institutions and participate in decision-making, is being fundamentally transformed as artificial intelligence increasingly mediates information exchange and influences public discourse. The growing likelihood that any piece of information, image, or video has been generated or transformed by AI creates profound uncertainty about how citizens can assess and relate to the information they encounter daily. This opacity about the origins of content, whether intentional or not, undermines citizens' ability to rely on institutions as trustworthy reference points.

Drawing on research conducted throughout the KT4D project, the brief demonstrates how trust challenges operate at multiple interconnected levels. These include trust between citizens and democratic institutions, trust across different cultural contexts where AI is deployed with varying social and political implications, and trust within organisations where AI systems are introduced without clear standards or shared understanding of acceptable use. The policy brief represents a major synthesis of the project's work on how advanced knowledge technologies affect cultural values, collective identity formation, and democratic participation.

Eight Recommendations Spanning Institutional, Educational, and Governance Dimensions

The policy brief presents eight concrete recommendations organised across institutional, organisational, educational, and governance dimensions, each designed to address specific trust challenges identified through the project's research. These recommendations are deliberately broad in scope, recognising that trust in AI-mediated societies cannot be addressed through narrow technical fixes but requires comprehensive approaches that engage with social, cultural, and democratic values.

At the institutional level, the brief calls for public bodies to ensure the traceability and authenticity of information through mechanisms such as authenticated digital signatures or institutional origin labels. These systems should include visible and easily understandable signs allowing citizens to recognise genuine institutional communication. As AI-generated content proliferates, citizens face mounting uncertainty about the origin and reliability of the information they encounter. Ensuring that content from public bodies can be clearly identified and traced back to legitimate sources becomes essential to maintaining trust and preventing manipulation of public discourse.
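
The brief leaves the choice of mechanism open, but the idea can be made concrete. Purely as an illustrative sketch, institutional origin labels of this kind can rest on standard public-key signatures: the institution signs its content with a private key, and anyone holding its published public key can check that the content is genuine and unaltered. The key handling, message, and printed labels below are hypothetical, not anything the brief prescribes.

```python
# Illustrative sketch only: origin authentication with Ed25519 signatures,
# using Python's "cryptography" library. Keys and message are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The institution generates a key pair once; the public key is published
# through a trusted channel such as its official website.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The institution signs a statement before releasing it.
statement = b"Official communication from the institution."
signature = private_key.sign(statement)

# A citizen, platform, or fact-checker verifies origin and integrity.
try:
    public_key.verify(signature, statement)
    print("Authenticated: content traceable to the institution.")
except InvalidSignature:
    print("Not authenticated: content may be altered or spoofed.")
```

The visible origin label a citizen would see is then simply a user-facing rendering of a successful verification; emerging provenance standards such as C2PA pursue the same goal for media content.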

Organisations deploying AI systems, whether in public administration, research, education, business, or civil society, should establish and communicate shared rules for AI use. The brief recommends that these guidelines be developed through internal discussion and informed by clear mapping of how AI is used across organisational activities, similar to how universities have developed guidelines for generative AI use. When organisations adopt AI tools without clear internal standards, uncertainty arises not only about the technology but also about human intentions, competencies, and responsibilities. Collaborators and citizens need to know when and how AI is expected to be used, as without this shared understanding, both interpersonal and institutional trust may be undermined.

Educational recommendations emphasise the development of contextualised critical digital literacy programmes within local institutions, including schools, libraries, and civil society organisations. These programmes should reflect the social, linguistic, and political specificities of each community while building critical consumption skills, analytical capabilities, and awareness of how AI systems operate. Digital and AI literacy cannot be universalised, the brief argues, but must reflect local languages, social norms, and cultural values. A culturally contextualised approach ensures that citizens understand and can critically engage with the digital tools that increasingly shape democratic participation.

Governance recommendations call for AI frameworks responsive to intercultural differences, utilising forums such as the UN Global Digital Compact, the Global Dialogue on AI Governance, and the Independent International Scientific Panel on AI to incorporate diverse intercultural perspectives. The brief emphasises that AI now affects countries and communities across the world, yet many remain unrepresented in international discussions. Intercultural dialogue proves crucial for understanding how different communities conceptualise trust, risk, and authority. Without this dialogue, AI governance risks being shaped by narrow cultural assumptions that may not translate across contexts.

The brief establishes the need for mandatory human oversight of all AI systems involved in critical decision-making, particularly in employment, education, healthcare, and legal contexts. It proposes a "Democracy-in-the-Loop" framework that genuinely enables the public and affected communities to shape how machine learning systems are developed and governed throughout their lifecycle. This approach addresses the power asymmetries that AI systems create or amplify by requiring impact assessments and providing resources for marginalised communities to participate in governance processes. While AI deployment in essential services may yield cost savings and efficiency gains, maintaining human oversight and accountability remains central to fostering public trust and confidence in AI-assisted decisions and, ultimately, in democratic institutions themselves.
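
The brief frames "Democracy-in-the-Loop" as a governance principle rather than a technical specification. As a minimal sketch of how the oversight requirement might be encoded in a decision pipeline, the routing logic below never auto-approves decisions in the critical domains the brief names; the domain list, confidence threshold, and all identifiers are illustrative assumptions rather than anything the brief prescribes.

```python
# Minimal sketch of mandatory human oversight for critical decisions.
# Domain list, threshold, and names are illustrative assumptions.
from dataclasses import dataclass

CRITICAL_DOMAINS = {"employment", "education", "healthcare", "legal"}

@dataclass
class ModelOutput:
    decision: str      # e.g. "shortlist_applicant"
    confidence: float  # model's self-reported confidence in [0, 1]
    domain: str        # application area of the decision

def route(output: ModelOutput) -> str:
    """Route an AI output: critical-domain decisions always go to a human;
    elsewhere, low-confidence outputs are escalated as well."""
    if output.domain in CRITICAL_DOMAINS:
        return "human_review"  # mandatory oversight, never auto-approved
    if output.confidence < 0.9:
        return "human_review"  # uncertainty also escalates to a person
    return "auto_approved"

# A hiring recommendation is routed to a human even at high confidence.
print(route(ModelOutput("shortlist_applicant", 0.97, "employment")))
# -> human_review
```

The essential property of such a gate is that the critical-domain check is unconditional: no confidence score, however high, can bypass human review.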

Online platforms require increased scrutiny to ensure epistemic responsibility in knowledge dissemination. AI-driven algorithms should promote accurate and verifiable information rather than misleading or inaccurate content. The brief calls for robust enforcement of the Digital Services Act and Digital Markets Act to steer the behaviour of large platforms that function as epistemic gatekeepers. As knowledge dissemination has increasingly moved online to search engines and social media platforms, those who control these platforms have become mediators of epistemic trust, with immense power to shape what knowledge is disseminated and how.

The brief also argues for a broader, infrastructural lens on democratic AI governance, applying the AI Act, Digital Services Act, Digital Markets Act, and European Democracy Shield in concert to tackle the concentration of power at AI's infrastructural layer. It calls for incentivising investment in digital public infrastructure to support a sovereign, democratically governed European AI ecosystem. Focusing merely on the deployment of AI systems overlooks the decisions made during their design and development. Democratic governance and trust are difficult to realise if the digital public sphere is owned by large private platforms and gatekeepers driven by profit.

Finally, the brief emphasises the need for sufficient capacities to enforce current AI regulation through investments in public sector implementation and enforcement capabilities. The European Union has established a strong digital rulebook to protect fundamental rights, but these rules only prove effective if properly enforced. The brief warns that haphazardly implemented deregulation risks undermining citizens' trust in these safeguards. Making robust enforcement of democratic safeguards a tangible competitive advantage that distinguishes the EU from unreliable, authoritarian countries requires providing sufficient resources to national authorities and the EU AI Office.

A Comprehensive Research Legacy for Democratic Resilience

This policy brief represents the culmination of extensive research conducted throughout the KT4D project's three-year lifespan. The project has framed AI and big data as advanced knowledge technologies, an approach that addresses the challenge of integrating cultural perspectives into the study of technology and democracy. By placing cultural values and identity formation at the heart of understanding how these technologies affect democratic processes, KT4D has provided distinctive insights into the trust challenges facing European societies.

The policy brief builds on the project's earlier work, including the first policy brief released in 2024, which examined European regulatory frameworks through the lens of culture and collective identity. That earlier brief identified structural blind spots in risk-based regulatory approaches, particularly the limited consideration of cultural risk, epistemic effects, and impacts on collective sense-making. Extensive feedback gathered through policy workshops, stakeholder consultations, and direct engagement with policymakers at national and European levels helped identify trust as a central theme requiring dedicated examination, leading to this second brief.

The recommendations are directed at policymakers and actors developing national AI strategies, as well as at European-level stakeholders responsible for AI regulation, particularly the AI Act. This strategic approach ensures that research outcomes reach the decision-makers positioned to implement recommendations within regulatory frameworks. As the KT4D project concludes, these policy briefs, along with the comprehensive KT4D Toolkit released earlier, represent a substantial legacy empowering stakeholders with knowledge, methods, and tools for democratic resilience in the AI era.

The policy brief is openly accessible via Zenodo under a Creative Commons Attribution licence, ensuring long-term availability and reuse by policymakers, researchers, civil society organisations, and citizens concerned with the future of democracy in increasingly AI-mediated societies.

Download the Policy Brief