Beyond the Project: KT4D Results to Advance Human-Centric, Democratic Governance of AI and Big Data

KT4D results

As the Knowledge Technologies for Democracy (KT4D) project nears its close in January 2026, it caps three years of work at the nexus of AI, Big Data, and democracy. Highlights include the launch of the KT4D Toolkit, a comprehensive catalogue of project results; the final policy brief on trust in AI-mediated societies; and a high-impact joint event with the AI, Big Data & Democracy Task Force showcasing how collaborative research can strengthen democratic processes across Europe.

Major Success at "Beyond the Algorithm" Brussels Event

The culminating event, "Beyond the Algorithm: From Research to Action for Democratic Renewal," took place on December 4th, 2025 in Brussels and brought together results from four Horizon Europe projects: KT4D, AI4GOV, ITHACA, and ORBIS. The conference examined the relationship between artificial intelligence and democratic resilience. The event was organised by the AI, Big Data & Democracy Task Force in cooperation with the European Alliance for Social Sciences and Humanities. The Task Force, originally initiated by these four projects, has since expanded to eight Horizon Europe projects, strengthening collaboration and alignment among initiatives addressing democracy, AI, and data governance. KT4D was explicitly acknowledged by partner projects, Project Officers, and participants for its leading role in the scientific, technical, and organisational coordination of the conference.


Discussions in Brussels highlighted a shared understanding of the tensions between technological optimisation and democratic processes. KT4D introduced the concept of "Democracy in the Loop," emphasising the need to design AI systems that preserve deliberation, disagreement, and human judgment as core democratic elements. Rather than minimising friction, the framework argues for embedding it intentionally to ensure that technology adapts to democratic values. Trust and governance emerged as another central theme, with speakers stressing that trust in AI cannot rely solely on technical compliance but requires robust governance mechanisms that protect vulnerable and marginalised groups. The event also underlined the importance of critical digital literacy, moving beyond technical skills towards citizens' capacity to assess the societal and democratic implications of digital tools.

SLIDES 


Final Policy Brief Addresses Trust in AI-Mediated Societies

The project's final policy brief, "The Evolving Role of Trust in AI-Mediated Societies," addresses how artificial intelligence is fundamentally redefining trust in democratic institutions as it increasingly mediates information exchange and influences decision-making processes. The brief presents eight recommendations grouped across institutional, organisational, educational, and governance dimensions. At the institutional level, it calls for public bodies to ensure traceability and authenticity of information through authenticated digital signatures and origin labels. Organisations deploying AI systems are urged to establish and communicate shared rules for AI use to maintain interpersonal and institutional trust. Educational recommendations emphasise contextualised critical digital literacy programmes that reflect local languages, social norms, and cultural values.

The policy brief highlights critical governance challenges, noting that while the European Union has addressed the effects of artificial intelligence through regulations such as the GDPR, Digital Services Act, and the AI Act, recent focus has shifted toward innovation and deregulation. This creates tension between upholding democratic processes and promoting rapid deployment of AI. The brief draws on data showing that attitudes toward AI and institutional trust are culturally mediated, with a clear trust gap between the Global North and South. Citizens in Western democracies generally show low and declining trust in AI, while many countries in the Global South exhibit high and stable trust, suggesting that trust is shaped by broader structural, social, cultural, and governance factors.

DOWNLOAD THE POLICY BRIEF 

KT4D Toolkit Now Available to All Stakeholders

The newly launched KT4D Toolkit consolidates all project results in one accessible platform, equipping stakeholders with knowledge, methods, and tools for democratic resilience in the AI era. It includes the Social Risk Toolkit, whose eight thematic modules offer comprehensive research insights into key socio-cultural issues at the intersection of AI, big data, and democracy. The toolkit addresses a critical gap: it brings cultural dimensions such as beliefs, languages, norms, creativity, history, and collective sense of self into discussions typically dominated by technological and industrial perspectives, complementing social science insights from sociology, policy, and psychology.

EXPLORE THE TOOLKIT 

Digital Democracy Lab Introduces Innovative Approach to Participatory AI

Among the project's key exploitable results is the Digital Democracy Lab Handbook and Demonstrator, which provides a structured framework for exploring the potential of AI and big data to support democratic discourse and civic engagement. The Demonstrator is an AI-assisted pilot system designed to support deliberation within mini-publics on complex topics that would typically be difficult to address in such democratic settings. The Handbook documents foundational principles of participatory algorithmic accountability and presents actionable principles grounded in design justice, introducing a Democracy-in-the-Loop approach to AI design and development. These components were extensively tested during three iterations of the Digital Democracy Lab held across the project's four use cases.

ACCESS THE DEMONSTRATOR 

Real-World Validation Across Four European Capital Cities

The four use cases conducted in Brussels, Madrid, Warsaw, and Dublin provided essential validation and refinement of project outputs across different stakeholder groups. Use Case 1 in Brussels engaged policymakers and policy-facing civil society organisations to develop a governance framework and policy roadmap for democratic AI regulation. Through three meetings, including expert roundtables and a Delphi study with twenty experts in AI governance, participants identified critical gaps in current EU approaches and co-created policy recommendations. Results highlighted the need for addressing mass manipulation risks, concentration of power in few AI companies, enforcement challenges, and compatibility with collective bargaining models. The framework emphasises an infrastructural view on democratic AI governance, broadening the discussion from regulation to other policy instruments such as funding and investments in digital public infrastructure.

Use Cases 2 and 3 in Madrid and Warsaw focused on citizens and citizen-facing organisations, exploring perceptions of knowledge technologies and developing educational materials and games to enhance critical digital literacy. The Madrid use case involved diverse groups, including adult citizens, secondary school students and teachers, and a civil society organisation working with migrant women, testing both the serious game and interactive explainers on deepfakes and algorithms. The Warsaw use case brought together university students for intensive workshop sessions, working collectively through the educational tools. Results from both locations emphasised the importance of contextualised approaches, with participants noting that materials must be adapted for different age groups, digital literacy levels, and cultural contexts. The collective discussion format proved particularly valuable, with participants appreciating the opportunity to debate ethical dilemmas and see the consequences of choices in the game-based scenarios.

Use Case 4 in Dublin engaged software developers and technology professionals to develop and validate the Social Computing Compass, a researcher self-assessment tool for design justice. The final validation workshop, conducted with Huawei Ireland Research Centre, tested an interactive digital narrative called "Red Team Mission: The Incident at Pine Valley High." This gamified tool places participants in a crisis scenario where they must identify cultural blind spots in an AI content moderation system that failed due to lack of cultural awareness. The workshop successfully demonstrated how narrative-based approaches can be more effective than traditional checklist methods for raising awareness of cultural complexity in software development. Participants particularly valued the red teaming approach, which positioned them as external investigators rather than directly responsible developers, allowing for more objective analysis.

Across all use cases, several key themes emerged regarding effective approaches to AI ethics and democratic governance. The validation exercises consistently demonstrated the value of experiential learning through games and simulations rather than abstract principles or compliance checklists. Participants appreciated tools that presented nuanced ethical dilemmas without simple right or wrong answers, reflecting the genuine complexity of real-world situations. The importance of cultural context was repeatedly emphasised, with participants recognising that ethical AI development requires awareness of local languages, social norms, historical traumas, and cultural values that may not be immediately visible to developers working in different cultural contexts. The concept of meaningful friction emerged as particularly valuable, with carefully designed disruptions in technology interfaces serving as catalysts for democratic engagement and critical reflection.

Delivering Practical Tools for Democratic AI Governance

The project has produced several other key exploitable results that will continue to provide value beyond its conclusion. The narrative-based simulation game and two interactive explainers target citizens and citizen-facing organisations, enhancing critical digital literacy through engaging, scenario-based learning. These tools have been extensively validated through the use case meetings and refined based on participant feedback on usability, clarity of content, and pedagogical effectiveness. The governance framework, policy roadmap, and recommendations give policymakers practical guidance on how the EU should regulate the disruptive nature of general-purpose AI systems in service of democratic values, drawing on a literature review, use case findings, and Delphi study results to address both risks, such as algorithmic bias and disinformation, and the positive potential of AI to reinforce democratic practices.

Sustaining Collaboration Through the AI, Big Data & Democracy Task Force

As KT4D concludes in January 2026, the consortium partners are committed to continuing engagement through the AI, Big Data & Democracy Task Force, ensuring that the knowledge generated and networks established continue to inform European policy and practice. The project's emphasis on the cultural dimensions of technology and democracy, its Democracy-in-the-Loop framework, and its comprehensive toolkit provide lasting resources for policymakers, researchers, software developers, civil society actors, and citizens navigating the complex relationship between advanced knowledge technologies and democratic participation. The final event's success in bringing together multiple projects and stakeholders demonstrates the value of collaborative approaches to challenges that transcend individual disciplines and national contexts, setting a foundation for continued work in this critical domain.