Filtered results (13 results found)
Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs. …).
This document examines how AI-driven content curation and recommendation systems affect the quality of public deliberation.
Designing tech with friction
This recipe is about designing an entire democratic process—not just the AI tool within it. When AI is introduced into a deliberative setting, the surrounding process needs to change too: not just to make the AI work, but to make sure the democracy works.
This policy brief focuses on short-term action (2026-2028) around AI governance and provides practical guidelines for experts and policymakers. It introduces a framework that embeds democratic pillars — participation, freedom, equality, transparency, knowledge, and the rule of law — directly into the entire AI lifecycle.
Companies exert significant influence over public discourse on online platforms; the algorithms that shape these platforms should therefore be regulated and constrained to give sufficient weight to the public interest (Susskind, 2018: 350).
This section analyses how different knowledge technologies impact people’s attention and, consequently, their decisions regarding which information is worth storing and remembering, and which is instead forgotten or not even registered in the first place.
This section examines how people develop trust – or distrust – in knowledge technologies, considering three main aspects.