Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs.
This document examines how AI-driven content curation and recommendation systems affect the quality of public deliberation.
Recipe series: the first recipe, describing DITL.
This recipe is about designing an entire democratic process, not just the AI tool within it. When AI is introduced into a deliberative setting, the surrounding process needs to change too: not just so that the AI works, but so that the democracy works.
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society.
This section considers how people’s autonomy and free will are hindered or supported by past and present KTs. By focusing on the structural level, we will examine systemic issues such as monopolies over KTs, data extraction and colonialism, labour, and political participation.