Module B focuses on the risks AI poses for social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs. …).
This document examines how AI-driven content curation and recommendation systems affect the quality of public deliberation.
Recipe series: 1st, describing DITL
Designing tech with friction
The experimental component of Module A aims to further characterise internet users' behaviours when faced with online choices that can potentially undermine their autonomy: how people evaluate AI-generated information and/or content selected through AI-based algorithms, and how people are influenced by …
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society.