Module B focuses on the risks AI poses for social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs.
The experimental component of Module A aims to further characterise internet users' behaviours when faced with online choices potentially undermining their autonomy: how people evaluate AI-generated information and/or content selected through AI-based algorithms, and how people are influenced by
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society.
This document adopts a psychological and cognitive perspective on misinformation and disinformation, focusing on the interaction between cognitive biases, emotional motivations, social communication goals, and contemporary information environments.