Module B focuses on the risks AI poses for social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs. …)
This document examines how AI-driven content curation and recommendation systems affect the quality of public deliberation.
The experimental component of Module A aims to further characterise internet users' behaviours when faced with online choices that potentially undermine their autonomy: how people evaluate AI-generated information and/or content selected through AI-based algorithms, and how people are influenced by …
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society.
The current EU approach to AI regulation faces several challenges and limitations that remain to be addressed.
Since our liberal democracies generally build forms of representation into their institutions, the impact of AI on free and fair elections is also one of the key ways in which technology affects our polities.