Filtered results: 11 results found
This interactive explainer introduces the concept of AI-generated deepfake images and provides clues to help the user understand how and why they are created.
The experimental component of Module A aims to further characterise internet users' behaviour when confronted with online choices that may undermine their autonomy: how people evaluate AI-generated information and/or content selected through AI-based algorithms, and how people are influenced by
This policy brief focuses on short-term action (2026-2028) around AI governance and provides practical guidelines for experts and policymakers. It introduces a framework that embeds democratic pillars — participation, freedom, equality, transparency, knowledge, and the rule of law — directly into the entire AI lifecycle.
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society.
The policy brief published by KT4D suggests that examining culture allows for a deeper understanding of societal responses to AI development.
The Recommendation Algorithms explainer demonstrates how algorithms work on social media platforms. It allows users to simulate their experience on a social media platform, where their choices shape a personalised feed.