This interactive explainer introduces the concept of AI-generated deepfake images and provides clues to help the user understand how and why they are created.
The first three modules of KT4D’s Social Risk Toolkit therefore focus on the individual aspects of this multifaceted challenge.
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society.
We adopt a systematic approach to map the entanglement between past and present knowledge technologies and culture. Unlike many contemporary discussions that focus on specific issues or technological applications (such as deepfakes or photo manipulation), we map the entirety of past and present knowledge technologies to identify trends, general divergences, and similarities.
This section analyses how different knowledge technologies impact people’s attention and, consequently, their decisions regarding which information is worth storing and remembering, and which is instead forgotten or not even registered in the first place.
This section examines how people develop trust – or distrust – in knowledge technologies, considering three main aspects.