Module B focuses on the risks AI poses for social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs. …).
This document examines how AI-driven content curation and recommendation systems affect the quality of public deliberation.
This interactive explainer introduces the concept of AI-generated deepfake images and provides clues to help the user understand how and why they are created.
The first three modules of KT4D’s Social Risk Toolkit therefore focus on the individual aspects of this challenge, and their aim is multifaceted.
We adopt a systematic approach to mapping the entanglement between past and present knowledge technologies and culture. Unlike many contemporary discussions that concentrate on specific issues or technological applications (such as deepfakes or photo manipulation), we survey the entirety of past and present knowledge technologies to identify trends, general divergences, and similarities.
This section considers how people’s autonomy and free will are hindered or supported by past and present KTs. By focusing on the structural level, we will examine systemic issues such as monopolies over KTs, data extraction and colonialism, labour, and political participation.
This section analyses how different knowledge technologies impact people’s attention and, consequently, their decisions regarding which information is worth storing and remembering, and which is instead forgotten or not even registered in the first place.