Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs.
This interactive explainer introduces the concept of AI-generated deepfake images and provides clues to help the user understand how and why they are created.
The first three modules of KT4D's Social Risk Toolkit thus focus on the individual aspects of this challenge, and their aim is multifaceted.
A gamified tool designed to address the social and cultural implications of AI development.
We adopt a systematic approach to map the entanglement between past and present knowledge technologies and culture. Unlike many contemporary discussions that focus on specific issues or technological applications (such as deepfakes or photo manipulation), we map the entirety of past and present knowledge technologies to identify trends, general divergences, and similarities.
Module C of the Toolkit has two primary objectives: First, to understand AI and big data within the context of a long history of interactions between technological affordances and cultural norms, values, and practices. This recognises that knowledge technologies—such as written language, the printing press, television, radio, etc.—have shaped culture and knowledge production. The relationship between technology and culture is fundamentally mutual and reciprocal. Second, building upon the first objective, Module C focuses on the particular definition of AI and big data as advanced knowledge technologies (AKTs). We analyse the past in this module to better understand the present and—potentially—to anticipate what may lie ahead.