Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs. …).
This interactive explainer introduces the concept of AI-generated deepfake images and provides clues to help the user understand how and why they are created.
The first three modules of KT4D's Social Risk Toolkit thus focus on the individual aspects of this challenge, and their aim is multifaceted.
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
A gamified tool designed to address the social and cultural implications of AI development.
The policy brief published by KT4D suggests that examining culture allows for a deeper understanding of societal responses to AI development.
The Recommendation Algorithms explainer demonstrates how algorithms work on social media platforms. It lets users simulate their experience on a social media platform, where their choices shape a personalised feed.