Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs.
This document examines how AI-driven content curation and recommendation systems affect the quality of public deliberation.
This interactive explainer introduces the concept of AI-generated deepfake images and provides clues to help the user understand how and why they are created.
Companies exert significant influence over public discourse on online platforms; the algorithms that shape these platforms should therefore be regulated and constrained to give sufficient weight to the public interest (Susskind, 2018: 350).
The first three modules of KT4D's Social Risk Toolkit thus focus on the individual aspects of this challenge, and their aim is multifaceted.
The purpose of this document is to provide an overview of how AI, big data and frontier technologies impact rights from the data protection perspective.
The policy brief published by KT4D suggests that examining culture allows for a deeper understanding of societal responses to AI development.