Filtered results: 6 results found
Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human productions differ in nature (e.g. creative vs. …).
This interactive explainer introduces the concept of AI-generated deepfake images and provides clues to help the user understand how and why they are created.
This document examines autonomy as a form of agentive control grounded in attention regulation, goal-directed action, and reflexivity.
The policy brief published by KT4D suggests that examining culture allows for a deeper understanding of societal responses to AI development.
This document adopts a psychological and cognitive perspective on misinformation and disinformation, focusing on the interaction between cognitive biases, emotional motivations, social communication goals, and contemporary information environments.