The experimental component of Module A aims to further characterise internet users' behaviour when faced with online choices that may undermine their autonomy: how people evaluate AI-generated information and content selected through AI-based algorithms, and how such algorithms influence their online choices.
Are people more vigilant when they know how AI works, or when they are aware of the intentions of those who provide the content? Here, we study the evaluation of information whose source (AI vs. Expert vs. Layperson) is manipulated, along with its levels of accuracy and relevance. The stakes associated with the truthfulness of the information are also measured and included as a variable for assessing people's propensity to exercise critical thinking. (Experiment 1)
Regarding online choices, our aim is to clarify what consent means in situations where people are asked to agree to terms and conditions, by monitoring the decisions people make and by examining the contextual factors that matter most. (Experiment 2)
Through its action-research activities, this module will contribute to the development of concrete strategies that prioritise sound ethical considerations.
This experimental component examined how people evaluate AI-generated information, comparing epistemic trust in AI-based conversational agents with trust in human experts and non-experts. It showed that credibility judgments depend strongly on contextual stakes and on individual familiarity with AI.