This interactive explainer introduces AI-generated deepfake images and helps users understand how and why they are created. Through active participation, it creates a personalised experience that makes users aware of the risks of deepfakes and gives them tools to detect them.

The main objectives are threefold: firstly, to help participants understand what deepfakes are and how to identify them; secondly, to encourage reflection on how emotions and habits influence our perception of truth; and finally, to examine how deepfakes impact society and political processes, with a focus on fostering empowerment and prevention.

Figure: Deepfake explainer homepage screenshot

Use Case Validation

The validation of the Deepfakes explainer highlighted both educational potential and areas for improvement. Participants found the hands-on approach engaging but reported problems with some functions. The red/green feedback system was often misunderstood: many participants interpreted green as meaning that the image was “real” or “correct”, even when it marked the deepfake; adding a clear label such as “DEEPFAKE” would reduce confusion. Users also noticed that the deepfake image always appeared on the right, which made the task predictable; randomising the image layout would solve this. Several small spelling and grammar corrections were also highlighted. Finally, participants said it would be helpful to have quick access to the “how to spot a deepfake” guide at any moment, for example through a permanent button or tab. Regarding linguistic issues, there were inconsistencies between formal and informal address, which confused users.
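The layout randomisation recommended above amounts to a coin flip over which side shows the manipulated image. A minimal sketch of how this could work (all names are hypothetical, not taken from the explainer's actual code):

```typescript
// Hypothetical helper: randomise which side the deepfake appears on,
// so its position gives participants no predictable cue.

type ImagePair = { real: string; fake: string };
type Layout = { left: string; right: string; fakeSide: "left" | "right" };

function randomiseLayout(
  pair: ImagePair,
  rng: () => number = Math.random, // injectable for testing
): Layout {
  // Fair coin flip: roughly half the rounds show the deepfake on the left.
  const fakeOnLeft = rng() < 0.5;
  return fakeOnLeft
    ? { left: pair.fake, right: pair.real, fakeSide: "left" }
    : { left: pair.real, right: pair.fake, fakeSide: "right" };
}
```

Keeping the random source injectable makes the behaviour easy to verify, and the `fakeSide` field lets the feedback logic stay in sync with the shuffled layout.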

The overall recommendation is to use only the informal form (“elige”, “selecciona” — informal Spanish for “choose” and “select”) throughout the tool. Participants also pointed out several specific fixes, such as updating the Deepfake scenario with the revised text, correcting typos, and improving certain expressions. Nevertheless, participants appreciated the realistic difficulty of distinguishing real images from manipulated ones, and they engaged with the dynamic interactive experience, acknowledging its educational potential. With the 14-year-old students, sustaining engagement proved more challenging, both because of the age-specific dynamics of the group and the reduced facilitator-to-student ratio. Overall, the tool was well received and succeeded in engaging participants from all three target groups, making it a versatile resource for raising awareness about media manipulation and digital literacy.