The Recommendation Algorithms explainer aims to demonstrate how recommendation algorithms work on social media platforms. It lets users simulate their experience on a social media platform, where their choices shape a personalised feed. Each piece of content is linked to a specific algorithmic phenomenon. At the end of the experience, the user receives personalised feedback explaining how the (simulated) algorithm processed their interactions and what these mechanisms mean in real-world contexts.
The explainer helps participants recognise how algorithms affect their daily online habits, reflect on how algorithms influence beliefs and online interactions, and explore their impact on diversity and on the reinforcement or reduction of bias. Ultimately, it aims to encourage participants to reflect on their own data, privacy and online choices.
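The mechanism described above, where each choice feeds back into what is shown next and every item maps to a named phenomenon, can be sketched in a few lines. This is a hypothetical illustration, not the explainer's actual implementation: the catalogue entries, topic labels, and phenomenon names are all invented for the example.

```python
from collections import Counter

# Toy catalogue: each post carries a topic and the algorithmic
# phenomenon it is meant to illustrate (hypothetical labels).
CATALOGUE = [
    {"id": 1, "topic": "sports",   "phenomenon": "engagement loop"},
    {"id": 2, "topic": "politics", "phenomenon": "filter bubble"},
    {"id": 3, "topic": "sports",   "phenomenon": "filter bubble"},
    {"id": 4, "topic": "music",    "phenomenon": "popularity bias"},
    {"id": 5, "topic": "politics", "phenomenon": "echo chamber"},
    {"id": 6, "topic": "music",    "phenomenon": "engagement loop"},
]

def rank_feed(catalogue, interactions):
    """Score posts by how often the user engaged with each topic,
    so past choices increasingly shape what is shown next."""
    topic_counts = Counter(post["topic"] for post in interactions)
    return sorted(catalogue,
                  key=lambda p: topic_counts[p["topic"]],
                  reverse=True)

def session_feedback(interactions):
    """Summarise which phenomena the user's clicks were linked to,
    mirroring the end-of-session feedback the explainer gives."""
    return Counter(post["phenomenon"] for post in interactions)

# Simulate a user who keeps tapping sports content.
clicks = [CATALOGUE[0], CATALOGUE[2], CATALOGUE[0]]
feed = rank_feed(CATALOGUE, clicks)
print([p["topic"] for p in feed])      # sports posts float to the top
print(dict(session_feedback(clicks)))  # phenomena linked to the clicks
```

Even this toy version shows the core dynamic the explainer relies on: repeated engagement with one topic pushes similar content up the ranking, and the session summary ties those clicks back to phenomena such as the filter bubble.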

Use Case Validation
The validation of the Algorithms tool showed that participants appreciated and enjoyed its interactive approach to exploring concepts like algorithmic influence and digital behaviour. Some usability challenges, such as the visual presentation of key concepts used in the game, were addressed in response to participants’ feedback. Beyond this, the Algorithms explainer presented other challenges: users often did not realise when or where they needed to scroll or tap, which made the tool feel unintuitive. The “what you missed” section was especially confusing because the interface did not clearly show that it contained extra information. The videos were another barrier, since they are all in English; to make the tool accessible, Spanish subtitles are needed. Participants also struggled with the explanations and concepts section.
Some users, particularly in the CSO group, reopened the same items without noticing, suggesting the need for a clearer visual way to distinguish what has already been read from what is new. Addressing these points (improving navigation cues, adding subtitles, and clarifying instructions) would make the interactive explainer fully accessible and effective for all target groups. Younger students (14–15 years) interacted with both Interactive Explainers, Deepfakes and Algorithms, but their engagement was limited. Although their survey responses suggested levels of understanding similar to those of older students, observations indicated that their attention was often superficial, and self-reported comprehension may not reliably reflect actual understanding.
Sustaining focus was challenging for this group, highlighting the need for additional facilitation, simplified content, or more engaging formats when targeting younger audiences. Beneficiaries from the CSO reported high comprehension, with all participants selecting the maximum score of 5. It should be taken into account that they only played the interactive explainers and not the Serious Game, which is considered the densest of the tools. This suggests that the explainers were highly accessible and well suited to their needs. The clarity and structure of the explainers appear to have enabled participants to grasp the key concepts easily.