The impact of social networks and artificial intelligence on citizens’ opinions and attitudes is often framed in terms of manipulation, persuasion, and cognitive vulnerability. In public debates, AI-driven content generation and algorithmic curation are portrayed as major threats to individuals’ ability to distinguish truth from falsehood and to form autonomous judgements. Such concerns, however, can be properly assessed only if they are grounded in a realistic understanding of how people process, evaluate, and use information in everyday contexts.

This section adopts a psychological and cognitive perspective on misinformation and disinformation, focusing on the interaction between cognitive biases, emotional motivations, social communication goals, and contemporary information environments. Rather than assuming that beliefs are formed passively, it examines why certain information is attractive, how deeply people engage with it, and under what conditions misleading content actually influences attitudes and behaviour. From this perspective, AI technologies are analysed not as a radical rupture, but as amplifiers of existing dynamics within social networks and public communication.