The source, comprising excerpts from Module A of the KT4D Social Risk Toolkit, explores the challenge that artificial intelligence poses to individual autonomy and free will in modern society. It introduces a literature review on how highly personalized algorithmic content curation shapes personal opinions by exploiting deep-seated human cognitive biases, such as the preference for emotionally charged or explanatory information. The document further discusses how platform design, notably the use of dark patterns, actively captures user attention and diminishes the sense of agency, comparing these tactics to those of addictive industries such as gambling. To safeguard democratic participation and individual liberty, the source argues for user empowerment through enhanced digital literacy and for regulatory interventions that promote reflexivity and transparency over unconscious manipulation. Ultimately, the analysis seeks to identify the regulation needed to ensure AI benefits society while preserving genuine civic engagement and institutional trust.
 

Social networks like Facebook and X (formerly Twitter) are often blamed for spreading 'fake news', manipulating people's opinions, and being used by foreign countries to destabilise democracy. But what do these accusations really imply? Are we as vulnerable to manipulation as this criticism suggests? What is our relationship with the online information we are exposed to?

In this literature review, we will consider two questions. The first concerns the influence that online information has on us, and our ability to form personal opinions independently of algorithmic curation. The second concerns how we can preserve our autonomy when our attention is so easily captured by online systems designed for exactly that purpose.