Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when human outputs differ in nature (e.g. creative vs. repetitive tasks), and how such dynamics affect trust between individuals and institutions.
This document contains the Bibliography of KT4D Social Risk Toolkit Module B: AI, trust and awareness.
Since our liberal democracies generally build forms of representation into their institutions, the impact of AI on free and fair elections is also one of the key ways in which technology affects our polities. The degree to which election results are accepted - the legitimacy of the outcome - rests largely on how those who end up with smaller shares of power in the representative system perceive the fairness of the election process itself.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society. It introduces a comprehensive literature review focusing on how highly personalized algorithmic content curation influences personal opinions by exploiting deep-seated human cognitive biases, such as the preference for emotionally charged or explanatory information.
When we think of freedom or ‘liberty’ we typically think of it in certain ways: e.g., freedom to act as we please, freedom from harm or interference, freedom of thought, or freedom to be a member of a community (Susskind, 2018: 165). Philosophers have often said that freedom insofar as it is afforded to you by others is not freedom (Dworkin, 1989: Ch 1; Pettit, 2017; Skinner, 2012). Whilst AI and big data could in several ways enhance freedom, they may also limit it.