Module B focuses on the risks AI poses to social fairness and trust: how the use of AI-based tools can generate inequality or dishonesty, particularly when the human work involved differs in nature (e.g. creative vs. repetitive tasks), and how such dynamics affect trust between individuals and institutions.
This document contains the Bibliography of KT4D Social Risk Toolkit Module B: AI, trust and awareness.
Since liberal democracies generally build forms of representation into their institutions, the impact of AI on free and fair elections is also one of the key ways in which technology affects our polities. The level of acceptance of electoral results - the legitimacy of the outcome - rests largely on how those who end up with smaller shares of power in the representative system perceive the fairness of the election process itself.
The experimental component of Module A aims to further characterise internet users' behaviours when they face online choices that may undermine their autonomy: how people evaluate AI-generated information and/or content selected through AI-based algorithms, and how AI-based algorithms influence people's online choices.
The source, which comprises excerpts from Module A of the KT4D Social Risk Toolkit, explores the complex challenge presented by artificial intelligence to individual autonomy and free will within modern society. It introduces a comprehensive literature review focusing on how highly personalised algorithmic content curation influences personal opinions by exploiting deep-seated human cognitive biases, such as the preference for emotionally charged or explanatory information.
The purpose of this document is to provide an overview of how AI, big data and frontier technologies impact rights from the data protection perspective. The recently adopted definition of AI by the Organisation for Economic Co-operation and Development (OECD) states that “an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Companies exercise significant influence over public discourse on online platforms, so the algorithms that shape these platforms should be regulated and constrained to give sufficient weight to the public interest (Susskind, 2018: 350). Perhaps the easiest way of returning control of a public good to the people would be the nationalisation of large AI companies and platforms. However, this would also afford the government considerable power to tailor public discourse to its own interests (Susskind, 2018: 350).