2.1 Equality
Equality is widely considered both a positive aspect of democracy and a necessary feature of it. In brief, the main benefit of equality in democracy is that it gives equal consideration to all individuals, so that each person is a free and willing self-legislator among equals. It can also be seen as a necessary condition of democracy as we know it, since it seems that in order to deliberate freely and reach legitimate democratic decisions, no individual (qua citizen) should be given preferential treatment if decisions are to be made ‘democratically’ (Fleming, 1993). This is not to deny that some individuals may have greater power in a democracy than others; for example, MPs in the UK have a great deal more legislative power than a regular citizen. However, by virtue of citizenship, anyone could in principle hold legislative office. Thus regardless of where, and to what degree, democratic procedures are equal, at some point equality plays a part.
As democratic deliberation has moved, at least in part, online, some of the power over our deliberation has passed from individual citizens into the hands of private firms (Manheim and Kaplan, 2019: 152-153). This infringes on equality, since these firms can prioritise themselves and those whose interests align with their own. Moreover, relying on the analysis of big data and user profiling moves this power not just into the hands of private firms, but also into the algorithms used to target and sort the information we access.
Many issues of (lack of) equality arise from the use of user profiling. Here we outline three such issues: unequal access to information caused by algorithmic sorting, unequal access caused by algorithmic targeting, and problems of bias for equality in decision-making. We take each of these in turn.
2.1.1 Access to Information
In order to make informed decisions, citizens must be able to access information. As has been explored in previous sections of the social risk toolkit, the ways in which we access information have changed with the development of knowledge technologies. In particular, social media has changed the way citizens participate in democracy, with citizens now playing more of a role in content creation and dissemination than we have previously seen (Kneuer, 2016: 666-667). Stewart, Cichocki and McLeod (2022) outline some of the ways that AI-driven algorithms on social media sort and target information. This unequal access to information exacerbates societal harms, for example epistemic injustice and social distrust (Stewart, et al., 2022). These ideas can also be meaningfully translated to other uses of AI-driven algorithms in other knowledge technologies, such as search engines. As outlined in section 1.2.2, user profiling is often used within these algorithms to create a more personalised experience for users. In doing so, the information that individual citizens have immediate, and perhaps passive, access to varies according to their user profile. This section draws on Stewart, Cichocki and McLeod’s (2022) distinction between algorithmic sorting and algorithmic targeting to better understand the risks of user profiling in social media and search engine systems. Relatedly, the processes of algorithmic targeting and sorting of information may also pose threats to transparency if users are unaware of how information is presented to them. The relationship between equal access to information and transparency will be explored in section 2.3.
2.1.1.1 Algorithmic Sorting and Filter Bubbles
The process of algorithmic sorting, as described by Stewart, Cichocki and McLeod (2022: 6), is “the increasing separation of people on social media into different epistemic worlds, which have little overlap in informational content”. The term does not describe the technical processes of sorting algorithms, but rather the effects that various social media and search engine algorithms have on how information is categorised.
One such effect is the development of filter bubbles. Whilst we often praise social media, search engines and other online tools for increasing our access to diverse viewpoints, the use of algorithms based on user profiles can actually have the opposite effect (Bozdag & van der Hoven, 2015: 249-250). Filter bubbles, then, are a form of algorithmic sorting that is ultra-personalised, invisible to us, and something we do not choose to enter (Pariser, 2011: 9-10). Because of this, filter bubbles are distinct from echo chambers, which are the effect of human agency and our self-selection into groups (Stewart, Cichocki and McLeod (2022: 7) offer Facebook groups as an example of such self-selection online), although the information we ourselves choose to see of course feeds into the user profiling that underpins filter bubbles.
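To make the mechanism concrete, the following toy sketch (in Python, with entirely invented items, profile values and weights) shows how ranking content by similarity to a user profile can quietly push counter-attitudinal material out of view. It is an illustration of the general idea, not a description of any real platform's algorithm.

```python
# A minimal, purely illustrative sketch of how profile-based ranking can
# narrow what a user sees. All item names, profile values and weights are
# hypothetical; real platform algorithms are far more complex.

items = [
    {"title": "Op-ed A", "topic": "immigration", "viewpoint": -0.8},
    {"title": "Op-ed B", "topic": "immigration", "viewpoint": 0.7},
    {"title": "Report C", "topic": "economy", "viewpoint": 0.1},
    {"title": "Op-ed D", "topic": "immigration", "viewpoint": -0.6},
    {"title": "Report E", "topic": "economy", "viewpoint": -0.2},
]

# A user profile inferred from past engagement (hypothetical values).
profile = {"topic_interest": {"immigration": 0.9, "economy": 0.3},
           "viewpoint": -0.7}

def personalisation_score(item, profile):
    """Higher score = shown more prominently. Items close to the user's
    inferred interests and viewpoint rise to the top of the feed."""
    interest = profile["topic_interest"].get(item["topic"], 0.0)
    agreement = 1.0 - abs(item["viewpoint"] - profile["viewpoint"]) / 2.0
    return 0.3 * interest + 0.7 * agreement

feed = sorted(items, key=lambda i: personalisation_score(i, profile), reverse=True)
for item in feed:
    print(item["title"], round(personalisation_score(item, profile), 2))
# Op-ed B, the only counter-attitudinal piece on the topic the user cares
# most about, sinks to the bottom of the feed without the user choosing this.
```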
Filter bubbles pose a threat to democracy and public deliberation in many ways. Bozdag and van der Hoven (2015: 255) identify different risks according to different conceptions of democracy. For example, under a liberal view of democracy, the restriction of users’ liberty and their unawareness of other viewpoints are the paramount concerns (Bozdag & van der Hoven, 2015: 255). Under a deliberative view of democracy, we may instead point to the negative effects on civic discourse and common ground, and the decline of epistemic quality within society (Bozdag & van der Hoven, 2015: 255). Whichever conception of democracy is adopted, however, what seems to be important is that filter bubbles undermine civic participation because they constrain information and deliberation to highly personalised groups.
Even if the effects of algorithmic sorting in producing filter bubbles are overstated in the literature, the process of algorithmic sorting itself poses many issues for democratic institutions and civic participation. The first is that algorithmic sorting can reify societal issues, in particular testimonial injustice (Stewart, et al., 2022: 12-19). Testimonial injustice is a form of epistemic injustice, as coined by Miranda Fricker (2007). Epistemic injustice is a kind of injustice that wrongs someone in their capacity as a knower (Fricker, 2007: 1). Testimonial injustice is the primary form of epistemic injustice, and “occurs when prejudice causes a hearer to give a deflated level of credibility to a speaker’s word” (Fricker, 2007: 1). It can be either systematic or incidental (Fricker, 2007: 28-29).
Essentially, testimonial injustice occurs when someone’s word is not seen as credible because the listener holds some prejudice against the speaker; for example, someone who holds racist views may not give a person of colour’s testimony about the racism they face the credibility it deserves. Algorithmic sorting can exacerbate testimonial injustices that are already present in society, for example those faced by marginalised groups, because citizens are sorted into increasingly distant online communities; it can also contribute to Othering (Stewart, et al., 2022: 14).
Social media and search engine algorithms based on user profiling therefore pose significant risks to democracy because of the way they sort, and thus constrain, the information citizens can see and access easily. Of course, citizens may well still be able to seek out alternative information; however, the information we passively consume via social media feeds (especially with the advent of recommendation feeds rather than chronological feeds) is constrained by these filter bubbles and by algorithmic sorting based on user profiling.
There is some empirical evidence of this. For example, a study by Ro’ee Levy (2021: 831) found that “social media algorithms may limit exposure to counter-attitudinal news and thus increase polarisation”. Moreover, there is some evidence of social media posts having real-life outcomes: a study by Müller and Schwarz (2023: 270) found that, following the 2016 US presidential primaries, “Trump's tweets about Muslims predict increases in xenophobic tweets by his followers, cable news mentions of Muslims, and hate crimes on the following days”.
2.1.1.2 Targeting and Dark Advertising
Algorithmic targeting utilises user profiling in order to provide content based on the likelihood that the user will engage with it (Stewart, et al., 2022: 9). We may not see the targeting of information to users as particularly harmful; after all, targeted advertising and recommendations provide us with information about the things we are most interested in. However, in targeting information by using user profiles, “algorithmic targeting plays a special role in epistemic worldmaking, as it reinforces beliefs that users might already be sympathetic or susceptible to, and which appear “normal” within the epistemic worlds into which they have been sorted” (Stewart, et al., 2022: 9). This is a particular concern for democracy if the information targeted towards users is misinformation or disinformation.
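The hedged sketch below (hypothetical users, attributes, weights and threshold) illustrates the basic logic of engagement-based targeting: content is simply never pushed to those whose profiles suggest they are unlikely to engage, which is one way targeting helps build the epistemic worlds described above.

```python
# Illustrative sketch of algorithmic targeting: content is pushed to the
# users whose profiles suggest they are most likely to engage with it.
# User attributes, weights and the threshold are all invented.

users = {
    "user_1": {"clicked_similar": 12, "follows_topic": True},
    "user_2": {"clicked_similar": 0,  "follows_topic": False},
    "user_3": {"clicked_similar": 5,  "follows_topic": True},
}

def engagement_likelihood(profile):
    # A toy stand-in for a trained engagement model.
    score = 0.05 * profile["clicked_similar"] + (0.4 if profile["follows_topic"] else 0.0)
    return min(score, 1.0)

# Only users above a threshold are targeted, so the same post is simply
# never shown to the rest - reinforcing the worlds they are already in.
targeted = [u for u, p in users.items() if engagement_likelihood(p) > 0.5]
print(targeted)  # ['user_1', 'user_3']
```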
Lynch (2017) makes a distinction between targeted advertising (adverts and post recommendations which are targeted using user profiles, but which may then be shared or are at least discoverable by those not targeted) and dark advertising (adverts which are only shown to users who fall within highly “specific demographic parameters”). Some warn of the dangers of dark advertising, such as the lack of transparency, accountability, and the use of personal data (Lynch, 2017; Saunders, 2020; Trott, et al., 2021). Dark advertising is particularly pernicious in relation to political campaigns and the access some citizens have to information that others do not. These concerns go beyond those we may have about traditional advertising campaigns for two reasons: it occurs “in the dark”, and different messages can be sent to different groups (Saunders, 2020: 75). This has already been seen in the Cambridge Analytica scandal and its use of Facebook adverts in the Brexit vote and the 2016 US election (Confessore, 2018; Manheim and Kaplan, 2019: 109). Theorists have described the rise of this phenomenon as “the rise of the weaponized AI propaganda machine” (Anderson and Horvath, 2017).
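The toy sketch below (all segment definitions and messages are invented) illustrates why dark advertising is so hard to scrutinise: each narrowly defined segment sees a different message, and neither the other segments nor the wider public ever sees the campaign as a whole.

```python
# Hypothetical sketch of "dark" political advertising: different messages
# are delivered to narrowly defined segments, so no single observer sees
# the whole campaign. Segments, users and messages are fabricated.

audience = [
    {"id": 1, "age": 67, "region": "coastal", "interest": "fishing"},
    {"id": 2, "age": 23, "region": "urban",   "interest": "renting"},
    {"id": 3, "age": 45, "region": "rural",   "interest": "farming"},
]

campaign = [
    # (segment predicate, message) pairs; each message is visible only to
    # users matching the predicate, never to the electorate at large.
    (lambda u: u["age"] > 60 and u["region"] == "coastal",
     "Candidate X will protect your fishing rights."),
    (lambda u: u["age"] < 30 and u["interest"] == "renting",
     "Candidate X will freeze rents."),
]

for user in audience:
    for matches, message in campaign:
        if matches(user):
            print(f"user {user['id']} sees: {message}")
# user 3 sees nothing, and neither segment ever sees the other's message.
```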
Saunders (2020: 77) outlines five reasons dark advertising is a threat to democracy. Dark advertisements:
1. Are not part of the national conversation
2. Are difficult for the political media to track
3. Undermine the marketplace of ideas
4. Cannot be challenged or fact-checked
5. Cause long-term issues, such as disenfranchisement and polarisation
(Saunders, 2020: 77)
Dark advertising therefore undermines the equality of access to information needed for citizens to make informed voting decisions, and to be informed about issues in the first place. As Trott, Li, Fordyce, and Andrejevic (2021: 762-763) note, adverts have the power to shape our values and attitudes, even if this is not their primary aim. Moreover, dark advertising can focus debates on more divisive issues, for example immigration or welfare (Saunders, 2020: 76-77). These issues have traditionally been particularly mobilising for voters, and coupled with the inability to fact-check or challenge their claims, dark advertising poses a significant threat both in terms of the polarisation of views and access to information. This is particularly harmful to democratic deliberation and civic participation. We will return to these further problems of algorithmic targeting in sections 2.3.1 and 2.4.1, and to the steps that the EU, US and other institutions have begun to take to mitigate these risks in section 3 of this module and section 3 of Module E.
2.1.2 Decision-Making
Another way that the use of personal data and user profiling may threaten equality is through their use in decision-making. This section outlines two stages within AI-assisted decision-making where bias and discrimination may occur: in the collection and analysis of personal data to create user profiles, and in the use of user profiles in automated decision-making.
2.1.2.1 Bias and Discrimination in User Profiling
User profiling of course relies heavily on the generation and collection of personal data, both by humans and by systems created by humans (Ntoutsi, et al., 2020: 3). As such, bias and discrimination may arise at many points, such as in the targeting hypotheses used to train the AI algorithms used within knowledge technologies, in the encoding of data, and in problems with the representativeness of data. Bias can come in many forms and may occur at different stages of the data collection and analysis process. Some types of bias include selection bias (including self-selection and exclusion bias), information bias (e.g., misclassification and regression to the mean), and confounding (Delgado-Rodriguez & Llorca, 2004). Bias is a particular issue for user profiling since the data used to build user profiles may be inaccurate, and this may lead to bias and discrimination when ADM uses these profiles in its decision-making.
In terms of the hypotheses used to define the goal of an AI-assisted algorithm: when designing an AI model, businesses or institutions must translate their broader goal (for example, marketing a product to people who are likely to buy it) into inputs, criteria, and labels that can be understood by an AI system (Roselli, et al., 2019). Roselli, Matthews and Talagala (2019) note that in doing so, corporations or institutions may use information such as historical data (for example, information about those who have previously bought similar products) or “surrogate data” (for example, a credit score as a source of data about a person's reliability rather than a letter of recommendation). These can introduce bias in two ways. The first is that the data used to train the AI model may not reflect the data the model is eventually used on. This may result in predictions that do not accurately relate to those whom the system should be targeting, thus excluding people because of historical bias. The second is that the use of surrogate data results in information loss, which may also introduce bias if the excluded information was important in some way.
Existing bias may also be imported into data encoding through the use of historical or surrogate data. Importantly, this may be the case even if sensitive personal data is not present, since redundant encodings (other features that are correlated with the sensitive data) may still exist (Ntoutsi, et al., 2020: 3). For example, certain neighbourhoods or areas may be associated with a certain demographic (Roselli, et al., 2019; Ntoutsi, et al., 2020: 4). This is often the result of influences, causal relationships, and correlations within the data not being properly understood.
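The following fabricated example illustrates a redundant encoding of this kind: even after the sensitive attribute is dropped, a correlated proxy (here a postcode) reproduces the historical disparity. It is a toy sketch of the mechanism, not a model of any real system.

```python
# A toy illustration of a "redundant encoding": even when the sensitive
# attribute is removed, a correlated proxy (here, a postcode) can carry
# much the same information into a decision. All data are fabricated.

from collections import defaultdict

records = [
    # (postcode, sensitive_group, approved)
    ("AB1", "group_a", 1), ("AB1", "group_a", 1), ("AB1", "group_a", 0),
    ("CD2", "group_b", 0), ("CD2", "group_b", 0), ("CD2", "group_b", 1),
]

# Drop the sensitive attribute, as a naive "fairness through unawareness"
# approach would, and score applicants by historical approval rate per
# postcode instead.
approvals = defaultdict(list)
for postcode, _, approved in records:
    approvals[postcode].append(approved)

postcode_rate = {pc: sum(v) / len(v) for pc, v in approvals.items()}
print(postcode_rate)  # approximately {'AB1': 0.67, 'CD2': 0.33}

# Because postcode and group membership are perfectly correlated in this
# data, decisions based on postcode reproduce the historical disparity
# between the groups even though the sensitive attribute was never used.
```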
The data used within user profiles may also not be truly representative of either individuals or demographic groups. For example, data may have been manipulated, mislabelled, mismatched, unlearned, or unseen (Roselli, et al., 2019). When these problems within the data coincide with existing bias and discrimination in society against certain groups, this can lead to discriminatory decisions (Calders & Žliobaitė, 2013: 53). This is also a problem at a larger scale, since big data may over- or under-represent certain groups if datasets have been created without sufficient attention to who is included (Ntoutsi, et al., 2020: 4). We return to this discussion in Module E when discussing the use of knowledge technologies, AI and ADM in criminal law.
2.1.2.2 Bias and Discrimination in Automated Decision-Making
As discussed, AI systems are trained on existing datasets, which may contain biases or reflect historical and structural inequalities that affect marginalised groups in society (O'Neil 2016; Eubanks 2018). As a result, AI systems may reproduce or amplify these biases and inequalities in their outputs, leading to discriminatory decisions that compromise the principle of equal treatment in democratic societies (FRA 2020; 2022). This is especially problematic when knowledge technologies that utilise AI systems are used in public services that have a direct impact on people’s lives and rights, such as social welfare, policing, the judicial system, health care, and banking.
Several examples of such cases have been documented in the literature, such as the COMPAS system in the US, which was found to be biased against African-American defendants in predicting recidivism rates (Angwin et al. 2016), and the SyRI system in the Netherlands, which was used to detect fraud in social benefits and was accused of discriminating against low-income and immigrant households (Bekker 2021). To address such issues, technical work within machine learning on algorithmic fairness has blossomed, developing methods and metrics to quantify and mitigate the potential biases of knowledge technologies towards different groups of people (Ntoutsi et al. 2020; Mehrabi et al. 2021). Yet many scholars note that the biases of AI systems are socio-technical issues that depend on societal context and cannot merely be reduced to technical fairness metrics or fixes (Selbst et al. 2019).
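As an illustration of what such fairness metrics measure, the sketch below computes one of the simplest, the demographic (statistical) parity gap, on invented decisions and group labels; the choice of metric and of any acceptable threshold are assumptions made only for the purposes of the example.

```python
# A minimal sketch of one common algorithmic-fairness metric, demographic
# (statistical) parity: the difference in positive-decision rates between
# groups. The decisions and group labels below are invented.

decisions = [1, 1, 0, 1, 0, 0, 1, 0]           # 1 = favourable outcome
groups =    ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(decisions, groups, group):
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

parity_gap = positive_rate(decisions, groups, "a") - positive_rate(decisions, groups, "b")
print(f"P(positive | a) - P(positive | b) = {parity_gap:.2f}")  # 0.50

# A large gap flags a possible disparity, but, as Selbst et al. (2019)
# stress, a single number cannot settle whether a system is fair in its
# social context.
```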
2.2 Privacy
Privacy is of both intrinsic and instrumental importance to democracy. For example, for deliberation to take place citizens must be afforded a degree of autonomy (Boehme-Neßler, 2016: 227-228). As Boehme-Neßler (2016: 227-228) notes, privacy is inextricably linked to autonomy because it is only through an element of privacy that citizens can “develop, learn and then exercise autonomy”. Without privacy, citizens may, for example, feel that their actions are overly scrutinised or prescribed in some way. Moreover, citizens’ actions and behaviour may change if they feel they have not been afforded adequate privacy.
The use of personal data and user profiling by both institutions and businesses arguably infringes upon this democratic right to privacy. For one, it is now nearly impossible to use the internet or other knowledge technologies without leaving some kind of data trail. For example, it was predicted that by the end of 2023 there would be 43 billion devices connected to the Internet of Things (IoT) (Marr, 2022). The use of personal data and user profiling is of particular concern for the privacy of citizens because of the ways that behaviour and action can be predicted. We will return to this in Module E when discussing the way that knowledge technologies such as AI and big data can restrict individual freedom of speech and action.
Conceptions of personal data often focus on ownership rights (Hazel, 2020; Hummel, et al., 2020; Janeček, 2018). However, the expansion of big data and the IoT has shifted the focus from data collection to data processing and analysis, and this has prompted some criticism of a purely ownership-based model of data privacy (Mai, 2016: 194-197). Given that data ownership can be relational - more than one person or institution can ‘own’ the same data; for example, both you and the business you frequent own the data on your shopping habits - ownership alone does not seem to explain our privacy concerns (Mai, 2016: 196). Floridi (2005: 195) outlines an ontological interpretation of personal data: it is not that you own your personal data, but rather that your personal data constitutively belongs to you - it is you. This interpretation goes some way towards helping us understand why the use of personal data in user profiling and ADM may be an infringement of privacy.
Another way to understand our privacy concerns from the use of knowledge technologies, particularly the use of AI and big data, is from the view of the harm a lack of privacy can cause. Prosser (cited in Manheim and Kaplan, 2019: 117) outlines four distinct harms that arise from the violation of privacy: “1) intrusion upon seclusion or solitude, or into private affairs; 2) public disclosure of embarrassing private facts; 3) false light publicity; and 4) appropriation of name or likeness”. These harms are directly related to data privacy in that the collection, processing, and analysis of personal data can contribute to these harms directly and indirectly. Most obviously, the collection of personal data can directly infringe upon a person’s private affairs, or insecure storage of personal data can lead to data leaks. This section explores two main emerging privacy concerns about the use of personal data and user profiling: the way that profiling itself can infringe upon privacy, and the use and storage of training data in generative AI systems.
2.2.1 Profiling as an Infringement on Privacy
Due to the sensitivity of personal data, it follows that individuals should have some privacy rights, even if these are not grounded in ‘ownership’ of data. As a consequence, institutions, businesses and corporations have a corresponding duty to respect this right. The predictive power of some AI algorithms, ADM and user profiling, however, presents a further issue that earlier concerns about big data did not anticipate: it is not only that combined datasets can help to glean information about individuals and collectives, but that new information can be generated from the analysis of this data. For an anecdotal example, consider the father who complained to US store Target for sending his daughter coupons for maternity clothes (Mai, 2016: 192). It transpired that the daughter’s previous buying history had produced a “pregnancy prediction score”, which saw Target ‘knowing’ she was pregnant before anyone else (Duhigg, 2012; Mai, 2016: 192). As Mai (2016: 199) notes, the advent of big data, and we believe also the use of user profiling and ADM, has caused us to rethink privacy. The ‘datafication’ of privacy has arguably shifted the focus away from privacy concerns about data collection and towards concerns about data analysis and processing (Mai, 2016: 199). One such concern is the way that user profiling can generate information, as in the case above. This is a threat to privacy because things may be understood about a ‘data subject’ even without them giving this information, knowingly or not, to the data controller. Moreover, even if the information inferred from user profiling is incorrect, it nonetheless counts as an infringement of privacy.
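The toy sketch below, loosely modelled on the Target anecdote, shows how new information can be 'generated' from a profile: a score predicting an attribute the customer never disclosed. The products, weights and threshold are all invented for illustration.

```python
# An illustrative sketch of how a profile can 'generate' new information:
# a score predicting an attribute the customer never disclosed. Products,
# weights and the threshold are hypothetical.

purchase_history = ["unscented lotion", "cotton balls", "large handbag",
                    "zinc supplement", "magnesium supplement"]

# Hypothetical weights learned from other customers' histories.
signal_weights = {
    "unscented lotion": 0.25,
    "cotton balls": 0.10,
    "zinc supplement": 0.20,
    "magnesium supplement": 0.20,
}

prediction_score = sum(signal_weights.get(item, 0.0) for item in purchase_history)
print(f"pregnancy prediction score: {prediction_score:.2f}")  # 0.75

if prediction_score > 0.6:
    # The retailer now acts on information the customer never provided -
    # the privacy concern lies in the analysis, not just the collection.
    print("send maternity coupons")
```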
Some solutions have been proposed to protect the privacy of users whilst still harnessing the benefits of big data and machine learning techniques (Khalid, et al., 2023; Oseni, et al., 2021). These solutions often rest upon the idea that data ‘capture’ is never neutral or passive, but is “always socio-technical in nature” and that the way that technology captures our data affects the way we interact with it (Agre, 1994: 112). Mai’s (2016: 198) datafication model of privacy recognises this and argues that we must place focus on “facts the entities have produced about people”, not just those they have collected, to truly understand the risks to privacy. This is especially important for the use of user profiling.
2.2.2 Generative AI and Personal Data
Generative AI systems, from Generative Adversarial Networks (GANs) to large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini, utilise user data for training purposes. This poses two main concerns for the privacy of such data: the first concerns what data is collected and how it is used, and the second concerns how safely the data is stored. Web scraping is often used to assemble the training data for generative AI systems; for example, GPT-3.5 was trained on 300 billion words of writing from the internet (Hughes, 2023). This training data may therefore inevitably include some personal data. Moreover, to improve the accuracy and performance of LLMs, providers often also use the data and prompts that users input.
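As a hedged illustration of how personal data can enter a scraped corpus, the sketch below runs a simple pattern scan over a fabricated web page and finds an email address and a phone-like number; real scraping pipelines, and real leakage, are of course far more complicated than this.

```python
# A toy sketch of the basic concern: scraped web text can contain personal
# data. The page content is fabricated; the patterns are deliberately crude.

import re

scraped_page = """Contact our volunteer Jane Doe at jane.doe@example.org
or on 07700 900123 to discuss the petition."""

email_pattern = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
phone_pattern = re.compile(r"\b\d{5}\s?\d{6}\b")

found = email_pattern.findall(scraped_page) + phone_pattern.findall(scraped_page)
print(found)  # ['jane.doe@example.org', '07700 900123']
# Unless such strings are filtered out, they become part of the training
# data and could, in principle, be reproduced in a model's output.
```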
These practices pose a risk to the privacy of those whose personal data has made its way into training data, whether through web scraping or via information users have inputted into the system. First, many users may not be aware that the data they input into generative AI systems may be used in this way. Second, questions arise about the safe storage of such data and who has access to it. This is a particular problem for LLMs, as this personal data may be ‘generated’ into an answer given to other users. It may also be exploited for nefarious purposes: there have already been instances of users bypassing safety systems and accessing training data within ChatGPT (Gupta, et al., 2023: 80242).
2.3 Transparency and Accountability
Transparency and accountability are fundamental concerns when we place our trust both in other people, and in technology. This is of particular importance in democracies because of the ways knowledge technologies intersect with civic participation. This section outlines two risks involved in the reliance on AI systems: transparency and accountability for data use, and transparency and accountability in decision-making by ADM and AI systems.
2.3.1 Considerations for Data Use
As discussed in section 2.2, personal data use is a fundamental privacy issue. With this, however, comes the responsibility to protect this data adequately: not only concerns about who has access to what data, but also for what purpose, and why. As discussed throughout this module, because of the interest private companies have in big data - data is a form of currency, after all - citizens need to be able to trust that their data is only being used in the ways companies describe. As will be discussed in section 3 on the regulation of AI and big data, GDPR has done much to bring these discussions to the fore, not just in Europe but worldwide. Whilst regulation like this can provide a good baseline of transparency, and arguably more importantly of accountability, in the use of personal data, compliance with the law is not the be-all and end-all of ethical behaviour.
When things go wrong, however, for example data leaks caused by human error, accountability for these mistakes is important. It has been suggested that the use of AI and ADM may in fact reduce data breach costs by containing breaches faster (Lehmann and Durfee, 2023). Whilst this may be positive from a data safety standpoint, problems with the transparency and accountability of ADM bring with them questions about whether we can fully understand the way ADM makes decisions, and who is accountable if those decisions are wrong.
Moreover, in the context of democracy, the use of big data poses an interesting problem for accountability: decision-making is no longer theory-based; instead, there has been an epistemological shift towards data-led decisions (Kempeneer, 2021: 3). This is a problem for accountability and transparency, not only because of the black-box nature of AI and ADM, but because theory is seen to be necessary for understanding ‘why’ - correlation is not the full picture (Kempeneer, 2021: 3).
2.3.2 Black Box Decision-Making
AI systems are often complex and opaque, making it hard for users and affected parties to understand and contest their logic and outcomes. This can prevent public oversight and accountability of the impact of AI systems on democratic processes and values. The black box nature of automated decision-making systems can also undermine trust and confidence in democratic institutions and authorities. To address this issue, legal and ethical frameworks are needed to ensure that AI systems are transparent and accountable, and that people have the right to know and challenge the decisions that affect them. One such framework is the General Data Protection Regulation (GDPR), which arguably grants people the right not to be subject to a decision based solely on automated processing, as well as the right to obtain an explanation of the decision, although these are contested issues (Wachter, Mittelstadt & Floridi 2017; Kaminski 2019). However, transparency and accountability are not straightforward concepts, and may vary depending on the context and implications of a given AI system. Novelli, Taddeo & Floridi (2023) define algorithmic accountability in terms of answerability that requires authority recognition, interrogation and limitation of power. Furthermore, policymakers may utilise accountability either proactively or reactively in governing AI, depending on whether it targets compliance, oversight, or enforcement.
This black-box nature means we cannot be fully aware of how an AI system is achieving a goal (Shaw, 2019: 2). By implementing AI into democratic decision-making in any capacity, we cannot be sure whether it is using discriminatory methods to reach its goals (Manheim and Kaplan, 2019: 155). It is often assumed that AI’s reliance on data makes it impartial. However, some AI systems used to advise courts in the US on the likelihood of recidivism have been found to do the opposite, being almost twice as likely to suggest that a black defendant will reoffend than a white defendant (Angwin et al, 2016; Manheim and Kaplan, 2019: 157). More broadly, owing to issues mentioned in section 2.1.2, such as biased input data or creator biases, AI systems often falsely equate correlation with causation in their decision-making (Sherman, 2018). Thus, increased implementation may lead to greater inequality (Shorey and Howard, 2018: 5033).
‘Transparent algorithms’ and explainable AI are often cited as potential solutions to some of these black-box problems, and they may be valuable tools for institutions seeking to be seen as fair and legitimate (Vredenburgh, 2022: 78). These are discussed in section 3.1 on how tech firms should, or should not, be regulated. A related solution is ensuring human input and oversight over decision-making (Sartori and Theodorou, 2022: 4). This can come in many forms, such as ‘human in the loop’ (Sartori and Theodorou, 2022: 4; Zanzotto, 2019).
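As an illustration of the kind of output explainable AI aims to provide, the sketch below breaks a simple, entirely hypothetical linear credit-scoring decision into per-feature contributions; genuinely black-box models do not offer such a breakdown directly, which is precisely the gap that post-hoc explanation methods and human oversight are meant to address.

```python
# A minimal sketch of one form of 'explainable AI': for a simple linear
# scoring model, report how much each input feature contributed to a
# decision, so the affected person can see and contest it. The model,
# weights and applicant are all hypothetical.

weights = {"income": 0.4, "years_at_address": 0.3, "prior_defaults": -0.6}
applicant = {"income": 0.7, "years_at_address": 0.2, "prior_defaults": 1.0}
threshold = 0.0

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score >= threshold else "decline"

print(f"decision: {decision} (score {score:.2f})")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {value:+.2f}")

# For genuinely black-box models this breakdown is not available directly,
# which is why explanation methods and human oversight are proposed as
# complements to, rather than replacements for, transparency.
```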
2.4 Participation
Public deliberation, in its varying forms, makes clear the preferences of the citizens of a society. This matters because knowledge technologies, including AI and big data, might seriously alter the ways in which deliberation can take place. We value public deliberation within democracies both instrumentally and intrinsically. Instrumental reasons to value public deliberation include the development of better laws, the public justification of those laws, and the improvement of citizens’ character, for example by increasing their autonomy or rationality (Christiano, 1997: 214). Intrinsically, the process of public deliberation may be seen as valuable because it embodies a mutual respect among citizens: political debate is centred on deliberation about the public good, not on self- or group interest, and when parties deliberate, weight should only be given to arguments which openly appeal to the public good (Rawls, 2004). Interestingly, the reasons we value public deliberation overlap with the reasons we value democracy outlined in section 1.3.
This section outlines the ways in which civic participation and public deliberation may be undermined or threatened by the use of personal data and user profiling. The ways that knowledge technologies may alter democratic procedures or erode the public sphere will be explored.
2.4.1 Potential to Alter Democratic Procedures
Knowledge technologies such as AI systems can influence, shape, or even replace human choices and actions, thereby undermining human autonomy and judgement in democratic decision-making. For instance, some scholars have warned of the risk of ‘algocracy’, a situation where increasing decision-making authority is given to AI systems, constraining and limiting opportunities for human participation and legitimacy of political decisions (Danaher 2016; Sætra 2020). Moreover, AI systems can diminish the epistemic agency of citizens, that is, their ability to acquire, evaluate, and communicate information, which is essential for a well-functioning democracy (Coeckelbergh 2023). To prevent or reduce the negative impacts of knowledge technologies on human autonomy and judgement, several measures can be taken at different levels of design, implementation, and governance. Some of these measures are: designing AI systems that respect and support human autonomy, rather than hinder or manipulate it (Laitinen & Sahlgren 2021); ensuring that human users and stakeholders are informed and aware of the capabilities, limitations, and potential biases of AI systems, and that they can exercise meaningful control and oversight over them (Verbeek 2011; Calvo et al. 2020).
There are several ways in which democracy’s ability to make good laws could be improved by the presence of AI and big data. One way of looking at this is to ask how democratic systems could be adapted to improve decision-making. Proposed adaptations include wiki democracy, data democracy, and AI democracy. We take each of these in turn and explain how they may provide both an opportunity for, and a threat to, democratic participation.
There are large sources of information that can be edited by anyone, such as Wikipedia; this method of developing large platforms to which anyone can contribute is called ‘open-source production’ (Noveck, 2009; Watkins, et al., 2016). Using a common platform, all citizens could contribute towards the production of policies and legislation (Susskind, 2018: 243). The main advantage of this for policy-making is that large groups of individuals are more likely to come to correct conclusions. A wiki democracy would be a form of open-source policy-making, thus offering any individual able to access it the ability to directly influence laws and policy decisions (Susskind, 2018: 242).
However, there are several issues with this. First, there may still be a lack of motivation to participate. Regular democratic procedures are often subject to low voter turnout, in part because people are either too busy or care too little (Cassidy, 2022). It stands to reason that asking them to become active participants in making laws would yield even lower participation (Susskind, 2018: 245). Further, very few people would be able to edit either code or laws; in effect, only those with the know-how would be able to edit the laws, resulting in a more epistocratic system (Susskind, 2018: 245). Finally, in simple terms, deliberation does not always equate to decision-making. Even assuming people were both willing and able to do this, the ability to continually alter the law through deliberation could result in no definite decisions ever being made (Durning, 2004; Lanier, 2014: 55-60). It would therefore be highly impractical, since its fluidity would limit individuals’ awareness of the law. This might, however, be solved through reasonable constraints on the extent to which policy could be open-sourced.
Relatedly, the idea of a data democracy may be seen as a viable supplement to our existing democratic processes. Whilst many decisions require consideration by individuals, we now have a wealth of data on particular issues (Susskind, 2018: 247). Many decisions, such as where best to distribute funds to ensure long-term prosperity, could be made by AI analysing data to optimise decision-making (Harari, 2016: 327-342). This reliance on data could offer a degree of impartiality, removing human bias from decision-making (Bohannon, 2017; Manheim and Kaplan, 2019: 155).
However, even if data is applied impartially, as we have outlined throughout this module there is no way of guaranteeing that the data itself is of a suitable standard to support truly good decisions (Susskind, 2018: 248). We may also believe that a data democracy lacks legitimacy, since neither citizens nor their representatives have the power to make decisions (though this might be resolved through some veto power).
The concept of an AI democracy could be implemented in many different ways, for example through AI advisors or AI voter delegates. Essentially, we could use algorithms individually to improve our own decision-making ability (Susskind, 2018: 251). AI advisors could perform an advisory function in predicting how we should vote: perhaps you enter information about yourself (in effect creating your own user profile), and the AI then builds a profile of you and advises you on which way it would be best for you to vote (Mols, et al., 2022).
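A hypothetical sketch of the AI-advisor idea is given below: the citizen states how much they care about each issue and where they stand, and the advisor ranks parties by agreement. The parties, issues and weighting scheme are invented, and a real system of this kind would of course raise exactly the profiling concerns discussed throughout this module.

```python
# A toy 'voting advisor': rank parties by how well their (invented)
# positions match a citizen's stated issue importances and stances.
# Parties, issues, positions and the scoring rule are all hypothetical.

voter = {"healthcare": (1.0, 0.9),   # issue: (importance, position in [-1, 1])
         "taxation": (0.5, -0.2),
         "environment": (0.8, 0.7)}

parties = {
    "Party A": {"healthcare": 0.8, "taxation": -0.5, "environment": 0.2},
    "Party B": {"healthcare": -0.3, "taxation": 0.6, "environment": 0.9},
}

def agreement(voter, positions):
    """Importance-weighted closeness between the voter and a party."""
    total = 0.0
    for issue, (importance, stance) in voter.items():
        total += importance * (1 - abs(stance - positions[issue]) / 2)
    return total / sum(importance for importance, _ in voter.values())

ranked = sorted(parties, key=lambda p: agreement(voter, parties[p]), reverse=True)
for party in ranked:
    print(party, round(agreement(voter, parties[party]), 2))
```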
Alternatively, AI voter delegates, using an algorithm and user profiling, could vote on our behalf on designated issues. This could be done frequently for smaller pieces of legislation (Harari, 2016: 340). These systems would likely run into the issues raised earlier: we cannot be sure how these AI systems come to their decisions, or whether they have been created in such a way as to bias them. They could also potentially be hacked, which could severely threaten democracy (Manheim and Kaplan, 2019: 134-138; Susskind, 2018: 253).
These uses of AI could be justified on egalitarian grounds as enhancing equality. Many people, owing to their circumstances, have neither the time nor the ability to ‘do themselves justice’ and make an informed decision about how to vote. If an AI could provide advice and summaries of important information, this could better accommodate the disadvantaged, since many people lack the time to investigate all the material facts thoroughly. It could also help those who would struggle to understand the material even if they had all the facts. This would benefit not only laypersons with a general lack of expertise in politics, but also those with severe mental disabilities, those suffering from long-term illnesses such as Alzheimer’s disease, or those in comas. For example, such a summary could be written in simplified language, making it easier for people to comprehend. Moreover, the use of AI in this way may in some sense prevent misdirection. After the 2016 referendum on Brexit, some polls showed that nearly a quarter of ‘leave’ voters felt that they had been misled (Farand, 2017), and some have since suggested that if the referendum were retaken, remain would win, and that the misinformation offered by the Vote Leave campaign was a significant factor in the actual outcome. With this example in mind, if an AI system tailored to each individual acted as an impartial advisor, it might prevent decisions being made against the interests of the individuals making them as a result of misinformation: the AI knows what is important to that person and can explain why that person should vote a particular way.
However, as discussed in section 2.3.2, AI algorithms can be seen as a ‘black box’, since humans cannot be fully aware of how an AI system is achieving a goal (Shaw, 2019: 2). By implementing AI into democratic decision-making in any capacity, we cannot be sure whether it is using discriminatory methods to reach its goals (Manheim and Kaplan, 2019: 155). Owing to issues such as biased input data or creator biases, AI often falsely equates correlation with causation in decision-making (Sherman, 2018). Thus, increased implementation may lead to greater inequality (Shorey and Howard, 2018: 5033).
2.4.2 Erosion of the Public Sphere
AI has become an integral part of the public sphere, the arena of discursive interaction where citizens form and express their opinions on matters of common concern. However, the increasing reliance on AI-driven applications by media platforms and political actors influences the quality and diversity of information accessible to citizens, with consequences for democratic deliberation. According to Jungherr (2023), AI can affect democracy in various ways, such as by influencing the conditions and opportunities for self-rule, the institution of elections, and the competition between democratic and autocratic regimes. Moreover, Jungherr and Schroeder (2023) argue that AI can also impact the public sphere itself, by shaping the information environment and the content that citizens encounter and share online. Algorithms that filter, rank, recommend, or generate content can create echo chambers, polarisation, misinformation, or propaganda that undermine democratic deliberation and the formation of publics and counterpublics. In particular, Jungherr and Schroeder argue that AI is likely to strengthen the intermediary structures of the public arena, hide challenges to the political status quo, and fortify control by gatekeepers.
AI applications such as automated decision-making can also unintentionally violate the fundamental rights and interests of citizens across domains, thereby harming the public sphere. Moreover, AI can be used to target, persuade, or manipulate voters through personalisation and microtargeting, undermining the integrity of electoral democracy. These more direct risks are summarised below; the structural risks that AI poses are explored further in section 3.2.1.
Unintentional risks of AI for society:
- Algorithmic bias and discrimination
- Opacity and lack of transparency of AI models
- Misalignment and safety risks of advanced models
Intentional misuse in politics:
- Disinformation and deep fakes
- Surveillance, profiling and privacy considerations
- Campaign and electoral manipulation