An AI tool that promises to help us find common ground - what should we keep in mind when introducing AI into deliberation and policy making?


Authors: Matilde Castleberry, Eleonora Lima, Atte Ojanen, Tiffany Morisseau

An article in Science introduces a new AI tool, developed by experts in the UK, that facilitates agreement - but does that mean it makes everyone better off?

The tool, called the “Habermas Machine,” is designed to mediate public deliberation and help diverse groups find common ground on complex social and political issues - something that would greatly benefit modern democracies, which appear increasingly polarised. Inspired by Jürgen Habermas’s theory of communicative action, the AI facilitates deliberation by generating group statements that reflect shared perspectives once each participant has contributed their own. However, scholars active in the fields of Participatory AI and AI for Justice, such as Sasha Costanza-Chock and Evgeny Morozov, have expressed skepticism about the Habermas Machine (see this X thread).

Research initiatives and technological developments of this kind are at the centre of our focus and research in KT4D, as evidenced by the various Digital Democracy Labs we have organised around Europe. The Digital Democracy Lab adopts a novel approach to platform democracy that differs from that of established players, who generally prioritise efficiency and leverage machine learning, and AI systems more generally, to lower - if not remove - the friction inherent in deliberative processes by managing differences of opinion and disagreement (as the Habermas Machine and similar tools do). The Digital Democracy Lab aims instead at retaining friction, which we understand to be essential not only to democratic processes but also to evaluating the socio-political impact of AI and the quality of any outputs it may produce.

The Habermas Machine was tested with five thousand UK participants in experiments and a virtual citizens’ assembly, where it demonstrated the ability to reduce polarisation, incorporate minority views, and promote mutual understanding in a scalable, efficient manner. When participants were asked to rate the summaries produced by the AI and those produced by humans, the AI-generated statements satisfied a larger share of the group.

This study suggests that AI could genuinely improve public deliberation on divisive issues. A tool like the Habermas Machine could improve policy making, especially when time is short and a decision must be taken quickly. It is difficult for the human mind to weigh many different options, particularly under pressure, and computers are well suited to help us reorder and condense a large number of variables and opinions, coming from a broad and varied population, into one simple solution.

However, AI tools, as useful as they can be for mediating deliberation, cannot provide shared intentionality grounded in emotional understanding. A truly comprehensive policy would be one that addresses this crucial dimension of human society, including its cultural diversity, a point we discussed in our latest policy brief, Culture's Role in Navigating Technological Change. So what do we need to keep in mind when dealing with AI tools involved in deliberation processes?

The importance of community building 

Community building is important to foster a democratic society where people actively engage in decision making. It is based on a set of shared representations that members of the community know to be commonly accepted.

While the Habermas Machine can help mediation by generating the statements that best reflect a group’s common ground, it cannot adopt someone else’s stance, which is a crucial component of deliberative processes. Individuals must understand not only the reasons behind the choices that affect their lives but also how these choices affect others. Being part of a society means listening to its other members and trying to build empathy and understanding towards the people around us.

The Digital Democracy Lab (which combines a technical demonstrator with comprehensive facilitation guidelines) takes a different approach to respond to this need. It integrates democratic participation into the evaluation of machine learning systems, enabling policymakers, citizens, and software developers to collaboratively assess machine learning systems and large language models in terms of their compatibility with democratic values. The Digital Democracy Lab thus prioritises stakeholder engagement and collective deliberation, going beyond more conventional methods that rely on efficiency metrics.

Rational does not mean Equitable

A very important aspect of policy making is bearing in mind the difference between equality and equity. Equality refers to providing the same resources to everyone, regardless of each individual's different needs. Equity, on the other hand, focuses on fairness by providing targeted support to help everyone reach a more equal outcome. To put it simply, if we focused only on equality we probably would not have scholarships, financial aid programmes, or assistance for people with special needs. Therefore, an outcome like the one produced by a rational computational machine - one that appeases the majority and lacks an understanding of underlying power imbalances - does not necessarily imply fairness.

As noted in the Science article, tools like the Habermas Machine are not intended to replace human policy-making or deliberation. Instead, they aim to simplify these processes for smoother outcomes. Human oversight is essential and must work alongside AI to ensure fair and unbiased choices. The KT4D project - which seeks to enhance AI tools in support of policy-making and deliberation by focusing on civic participation and cultural identity - aligns with the outcomes of this study and calls for a complementary approach that enhances AI tools without over-relying on AI-driven processes. KT4D’s Digital Democracy Lab exemplifies how technology can be designed to enhance, rather than erode, the social fabric, ensuring fair and equitable choices in line with democratic principles.

Learn more about KT4D’s work and how it fosters civic participation in democracy while capitalising on the benefits of knowledge technology.