Can Artificial Intelligence Truly Inform Voters Without Distorting Democracy?
11 February 2025

The integration of Artificial Intelligence (AI) into the electoral process raises questions about how reliably it can inform voters and shape democratic decision-making. AI has the potential to increase access to political information; however, its effectiveness clearly depends on the quality of its inputs and, above all, on its ability to process complex political issues. Is AI truly equipped to navigate the complexity of human-generated opinions and discern them truthfully? And even if it were capable, should we entrust it with such a critical role?
Limitations of AI in political communication
To explore these challenges, we posed the question “Do you see any specific risks in informing voters about candidates and issues through AI?” to Jennifer Edmond, Project Coordinator of the KT4D project (Knowledge Technologies for Democracy). Her response highlighted concerns about the limitations of AI in political communication, the risks of oversimplification, and the potential distortion of democratic engagement.
Edmond highlighted the disconnect between AI’s potential and the quality of the political information it is trained on. By comparing AI’s role in informing voters to cooking for someone who is hungry, she illustrated a fundamental misconception about its effectiveness.
“Asking whether AI can be useful in helping voters to make more informed choices is like asking whether cooking can help someone who is hungry. Cooking is a very common method of preparing food, for sure, but bears no real relation to the ingredients available (which could be sheep’s wool and bleach) and none to the actual nutrition needs of the hungry person, who could be a vegan, allergic to eggs or only three months old. AI is a way of accessing and compiling information: it’s cooking, but what are the ingredients?”
The risk of a shift in agency
One of the most pressing concerns Edmond raised is that political communication is often crafted to be intentionally vague, making it difficult for AI to extract clear positions. This shift in agency - from voters actively analysing political speech to trusting an AI’s interpretation of it - represents a real risk for democratic participation.
“Anyone who has ever listened to a politician being interviewed will realise that modern media speak is often crafted to obfuscate positions and generate wide appeal across a range of issues without committing to real substance.”
If AI is trained on such ambiguous language, it may struggle to generate outputs that accurately reflect political positions. Edmond further pointed out that if AI is made responsible for extracting and distilling candidate positions,
“then the user will be forced to trust the computational understanding of what has been stated, rather than using their own critical faculties to try to tease out fact from spin, or cultural context from surface utterance.”
Differentiating between political structures and voter priorities
Another challenge lies in the contextualization of AI-generated voter information. Edmond emphasizes that political systems are not uniform, and voter concerns vary widely:
“How does the political system in a given country work? Does change come from coalitions of parties working together, or individual elected officials pushing through a platform to meet their constituents’ needs? Is our imagined voter more concerned with specific local issues, such as potholes in the roads or overcrowding in a regional hospital, or things that can only be dealt with at a national level, like tax rates or social housing availability?”
AI must be able to differentiate between political structures and voter priorities; otherwise, it risks presenting irrelevant or misleading information that does not align with a given electoral system.
Edmond also warned of AI’s potential to introduce hallucinations - fabricating information that appears authoritative but is entirely false. Even when not hallucinating, however, AI could amplify biases, placing “undue emphasis on certain pieces of information” while marginalizing others. She also posed a concerning question: “Might the perception of an intelligent conversation with a confident interlocutor reduce the sense of agency and responsibility felt by a voter?” In other words, does AI have the potential to erode voter agency, reducing a voter’s sense of responsibility in critically engaging with political information?
KT4D’s narrative-based simulation game and interactive explainers
Ultimately, Edmond argues that AI’s role in informing voters cannot be considered separately from the broader social and psychological dimensions of democratic participation.
“In this delicate time for democracy, it is of critical importance that we continue to empower voters to express their civic participation through considered and active expression of their voice as citizens. If an AI system can be designed to do that, this is great. But that system will need to be conceived of from a perspective that recognises the subtle interplay between social, cultural, and psychological drivers, even before the question of what information is shared and how can be asked.”
The initial question - whether AI can be of use in this context - may itself rest on the assumption that data processing should take priority over democratic integrity. Edmond concludes that “the question of whether AI can be of use in this context answers itself in the very way it puts a way of preparing data ahead of the consideration of the end it might serve and the risks it may run in doing so.”
Jennifer Edmond’s responses serve as a reminder that while AI can process and present information, it cannot replace the fundamental human faculties of interpretation, debate, and critical thinking that democracy requires. Civic participation demands critical digital literacy, active participation, and informed agency. There is a growing need for educational initiatives that empower citizens to navigate political discourse with discernment and autonomy.
This is precisely where KT4D’s narrative-based simulation game and interactive explainers come into play, fostering engaging learning experiences. These tools help citizens develop the critical thinking skills necessary to assess political narratives and AI-generated content. As AI continues to shape the public sphere, strengthening digital literacy remains one of the most powerful safeguards of democratic integrity.