KT4D Talks: Civic Participation meets AI - Episode 1: Rethinking Regulation and Responsibility: AI’s Impact on Civic Participation
9 January 2025
Welcome to the first article of the new KT4D series entitled ‘KT4D Talks: Civic Participation meets AI’! Join us in exploring the most pressing issues at the intersection of civic participation, democracy, and artificial intelligence.
In each episode we will select a different topic drawing on articles, interviews or research papers, and ask some of our experts to share their ideas and answer related questions. By tackling the most urgent and debated themes, we aim to promote dialogue and understanding, and to provide policy-related perspectives on how emerging technologies are reshaping democratic societies.
For our first episode we have chosen to focus on the dual role of AI in modern democracies, drawing on the Wired article ‘Algorithms Are Coming for Democracy—but It’s Not All Bad’ by Bruce Schneier and Nathan E. Sanders. The article explores both the opportunities and risks AI presents for democracy, from enhancing voter outreach and inclusivity to raising concerns about disinformation and the concentration of power among tech giants. On the one hand, the authors argue that AI tools can make elections more inclusive by providing real-time translation, aiding outreach to diverse constituencies, and improving voter engagement through more accessible communication strategies. On the other hand, AI can be weaponised to manipulate voters, spread disinformation, and centralise power in the hands of a few large tech corporations. As AI continues to permeate political spaces, the article underscores the importance of establishing ethical frameworks and regulatory measures to prevent these harms.
To gain a more comprehensive view of these critical issues, we asked Atte Ojanen, a KT4D researcher from Demos Helsinki, to respond to several key questions on this theme. The interview focused on whether current regulations are sufficient, who should be held accountable for AI-driven disinformation, and how AI innovations can support democratic participation while safeguarding against potential risks.
1. Do you think the EU’s Artificial Intelligence Act provides sufficient safeguards against the misuse of AI in political campaigns? What additional measures could be implemented?
It is important to note that the AI Act is a piece of product legislation under the New Legislative Framework that seeks to ensure the consumer safety of AI products placed on the EU market. As such, its risk-based approach is markedly different from the GDPR, which was first and foremost designed to safeguard individual rights, namely those related to data protection. From this perspective, there is certainly still a need for something like the AI Liability Directive to establish accountability for harms caused by AI systems and to provide access to effective remedies for those harmed.
While the AI Act classifies systems used to influence voting behaviour and elections as high-risk AI systems, the threats AI poses to democracy are not limited to misuse in elections or campaigns. Rather, there are multiple indirect, structural risks posed by the proliferation of AI systems across society. AI systems can culturally disrupt and shift power dynamics around the pillars of democracy, such as participation, equality and knowledge. Arguably most problematic are the infrastructural concentration of power in a few private companies and platforms across the AI stack, and the related epistemic harms.
The above issues suggest that regulation alone is not sufficient. Tackling the concentration of market power likely requires other policy instruments, such as investment in digital public infrastructure, so that the European AI stack is not at the whim of private US providers. A lot also comes down to how forcefully and resolutely the regulation is enforced, where the European AI Office plays a key role. More fundamentally, however, AI’s effects on democracy represent cultural and structural change, which means that legislation is at most only part of the solution. Hence, there is a need for wider, more bottom-up interventions with civil society.
2. In the context of AI-driven disinformation campaigns, who should be held accountable - the creators of AI tools, the users who misuse them, or the platforms where they proliferate?
I would argue all of them should be held accountable and liable, to different degrees. Certainly, AI developers and providers need to build sufficient safety safeguards into their systems through robust evaluation of their societal impact in real-world use cases. This is where I would place the emphasis, as development is the upstream root cause of harms and arguably the most effective intervention point in the AI lifecycle. Moreover, the largest AI developers nowadays also place their own models on the market through their own interfaces (e.g. ChatGPT for OpenAI’s models).
Platforms and deployers certainly also bear some accountability, but this is a fast-evolving space that is hard to assess exactly, and the differences between providers and deployers are not always clear-cut. Insofar as big tech corporations like Microsoft, Google, Amazon and Apple serve as the main platforms for using the models of leading developers, they certainly bear a lot of responsibility due to their sheer reach and size. Lastly, end-users who intentionally misuse AI systems for scams, theft and disinformation should of course be held accountable as well. It is not one or the other.
While intentional disinformation is a threat as open-source AI models become increasingly accessible, and might give rise to new liability questions, such worries are also potentially overstated. In fact, the discourse about the risks of AI-driven disinformation ahead of the 2024 election year appears overblown in retrospect. I am more concerned about a general decline in the civic capacities and epistemic agency of citizens as we become increasingly reliant on AI systems. The potential decay of truth, knowledge and the information environment through personalised feeds presents dangers for the functioning of the public sphere as a whole.
3. What role should AI play in shaping democratic processes, such as improving voter outreach or facilitating deliberative platforms, and how can these innovations coexist with safeguards against disinformation?
While my focus has been on the risks AI poses for democratic processes, it certainly promises benefits as well. AI tools could advance democracy by making complex policies more easily understandable to citizens, aiding the political expression of underrepresented groups, facilitating deliberation between people, and identifying consensus in decision-making contexts. For example, deliberative processes can be rendered much more efficient through live translation, fact-checking, documentation, and analysis of their results.
However, these positive use cases require concerted efforts, steering and investment by public bodies; it is unlikely that such solutions will be produced by private AI developers without the right incentives in place. There is certainly a need to strike a careful balance between values like open democratic expression and safety. It is not exactly clear how governments should weigh safety against openness in instances where safeguarding democracy from disinformation might require limiting access to some open-source AI models.
These issues require contextual consideration and greater citizen participation to reach legitimate decisions. From this perspective, the current focus on geopolitical competition and the national safety and security of AI systems is a worrisome development, as these domains often remain closed to democratic input.
In this first edition of ‘KT4D Talks: Civic Participation Meets AI’, we explored the dual role of AI in modern democracies. Atte Ojanen highlighted both the opportunities AI offers - such as enhancing voter outreach and facilitating deliberation - and the risks it poses, including disinformation and power concentration.
While the EU’s AI Act is a key step toward regulation, Ojanen stressed that broader measures, such as public investment in digital infrastructure and stronger civil society involvement, are essential. Accountability must be shared across developers, platforms, and users, ensuring that AI-driven innovations benefit democracy without compromising safety.
As AI continues to reshape the political landscape, these discussions are more relevant than ever. Ensuring that AI supports democratic values requires not only technical solutions but also cultural, structural, and policy-driven approaches. This series aims to foster continued dialogue and provide valuable insights for policymakers, practitioners, and citizens alike.
Stay tuned for the next episode of KT4D Talks, where we will explore how AI tools can help voters make informed decisions!