Effect of Technology:

There is a common confusion in civic tech and AI-for-democracy spaces: mixing up efficiency with efficacy. This piece argues that making a process faster isn't the same as making it more democratic, and introduces the concept of democratic efficacy as a more meaningful goal.

The central question for democratic practitioners isn't whether AI can make a process more efficient; it generally can. The significant question is whether that efficiency serves or undermines democratic effectiveness. This distinction matters because it pushes us to think beyond technological capability toward democratic purpose.

Efficiency concerns operational smoothness: faster processing, reduced costs, streamlined workflows, and the elimination of friction and delays. These metrics make sense in commercial contexts, where the goal is satisfied customers and profitable operations.

Efficacy concerns purposeful effectiveness: whether processes achieve their intended democratic outcomes, strengthen civic capacity, build trust, enable meaningful participation, and create conditions for legitimate collective decision-making. Democratic efficacy often requires apparent inefficiencies: time for reflection, space for disagreement, opportunities for changing minds.

The efficiency imperative assumes these two concepts align, and that making processes more efficient automatically makes them more effective. However, democratic processes often work precisely because they resist optimization. The "inefficiencies" of extended deliberation, redundant oversight, and built-in friction serve democratic purposes that seamless systems cannot provide.

A Framework for Democratic Assessment

Our research and experimentation with the Digital Democracy Demonstrator and Digital Democracy Lab drew on Graham Smith's framework for evaluating democratic innovations through both democratic goods and institutional goods. This framework provides a structured way to assess whether AI integration serves broader democratic purposes or merely institutional convenience.

Democratic Goods include:

  • Inclusiveness: This category includes both presence (diverse participants from different backgrounds) and voice (meaningful opportunities for all participants to speak, be heard, and influence outcomes). AI can theoretically enhance inclusiveness through translation tools or accessibility features, but it can also systematize exclusion through biased training data or design assumptions that favor particular communication styles and disempower others.
  • Popular Control: This can be defined as genuine citizen influence over agenda-setting and decision-making processes. This is where many AI systems prove most problematic. By virtue of their very design, they tend to shift decision-making power away from participants toward algorithmic processes and parameters that citizens cannot meaningfully influence or understand.
  • Considered Judgment: This comprises thoughtful, informed, reflective deliberation, where decisions emerge from reasoned exchange and mutual learning rather than raw preferences or aggregated opinions. AI can potentially support this by providing access to information, but it can also undermine it by making complex issues appear simpler than they are, or by providing authoritative-sounding answers that discourage further questioning.
  • Transparency: Clarity is needed about how decisions are made and communicated, both for participants (who must understand the purpose, structure, and consequences of their involvement) and the wider public (who need accessible information about the process). AI systems, particularly large language models, pose fundamental challenges to transparency through their black-box nature and proprietary constraints.

Institutional Goods include:

  • Efficiency: This good reduces the demands a process places on institutions and people, and it is where AI most obviously provides benefits: it can cut resource requirements, speed up processing, and handle larger volumes of input. The question is whether these efficiency gains come at the expense of democratic goods.
  • Transferability: How easily an approach can adapt across different political, cultural, or institutional contexts. AI systems often appear highly transferable because they're based on general-purpose technologies, but this apparent universality may mask cultural biases and context-specific design assumptions. Tools are inseparable from the contexts in which they are used and experienced; effectiveness in one context is no guarantee of effectiveness in another.

The crucial insight from applying this framework is that AI integration typically excels at institutional goods while posing risks to democratic goods when it is adopted uncritically or without substantial (re)design.


Efficiency is seductive — but it's not the same as doing democracy well. This is what we need to unpack and understand well when we want to use ML tech in democratic processes.

Elizabeth Calderón Lüning

What This Means for Democratic Practitioners

This analysis offers democratic practitioners an opportunity to sharpen their critical judgment and to make democratically constructive choices in decision-making, design, and facilitation. Several key insights are worth bearing in mind:

Maintain Healthy Skepticism: The burden of proof should be on AI proponents to demonstrate not just that AI tools can make processes more efficient, but that they genuinely enhance democratic goods without unacceptable trade-offs. Most current proposals fail this test.

Critically Consider Innovation Pressure: The assumption that failure to adopt new technologies indicates stagnation or irrelevance is often wrong in democratic contexts. Democratic institutions may be most innovative when they resist or adapt rather than embrace technological optimization.

Focus on Democratic Rather Than Technological Sophistication: The goal isn't to become expert in AI capabilities but to become expert in recognizing how different tools and approaches serve democratic purposes. Technical sophistication is less important than democratic clarity.

Preserve Agency Over Technological Integration: You have both the right and the responsibility to say no to AI proposals that don't clearly serve democratic purposes, regardless of their technical impressiveness, investment to date, or institutional pressure.

The path forward isn't about rejecting AI entirely or embracing it uncritically. It's about developing the capacity to distinguish between technological solutions that genuinely serve democratic purposes and those that simply apply efficiency logic to democratic contexts. This distinction is crucial because it determines whether AI becomes a tool for democratic enhancement or democratic displacement—and the difference may be far more subtle than current discourse suggests.

This text is derived from the Digital Democracy Lab Handbook — a practical resource for democratic practitioners exploring how AI can be thoughtfully integrated into participatory and deliberative processes without compromising democratic values. For more insights on this topic, download the Digital Democracy Handbook.

A longer journal article on this subject was published in the journal Politics and Governance: "From Efficiency to Deliberation: Rethinking AI's Role in Institutionalizing Democratic Innovations," https://doi.org/10.17645/pag.10632