Methodology
3.3.1 Original contribution
As has emerged from the literature review, many scholars from different fields have discussed the benefit of providing a historical contextualisation to the analysis of the impact of AI and big data, as well as the need to recognise the central role of culture in the development and adoption of new technologies. While Module C draws on and builds upon this vast and relevant scholarship, it will also provide an original contribution to this debate, thanks to two distinctive approaches that set the KT4D project apart from similar existing academic endeavours:
- First, the project’s focus on knowledge technologies and the importance, discussed in the previous sections, of this open and novel definition that identifies a category distinct from the more commonly used ones of media and information technologies. This is because the project recognises the process of knowledge creation and sharing as central to democratic participation.
- Second, this Module adopts a systematic approach to the comparative analysis of past and present knowledge technologies from a cultural studies perspective. As described in the previous sections, a growing number of scholars are historically contextualising AI tools and systems and drawing parallels with past technologies. However, their focus is usually limited to one issue or technological application at a time, as in the already mentioned comparison between deepfakes and the photographic manipulation of the 1920s. What Module C aims to accomplish, instead, is a comprehensive map of past and present knowledge technologies, so as to identify general trends, divergences, and similarities. The overarching theme threading these case studies together is the definition of knowledge technologies to which they all refer.
3.3.2 Establishing precedents: not a list of technologies, but a list of issues
Adopting a definition of technologies as systems (see section 1.2.3.1), and of knowledge technologies as systems specifically involved in the process of sense-making, means that past and present examples of KTs can only be understood historically and contextually. For this reason, the analysis developed in Module C will not take a list of specific examples of past KTs (e.g. the printing press, television, Web 1.0) and then compare them one by one to advanced KTs (AI and big data). An approach of this sort would assume that the technological element is preponderant compared to the human one (something that is closer to the definition of information rather than knowledge technologies, see section 1.2.1.2), a hypothesis that many scholars have refuted, as shown in the literature review, and whose refutation Module C endorses.
Furthermore, this approach would erroneously treat KTs as tools rather than systems and would impose an abstract and essentialist idea of what each technology is, something that Gitelman, among others, criticises and warns against:
So it is as much of a mistake to write broadly of ‘the telephone’, ‘the camera’, or ‘the computer’ as it is of ‘the media’ – and now, somehow, ‘the Internet’ and ‘the Web’ – naturalizing or essentializing technologies as if they were unchanging, ‘immutable objects with given, self-defining properties’ around which changes swirl, and to or from which history proceeds. Instead, it is better to specify telephones in 1890 in the rural United States, broadcast telephones in Budapest in the 1920s […]. Specificity is key. Rather than static, blunt, and unchanging technologies, every medium involves a ‘sequence of displacements and obsolescences, part of the delirious operations of modernization’, as Jonathan Crary puts it. (8)
Following Gitelman’s recommendation, Module C will develop its comparative analysis of past and present KTs by assuming that something such as ‘television’ or ‘chatbots’ does not exist; there are only historically and culturally realised interactions between people and versions of these technologies. Accordingly, the study will focus on similar issues across time and across technologies that are relevant to contextualising AI and big data (see Section 1.4.1). Indeed, to identify relevant and reusable patterns from past interactions with KTs (either as cautionary tales or as virtuous examples), Module C will first identify people’s needs, fears, hopes, and problems around AI and big data, drawing from the insights offered in Modules A and B, and then look for similar entanglements in past interactions with KTs, without falling into misleading generalisations such as ‘AI is the new printing press’.
The initial framework adopted in Module C to map suitable case studies considers three main categories: agency, creativity, and identity, which subsume the two aspects of trust and free will discussed in the analyses of Modules A and B:
- Agency, pertaining to the process of knowledge access and sharing. This includes issues concerning consciousness, intentionality, free will, and autonomy. This category addresses how KTs have always been used to manipulate people through propaganda and social control but, at the same time, have also been used to democratise access to information and to support liberation movements;
- Creativity, pertaining to the process of knowledge creation. This deals with two opposing views of KTs: as capable of threatening people’s capacity for creativity, understood as a quintessentially human trait, and thus limiting their freedom of expression; or, on the contrary, as tools relieving people of boring menial tasks, or even offering opportunities to further enhance their creativity;
- Identity, pertaining to the process of knowledge acquisition. This category focuses on the link between KTs’ epistemology and people’s understanding of their role and place within their community. While KTs can lead to a more truthful, effective expression of one’s identity and thoughts, those same technologies may pose a threat, as they can reinforce stereotypes and identity-based discrimination, or simply disrupt social hierarchies and cultural norms essential to people’s identity building.
As emerges from this framework, the analysis developed in Module C will not only consider the threats posed by AI and big data, but will also examine the opportunities that these technologies present, and will do so by identifying historical precedents of how people have leveraged the power of KTs for good. In doing so, it is worth considering Mike Ananny’s interpretation of the double role and function of algorithms used in machine learning and AI systems. Ananny states that “Algorithms are both ‘traps’ that sequester people in particular cultural worldviews, and ‘societies’ that transform how ‘people interact, associate, and think.’ They simultaneously give people options for what to do, and signal what people are expected to do and what most people do” (6). Understanding not only algorithms but KTs in general as both ‘traps’ and ‘societies’ helps us recognise how these tools and systems have the power to hamper democratic participation and personal realisation, but can also be enablers of positive change and serve the needs of society as a whole. Indeed, what Ananny writes about algorithms “creating descriptions of the world that people use to reflect upon their identities, communicate with others, and create public life” (6) also suits our definition of KTs as cultural systems for shared meaning-making.
We recognise that the threats and the opportunities that AI and big data – and any knowledge technology more generally – pose to democratic participation arise from an everlasting negotiation in which established cultural values and norms neither are passively shaped by technological progress nor actively determine its course. Human culture is not an endangered territory, nor a post hoc cure for unethical applications of AI, but one among the active forces implicated in the process, and it needs to be recognised and studied as such.
3.3.3 Research questions
Once we identify our case studies, each section will address the following questions:
- Is the difference between past KTs and AI and big data a matter of substance or just scale?
- How did past examples of KTs shape and enhance democratic participation and human agency?
- What can be learned from these precedents? Are they still applicable after considering changes in our personal and societal values? To what extent?
- Did past examples of KTs lead to oppressive and antidemocratic systems and reduced human agency?
- What can be learned from these precedents? How did people respond and with what results?
- Historically, which groups of people (politicians, activists, artists, citizen associations, etc.) or institutions petitioned for a democratic use of KTs? Which groups were historically excluded from this progress and its benefits?
3.3.4 Risk identification and management
There are a number of risks that need to be considered for Module C:
1. the model inferred from past knowledge technologies might not be applicable to the present, because of major technological and societal transformations that have occurred since;
2. past knowledge technologies might not provide suitable models, because the system of values in place at the time of their diffusion is now outdated and their biases and shortcomings are inherent to their specific historical context (both in terms of societal values and of technological applications);
3. the project might determine that the role of culture is more elusive than expected, or that cultural norms and processes are too contingent on their historical, geographical, and social context to be subsumed under a general analysis.
If risks 1 and 2 materialise, we will refocus the investigation so as to understand which fundamental aspects have changed over time (societal, political, cultural, technological), why these changes happened and, finally, whether the change constitutes progress or rather a tendency worth opposing. This will shed light on the trajectory of the relationship between knowledge technologies and people’s sense of trust and free will, as well as on democratic participation more generally. Were risk 3 to materialise, it would nonetheless be a useful – if disappointing – conclusion. It would clarify the relationship between cultural norms and processes and technological development, potentially supporting the view that, after all, cultural and technological forces operate on different levels – the first on a local one, the second on a global scale – so that aiming to capture this entanglement in its totality amounts to erasing the very cultural specificities that one aimed to represent.
3.3.5 Chapters outline and changes to the initial plan
This interim report explores how Module C has evolved since its initial literature review (M12), presenting a series of examples that illuminate the complex interplay between knowledge technologies and democratic participation. Throughout the report, we juxtapose current technological developments with historical examples, providing a nuanced understanding of how societies have grappled with similar challenges in the past. This comparative approach allows us to contextualise contemporary concerns and identify potential pathways for addressing them. The analysis is structured around the same four main themes addressed in Module A and B:
- Free will and autonomy
- Attention
- Trust
- Creativity
This structure departs from the original plan of organising the analysis into nine sections (KTs and 1. power structures; 2. access to information; 3. political participation; 4. labour; 5. human autonomy; 6. human identity; 7. human creativity and expression; 8. community building; 9. the work of imagination). These initial sections have now been subsumed under the four more general ones listed above.
This holds for eight of the nine initial sections; the exception is the one initially dedicated to “KTs and the work of imagination”. Discussions of fictional narratives about past and present KTs are now woven into the rest of the analysis. This is because we believe that, as much as historical accounts, these cultural products provide invaluable insights into people’s use and understanding of KTs and should therefore be considered alongside other sources – although it remains important to recognise their different nature and impact.
The content of Module C was reorganised to ensure that a more coherent analysis is developed across Modules A, B, and C. Indeed, these three Modules consider the same four themes of autonomy, attention, trust, and creativity from the point of view of psychology. The collaboration between the team working on Modules A and B (STRANE) and the team working on Module C (TCD) is essential to ensure the collaborative and interdisciplinary approach that distinguishes the KT4D project.
The insights gathered here will inform the creation of the Digital Democracy Lab Demonstrators platform and guide the validation process across four European cities.
In the sections that follow, we will delve deeper into each of the four main themes, presenting a carefully curated selection of examples from both the present and the past. These examples will serve to illustrate the complex dynamics at play in the relationship between knowledge technologies and democracy, and will provide a foundation for the project's future research and development efforts.