
3. Module C: Historical perspective

3.1 Introduction

3.1.1 Setting the stage

The transformative impact of technological progress on society has always generated both excitement and concern about the effects of introducing artificial agents on human agency and flourishing. Since the First Industrial Revolution, enthusiasm for the opportunities offered by automation has been matched by the fear of a dystopian technocratic society (Postman). In this scenario, the relationship between human users and their technological tools would be reversed if, following the economic and political interests of the few in power, the attainment of agency by machines were to translate into dominance over humans. This concern was captured early on by Isaac Asimov in his famous Three Laws of Robotics, which aimed to provide an ethical compass that is still relevant and widely quoted.

The premises of this debate have fundamentally changed since the advent of the so-called Fourth Industrial Revolution (Schwab), brought about by the exceptional progress in the fields of AI and robotics since the 2010s. What distinguishes this new course from the previous Digital Age is the unprecedented level of pervasiveness and autonomy of intelligent artificial agents and systems, as well as the speed at which technological changes in the field of AI are occurring. From recommendation engines suggesting which song to listen to or which charitable cause to support, to industrial and military applications of big data, AI is impacting every aspect of life and reshaping the way people earn their living (e.g. the gig economy, see Wood et al.), interact with each other (e.g. virtual reality platforms like the Metaverse, see Boellstorff), consume (e.g. the AI used by Amazon, see West), and construct and perceive their own identity (Kosinky). The long-feared prospect of a generation of intelligent machines ruling over humans is perhaps less spectacular than what we learnt to expect from science fiction, but no less likely: when an AI-operated system can rule in a court of law (Alarie et al.), deny access to a mortgage (Anderson et al.), or decide which military target to hit (de Swarte et al.), there is indeed ground for concern.

Yet there is a risk in interpreting the changes to society and democratic participation brought by the latest developments in AI and big data as something unparalleled. While some aspects are indeed peculiar to the specific features and affordances of these technologies, today’s core preoccupations around trust and freedom are the same ones posed by technological agents throughout history. For instance, the then disruptive new technologies of reading and writing were famously decried by Socrates in the Platonic dialogue Phaedrus as something that would severely impair people’s ability to memorise and retain information. Similarly, the Swiss scientist Conrad Gessner, in his book Bibliotheca Universalis, published in Zurich in 1545, expressed great concern over the information overload caused by the advent of the printing press, which massively increased the number of published books. The societal and political repercussions of this did not escape Gessner, who called upon kings and queens to solve the situation (Blair 2003: 11). Likewise, concerns similar to those raised today about the safety of young girls and their psychological manipulation by predatory men in connection with social media platforms and deepfakes (Laffier and Rehman) were debated in the late 19th and early 20th century, when the telephone first entered many households and opened a channel of – often unwanted and uncontrolled – communication for young women (Marvin: 22-39). And preoccupations similar to those around election manipulation discussed in connection with AI and big data are to be found in several analyses of the influencing power of television (Cavgias et al.; Ragnedda and Glenn).

However, this is not to claim that AI and big data do not pose any unexpected and peculiar challenges, nor that what society faces today is simply a set of old problems under slightly different circumstances. Instead, what Module C posits is that it is essential to explore the ever-present entanglement between technological affordances and cultural norms and values, and to identify both its peculiar manifestations – historically, geographically, and technologically connoted – and its constant traits. It is thus within this framework, which recognises the mutual shaping of culture and past and present ‘knowledge technologies’ – a definition discussed in section 3.2.1 –, that Module C analyses AI and big data as a specific and distinct instance of a centuries-long interaction. Tracing this history is not to downplay AI’s peculiarities, but to contextualise them so as to fully understand them.

3.1.2 Goals and objectives

The historical contextualisation and the centrality of culture are the two crucial lenses through which Module C looks at the threats and opportunities posed by AI and big data to democratic and civic participation. Accordingly, Module C has two main goals. 

The first is to identify historical precedents in the way knowledge technologies have shaped the social fabric, so as to understand AI and big data as part of the long history of interactions between technological affordances and cultural norms, values, and practices. In other words, Module C sets out to investigate how culture has adapted to the advent and evolution of knowledge technologies – such as written language, the printing press, television, radio, etc. – but also how such technologies have been developed in response to cultural norms and changes. Module C recognises this mutual relationship as central to understanding the link between culture, technologies, and democracy. In this context, culture is intended as a complex system of practices, knowledge, and norms that every person possesses and that is indispensable in the negotiation between individuals and their society. Knowledge technologies, being an expression of culture as well as a medium for it – and a far-from-neutral one – are essential to this negotiation, which is ultimately what civic and democratic participation depends on.

The Module’s second goal, strictly connected to the historical contextualization, is to offer a definition of AI and big data as advanced knowledge technologies (AKTs), one that considers the long history of the complex entanglement between culture, technology, and democracy mentioned above. Proposing a novel definition might seem to add confusion to a matter such as AI and big data that, while currently benefitting from a highly multidisciplinary discussion, is also rendered less intelligible by individual disciplines’ jargon and categorizations. However, we believe that adopting the open definition of ‘knowledge technologies’ and applying it to AI and big data can lead to a soberer assessment of their uniqueness, thanks to the historical contextualization proposed in Module C. The hope is that, by focussing on constant traits and similarities across time, our analysis will stay current beyond the present moment, since the pace and trajectory of AI development are extremely fast and unpredictable, which makes any assessment quite volatile.

Before embarking upon the comparative analysis of AI and big data alongside historical knowledge technologies, which will be Module C’s next step, it is now essential to clarify three foundational considerations.

First, we need to explore the distinction between knowledge technologies and other definitions, such as information technologies, and understand why our focus is on the former. This exploration involves delving into the nuanced differences between knowledge and information, justifying the prioritization of the former.

Second, the importance of adopting a historical perspective needs examination, revealing the motivations behind situating AI and big data within the extensive history of knowledge technologies. This exploration of historical context acts as a lens to uncover the evolution, trends, and paradigm shifts in knowledge technologies, enhancing our understanding of the contemporary landscape. 

Lastly, the centrality of cultural processes in investigating AI and big data becomes a crucial theme, leading to an exploration of the reasons for emphasizing cultural dimensions in this inquiry. This focus on cultural processes highlights the socio-cultural influences that shape the development, implementation, and impact of AI and big data. 

These three crucial aspects are discussed in the following literature review, providing an overview of relevant scholarly discourse, and offering insights into the complexities of each dimension. This analysis sets the stage for a comprehensive understanding of the interconnected realms of AI, big data, and historical knowledge technologies, which is the final goal of Module C.


We don’t deny that AI and big data pose unexpected and peculiar challenges, nor do we claim that what society faces today is simply a set of old problems under slightly different circumstances. Instead, we posit that to understand the risks and benefits of AI and big data, we need to understand them as the latest step in the long history of interactions between technological affordances and cultural norms, values, and practices.

Eleonora Lima

3.2 Literature Review and Rationale

3.2.1 Defining ‘knowledge technologies’

3.2.1.1 Limitations of the existing definitions of ‘knowledge technologies’

The term ‘knowledge technologies’ has not been used extensively (24,800 results on Google Scholar compared to 1,670,000 for ‘information technologies’) and certainly not in a critical way, but rather as an operational definition. The term was more popular between the 2000s and the early 2010s, when it was often used to indicate practical tools (often software) for knowledge management (Garavelli et al.) or to talk about the Semantic Web (Rigau et al.). This means that knowledge technologies are usually understood solely as digital and computer technology (Milton: 13).

The definition is also found in publications and projects by scholars who are not native speakers of English (many in the Balkans, Eastern European countries, and Italy) and whose main audiences are not Anglophone academics, probably because it provides a more literal and thus more accurate translation. More recently, the label ‘knowledge technologies’ has been used to designate educational tools – almost exclusively digital ones – that made remote learning possible during the COVID-19 pandemic. In these cases (Stewart and Khan; Dionisio-Flores et al.), the word ‘knowledge’ stands for ‘knowledge acquisition’ and has a specific pedagogic connotation.

This literature often considers how knowledge – understood as content – can be successfully transferred and managed by means of dedicated ‘knowledge technologies.’ More importantly, many of these analyses, especially those developed within the field of knowledge management, tend to treat the definition as self-explanatory. This is because they mostly focus on establishing what ‘knowledge’ is and, once satisfied with a definition, assume that every tool used to share and manage it is by necessity a ‘knowledge technology’.

3.2.1.2 Difference between information and knowledge

In the pursuit of delineating the essence of knowledge technologies within the Knowledge Technologies for Democracy (KT4D) framework, a critical point of consideration is the distinction between knowledge technologies and the more prevalent realm of information technologies. A noteworthy perspective emanates from the edited volume Information Technology for Knowledge Management (Borghoff and Pareschi). While belonging to the scholarship on information management discussed in the previous section, it nonetheless provides an interesting distinction between information and knowledge which aligns with our project's objectives and is worth quoting in its entirety:

Knowledge is quite different from information, and managing knowledge is therefore decisively and qualitatively different from managing information. Information is converted into knowledge through a social, human process of shared understanding and sense-making at both personal level and organizational level. Managing knowledge starts with stressing the importance of people, their work practices, and their work culture, before deciding whether or how technology should be brought into the picture. Information management, on the other hand, often starts with a technological solution first – with consideration of people’s work practices and work culture usually a distant second (Holtshouse: V).

When extrapolated beyond the analysis of knowledge technologies for work practices and environments, these considerations encapsulate the intricate interplay between society, culture, and technology that underpins the analysis developed in Module C. The deliberate shift from a focus on information (the substance of knowledge) to knowledge itself (the process of sense-making) facilitates an engagement with a more culturally and socially intricate conceptualization of technologies. This conceptualization aligns harmoniously with the broader discourse in Media Studies (see section 3.2.3.1), underscoring the significance of a cultural and historical lens in comprehending the multifaceted dynamics of knowledge technologies.

Thus, when using the definition ‘knowledge technologies’, we aim to address the link between:

what we know (KTs as content display) and how we know it (KTs as content moderator)

and:

1) our sense of self
2) our place in our community and in society
3) our agency

Only when these technologies are used to gate the information individuals have access to, to restrict the messages shaping their perception of public opinion, and to control their choices and interactions do we start to see the space of both opportunity and risk opening for these technologies to enhance or harm democracy; this is the space of knowledge technologies.

3.2.1.3 Defining knowledge in the age of self-learning AI

The tendency to focus on content rather than on processes, encountered in the existing literature adopting the definition of KT discussed above, risks imposing a simplistic definition of what knowledge technologies are. This is because it assumes that the mediation operated by the technology is a neutral one, as if the tool were simply a carrier and an organizer of content that does not affect its nature. This is even more problematic when we consider the paradigmatic change that self-learning AI has brought about in recent years. In the case of self-learning agents, knowledge is not simply content that needs to be managed and organised: these systems do not merely support humans’ attainment of knowledge, but indeed replace them in the process of acquisition.

While this change might be upsetting, as knowledge creation and acquisition have traditionally been considered human prerogatives, it might also offer an opportunity to explore with renewed critical awareness the until now under-researched link between ‘knowledge’ and ‘technology’. This is because artificial neural network technologies require that we operationalise a complex concept like ‘knowledge’ in order to teach an artificial agent to reproduce, in an assisted yet autonomous way, what we humans have historically accomplished. The dominant approaches in this field look at AI knowledge from the perspective of logic, computational knowledge, linguistics, and semantics (Aamodt and Nygard; Guarino; Levesque; Zhuang et al.). This approach risks missing the opportunity to recognise AI as a sandbox for exploring the role of culture in shaping the link between knowledge and technology. Many ever-present cultural patterns – of dominance, discrimination, and manipulation, but also of inclusivity and reparation – are automatised and played out in front of us when it comes to knowledge produced and managed by AI agents. What used to be tacit is brought to light, and this can lead to positive action and change.

Moreover, current definitions of knowledge used in describing self-learning agents (Koggalahewa et al.; Stein et al.), developed within the fields of logic in computer science and cognitive science, risk adopting a universalistic approach that erases the cultural peculiarities which are instead central to the way people understand and interact with both knowledge and AI technologies. Alan F. Blackwell, Addisu Damena, and Tesfa Tegegne, in a recent article dedicated to the distinctive approach to AI research in Ethiopia, challenge the claim that current developments in AI technologies are “concerned with understanding of humans” (370), as if ‘being human’ were a universally shared condition that applies to all people in the same way. Consequently, they reject the idea that “the fundamental understanding of humans [is] necessarily universal” (370) and assert that the skills and behaviours considered to be ‘human-like’ in AI – knowledge acquisition included – are instead Western-centric interpretations of otherwise multifaceted and culturally defined concepts. They thus provocatively ask:

Will such understanding be the same wherever it is investigated, regardless of who the humans are, or of what culture they have inherited, or what their economic and political circumstances might be? Such attempted universalism seems extremely unwise, despite the AI reliance on supposedly universal principles of cognitive science (critiqued rather comprehensively by Geoffrey Lloyd in his book Cognitive Variations (Lloyd 2007)). (Blackwell, Damena, and Tegegne: 370-371)

It is evident that, while developments in AI technologies offer the chance for new critical approaches to the understanding of what knowledge technologies are, there is also a concrete risk that the old approach dominating the field of knowledge management in the early 2000s and 2010s will now be replaced with another one that, while recognising the difference between ‘information’ and ‘knowledge’, still ignores the pivotal role of culture.

In adopting the definition of ‘knowledge technologies’ in an expansive way, the goal of this Module within the KT4D project is to highlight how these cultural patterns are essential to understanding the link between ‘knowledge’ and ‘technologies’, and how they can only be understood by adopting a historical approach. The rich and complex past analyses of KTs – albeit under different names – can contribute to the discussion by adding culture (intended as situated practices and knowledge across space, time, and technologies) to a discourse that is often solely preoccupied with technical aspects and with the present time.

3.2.2 Why the historical perspective? 

3.2.2.1 Challenging the concept of newness

The rationale for adopting a historical perspective in the analysis of AI and big data derives from one of the central hypotheses underpinning the KT4D project: that in order to understand the social and cultural impact of these advanced knowledge technologies and their effect on civic and democratic participation, we need to contextualise them within the long history of interactions between technological affordances (writing, printing, television, etc.) and cultural norms, values, and practices. By doing so, Module C aims to provide a novel historical perspective allowing for a more sober and critical engagement with AI technologies, whose novelty and impact are often overhyped and thus misunderstood. It is only through a historical examination that significant precedents and paradigms can be fruitfully examined and tested against modern challenges.

The conviction that a full understanding and critique of the current impact of AI and big data on democratic participation cannot do without historical contextualisation stems from the decades-long debate in the field of Media Studies on the concept of ‘newness’ applied to media and information-communication technologies. The need to critically unpack this concept came with the mass diffusion of the label ‘new media’ in the mid-1990s, when the definition was applied to digital media and web-related communication technologies. Influential media scholars, first Friedrich Kittler (1997) and later Lev Manovich (2002 and 2003), interpreted the advent of modern computer technologies as a moment of rupture from the past and thus adopted the definition of ‘new media’ to mark the beginning of a new era in the way people create and share knowledge and information.

In response to these analyses and to the general enthusiasm towards the Web and digital media, other scholars in those same years started challenging the very concept of ‘newness’ as the result of a calculated hype serving the interests of tech companies or, as in the case of many Media Studies scholars, of an excessive focus on technical aspects to the detriment of the cultural and social dimension of media technologies.

One of the first and still authoritative sources is Carolyn Marvin’s book When Old Technologies Were New (1990), in which she analysed two ‘new media’ of the 19th century: electric light – intended as a medium in the sense indicated by Marshall McLuhan (1964: 8-9 and 52) – and the telephone. In focusing on these old technologies, Marvin did not simply aim to demonstrate how every invention was once new, but instead drew attention to how the very concept of novelty is culturally and socially determined and, in turn, how any new medium imposes and shapes social norms and hierarchies. In the introduction to her book, she immediately made clear that “the early history of electric media is less the evolution of technical efficiencies in communication than a series of arenas for negotiating issues crucial to the conduct of social life; among them, who is inside and outside, who may speak, who may not, and who has authority and may be believed” (4).

What matters, therefore, is not the technical aspects or the nature of the new media – which is what instead interested scholars like Kittler and Manovich – but the social and cultural substratum that receives and makes sense of the new technologies. Evidently, changes to such a substratum do not happen abruptly and are not determined solely by technological progress. To prove this point, Marvin further remarked that the focus of her research, and consequently of her book, was

shifted from the instrument to the drama in which existing groups perpetually negotiate power, authority, representation, and knowledge with whatever resources are available. New media intrude on these negotiations by providing new platforms on which old groups confront one another. Old habits of transacting between groups are projected onto new technologies that alter, or seem to alter, critical social distances. […] Old practices are then painfully revised, and group habits are reformed. New practices do not so much flow directly from technologies that inspire them as they are improvised out of old practices that no longer work in new settings (5).

Marvin’s emphasis on social and cultural practices and on their resistance to technological change is an essential aspect of her understanding of new media. This approach is in line with what this Module aims to demonstrate, namely that AI and big data should be inscribed in and understood as part of the long history of knowledge technologies. Like reading and writing (and the long line of knowledge technologies that followed them, from the printing press to television and the Internet), AI and big data too institute and threaten established hierarchies and disrupt interactions between members of a community, for instance via data analytics, algorithmic filtering of information sources, and microtargeting. These examples all represent disruptions in our relationship with the way we apprehend the world, narrativise the reality we see, and act upon these interpretations so as to maximise the quality of our lives. However, these disruptions are not peculiar to AI and big data. Instead, a history of these issues, and of how people adapted to and dealt with them, can be traced, as Marvin did, through past examples.

Lisa Gitelman (2006) offered a criticism of the concept of ‘new media’ as an ontological reality similar to the one raised by Marvin. Gitelman too analysed two case studies, one old medium and one new, at least at the time of her book’s publication: the phonograph and the World Wide Web. Besides the focus on the social and cultural nature of mediation and, thus, of communication and information technology itself, she offered an important remark on the permanency and resilience of cultural norms and processes in the face of rapid technological change. Gitelman wrote:

The introduction of new media […] is never entirely revolutionary: new media are less points of epistemic rupture than they are socially embedded sites for the ongoing negotiation of meaning as such. Comparing and contrasting new media thus stand to offer a view of negotiability in itself – a view, that is, of the contested relations of force that determine the pathways by which new media may eventually become old hat (6). 

The need for a comparative approach, which alone can reveal the process of cultural negotiation that media technologies enact, is in line with the approach that our own analysis adopts and ultimately justifies our chosen historical perspective. One of the overarching research questions of the KT4D project asks how we can place enhanced cultural processes, by their very nature subtle and intangible, at the heart of an investigation of technology. Gitelman’s definition of new media as “sites for the ongoing negotiation of meaning” thus suggests a valuable starting point for our investigation.

One last important contribution to the critical investigation of the concept of newness in media that we ought to consider is the concept of ‘remediation’ famously theorised by Bolter and Grusin. This posits the constant and mutual shaping of old and new media and consequently establishes the impossibility of considering any communication technology in isolation. In open disagreement with scholars supporting the idea of an unprecedented change in the media panorama of the late 1990s due to the advent of the Web – not much different from what is happening today in relation to AI – Bolter and Grusin argued that “No medium today, and certainly no single media event, seems to do its cultural work in isolation from other social and economic forces. What is new about new media comes from the particular ways in which they refashion older media and the ways in which older media refashion themselves to answer the challenges of new media” (15).

In this case, differently from Marvin’s and Gitelman’s analyses, the focus is firmly on the technologies rather than on socio-cultural processes. Nonetheless, what is relevant to Module C’s comparative analysis of past and present knowledge technologies is that the concept of remediation postulates the need for a contextual analysis, both synchronic and diachronic. This in turn supports the claim that old media on the one hand, and AI and big data on the other, are not to be understood as sequential steps in the evolution of knowledge technologies, each replacing its predecessor by rendering it obsolete, but should instead be regarded as parts of a complex system that needs to be analysed in its entirety.

It is important to point out that these scholars challenged the concept of newness in relation to ‘media’, while in this Module, and in the KT4D project more generally, we choose to focus on ‘knowledge technologies’, a label that, while still in the process of being defined (see section 3.2.1), is nonetheless a non-negotiable point of reference. Media, with its accent on communication, speaks of a necessarily public dimension because, even if consumed in solitude, any medium implies a broadcaster or a sender, and an infrastructure. Our definition, with its accent on ‘knowledge’, encompasses both the individual and the social dimension of sense-making. Moreover, it considers not only the process of mediation and acquisition of knowledge, but also the preceding and following steps, that is, the preconditions that make the acquisition of knowledge possible, desirable, or needed, and the consequences of such acquisition in terms of agency, freedom, and awareness. Differences notwithstanding, the focus on the socio-cultural dimension of information and communication technologies developed in the field of Media Studies (further discussed in section 3.2.3) is an approach that Module C will heavily borrow and apply in its analysis of past and present KTs.

3.2.2.2 Using the past to understand AI and big data

As famously stated by Howard Rheingold (1985), the pioneering theorist of Internet technologies and virtualization, it is impossible to understand where mind-amplifying technology is going unless we understand where it came from. However, one aspect needs clarification before adopting an approach centred on such historical contextualization: the question of the scale and nature that supposedly distinguish past and advanced knowledge technologies. In order to meaningfully compare the impact that ‘old’ knowledge technologies had on civic and democratic participation with that imposed by AI and big data, one must first assert that the changes brought about by the new technologies – or at least the ones salient for our analysis – are different only in scale, but not necessarily in nature.

That this should be the case is supported by the growing number of academic analyses reading AI and big data not just against past technologies and systems, but as a direct evolution of what preceded them. As with the definition of ‘new media’ challenged by scholars wary of the hype around the Web in the late 1990s and early 2000s, today’s scholars who are interested in historically contextualising AI do so to counter the claims, coming from tech companies and mainstream media alike, that overstate the unprecedented revolution brought by these technologies. The current effort to historically contextualise AI and big data looks at four main aspects.

First, there is a growing interest in the analysis of the power structures and social hierarchies that have allowed the recent rise and spread of modern AI technologies and systems. One main point of discussion is the role of historical colonialism (Adams; Hao) in creating the premises for current forms of exploitation and data extraction in parts of the world that, while formally emancipated from foreign domination, are still subjected to Western economic power and political influence, often further exerted with the aid of AI and big data. The claim advanced is that it would be impossible to assess the impact of these technologies on democratic participation without understanding the socio-political context that makes them viable and that, in turn, they reinforce.

Second, attention has been devoted to the technical and material aspects of AI that are borrowed and inherited from previous technologies. This approach is in line with the previously discussed concept of ‘remediation’ and aims at highlighting how the affordances and constraints of the past technologies that AI improves upon necessarily shape and influence it. This is the case, for instance, of the analysis of the dependency of deep learning machine vision on traditional photography conducted by Daniel Chávez Heras and Tobias Blanke. They demonstrate how machine vision inherited from photography “its technical regimes and epistemic advantages” (1153), so that what is labelled and detected by algorithms is not ‘the world’, but a (culturally and socially determined) vision of the world that two centuries of photography have previously codified. Their claim is thus that computer vision should treat “photographs not as detections of the world, but as measurements of these beliefs” (1158). To fully understand these beliefs, they posit, we ought to consider the history of photography from which machine vision stems. Therefore, investigating where AI and big data come from is essential not simply to critically understand their cognitive and cultural impact, but also to change their value system and to redirect their purposes.

Third, consideration has been given to the cognitive impact of AI-generated content and to the moral panic that ensues from it, by comparing the present situation to past instances. This is, for example, the case with the study currently undertaken by Joshua Habgood-Coote, a researcher in Philosophy of Language at the University of Leeds, who investigates the threats posed to our epistemic practices by deepfake videos. In a recent article, Habgood-Coote claims that both people’s current lack of trust in the images we see, due to the proliferation of deepfakes and AI-generated content, and the consequent need to develop knowledge and cognitive tools in response to such changes, are not at all unprecedented. Instead, as he documents in his analysis, there is a long history of photographic manipulation that constitutes an important precedent, as in the case of the “composograph”. This forerunner of photo manipulation was a retouched photographic collage, popularised in the 1920s by the American publisher Bernarr Macfadden and used to produce fake sensationalist pictures of celebrities. The analysis interestingly focuses on people’s reaction to this fraud and on the cultural and cognitive tools and strategies developed in response to it. Far from underplaying the issues raised by AI-generated content, studies like this one recognise people’s agency and awareness when confronted with unreliable sources and identify virtuous processes of shared meaning-making from which to learn.

The fourth and final aspect concerns the need for a new theoretical approach to modern AI that looks back at past conceptualizations of intelligent systems and autonomous agents. In the last few years some scholars have advocated a ‘return to cybernetics’ (Bell et al.; Pangaro; Pickering 2010), intended as the highly interdisciplinary and human-centred approach to human-computer interaction and intelligent systems laid out in the early 1950s, which went out of fashion in the late 1970s. To the proponents of this approach, historical cybernetics, within which research on AI originated, could offer a valid alternative to current trends. First, cybernetics aimed at offering a general epistemology that encompassed but also exceeded the technical issues at hand and thus provided a holistic and farsighted approach (Johnston; Vidales). Second, cybernetics was a truly interdisciplinary and collaborative field, to the point of being described as ‘anti-disciplinary’ (Pickering 2013). Lastly, due to the lack of direct applicability of many of its projects and inventions (Pangaro: 17), and to the disastrous outcome of the technological applications of WWII, to which cyberneticians had contributed (Galison), cybernetics endeavoured to follow strong ethical principles (Wiener). Therefore, in the hopes of the promoters of a return to cybernetics, its holistic approach would remedy the present utilitarian, task-driven vision of AI and allow for a complex, humanistic one; its ‘anti-disciplinary’ attitude would respond to the call for interdisciplinarity, in and outside of academia, in relation to the study of complex systems such as, for instance, the Internet of Things (Adamson et al.); and its relative autonomy from vested interests would set an example for a more ethical approach to AI, currently dominated by economic and military goals, when not by antidemocratic and manipulative forces.

Looking at past instances of knowledge technologies thus allows us to contextualise and fully understand the hierarchies of power and domination, the technical and aesthetic beliefs and assumptions, the cognitive impact and literacy strategies, and the epistemic system upon which AI and big data rely. It is from this growing scholarship, which looks at the past in order to understand the present, that the historical contextualisation adopted in Module C stems.

3.2.3 Why the cultural perspective?

3.2.3.1 Knowledge technologies as systems

Technological changes, at least when we consider specific inventions and artefacts, occur at a fast pace and, when it comes to AI and big data, such changes are happening at a speed never witnessed before. The call is thus for societies to adapt their cultural responses to these new technologies in order to master these tools, and to guide and regulate their implementation so as to avoid being manipulated and overwhelmed. However, if we assume that technological progress does not exist outside culture, we must also concede that our culture has already changed in order for these advancements to even happen.

Determining the direction of this transformation – from culture to technology or vice versa – is what the long debate between the supporters of technological and cultural determinism has always tried to settle: is it technology that imposes cultural changes, or is it culture that makes any technological advancement possible? It is worth considering the positions famously held by two of the most renowned exponents of the two fronts: Marshall McLuhan, advocating technological determinism, and Raymond Williams, advocating social determinism. In his 1962 book The Gutenberg Galaxy, dedicated to the technology of writing, McLuhan stated that its invention and evolution marked every major step in human history. He wrote that:

Any technology tends to create a new human environment. Script and papyrus created the social environment we think of in connection with the empires of the ancient world. […] Technological environments are not merely passive containers of people but are active processes that reshape people and other technologies alike. […] Printing from movable types created a quite unexpected new environment – it created the public. Manuscript technology did not have the intensity or power of extension necessary to create publics on a national scale. What we have called “nations” in recent centuries did not, and could not, precede the advent of electric circuitry with its power of totally involving all people in all other people. (McLuhan 1962: XXVII).

Opposite convictions were held by Williams, who in his 1974 book Television: Technology and Cultural Form asserted that any new technology, such as the printing press, is always developed in response to specific social, political, and cultural changes, rather than these transformations proceeding from the introduction of the new technology:

The development of the press […] was at once a response to the development of an extended social, economic and political system and a response to crisis within that system. […] In Britain the development of the press went through its major formative stages in periods of crisis: the Civil War and Commonwealth, when the newspaper form was defined; the Industrial Revolution, when new forms of popular journalism were successively established; the major wars of twentieth century, when the newspaper became a universal social form. […] What matters, in each stage, is that a technology is always, in a full sense, social. It is necessarily in complex and variable connection with other social relations and institutions, although a particular and isolated technical invention can be seen, and temporarily interpreted, as if it were autonomous (14).

In Module C we do not espouse either of these positions exclusively but rather, following a well-established and today dominant tendency, we combine and take advantage of the insights offered by both, as we understand them not to be mutually exclusive. This is because, rather than seeing culture and technology as two self-defined forces in opposition, we consider knowledge technologies as complex systems made of cultural, social, and technical components that constantly and mutually shape each other in a process that has no single direction and can thus be apprehended only as a whole. Indeed, we use the term ‘advanced knowledge technologies’ to refer to assemblages of advanced processing and big data not according to the kinds of methods used to develop them, but rather to those specific implementations of these technologies that are most likely to disrupt civic participation and democratic processes by intervening in the manner in which individuals develop their sense of themselves, others, and the world around them. What we aim to avoid is thus an essentialist and limiting definition of what knowledge technologies are; we instead understand them as systems, in line with the dominant definitions of technologies developed within the field of Media Studies.

Donald MacKenzie and Judy Wajcman, for instance, offer a three-level definition of technology. First, they define technologies as sets of physical objects, though they also concede that “few authors are content with such a narrow ‘hardware’ definition” (3). Second, they define the concept as referring to all the human activities associated with a particular technology, whether those directly linked to a particular machine (e.g., the programming work essential to make a computer function) or the social behaviours a technology prescribes (for instance, urban habits developed in response to mass motorization). Finally, MacKenzie and Wajcman consider technologies as forms of knowledge, meaning the practical and theoretical know-how necessary to design, repair, and operate machines.

Similarly, Ursula Franklin, elaborating on Jacques Ellul’s concept of technique (1954), describes technology as practice and rejects any definition that limits it to the material: “[t]echnology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters” (10). Technology is a system; it entails far more than its individual material components. “Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset” (Franklin: 10).

Gitelman, too, in her previously mentioned analysis, provides a definition of media in line with the one we propose for knowledge technologies: it stresses the same complex entanglement of technical and cultural aspects and insists that this convergence be understood in its complexity. She writes:

I define media as socially realized structures of communication, where structures include both technological forms and their associated protocols, and where communication is a cultural practice, a ritualized collocation of different people on the same mental map, sharing or engaged with popular ontologies of representation. As such, media are unique and complicated historical subjects. Their histories must be social and cultural, not the history of how one technology leads to another, or of isolated geniuses working their magic on the world. (7)

Finally, another important input comes from what Kember and Zylinska call a performative approach to mediation. Again, although their analysis focuses on ‘media’ rather than ‘knowledge technologies’, it is possible to extrapolate relevant points for our analysis in light of the shared attention to cultural aspects and the historical approach. Kember and Zylinska apply the concept of performativity to the understanding of information and communication technologies and posit that “media are generative, that is, that they are part of the material world and do not thus exist apart from it. Neither a reflection of nor a mask for the social, media actively contribute to the production of the social. In other words, media perform the social – sometimes alongside and sometimes in conflict with other agencies that are not solely establishment or antiestablishment” (38). This position evidently builds upon Bruno Latour’s and Michel Callon’s actor-network theory, which famously challenges the distinction between the linguistic, social, technological, and natural realms, a distinction on which traditional sociological studies are predicated. Indeed, Kember and Zylinska write of mediation as a “multiagencial force that incorporates humans and machines, technologies and users, in an ongoing process of becoming-with that is neither revealed nor concealed but rather apprehended intuitively – inevitably from inside the process” (40).

Module C, following in the steps of this scholarship, recognizes that both the threats and the opportunities posed to democratic participation by AI and big data – and by any kind of KTs more generally – arise from this everlasting negotiation, in which established cultural values and norms are neither passively shaped by technological progress nor actively determining its course. Human culture is not an endangered territory, nor a post hoc cure for unethical applications of AI, but one among the active forces implicated in the process, and it needs to be recognized and studied as such.

3.2.3.2 The cultural dimension of AI ethics

One of the goals of the KT4D project, and of this Module specifically, is to investigate the cultural dimensions of ethical AI, understood in terms of languages and discourses, national or regional identities, religions, beliefs and practices, values and tolerances, etc. These elements are often disregarded by traditional approaches to AI ethics, which instead focus on more universal and abstract values.

However, when we consider the major threats and failures of AI systems in relation to democratic and civic participation, we notice that they tend to occur whenever these technologies – developed as standardised and neutral tools and marketed globally as such – impinge on the cultural values and social structures of the communities that adopt them. For this reason, Module C will consider both well-known and lesser-known case studies that demonstrate the need for an approach to ethical AI that considers its cultural dimension. It will do so by focusing specifically on:

• The complexity and heterogeneity of identities, which we address by adopting the feminist analytical framework of intersectionality (Crenshaw). People manage different aspects of their identity in different contexts and respond to situations differently depending on the social role they are playing at the time. Knowledge technologies, including AI and big data, can either provide tools empowering people to express their complex and stratified identities, or can enforce patterns of discrimination, which are further crystallised due to the technology’s affordances. For instance, it has been shown, especially during the Covid-19 pandemic (Leslie et al.), that AI systems used in the medical sector are trained on datasets that reflect the differences in treatment that white patients and patients of colour receive. Those differences are immortalized in data, which are then used to train algorithms that ultimately perpetuate the discrimination.

• The importance of cultural-difference awareness, for which we draw on Geert Hofstede’s Cultural Dimensions Theory as our first and general point of reference. Hofstede’s framework identifies six key dimensions (power distance, uncertainty avoidance, individualism vs. collectivism, masculinity vs. femininity, short- vs. long-term orientation, and indulgence vs. restraint) aimed at capturing cultural differences across countries. While scholars have pointed out many limitations inherent to this framework (e.g. the focus on nations as homogeneous cultural sites (see McSweeney) and the lack of women’s perspectives (see Moussetes)), its usefulness resides in its general statement against the claim that digital technologies are erasing cultural differences. Hofstede’s framework challenged the theory of the ‘global village’ and demonstrated the local dimension of culture. Indeed, while software product releases tend to be international, their use and applications depend on local habits, norms, and communities. An example is offered by the fast and positive reception of cryptocurrency in the Islamic world due to its compliance with Islamic banking, which prohibits usury and speculation and thus certain forms of investment (Khan and Rabbani). In recent years some Islamic scholars have deemed cryptocurrencies halal, and thus religiously permissible, and are trying to prove that the rules and regulations of sharia are fully compatible with digital blockchain technology. Religious beliefs are then what made the new technology acceptable and indeed desirable.

• The importance of people’s values in technology adoption. While this is a virtuous principle that guides the well-established field of User Experience (UX) Design, it is also true that its applicability often depends on designers and programmers who have, by training, limited knowledge of cross-cultural issues (Lachner et al.). It is a recurrent experience for people to have wrong expectations about software and technologies and to misuse them with more or less severe consequences, or to deliberately choose a different purpose for their tool. This is the case, for instance, with the growing number of parents using Apple’s AirTags to track their children and ensure their safety. When Apple released AirTags in 2021, the company clearly stated that they were to be used only on inanimate objects, not on children or pets, but parents and caregivers are choosing to do otherwise. It would be easy to dismiss this as a reckless decision that speaks of their technological illiteracy. However, some newspaper articles and journalistic investigations (Kelly; Greenaway) have uncovered a more complex picture of why parents, in negotiating with their kids the boundaries of freedom and autonomy, resort to AirTags: in a society in which technology poses new threats to young children (e.g. online grooming), it is only logical that parents also look for technological remedies.

To test our hypothesis, we presented the three case studies mentioned above to the participants of the first workshop for our Use Case 4 (see deliverable 1.2), which invited software developers to assess and discuss their approaches to ethical AI. The participants deemed the cultural dimensions of the three issues raised by AI technologies the most elusive and difficult to deal with when designing AI systems and software, and the ones for which a comprehensive and clear understanding is missing. This has reinforced our conviction that focusing our analysis of past and present knowledge technologies on the entanglement between cultural and technological aspects – in line with the scholarship discussed in the previous section – is a much-needed contribution that our project can offer.