This section considers how people’s autonomy and free will are hindered or supported by past and present KTs. By focusing on the structural level, we will examine systemic issues such as monopolies over KTs, data extraction and colonialism, and labour. This approach aligns with Module A’s analysis of free will and autonomy as the “ability to do what you want to do” (Module A), an ability that is ultimately shaped by the economic and political structures within which KTs are developed and embedded.
KTs, Power Structures, and Political Participation
The negative impact of AI and big data on democratic participation is increasingly evident (see Modules D and E). The influence of AI and machine learning in shaping public opinion was particularly evident during the 2016 US Presidential election (Guglielmi 2020) and the UK Brexit referendum (Bastos and Mercea 2019). In these cases, Russian operatives employed bots – AI-automated accounts that share content – to spread fake news on social media and influence the electorate. Furthermore, governments have increasingly used AI to control their citizens, as seen in the Chinese government’s profiling of the Uighur population (Mozur 2019) and its surveillance of demonstrators during the 2019-2020 Hong Kong protests (Fussell 2022).
Monopolisation of Knowledge Technologies
AI and Big Data
A significant challenge regarding the impact of AI and big data on people’s autonomy is the dominance of the field by a small number of powerful companies. This concentration of power is such that these companies are often referred to collectively by the acronym GAMMAN, representing six US tech giants: Google, Apple, Microsoft, Meta, Amazon, and Nvidia. Together, they constitute a de facto monopoly.
In the present paradigm of developing increasingly large-scale AI systems, the interdependence between Big Tech and AI is undeniable. With few exceptions, every startup, new entrant, and even AI research lab relies on the computing infrastructure of Microsoft, Amazon, and Google for training their systems. These same companies also provide the vast consumer market reach necessary for deploying and selling AI products. This dependence profoundly impacts fair competition and poses a significant threat to citizens’ democratic participation. As AI technologies increasingly control aspects of private and social life, this monopolistic situation grants a handful of private companies immense control over the information we consume, the products we buy, and the politicians we elect.
A report titled AI in the Public Interest: Confronting the Monopoly Threat (Lynn et al. 2023), published by the US-based Open Markets Institute and the Center for Journalism and Liberty at Open Markets, highlights the problematic control exerted by a few Big Tech companies over the future of artificial intelligence. By exploiting existing monopoly power and co-opting other actors, these companies exacerbate several issues inherent in the digital age, including the spread of misinformation, distortion of political debate, decline of news and journalism, undermining of compensation for creative work, exploitation of workers and consumers, monopolistic abuse of smaller businesses, amplified surveillance advertising, online addiction, and threats to resilience and security due to extreme concentration.
Elaborating further on these threats, Kate Crawford, in her Atlas of AI, writes:
AI systems are built with the logics of capital, policing, and militarization—and this combination further widens the existing asymmetries of power. These ways of seeing depend on the twin moves of abstraction and extraction: abstracting away the material conditions of their making while extracting more information and resources from those least able to resist (Crawford 2021: 18).
Crawford also argues: “If AI is defined by consumer brands for corporate infrastructure, then marketing and advertising have predetermined the horizon.” She concludes: “due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power” (Crawford 2021: 18).
This situation presents a dual issue: an imbalance of power, manifested in monopolies, and the predominance of economic over political interests. However, it is crucial to acknowledge the numerous regulatory efforts undertaken by the EU (AI Act, 2024) and the US government (Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, 2023) to impose horizontal obligations on the use of AI by public and private actors.
Past Examples
The impact of monopolistic control over KTs is well documented throughout history. While today’s situation created by AI tech companies requires its own analysis, historical precedents provide valuable insights into the threats to democratic participation and offer pathways for a more equitable future.
The first example that comes to mind is the control of media and knowledge production by authoritarian governments. The extensive use of printed media, cinema, and radio for propaganda under Nazism in Germany, Fascism in Italy, and Communism in the Soviet Union has been thoroughly studied, offering a chilling analysis of the manipulative power of media.
While some scholars have rightly pointed to the risks of “AI-led totalitarianism” (McCarthy-Jones 2020; Minardi 2020; Kaspersen et al. 2023), there are more relevant examples that highlight the dangers posed by monopolistic control in democratic societies. These examples focus on immediate and ongoing issues rather than hypothetical totalitarian scenarios or “existential threats.”
One such example is the political impact of monopolistic control over the media in Italy under the governments of Silvio Berlusconi. Since the mid-1980s, Berlusconi’s family has controlled Italy’s top three private television channels, grouped under the Mediaset conglomerate, as well as several newspapers and magazines. Berlusconi’s tenure as Prime Minister of Italy (1994-1995; 2001-2006; 2008-2011) was marked by a confluence of political power and media ownership: while his family maintained control over Mediaset, his government played a significant role in shaping media regulations. This conflict of interest raises questions about the influence of media ownership on public policy in Italy (Hine 2001; Fabbrini 2011). Together, Mediaset and RAI, the state-owned broadcaster, controlled approximately 90% of the national audience and of advertising revenue.
While Italy’s specific situation was driven by the country's lax regulation of conflicts of interest, Fabbrini's analysis highlights a broader point. He argues that preventing politicians from controlling media is insufficient, as any monopolistic situation poses a threat to democracy:
Legislation on conflicts of interest may be useful in preventing media owners from gaining economic advantages through the control of political power, but even stricter COI [conflict of interest] regulation cannot prevent a broadcasting network from supporting one leader to the detriment of others – that form of support is, per se, fully legitimate. If that network [is in a] monopolistic or oligopolistic condition, then its political bias [will] have a significant effect on the outcome of political competition – that outcome is, per se, not fully legitimate. (Fabbrini 2011: 363)
This example underscores the risks to democracy when knowledge technologies are controlled by a few actors. We need not imagine extreme situations like those found in totalitarian states. It is not even necessary for heads of government to own the media, as in Berlusconi’s case. People’s autonomy to participate in their country’s political life can be significantly diminished by monopolistic control over AI systems when those systems become central to political communication and participation, as is happening today.
Data Colonialism and Patterns of Exploitation
AI and Big Data
A second threat posed to people’s autonomy is that of digital colonialism (Couldry and Mejias 2019) and data extraction for commodification purposes. Increasingly, scholars are drawing parallels between historical colonialism and what has been defined as “AI colonialism” (Hao et al. 2022).
The term “digital colonialism” refers to the decentralised extraction of data from citizens without their explicit consent, facilitated by communication networks developed and owned by Western tech companies. Users often contribute unknowingly to this extraction: their online activities are monitored, and the resulting personal data is collected and controlled without their clear consent.
The underlying business model is well established in the West: tech companies offer seemingly free communication services and search engines, and track user behaviour across their platforms so that advertisers can target consumers and voters with personalised ads based on that behaviour. Social networks like Facebook, which are gaining popularity across the African continent, serve as key tools to influence the public and shape political agendas, for instance during elections (Lehohla 2018).
Digital colonialism exhibits several defining aspects. It promotes Western and Eurocentric knowledge, perpetuating the idea that data “determines how the world is measured and defined while simultaneously denying that this is an inherently political activity” (Crawford 2021: 11). This reinforces the misconception that knowledge infrastructures are objective rather than politically constructed. Digital colonialism also relies on physical infrastructures that still bear the imprint of colonial structures of oppression, and the dominant actors and companies are primarily based in Western countries, particularly the United States. Additionally, it often exhibits a white saviour attitude, exemplified by initiatives like the One Laptop per Child Project and Facebook’s Free Basics project in Africa.
Data extraction takes various forms, including images from surveillance cameras; personal and behavioural data from social media; data collected through surveys, questionnaires, and reviews; geolocation data; and purchasing behaviour patterns. Its primary goal is data commodification: advertising that leverages information about tastes, habits, and locations; the sale of data from social media platforms to other companies; and the sale of data for political purposes, as exemplified by Cambridge Analytica’s use of Facebook user data in the 2016 US Presidential election.
While data exploitation is a global phenomenon, countries in the Global South often lack robust data protection laws, making their citizens’ data an attractive and lucrative resource for digital colonisers. This lack of regulatory oversight makes it easier to harvest data and target specific groups. This process reinforces the patterns of exploitation established by historical colonialism. The result is that actors with less power – both states and individuals – are subjected to greater exploitation and data extraction.
Past Examples
When examining the exploitative power of AI and big data, it is crucial to recognise the role of past knowledge technologies in establishing patterns of domination that benefit certain groups while exploiting others. This comparative approach is central to the project Calculating Empires: A Genealogy of Technology and Power Since 1500 by Kate Crawford and Vladan Joler (2023). As its authors state: “Calculating Empires is a large-scale research visualization exploring how technical and social structures co-evolved over five centuries. The aim is to view the contemporary period in a longer trajectory of ideas, devices, infrastructures, and systems of power.”
While the project considers a wide range of technologies across five centuries, encompassing both epistemic tools (e.g., Gutenberg’s printing press, the electronic computer) and epistemic practices (e.g., Boolean logic, biochemistry), it also highlights the centrality of “colonialism and the paths of empires over history.” This is because the infrastructures built by colonial powers over centuries continue to influence how knowledge is shared and created.
The legacy of colonialism extends beyond material artifacts and physical configurations. It also shapes spatial imaginaries, affective relations, and shared memories. Such legacies can be intangible, as exemplified by the contrasting perspectives on a colonial railway: for some, it evokes romantic memories of travel, while for others, it evokes the trauma of subjugation.
This dynamic is evident in the British submarine telegraph cable system, which played a pivotal role in maintaining control over British colonies. In his 1981 book The Tools of Empire, historian Daniel R. Headrick identified submarine cables as one of the essential ‘tools of Empire’ employed by Britain. Headrick wrote: “Addiction to data and speed are nothing new, of course. The need for rapid information was one of the forces behind the opening of steam communication with India” (Headrick 1981: 157).
These undersea cables became instrumental in colonial governance during the second half of the nineteenth century. As historian John Tully (2009) explains, before the invention of the electric telegraph, news from a colonial outpost could take six months to reach Britain, making imperial control challenging. The Indian Rebellion of 1857 against the rule of the British East India Company prompted colonials, rattled by the event, to demand an expanded telegraph system. Furthermore, British officials perceived a security risk in relying on telegraph lines that traversed non-British territory, as these lines could be cut and messages disrupted during wartime. They sought to establish a worldwide network within the empire, known as the All Red Line, while also devising strategies to quickly interrupt enemy communications. In 1870, Bombay was linked to London via submarine cable in a collaborative effort by four cable companies, at the behest of the British Government. In 1872, Australia was connected to Bombay via Singapore and China, and in 1876 the cable extended the British Empire’s reach from London to New Zealand (Headrick 1981: 160-161).
Tully (2009) maintains that this development would not have been possible without ‘gutta-percha’, a resin similar to rubber used to insulate underwater wires. However, the demand for gutta-percha had a devastating impact on the rainforests where it was found: millions of trees were cut down to extract the resin, with the expanding telegraph system requiring as much as four million pounds of gutta-percha annually. By the 1890s, ancient forests were in ruins, and the gutta-percha-producing species had become so rare that some cable companies had to decline projects due to insufficient supply.
The image of the rainforest being decimated to build a technology primarily intended to control the Indian population and enable the British army to swiftly suppress any insurrection is conspicuously absent from the traditional British narrative. Rudyard Kipling, a prominent mythologist and advocate for the Empire, dedicated a poem to celebrate the British telegraph cable network (Davies 2019). “The Deep-Sea Cables”, originally published in 1896 by Methuen & Co. in The Seven Seas, was one of seven poems in the section titled “A Song of the English.” Kipling not only marvelled at this technical wonder and his people’s ingenuity but also presented British domination as a benevolent and unifying force: “Hush! Men talk to-day o’er the waste of the ultimate slime,/ And a new Word runs between: whispering, ‘Let us be one!’.” This idyllic portrayal, however, contrasts sharply with another of his poems from the same collection titled “Sappers.” Here Kipling lauded Her Majesty’s Royal Engineers for generously providing colonies with bridges, wells, huts, and, of course, telegraph wire – luxuries that, in the poet’s callous view, the ‘natives’ failed to appreciate sufficiently: “They haven't no manners nor gratitude too, / For the more that we help 'em, the less will they do.” Kipling thus transformed these ‘tools of Empire’ from instruments of dominance and oppression into unearned gifts bestowed by a benevolent power upon its uncivilized subjects.
Beyond shedding light on the cultural discourse surrounding Victorian underwater cable systems, Kipling’s poems demonstrate how different knowledge technologies – in this case, the telegraph and literary books – are interconnected, with material and symbolic aspects constantly influencing each other. Furthermore, fictional narratives have the power to either conceal or expose the political and exploitative aspects of technologies.
The legacy of these colonial knowledge infrastructures extends into the present. Dhanashree Thorat (2019) examines India’s undersea internet infrastructure within the historical context of British colonialism, arguing that the colonial topography of the telegraph has been inherited by the submarine fibre-optic infrastructure of the internet. However, Thorat also proposes using technologies to undo the technologies of colonialism, citing as an example the SEACOM cable line, built by the Indian telecommunications company Tata Communications, which connects cities in the Global South – specifically in East Africa, the Middle East, and South Asia – to the internet.
It is important to acknowledge that a growing number of scholars are challenging the traditional criticism directed at knowledge technology infrastructure built by colonial powers. Van der Straeten and Hasenöhrl (2016) argue that the traditional criticism against colonial control of knowledge infrastructure is biased:
Being caught in the notion that what was particular to Western technological culture was its “enormous capacity for expansion and dominance” (Friedel 2007: 4), they frame Western technologies as omnipotent “tools of empire” (Headrick 1981) for the subjugation and exploitation of non-Western people, environments and traditions. This perspective misses out on vital aspects of the transfer, (everyday) life and social and environmental preconditions and impacts of infrastructures in the Global South (357).
In attempting a critique of AI colonialism, it is thus vital to acknowledge how past patterns of exploitation are being reinforced and reintroduced by today’s technologies. However, it is equally important to avoid denying agency to those – individuals and nations – who, despite their marginalised position, possess the power to resist and propose alternative solutions.
KTs and Labour
AI and Big Data
The impact of AI and big data on workers’ rights is a subject of ongoing concern. Issues of worker exploitation and the invisibility of labour are central to discussions surrounding crowdsourcing marketplaces like Amazon Mechanical Turk (Irani and Silberman 2013). This is also evident in the case of content moderators and data labellers, who are not only underpaid (Le Ludec et al. 2023) but also face emotional trauma due to the content they view and the fast pace demanded by the job (Rowe 2023).
Recent scandals have exposed the reality of underpaid and concealed human labour behind AI systems. Notable examples of "AI-washing" include the Finnish tech firm Metroc employing Finnish prisoners for data labelling at a rate of only €1.54 per hour (Meaker 2023), and Amazon's supposedly AI-powered cashier-free shops, which were actually powered by remote workers in India (Bridle 2024).
There is also widespread fear of job displacement due to automation, including in intellectual professions. The net impact of AI on employment is theoretically ambiguous: while AI may displace some human labour, it can also increase labour demand through greater productivity and create new types of jobs. Fears of immediate negative effects, however, are not unjustified. A 2023 Goldman Sachs report stated that generative AI could lead to a "significant disruption of the labour market," with an estimated 300 million jobs potentially exposed to automation (Kelly 2023). Another example is British Telecom's announcement of plans to cut up to 55,000 jobs by 2030, with the potential to replace 10,000 of those jobs with AI (Sweney 2023).
Workers are mobilizing against algorithmic exploitation and job insecurity. For instance, in November 2022, workers at the Amazon Centre in Coventry, UK, went on strike just before Black Friday. This was the first industrial action ever taken against Amazon in the UK. While the primary demand was for a minimum wage of £15 per hour, there were also concerns about the surveillance software used by Amazon to track worker performance and the company's algorithms that set productivity rates. Workers felt they were treated like machines and, at the same time, controlled and exploited by machines.
Past Examples
The debate on automation and its effects on the organisation of labour, workers' rights, and their perceived self-worth dates back to the mid-1950s, when intelligent systems first emerged. The first "strike against automation" occurred in Coventry, UK, between April and May 1956, when Standard Motor Company workers initiated an industrial dispute to prevent the dismissal of 3,000 workers due to the introduction of automated production methods (Castoriadis 1988: 26-27). The impact of this strike extended beyond England, as noted by the Greek-French economist and philosopher Cornelius Castoriadis:
The Standard workers' strike has had immense repercussions in England. It would not be an exaggeration to say that, since April 26, 'automation' has become one of the major preoccupations of the workers, the unions, the capitalists, and the English government. What was for so long only utopia and 'science fiction,' what yesterday was still on the drawing boards and planning charts of the industry's engineers and top accountants, has become in a few days a predominant factor in the social history of our time and the subject for front-page headlines in the major newspapers (Castoriadis 1988: 27).
The Coventry strike sparked global debates on labour automation. However, workers and unionists were not opposed to automation per se but to management decisions that prioritized profits over workers' interests. Indeed, workers saw management, not machines, as the real threat. For example, the national conference of shop stewards held in London immediately after the Coventry strike, on May 27, unanimously adopted a motion declaring: "We are not opposed to the introduction of new technological advances, but insist that full consultation with the workers [take place]" (Castoriadis 1988: 29).
This was not simply an empty statement to appease factory managers. A UNESCO study group found that workers were aware that the real enemy was not technological advancement but the people in charge of managing the transition to automated labour. As the group wrote in its report: “Fear of unemployment has not, as it sometimes did in the first industrial revolution, created an opposition to machines per se. It has, however, been accompanied by strong doubts on the part of workers as to the willingness of management to make reasonable efforts to minimize unnecessary hardships and unemployment that might result from automation” (Social Consequences of Automation 1958: 105).
The workers in Coventry believed that management, not the machines, was hostile and inhumane, treating workers as disposable machines. In this regard, it is interesting to mention the personal experience of one of the workers who lost their job at Standard. This experience is documented in the UNESCO report:
A most enlightening comment was overheard by one of my colleagues in a pub in Coventry. One of the workers, talking over what had happened, objected to the ‘impolite’ way in which they had been put off. This was not an expression of resentment because of unfair treatment, or because of the genuine difficulties a lot of workers had been faced with, it was an expression of indignation at the abrupt inconsiderate way in which the workers were suddenly notified of their dismissal. […] The reaction of this worker was, therefore, a very interesting one. It showed that the man felt that he had been ill-used as a human being by another human being and that proper consideration for him had not been shown. (Social Consequences of Automation 1958: 107)
The remark about the worker “being ill-used as a human being by another human being” echoes, perhaps not by chance, the book by Norbert Wiener, the father of cybernetics, titled The Human Use of Human Beings (1950). In the book, Wiener described the cybernetic society – one in which intelligent machines are at the service of people – as a society in which the exploitation of workers no longer exists because, as Wiener explains, “any use of a human being in which less is attributed to him than his full status is a degradation and a waste” (Wiener 1950: 16).
An echo of this strike and the ensuing debate is recorded in Italo Calvino’s short story “Gli automi” (The Automata), published in 1956 in the Italian left-wing magazine Il Contemporaneo. The story is set in a distant future in the state of Minnesota. The narrator recounts the first strike organised by intelligent machines in protest against their human co-workers, which occurred years earlier, in 1986. In Calvino’s story, workers and machines are initially pitted against each other to protect management’s interests. However, both humans and machines soon realise the situation and unite to take control of the factory, establishing a socialist state in Minnesota. Calvino imagined a utopian world where the clash between humans and machines is resolved in a way that protects workers’ rights while also guaranteeing technological progress.
“Gli automi” still resonates today, particularly when read against the backdrop of the current development of AI and its disruptive impact on workers’ livelihoods. The story challenges the common perspective on human-machine cooperation and, like the unionists and workers of the late 1950s, emphasises the need for a change in labour rights and protection, rather than blaming technology itself.
Participation, Empowerment, and Civil Disobedience
AI and Big Data
The extractive and exploitative nature of AI and big data has been widely recognised, and people from diverse backgrounds are actively fighting to preserve or reclaim their autonomy in the face of monopolistic control.
For example, tools like Nightshade and Glaze allow artists to embed invisible changes into their digital artwork before uploading it online so that, if the images are scraped into an AI training set, they interfere with the resulting AI model. This “poisoning” of training data can disrupt the performance of image-generation models such as DALL-E, Midjourney, and Stable Diffusion, causing them to generate nonsensical outputs (Heikkilä 2023). The strategy aims to counter AI companies that use artists’ work without permission to train their models.
Gig workers, particularly those employed by food delivery services, are developing strategies to circumvent exploitative algorithms. Bonini and Treré (2024) conducted an ethnographic study of Deliveroo couriers in Italy, revealing how they use WhatsApp group chats to develop and share strategies for circumventing the platform's algorithms. Similar forms of resistance are observed globally, from Jakarta, Indonesia (Hao and Freischlad 2022) to Lagos, Nigeria (Arubayi 2021).
Protesters are also developing tactics to protect themselves from AI facial-recognition cameras. During the 2019 Hong Kong protests, demonstrators employed various methods to evade surveillance, including lasers, face coverings, VPNs, and secret communication via Telegram. Protests like those in Hong Kong and the Black Lives Matter movement have demonstrated a sophisticated understanding of surveillance technology, combining advanced tools with "analogue" methods such as anti-surveillance makeup and hairstyles (Harvey 2013; de Vries and Schinkel 2019; Sharma 2020).
Inspiring as these examples are, they are all nonetheless stories of resistance against algorithms and AI systems. But is it possible for AI and big data to become tools for autonomy, free will, and civil participation in themselves, rather than extractive technologies to fight off? It is worth considering some examples from the past to see what the conditions were – material, economic, political – that allowed earlier knowledge technologies to become instruments for exercising autonomy and supporting democracy.
Past Examples
The debate surrounding the empowering potential of knowledge technologies presents two contrasting perspectives. One view argues that technological affordances are solely determined by material and technical aspects, beyond the control of users. Conversely, the Socio-Technical System Theory posits that no technology is inherently oppressive, and its impact on freedom and autonomy depends on how its material and technical aspects interact with the socio-political environment.
Hans Magnus Enzensberger (1970) championed the latter interpretation, advocating a “socialist strategy” for the emancipatory use of media. In “Constituents of a Theory of the Media” he argued that technologies like the transistor radio do not, in principle, inherently distinguish between transmitter and receiver. Instead, these technical distinctions reflect the social division of labour, ultimately rooted in the power dynamics between the ruling and the working classes. Enzensberger believed that citizens could harness the unexploited potential of these technologies, becoming active producers themselves and transforming the communications media into tools for their own empowerment.
Jean Baudrillard (1971) countered Enzensberger’s argument, rejecting the notion that media are neutral systems whose impact depends solely on their users. He argued that media, in their very form and operation, actively shape social relations. Media are not merely mediators but “effectors of ideology” (Baudrillard 1971: 280), echoing McLuhan’s famous motto that “the medium is the message.” Indeed, he believed that media are inherently anti-mediatory and intransitive, to the point of declaring: “transgression and subversion never get ‘on the air’ without being subtly negated as they are; transformed into models, neutralized into signs, they are eviscerated of their meaning” (Baudrillard 1971: 282).
Baudrillard’s statement resonates with Gil Scott-Heron's 1970 spoken-word song “The Revolution Will Not Be Televised,” which encapsulates the widespread disillusionment with the liberating power of television and mass media in general. This sentiment persists today, as evidenced by Malcolm Gladwell’s 2010 New Yorker article, which echoed Scott-Heron’s slogan in expressing distrust towards social media activism: “the revolution will not be tweeted” (Gladwell 2010).
Far from denying the limitations that knowledge technologies have imposed, and continue to impose, on people’s autonomy (something discussed at length in the previous sections), it is interesting to consider a few cases that challenge this pessimistic view. One such example is the subversive role of television in Czechoslovakia in the 1960s.
After Soviet leader Nikita Khrushchev denounced Stalin’s Great Purge in 1956, three years after Stalin’s death, Czechoslovakia began a process of “de-Stalinization” aimed at granting the country greater economic and political independence from the Soviet Union. This eventually led to the Prague Spring, a period of political liberalization and mass protest that began in January 1968 with the election of Alexander Dubček as First Secretary of the Communist Party of Czechoslovakia and ended on 21 August 1968, when the Soviet Union invaded the country to suppress the reforms. In the years leading up to the Prague Spring, television became a powerful tool for democratic participation and played a crucial role in subverting political hierarchies. Television programs exposed the misdeeds of politicians and Communist Party members implicated in the Great Purge, leading to televised trials that shocked and disillusioned viewers but also fuelled their desire for change (Brenn 2010; Maxa 1970).
Historian Paulina Brenn (2010) and journalist Josef Maxa (1970) highlight how television not only disseminated information but also transformed people’s perceptions of, and relationships with, their representatives. Politicians, confronted on live television, could no longer hide their flaws or present themselves as authoritative figures. Instead, their vulnerabilities and inauthenticity were exposed: “The communication media ended the political career of more than one ‘statesman’ invited before the television cameras, where under pitiless lights and the probing questions of the commentators, his inner poverty was revealed” (Maxa 1970: 52). Maxa further observed: “For the first time the public learned something of the private lives of government men – their families, interests, hobbies, and how they spent their leisure. For the first time it was possible to publish unofficial photographs in which political figures appeared as ordinary people, not important statesmen” (Maxa 1970: 112).
This period of openness ended with the Soviet invasion of Czechoslovakia in August 1968, which reimposed strict censorship on television and radio broadcasting. However, the Prague Spring demonstrates how knowledge technologies can empower individuals not only through the content they disseminate but also by challenging people’s judgement, perceptions, and trust in the people behind the political status quo.
A second interesting example is how, in the 1990s, the Zapatista Movement in Chiapas, Mexico – also known as the Zapatista Army of National Liberation (EZLN, Ejército Zapatista de Liberación Nacional) – succeeded in using Web 1.0 to challenge the neoliberal economic model and the exploitative practices that harmed indigenous communities and their environment, and to secure international support. In doing so, the Zapatistas took advantage of the Web to navigate both the localised and the globalised dimensions of their struggle. The movement emerged in 1994 in response to the implementation of the North American Free Trade Agreement (NAFTA). By communicating through worldwide computer networks, the Zapatistas acquired communicative autonomy. This was made possible in part by La Neta, an alternative computer network linking Chiapas to Mexico created with the support of NGOs and the Catholic Church; it played a significant role in the movement’s success (Castells 2010) and speaks to the importance of non-monopolistic control over KTs.
The Zapatistas used the internet to obtain external support for their region, which was surrounded by the military, and to publish communications and alerts worldwide. Moreover, they successfully challenged the colonialist depiction of “subaltern” subjects as unable to advocate for themselves or master advanced technologies. The Zapatistas’ information structure, especially the internet, was crucial to their transformation from a guerrilla group into a national social movement. Indeed, they shifted their strategy from direct military confrontation with Mexican authorities to narrative construction and communication as the primary means of engaging with the Mexican state.
The EZLN’s use of the Web constitutes a relevant example of how knowledge technologies can be leveraged to empower local communities and protect local culture while also taking advantage of the connectivity that digital technologies offer. This balance between local and global dimensions, between indigenous identity and transnational networks of solidarity, provides a valuable model for envisioning how AI systems can support individual autonomy.
The last example, and perhaps the most popular one, is the impact of the printing press on the diffusion of new political and religious ideas – Protestantism being among the most relevant – and on people’s literacy. Both contributed to increasing people’s autonomy and agency over knowledge production and consumption. Victor Hugo, in his novel The Hunchback of Notre Dame, dramatically summarised the revolutionary and long-lasting impact of the printing press by stating that “The invention of printing is the greatest event in history. It is the mother of revolution” (Hugo 1831: book V, chapter 2).
This revolution resulted from the end of the monopolistic control over written texts exercised by the Church and monarchical powers – a control made relatively easy by the small number of books produced before the advent of Gutenberg’s press. The widespread dissemination of new ideas in printed books was made possible by a significant drop in book prices, which fell by 2.4% annually for over a century after Gutenberg. This enabled a new audience, accustomed to reading in vernacular languages rather than Latin, to purchase books en masse (Dittmar and Seabold 2019).
Historical evidence also strongly indicates that competition and market structure in printing, not the technology alone, profoundly shaped the diffusion of ideas and radical social change. The liberating power of the printing press was fuelled, at least in part, by competition – a crucial lesson for today’s context, in which a few tech companies (GAMMAN) hold monopolies over AI technologies. Indeed, Dittmar and Seabold (2019) conclude: “In an era of economic concentration in the use of cutting-edge technologies, the evidence from our past strongly suggests that the interaction between technology and competition may have considerable implications.”
The political impact of the printing press is captured in the historical novel Q: A Novel (1999) by Luther Blissett, which describes its crucial role in the German Peasants’ War of 1524–1525, Europe’s largest and most widespread popular uprising before the French Revolution of 1789. The novel implicitly draws parallels between the printing press and the early Web, envisioning both as tools for the free circulation of information and for political activism. Media and political activists as much as writers, the authors behind Luther Blissett hoped that, like Gutenberg’s printing press, the Internet could open opportunities for a freer, alternative network of information, and they actively worked towards this. The German preacher Thomas Müntzer, one of the leaders of the German Peasants’ War and equally opposed to Martin Luther and the Catholic Church, is one of the book’s characters. Considered a proto-Christian-communist figure, Müntzer, in his final confession under torture in May 1525, re-signified the biblical principle omnia sunt communia (“all things in common”, Acts of the Apostles 2:44). Originally an expression of spiritual harmony and solidarity, the principle was given a more material meaning by Müntzer: that “all things are to be held in common and distribution should be to each according to his need.”
Luther Blissett in turn reappropriated the slogan, using it to express their opposition to any form of copyright and their support for the free circulation of knowledge, especially online. All Luther Blissett/Wu Ming books are published under a ‘copyleft’ licence – a pun on ‘copyright’: “Partial or total reproduction of this book and its diffusion in electronic form are consented for non-commercial purposes, provided that this notice is included. This notice is based on the concept of ‘copyleft’, which was invented in the 1980s by Richard Stallman and the Free Software Movement and is being applied in other realms of communication, science information, creative writing and arts” (Wu Ming 1 2003).
The parallel between the liberating power of the printing press and the early Web, supported by many scholars (e.g. Kertcher and Margalit 2006), was thus enacted and narrativized by Luther Blissett in a transmedial experiment connecting political activism, the literary imaginary, and the history of knowledge technologies.