
3.4.2 Attention

This section analyses how different knowledge technologies impact people’s attention and, consequently, their decisions regarding which information is worth storing and remembering, and which is instead forgotten or not even registered in the first place. We address two key issues: the problem of information overload and how it shapes knowledge infrastructures as well as individuals’ engagement with knowledge, and the issue of affective content and how it directs and sometimes skews people’s attention. Following the analysis proposed in Module A, we focus on attention understood as the “ability to decide what to want.”

Information Overload

The fear of losing control over the speed and scale of knowledge production, and the consequent inability to retain and process information, has always haunted humanity. What might seem like a recent problem, born with the Internet and exacerbated by generative AI, is instead a long-standing concern. For instance, the then disruptive new technologies of reading and writing were famously decried by Socrates in the Platonic dialogue Phaedrus as something that would severely impair people's ability to memorise and retain information. In the dialogue, Socrates recounts how Thamus, king of Egypt, condemned the god Theuth, inventor of written language, as responsible for people's loss of memory and wit, declaring:

This invention will produce forgetfulness in the minds of those who learn to use it, because they will not practise their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant (Plato: 563).

Similarly, Swiss scientist Conrad Gessner, in his book Bibliotheca Universalis published in Zurich in 1545, expressed great concern over the information overload caused by the advent of the printing press, which massively increased the number of published books. The societal and political repercussions of this did not escape Gessner, who called upon kings and queens to solve the situation (Blair 2003: 11).

Pointing out the long-standing and recursive nature of complaints about information overload, however, does not diminish the gravity of the situation we face today after the latest developments in the field of AI. Two main issues are discussed in the following pages. First, how do knowledge infrastructures cope with a sudden increase or change in the amount of information produced? Or, borrowing Friedrich Kittler’s concept of the “discourse network” (1985), how do networks of technologies and institutions allow a given culture to select, store, and produce relevant data? And what can we learn from past experience?

Second, it is crucial to consider how individuals select, process, and retain content when the information available widely exceeds human cognitive capacities. While it is evident that no one at any point in human history was ever capable of retaining all the available knowledge, it is also undisputed that the number of stimuli and the amount of information shared and received since the advent of digital media have put an unprecedented strain on people’s psychological wellbeing and cognitive abilities (Twenge 2019).

The following sections consider how information overload has been dealt with at the systemic and at the individual level, both in the past and in the age of AI and big data.

How Knowledge Infrastructures Cope with Information Overload
AI and Big Data

With the rise of generative AI and LLMs, a renewed preoccupation with information overload has spread (Feng et al. 2021). If digital technologies, coupled with the connecting power of the Web, made it easier to produce and share written and visual content through copying, pasting, and editing, the newest advancements in AI have fully automated these processes. Moreover, producing content has become not only quick but also relatively cheap. This situation has raised concerns about ‘data pollution’: the risk that generative AI contaminates the information supply with irrelevant, redundant, unsolicited, hampering, and low-value information is not simply a future scenario but a current reality (Kniaz 2023; Lutkevich 2023). This is what happens with AI-generated books, which are not only insipid and poorly constructed volumes but also rip-offs of human authors’ works. This is true not simply because LLMs are indeed “stochastic parrots” (Bender et al. 2021) infinitely reshuffling human creative knowledge, but because people are literally using AI to plagiarise books and produce multiple volumes from a single original so as to make a profit.

AI researcher Melanie Mitchell experienced this issue firsthand in 2024. After publishing her book Artificial Intelligence: A Guide for Thinking Humans in 2019, she discovered an AI-generated imitation on Amazon following the ChatGPT-driven AI boom. This unauthorized 45-page summary crudely mimicked her work and was attributed to “Shumaila Majid,” a non-existent author with no online presence. Further investigation revealed dozens of similar AI-generated summaries of recent publications under the same fake author name (Knibbs 2024). This incident is not isolated; Amazon has been inundated with unauthorized AI-generated book summaries (Trachtenberg 2019), compelling the platform to implement a rule limiting authors to three daily uploads (Creamer 2023). The academic publishing sector faces comparable challenges, with AI-generated submissions to journals steadily increasing since 2021 (Stokel-Walker 2024). This trend raises concerns about scientific integrity and imposes additional workloads on reviewers, editors, and researchers, particularly when evaluating poor-quality papers where AI was used not just for review but for content creation. Some propose using AI to review and evaluate papers as a potential solution to this information overload issue (Checco et al. 2021; Escalante et al. 2023).

A quite unexpected area in which generative AI is massively polluting online content is baby learning videos, as discussed by Erik Hoel (2024). Hoel explains that 

YouTube for kids is quickly becoming a stream of synthetic content. Much of it now consists of wooden digital characters interacting in short nonsensical clips without continuity or purpose. [...] They don't use proper English, and after quickly going through some shapes like the initial video title promises [...] the rest of the video devolves into randomly-generated rote tasks, eerie interactions, more incorrect grammar, and uncanny musical interludes of songs that serve no purpose but to pad the time. It is the creation of an alien mind. (Hoel 2024). 

For instance, the AI-generated YouTube kids’ education channel Super Crazy Kids has 11.9 million subscribers (as of 4 Sept. 2024). While some might well be fake subscribers – although this is against YouTube's terms of service – the figure gives an idea of the spread of such content. This example is particularly relevant because it demonstrates the cognitive impact that AI data pollution can have on children and their development, and highlights how generative AI makes content curation – in this case on YouTube – extremely challenging.

Hoel defines AI content pollution as a “tragedy of the commons.” The common good, in this case, is not simply human knowledge but attention: “Since the internet economy runs on eyeballs and clicks the new ability of anyone, anywhere, to easily generate infinite low-quality content via AI is now remorselessly generating tragedy” (Hoel 2024). Here Hoel draws a parallel between the present situation and the one created by the advent of Web 2.0. Indeed, it was in 2012, when the effects of Web 2.0 had become evident, that Bill Davidow published an article titled “The Tragedy of the Internet Commons,” which sparked a heated debate. The article took the concept of the commons, usually referring to shared material goods such as land and fisheries, and applied it to the Internet, famously stating that “[l]ike overgrazing of public lands or over-fishing of the seas, the digital space will continue to be exploited — and that's why it needs to be regulated” (Davidow 2012). The definition attracted many critics, who argued that the Internet does not satisfy all the essential criteria defining a commons; the idea of the Internet being ‘finite’ was especially contested. Nonetheless, Davidow’s main claim in favour of recognising the digital space as a public good that needs to be protected and regulated as such remains relevant to this day.

Indeed, the belief that AI and big data, and the information system that they build, constitute a public problem to be addressed at the community level is at the centre of several recent contributions on the topic. What has emerged in recent years – and was mostly ignored in the mid-2010s, when the concept of the commons was used to describe the Internet – is how crucial the attention economy is to civic performance and participation, and how AI and big data are reshaping the entire landscape. For instance, Andrejevic (2024) claims that in order to preserve attention, which he deems crucial for democratic participation, we need to protect information systems – a position that once again supports the link between metacognition and technologies on which our definition of critical digital literacy relies. Andrejevic writes: “For democracy to function, people need to pay attention to matters of public import. In an information environment swamped with automatically generated content, attention becomes the scarce resource. A world in which attention is monopolized by an endless flow of personalized entertainment might be a consumers’ paradise—but it would be a citizen’s nightmare” (Andrejevic 2024: 83). Framing these issues as public problems, rather than consumers’ problems to be solved by tech companies, means that the institutions responsible for knowledge infrastructures, and ultimately the people served by these institutions, should be in charge of regulating these aspects, or at least be part of the conversation.

Past Examples

Father Roberto Busa, a Jesuit considered the father of Digital Humanities, is famous for having personally met IBM CEO Thomas J. Watson in 1949 and convinced him to support his project of lemmatising the complete opus of Thomas Aquinas, an endeavour that took him thirty years (Jones 2016). As a tech-savvy Jesuit working with IBM, Busa was at a vantage point to reflect on the past and present of text technologies and their impact on knowledge production. In 1962, he published an article in the literary journal Almanacco Letterario Bompiani for a special issue dedicated to the impact of computers and information science on the humanities. In the article, besides sharing his own research, Busa drew a parallel between the revolutionary – equally disruptive and exciting – impact of the printing press and that of the then-new mainframe computers. Busa wrote:

Domenico De Domenichi, a Venetian ‘de ordine plebejo,’ became vicar of Pope Sixtus IV. In the preface to an incunabulum printed in Venice in the late fifteenth century, he commented on the then very recent invention of printing as follows: ‘Placuit autem clementissimo Deo his nostris temporibus novam artem docere homines’ [It also pleased the benevolent God to teach the men of our time a new art]. He then goes on to report the astonishing news that three men in just three months of work managed to print no less than 300 copies of a volume: ‘ad quae tota eorum vita haud quaquam sufficeret si cum digitis et cum calamo aut penna scribenda forent’ [To accomplish this, their whole life would barely have sufficed if they had done it by hand, with ink and pen], and then he concludes: ‘si quid in me est auctoritatis etiam admoneo: ne tanta Dei beneficentia abutantur’ [If I possess any authority to say so, I admonish: let there be no abuse of such beneficence from God]. What should we say today? (Busa 1962: 117).

While neither Busa nor Domenico De Domenichi specified why they urged caution in taking advantage of these new writing technologies – the printing press and the computer – their mention of the volume and speed of book production was clearly at the centre of their preoccupations. Furthermore, the closing ironic question suggests that the papal vicar’s concerns were nothing compared to those that should worry Busa’s contemporaries. Indeed, while it is certainly true that the cultural shock provoked by the invention of the printing press is comparable to that of the first computers, it is also undeniable that the new technology brought an exponential rather than incremental surge in text production. And this claim of an unprecedented deluge of information, brought about by the latest knowledge technology, resurged only a few decades later with the advent of the World Wide Web.

Tiziana Terranova (2004) defines our contemporary global culture as a network culture that, she explains, is characterised by an unprecedented amount of information shared at an unprecedented speed. This hyper-connectivity, rendered possible by the Internet and the Web, made the long-lived dream of a universal library a reality. Knowledge is no longer a tower to be conquered by few, but a dispersed network for all to navigate. At the same time, information overload and the lack of a hierarchy of sources can leave people, alone in evaluating what they read, feeling overwhelmed by the amount of information presented to them. It is interesting to consider how the concept of a network of information developed from the advent of the first electronic mainframe computers, well before the Web was even a reality.

In 1945, American engineer Vannevar Bush published “As We May Think” in The Atlantic, describing his idea for an invention called “memex.” This electromechanical device for interacting with microform documents would allow people to compress and store all their books, records, and communications, “mechanized so that it may be consulted with exceeding speed and flexibility” (Bush 1945: 106). The memex was envisioned as an automatic personal filing system, “an enlarged intimate supplement to his memory” (Bush 1945: 107). It would enable individuals to develop and read a large self-contained research library, create and follow associative trails of links and personal annotations, and recall these trails to share with other researchers. This device would closely mimic the associative processes of the human mind but with permanent recollection.

As Bush writes, “Thus science may implement the ways in which man produces, stores, and consults the record of the race” (Bush 1945: 108). An associative trail, as conceived by Bush, would create a new linear sequence across any arbitrary set of microfilm frames through a chained sequence of links, along with personal comments and side trails. Bush saw the indexing methods of his time as limiting and instead proposed storing information in a way analogous to the mental associations of the human brain: storing information with the capability of easy access at a later time using certain cues (in this case, a series of numbers as a code to retrieve data).

The concept of the memex influenced later developments of hypertext systems, specifically the work of another pioneer, Ted Nelson, who is considered the ‘father of hypertext’ (Lima 2023b). In 1965, at the age of twenty-eight, he published a paper in which he coined the word “hypertext” and used for the first time the word “links” to refer to the connections between different texts in a digital document (Nelson 1965). His ambition was to create a universal digital library in the shape of a network, like the one imagined by the Argentinian writer Jorge Luis Borges in his essay “The Total Library” (1939) as well as in his story “The Library of Babel” (1941). In these texts, Borges imagines a universal library containing an infinite number of texts created by the endless permutation of words. One of Borges’ sources of inspiration was the ‘infinite monkey theorem,’ theorised by the French mathematician Émile Borel in 1913. The theorem states that a monkey hitting typewriter keys at random for an infinite amount of time will almost surely end up typing any given text, including the complete works of William Shakespeare. One might say that generative AI operates not too dissimilarly from Borel’s monkey.
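Borel’s result can be stated in a single line of probability. The following is a minimal sketch of the argument, assuming a typewriter with k equally likely, independently struck keys and a target text of length L:

```latex
% Probability that one block of L random keystrokes matches the target text:
%   p = k^{-L} > 0
% Probability of missing the target in n successive independent blocks:
%   (1 - p)^n
% Since 0 < p <= 1, this probability vanishes as n grows without bound:
\[
  p = k^{-L}, \qquad \lim_{n \to \infty} \left(1 - p\right)^{n} = 0,
\]
% so any finite text -- including the complete works of Shakespeare --
% is eventually typed almost surely, i.e. with probability 1.
```

The force of the theorem lies in the gap between “almost surely” and “practically”: for any realistic text, the expected waiting time is astronomically long, which is precisely why generative AI, which produces fluent text quickly rather than by blind permutation, only loosely resembles Borel’s monkey.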

Nelson’s system aimed to bring order to this chaotic universal library. In the system he envisioned, all the links – references and direct quotations – between documents – written and otherwise – would be traceable and modifiable. Nelson's project, which took inspiration from the work of Vannevar Bush, is different from the World Wide Web that we actually have, and which Nelson believes to be a simplification of his prototype. The Web today allows users to jump from one text to another, from one page to the next, but in doing so, Nelson decries, users lose track of the connections between documents, as they do not see the path they trace while navigating online. Nelson instead proposed a system that shows on the computer screen all the connections between documents and that has two-way links between pages. Furthermore, any time someone creates a new version of an existing text or quotes a source, this would leave a permanent record in the network.

Nelson began this project in 1960 and in 1967 he decided to call it “Project Xanadu,” after Kublai Khan's royal palace described in Samuel Taylor Coleridge’s famous poem “Kubla Khan; or, A Vision in a Dream: A Fragment” (Lima 2023b: 196-7). In the poem, the palace of Xanadu is the setting for a series of oneiric descriptions and evocative images, presented in the first three stanzas. The poem, by its author's admission, is incomplete, a circumstance that adds to the sense of magical suspension created by the verses. The story of its composition is as famous as the poem itself and thus indispensable to its interpretation. The first published version of the poem, at Lord Byron’s suggestion, was prefaced by a text in which Coleridge explained its genesis. He recounted that, during an opium-induced dream, he saw Kublai Khan's palace Xanadu and composed a poem about it. Once awake, he immediately tried to put on paper the verses conceived in his dream, but a visitor – the now proverbial ‘person from Porlock’ – knocked at his door and distracted him, which caused him to forget the rest of the poem. The 54 lines published, Coleridge maintained, are thus the only fragment he was able to recollect.

The constant risk of losing content, as well as the links connecting ideas into a full picture, is exactly what Nelson, and Bush before him, tried to solve with their systems. Moreover, Coleridge’s poem allows readers to make their own connections between the many images described in the non-narrative poem, much like a hypertext would do. While Nelson did not elaborate on why Coleridge's poem inspired him, these must have been the similarities that prompted him to name his hypertext project after Coleridge's depiction of the palace of Xanadu.

Indeed, “Project Xanadu” was meant to organise the chaos of universal knowledge while preserving all the possible connections between its different elements. Also essential to Nelson's idea of a computer network was the tension between remembering and forgetting: the two-way links were meant to preserve the memory of the connections between texts, but these connections also needed to be erased and re-written so as to allow new searches and thus new meanings. While Nelson’s and Bush’s visions were never realised and the idea of a network of information evolved in a different direction, it is nonetheless interesting to consider how the advent of computer technologies prompted people to imagine new creative strategies to face the perils of information overload. This is important to keep in mind when considering the impact of AI technologies today, as they have the ability to stimulate exciting intellectual and cultural challenges, and not simply to overwhelm us.

Also, it is worth remembering that while the Web has indeed changed our approach to knowledge, as well as our perception of the world, the fear and excitement provoked by the free proliferation of information long precede digital technologies. With the advent of Gutenberg’s press – as discussed in the previous section on Autonomy – not only did the number of books increase, but so did the variety of their languages and content, as well as their audience. While many praised and recognised this as a sign of progress, some philosophers and intellectuals worried that growing information amounted to a decline in knowledge, understood as the true and meaningful acquisition of content. This was the case, for instance, with the French philosopher Denis Diderot, who in the Encyclopédie (1755) expressed his concerns regarding people’s ability to discern which sources to read and trust among the ever-growing number of books being printed. In the highly meta-reflexive article dedicated to ‘Encyclopaedia’, Diderot wrote:

Tandis que les siècles s’écoulent, la masse des ouvrages s'accroît sans cesse, & l'on prévoit un moment où il seroit presqu'aussi difficile de s'instruire dans une bibliothèque, que dans l'univers, & presqu'aussi court de chercher une vérité subsistante dans la nature, qu'égarée dans une multitude immense de volumes ; il faudroit alors se livrer, par nécessité, à un travail qu'on auroit négligé d'entreprendre, parce qu'on n'en auroit pas senti le besoin. [As long as the centuries continue to unfold, the number of books will grow continually, and one can predict that a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe. It will be almost as convenient to search for some bit of truth concealed in nature as it will be to find it hidden away in an immense multitude of bound volumes. We will then be obliged, by necessity, to devote ourselves to a labour we had neglected to undertake because we had not felt the need for it.] (Diderot 1755).

Diderot’s words remind us of another famous story by Borges, “On Exactitude in Science,” in which the author describes a map that becomes so detailed and exact that it ends up coinciding with the territory it is supposed to represent, and thus becomes useless. Indeed, the entire project of the Encyclopédie can be considered a way to combat this type of information overload by selecting what is worthwhile among all human knowledge and organising it into a manageable series of volumes. Both the encyclopaedia and the Web are strategies aimed at aiding people's attention and at organising human knowledge and information in a way that encourages curiosity and literacy. However, after having dedicated twenty years of his life to the project, Diderot lost faith in the possibility of effectively organising universal knowledge and started wondering whether the effort had been a complete waste of time and energy. A similar distrust is also at the centre of the last, unfinished novel by Gustave Flaubert, Bouvard et Pécuchet (1881), which many literary critics indeed interpret as a pointed criticism of scientism in general, and of Diderot's Encyclopédie more specifically.

The novel tells the story of two clerks, Bouvard and Pécuchet, who retire to the country together. Not knowing how to use their free time, they busy themselves with one abortive experiment after another and plunge successively into scientific farming, archaeology, chemistry, and historiography, as well as taking an abandoned child into their care. Everything goes wrong because their futile book learning cannot compensate for their lack of judgment. Indeed, Flaubert's criticism was not pointed towards scientific knowledge, but rather towards a superficiality of popularised science and the obsession of his contemporaries for cataloguing, classifying, and mapping every field of knowledge without fully considering its usefulness and applicability (Sumberg 1982). Ferguson, in his analysis of the characters’ struggle for comprehension, captures a situation that is not at all dissimilar to the one people experience today when presented with an online deluge of information and sources, many unreliable and in contradiction with each other: 

discovering that the experts contradict one another, they end by reporting them all, as if the registration of all competing views were more important than the choice of any one. Theirs is what John Stuart Mill taught us to call the 'marketplace of ideas.' In distinction to the model he posited, however, ideas never reach the point of sale. Rather, Bouvard and Pecuchet continue to push their intellectual shopping cart through aisle after aisle, emptying it after they lose interest in a particular topic, and filling it again with yet another dizzying array of different positions and recommendations. (Ferguson 2010: 789).

Interestingly, Flaubert fell victim to a similar obsession. In order to acquire the knowledge necessary to describe the two characters' many scientific ventures, he conducted so much research and read so many books that these activities consumed all of his time, making it impossible for him to finish the novel. Echoing his famous remark about Emma Bovary, Flaubert could have said: “Bouvard et Pécuchet, c'est moi.”

The two examples of networked information – in the shape of hypertexts or the Web – and of the encyclopaedia are relevant because both are ingenious and successful accomplishments while at the same time flawed and temporary solutions to the problem of information overload. In the search for an answer to AI's data pollution, it is important to remember that any solution should be flexible and constantly questioned.



Eleonora Lima

How Individuals Cope with Information Overload
AI and Big Data

Generative AI, as already discussed, enables the creation of visual and written content at unprecedented speed and volume. This situation further strains people's capacity to cope with the amount of available information and impairs their ability to direct their attention to content that is meaningful, relevant, and reliable – a challenge that has become increasingly prevalent since the advent of the internet. While generative AI is responsible for data pollution, AI algorithms are often invoked as the best solution for navigating this information overload (Costa and Macedo 2013; Hoving 2022; Siegel et al. 2024). Indeed, people increasingly rely on the curatorial ability of recommendation algorithms to select which music, films, and books to consume. As more and more aspects of people's lives occur on online digital platforms, AI technologies are used as assistants to, if not replacements for, users.

According to Accenture’s Consumer Pulse 2024 research, 79% of travellers wish generative AI could manage tasks such as booking hotels and flights on their behalf. Sixty-four percent of consumers feel overwhelmed by the amount of information available for travel purchases (hotels, resorts, flights), 68% feel they check too many sources to understand the available options, and 77% wish they could identify the options that suit their needs more quickly and easily. The research also found that consumers feel that booking a hotel can be harder than buying a car and as nerve-wracking as getting a mortgage. This highlights the challenges travellers face in navigating the information overload that plagues the travel industry, and the potential loss of customers and revenue that companies could suffer as a result.

A similar situation is faced by people using dating apps. According to a survey conducted by Forbes Health in 2024, 79% of Gen Z users reported dating-app fatigue caused by the intensive effort required to find a good match, whether for a casual or a long-term relationship, given the number of potential matches (Prendergast and DiGiacinto 2024). It is not surprising, then, that AI has been enlisted to help users in this area as well. This is the case with Volar, an app whose AI chatbot identifies profiles based on shared interests and compatibility and conducts initial chats with potential matches (Hoover 2024). According to its description on the Apple App Store, Volar offers “AI-Simulated Conversations” and organises “blind-dates” in the user's absence, allowing them to “jump past the repetitive and awkward icebreaker messages.” Besides the obvious concerns about the erosion of the ability to connect with other people, and the ethical implications of tricking someone into believing they are having an intimate conversation with a person when it is instead a chatbot, the app clearly amplifies the general problem of data pollution, even while perhaps helping the individual user.

A final and perhaps more delicate field of application is that of political debate and engagement. In a briefing titled Artificial Intelligence, Democracy and Elections, drafted by Adam and Hocquard for the European Parliament (2023), the potential benefits of LLMs and AI chatbots for voters as well as politicians are clearly outlined:

For instance, political recommender systems could form the basis of a chatbot responding to citizens’ questions on candidates’ electoral programmes. Moreover, specially designed AI tools could update citizens on how policies in which they have an interest are evolving and empower them to better express their opinions when addressing governments and politicians. Civic debate could improve thanks to the capability of AI to manage massive political conversations in chat rooms. […] On the politicians’ side, AI can be helpful in summarising citizens' comments made during public consultations or received by email.

The hope that AI recommendation systems and LLM technologies can be used to help people navigate many complex and often technical political issues is not shared by all, as critics fear that AI, prone to hallucinations and riddled with biases, can be detrimental to people’s political awareness (Kreps and Kriner 2023). Furthermore, it is worth exploring whether delegating attention to an AI agent will deprive people of their agency and autonomy. These are crucial questions that the KT4D project explores through the design and deployment of the Digital Democracy Lab, to which I refer for a further and more thorough analysis.

Past Examples

Over the centuries, people have dealt with the recurring fear of losing the ability to memorise things, and therefore of 'owning' knowledge. This was the first concern raised about the introduction of written language, as explained by John Hollander: “The notorious charge levelled by Socrates in the Phaedrus against the technology of writing, and how inventing it supplanted and ruined the earlier, better, and somehow more natural operations of memory” (Hollander 1997: 306). Indeed, the ability to memorise and retain information without the support of any recording device has always been perceived not just as an empowering tool, but as a demonstration of the essence of human resilience and, indeed, identity.

An extreme example of the value of memorised knowledge is described by Primo Levi in his memoir If This Is a Man (1947) about his incarceration in the Auschwitz concentration camp. One of the chapters recounts how Levi tried to remember the canto of the Divine Comedy dedicated to Ulysses so as to teach it to a fellow prisoner. Levi's struggle to remember Dante's verses is an act of resistance and defiance against the horror of the Lager: the knowledge he memorised cannot be taken away, and it is part of who he is.

A less tragic plea in favour of the power of memorisation came in 2014 from the semiologist Umberto Eco who, voicing a common concern, wrote an open letter to his nephew encouraging him not to rely too much on the Internet and instead to memorise information. Ceasing to exercise one's memory and relying only on search engines would be, in Eco's opinion, like unlearning how to walk and relying only on vehicles for transportation.

In all these examples, data storage technologies of any type, be they written books or the Internet, are described as something that should either be regarded with suspicion or be used only to support people's ability to retain information. Memorisation is what keeps attention skills alive and prevents people from becoming passive consumers of information that, because it is always available, is never truly acquired.

Analyses of the impact of knowledge technologies on attention and memorisation usually develop from two opposite stances. Some believe that information overload does not depend on the technological affordances of the knowledge technology at hand, but on the sheer amount of information created and shared. This position stems from the conviction that there is an intrinsic limit to the human ability to absorb and process knowledge (Klingberg 2009) and that, once this limit is surpassed, it does not matter whether the deluge of information comes from books or AI chatbots. Others, instead, believe that the specific knowledge technology through which content is stored and shared matters a great deal in determining people's attention skills (e.g. Barton et al. 2021). On this view, listening to a story rather than reading it, or reading the news in a printed magazine rather than on one's phone, completely changes the way people process information.

To the first group belongs, for instance, Alvin Toffler, who popularised the term ‘information overload’ in his book Future Shock (1970). In the book, Toffler discussed the impact of the then new technologies – television and computers – on people's cognitive abilities and psychological wellbeing. His argument was that individuals could cope with modernity only if they managed to survive the shock of having to deal with an unprecedented amount of information at an unprecedented speed. In the chapter “Knowledge as fuel,” Toffler describes human history as a succession of ever-faster knowledge technologies, each producing more data than people are able to manage. The culmination of this evolution was the invention of the computer, which, in Toffler's words, “has raised the rate of knowledge-acquisition to dumbfounding speeds” (Toffler 1970: 32). His focus, however, was not so much on the impact of information technologies on culture at large as on individuals' perception and attention skills. First, he was concerned with people's ability to draw conclusions and predictions from the information they received, as this is the basis of learning and of rational behaviour.

Toffler argued that rational behaviour depends on a constant flow of data from the environment. It relies on an individual’s ability to predict the outcomes of their actions with reasonable success. To achieve this, they must be able to anticipate how the environment will respond to their acts. Sanity itself, he posited, hinges on one’s capacity to forecast their immediate, personal future based on environmental information. However, when an individual is thrust into a rapidly changing or novelty-rich situation, their predictive accuracy plummets. Consequently, they struggle to make the reasonably correct assessments upon which rational behaviour depends. This situation, Toffler argued, can lead to fatigue to the point that he compared the experience of information overload to post-traumatic stress disorder: 

Managers plagued by demands for rapid, incessant and complex decisions; pupils deluged with facts and hit with repeated tests; housewives confronted with squalling children, jangling telephones, broken washing machines, the wail of rock and roll from the teenager's living room and the whine of the television set in the parlor—may well find their ability to think and act clearly impaired by the waves of information crashing into their senses. It is more than possible that some of the symptoms noted among battle-stressed soldiers, disaster victims, and culture shocked travelers are related to this kind of information overload. (Toffler 1970: 353).

While this might be an exaggeration, it has been shown that excessive stimuli can cause psychological damage, as in the case of social media burnout (Liu and Ma 2020). For Toffler, the problem with information overload lies mainly in the amount of information and the speed at which it is produced and shared, not necessarily in the way different knowledge technologies accomplish this. His analysis focuses on the output, not on the medium.

A different take is offered by Neil Postman, who belongs to the group that assigns to each knowledge technology a distinct effect on people's attention skills as a consequence of its particular affordances. In his book Amusing Ourselves to Death: Public Discourse in the Age of Show Business (1985), Postman analyses the harmful impact of television on people's attention and on their ability to concentrate on and evaluate the content of what they watch. For Postman, this is due to television's particular affordances, which render it a tool of mass distraction.

Postman contends that televised news presentation is essentially a form of entertainment programming. He points to elements like theme music, commercial interruptions, and the presence of “talking hair do’s” (Postman 1985: 100) as evidence that televised news lacks the gravity to be taken seriously. Postman further explores the contrasts between written speech, which he argues reached its zenith in the early to mid-nineteenth century, and televisual forms of communication that primarily rely on visual images to ‘sell’ lifestyles. He posits that, owing to this shift in public discourse, politics has moved away from focusing on a candidate's ideas and solutions, emphasising instead how favourably the candidate comes across on television. Drawing on media scholar Marshall McLuhan’s ideas – modifying McLuhan's aphorism “the medium is the message” to “the medium is the metaphor” (Postman 1985: 3) – Postman describes how oral, literate, and televisual cultures fundamentally differ in how they process and prioritise information. He argues that each medium is suited to a different kind of knowledge, and that the faculties necessary for rational inquiry are weakened by television viewing. Reading, a prime example cited by Postman, demands intense intellectual involvement that is both interactive and dialectical, whereas television requires only passive engagement.

Interestingly, Postman opened his book on TV and information overload with a comparative analysis of two of the most popular dystopian novels of the 20th century, Huxley’s Brave New World and Orwell’s 1984. In the foreword, Postman claimed that while 1984 is preoccupied with censorship, Brave New World is preoccupied with information overload, and he sided with Huxley. Postman wrote:

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. […] In 1984 […] people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. (Postman 1985: XIX-XX)

Postman viewed television's entertainment value as a present-day “soma,” the fictitious pleasure drug in Brave New World, through which citizens' rights are exchanged for consumers' entertainment.

Attention and Affective Content

AI and Big Data

Besides the issue of data pollution, another aspect of AI technologies that impacts people's attention skills and, consequently, their ability to critically evaluate the content they consume, is what has been called “emotional AI” or “affective computing.” This refers to AI chatbots' ability to simulate understanding of affect, emotion and intention (Bakir and McStay 2020).

Affective computing is increasingly prevalent in our daily lives. It is embedded in many applications on our mobile devices and in physical spaces, such as recommender algorithms, personal AI assistants, and work-related emotion trackers. Critically, emotional AI systems are capable not only of passively reading and classifying emotions, but also of predicting and influencing users' emotional states. These technologies can manipulate users' affective states (for instance, by helping regulate emotions), which ultimately has an impact on people’s attention.

For instance, affective computing is used in advertising to analyse consumers' behaviour and optimise sales, as well as in the health sector to improve patient care and enhance doctor-patient communication. Some applications of emotional AI are undeniably exploitative, as in the case of the AI emotion-detection software tested by the Chinese government on the incarcerated Uyghur population (Healy 2021). Indeed, in the AI Act the European Parliament added biometric emotion recognition systems to its high-risk list and banned them in places such as workplaces and educational institutions. Other applications, instead, aim at rendering AI agents more aligned with human values and therefore more user-friendly, such as the potential use of emotional AI in library settings to “grasp users’ emotional responses to materials, empowering them to curate collections that resonate with diverse sentiments” (Bonal 2024).

However, when considering the impact of affective computing on people’s attention, there is a further crucial element beyond whether a given application is exploitative or beneficial: the role that emotions play in enhancing or weakening people’s critical abilities. Is emotional content more memorable and thus more effective, so that emotional AI can be used for good to enhance people's engagement with information, as well as with others? Or, on the contrary, do emotions reduce people's critical awareness and make them more prone to manipulation, so that affective computing should be regarded with suspicion under any circumstances? While much has been said about this in the field of psychology – for which we refer to Module A – it is also important to consider the long-standing debate on the interplay between emotions, attention, and knowledge technologies.

Past Examples

The impact of emotions on attention and critical awareness has been a subject of debate for centuries, particularly in the fields of theatre and political communication. These two areas share a pedagogical aim and impact on public life and civic engagement, making the topic of attention and awareness particularly relevant.

Western theatre was invented in all essentials in ancient Greece, and more specifically in Classical Athens, where theatre was always a mass social phenomenon, considered too important to be left solely to theatrical specialists or even confined to the theatre. Athenian tragic drama did not have merely a political background, a passive setting within the polis, or city, of the Athenians. Tragedy, rather, was itself an active ingredient, and a major one, of the political foreground, featuring in the everyday consciousness and even the nocturnal dreams of the Athenian citizen (Cartledge 1997).

It is in this socio-cultural setting that Aristotle elaborated his influential theory of the role of emotions in tragedy. Aristotle’s theory of tragedy in the Poetics incorporates a model of 'emotional understanding': understanding filtered through the affective and evaluative responses embodied in emotions. The Poetics treats the defining experience of tragedy as involving a concentrated surge of pity and fear. Indeed, Aristotle wrote, tragedy “is an imitation of an action […] through pity and fear effecting the proper purgation of these emotions” (Aristotle VI: 23). However, for such emotions not to overwhelm the audience, but rather to convey an ethical message, tragedy must tie them to the audience’s cognitive grasp of the unified patterns of human action represented in plot-structures. From these ideas derives the classical doctrine of the three unities – unity of action, unity of place, and unity of time – according to which a tragedy, to be emotionally compelling but not manipulative, should represent a single action occurring in a single place within the course of a day. In a way, we can say that Aristotle was trying to balance emotional content against information overload.

The need to temper emotions, or to ban them completely from theatrical representation, is at the centre of a centuries-long debate on the educational and moral role of tragedy and of theatre more generally. Without retracing the entire debate, it is worth mentioning another pivotal playwright and theorist of the role emotions play in people’s attention: Bertolt Brecht. In his theatre and film practice, Brecht hoped to write and design art that would lead spectators to think, question, and learn about the social conditions exhibited in the work (Woodruff 1988; Krasner 2006). Instead of encouraging immersion and illusion, or what he refers to as a “hypnotic experience,” Brecht favoured “the ‘teaching’ of the spectator a certain quite practical attitude” (Brecht 1964: 78), or what he also calls a critical attitude. Instead of identifying with characters, Brecht wanted spectators to keep a distanced perspective, something he aimed to accomplish via an ‘alienation effect’, or Verfremdungseffekt (Kelly 2020). Indeed, in his plays thoughtless immersion and simple empathy are disrupted by various techniques that lead to thoughts and feelings that might challenge and change viewers, and ultimately transform human relations themselves.

Brechtian ideas about emotion were brought into mainstream film theory through the ‘apparatus’ theory of the 1970s and 1980s, which combined Althusserian Marxism, Lacanian psychoanalysis, and Saussurean semiotics (DeLauretis and Heath 2016). Many apparatus theorists affirmed what they took to be Brecht’s denigration of the emotions in the viewing experience, holding that emotions contribute to entranced immersion and political mystification at the expense of a more critical distance in film viewing. Apparatus theorists found mainstream film to be both critically numbing and reactionary, and tended to see pleasure – much of which, I would add, is obtained through emotional experiences – as a trap or lure. The unspoken premise was that emotion was opposed to reason and was the enemy of the distanced, critical thought that would enable the spectator to escape the narrative pleasures that brought on complacent acceptance of bourgeois ideology. For many apparatus theorists, television was an even more manipulative medium than cinema (Hilmes 1985), because the commercial content of many programmes counted on reducing people's alertness to make advertising more effective.

Interestingly, this analysis was challenged by Jean-Louis Comolli, editor of Cahiers du Cinéma from 1965 to 1973 and a film studies scholar close to the ‘apparatus’ circle. Instead of focusing on the aesthetics and content of television and cinema, Comolli considered the material conditions in which the message is consumed. In his article “Notes sur le nouveau spectateur” (1966), Comolli points to the social and psychological function of the darkened movie-theatre, which makes it extremely hard for the viewer to remain critical and alert. This situation, Comolli maintained, affects commercial and art films alike, and it is not true that the latter, by virtue of their intellectual and engaged content, support viewers' critical awareness more than the former. The television set, by contrast, located in a lighted room, avoids the torpor that dark movie-theatres induce. Comolli wrote: “the small screen is the only one that often opens onto a lighted ‘theatre’. This is confirmed by re-watching film masterpieces on television: [...] if you re-watch these films in a half-light propitious to attentiveness, I believe that you watch them differently, and better, than in the movie-theatre” (Comolli 1966). What is interesting in Comolli’s analysis is how he shifts attention away from the technical properties of cinema and television, as well as from their content and aesthetics: what matters to him is the material environment in which content is consumed and how it influences attention and emotional reactions.

While all these examples discuss emotion and attention in artistic production, it is also important to consider how the emotional appeal of images and sounds has been exploited or feared by politicians. While some used it as a tool to gather popular support by stirring emotions and visceral reactions, others worried that their political message might become secondary to the audio-visual spectacle offered by mass media, and thus be ignored or misunderstood. From the use of radio in Nazi propaganda to manipulate people through emotion as well as disinformation (Birdsall 2021), to the Italian Communist Party in the 1950s, at first highly suspicious of the use of television for political communication and then happily taking advantage of TV's emotional register (Fantoni 2023), there are many possible examples that can shed light on the political applications of today's emotional AI. This is an area that will be mapped and explored in the next and final version of Module C.