3.4.3 Trust
This section examines how people develop trust – or distrust – in knowledge technologies, considering three main aspects. First, how trust depends on technological affordances, that is, how people’s level of comfort changes with the material and technical features of different devices. Second, how confidence in different KTs depends on the type of information and output they generate. Lastly, how trust in KTs and the knowledge they produce is fundamentally shaped by people’s faith in the cultural and political institutions regulating them.
Accuracy and Reliability
AI and Big Data
The reliability of AI-generated knowledge, both written and visual content, is a topic of significant concern and debate. While AI systems have demonstrated remarkable capabilities in content generation, there are several issues that affect their trustworthiness.
The potential for AI to be used for malicious purposes, particularly in the realm of political communication and the creation of deepfakes, poses a significant threat to the integrity of online information. The emergence of AI-powered ‘disinformation machines’ like CounterCloud, capable of generating fabricated news articles, historical events, and even reader comments, underscores the ease with which AI can be weaponized to sow doubt and manipulate public opinion (Knight 2023). This raises critical questions about the ethical implications of such technologies and the need for robust mechanisms to detect and mitigate AI-generated misinformation. The dilemma surrounding CounterCloud exemplifies the complex balance between protecting democratic information and educating the public about the potential for AI-powered manipulation. While secrecy may seem necessary to prevent malicious use, transparency could empower individuals to critically evaluate information and develop a greater understanding of how AI-generated propaganda operates. This underscores the importance of fostering critical thinking skills and promoting media literacy to combat the growing threat of AI-driven disinformation.
A second concern is that AI-generated content can introduce errors and inaccuracies into academic publications (Currie 2023). The recent retraction of a scientific article containing nonsensical AI-generated images, including a depiction of a rat with an implausibly large penis, highlights the vulnerability of peer-review processes to AI-generated content (Knapton 2024). This incident underscores the need for stricter guidelines and verification processes to ensure the authenticity and accuracy of scientific publications. The response from scientific journals, such as Science’s updated editorial policies explicitly prohibiting the use of AI-generated content (Thorp and Vinson 2023), reflects a growing awareness of the potential for AI to undermine the credibility of academic research. However, the challenges of detecting AI-generated content, particularly when it spans international boundaries and involves multiple stakeholders, necessitate a collaborative approach to this emerging issue.
The inherent biases present in AI systems, particularly in LLMs, raise concerns about the potential for AI-generated content to perpetuate and amplify existing societal inequalities. A UNESCO (2024) study examining gender bias in LLMs revealed clear evidence of stereotyping against women, with female names being associated with traditional gender roles and male names linked to careers and leadership positions. This highlights the need for ethical considerations in AI development to ensure that AI systems reflect human diversity and promote equality. The study’s findings underscore the importance of addressing bias in AI training data and algorithms to mitigate the risk of perpetuating harmful stereotypes. This requires a multi-faceted approach, including the development of ethical guidelines for AI development, the promotion of diversity in AI research and development teams, and the ongoing monitoring and evaluation of AI systems for bias.
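To make the mechanics of such bias audits concrete, the following minimal Python sketch shows a WEAT-style word-association test, one common family of measurements in this area. It is a sketch under stated assumptions, not the UNESCO study’s actual method: the tiny three-dimensional vectors are invented for illustration, whereas a real audit would use embeddings extracted from the model under scrutiny.

```python
# Minimal sketch of a WEAT-style association test, one common family of
# measurements used in audits of bias in language models. The tiny
# 3-dimensional vectors below are invented for illustration; a real
# audit would use embeddings extracted from the model under scrutiny.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attributes_a, attributes_b) -> float:
    """Mean similarity to attribute set A minus mean similarity to set B.
    A positive score means the word leans towards A."""
    return (float(np.mean([cosine(word_vec, a) for a in attributes_a]))
            - float(np.mean([cosine(word_vec, b) for b in attributes_b])))

# Hypothetical embeddings, chosen to mimic the stereotyped associations
# that audits such as UNESCO (2024) report.
emb = {
    "she":    np.array([0.9, 0.1, 0.0]),
    "he":     np.array([0.1, 0.9, 0.0]),
    "home":   np.array([0.8, 0.2, 0.1]),
    "career": np.array([0.2, 0.8, 0.1]),
}

career, home = [emb["career"]], [emb["home"]]
for word in ("she", "he"):
    score = association(emb[word], career, home)
    print(f"{word}: career-vs-home association = {score:+.3f}")
# A biased model scores 'he' towards career and 'she' towards home;
# an unbiased one yields scores close to zero for both.
```

Run on real embeddings, a persistent gap of this kind is the quantitative counterpart of the stereotyping the UNESCO study describes, and it is the sort of signal that ongoing monitoring of AI systems would track.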
The examples discussed above illustrate the multifaceted challenges to trust in AI-generated content, ranging from deliberate manipulation to unintentional biases and inaccuracies. Addressing these challenges requires a collaborative effort involving researchers, developers, policymakers, and the public. Robust verification processes, ethical guidelines, and ongoing scrutiny of AI systems are essential to ensure their reliability, fairness, and responsible use. As AI continues to evolve, it is crucial to prioritize the development of AI systems that are not only capable but also trustworthy, ethical, and aligned with human values.
Past Examples
The challenge of trusting knowledge technologies is not new. Throughout history, societies have grappled with the dual challenge of trusting both human operators and the technologies themselves. The decision to place more faith in a device than in a fellow human is often shaped by a multitude of factors – cultural, political, and technological – and is always contextual.
A compelling example of this trust dilemma can be found in the messaging systems of Ancient Greece. Greek messengers, known as hemerodromoi or “day couriers,” were renowned for their speed and equipped with darts for protection (Ceccarelli 2013: 11-12). Despite their status and skills, their reliability was constantly questioned, either due to the risk of message interception by enemies or the potential unreliability of human memory (Eidinow and Taylor 2010: 35).
To address these trust issues, the Greeks developed one of the earliest forms of encryption: the scytale (Diepenbroek 2023). This device, a cylinder with a strip of parchment wound around it, allowed for the creation of a transposition cipher. The recipient would need a rod of the same diameter to decrypt the message, providing a measure of security against interception. The scytale, first mentioned by the Greek poet Archilochus in the 7th century BC, represents an early attempt to use technology to enhance the trustworthiness of communication. However, it also introduced new vulnerabilities – anyone intercepting the message and understanding the method could potentially decrypt it with some effort.
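The cipher’s mechanics are simple enough to sketch in a few lines of code. In the minimal Python sketch below, the circumference parameter plays the role of the rod’s diameter; the padding character is an assumption added for illustration.

```python
# Minimal sketch of a scytale-style transposition cipher. The
# 'circumference' parameter plays the role of the rod's diameter:
# only a recipient wrapping the strip around a rod of the same size
# (i.e. using the same number) recovers the message. The padding
# character is an assumption added for illustration.

def scytale_encrypt(plaintext: str, circumference: int, pad: str = "_") -> str:
    """Write the text along the wound strip, then read the unwound strip."""
    # Pad so the strip divides evenly into turns of equal length.
    if len(plaintext) % circumference:
        plaintext += pad * (circumference - len(plaintext) % circumference)
    # Taking every circumference-th character simulates reading down
    # the strip once it is unwound from the rod.
    return "".join(plaintext[i::circumference] for i in range(circumference))

def scytale_decrypt(ciphertext: str, circumference: int, pad: str = "_") -> str:
    """Decryption is the same transposition with the complementary dimension."""
    turns = len(ciphertext) // circumference
    return scytale_encrypt(ciphertext, turns).rstrip(pad)

message = "SEND MORE TROOPS TO SPARTA"
secret = scytale_encrypt(message, 5)
print(secret)                      # unintelligible without the right rod
print(scytale_decrypt(secret, 5))  # original message restored
```

As the decrypt function makes explicit, anyone who intercepts the strip and guesses the right circumference can read the message – precisely the residual vulnerability noted above.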
For longer communications where the scytale was impractical, the reliability of the messenger’s memory became a critical concern. This led some politicians, like Nicias, to prefer written messages over oral ones. Thucydides reports on Nicias’ preference for written communication (Eidinow and Taylor 2010: 41; Ceccarelli 2013: 142-6), highlighting the tension between trusting human memory and trusting written records. Thucydides explained Nicias’ decision: “He feared, however, that the messengers, either through inability to speak, or through failure of memory, or from a wish to please the multitude, might not report the truth, and so thought it best to write a letter, to ensure that the Athenians should know his own opinion without its being lost in transmission” (Thucydides 1950: 491-2).
This historical example illustrates how the choice of communication method – oral or written – was influenced by considerations of trust, accuracy, and the potential for manipulation or misinterpretation.
The debate over the trustworthiness of different forms of communication continued through history. With the dominance of written expression until the 19th century, textual manipulation became a central concern. The field of philology, for instance, initially focused on verifying religious texts, only later expanding to literary works. A notable example of early textual analysis for verification purposes is Lorenzo Valla’s De falso credita et ementita Constantini Donatione declamatio (On the Donation of Constantine, 1440). This work used philological methods to prove that the Donation of Constantine, a document purportedly from the 4th century granting the Western Roman Empire to the Catholic Church, was a forgery (Renna 2014). This demonstrates how critical analysis of texts could challenge established power structures and highlight the importance of verifying the authenticity of important documents.
The advent of mechanical image reproduction, particularly photography, introduced new dimensions to the trust debate. Interestingly, early reactions to photography were polarized. Some viewed it as an esoteric, potentially dangerous practice, while others saw it as a means of producing objective, reliable representations of reality.
The concept of “spirit photography,” which claimed to capture images of ghosts and spiritual entities, exemplifies the more mystical perceptions of early photography (Willburn 2012). Even renowned figures like Arthur Conan Doyle engaged with this practice, publishing The Case for Spirit Photography in 1922 to defend spiritualist photography.
Despite these anxieties, the potential of photography for reliable documentation was quickly recognized. Early practitioners, often inventors, engineers, and scientists, utilized photography for various forms of documentation, including portraiture and urban landscapes, as exemplified by Eugène Atget's photographic documentation of Parisian streets. The level of detail rendered by photography, surpassing the capabilities of traditional art forms, further solidified its reputation for objectivity.
This perceived objectivity, however, was not without its limitations. As Kosminsky et al. (2019) argue, the phrase "the camera cannot lie" reflects a belief in the camera's mechanical objectivity, excluding human bias. This belief, they contend, is a misconception, as the interpretation of photographic images is inherently influenced by individual and cultural experiences.
Drawing a parallel between photography, perspective drawing, and data visualization, Kosminsky et al. (2019) suggest that all three technologies, despite their apparent objectivity, are susceptible to bias. They argue that the history of visual media has conditioned us to perceive these technologies as representations of objective reality, leading to a potential "belief at first sight" – an uncritical acceptance of their representations.
Therefore, while photography initially held the promise of objective representation, its history reveals a complex relationship with trust. This historical context provides valuable perspective on current debates surrounding AI-generated content. Just as early photography was simultaneously viewed as mystical and objective, AI technologies today are seen as both potentially deceptive and capable of unprecedented accuracy. Understanding these historical parallels can inform our approach to building trust in modern knowledge technologies.
A final important aspect to consider is how narratives accompanying the introduction of new knowledge technologies shape people’s trust. This phenomenon is exemplified by the marketing strategies employed for early personal computers in the 1980s. Advertisements emphasized domesticity and tameness to assuage fears of a dystopian, machine-dominated future. The objective was to portray computers as user-friendly, adaptable to family life, and comparable to common household appliances. Marketing campaigns frequently depicted computers in domestic settings, such as kitchens, or showcased families using them for educational purposes. Some companies drew parallels between computers and small, amiable animals to counteract perceptions of an impersonal, robotic experience or anxieties about malevolent machines supplanting humanity (Lima 2020: 14).
Apple, Olivetti, and IBM were particularly adept at addressing these concerns through their marketing approaches. Apple’s iconic 1983 Super Bowl commercial for the Macintosh, directed by Ridley Scott, invoked George Orwell's 1984 to position their product as a tool of resistance against technocratic oppression (Friedman 102-120; Stein). Olivetti’s M20 advertisement similarly referenced Orwell, featuring a young girl with a lamb and the slogan “1984: Orwell was wrong” (Lima 2020: 14-15). IBM took a different approach, utilizing Charlie Chaplin's character to portray their PCs as approachable and user-friendly. This strategy cleverly inverted the popular image from Modern Times, where Chaplin’s character is subjugated by machinery, instead depicting him comfortably using a computer (Caputi 1986). These marketing strategies aimed to alleviate public apprehensions about computers by presenting them as benign, domesticated technologies that could enhance rather than dominate human life.
Knowledge Technologies and Social Trust
AI and Big Data
While AI technologies offer potential opportunities for fostering community and improving societal outcomes, they also pose significant threats to the very fabric of trust that underpins human interaction.
The pervasive influence of AI-driven algorithms and personalized online experiences can lead to a desensitization to information and a loss of trust in human judgment (Kreps and Kriner 2023). The constant filtering of information based on individual preferences can create echo chambers and reinforce existing biases, making it difficult to engage with diverse perspectives (Du 2023). This can also lead to a sense of uncertainty and distrust in the authenticity of information, as AI-generated content becomes increasingly sophisticated and indistinguishable from human-created content. The ability of AI to generate and disseminate false information at scale poses a significant threat to social trust. Malicious actors can exploit AI to create and spread disinformation campaigns, manipulate public opinion, and sow discord within communities (Islas-Carmona et al. 2024). This can erode trust in institutions, media outlets, and even individuals, leading to a fragmented and polarized society.
The opaque nature of many AI algorithms and the vast amounts of data they process can create a lack of transparency and accountability. This can lead to concerns about bias, discrimination, and the potential for misuse of personal data (de Fine Licht and de Fine Licht 2020). Without clear mechanisms for understanding and challenging the decisions made by AI systems, trust in these technologies can be undermined. The increasing use of AI for surveillance purposes raises serious concerns about privacy and individual autonomy (Saheb 2022). The ability to track and monitor individuals’ movements, activities, and online behaviour can create a chilling effect on free expression and erode trust in the institutions responsible for data collection and analysis.
Despite these challenges, AI and big data also offer potential opportunities for building trust. AI can facilitate new forms of community building and collaboration by connecting individuals with shared interests and providing platforms for collective action (Osborne 2024). Online platforms powered by AI can foster a sense of belonging and shared purpose, promoting social cohesion and trust (Pani et al. 2024). AI can be used to remove human biases and errors from decision-making processes, leading to fairer and more efficient outcomes (Houser 2019; Brown et al. 2023). This can enhance trust in institutions and systems by demonstrating their commitment to impartiality and effectiveness.
The development of participatory AI, where users have a voice in the design, development, and deployment of AI systems, is crucial for building trust (Birhane et al. 2022). This topic is the main focus of our Module H (submitted M 18). By involving diverse stakeholders in the process, participatory AI can ensure that these technologies are aligned with societal values and address concerns about bias, transparency, and accountability.
The impact of AI and big data on social trust is evident in specific relationships. Dating apps utilize recommendation engines that leverage user data to suggest potential matches. This can lead to “algorithmic awareness,” where individuals become conscious of the underlying assumptions and biases driving these recommendations (Shin et al. 2022). This awareness can erode trust in the authenticity of the matches presented and lead to a desire for “unfiltered” results, preserving freedom of choice and a sense of serendipity (Parisi and Comunello 2020). Another example is the use of tracking devices like Apple AirTag for children, which raises complex ethical questions about surveillance and trust (Kelly 2023). While these devices can provide peace of mind for caregivers, they also raise concerns about the potential for over-monitoring and the erosion of children’s autonomy. The question of whether these technologies are a necessary safety measure or a form of surveillance culture gone overboard remains a subject of ongoing debate.
The impact of AI and big data on social trust is a complex and evolving issue. While these technologies offer potential benefits, they also pose significant threats to the very foundations of trust that underpin human society. By addressing concerns about transparency, accountability, and privacy, and by embracing participatory approaches to AI development, we can harness the power of these technologies to build a more just and equitable future while preserving the essential values of trust and human connection.
Past Examples
The historical evolution of trust in knowledge technologies offers a rich tapestry of anxieties and opportunities, providing valuable insights into the current discourse surrounding trust in AI and big data. Examining past concerns surrounding technologies like the telephone and the internet reveals recurring themes that resonate with contemporary concerns, highlighting the complex interplay between technological affordances, social norms, and individual choices.
One prominent theme is the intrusion of technology into personal spaces and the potential for its misuse. Carolyn Marvin’s (1988) analysis of “electric courtship” highlights the anxieties surrounding the telephone’s impact on romantic relationships. The telephone, she argues, disrupted the delicate balance between private and public spheres, raising concerns about privacy violations, predatory behaviour, and the authenticity of emotional connection. The fear of the telephone operator listening in, the possibility of unwanted advances, and the perceived lack of genuineness in phone conversations all contributed to a sense of distrust. Marvin poignantly captures this anxiety, writing:
If this expansion meant progress in the introduction of electricity, it also threatened a delicately balanced order of private secrets and public knowledge, in particular that boundary between what was to be kept privileged and what could be shared between oneself and society, oneself and one’s family, parents, servants, spouse, or sweetheart. Electrical communication made families, courtships, class identities, and other arenas of interaction suddenly strange, with consequences that were tirelessly spun out in electrical literature. (Marvin 1988: 64)
Similarly, American novelist Jonathan Franzen (2008) explores the impact of cell phones on emotional expression and social connection. Famously critical of digital technologies, Franzen, in his essay “I Just Called to Say I Love You” (2008), analyses the use of cell phones during the 9/11 terrorist attacks and in their aftermath. During the attacks, cell phones were indispensable tools that allowed victims to say goodbye to their loved ones. Their messages were powerful, deep, and painfully real: “those terrible, entirely appropriate I-love-yous uttered on the four doomed planes and in the two doomed towers,” Franzen wrote. He then contrasts this poignant image of pain and love from 2001, when cell phones were not yet ubiquitous, with people’s habits in 2008 when, he believes, cell phones are to blame for the “disastrous sentimentalization of American public discourse”. Franzen accuses cell phones of changing people’s ways of displaying emotion in interpersonal communication and believes that this is changing American society to its core. As it has become easier and cheaper to talk to loved ones, the words and messages traditionally reserved for intimate and rare occasions are now multiplied and cheapened: “I’m talking about the habit, uncommon 10 years ago, now ubiquitous, of ending cell-phone conversations by braying the words ‘LOVE YOU!’ Or, even more oppressive and grating: ‘I LOVE YOU!’” (Franzen 2008).
These examples from Franzen and Marvin illustrate how the lack of trust depends on both the technological affordances of the device and the empathy and respect of the users. In both cases, what is under threat is human connection and the genuine display of affection.
Another recurring theme is the tension between technological advancement and social responsibility. The example of parents sending their children through the mail in the early 20th century (Lewis 2016) illustrates how technological innovation can be exploited for personal convenience, even when it raises ethical concerns. The parents’ decision to use the Parcel Post service, driven by economic necessity and a desire for safety, highlights the complex interplay between technological affordances, social norms, and individual choices. United States Postal Service historian Jenny Lynch tells Smithsonian Magazine (Lewis 2016) the story of baby James, who was just shy of the 11-pound weight limit for packages sent via Parcel Post and whose “delivery” cost his parents only 15 cents in postage (although they did insure him for $50). These and other similar stories were motivated by parents’ economic and organisational needs, as postage was cheaper than a train ticket, Lynch explains.
This irresponsible practice, while perhaps amusing in retrospect, was short-lived and eventually prohibited. Nonetheless it demonstrates how parents have historically sought to use new systems and technologies in unexpected ways to ensure their children’s safe transit. Like modern parents using AirTags, these early 20th-century parents were negotiating between their desire for their children's safety, practical considerations, and trust in a new system of transportation and communication.
A final aspect to consider is how knowledge technologies have allowed for networks of solidarity that have advanced social trust and a sense of community, as well as resisted oppression and social control. One striking example is the red de la calle (street network) in Havana, Cuba, which emerged in the early 2000s as a response to limited internet access (Alarcón 2023).
This illegal street network allowed people in Havana to connect their computers with others across the city, creating a local intranet. By 2017, it connected at least 20,000 people. The network became an alternative for Cubans seeking new ways to access information and communicate when internet access was severely restricted. Building and maintaining this network required significant trust among participants, both because of its illegal nature and because its physical infrastructure often needed to pass through neighbours’ properties. As one participant, Ernesto de Armas, described to NPR, gaining the trust of neighbours was crucial:
I remember we had to run the cable from my room over the wall […] we passed it over to the neighbor next door and I remember we had to go down and talk to her, explain what we were doing, and she got scared. “A cable in my house? But what is this?” We told her, “No, look, ma’am, we want to connect to the network.” “And what is the network?” So, imagine a 70-year-old lady – explaining the network to her was impossible. (Alarcón 2023)
A similar historical example, although set in a very different socio-political environment, is the “cyber-street” experiment conducted by Microsoft in North London in 1998 (Giussani 1998). This project provided internet access to residents of a street in Islington to see how it would affect community interactions. The experiment yielded positive results, with residents using email and a local electronic bulletin board to coordinate on various community issues, from opposing a parking scheme to organizing social gatherings. Pearson Phillips, a semi-retired journalist and participant in the project, started the “Barnsbury Bugle,” a monthly e-mailed newsletter. Phillips noted the transformative effect of the internet on community interactions: “The day I saw somebody put a notice up saying, ‘We’ll be in the pub at eight o’clock - if anyone would like a drink, please come along,’ I realised that this was going to work” (Brake 1998).
However, the project also faced criticism for not being sufficiently inclusive, with accusations that it favoured white, middle-class residents over council tenants. This highlights the importance of considering equity and representation in technological initiatives aimed at fostering community connections (Arthur 1997).
These historical examples provide valuable context for understanding current debates about AI and social trust. They demonstrate that concerns about privacy, authenticity, and the impact of technology on social relationships are not new. However, they also show how communities have often found ways to adapt to and benefit from new technologies, even in the face of initial scepticism or challenges. As we navigate the era of AI and big data, these historical lessons can inform our approaches to building and maintaining social trust in the digital age.
Trust in Regulation and Institutions
AI and Big Data
The implementation of AI systems in governance and public services raises critical questions about trust in cultural and political institutions and about responsibility for algorithmic decision-making. Public trust depends on people’s confidence in such institutions as much as on their enthusiasm – or tolerance – towards such technologies. Moreover, incidents involving manipulative or untrustworthy systems can further erode citizens’ trust in institutions.
One prominent example of how AI-driven decision-making can erode trust is the Robodebt scandal in Australia. The scheme, implemented by the Australian government between 2016 and 2019, used algorithms to calculate and recover social security debts. However, the system was flawed, leading to inaccurate debt assessments and significant financial hardship for many recipients (Lupton 2021). Cassandra Goldie of the Australian Council of Social Service aptly described the human cost of this failure: “The robodebt affair was not just a maladministration scandal, it was a human tragedy that resulted in people taking their lives” (Australian Associated Press 2022). This scandal raises crucial questions about accountability and responsibility in AI-driven systems, particularly regarding the roles of politicians, public servants, and developers. The public outcry over Robodebt contributed to the government's electoral defeat in 2022, highlighting the potential for AI errors to have significant political consequences.
The rise of AI-generated content presents another challenge to trust in institutions. As AI becomes increasingly sophisticated, it raises concerns about the authenticity and reliability of information. Leibowicz (2023) argues that audiences need to be able to distinguish between human-generated and AI-generated content, particularly in contexts like elections and historical record preservation. While watermarking and other disclosure methods are being explored, there are significant challenges to their implementation. The question of who has the authority to detect and certify AI-generated content is also critical. Controlled access to detection tools could lead to technical gatekeeping and undermine openness. Conversely, widespread access could be exploited by malicious actors. “If everyone can detect watermarks, that might render them susceptible to misuse by bad actors. On the other hand, controlled access to detection of invisible watermarks—especially if it is dictated by large AI companies—might degrade openness and entrench technical gatekeeping,” Leibowicz (2023) explains.
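To make the dilemma concrete, here is a minimal Python sketch of one statistical watermarking scheme discussed in the technical literature, a keyed “green list” over the vocabulary. This is an illustrative sketch only: the key, the hash, and the threshold are assumptions, not any vendor’s actual method.

```python
# Minimal sketch of a statistical 'green list' watermark, a scheme
# discussed in the AI-watermarking literature. A generator would bias
# its word choices towards a pseudo-random 'green' subset of the
# vocabulary; anyone holding the same secret key can then test whether
# a text is watermarked by counting green words. The key, the hash,
# and the threshold are illustrative assumptions, not any vendor's
# actual method.
import hashlib
import math

SECRET_KEY = "demo-key"  # assumption: shared by embedder and detector

def is_green(word: str, key: str = SECRET_KEY) -> bool:
    """Deterministically assign roughly half the vocabulary to the green list."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def detect(text: str, key: str = SECRET_KEY) -> float:
    """Return a z-score for the green-word count: near 0 for unmarked
    text, large and positive for text produced with the green-list bias."""
    words = text.split()
    if not words:
        return 0.0
    greens = sum(is_green(w, key) for w in words)
    n = len(words)
    expected, std = n / 2, math.sqrt(n / 4)
    return (greens - expected) / std

# A detector would flag text above some threshold (say z > 4) as
# machine-generated. Leibowicz's dilemma is about who holds SECRET_KEY:
# open access lets anyone verify, but also lets forgers craft text
# that passes (or evades) the test.
```

The sketch shows why access to detection is the crux: whoever holds the key can both verify and, in principle, game the scheme, which is exactly the trade-off between openness and gatekeeping described above.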
Finally, the use of AI in law enforcement, exemplified by the Oklahoma City Police Department’s use of ChatGPT to generate police reports, raises further concerns about trust. While officers are enthusiastic about the time-saving potential of AI, legal scholars and watchdogs express concerns about the impact on the criminal justice system. A recent article (Murphy and O’Brien 2024) reports that “Police officers who’ve tried it are enthused about the time-saving technology, while some prosecutors, police watchdogs and legal scholars have concerns about how it could alter a fundamental document in the criminal justice system that plays a role in who gets prosecuted or imprisoned”. The use of AI in generating reports raises questions about the accuracy, objectivity, and potential for bias in these documents, and it poses new legal challenges, as it is unclear whether a police report generated by an AI chatbot is legally admissible in a court of law.
Past Examples
To understand the current challenges, it is helpful to examine historical precedents. Specifically, we will consider one that speaks to the difficulties faced by political institutions in certifying reliable sources and knowledge, and one that pertains to the use of KTs in law enforcement and investigations and how this impacts people’s trust in the justice system.
The advent of the printing press in the mid-15th century led to a similar explosion of information and a need for new methods to verify sources. The old structure of monasteries and universities, which produced manuscripts and guaranteed their quality, quickly disappeared, and people had to develop new tools and practices to discriminate between their sources of knowledge (Blair 2010). Like today, there was a rapid and unprecedented increase in the data being produced, and no structure in place to verify its reliability. Two main solutions emerged: one from the top down, one from the bottom up.
First, only political authorities such as the emperor, local governments, or the Pope could grant the licence to print, which led to an imbalance in people’s access to knowledge: books printed in the Republic of Venice or in the Netherlands, which were relatively free-thinking places, were more reliable as they did not undergo censorship like books printed, for example, in the Papal States (Grendler 1975; Sachet 2020).
However, printers also took it upon themselves to develop a way to reassure their customers about the quality of their product (and the reliability of its sources). Each printer therefore developed a printer’s mark, which functioned as a trademark (Wolkenhauer and Scholz 2018). These marks became extremely important because they indicated who printed a book and where (a reputable printer? a free-thinking country?), and to this day scholars who work on early printed texts need to be knowledgeable about this system.
This system, however, was susceptible to forgery, as less reputable printers counterfeited the printer’s marks of more respected workshops. This nullified governments’ attempts to certify the quality of the sources printed in their own countries. The situation shares many aspects with the one imagined by Leibowicz (2023) should the practice of watermarking AI-generated content become widespread and regulated. On the one hand, this might cause disparities in access to knowledge and create enclaves of free speech versus more controlled ones – as in the case of the Papal States as opposed to the Netherlands – while on the other it could lead to watermark forgery, making it even harder for people to know what to trust.
The second example pertains to the use of photography and video in law enforcement and likewise provides valuable insights. The Rodney King case in 1991 demonstrated the power of video evidence in shaping public perception and trust in law enforcement. The video footage of King’s beating by police officers became crucial evidence, sparking widespread outrage and highlighting the potential for technology to expose institutional misconduct. “The jury’s verdict will never blind what the world saw,” stated Los Angeles Mayor Tom Bradley after the officers were acquitted (Stone 2021). The case also revealed the influence of racial prejudices and social tensions in interpreting technological evidence. “The jurors responded by acquitting the officers, discarding the seemingly obvious interpretation of Holliday’s video. They instead believed the skillful frame-by-frame analysis of the defense attorneys, who argued that the video was not evidence of police misconduct but of a justified response to King’s allegedly frightening actions,” writes Ristovska (2021) in her analysis of the Rodney King case vis-à-vis the killing of George Floyd caught on camera.
This case demonstrated how new technologies (in this instance, handheld video cameras) could dramatically impact public trust in law enforcement institutions and the judicial process. It also showed that racial prejudices and social tensions play a crucial role in people’s distrust of technologically reproduced information.
The historical parallels between the printing press, video recording, and AI highlight the ongoing challenges of navigating the relationship between technology, trust, and institutional authority. Failure to understand the complexity of these issues and the role played by social tensions and political opinions could lead to further erosion of trust in institutions and undermine the potential benefits of AI technologies.