UK Government Call for Evidence: KT4D Submission on AI in a Government Context


“To mitigate against some of the risks outlined in this evidence, care must be taken to ensure that the use of AI is transparent to the public, that steps are taken to minimise bias and discriminatory effects, that AI systems are not trusted unduly, and that there is sufficient human oversight in the development of AI tools and their outputs”

The Public Accounts Committee of the UK Parliament evaluates the effectiveness of public spending on the implementation of public projects and the delivery of services. Earlier this year, the Committee issued a call for expert submissions on how AI and other knowledge technologies can underpin government decision-making and serve as vehicles for knowledge dissemination.

Professor Keith Hyams and Dr. Jessica Sutherland from the University of Warwick, a consortium member of the KT4D project, submitted written evidence, now published, on the ethical risks and opportunities that can arise from the use of Artificial Intelligence (AI) in a government context. The submission builds on the work of Professor Hyams and Dr. Sutherland at the Interdisciplinary Ethics Research Group (IERG) in the Department of Politics and International Studies, focusing on three key areas: the use of automated decision-making and user profiling, generative AI in the government context, and public perception and trust in the widespread use of AI in government decision-making.

Core Elements of the Written Evidence

Automated Decision-Making and User Profiling

In this section of their written evidence, Hyams and Sutherland captured the advantages of the proliferation of automation systems for more efficient workflows and processes, while highlighting the vulnerabilities these efficiencies introduce. Automated tools can outperform traditional working patterns, assisting and even replacing humans on administrative, operational, and research tasks. Without human intervention, however, the biased and unethical use of personal data becomes a threat. Additionally, although user profiling in decision-making can accelerate the analysis of large personal datasets that would be far too time-consuming for humans, there is a risk that automated AI systems have been created and trained on inherently biased datasets, potentially magnifying and reproducing existing social issues and discriminatory practices.

Generative AI in the Government Context

Professor Hyams and Dr. Sutherland also discussed the relationship between generative AI and government decision-making. They described the ability of generative AI to assist governments with the processing, reporting, and visualisation of information, while also offering citizens an avenue to access information through chatbots or robocalls. While the evolution of generative AI has significantly improved efficiency where government processes have lagged, the technology has inherent weaknesses that cause concern among experts. Hyams and Sutherland specifically reference the tendency of generative AI to lean on aggregated trends and reproduce information that may not be correct, highlighting the importance of users maintaining a critical eye towards these technologies until guardrails are in place to overcome these existing deficiencies.

Public Perception and Trust

This final section addresses the public concerns that might surface with the increased use of AI in government. While the list is extensive, the main opposition arguments presented here relate to the lack of human oversight, the lack of transparency about how, where, when, and why AI is used by governments, and inadequate user education and trust in AI systems.

The full report can be accessed here

----------

KT4D Project Alignment

KT4D encourages participation in these types of research activities, not only to underpin the power of knowledge technologies across all aspects of society, but also to amplify the work being done by experts to identify and overcome current challenges related to AI and Big Data. Defending democratic values is imperative to ensure a strong and united Europe, and with the rapid evolution and implementation of these technologies, it is important to spotlight and share information that can inform society and establish expectations for a transparent and secure future.

The most recent KT4D Policy Brief expands in more detail on the project's aim to defend democracy from the misuse of technological advancements, identifying a culture-centric approach to harnessing the power of emerging technologies.