AI is all around us – in our homes, cars, workplaces and in our pockets. The more pervasive AI becomes, the more important it is to ensure that applications are trustworthy and that the public trusts them. In this context, the EU-funded AI4Gov project will address the ethical, trust, discrimination and bias issues associated with AI and Big Data. As a collaborative project involving a wide range of stakeholders, including policymakers, public organisations, legal experts and social scientists, it will introduce solutions and frameworks to increase trust in democratic processes and provide policymakers with automated, evidence-based decision-making tools. It will also leverage state-of-the-art tools to deliver unbiased, fair and trusted AI.

AI4Gov is a joint effort by policymakers, public institutions and organisations, legal experts, Social Sciences and Humanities researchers, and Big Data/AI experts to unlock the potential of Artificial Intelligence (AI) and Big Data technologies for developing evidence-based innovations, policies and policy recommendations that harness the public sphere, political power and economic power for democratic purposes. The project will also uphold individuals' fundamental rights and values when AI and Big Data technologies are used. It thus aims to contribute to the growing research landscape addressing ethical, trust, discrimination and bias issues by providing an in-depth analysis of, and solutions to, the challenges that stakeholders in modern democracies face when attempting to mitigate the negative implications of Big Data and AI.

To this end, the project will introduce solutions and frameworks with a two-fold aim: to help policymakers make automated, informed and evidence-based decisions, and to increase citizens' trust in democratic processes and institutions. Moreover, the project will leverage state-of-the-art tools to provide unbiased, discrimination-free, fair and trusted AI. These tools will be validated on their ability to deliver technical and/or organisational measures, causal models of bias and discrimination, and standardised methodologies for achieving fairness in AI.