Stressing the importance of transparency and fairness - Europe’s guidelines on the use of AI in Research
21 March 2024
The European Research Area (ERA) Forum recently developed a set of Guidelines on the responsible use of generative AI in research to support researchers within the European Community.
AI has become pervasive in all aspects of our lives, and just as it influences information and democracy - as highlighted in the first episode of our series entitled ‘Europe at the forefront of technology Regulation’ - it also influences research and knowledge creation. In this spiral of technological development, a growing number of areas require regulation and guidelines on how to deal with AI and how to avoid the risks that stem from its use. In its policy brief entitled ‘Culture's Role in Navigating Technological Change’, KT4D underlines the importance of considering often underestimated spheres, such as the cultural one, when assessing the risks related to AI, starting from the latest developments in European regulation.
While AI is allowing all research fields, from the sciences to the humanities, to develop faster, it also presents new challenges related to plagiarism, potential privacy violations and bias. Europe is well aware of the importance of regulating new technologies, as it has demonstrated with the AI Act and a wave of regulations and guidelines such as those on trustworthy AI.
Various institutions, such as universities and research organisations, have issued guidelines for using generative AI responsibly in research. However, the proliferation of these guidelines has created a complex landscape that is challenging to navigate. To address this, the European Research Area Forum has developed comprehensive guidelines tailored for funding bodies, research organisations, and researchers. While these guidelines are not binding, they provide support for fostering responsible AI use in research, complementing existing EU AI policies and initiatives and contributing to the advancement of ethical AI practices in science.
The guidelines' principles, which draw on those outlined in the European Code of Conduct for Research Integrity, emphasise reliability, honesty, respect, and accountability throughout the research process, encompassing aspects such as quality assurance, transparency, fairness, and consideration for societal impacts. As generative AI evolves, these guidelines will be regularly updated to remain relevant and supportive of researchers and organisations, reflecting ongoing collaboration within the ERA Forum.