KER 3

Social Computing Compass Tool for Democratic Design Justice and Democratic Framework to Foster Self-Regulation

General Overview & Rationale

KER 3 is an easy-to-use and gamified resource directed towards software developers working in academia and industry. It is designed to encourage a deeper level of self-reflection about the impact of AI software design on civic and democratic participation.

The purpose and motivation behind the Social Computing Compass Tool are twofold:

  • Enable software developers to make granular, actionable, culturally sensitive decisions about their coding practices, and help them incorporate these decisions into their work at an early phase (in line with KT4D Objective 1: Establish guidelines)
  • Encourage the development of more integrated and truly human-centric innovation paradigms around the use of AI and big data in contexts related to democracy (in line with KT4D Objective 3: Create technology innovation aligned to democratic values)

The Social Computing Compass Tool is situated at the level of coding decisions rather than high-level ethical principles. Its goal is to advance awareness of cultural specificity rather than to focus solely on universal human values.

Machine Learning and AI-based predictions are possible because the world has regularities. Culture, however, often defies these regularities: it consists of differences and variations that matter greatly for democracy. Our goal is thus to offer a tool that encourages developers to adopt a different stance towards potential users, one that considers and respects cultural differences and specificities.

In other words, the goal of KER 3 is for software developers to appreciate how their decisions regarding technical aspects and requirements are received differently depending on the cultural values and identities of the communities who adopt those systems.

Input from Use Cases

To respond to developers’ real needs and workflows, we are partly basing the design of the Social Computing Compass Tool on the findings of the first meeting for Use Case 4, which took place in Dublin in October 2023. The workshop provided extremely valuable input, allowing us to test our preliminary assumptions and reconsider our initial plan.

Several crucial issues and ideas emerged during the workshop, including:

  1. Communicating the need for ethics in software development to people who are not already receptive to, and knowledgeable about, the issues at stake is extremely challenging. To minimize the risk of ‘preaching to the choir,’ we concluded that the best targets for our tool are third-level instructors and HR departments.
  2. Many tools supporting ethics in AI do exist, and some are very useful, but they do not reach their intended audience. Indeed, participants recognised none of the tools we presented during the workshop.
  3. Contrary to what we had imagined, neither checklists nor competitive games work, though ‘red teaming’ was viewed positively by our participants. Instead, they suggested narrative and collaborative games, which are more nuanced and complex than gamified checklists.
  4. Participants were quite comfortable considering and including intersectionality in their work (gender identity, racial identity, and the different professional and social roles of users). By contrast, multiculturality was described as extremely difficult to understand and to embed in software development.

Besides these insightful remarks, participants in the Use Case workshop created two prototypes for an AI ethics tool, which provided further direction and inspiration for KER 3.

Findings & Next Steps

KER 3 is developed in connection with other WPs and Tasks of the KT4D project. More specifically, it builds upon the insights into the cultural impact of AI and Big Data developed in Module C of the toolkit (WP3); the knowledge of ethical practices in developing AI systems and technologies, such as Participatory ML and Algorithmic Accountability, described in Module H of the toolkit (WP7); and the reflections on critical digital literacy as defined in Module F of the toolkit (WP6).

KER 3 will be submitted at the end of the project in M36. Currently, we are working in conjunction with the other WPs, consulting with developers outside of the project team, conducting research on digital narrative and interactive fiction, and surveying existing AI ethics tools and games.

Stakeholder Groups:

CSOs
Industry

Members involved in the development process


Stakeholders & Testing

The gamified experience developed in KER 3 follows the tradition of whodunit games. The tech company where the player works as a developer is facing disastrous legal and economic consequences for its oversights in the development and deployment of an AI-powered system. This comes as a shock, as the company and its employees had followed widespread AI ethics principles; however, those tools and procedures did not account for cultural issues and specificities.

Possible examples to be included in the game are:

  • AI-automated content moderation that does not distinguish between slurs and pejorative words reappropriated by minorities or local communities (see the sketch after this list);
  • Participatory Machine Learning techniques that do not consider the historical tensions, social structure, and patterns of inclusion/exclusion of a specific community;
  • Overreliance on AI-generated translation of lesser-spoken languages as a democratising tool;
  • Well-intentioned initiatives that end up reinforcing systems of oppression (e.g. white saviour complex).
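To make the first of these pitfalls concrete, below is a minimal, hypothetical Python sketch of keyword-based moderation. It is not taken from the KT4D project or any real moderation system; the blocklist, the placeholder term, and the example messages are purely illustrative:

  # Minimal, hypothetical sketch of naive keyword-based content moderation.
  # "<reclaimed-term>" stands in for a slur that a community has reappropriated.

  BLOCKLIST = {"<reclaimed-term>"}

  def naive_moderate(message: str) -> bool:
      """Flag a message if it contains any blocklisted token.

      The filter sees only surface tokens: it has no notion of who is
      speaking, of in-group versus out-group usage, or of cultural context.
      """
      return any(token in BLOCKLIST for token in message.lower().split())

  # A hostile message and an in-group, reclaimed usage get the same verdict:
  hostile = "no <reclaimed-term> is welcome here"
  reclaimed = "proud to host our <reclaimed-term> community meetup tonight"

  assert naive_moderate(hostile)    # flagged, as intended
  assert naive_moderate(reclaimed)  # also flagged: reclaimed usage is silenced

Because the verdict depends only on tokens, culturally aware moderation would need signals (speaker identity, community norms, conversational context) that this approach simply does not model.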

The goal of the game is for the player to understand how this happened and how to fix it. However, by the time the player identifies the problems, the situation has already changed and poses new issues. This is, of course, not to suggest that these issues are unsolvable, but to stress that cultural settings are always evolving: developers must constantly adapt and evolve their tools and approaches, as there are no golden rules or universal strategies.

The purpose of the Social Computing Compass Tool is thus to help developers learn from this gamified experience and bring a new sensitivity towards cultural issues to the design of AI systems and software.

Why is it important?

“The drive toward more ethical, sustainable, user-centred software platforms cannot be expected to come from users and regulators alone.  We also need to find meaningful ways to help the software industry to enable software developers and their employers to take these values to heart, positioning them as a cornerstone of forward-looking innovation processes, rather than a perceived threat. Given the sensitivity of democracy and civic participation in multicultural environments to disruption by AI and big data, we feel this is an important facet of the problem for KT4D to specifically and uniquely address.”  - Jennifer Edmond, KT4D Coordinator