Lowering the risk of bias in AI applications

Open Access
Conference Proceedings
Authors: Jj Link, Helena Dadakou, Anne Elisabeth Krüger

Abstract: Data is not free of biases, and neither are AI systems built on that data. What can be done to minimize the risk of building systems that perpetuate the biases that exist in society and in data? In our paper, we explore possibilities along the User-Centered Design process and in Design Thinking for lowering the risk of preserving imbalances or gaps in data and models. Looking at the design process alone, however, is not enough: the composition of decision makers, development teams, and design teams, their awareness of discrimination risks, and their decisions about involving potential users and non-users, collecting data, and testing the application also play a major role in implementing systems with as few biases as possible.

Keywords: AI, Artificial Intelligence, Anti-Discrimination, Biases

DOI: 10.54941/ahfe1003286
