Artificial intelligence and machine learning are increasingly used in decision-making processes such as hiring, sentencing, and credit approval. Given the significant impact these decisions have on people's lives, the models that drive them must be free of bias against any demographic group. In this thesis, we evaluated several algorithms designed to detect and mitigate bias in data and in model predictions. The results show that the tested methods are both effective and relatively easy to use. A main contribution of this work is an add-on for the Orange environment that helps users without programming skills understand and address bias in machine learning. The add-on's source code is available on GitHub, the add-on has been integrated into the Orange programming environment, and we reported on the implementation on the Orange tool website.