Increasing fairness in supervised classification: a study of simple decorrelation methods applied to logistic regression
Files
- deSchaetzen_34361600_2021.pdf (open access, PDF, 1.2 MB)
Abstract
- Nowadays, classification algorithms perform tasks such as filtering college and loan applications or assessing the risk that an inmate will reoffend when released from prison. As our society becomes increasingly data-driven and machine learning plays a growing role in everyday decision-making, it becomes urgent to find classification algorithms that ensure fairness and equity between individuals. For instance, unfairness or discrimination may arise in classification when the data are generated by a biased decision process. In such cases, classification models may not only reproduce existing biases but also introduce new ones. To tackle this issue, we look for accurate models whose predictions are uncorrelated with a protected sensitive attribute (e.g. race, gender, ...). In particular, we propose decorrelation methods operating before, during and after the learning phase of classification models. We show that our methods remove the correlation between the sensitive attribute and the predictions while maintaining a high level of accuracy. By limiting the biases present in the predictions made by classification algorithms, we reinforce equality between individuals.
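To make the idea of a "before the learning phase" decorrelation concrete, here is a minimal sketch on synthetic data: each feature is residualized against the sensitive attribute before a logistic regression is fit, and the correlation between the predicted scores and the attribute is compared with and without this step. The synthetic data, the `decorrelate` helper, and the correlation check are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data (hypothetical): a binary sensitive attribute s,
# one feature correlated with s, one independent feature.
rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, n)                 # sensitive attribute
x1 = s + rng.normal(0, 1, n)              # feature correlated with s
x2 = rng.normal(0, 1, n)                  # feature independent of s
X = np.column_stack([x1, x2])
y = (x1 + x2 + rng.normal(0, 1, n) > 0).astype(int)

def decorrelate(X, s):
    """Pre-processing step: remove from each feature its linear
    component along the (centered) sensitive attribute."""
    Xd = X.astype(float).copy()
    sc = s - s.mean()
    for j in range(X.shape[1]):
        beta = (sc @ X[:, j]) / (sc @ sc)
        Xd[:, j] = X[:, j] - beta * sc
    return Xd

Xd = decorrelate(X, s)
clf_raw = LogisticRegression().fit(X, y)
clf_fair = LogisticRegression().fit(Xd, y)

# Correlation between the predicted scores and the sensitive attribute.
corr_raw = np.corrcoef(clf_raw.predict_proba(X)[:, 1], s)[0, 1]
corr_fair = np.corrcoef(clf_fair.predict_proba(Xd)[:, 1], s)[0, 1]
```

On this synthetic example, the residualized features are orthogonal to the sensitive attribute, so the predictions of the second model carry essentially no linear dependence on it, at a modest cost in accuracy.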