
How to find Decision Trees that are fair?

(2022)

Files

Laderrière_44261900_2022.pdf
  • UCLouvain restricted access
  • Adobe PDF
  • 1.64 MB

Details

Abstract
In the age of artificial intelligence, machine learning is a hot topic. It uses data and learning algorithms to mimic the way humans learn, building fast and efficient models without being explicitly programmed. The main purpose of these algorithms is to support decision making, so they are used in many fields. In most cases, machine learning is not applied to political, moral, social, or ethical domains; its work is most often limited to sorting photos, predicting real-estate prices, playing video games, or performing automated tasks. But what should be done when such algorithms are asked to make predictions based on sensitive data, while laws exist to prevent discrimination? One problem with these algorithms is that they must learn from existing data: if they are trained on bad or biased data, their future predictions will also be bad or biased. This is comparable to a young child: if he is not well educated by his parents, he will behave badly in the years to come. This work therefore presents different ways to measure discrimination and discusses several approaches to reduce discrimination in a specific area of machine learning: decision trees. Two approaches, implemented through different libraries, were compared to assess their consequences.
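As a minimal sketch of what "measuring discrimination" in a decision tree can mean, the snippet below trains a scikit-learn decision tree on synthetic, deliberately biased data and computes the statistical parity difference, one common fairness metric. The dataset, the choice of metric, and the use of scikit-learn are illustrative assumptions; the abstract does not name the thesis's actual metrics or libraries.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with a binary sensitive attribute (hypothetical example;
# not the dataset or libraries used in the thesis itself).
rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n)           # protected attribute, e.g. group 0 vs 1
x_other = rng.normal(size=(n, 3))           # three neutral features
X = np.column_stack([sensitive, x_other])

# Biased labels: the positive outcome is deliberately correlated
# with the sensitive attribute, mimicking "bad or biased" training data.
y = (x_other[:, 0] + 0.8 * sensitive
     + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
pred = tree.predict(X)

# Statistical parity difference:
#   P(pred = 1 | sensitive = 1) - P(pred = 1 | sensitive = 0)
# A value of 0 means the tree predicts the positive outcome
# at the same rate for both groups.
spd = pred[sensitive == 1].mean() - pred[sensitive == 0].mean()
print(f"statistical parity difference: {spd:.3f}")
```

Because the labels were generated with a bias in favour of group 1, the learned tree reproduces that bias and the statistical parity difference comes out well above zero; fairness-aware approaches aim to shrink this gap while preserving accuracy.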