Trustworthy algorithms for supervised machine learning with ill-defined quality criteria
Files
- Demortier_327914_2022.pdf (Adobe PDF, 8.08 MB, closed access)
- Abstract
- Engineers never lose sight of the fact that virtually every tool, architecture, or solution embodies a set of trade-offs. When designing a machine learning (ML) model, a practitioner more often than not has to take interdependencies between competing metrics into account. It is typically impossible to optimize all losses simultaneously, owing to limited model capacity or fundamentally conflicting objectives. There may be no single optimal model but rather a set of optimal solutions, each reflecting a different trade-off between the goals. Tuning this weighting, albeit already cumbersome, can even lead to unpredictable behavior, so assessing how strongly different weights affect the model's performance is essential. These challenges arise in regression problems in supervised ML, where different loss functions or quality criteria lead to considerably disparate outcomes. Our motivation is to build algorithms that produce good results for a wide range of quality criteria; optimizing for many different loss functions typically yields more robust results. Our algorithm is, to our knowledge, the first to use hypernetworks (HNs) to solve high-dimensional regression problems with multiple loss functions as a way to make the results more robust. In practice, we created and tested an algorithm that learns the entire Pareto front, scales properly to high-dimensional problems, and remains malleable enough for the user to interact with. Our solution also allows for visual interpretation of the results on the Pareto front, as part of our mission to best support the decision maker.
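The multi-loss trade-off described above can be illustrated with a minimal sketch; this is not the thesis's hypernetwork algorithm, just the underlying linear-scalarization idea on a toy one-dimensional regression with two competing losses (MSE and MAE), where sweeping the preference weight traces an approximate Pareto front. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y is linear in x (true slope 3.0) with Gaussian noise.
X = rng.normal(size=200)
y = 3.0 * X + rng.normal(scale=0.5, size=200)

def losses(w):
    """Return the two competing quality criteria (MSE, MAE) for slope w."""
    r = X * w - y
    return np.mean(r ** 2), np.mean(np.abs(r))

def grad(w, alpha):
    """(Sub)gradient of the scalarized loss alpha*MSE + (1 - alpha)*MAE."""
    r = X * w - y
    g_mse = np.mean(2.0 * r * X)
    g_mae = np.mean(np.sign(r) * X)
    return alpha * g_mse + (1.0 - alpha) * g_mae

def train(alpha, steps=500, lr=0.1):
    """Gradient descent on the scalarized loss for one preference weight."""
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w, alpha)
    return w

# Sweeping the preference weight yields one optimal model per trade-off;
# together the (MSE, MAE) pairs approximate the Pareto front.
front = [(a, *losses(train(a))) for a in np.linspace(0.0, 1.0, 5)]
```

Each entry of `front` is a (preference, MSE, MAE) triple; a hypernetwork-based approach replaces the per-preference retraining loop with a single model conditioned on the preference vector.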