Abstract
Despite their power, modern machine-learning tools are sensitive and often fail without warning. To address this issue, robust testing frameworks are needed that can detect and quantify the impact of data drift on computer vision classifiers in production. During my internship at GSK, I was tasked with developing a robust statistical test that determines whether changes in input distributions truly alter model performance. The central focus of this thesis is the detection of drift through evaluation of a model's uncertainty. We explored two main approaches: Monte Carlo Dropout (MCD) and Gaussian Processes (GP). Both approaches demonstrated similar performance, estimating uncertainty effectively and enabling successful detection of concept drift from these uncertainty scores.
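
The abstract gives no implementation details, but a minimal sketch of an MCD-based drift check along these lines might look as follows. The toy architecture, the number of stochastic forward passes, and the use of a two-sample Kolmogorov-Smirnov test on the uncertainty scores are all illustrative assumptions, not the thesis's actual pipeline:

```python
import torch
import torch.nn as nn
from scipy.stats import ks_2samp

class SmallClassifier(nn.Module):
    """Toy CNN classifier with dropout layers (hypothetical architecture)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.25),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_uncertainty(model, x, n_samples=30):
    """Predictive entropy from n_samples stochastic forward passes.

    Monte Carlo Dropout keeps dropout active at inference time, so each
    pass samples a different sub-network; disagreement between passes
    serves as the uncertainty estimate.
    """
    model.train()  # keep dropout active (model has no BatchNorm layers)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    # Entropy of the averaged predictive distribution, one score per input
    return -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)

# Drift test: compare uncertainty scores on a reference batch against a
# production batch with a two-sample Kolmogorov-Smirnov test.
model = SmallClassifier()
reference = torch.randn(64, 3, 32, 32)            # stand-in for clean data
production = torch.randn(64, 3, 32, 32) * 2 + 1   # stand-in for drifted data

u_ref = mc_dropout_uncertainty(model, reference).numpy()
u_prod = mc_dropout_uncertainty(model, production).numpy()
stat, p_value = ks_2samp(u_ref, u_prod)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
if p_value < 0.05:
    print("Uncertainty distribution shifted -> possible concept drift")
```

Flagging drift only when the model's uncertainty distribution shifts, rather than whenever the raw inputs change, matches the abstract's goal of detecting input changes that truly alter model behavior.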