Concept drift detection in image classification
John Lee; Corentin Lingier
Date issued: 2023 (deposited 2025-02-04)
https://dial-mem.test.bib.ucl.ac.be/handle/123456789/32868

Abstract: Despite their power, modern machine-learning tools are sensitive and often fail without warning. Addressing this calls for robust testing frameworks that can detect and quantify the impact of data drift on computer vision classifiers in production. During my internship at GSK, I was tasked with developing a robust statistical test that determines whether changes in the input distribution truly alter model performance. The central focus of this thesis is detecting drift by evaluating a model's predictive uncertainty. We explored two main approaches: Monte Carlo Dropout (MCD) and Gaussian Processes (GP). Both approaches performed similarly, estimating uncertainty effectively and enabling successful detection of concept drift from these uncertainty scores.

Keywords: Computer vision; Drift detection; Uncertainty evaluation; Gaussian process; Concept drift; Covariate drift; Dataset shift

Type: Master thesis (thesis:40707)
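
To make the abstract's approach concrete, the sketch below pairs Monte Carlo Dropout uncertainty scoring with a two-sample test on the resulting scores. It is not code from the thesis: the PyTorch setup, the predictive-entropy score, and the Kolmogorov-Smirnov test are assumptions chosen for illustration, and enable_dropout, mc_dropout_uncertainty, and detect_drift are hypothetical helper names.

import torch
import torch.nn.functional as F
from scipy.stats import ks_2samp

def enable_dropout(model):
    # Keep only Dropout layers stochastic at inference time, so other
    # layers (e.g. batch norm) retain their evaluation behaviour.
    model.eval()
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

def mc_dropout_uncertainty(model, x, n_passes=30):
    # Predictive entropy of the mean softmax over n_passes stochastic
    # forward passes: higher entropy means higher model uncertainty.
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    mean_probs = probs.mean(dim=0)  # shape: (batch, n_classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy  # shape: (batch,)

def detect_drift(reference_scores, production_scores, alpha=0.05):
    # Two-sample Kolmogorov-Smirnov test: flag drift when uncertainty
    # on production data is distributed differently from the reference.
    statistic, p_value = ks_2samp(reference_scores, production_scores)
    return p_value < alpha, p_value

# Usage (illustrative): score a reference batch and a production batch,
# then test whether the uncertainty distribution has shifted.
# ref = mc_dropout_uncertainty(model, x_reference).numpy()
# prod = mc_dropout_uncertainty(model, x_production).numpy()
# drifted, p = detect_drift(ref, prod)

In practice the reference scores would come from held-out in-distribution data and alpha would be calibrated against an acceptable false-alarm rate; a Gaussian Process could stand in for MCD as the uncertainty estimator, since the thesis compares both.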