
Deep learning in mammography

(2022)

Files

DeBueger_33701700_Decroƫs_51651700_2022.pdf
  • Open access
  • Adobe PDF
  • 5.64 MB

Details

Abstract
Breast cancer is the most common cancer among women worldwide, and its incidence is likely to increase in the coming years. The breast mass is one of the most distinctive signs for the diagnosis of this cancer. Since the diagnosis is largely influenced by the shape and margins of the mass, its segmentation can be of great help. Although the introduction of early screening has greatly reduced the mortality rate of these cancers, it has also led to problems of overdiagnosis and a high rate of false positives. To avoid these potential errors, computer-aided diagnosis using deep learning networks has been introduced as a tool to assist senologists in the diagnostic process. Unfortunately, a major obstacle to achieving high performance with deep learning models is the lack of annotated data. To address this issue, different approaches have been developed, two of which were explored in this master thesis: self-supervised learning and active learning. This work focuses on the task of breast mass segmentation on the publicly available CBIS-DDSM dataset.

The first approach, self-supervised learning, aims to extract useful information from unlabelled data, allowing the pretrained model to be fine-tuned with less labelled data. It was investigated in two stages, using two different pretraining datasets. First, several ImageNet self-supervised pretrained models (SimCLR, MoCo and Barlow Twins) were compared with classical supervised pretraining: their pretrained weights were transferred to a U-Net with a ResNet encoder. None of the self-supervised approaches outperformed the supervised pretrained model. Given the low statistical significance of these results, the experiment was then repeated with MoCo self-supervised pretraining on a medical image dataset, CheXpert; this did not yield significant results either. However, when this model was trained on subdivisions of the dataset, it seemed to give slightly better results than the ImageNet supervised pretrained model.

The second approach, active learning, aims to request specific annotations in order to achieve better performance. Various selection metrics were investigated to determine the most appropriate images to annotate for model training, for subsequent use in an active learning loop. The selection metric showing the best performance evaluates the agreement of an ensemble of models by computing the IoU between all possible pairs of their predictions. The ensemble approach with this selection metric was then compared against a random selection.
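The pairwise-IoU agreement metric described in the abstract can be sketched as follows. This is a minimal illustration under assumed conventions (binary masks, mean over pairs); the function names are hypothetical and not taken from the thesis.

```python
from itertools import combinations
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two binary segmentation masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    # Two empty masks agree perfectly by convention.
    return float(inter / union) if union > 0 else 1.0

def ensemble_agreement(masks: list) -> float:
    """Mean IoU over all pairs of predicted masks for one image.

    Low agreement suggests the models disagree, so the image is a
    good candidate for annotation in an active learning round.
    """
    scores = [iou(a, b) for a, b in combinations(masks, 2)]
    return float(np.mean(scores))

# Example: predictions from three ensemble members for one image
m1 = np.array([[1, 1], [0, 0]], dtype=bool)
m2 = np.array([[1, 0], [0, 0]], dtype=bool)
m3 = np.array([[1, 1], [1, 0]], dtype=bool)
score = ensemble_agreement([m1, m2, m3])  # mean of IoU(m1,m2), IoU(m1,m3), IoU(m2,m3)
```

In an active learning loop, unlabelled images would be ranked by this score and the lowest-agreement images sent for annotation.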