
Organs detection in CT and CBCT using deep learning for radiotherapy applications

(2020)

Files

Derumier_58821500_2020.pdf
  • Open access
  • Adobe PDF
  • 8.42 MB

Details

Abstract
Radiotherapy uses high doses of radiation to damage cancer cells. For prostate cancer patients, the current radiotherapy treatment workflow does not take into account the organ deformations that occur between sessions. This can lead to uncertainties about the dose delivered to the tumor and to the surrounding healthy organs, possibly giving rise to unwanted side effects that affect the patient's quality of life. Segmenting those organs with deep learning methods on Computed Tomography (CT) or Cone Beam Computed Tomography (CBCT) scans acquired on treatment day would reduce these uncertainties. Currently, however, the scans are too large to fit in GPU memory, which holds back the adoption of these methods. The aim of our work is to identify the organs of interest, locate them, and reduce the scan to a small bounding box that contains them. This bounding box then serves as input to a segmentation algorithm. We start by studying two algorithms based on traditional computer vision techniques and show that they are inefficient for our problem. We then train a deep learning model to detect and localize the bladder, rectum, and prostate. We test our model on 450 two-dimensional CT images from 7 patients and obtain average Intersection over Union (IoU) scores of 0.681 ± 0.233, 0.496 ± 0.198, and 0.538 ± 0.245 for the bladder, rectum, and prostate respectively, with an average inference time of 0.02 seconds per image. We extend the model to 3 dimensions by combining the slices of each patient. This time, we reach IoU scores of 0.822 ± 0.060, 0.688 ± 0.065, and 0.619 ± 0.192 for the same organs. Finally, we merge all the organ boxes into a single final box that contains them and is directly usable by a segmentation algorithm. By doing so, we discard 95.6% of the original data volume while still reaching an IoU of 0.850 ± 0.033 and an average processing time of 1.72 seconds.

We also define a baseline method for comparison. The baseline obtains an average IoU of 0.706 ± 0.097 and is beaten by our method on every single patient of the test set. Our work thus highlights the strong potential of deep learning methods for localizing organs and reducing the size of medical images. This algorithm could be used not only for pathologies other than prostate cancer in radiotherapy or proton therapy, but also for any segmentation application on a large volume.
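The two geometric operations at the core of the abstract, the IoU score used for evaluation and the merging of per-organ boxes into one final crop, can be sketched as follows. This is a minimal illustration with axis-aligned boxes; the organ box coordinates and the 512×512×100 scan size are hypothetical values invented for the example, not taken from the thesis data.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Each box is a pair (min_corner, max_corner) of coordinate
    tuples, in 2 or 3 dimensions."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    inter = vol_a = vol_b = 1.0
    for lo_a, hi_a, lo_b, hi_b in zip(a_min, a_max, b_min, b_max):
        # overlap along this axis (0 if the boxes are disjoint here)
        inter *= max(0.0, min(hi_a, hi_b) - max(lo_a, lo_b))
        vol_a *= hi_a - lo_a
        vol_b *= hi_b - lo_b
    return inter / (vol_a + vol_b - inter)

def merge_boxes(boxes):
    """Smallest axis-aligned box enclosing every per-organ box."""
    dims = range(len(boxes[0][0]))
    mins = tuple(min(b[0][d] for b in boxes) for d in dims)
    maxs = tuple(max(b[1][d] for b in boxes) for d in dims)
    return mins, maxs

def volume(box):
    lo, hi = box
    v = 1.0
    for l, h in zip(lo, hi):
        v *= h - l
    return v

# Hypothetical detected organ boxes, in voxel coordinates
bladder  = ((200, 180, 30), (320, 300, 70))
rectum   = ((230, 300, 25), (300, 380, 65))
prostate = ((220, 260, 35), (300, 330, 60))

final = merge_boxes([bladder, rectum, prostate])
scan = ((0, 0, 0), (512, 512, 100))  # assumed full scan extent
print(f"fraction of scan kept: {volume(final) / volume(scan):.1%}")
# → fraction of scan kept: 4.1%
```

Cropping the scan to the merged box is what allows the volume handed to the segmentation network to fit in GPU memory while keeping all three organs of interest.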