
Deep learning based real-time tumor tracking on fluoroscopy image sequence

(2024)

Files

Montenez_25501900_2024.pdf
  • Embargoed access until 2025-06-24
  • Adobe PDF
  • 6.57 MB

Details

Abstract
Background: Radiotherapy (RT) is one of the most commonly used modalities in cancer treatment. While modern techniques have greatly enhanced the precision of radiation delivery, challenges persist due to internal tumor motion, which can result in inadequate tumor dosage and excessive exposure of nearby healthy tissues. Image-guided radiotherapy (IGRT) using fluoroscopy imaging systems, combined with deep learning (DL) frameworks, enables markerless tumor tracking on X-ray images. However, this imaging technique delivers an additional radiation dose to the patient, highlighting the need for DL algorithms that track tumors with high precision while minimizing radiation exposure.

Purpose: To evaluate the robustness of several DL architectures for tumor tracking while minimizing the radiation dose delivered during imaging.

Methods: We developed and compared six DL architectures for both tumor segmentation and next-frame prediction. These architectures combine convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) to exploit the spatiotemporal information between successive frames. The networks were trained and evaluated on data from five lung cancer patients, using digitally reconstructed radiographs (DRRs) simulated from 4DCT scans. We assessed the robustness of the networks with respect to training set size, sampling frequency, and interfractional anatomical variations. Additionally, we measured the training and inference times of each network.

Results: Our method identified a robust architecture that consistently achieved a Jaccard index above 95% for segmentation across all studied patients, requiring only 64 training images and a sampling frequency of 2.5 Hz. The average training time was under 3 minutes, and the average inference time was 9 ms.

Conclusion: These results demonstrate the robustness and efficiency of our real-time tracking network, which significantly reduces the radiation dose required for imaging.
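The abstract reports segmentation accuracy as a Jaccard index (intersection over union between the predicted and reference tumor masks). As a point of reference, the metric can be computed for binary masks as sketched below; this is a generic NumPy illustration, not code from the thesis, and the function name and toy masks are ours:

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (IoU) between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return intersection / union

# Toy example: two overlapping 4x4 square masks on an 8x8 grid
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 pixels
# Overlap is a 3x3 block (9 px), union is 16 + 16 - 9 = 23 px
print(round(jaccard_index(a, b), 3))  # → 0.391
```

A tracking result above 95% on this metric, as reported in the abstract, means the predicted mask overlaps the reference mask almost completely (union barely larger than intersection).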