
Explainable trajectory prediction

(2023)

Files

Bahrami_05742100_2023.pdf
  • Open access
  • Adobe PDF
  • 4.39 MB

Bahrami_05742100_2023_annexe.pdf
  • Open access
  • Adobe PDF
  • 69.45 KB

Details

Abstract
This thesis explores the field of trajectory prediction, with particular emphasis on explainability. Two predictive tasks are at the heart of the investigation: identifying a moving object's final destination, and forecasting its next stop a few minutes before arrival. Treating trajectories as sequences, we used Long Short-Term Memory (LSTM) networks. By experimenting with varied learning techniques, we addressed both tasks with good accuracy: when predicting a destination 2.5 minutes in advance, our model reached 47.92% accuracy. In other words, as a vehicle nears its destination, the model identifies its final stop correctly nearly 48% of the time, 2.5 minutes before arrival. To deepen the study, we then sought to explain our models using two techniques: the LIME text explainer and the disturbance (perturbation) approach. These analyses revealed that, when predicting the final destination, the LSTM relies predominantly on the trajectory's most recent points rather than on the trajectory as a whole. Guided by this insight, we refined the model's learning approach, improving its performance on earlier destination estimations. Both explanation methods consistently highlighted the strong influence of the most recent trajectory points on next-position predictions.
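The perturbation-style explanation described above can be sketched as follows: each trajectory point is replaced by a baseline value in turn, and the point's importance is taken to be how much the model's output shifts. This is a minimal illustrative sketch, not the thesis code — the toy `predict_next` model (an exponentially weighted mean that, like the LSTM studied here, favours recent points) and all names are hypothetical stand-ins.

```python
def predict_next(traj):
    """Toy stand-in for the LSTM predictor: an exponentially weighted
    mean of the trajectory points, so recent points dominate (mimicking
    the behaviour the explanations revealed)."""
    weights = [0.5 ** (len(traj) - 1 - i) for i in range(len(traj))]
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, traj)) / total
    y = sum(w * p[1] for w, p in zip(weights, traj)) / total
    return (x, y)

def perturbation_importance(traj, baseline=(0.0, 0.0)):
    """Score each point by how far the prediction moves when that
    point is replaced with a baseline value."""
    ref = predict_next(traj)
    scores = []
    for i in range(len(traj)):
        perturbed = list(traj)
        perturbed[i] = baseline
        out = predict_next(perturbed)
        scores.append(abs(out[0] - ref[0]) + abs(out[1] - ref[1]))
    return scores

traj = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)]
scores = perturbation_importance(traj)
# With this toy model, the most recent point gets the largest score,
# consistent with the finding reported in the abstract.
```

The same masking loop applies unchanged to a real LSTM: only `predict_next` would be swapped for the trained model's forward pass.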