Abstract

This thesis investigates trajectory prediction, with particular emphasis on explainability. We study two predictive tasks: identifying a moving object's final destination and forecasting its next stop a few minutes before arrival. Treating trajectories as sequences, we model them with Long Short-Term Memory (LSTM) networks. By experimenting with several training strategies, we address both tasks with good accuracy; when predicting a destination 2.5 minutes in advance, our model reaches 47.92% accuracy. In other words, as a vehicle approaches its destination, the model identifies the final stop correctly nearly 48% of the time 2.5 minutes beforehand. To deepen the study, we explain the models using two techniques: the LIME text explainer and a perturbation-based approach. These analyses show that, when predicting the final destination, the LSTM relies predominantly on the trajectory's most recent points rather than on its entirety. Building on this insight, we refined the model's training procedure and improved its earlier destination estimates. Notably, both explanation methods consistently highlight the strong influence of the most recent trajectory points on next-position predictions.
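The perturbation idea described above can be sketched in a few lines: perturb each trajectory point in turn, re-run the predictor, and measure how much the prediction shifts. This is a minimal, self-contained illustration only; the `predict_next` stand-in (simple linear extrapolation from the last two points) replaces the trained LSTM of the thesis, and all names and parameters here are hypothetical.

```python
import random

def predict_next(traj):
    """Hypothetical stand-in for the trained LSTM: extrapolate the next
    position linearly from the last two trajectory points."""
    (x1, y1), (x2, y2) = traj[-2], traj[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def perturbation_importance(traj, sigma=1.0, trials=50, seed=0):
    """Score each point by the average shift in the predicted next
    position when that point is jittered with Gaussian noise."""
    rng = random.Random(seed)
    bx, by = predict_next(traj)
    scores = []
    for i in range(len(traj)):
        shift = 0.0
        for _ in range(trials):
            noisy = list(traj)
            x, y = noisy[i]
            noisy[i] = (x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
            px, py = predict_next(noisy)
            shift += ((px - bx) ** 2 + (py - by) ** 2) ** 0.5
        scores.append(shift / trials)
    return scores

traj = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
scores = perturbation_importance(traj)
# With this stand-in predictor, only the last two points get nonzero
# scores, mirroring the thesis finding that recent points dominate.
```

With a real LSTM in place of `predict_next`, earlier points would typically receive small but nonzero scores; the relative ordering is what the explanation method reports.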