Design of an integrated multimodal sensor for road observation
Files
Desousa_42461600_RotsartdeHertaing_44161600_2021.pdf
Open access - Adobe PDF
Abstract
In the field of road observation, multimodal systems are becoming increasingly common thanks to the duality and complementarity of the information provided by different sensors. Data fusion and filtering are carried out with a neural network: the Transformer architecture. To train this network, a study site was synthetically reproduced in the Unreal Engine 4 game engine in order to obtain images and annotated data for a radar simulation. Camera and radar data are extracted by YOLOv5 and a search algorithm, respectively. Data fusion is performed, tested and compared using three different architectures: a Kalman filter, a Multi-Layer Perceptron (MLP) and the Transformer. Our results show that although the Transformer produces a higher prediction error than the MLP, with a 32 times longer execution time, it yields much smoother and less noisy trajectories, comparable to a Kalman filter but with a lower error margin. An experimental validation is then performed to confirm our results. We note, however, that some of our assumptions are too strong, making the Transformer's results worse than expected.
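To illustrate the Kalman-filter baseline mentioned in the abstract, the sketch below shows a minimal 1-D constant-velocity Kalman filter for smoothing noisy position measurements. This is not the thesis code: the function name, the state model and all noise parameters (`q`, `r`) are arbitrary assumptions chosen for illustration.

```python
# Illustrative sketch of a constant-velocity Kalman filter used as a
# trajectory-smoothing baseline. State is [position, velocity]; only
# position is measured (H = [1, 0]). Parameter values are assumptions.

def kalman_smooth(measurements, dt=1.0, q=0.01, r=1.0):
    """Filter noisy 1-D positions; return the smoothed position track."""
    x = [measurements[0], 0.0]          # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # initial state covariance
    out = []
    for z in measurements:
        # Predict step with F = [[1, dt], [0, 1]] and Q = q * I.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update step: innovation y, gain K = P H^T / S.
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]
        y = z - x[0]
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out
```

Because the filter carries a velocity estimate across steps, the output track varies more gradually than the raw measurements, which is the "smoother, less noisy trajectories" behaviour the abstract compares the Transformer against.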