A hybrid deep learning approach for robust multi-sensor GNSS/INS/VO fusion in urban canyons

dc.contributor.author: Geragersian, Patrick
dc.contributor.author: Petrunin, Ivan
dc.contributor.author: Guo, Weisi
dc.contributor.author: Grech, Raphael
dc.date.accessioned: 2024-04-30T08:36:54Z
dc.date.available: 2024-04-30T08:36:54Z
dc.date.issued: 2023-09-15
dc.description.abstract: This paper addresses the significant challenges of robust autonomous navigation for Unmanned Aerial Vehicles (UAVs) in densely populated environments. The focus is on enhancing the performance of Position, Navigation, and Timing (PNT), as specified by the International Civil Aviation Organization, in terms of accuracy, integrity, continuity, and availability. The novel contribution is a Robust Multi-Sensor Fusion Architecture (RMSFA) that uses a Bayesian-LSTM machine learning algorithm to fuse GNSS, INS, and monocular visual odometry. Unlike existing solutions that rely on sensor redundancy or on methods such as Receiver Autonomous Integrity Monitoring (RAIM), which have limitations in performance or in adaptability to erroneous signals, the proposed system improves both positioning accuracy and integrity. GNSS data is preprocessed to remove Non-Line-of-Sight (NLOS) measurements to improve positioning accuracy, and INS errors are corrected with a GRU-based error-correction architecture to improve INS positioning and reduce drift. These preprocessing steps reduced the 95th percentile horizontal error by 97.4% and 71.5%, respectively. A CNN-LSTM architecture is used to obtain Visual Odometry (VO) from the camera sensor. The fusion performance of the Bayesian-LSTM architecture was then compared against a GNSS/IMU/VO EKF-GRU architecture; the Bayesian-LSTM improved the 95th percentile horizontal error by 30.1%. The architecture was tested in a realistic simulated environment using Unreal Engine and AirSim for UAV simulation, a Spirent GNSS7000 simulator for Hardware-in-the-Loop (HIL) simulation, and OKTAL-SE Sim3D to mimic the effects of multipath on GNSS signals.
Overall, this work represents a step toward improving the safety and effectiveness of drone navigation by providing a more robust navigation system suitable for safety-critical situations, without the disadvantages identified in the previously mentioned literature.
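The abstract does not detail how NLOS measurements are classified before fusion. The following is a minimal illustrative sketch of one common NLOS-exclusion heuristic, a carrier-to-noise-ratio (C/N0) threshold; the field names, the `exclude_nlos` helper, and the 35 dB-Hz cutoff are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: drop likely-NLOS GNSS measurements before fusion.
# In urban canyons, NLOS and heavily multipath-affected signals often
# show a depressed C/N0, so a simple threshold is a common first-pass
# filter. The 35 dB-Hz value below is an assumed, illustrative cutoff.

def exclude_nlos(measurements, cn0_threshold_dbhz=35.0):
    """Keep only measurements whose C/N0 meets the threshold.

    measurements: list of dicts with keys
        'sat_id', 'pseudorange_m', 'cn0_dbhz' (illustrative schema)
    """
    return [m for m in measurements if m["cn0_dbhz"] >= cn0_threshold_dbhz]

# One illustrative measurement epoch (values are synthetic):
epoch = [
    {"sat_id": "G01", "pseudorange_m": 21_000_000.0, "cn0_dbhz": 45.0},  # LOS
    {"sat_id": "G07", "pseudorange_m": 23_500_000.0, "cn0_dbhz": 28.0},  # likely NLOS
    {"sat_id": "G12", "pseudorange_m": 22_100_000.0, "cn0_dbhz": 41.0},  # LOS
]
kept = exclude_nlos(epoch)  # G07 is excluded
```

In practice a single C/N0 threshold is a crude proxy; classifiers may also use satellite elevation, pseudorange residuals, or 3D city models, and the paper's preprocessing step may differ from this sketch.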
dc.identifier.citation: Geragersian P, Petrunin I, Guo W, Grech R. (2023) A hybrid deep learning approach for robust multi-sensor GNSS/INS/VO fusion in urban canyons. In: Proceedings of the 36th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2023). 11-15 September 2023, Denver, USA, pp. 2624-2643
dc.identifier.uri: http://dx.doi.org/10.33012/2023.19271
dc.identifier.uri: https://dspace.lib.cranfield.ac.uk/handle/1826/21283
dc.language.iso: en_UK
dc.publisher: The Institute of Navigation
dc.rights: Attribution-NonCommercial 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.title: A hybrid deep learning approach for robust multi-sensor GNSS/INS/VO fusion in urban canyons
dc.type: Conference paper

Files

Original bundle
Name: Robust_multi-sensor GNSS-INS-VO_fusion_in_urban_canyons-2023.pdf
Size: 2.86 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.63 KB
Format: Item-specific license agreed upon to submission