Optimised obstacle detection and avoidance model for autonomous vehicle navigation

Date published

2024-02

Free to read from

2025-05-07

Publisher

Cranfield University

Department

SATM

Type

Thesis

Abstract

Driven by cutting-edge research in AI vision, sensor fusion and autonomous systems, intelligent robotics is poised to revolutionise aviation hangars, shaping the "hangars of the future" by reducing inspection time and improving defect detection accuracy. Many hangar environments, especially in maintenance, repair and overhaul (MRO) operations, rely on manual processes and on algorithms that need to be optimised for the growing complexity of these settings, which feature varied obstacle structures, often low-light conditions, and frequent scene changes. Deploying mobile robot solutions demands enhanced perception, accurate obstacle avoidance, and efficient path planning, all essential for effective navigation in busy hangar environments and for aircraft inspections. The ROS navigation stack has been at the centre of most solutions; it is generally efficient in static settings but limited in complex environments. These systems are often computationally intensive and require pre-configuration of environmental parameters, making them less effective in changing environments with real-time demands. Integrating deep learning models with ROS has shown promising improvements, leveraging experiential learning and large datasets. However, accurately detecting obstacles of different shapes and sizes, especially under varying lighting conditions, remains a significant challenge and affects safe navigation. To overcome these challenges in complex environments, this research proposes a novel solution for enhanced obstacle detection, avoidance and path planning. Our system fuses LiDAR and camera data with a real-time, accurate YOLOv7/YOLOv5 object detection model for robust identification of diverse obstacles. Additionally, we combine this with ROS planners, including Dijkstra, RRT and DWA, to optimise path planning and enable collision-free navigation. The system was validated in ROS Gazebo and on a real TurtleBot3 robot. It achieved zero collisions with YOLOv7 and RRT integration, a 2.7% increase in obstacle detection accuracy, and an estimated 2.4% faster navigation than the baseline methods.
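The detection-to-avoidance handoff described in the abstract can be sketched in simplified form. The function names, the 0.5 m safety radius, and the TurtleBot3-style speed cap below are illustrative assumptions, not taken from the thesis; a real implementation would subscribe to ROS LaserScan and camera topics and consult the trained YOLO model rather than acting on raw range minima alone.

```python
import math

def nearest_obstacle(ranges, angle_min, angle_inc):
    """Return (distance, bearing) of the closest LiDAR return in a scan.

    `ranges` is a flat list of range readings; the bearing of reading i
    is angle_min + i * angle_inc, mirroring sensor_msgs/LaserScan.
    """
    i = min(range(len(ranges)), key=lambda k: ranges[k])
    return ranges[i], angle_min + i * angle_inc

def avoidance_command(ranges, angle_min, angle_inc,
                      safe_dist=0.5, cruise_speed=0.22):
    """Reactive avoidance rule: cruise straight when the scan is clear;
    inside the safety radius, slow down in proportion to the remaining
    clearance and turn away from the obstacle's bearing.

    Returns (linear_velocity, angular_velocity) in the form that would
    be published on a cmd_vel topic. The 0.22 m/s default matches the
    TurtleBot3 Burger's maximum linear speed.
    """
    dist, bearing = nearest_obstacle(ranges, angle_min, angle_inc)
    if dist < safe_dist:
        linear = max(0.0, cruise_speed * dist / safe_dist)
        angular = -math.copysign(0.5, bearing)  # steer away from obstacle
        return linear, angular
    return cruise_speed, 0.0
```

In the pipeline the abstract describes, a reactive layer like this would sit beneath a global planner (Dijkstra or RRT) and a local planner (DWA), with fused YOLO detections used to classify which LiDAR returns correspond to obstacles that must be avoided.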

Keywords

Autonomous navigation, object detection, obstacle avoidance, mobile robot, deep learning, computer vision, ROS navigation

Rights

© Cranfield University, 2024. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright holder.
