Visual / LiDAR relative navigation for space applications: autonomous systems signals and autonomy group

Date published

2018

Publisher

Cranfield University

Type

Thesis

Abstract

Nowadays, robotic systems are shifting towards increased autonomy. This is the case for autonomous navigation, which has been widely studied in the literature and extensively implemented for ground applications, for example on cars and motorbikes. However, autonomous navigation in space poses a number of additional constraints, e.g. a reduced number of features, limited power, processing capability and life-cycle, among others, that differentiate the problem from its terrestrial counterpart. In this framework, the I3DS (Integrated 3D Sensors) project intends to propose a solution for autonomous operations in space. I3DS is a joint venture between Cranfield University, Thales Alenia Space and other European industrial partners. The ambition of I3DS is to produce a standardised, modular Inspector Sensor Suite (INSES) for autonomous orbital and planetary applications in future space missions. The project is co-funded under the Horizon 2020 EU research and development programme and is part of the Strategic Research Cluster on Space Robotics Technologies.

The goal for space applications is hence to develop a LiDAR- and vision-based navigation solution able to estimate the relative pose, i.e. position and orientation, of a non-cooperative target in orbit with respect to the chaser satellite. The navigation solution also encompasses a dedicated, application-oriented pre-processing of the raw data. This work responds to this need by assessing the suitability and limitations of the different pre-processing and navigation algorithms for relative navigation on both the on-board computer and the FPGA, given the specific constraints imposed by the space environment. The data generated by the sensors require specific pre-processing in order to be converted into an optimal format for the subsequent navigation algorithms. In particular, image pre-processing includes spatial and spectral corrections, while LiDAR pre-processing comprises point cloud downsampling and outlier removal. To properly simulate the I3DS INSES, an FPGA and an on-board computer (OBC) have been considered as hardware platforms on which to run the algorithms. The OBC has been simulated using a standard desktop computer on which the LiDAR code has been tested, whereas the FPGA has been simulated with the Xilinx UltraZed-EG board for image pre-processing.

Different algorithms have been tested and tuned to achieve the navigation solution. ICP (Iterative Closest Point), GICP (Generalised Iterative Closest Point), TICP (Trimmed Iterative Closest Point) and a Kalman filter-based registration using a Histogram of Distances descriptor have been evaluated for LiDAR navigation. Stereo-based visual odometry and monocular navigation based on fiducial markers on the surface of the target satellite represented the solutions for visual navigation. Experimental tests on simulated data showed good accuracy, with a position error below 5% for all the sensors. However, the computational load on the FPGA board should be further optimised. Possible avenues, such as parallelisation on one or multiple FPGA boards, further optimisation of the algorithms and, as a last resort, decreasing the acquisition frequency, are finally discussed.
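
As an illustration of the LiDAR pre-processing step described above (point cloud downsampling and outlier removal), the following is a minimal sketch using the open-source Open3D library; the library choice, the file path and the voxel_size / nb_neighbors / std_ratio parameters are illustrative assumptions, not the values used in the thesis.

import open3d as o3d

def preprocess_cloud(path, voxel_size=0.05, nb_neighbors=20, std_ratio=2.0):
    """Downsample a raw LiDAR scan and remove statistical outliers.
    All parameter values here are placeholders for illustration."""
    cloud = o3d.io.read_point_cloud(path)            # raw scan, e.g. a PLY/PCD file
    cloud = cloud.voxel_down_sample(voxel_size)      # reduce point density (downsampling)
    cloud, _ = cloud.remove_statistical_outlier(     # drop isolated, noisy points
        nb_neighbors=nb_neighbors, std_ratio=std_ratio)
    return cloud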
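
In the same spirit, a minimal sketch of point-to-point ICP registration, one of the LiDAR navigation algorithms listed above; the Open3D call, the correspondence threshold and the identity initial guess are assumptions for illustration only, not the thesis implementation.

import numpy as np
import open3d as o3d

def estimate_relative_pose(source, target, threshold=0.1):
    """Register a new scan (source) against a reference model of the target
    spacecraft and return the 4x4 relative pose (rotation + translation)."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, np.eye(4),                 # identity initial guess (assumption)
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation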
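
Finally, a hedged sketch of monocular pose estimation from fiducial markers, here using OpenCV's ArUco module (opencv-contrib); the marker dictionary, marker size and camera parameters are placeholders rather than the specific marker system used in the thesis, and estimatePoseSingleMarkers has been superseded in the newest OpenCV releases.

import cv2

def marker_pose(image, camera_matrix, dist_coeffs, marker_length=0.10):
    """Detect fiducial markers and estimate their pose in the camera frame;
    the chaser-to-target pose then follows from the known marker placement
    on the target surface (marker_length is an illustrative placeholder)."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
    if ids is None:
        return None                                  # no markers visible in this frame
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    return ids, rvecs, tvecs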

Description

© Cranfield University, 2018

Rights

© Cranfield University, 2018. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright holder.
