Multimodal Navigation for Accurate Space Rendezvous Missions

Date

2021-05

Type

Thesis

Abstract

Relative navigation is paramount in space missions that involve a rendezvous between two spacecraft. It demands accurate and continuous estimation of the six-degree-of-freedom relative pose, as this stage involves close-proximity, fast-reaction operations that can last up to five orbits. Such estimation has routinely been achieved with active sensors such as lidar, but their large size, cost, power consumption and limited operational range remain a stumbling block for widespread on-board integration. With the advent of faster processing units, lighter and cheaper passive optical sensors, combined with computer vision algorithms, are emerging as a suitable alternative for autonomous rendezvous. Current vision-based solutions, however, are limited by adverse illumination conditions such as solar glare, shadowing, and eclipse. These effects are exacerbated when the target carries no cooperative markers to aid the estimation process and is incapable of controlling its rotational state.

This thesis explores novel model-based methods that exploit sequences of monocular images acquired by an on-board camera to accurately perform spacecraft relative pose estimation for non-cooperative close-range rendezvous with a known artificial target. The proposed solutions tackle the current challenges of imaging in the visible spectrum and investigate the contribution of the long-wavelength infrared (or “thermal”) band towards a combined multimodal approach.

As part of the research, a visible-thermal synthetic dataset of a rendezvous approach with the defunct satellite Envisat is generated from the ground up using a realistic orbital camera simulator. From the rendered trajectories, the performance of several state-of-the-art feature detectors and descriptors is first evaluated for both modalities in a scenario tailored to short- and wide-baseline image transforms. Multiple combinations, including the pairing of algorithms with their non-native counterparts, are tested, and computational runtimes are assessed on an embedded hardware board.

From the insight gained, a method to estimate the pose in the visible band is derived by minimising geometric constraints between online local point and edge contour features matched to keyframes generated offline from a 3D model of the target. The combination of both feature types is demonstrated to achieve a pose solution for a tumbling target using a sparse set of training images, bypassing the need for hardware-accelerated real-time renderings of the model. The proposed algorithm is then augmented with an extended Kalman filter which processes each feature-induced minimisation output as an individual pseudo-measurement, fusing them to estimate the relative pose and velocity states at each time-step. Both the minimisation and the filtering are formulated using Lie group formalisms, allowing the covariance of the solution computed by the former to be automatically incorporated as measurement noise in the latter, thereby weighting each feature type according to the quality of its matches. The predicted states are then used to search for new feature matches in the subsequent time-step. Furthermore, a method to derive a coarse viewpoint estimate that initialises the nominal algorithm is developed based on probabilistic modelling of the target’s shape. The robustness of the complete approach is demonstrated on several synthetic and laboratory test cases involving two types of target undergoing extreme illumination conditions.
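To make the fusion step concrete, the following is a minimal numpy sketch of how two feature-induced pose pseudo-measurements, each carrying its own covariance from the geometric minimisation, could be fused sequentially in an error-state Kalman update on a six-dimensional se(3) error state. The function names, the identity measurement Jacobian, and the dummy measurement values are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def ekf_update(x, P, z_err, R_meas, H=None):
    """One EKF update on a 6-DoF se(3) error state.

    x      : (6,) running error-state estimate
    P      : (6, 6) state covariance
    z_err  : (6,) minimal pose error between the feature-induced
             measurement and the predicted pose (assumed precomputed)
    R_meas : (6, 6) covariance returned by the geometric minimisation,
             used directly as the measurement noise
    H      : measurement Jacobian; identity for a direct pose measurement
    """
    H = np.eye(6) if H is None else H
    S = H @ P @ H.T + R_meas                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z_err - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Dummy pseudo-measurements standing in for the minimisation outputs.
z_points, R_points = np.full(6, 0.01), np.eye(6) * 1e-3   # point features
z_edges,  R_edges  = np.full(6, 0.02), np.eye(6) * 5e-3   # edge contours

# Fuse the point- and edge-feature outputs sequentially: each
# pseudo-measurement is weighted by its own covariance, so a feature
# type with poor matches (large R_meas) contributes less to the state.
x, P = np.zeros(6), np.eye(6) * 1e-2
for z_err, R_meas in [(z_points, R_points), (z_edges, R_edges)]:
    x, P = ekf_update(x, P, z_err, R_meas)
```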
Lastly, an innovative deep learning-based framework is developed in which the features extracted by a convolutional front-end are processed by long short-term memory cells, yielding the first deep recurrent convolutional neural network for spacecraft pose estimation. The framework is used to compare the performance achieved by visible-only and multimodal input sequences, where the addition of the thermal band is shown to greatly improve performance during sunlit sequences. Potential limitations of this modality are also identified, such as when the target’s thermal signature becomes comparable to Earth’s during eclipse.
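As a rough illustration of this kind of architecture only (the abstract does not specify layer details), a deep recurrent convolutional pose regressor could be sketched in PyTorch as below; the four-channel visible-plus-thermal input, the layer sizes, and the quaternion output head are all assumptions.

```python
import torch
import torch.nn as nn

class RecurrentPoseNet(nn.Module):
    """Hypothetical CNN + LSTM pose regressor: a convolutional front-end
    extracts per-frame features from stacked visible/thermal channels,
    and an LSTM models their temporal evolution before regressing the
    relative translation and rotation at every time-step."""

    def __init__(self, in_channels=4, hidden=256):
        # in_channels=4 assumes 3 visible (RGB) + 1 thermal channel.
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (B*T, 128, 1, 1)
        )
        self.lstm = nn.LSTM(128, hidden, batch_first=True)
        self.fc_t = nn.Linear(hidden, 3)      # relative translation
        self.fc_q = nn.Linear(hidden, 4)      # rotation as a unit quaternion

    def forward(self, seq):
        # seq: (batch, time, channels, height, width)
        b, t = seq.shape[:2]
        feats = self.cnn(seq.flatten(0, 1)).flatten(1)   # (B*T, 128)
        out, _ = self.lstm(feats.view(b, t, -1))         # (B, T, hidden)
        q = self.fc_q(out)
        return self.fc_t(out), q / q.norm(dim=-1, keepdim=True)

# Example: 2 sequences of 8 frames, 4-channel 128x128 images.
net = RecurrentPoseNet()
t_hat, q_hat = net(torch.randn(2, 8, 4, 128, 128))
```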

Description

© Cranfield University 2021. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright owner.

Keywords

Spacecraft, Spaceflight manoeuvres, Computer vision, Thermal infrared imaging

Rights

© Cranfield University, 2021. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright holder.
