Browsing by Author "Aouf, Nabil"
Now showing 1 - 20 of 51
Item Open Access
3D automatic target recognition for missile platforms (2017-05) Kechagias Stamatis, Odysseas; Aouf, Nabil
The quest for military Automatic Target Recognition (ATR) procedures arises from the demand to reduce collateral damage and fratricide. Although missiles with two-dimensional ATR capabilities do exist, future Light Detection and Ranging (LIDAR) missiles with three-dimensional (3D) ATR abilities should significantly improve missile effectiveness in complex battlefields, because 3D ATR encodes the target's underlying structure and thus reinforces target recognition. However, current military-grade 3D ATR and applied computer vision algorithms for object recognition are not optimal for an ATR-capable LIDAR-based missile, primarily because of the computational and memory (storage) constraints that missiles impose. This research therefore first introduces a 3D descriptor taxonomy for the Local and the Global descriptor domains, capable of capturing the processing cost of each potential option. Through these taxonomies, the optimum missile-oriented descriptor per domain is identified, pinpointing the research route for this thesis. In terms of 3D descriptors suitable for missiles, the contribution of this thesis is a 3D Global descriptor and four 3D Local descriptors, namely the SURF Projection Recognition (SPR), the Histogram of Distances (HoD), its processing-efficient variant (HoD-S) and its binary variant (B-HoD). These are challenged against current state-of-the-art 3D descriptors on standard commercial datasets, as well as on highly credible simulated air-to-ground missile engagement scenarios that consider various platform parameters and nuisances, including simulated scale change and atmospheric disturbances.
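The histogram-of-distances idea behind the HoD family can be illustrated with a minimal sketch (a simplified illustration, not the thesis implementation): pairwise point distances within a local neighbourhood are normalised and binned into a fixed-length histogram that serves as the descriptor.

```python
import numpy as np

def hod_descriptor(points, n_bins=8):
    """Toy histogram-of-distances descriptor for a local point neighbourhood.

    Pairwise Euclidean distances are normalised by their maximum and binned
    into a fixed-length, L1-normalised histogram.
    """
    pts = np.asarray(points, dtype=float)
    # All pairwise distances (upper triangle, excluding the diagonal).
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)
    dists = d[iu]
    hist, _ = np.histogram(dists / dists.max(), bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

rng = np.random.default_rng(0)
desc = hod_descriptor(rng.standard_normal((50, 3)))
```

Because the descriptor depends only on distances, it is invariant to rigid rotations of the neighbourhood, which is part of what makes such representations cheap and robust for time-critical matching.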
The results obtained over the different datasets show an outstanding computational improvement, on average 19 times faster than state-of-the-art techniques in the literature, while maintaining, and on some occasions improving, the detection rate, with at least 90% of targets correctly classified.

Item Open Access
Automatic x-ray image segmentation and clustering for threat detection (SPIE, 2017-10-05) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Nam, David; Belloni, Carole
Firearms currently pose a known risk at the borders. The enormous number of X-ray images from parcels, luggage and freight coming into each country via rail, aviation and maritime transport presents a continual challenge to screening officers. To further improve UK capability and aid officers in their search for firearms, we propose an automated object segmentation and clustering architecture to focus officers' attention on high-risk threat objects. Our proposal utilizes dual-view single/dual-energy 2D X-ray imagery and blends radiology, image processing and computer vision concepts. It consists of a triple-layered processing scheme that segments the luggage contents based on the effective atomic number of each object, followed by a dual-layered clustering procedure. The latter comprises a mild and a hard clustering phase. The former is based on a number of morphological operations from the image-processing domain and aims to disjoin mildly connected objects and to filter noise. The hard clustering phase exploits local feature matching techniques from the computer vision domain, aiming to sub-cluster the clusters obtained from the mild clustering stage.
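The "mild clustering" role of morphological operations can be sketched with a plain-numpy morphological opening (erosion then dilation) on a binary mask; this is a generic illustration of how thin bridges between mildly connected objects are removed, not the paper's exact pipeline.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 structuring element (zero-padded)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask):
    """Morphological opening: erosion followed by dilation.

    Removes one-pixel bridges between objects and speckle noise, the kind
    of disjoining step the mild clustering phase relies on.
    """
    return dilate(erode(mask))

# Two 3x3 blobs joined by a 1-pixel bridge: opening removes the bridge.
m = np.zeros((7, 9), dtype=bool)
m[2:5, 1:4] = True          # left blob
m[2:5, 5:8] = True          # right blob
m[3, 4] = True              # thin bridge
opened = opening(m)
```

After opening, the two blobs survive intact while the connecting pixel is gone, so a connected-component pass would now count two objects instead of one.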
Evaluation on highly challenging single- and dual-energy X-ray imagery reveals the architecture's promising performance.

Item Open Access
Autonomous navigation for mobility scooters: a complete framework based on open-source software (IEEE, 2019-11-28) Cecotti, Marco; Kanchwala, Husain; Aouf, Nabil
In recent years, there has been a growing demand for small vehicles targeted at users with mobility restrictions and designed to operate on pedestrian areas. The users of these vehicles are generally required to be in control for the entire duration of their journey, but many more people could benefit from them if some of the driving tasks were automated. In this scenario, we set out to develop an autonomous mobility scooter, with the aim of understanding the commercial feasibility of such a product. This paper reports on the progress of this project, proposing a framework for autonomous navigation on pedestrian areas, and focusing in particular on the construction of suitable costmaps. The proposed framework is based on open-source software, including a library created by the authors for the generation of costmaps.

Item Open Access
B-HoD: A Lightweight and Fast Binary Descriptor for 3D Object Recognition and Registration (IEEE, 2017-08-03) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Chermak, Lounis
3D object recognition and registration in computer vision applications has lately drawn much attention, as it is capable of superior performance compared to its 2D counterpart. Although a number of high-performing solutions exist, it is still challenging to further reduce processing time and memory requirements to meet the needs of time-critical applications. In this paper we propose an extension of the 3D descriptor Histogram of Distances (HoD) into the binary domain, named the Binary-HoD (B-HoD).
Our binary quantization procedure, along with the proposed pre-processing step, reduces both processing time and memory requirements by an order of magnitude compared to current state-of-the-art 3D descriptors. Evaluation on two popular low-quality datasets shows its promising performance.

Item Open Access
Benchmarking of local feature detectors and descriptors for multispectral relative navigation in space (Elsevier, 2020-04-07) Rondao, Duarte; Aouf, Nabil; Richardson, Mark A.; Dubois-Matra, Olivier
Optical-based navigation for space is a field growing in popularity due to the appeal of efficient techniques such as Visual Simultaneous Localisation and Mapping (VSLAM), which rely on automatic feature tracking with low-cost hardware. However, low-level image processing algorithms have traditionally been measured and tested for ground-based exploration scenarios. This paper aims to fill the gap in the literature by analysing state-of-the-art local feature detectors and descriptors with a tailor-made synthetic dataset emulating a Non-Cooperative Rendezvous (NCRV) with a complex spacecraft, featuring variations in illumination, rotation, and scale. Furthermore, the performance of the algorithms in the Long Wavelength Infrared (LWIR) band is investigated as a possible solution to the challenges inherent to on-orbit imaging in the visible, such as diffuse light scattering and eclipse conditions. The Harris, GFTT, DoG, Fast-Hessian, FAST and CenSurE detectors and the SIFT, SURF, LIOP, ORB, BRISK and FREAK descriptors are benchmarked on images of Envisat. A combination of Fast-Hessian with BRISK was found to be the most robust, while still capable of running on a low-resolution, low-acquisition-rate setup.
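Benchmarks of detector/descriptor pairs such as the one above typically score candidates by nearest-neighbour matching with a ratio test; for binary descriptors like BRISK the natural distance is Hamming. A minimal, library-free sketch of that matching step (illustrative only, with toy 0/1 descriptors rather than real BRISK output):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (0/1 arrays)."""
    return int(np.count_nonzero(a != b))

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    A match (i, j) is kept only if the best distance is clearly smaller
    than the second best, which suppresses ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((hamming(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

rng = np.random.default_rng(1)
base = rng.integers(0, 2, size=(5, 64))   # five 64-bit toy descriptors
noisy = base.copy()
noisy[:, :3] ^= 1                         # flip 3 bits per descriptor
matches = ratio_test_matches(base, noisy)
```

With three flipped bits against an expected distance of about 32 between unrelated descriptors, every descriptor matches its own noisy copy and nothing else.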
For large baselines, the rate of false positives increases, limiting the use of these methods in model-based strategies.

Item Open Access
Biomimetic vision-based collision avoidance system for MAVs (2017-05) Isakhani, Hamid; Aouf, Nabil; Whidborne, James F.
This thesis proposes a secondary collision avoidance algorithm for micro aerial vehicles based on the luminance-difference processing exhibited by the Lobula Giant Movement Detector (LGMD), a wide-field visual neuron located in the lobula layer of a locust's nervous system. In particular, we address the design, modulation, hardware implementation, and testing of a computationally simple yet robust collision avoidance algorithm based on the novel concept of quadfurcated luminance-difference processing (QLDP). The micro and nano classes of unmanned robots are the primary target applications of this algorithm; however, it could also be implemented on advanced robots as a fail-safe redundant system. The algorithm addresses some of the major detection challenges, such as obstacle proximity, collision threat potentiality, and contrast correction within the robot's field of view, to generate a precise yet simple collision-free motor control command in real time. Additionally, it has proven effective in detecting edges independently of background or obstacle colour, size, and contour. To achieve this, the proposed QLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level (spike), which determines whether the robot's field of view must be dissected into four quarters, where each quadrant's response is analysed and interpreted against the others to determine the most secure path.
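The quadfurcation step, splitting the field of view into four quadrants and comparing their responses to pick the safest heading, can be sketched on a grayscale frame. This is a toy stand-in for the QLDP idea, with edge energy approximated by absolute luminance differences; it is not the thesis code.

```python
import numpy as np

def safest_quadrant(frame):
    """Return the quadrant index (0=TL, 1=TR, 2=BL, 3=BR) with the lowest
    mean edge energy, i.e. the least apparent obstacle activity.

    Edge energy is approximated by absolute horizontal plus vertical
    luminance differences, echoing the luminance-difference processing
    described above.
    """
    gx = np.abs(np.diff(frame, axis=1))
    gy = np.abs(np.diff(frame, axis=0))
    energy = gx[:-1, :] + gy[:, :-1]          # crop to a common shape
    h, w = energy.shape
    quads = [energy[:h // 2, :w // 2], energy[:h // 2, w // 2:],
             energy[h // 2:, :w // 2], energy[h // 2:, w // 2:]]
    return int(np.argmin([q.mean() for q in quads]))

frame = np.zeros((64, 64))
frame[8:24, 40:56] = 1.0     # a bright obstacle in the top-right quadrant
quad = safest_quadrant(frame)
```

A real system would steer toward the winning quadrant; on this frame the obstacle's edges load only the top-right quadrant, so any other quadrant is preferred.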
Ultimately, the computational load and the performance of the model are assessed against an eclectic set of off-line as well as real-time real-world collision scenarios to validate the model's asserted capability to avoid obstacles more than 670 mm before collision (real-world) while moving at 1.2 m/s, with a successful avoidance rate of 90% when processing at a frequency of 120 Hz, which to the best of our knowledge is much superior to the results reported in the contemporary related literature.

Item Open Access
A comparison of trajectory planning and control frameworks for cooperative autonomous driving (American Society of Mechanical Engineers, 2021-01-07) Bezerra Viana, Icaro; Kanchwala, Husain; Ahiska, Kenan; Aouf, Nabil
This work considers the cooperative trajectory-planning problem along a double lane change scenario for autonomous driving. In this paper we develop two frameworks to solve this problem based on distributed model predictive control (MPC). The first approach solves a single non-linear MPC problem. The general idea is to introduce a collision cost function in the optimization problem at the planning stage, achieving a smooth and bounded collision function and thus avoiding the need for tight hard constraints. The second method uses a hierarchical scheme with two main units: a trajectory-planning layer based on a mixed-integer quadratic program (MIQP) computes an on-line collision-free trajectory using simplified motion dynamics, and a tracking controller unit follows the trajectory from the higher level using the non-linear vehicle model. Connected and automated vehicles (CAVs) sharing their planned trajectories lay the foundation of the cooperative behaviour. In the tests and evaluation of the proposed methodologies, MATLAB-CARSIM co-simulation is utilized; CARSIM provides the high-fidelity model of the multi-body vehicle dynamics.
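The first framework's smooth, bounded collision cost, introduced so the planner need not impose tight hard separation constraints, can be illustrated with a toy penalty that decays with inter-vehicle distance. The logistic form, the safety distance and the weight below are assumptions for illustration; the paper's exact function is not reproduced here.

```python
import math

def collision_cost(p_ego, p_other, safe_dist=5.0, weight=100.0):
    """Smooth, bounded collision penalty between two vehicle positions.

    A logistic function of the gap to the safety distance: near zero when
    the vehicles are far apart, saturating at `weight` as they close in.
    Being smooth and bounded, it can sit inside an MPC cost in place of a
    hard, non-convex separation constraint.
    """
    d = math.dist(p_ego, p_other)
    return weight / (1.0 + math.exp(2.0 * (d - safe_dist)))

far = collision_cost((0.0, 0.0), (50.0, 0.0))   # vehicles far apart
near = collision_cost((0.0, 0.0), (1.0, 0.0))   # vehicles dangerously close
```

Because the penalty and its gradient are finite everywhere, a standard non-linear MPC solver can trade it off against tracking and comfort terms without feasibility issues.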
MATLAB-CARSIM joint simulation experiments compare both approaches for a cooperative double lane change maneuver of two vehicles moving along a one-way three-lane road with obstacles.

Item Open Access
DeepLO: multi-projection deep LIDAR odometry for space orbital robotics rendezvous relative navigation (Elsevier, 2020-07-30) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Dubanchet, Vincent; Richardson, Mark A.
This work proposes a new Light Detection and Ranging (LIDAR) based navigation architecture that is appropriate for uncooperative relative robotic space navigation applications. In contrast to current solutions that exploit 3D LIDAR data directly, our architecture applies a Deep Recurrent Convolutional Neural Network (DRCNN) to multi-projected imagery of the acquired 3D LIDAR data. The advantages of the proposed DRCNN are an effective feature representation, facilitated by its Convolutional Neural Network module; robust modeling of the navigation dynamics, due to its incorporated Recurrent Neural Network; and a low processing time. Our trials evaluate several current state-of-the-art space navigation methods on various simulated but credible scenarios involving a satellite model developed by Thales Alenia Space (France). Additionally, we evaluate real satellite LIDAR data acquired in our lab.
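The multi-projection step, turning a 3D LIDAR cloud into fixed-size 2D imagery that a convolutional front end can ingest, can be sketched as an orthographic depth-image rasteriser. This is a simplified stand-in for the paper's projection scheme; resolution and extent are arbitrary illustrative choices.

```python
import numpy as np

def depth_image(points, res=16, extent=1.0):
    """Orthographic projection of a point cloud onto the XY plane.

    Each pixel stores the nearest (smallest-z) point falling into it;
    empty pixels stay at +inf. Fixed-size images like this are what a
    DRCNN-style front end would consume.
    """
    img = np.full((res, res), np.inf)
    for x, y, z in points:
        u = int((x + extent) / (2 * extent) * (res - 1))
        v = int((y + extent) / (2 * extent) * (res - 1))
        if 0 <= u < res and 0 <= v < res:
            img[v, u] = min(img[v, u], z)   # keep the closest return
    return img

pts = np.array([[0.0, 0.0, 2.0],    # two returns in the same pixel
                [0.0, 0.0, 1.0],
                [0.9, -0.9, 3.0]])  # one return near a corner
img = depth_image(pts)
```

Projecting the same cloud onto several planes (XY, XZ, YZ) yields a multi-channel image stack, which is the multi-projection notion the abstract refers to.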
Results demonstrate that the proposed architecture, although trained solely on simulated data, is highly adaptable and more appealing than current algorithms on both simulated and real LIDAR data scenarios, affording better odometry accuracy at lower computational requirements.

Item Open Access
Evaluating 3D local descriptors and recursive filtering schemes for LIDAR based uncooperative relative space navigation (Wiley, 2019-09-05) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Dubanchet, Vincent
We propose a light detection and ranging (LIDAR) based relative navigation scheme that is appropriate for uncooperative relative space navigation applications. Our technique combines the encoding power of three-dimensional (3D) local descriptors, matched via a correspondence grouping scheme, with the robust rigid transformation estimation capability of the proposed adaptive recursive filtering techniques. Trials evaluate several current state-of-the-art 3D local descriptors and recursive filtering techniques on a number of both real and simulated scenarios involving various space objects, including satellites and asteroids. Results demonstrate that the proposed architecture affords a 50% odometry accuracy improvement over current solutions, while also incurring a low computational burden. From our trials we conclude that the 3D descriptor histogram of distances short (HoD-S) combined with adaptive αβ filtering is the most appealing combination for the majority of the scenarios evaluated, as it combines high-quality odometry with a low processing burden.

Item Open Access
Explainability of deep SAR ATR through feature analysis (IEEE, 2020-10-20) Belloni, Carole; Aouf, Nabil; Balleri, Alessio; Le Caillec, Jean-Marc; Merlet, Thomas
Understanding the decision-making process of deep learning networks is a key challenge which has rarely been investigated for Synthetic Aperture Radar (SAR) images.
In this paper, a set of new analytical tools is proposed and applied to a Convolutional Neural Network (CNN) handling Automatic Target Recognition (ATR) on two SAR datasets containing military targets.

Item Open Access
FPGA-based multi-sensor relative navigation in space: preliminary analysis in the framework of the I3DS H2020 project (International Astronautical Federation, 2018-10-04) Estébanez Camarena, Monica; Feetham, Luke; Scannapieco, Antonio; Aouf, Nabil
The Horizon 2020 Integrated 3D Sensors (I3DS) project brings together the following entities from across Europe: Thales Alenia Space (France/Italy/UK/Spain), SINTEF (Norway), TERMA (Denmark), COSINE (Netherlands), PIAP Space (Poland), HERTZ Systems (Poland), and Cranfield University (UK). I3DS is co-funded under the Horizon 2020 EU research and development programme and is part of the Strategic Research Cluster on Space Robotics Technologies. The ambition of I3DS is to produce a standardised modular Inspector Sensor Suite (INSES) for autonomous orbital and planetary applications in future space missions. Orbital applications encompass activities such as on-orbit servicing and repair, space rendezvous and docking, collision avoidance and active debris removal (ADR). Planetary applications include simultaneous localisation and mapping (SLAM) for planetary exploration and general navigation in unknown environments for scientific purposes. These envisaged space applications can be tackled by exploiting the flexibility, high performance and long product life of FPGAs. Conventional FPGAs are subject to Single Event Upsets (SEU) caused by space radiation, which can lead to failure; therefore, space-graded FPGAs, such as those developed by Xilinx, are targeted within the I3DS project. Currently, the main use of the FPGA within this robust end-to-end multi-sensor suite is navigation and data pre-processing.
The aim of this paper is to assess the capability of FPGAs to carry out complex operations, such as running navigation algorithms for space applications. The on-board software architecture is motivated as follows: raw data acquired from the various sensors (including, among others, a high-resolution camera, a stereo camera and a LiDAR) is pre-processed to provide robust and optimised inputs to 3D navigation algorithms. Noise reduction and conversion into formats suitable for the navigation algorithms are therefore the main aims of the data pre-processing. Techniques adopted in this phase include outlier rejection and data dimensionality reduction for large point clouds, e.g. from LiDAR, and geometric and radiometric correction of the camera images. The pre-processed data then feeds state-of-the-art relative navigation algorithms, including Generalised Iterative Closest Point (GICP) for dense 3D point clouds, relative positioning with fiducial markers, and visual odometry. The system environment for the preliminary operation is a test-bench setup comprising a standard desktop computer and a non-space-graded FPGA (Xilinx UltraZed-EG). This FPGA was chosen for its similarity to space-graded boards also provided by Xilinx. Experimental tests on the algorithms are being performed in the framework of the validation campaign for the I3DS project.
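The outlier-rejection and dimensionality-reduction steps mentioned for LiDAR pre-processing can be sketched with a simple statistical distance filter plus voxel down-sampling. These are generic point-cloud techniques, chosen here for illustration; the project's exact pipeline is not specified in the abstract.

```python
import numpy as np

def reject_outliers(points, k=2.0):
    """Drop points whose distance to the centroid exceeds mean + k*std."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return pts[d <= d.mean() + k * d.std()]

def voxel_downsample(points, voxel=0.5):
    """Keep one representative point (the mean) per occupied voxel.

    This is the kind of dimensionality reduction that keeps large LiDAR
    clouds tractable for downstream algorithms such as GICP.
    """
    pts = np.asarray(points, float)
    keys = np.floor(pts / voxel).astype(int)
    cells = {}
    for key, p in zip(map(tuple, keys), pts):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in cells.values()])

rng = np.random.default_rng(2)
cloud = rng.normal(0.0, 0.1, size=(200, 3))        # dense blob near origin
cloud = np.vstack([cloud, [[10.0, 10.0, 10.0]]])   # one gross outlier
clean = reject_outliers(cloud)
small = voxel_downsample(clean, voxel=0.2)
```

Both steps are embarrassingly parallel over points or cells, which is one reason such pre-processing maps well onto FPGA fabric.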
Preliminary results indicate that the data pre-processing can be carried out efficiently on the FPGA board.

Item Open Access
A furcated visual collision avoidance system for an autonomous micro robot (IEEE, 2018-07-23) Isakhani, Hamid; Aouf, Nabil; Kechagias-Stamatis, Odysseas; Whidborne, James F.
This paper proposes a secondary reactive collision avoidance system for the micro class of robots based on a novel approach known as Furcated Luminance-Difference Processing (FLDP), inspired by the Lobula Giant Movement Detector, a wide-field visual neuron located in the lobula layer of a locust's nervous system. This paper addresses some of the major collision avoidance challenges: obstacle proximity and direction estimation, and operation in GPS-denied environments with irregular lighting. Additionally, the approach has proven effective in detecting edges independently of background colour, size, and contour. The FLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level, which determines whether the robot's field of view must be dissected, with each section's response compared against the others to generate a simple collision-free maneuver.
Ultimately, the computational load and the performance of the model are assessed against an eclectic set of off-line as well as real-time real-world collision scenarios, validating the model's asserted capability to avoid obstacles more than 670 mm before collision while moving at 1.2 m/s, with a successful avoidance rate of 90% when processing at 120 Hz on a simple single-core microcontroller. This is sufficient to conclude the system's feasibility for real-time real-world applications that require a fail-safe collision avoidance system.

Item Open Access
Fusing deep learning and sparse coding for SAR ATR (IEEE, 2018-08-10) Kechagias-Stamatis, Odysseas; Aouf, Nabil
We propose a multi-modal and multi-discipline data fusion strategy appropriate for Automatic Target Recognition (ATR) on Synthetic Aperture Radar imagery. Our architecture fuses a proposed clustered version of the AlexNet Convolutional Neural Network with sparse coding theory, which is extended to facilitate an adaptive elastic net optimization concept. Evaluation on the MSTAR dataset yields the highest ATR performance reported to date: 99.33% and 99.86% for the 3- and 10-class problems respectively.

Item Open Access
High-speed multi-dimensional relative navigation for uncooperative space objects (Elsevier, 2019-05-03) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Richardson, Mark A.
This work proposes a high-speed Light Detection and Ranging (LIDAR) based navigation architecture that is appropriate for uncooperative relative space navigation applications. In contrast to current solutions that exploit 3D LIDAR data directly, our architecture transforms the odometry problem from 3D space into multiple 2.5D ones and completes it by utilizing a recursive filtering scheme.
Trials evaluate several current state-of-the-art 2D keypoint detection and local feature description methods, as well as recursive filtering techniques, on a number of simulated but credible scenarios involving a satellite model developed by Thales Alenia Space (France). The most appealing performance is attained by the 2D keypoint detector Good Features to Track (GFTT) combined with the feature descriptor KAZE, further combined with either the H∞ or the Kalman recursive filter. Experimental results demonstrate that, compared to current algorithms, the GFTT/KAZE combination is highly appealing, affording one order of magnitude more accurate odometry and a very low processing burden which, depending on the competitor method, may exceed one order of magnitude faster computation.

Item Open Access
H∞ LIDAR odometry for spacecraft relative navigation (IET, 2016-01-04) Kechagias-Stamatis, Odysseas; Aouf, Nabil
Current light detection and ranging (LIDAR) based odometry solutions used for spacecraft relative navigation suffer from several deficiencies, including an off-line training requirement and reliance on the iterative closest point (ICP) algorithm, which does not guarantee a globally optimum solution. To counter this, the authors suggest a robust architecture that overcomes the problems of current proposals by combining 3D local feature matching with an adaptive variant of the H∞ recursive filtering process. Trials on real laser scans of an Envisat model demonstrate that the proposed architecture affords at least one order of magnitude better accuracy than ICP.

Item Open Access
Kernel-based fault diagnosis of inertial sensors using analytical redundancy (2017) Vitanov, Ivan V.; Aouf, Nabil
Kernel methods are able to exploit high-dimensional spaces for representational advantage, while only operating implicitly in such spaces, thus incurring none of the computational cost of doing so.
They appear to have the potential to advance the state of the art in control and signal processing applications and are increasingly seeing adoption across these domains. Applications of kernel methods to fault detection and isolation (FDI) have been reported, but few in aerospace research, though they offer a promising way to perform or enhance fault detection. It is mostly in process monitoring, for example in the chemical processing industry, that these techniques have found broader application. This research explores the use of kernel-based solutions in model-based fault diagnosis for aerospace systems. Specifically, it investigates the application of these techniques to the detection and isolation of IMU/INS sensor faults, a canonical open problem in the aerospace field. Kernel PCA, a kernelised non-linear extension of the well-known principal component analysis (PCA) algorithm, is implemented to tackle IMU fault monitoring. An isolation scheme is extrapolated based on the strong duality known to exist between the parity space technique, probably the most widely practised method of FDI in the aerospace domain, and linear principal component analysis. The algorithm, termed partial kernel PCA, benefits from the isolation properties of the parity space method as well as the non-linear approximation ability of kernel PCA. Further, a number of unscented non-linear filters for FDI are implemented, equipped with data-driven transition models based on Gaussian processes, a non-parametric Bayesian kernel method. A distributed estimation architecture is proposed which, besides fault diagnosis, can contemporaneously perform sensor fusion.
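The kernel PCA machinery described above can be sketched in a few lines: compute an RBF Gram matrix, centre it in feature space, and project onto its leading eigenvectors. This is the standard textbook formulation, not the thesis' partial kernel PCA variant.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel (minimal textbook sketch).

    The Gram matrix is centred in feature space, then its leading
    eigenvectors give the non-linear principal components whose residuals
    a fault-monitoring scheme would track.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                       # RBF Gram matrix
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # centre in feature space
    w, v = np.linalg.eigh(Kc)                     # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]      # take the largest ones
    return Kc @ v[:, idx] / np.sqrt(w[idx])       # projected samples

rng = np.random.default_rng(3)
X = rng.standard_normal((40, 5))                  # stand-in sensor data
Z = kernel_pca(X)
```

Because the data appear only through the kernel matrix, the non-linear feature space is never materialised, which is the "implicit operation" advantage the abstract highlights.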
It also allows faulty sensors to be decoupled from the navigation solution.

Item Open Access
Local feature based automatic target recognition for future 3D active homing seeker missiles (Elsevier, 2017-12-13) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Gray, Greer Jillian; Chermak, Lounis; Richardson, Mark A.; Oudyi, F.
We propose an architecture appropriate for future Light Detection and Ranging (LIDAR) active homing seeker missiles with Automatic Target Recognition (ATR) capabilities. Our proposal enhances military targeting performance by extending ATR into the third dimension. From a military and aerospace industry point of view this is appealing, as weapon effectiveness against camouflage, concealment and deception techniques can be substantially improved. Specifically, we present a missile seeker 3D ATR architecture that relies on the 3D local feature based SHOT descriptor and a dual-role pipeline with a number of pre- and post-processing operations. We evaluate our architecture on a number of missile engagement scenarios in various environmental setups, with the missile at various altitudes, obliquities, distances to the target and scene resolutions. Under these demanding conditions, the recognition performance is highly promising. Even in the extreme case of reducing the database to a single template per target, our interchangeable ATR architecture still provides highly acceptable performance.
Although we focus on future intelligent missile systems, our approach can be applied to a great range of time-critical complex systems in space, air and ground environments for military, law-enforcement, commercial and research purposes.

Item Open Access
Multi-view monocular pose estimation for spacecraft relative navigation (AIAA, 2018-01-07) Rondao, Duarte; Aouf, Nabil
This paper presents a method of estimating the pose of a non-cooperative target for spacecraft rendezvous applications employing exclusively a monocular camera and a three-dimensional model of the target. This model is used to build an offline database of pre-rendered keyframes with known poses. An online stage solves the model-to-image registration problem by matching two-dimensional point and edge features from the camera to the database. We apply our method to retrieve the motion of the now-inoperative satellite Envisat. The combination of both feature types is shown to produce a robust pose solution, even for large displacements relative to the keyframes, which does not rely on real-time rendering, making it attractive for autonomous systems applications.

Item Open Access
Multimodal Navigation for Accurate Space Rendezvous Missions (2021-05) Rondao, Duarte O De M A; Aouf, Nabil; Richardson, Mark A.
Relative navigation is paramount in space missions that involve rendezvousing between two spacecraft. It demands accurate and continuous estimation of the six degree-of-freedom relative pose, as this stage involves close-proximity, fast-reaction operations that can last up to five orbits. This has routinely been achieved with active sensors such as lidar, but their large size, cost, power consumption and limited operational range remain a stumbling block for en masse on-board integration. With the onset of faster processing units, lighter and cheaper passive optical sensors are emerging as the suitable alternative for autonomous rendezvous in combination with computer vision algorithms.
Current vision-based solutions, however, are limited by adverse illumination conditions such as solar glare, shadowing, and eclipse. These effects are exacerbated when the target carries no cooperative markers to aid the estimation process and is incapable of controlling its rotational state. This thesis explores novel model-based methods that exploit sequences of monocular images acquired by an on-board camera to accurately carry out spacecraft relative pose estimation for non-cooperative close-range rendezvous with a known artificial target. The proposed solutions tackle the current challenges of imaging in the visible spectrum and investigate the contribution of the long wavelength infrared (or "thermal") band towards a combined multimodal approach. As part of the research, a visible-thermal synthetic dataset of a rendezvous approach with the defunct satellite Envisat is generated from the ground up using a realistic orbital camera simulator. From the rendered trajectories, the performance of several state-of-the-art feature detectors and descriptors is first evaluated for both modalities in a scenario tailored for short and wide baseline image processing transforms. Multiple combinations, including pairings of algorithms with their non-native counterparts, are tested. Computational runtimes are assessed on an embedded hardware board. From the insight gained, a method to estimate the pose in the visible band is derived from minimising geometric constraints between online local point and edge contour features matched to keyframes generated offline from a 3D model of the target. The combination of both feature types is demonstrated to achieve a pose solution for a tumbling target using a sparse set of training images, bypassing the need for hardware-accelerated real-time renderings of the model.
The proposed algorithm is then augmented with an extended Kalman filter which processes each feature-induced minimisation output as an individual pseudo-measurement, fusing them to estimate the relative pose and velocity states at each time-step. Both the minimisation and the filtering are established using Lie group formalisms, allowing the covariance of the solution computed by the former to be automatically incorporated as measurement noise in the latter, providing an automatic weighting of each feature type directly related to the quality of the matches. The predicted states are then used to search for new feature matches in the subsequent time-step. Furthermore, a method to derive a coarse viewpoint estimate to initialise the nominal algorithm is developed based on probabilistic modelling of the target's shape. The robustness of the complete approach is demonstrated on several synthetic and laboratory test cases involving two types of target undergoing extreme illumination conditions. Lastly, an innovative deep learning-based framework is developed by processing the features extracted by a convolutional front end with long short-term memory cells, proposing the first deep recurrent convolutional neural network for spacecraft pose estimation. The framework is used to compare the performance achieved by visible-only and multimodal input sequences, where the addition of the thermal band is shown to greatly improve performance during sunlit sequences. Potential limitations of this modality are also identified, such as when the target's thermal signature is comparable to Earth's during eclipse.
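The pseudo-measurement fusion idea above, each feature type contributing its own measurement with its own noise, so better-matched features automatically pull harder, can be illustrated with a scalar Kalman update. This is a deliberately simplified Euclidean stand-in for the thesis' Lie-group formulation; the numbers are arbitrary.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse state (x, P) with (z, R)."""
    K = P / (P + R)                        # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Prior pose estimate (1D stand-in), then two pseudo-measurements:
# a precise point-feature one and a noisier edge-feature one.
x, P = 0.0, 4.0
x, P = kalman_update(x, P, 1.0, 0.5)   # point features: small R, big pull
x, P = kalman_update(x, P, 3.0, 5.0)   # edge features: large R, small pull
```

The low-noise point measurement moves the estimate most of the way toward 1.0, while the high-noise edge measurement nudges it only slightly toward 3.0 and the posterior variance shrinks with every fused measurement, mirroring the automatic weighting the covariance propagation provides.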