Browsing by Author "Feetham, Luke"
Now showing 1 - 6 of 6
Item Open Access
FPGA-based multi-sensor relative navigation in space: Preliminary analysis in the framework of the I3DS H2020 project (International Astronautical Federation, 2018-10-04)
Estébanez Camarena, Monica; Feetham, Luke; Scannapieco, Antonio; Aouf, Nabil

The Horizon 2020 Integrated 3D Sensors (I3DS) project brings together the following entities from across Europe: THALES ALENIA SPACE - France / Italy / UK / Spain, SINTEF (Norway), TERMA (Denmark), COSINE (Netherlands), PIAP Space (Poland), HERTZ Systems (Poland), and Cranfield University (UK). I3DS is co-funded under the Horizon 2020 EU research and development program and is part of the Strategic Research Cluster on Space Robotics Technologies. The ambition of I3DS is to produce a standardised modular Inspector Sensor Suite (INSES) for autonomous orbital and planetary applications in future space missions. Orbital applications encompass activities such as on-orbit servicing and repair, space rendezvous and docking, collision avoidance, and active debris removal (ADR). Planetary applications include simultaneous localisation and mapping (SLAM) for surface exploration and general navigation in unknown environments for scientific purposes. These envisaged space applications can be tackled by exploiting the flexibility, high performance and long product life of FPGAs. Conventional FPGAs are subject to Single Event Upsets (SEUs) caused by space radiation, which can lead to their failure; therefore, space-grade FPGAs, such as those developed by Xilinx, are targeted within the I3DS project. Currently, the main use of the FPGA within the development of this robust end-to-end multi-sensor suite is for navigation and data pre-processing. The aim of this paper is to assess the capability of FPGAs to carry out complex operations, such as running navigation algorithms for space applications.
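The data pre-processing mentioned above typically starts by removing spurious returns from raw sensor data before it reaches the navigation filters. As a minimal illustration of the idea (not the I3DS implementation), a statistical outlier-rejection pass over a point cloud might look like this:

```python
import math
import random

def reject_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier rejection for a 3D point cloud: drop points
    whose mean distance to their k nearest neighbours exceeds the
    global mean by more than std_ratio standard deviations.
    Brute force O(n^2) for clarity; real pipelines use a k-d tree."""
    mean_knn = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)
    mu = sum(mean_knn) / len(mean_knn)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn))
    cutoff = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_knn) if d <= cutoff]

# A tight cluster of 200 returns plus two spurious far-away points.
random.seed(0)
cloud = [tuple(random.gauss(0.0, 0.1) for _ in range(3)) for _ in range(200)]
cloud += [(5.0, 5.0, 5.0), (-6.0, 4.0, 2.0)]
filtered = reject_outliers(cloud)  # the two spurious points are dropped
```

The parameters `k` and `std_ratio` here are illustrative defaults; a flight implementation would tune them to the sensor's noise characteristics.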
The motivation for the development of the on-board software architecture is as follows: raw data acquired from the various sensors – including, among others, a high-resolution camera, a stereo camera and a LiDAR – is pre-processed to provide robust, optimised inputs to the 3D navigation algorithms. The main aims of the data pre-processing are therefore noise reduction and conversion into formats suitable for the navigation algorithms. Techniques adopted in this phase include outlier rejection and dimensionality reduction for large point clouds, e.g. from LiDAR, and geometric and radiometric correction of the camera images. The pre-processed data then feeds state-of-the-art relative navigation algorithms, including Generalised Iterative Closest Point (GICP) for dense 3D point clouds, relative positioning with fiducial markers, and visual odometry. The system environment for the preliminary operation is a test-bench setup formed by a standard desktop computer and a non-space-grade FPGA board (Xilinx UltraZed-EG). This board was chosen for its similarity to space-grade boards also provided by Xilinx. Experimental tests on the algorithms are being performed in the framework of the validation campaign for the I3DS project. Preliminary results indicate that the data pre-processing can be carried out efficiently on the FPGA board.

Item Open Access
Perception fields: analysing distributions of optical features as a proximity navigation tool for autonomous probes around asteroids (IEEE, 2021-08-19)
Di Fraia, Marco Zaccaria; Feetham, Luke; Felicetti, Leonard; Sanchez, Joan-Pau; Chermak, Lounis

This paper suggests a new way of interpreting visual information perceived by visible-spectrum cameras in the proximity of small celestial bodies. At close ranges, camera-based perception processes generally rely on computational constructs known as features.
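The quantity of such features detected in an image is the raw signal the paper works with. As a toy illustration of checking whether feature counts track illumination geometry (the numbers below are invented, not the paper's data), a plain-Python correlation test:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: as the spacecraft-target-Sun angle grows and
# shadows deepen, fewer features are detected per image.
phase_deg = [10, 25, 40, 55, 70, 85, 100]
n_features = [940, 910, 850, 760, 640, 500, 330]

r = pearson(phase_deg, n_features)  # strongly negative correlation
```

A real analysis would of course run an actual detector over rendered imagery; this only sketches the statistical step.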
Our hypothesis is that trends in the quantity of available optical features can be correlated with variations in the angular distance from the source of illumination. The approach is based on treating properties of these detected optical features as readings of a field – the perception fields of the title – assumed to be induced by the coupling of the environmental conditions and the state of the sensing device. The extreme range of shapes, surface properties and gravity fields of small celestial bodies heavily affects visual proximity operations. Self-contained ancillary tools that provide context and an evaluation of estimator performance while using the fewest possible priors are therefore extremely valuable in these conditions. This preliminary study presents an analysis of the occurrences of optical features observed around two asteroids, 101955 Bennu and (8567) 1996 HW1, in visual data simulated within Blender, a computer graphics engine. The comparison of three different feature detectors showed distinctive trends in the distribution of the detected optical features, directly correlated with the spacecraft-target-Sun angle, confirming our hypothesis.

Item Open Access
Robust vision based slope estimation and rocks detection for autonomous space landers (2017-06-13)
Feetham, Luke; Aouf, Nabil

As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology so that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in the touch-down locations of current missions and the absence of any effective hazard detection and avoidance capability, mission designers must exercise extreme caution when selecting candidate landing sites.
The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard-free terrain in order to minimise the risk of mission-ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. The most scientifically interesting locations on planetary surfaces are rarely found in such hazard-free and easily accessible places, so goals have been set for a number of advanced capabilities in future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission-critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for using a single camera system as the primary sensor in the preliminary development of a hazard detection system capable of supporting pin-point landing operations for next-generation robotic planetary landing craft. The requirements for such a system have been stated as the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter. The primary contribution of this thesis, aimed at achieving these goals, is the development of a feature-based, self-initialising, fully adaptive structure-from-motion (SFM) algorithm based on a robust square-root unscented Kalman filtering framework, and the fusion of the resulting SFM scene-structure estimates with a sophisticated shape-from-shading (SFS) algorithm. This combination has the potential to produce very dense and highly accurate digital elevation models (DEMs) with sufficient resolution to achieve the sensing accuracy required by next-generation landers.
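One standard way to combine two depth estimates of the same DEM cell, given per-estimate variances, is inverse-variance weighting; a minimal sketch of that scheme (illustrative, not necessarily the fusion method used in the thesis):

```python
def fuse_depths(z_sfm, var_sfm, z_sfs, var_sfs):
    """Inverse-variance (maximum-likelihood) fusion of two independent
    depth estimates for the same DEM cell: the fused value leans toward
    the lower-variance estimate, and the fused variance shrinks below
    both inputs."""
    w_sfm, w_sfs = 1.0 / var_sfm, 1.0 / var_sfs
    z = (w_sfm * z_sfm + w_sfs * z_sfs) / (w_sfm + w_sfs)
    return z, 1.0 / (w_sfm + w_sfs)

# Hypothetical cell: a sparse but accurate SFM estimate and a dense but
# noisier SFS estimate of the same surface height (metres).
z, var = fuse_depths(z_sfm=12.0, var_sfm=0.04, z_sfs=12.5, var_sfs=0.36)
# z lands close to 12.0 (the SFM value); var is below both inputs.
```

The appeal of a scheme like this is that the dense SFS field inherits the metric accuracy of the sparse SFM anchors wherever both are available.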
Such a system is capable of adapting to changes in the external noise environment that may result from intermittent and varying rocket motor thrust and/or sudden turbulence during descent. These disturbances translate to variations in the vibrations experienced by the platform and introduce varying levels of motion blur that affect the accuracy of image feature-tracking algorithms. Accurate scene-structure estimates have been obtained with this system from both real and synthetic descent imagery, allowing the production of accurate DEMs. While further work is required to produce DEMs with the resolution and accuracy needed to determine slopes and detect small objects such as rocks at the required levels, this thesis presents a very strong foundation on which to build and goes a long way towards a highly robust and accurate solution.

Item Open Access
Space-oriented navigation solutions with integrated sensor-suite: the I3DS H2020 project (International Astronautical Federation, 2018-10-04)
Scannapieco, Antonio; Feetham, Luke; Camarena, Monica; Aouf, Nabil

In both orbital applications, such as on-orbit servicing and repair, rendezvous and docking, and active debris removal (ADR), and planetary applications, such as the exploration of unknown environments by rovers for scientific purposes, GPS-denied navigation has a very large impact on the successful outcome of missions. A sensor suite, comprising several different sensors, in turn requires a suite of navigation algorithms able to deal with different kinds of input. Some of these algorithms, however, can be shared between multiple sensors after thorough pre-processing of the raw data. Additionally, the same kind of sensor can require two different navigation algorithms depending on the scenario.
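Among the algorithms such suites rely on are point-cloud registration methods like (Generalised) Iterative Closest Point. The core alignment step, reduced here to 2D with known correspondences (real ICP also has to search for the correspondences and iterate), can be sketched as:

```python
import math

def align_2d(src, dst):
    """Closed-form least-squares 2D rigid alignment of matched point
    pairs: returns the rotation angle and translation mapping src onto
    dst. This is the inner step of ICP-style registration."""
    n = len(src)
    csx = sum(x for x, _ in src) / n
    csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n
    cdy = sum(y for _, y in dst) / n
    s_cos = s_sin = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy          # centred source point
        bx, by = dx - cdx, dy - cdy          # centred target point
        s_cos += ax * bx + ay * by           # sum of dot products
        s_sin += ax * by - ay * bx           # sum of cross products
    theta = math.atan2(s_sin, s_cos)
    ct, st = math.cos(theta), math.sin(theta)
    tx = cdx - (ct * csx - st * csy)
    ty = cdy - (st * csx + ct * csy)
    return theta, (tx, ty)

# Recover a known motion: rotate by 0.3 rad, translate by (1, -2).
c, s = math.cos(0.3), math.sin(0.3)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
dst = [(c * x - s * y + 1.0, s * x + c * y - 2.0) for x, y in src]
theta, (tx, ty) = align_2d(src, dst)
```

The full 3D case replaces the angle with a rotation matrix recovered by SVD, but the structure of the step is the same.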
The work described in this paper presents and critically discusses the approach to precise relative navigation with a complete suite of sensors, and its performance in different space-oriented application scenarios. Standalone navigation filters are examined. In the case of a high-resolution camera in an orbital scenario, the pose of a target with respect to a chaser can be robustly obtained with the aid of fiducial markers. Stereo-camera-based navigation is addressed with visual odometry, where the problem of scale estimation is solved by means of triangulation. Since the outputs of the sensor suite also include dense 3D point clouds, Iterative Closest Point and Histogram of Distances (HoD) approaches with Kalman filtering are analyzed, paying attention to the provision of correct sensor characterization. The results for each filter are examined exhaustively, highlighting their strengths and the points where improvements can be achieved.

Item Open Access
Towards scene understanding implementing the stixel world (IEEE, 2019-03-07)
Grenier, Amélie; Alzoubi, Alaa; Feetham, Luke; Nam, David

In this paper, we present our work towards scene understanding based on modeling the scene prior to understanding its content. We describe the environment representation model used, the Stixel World, and its benefits for compact scene representation. We show preliminary results of its application in a diverse environment and the limitations reached in our experiments using imaging systems. We argue that this method was developed for an ideal scenario and does not generalise well to uncommon changes in the environment. We also found that the method is sensitive to the quality of the stereo rectification and the calibration of the optics, among other parameters, which makes it time-consuming and delicate to prepare for real-time applications.
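The calibration sensitivity noted above follows directly from the fact that the Stixel World is built on dense stereo disparity: depth scales as the reciprocal of disparity, so small disparity errors at long range produce large depth errors. A minimal sketch of the standard rectified-stereo relation (the focal length and baseline below are illustrative values, not from the paper):

```python
def disparity_to_depth(d_px, focal_px, baseline_m):
    """Depth from stereo disparity under the rectified pinhole model:
    Z = f * B / d. For a fixed disparity error, the resulting depth
    error grows quadratically with range, which is why rectification
    and calibration quality matter so much."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d_px

# Illustrative rig: 800 px focal length, 25 cm baseline.
z_far = disparity_to_depth(4.0, focal_px=800.0, baseline_m=0.25)   # 50 m
z_off = disparity_to_depth(5.0, focal_px=800.0, baseline_m=0.25)   # 40 m
# A single-pixel disparity error at this range shifts depth by 10 m.
```

At close range the same one-pixel error is nearly harmless, which is why miscalibration shows up mainly in the distant parts of the stixel representation.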
In a theoretical discussion, we argue that pixel-wise semantic segmentation techniques can address some of the shortcomings of the concept presented.

Item Open Access
UAV-assisted real-time evidence detection in outdoor crime scene investigations (Wiley, 2022-03-09)
Georgiou, Argyrios; Masters, Peter; Johnson, Stephen; Feetham, Luke

Nowadays, a plethora of unmanned aerial vehicle (UAV) designs that vary significantly in size, shape, operating flight altitude and flight range have been developed to provide multidimensional capabilities across a wide range of military and civil applications. In forensic and police applications, drones are increasingly used instead of helicopters to assist field officers in searching for vulnerable missing persons or targeting criminals in crime hotspots, and also to provide high-quality data for the documentation and reconstruction of the forensic scene or to facilitate evidence detection. This paper examines the contribution of UAVs to real-time evidence detection in outdoor crime scene investigations. The project innovates by providing a quantitative comparative analysis of UAV-based and traditional search methods through the simulation of a crime scene investigation for evidence detection. The first experimental phase tested the usefulness of UAVs as a forensic detection tool by posing the dilemma of humans versus drones. The second phase examined the ability of the drone to reproduce the obtained performance results across different terrains, while the third phase tested detection accuracy by subjecting the drone-recorded videos to computer vision techniques. The experimental results indicate that drone deployment in evidence detection can provide increased accuracy and speed of detection over a range of terrain types.
Additionally, it was found that real-time object detection based on computer vision techniques could be the key enabler of drone-based investigations if interoperability between drones and these techniques is achieved.