Visual navigation in unmanned air vehicles with simultaneous location and mapping (SLAM)


dc.contributor.advisor Aouf, Nabil
dc.contributor.author Li, X
dc.date.accessioned 2014-08-15T09:43:42Z
dc.date.available 2014-08-15T09:43:42Z
dc.date.issued 2014-08-15
dc.identifier.uri http://dspace.lib.cranfield.ac.uk/handle/1826/8644
dc.description © Cranfield University, 2013 en_UK
dc.description.abstract This thesis focuses on the theory and implementation of visual navigation techniques for autonomous air vehicles in outdoor environments. The aim of this study is to fuse data and cooperatively build an incremental map for multiple air vehicles through Simultaneous Location and Mapping (SLAM). Without loss of generality, two unmanned air vehicles (UAVs) are investigated for the generation of ground maps from current and a priori data. Each UAV is equipped with an inertial navigation system (INS) and external sensors, which may combine visible and thermal infrared (IR) image sensors, with special emphasis on stereo digital cameras. The corresponding stereopsis provides the crucial three-dimensional (3-D) measurements. The visual aerial navigation problems tackled here are therefore formulated as stereo vision based SLAM (vSLAM) for both single- and multiple-UAV applications. The investigation begins with feature extraction methodologies. Potential landmarks are selected from airborne camera images, since distinctive points identified in the images are a prerequisite for the subsequent stages, and the choice of feature extraction algorithm strongly influences feature matching/association in 3-D mapping. To this end, effective variants of the scale-invariant feature transform (SIFT) algorithm are employed in comprehensive feature extraction experiments on both visible and infrared aerial images. As the UAV often operates from an uncertain location in complex and cluttered environments, dense and blurred images are practically inevitable. Finding feature correspondences therefore becomes a challenge; it involves feature matching between the first and second images of the same stereo frame, and data association between mapped landmarks and camera measurements. A number of tests with different techniques are conducted by incorporating ideas from graph theory and graph matching. Novel approaches, based respectively on classification and on hypergraph transformation (HGTM), are proposed to solve the data association problem in stereo vision based navigation. These strategies are then investigated for UAV applications within SLAM so as to achieve robust matching/association in highly cluttered environments. Unknown nonlinearities in the system model, together with noise, introduce undesirable INS drift and errors. Appropriate appraisals of the pros and cons of various candidate filtering algorithms are therefore undertaken against the specific requirements of the applications. These filters are investigated within visual SLAM for data filtering and fusion in both single and cooperative navigation, so that the updated information required to construct and maintain a globally consistent map can be provided by a suitable algorithm that balances computational accuracy against the cost imposed by the growing map size. The research provides an overview of the feasible filters, namely the extended Kalman filter, extended information filter, unscented Kalman filter and unscented H-infinity filter. As visual intuition plays an important role in human object recognition, research on textured 3-D mapping is conducted to support both statistical and visual analysis for aerial navigation.
Various techniques are proposed to smooth textures and minimise mosaicing errors during the reconstruction of 3-D textured maps with vSLAM for UAVs. Finally, with covariance intersection (CI) techniques adopted across multiple sensors, various cooperative data fusion strategies are introduced for distributed and decentralised UAVs performing Cooperative vSLAM (C-vSLAM). Despite the complex structure of the highly nonlinear system models that arise on cooperative platforms, robust and accurate estimation in collaborative mapping and location is achieved through HGTM-based association and communication strategies. Data fusion among UAVs and estimation for visual navigation via SLAM are verified and validated on both simulated and real data sets. en_UK
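
The abstract summarises a stereo vSLAM pipeline whose first stage is SIFT-based feature extraction and matching between the two images of each stereo frame. As a minimal, hedged sketch of that general technique (not the thesis's own code, which also handles infrared imagery and graph/HGTM-based association), the following Python example assumes OpenCV is available; the function name, ratio-test threshold and grayscale inputs are illustrative choices.

    # Minimal sketch of SIFT-based stereo feature matching, assuming OpenCV.
    # Illustrative only; the thesis additionally covers IR images, graph
    # matching and HGTM-based data association.
    import cv2
    import numpy as np

    def match_stereo_pair(left_gray, right_gray, ratio=0.7):
        """Extract SIFT features in both images of a stereo frame and return
        putative correspondences that pass Lowe's ratio test."""
        sift = cv2.SIFT_create()
        kp_l, des_l = sift.detectAndCompute(left_gray, None)
        kp_r, des_r = sift.detectAndCompute(right_gray, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(des_l, des_r, k=2)

        # Keep matches whose best distance is clearly better than the second best.
        good = [pair[0] for pair in knn
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
        pts_l = np.float32([kp_l[m.queryIdx].pt for m in good])
        pts_r = np.float32([kp_r[m.trainIdx].pt for m in good])
        return pts_l, pts_r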
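
The abstract also compares recursive filters (extended Kalman, extended information, unscented Kalman and unscented H-infinity) for fusing INS and visual measurements. A minimal sketch of a single EKF-style measurement update is given below for orientation only; the state layout, measurement function and noise matrices are placeholders, not the thesis's system model.

    # Minimal sketch of one EKF measurement update, of the kind compared in the
    # thesis. Symbols follow standard EKF conventions; models are placeholders.
    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        """One EKF update: state x, covariance P, measurement z,
        measurement function h, its Jacobian H, measurement noise R."""
        y = z - h(x)                          # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new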
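
For cooperative vSLAM, the abstract mentions covariance intersection (CI) for fusing estimates exchanged between UAVs whose cross-correlations are unknown. The sketch below implements the standard CI fusion rule with the weight chosen to minimise the trace of the fused covariance; this criterion and the SciPy-based optimisation are assumptions, not necessarily the thesis's formulation.

    # Minimal sketch of covariance intersection (CI) for fusing two estimates
    # with unknown cross-correlation, as used for decentralised map fusion
    # between UAVs. Trace minimisation for the weight is one common choice.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def covariance_intersection(x_a, P_a, x_b, P_b):
        """Fuse estimates (x_a, P_a) and (x_b, P_b) by covariance intersection."""
        Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)

        def fused_trace(w):
            return np.trace(np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv))

        # Pick the weight that minimises the trace of the fused covariance.
        w = minimize_scalar(fused_trace, bounds=(1e-6, 1 - 1e-6), method="bounded").x
        P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        x = P @ (w * Pa_inv @ x_a + (1.0 - w) * Pb_inv @ x_b)
        return x, P, w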
dc.subject Autonomous navigation en_UK
dc.subject Unmanned aerial vehicles en_UK
dc.subject Simultaneous location and mapping (SLAM) en_UK
dc.subject Navigation en_UK
dc.title Visual navigation in unmanned air vehicles with simultaneous location and mapping (SLAM) en_UK
dc.type Thesis or dissertation en_UK
dc.type.qualificationlevel Doctoral en_UK
dc.type.qualificationname PhD en_UK

