CERES
Browsing by Author "Zahidi, Usman A."

Now showing 1 - 3 of 3
  • CHIMES: An enhanced end-to-end Cranfield hyperspectral image modelling and evaluation system (Open Access)
    (2020-02) Zahidi, Usman A.; Yuen, Peter W. T.; James, David B.
Hyperspectral remote sensing enables establishing semantics from an image by providing the spectral detail needed to differentiate materials. Airborne/satellite setups for remote sensing are typically expensive in both time and cost, so it is important to predict the performance of such systems in advance. Hyperspectral scene simulation is a technique that allows the detailed spatial and spectral information of a natural scene to be reconstructed without the need for expensive and time-consuming airborne/spaceborne image acquisition systems. It helps in predicting the potential performance of airborne/satellite systems; moreover, it enables varying atmospheric conditions, estimating degradation in sensor performance for better uncertainty analysis and traceability, performance analysis of data processing algorithms, and counter-measures/camouflage assessment in military applications. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) system developed by the Rochester Institute of Technology and the Camouflage Electro-Optic Simulator (CameoSim) by Lockheed Martin are the two earliest research and commercial products, respectively, that incorporate hyperspectral rendering for accurate physics-based radiance estimation. Although CameoSim is a well-established scene simulator, it does not support volumetric scattering or a localised adjacency model. DIRSIG has added support for these features in its newly developed version, DIRSIG5. Due to export control restrictions these simulators are typically inaccessible, which motivated the development of an in-house scene simulator. This thesis summarises the research that constitutes part of the deliverable under the DSTL R-Cloud project for the establishment of an in-house HSI scene simulator, known as the Cranfield Hyperspectral Image Modelling and Evaluation System (CHIMES). CHIMES is a physics-based rendering simulator whose main concept follows directly from the radiative transfer (RT) equation, with some components adopted from DIRSIG and CameoSim. The goals of the present research were set, and the work progressed, as follows:
  • The establishment of CHIMES from scratch.
  • Validation of CHIMES through direct comparison with a commercial-off-the-shelf (COTS) simulator, CameoSim (CS).
  • Enhancement of CHIMES over the COTS simulator (e.g. CS) to include automatic in-scene atmospheric parametrisation, a localised adjacency-effect model and volumetric scattering, to achieve a more realistic scene simulation, particularly for rugged terrain.
  • Proposing methods for mitigating difficult issues, such as shadows, in scene simulation.
This thesis summarises the work performed according to the above four objectives, with the main results as follows:
  • CHIMES has been shown to reproduce the scene simulation performed by a COTS simulator (e.g. CameoSim) under various atmospheric conditions.
  • An automatic atmosphere-parameterisation search algorithm has been shown to allow simulation of the scene without repeated trial-and-error atmospheric parameter adjustments.
  • Two adjacency models have been developed under this work: the Background One-Spectra Adjacency Effect Model (BOAEM) and the Texture-Spectra Incorporated Adjacency Effect Model (TIAEM). The BOAEM is similar to the model adopted in CS, with the distinctive addition of volumetric scattering; the TIAEM is a terrain-dependent adjacency model that is considerably more sophisticated. For the high-altitude scene, TIAEM performs better than BOAEM by 6.0% and better than CameoSim by 10.0%, particularly in the 2D geometric simulation, in terms of ℓ1-norm error (sketched below). For the lower-altitude scene, BOAEM performs better than both TIAEM and CameoSim, by 22.0% and 16.0% respectively. In a 3D scene (i.e. terrain with a Digital Elevation Model (DEM)) with the sensor at lower altitude, CameoSim's error rises by a factor of five relative to ground truth (GT); BOAEM still performs better than TIAEM, by a similar 22% ℓ1-norm error.
  • A means of assessing the shadowed pixels of the scene has been proposed; validation of the model requires more comprehensive ground-truth (GT) data and will be performed in future research.
Most of the above results have been published in three journal papers as part of the contributions towards the HSI research community.
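The ℓ1-norm comparison quoted in these results can be illustrated with a minimal sketch; the array names and shapes below are assumptions for illustration, not code from the thesis:

```python
import numpy as np

def l1_norm_error(simulated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean per-pixel l1-norm error between two radiance cubes.

    Both cubes are assumed to be (rows, cols, bands) arrays of
    at-sensor radiance on the same spatial and spectral grid.
    """
    if simulated.shape != ground_truth.shape:
        raise ValueError("cubes must share the same (rows, cols, bands) shape")
    # l1 distance along the spectral axis, averaged over all pixels
    per_pixel = np.abs(simulated - ground_truth).sum(axis=-1)
    return float(per_pixel.mean())

# Illustrative comparison of two simulated cubes against ground truth:
# err_tiaem = l1_norm_error(cube_tiaem, cube_gt)
# err_boaem = l1_norm_error(cube_boaem, cube_gt)
```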
  • An end-to-end hyperspectral scene simulator with alternate adjacency effect models and its comparison with CameoSim (Open Access)
    (MDPI, 2019-12-24) Zahidi, Usman A.; Yuen, Peter W. T.; Piper, Jonathan; Godfree, Peter S.
In this research, we developed a new rendering-based end-to-end hyperspectral scene simulator, CHIMES (Cranfield Hyperspectral Image Modelling and Evaluation System), which generates nadir images of passively illuminated 3-D outdoor scenes in the Visible, Near-Infrared (NIR) and Short-Wave Infrared (SWIR) regions, ranging from 360 nm to 2520 nm. MODTRAN (MODerate resolution TRANsmission) is used to generate the sky-dome environment map, which includes sun and sky radiance along with the polarisation effect of the sky due to Rayleigh scattering. Moreover, we perform path tracing and implement ray interaction with the medium and volumetric backscattering at rendering time to model the adjacency effect. We propose two variants of adjacency model. The first incorporates a single spectral albedo as the averaged background of the scene; this model, called the Background One-Spectra Adjacency Effect Model (BOAEM), is a CameoSim-like model created for performance comparison. The second model calculates background albedo from a pixel's neighbourhood, whose size depends on the air volume between sensor and target and on the differential air density up to the sensor altitude. The average background reflectance of all neighbourhood pixels is computed at rendering time to estimate the total upwelled scattered radiance by volumetric scattering. This model is termed the Texture-Spectra Incorporated Adjacency Effect Model (TIAEM). Moreover, to estimate the underlying atmospheric condition, MODTRAN is run with varying aerosol optical thickness, and its total ground-reflected radiance (TGRR) is compared with the TGRR of a known in-scene material. The goodness of fit is evaluated in each iteration, and the MODTRAN output with the best fit is selected (sketched below). We perform a tri-modal validation of the simulators on a real hyperspectral scene by varying the atmospheric condition, the terrain surface model and the proposed variants of the adjacency model. We compared the results of our model with Lockheed Martin's well-established scene simulator CameoSim and with ground truth (GT) acquired by HySpex cameras. In clear-sky conditions, both models of CHIMES and CameoSim are in close agreement; however, in the searched overcast conditions, CHIMES BOAEM is shown to perform better than CameoSim in terms of the ℓ1-norm error of the whole scene with respect to GT. TIAEM produces better radiance shape and background-statistics covariance with respect to GT, which is key to good target detection performance. We also report that the results of CameoSim have a many-fold higher error for the same scene when the flat-surface terrain is replaced with a Digital Elevation Model (DEM) based rugged one.
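The atmospheric-parameter search described above reduces to a best-fit loop over candidate aerosol optical thickness values. A minimal sketch follows, assuming a hypothetical `run_modtran` wrapper and an R²-style goodness-of-fit; both are illustrative stand-ins, not the paper's actual implementation:

```python
import numpy as np

def goodness_of_fit(modelled: np.ndarray, reference: np.ndarray) -> float:
    """R^2-style goodness of fit between two TGRR spectra."""
    ss_res = np.sum((reference - modelled) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def search_atmosphere(reference_tgrr, aot_candidates, run_modtran):
    """Return the aerosol optical thickness whose modelled TGRR best
    fits the TGRR of a known in-scene material.

    `run_modtran` is a hypothetical callable wrapping a radiative
    transfer run; it takes an aerosol optical thickness and returns
    a TGRR spectrum on the same bands as `reference_tgrr`.
    """
    best_aot, best_fit = None, -np.inf
    for aot in aot_candidates:
        fit = goodness_of_fit(run_modtran(aot), reference_tgrr)
        if fit > best_fit:
            best_aot, best_fit = aot, fit
    return best_aot, best_fit
```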
  • A radiative transfer model-based multi-layered regression learning to estimate shadow map in hyperspectral images (Open Access)
    (MDPI, 2019-08-06) Zahidi, Usman A.; Chatterjee, Ayan; Yuen, Peter W. T.
The application of the Empirical Line Method (ELM) for hyperspectral Atmospheric Compensation (AC) rests on the premise of a linear relationship between a material's reflectance and its appearance. ELM solves the Radiative Transfer (RT) equation under a specialized constraint by means of in-scene white and black calibration panels (a minimal sketch follows this abstract). The reflectance of a material is invariant to illumination. Exploiting this property, we articulated a mathematical formulation based on the RT model to create cost functions relating variably illuminated regions within a scene. In this paper, we propose multi-layered regression learning-based recovery of the radiance components, i.e., the total ground-reflected radiance and the path radiance, from the reflectance and radiance images of the scene. These decomposed components represent terms in the RT equation and enable us to relate variable illumination. We therefore assume that the Hyperspectral Image (HSI) radiance of the scene is provided and that AC can be processed on it, preferably with the QUick Atmospheric Correction (QUAC) algorithm; QUAC is preferred because it does not account for surface models. The output of the proposed algorithm is an intermediate map of the scene, to which our mathematically derived binary and multi-label thresholds are applied to classify shadowed and non-shadowed regions. Results from satellite and airborne nadir imagery are shown in this paper. Ground truth (GT) is generated by ray-tracing on a LiDAR-based surface model of the scene, in the form of contour data. Comparison of our results with the GT implies that our algorithm's binary-classification shadow maps outperform other existing shadow detection algorithms in true positives, i.e., detecting shadow where the ground truth marks shadow. It also has the lowest false negatives, i.e., it rarely labels a shadowed region as non-shadowed, compared to existing algorithms.
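For readers unfamiliar with ELM, a minimal sketch of the two-panel linear solve is given below; the function and variable names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def elm_gain_offset(white_radiance, black_radiance,
                    white_reflectance, black_reflectance):
    """Per-band gain/offset of the Empirical Line Method.

    Fits L = gain * rho + offset through the two calibration panels,
    so `offset` approximates per-band path radiance. Panel radiances
    are (bands,) spectra; panel reflectances may be scalars that
    broadcast across bands.
    """
    gain = (white_radiance - black_radiance) / (white_reflectance - black_reflectance)
    offset = white_radiance - gain * white_reflectance
    return gain, offset

def elm_reflectance(radiance_cube, gain, offset):
    """Invert the linear model to recover reflectance, band by band.

    `radiance_cube` is a (rows, cols, bands) array; gain and offset
    broadcast over the spatial dimensions.
    """
    return (radiance_cube - offset) / gain
```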
