Multi-spectral fusion using generative adversarial networks for UAV detection of wild fires
Abstract
Wild fires are increasingly responsible for immense ecological damage. Unmanned aerial vehicles (UAVs) are being used for monitoring and early detection of wild fires. Recently, significant research has been conducted on using Deep Learning (DL) vision models for fire and smoke segmentation. Such models predominantly use images from the visible spectrum, which in operation are prone to high false-positive rates and sub-optimal performance across environmental conditions. In comparison, fire detection using infrared (IR) images has been shown to be robust to lighting and environmental variations, but long-range IR sensors remain expensive. There is increasing interest in the fusion of visible and IR images, since a fused representation combines the visual and thermal information of a scene. This yields significant benefits, particularly in reducing false positives and increasing model robustness. However, the impact of fusing the two spectra on the performance of fire segmentation has not been extensively investigated. In this paper, we assess multiple image fusion techniques and evaluate the performance of a U-Net based segmentation model on each of three image representations: visible, IR, and fused. We also identify subsets of fire classes that are observed to achieve better results using the fused representation.
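As a rough illustration of the evaluation setup the abstract describes, the sketch below builds a compact U-Net style segmentation model and runs it on each of the three input representations. Naive channel concatenation stands in for fusion here; the paper's actual fusion is GAN-based, and its network depth, channel widths, and layer names are not given in this abstract, so everything below is an illustrative assumption rather than the authors' implementation.

# Minimal sketch: segment fire/background from visible, IR, and a
# fused (channel-concatenated) input with a compact U-Net.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Compact U-Net: one downsampling stage, one skip connection."""
    def __init__(self, in_ch, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)           # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

# The three input representations compared in the paper: visible (3 ch),
# IR (1 ch), and fused (here, naive RGB+IR concatenation -> 4 ch).
rgb = torch.rand(1, 3, 256, 256)
ir = torch.rand(1, 1, 256, 256)
fused = torch.cat([rgb, ir], dim=1)

for name, x in [("visible", rgb), ("ir", ir), ("fused", fused)]:
    model = TinyUNet(in_ch=x.shape[1])
    logits = model(x)                 # (1, n_classes, 256, 256)
    mask = logits.argmax(dim=1)       # per-pixel segmentation prediction
    print(name, tuple(mask.shape))

In this early-fusion arrangement, only the first convolution's input width changes between representations, which keeps the comparison across visible, IR, and fused inputs otherwise identical; a learned fusion such as the GAN-based approach in the title would instead produce the fused image before it reaches the segmentation model.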