Rock segmentation in the navigation vision of the planetary rovers

dc.contributor.author: Kuang, Boyu
dc.contributor.author: Wisniewski, Mariusz
dc.contributor.author: Rana, Zeeshan A.
dc.contributor.author: Zhao, Yifan
dc.date.accessioned: 2021-12-06T16:35:50Z
dc.date.available: 2021-12-06T16:35:50Z
dc.date.issued: 2021-11-24
dc.description.abstract: Visual navigation is an essential part of planetary rover autonomy, and rock segmentation has emerged as an important interdisciplinary topic spanning image processing, robotics, and mathematical modeling. Rock segmentation is a challenging problem for rover autonomy because of its high computational cost, real-time requirements, and annotation difficulty. This research proposes a rock segmentation framework and a rock segmentation network (NI-U-Net++) to aid the visual navigation of rovers. The framework consists of two stages: a pre-training process and a transfer-training process. The pre-training process applies a synthetic algorithm to generate synthetic images, which are then used to pre-train NI-U-Net++. The synthetic algorithm enlarges the image dataset and provides pixel-level masks, both of which are common obstacles in machine learning tasks. The pre-training process achieves state-of-the-art performance compared with related studies, with an accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE) of 99.41%, 0.8991, 0.9459, and 0.0775, respectively. The transfer-training process fine-tunes the pre-trained NI-U-Net++ on real-life images and achieves an accuracy, IoU, Dice score, and RMSE of 99.58%, 0.7476, 0.8556, and 0.0557, respectively. Finally, the transfer-trained NI-U-Net++ is integrated into the planetary rover navigation vision pipeline and achieves real-time performance of 32.57 frames per second (an inference time of 0.0307 s per frame). The framework requires manual annotation of only about 8% (183 images) of the 2250 navigation-vision images, making it a labor-saving solution for rock segmentation tasks. The proposed framework and NI-U-Net++ improve on the performance of state-of-the-art models, and the synthetic algorithm improves the process of creating valid training data for rock segmentation. All source code, datasets, and trained models from this research are openly available in Cranfield Online Research Data (CORD).
dc.identifier.citation: Kuang B, Wisniewski M, Rana ZA, Zhao Y. (2021) Rock segmentation in the navigation vision of the planetary rovers. Mathematics, Volume 9, Issue 23, November 2021, Article number 3048
dc.identifier.issn: 2227-7390
dc.identifier.uri: https://doi.org/10.3390/math9233048
dc.identifier.uri: https://dspace.lib.cranfield.ac.uk/handle/1826/17312
dc.language.iso: en
dc.publisher: MDPI
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: image segmentation
dc.subject: remote sensing
dc.subject: terrain identification
dc.subject: data synthesis
dc.subject: transfer learning
dc.title: Rock segmentation in the navigation vision of the planetary rovers
dc.type: Article
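
The abstract above reports four mask-quality measures: accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE). As an illustration only, the short Python sketch below shows one common way to compute these metrics for a binary rock mask with NumPy; the 0.5 probability threshold, the function name rock_mask_metrics, and the exact formulas are assumptions for this sketch and are not taken from the paper or its released code.

import numpy as np

def rock_mask_metrics(pred, target, threshold=0.5, eps=1e-7):
    """Accuracy, IoU, Dice, and RMSE for a binary rock mask.

    pred holds per-pixel rock probabilities in [0, 1]; target is the
    ground-truth mask of 0s and 1s. The 0.5 threshold and these exact
    formulas are illustrative assumptions, not the paper's definitions.
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    pred_bin = (pred >= threshold).astype(np.float64)

    intersection = np.sum(pred_bin * target)          # pixels labelled rock by both masks
    union = np.sum(np.clip(pred_bin + target, 0, 1))  # pixels labelled rock by either mask

    accuracy = np.mean(pred_bin == target)
    iou = intersection / (union + eps)
    dice = 2 * intersection / (pred_bin.sum() + target.sum() + eps)
    rmse = np.sqrt(np.mean((pred - target) ** 2))     # computed on the raw probabilities
    return accuracy, iou, dice, rmse

# Tiny worked example on a 2 x 3 mask
target = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[0.9, 0.8, 0.2], [0.4, 0.7, 0.1]])
acc, iou, dice, rmse = rock_mask_metrics(pred, target)
print(f"accuracy={acc:.4f}  IoU={iou:.4f}  Dice={dice:.4f}  RMSE={rmse:.4f}")

For the figures quoted in the abstract, the authors' own evaluation code in the CORD repository is the authoritative reference.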

Files

Original bundle
Name: Vision_of_the_planetary_rovers-2021.pdf
Size: 13.04 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.63 KB
Format: Item-specific license agreed to upon submission