Browsing by Author "Beycimen, Semih"
Item (Open Access): Predicting autonomous vehicle navigation parameters via image and image-and-point cloud fusion-based end-to-end methods (IEEE, 2022-10-13)
Authors: Beycimen, Semih; Ignatyev, Dmitry; Zolotas, Argyrios
Abstract: This paper presents a study of end-to-end methods for predicting autonomous vehicle navigation parameters. Image-based and image-and-LiDAR-points-based end-to-end models were trained under Nvidia learning architectures as well as DenseNet-169, ResNet-152 and Inception-v4. Various learning parameters for autonomous vehicle navigation, input models and data pre-processing algorithms for image data (image cropping, noise removal, semantic segmentation) were investigated and tested, and the best-performing combinations from this investigation were selected for the main framework of the study. Results reveal that the image-and-LiDAR-points-based method trained under the Nvidia architecture achieves the best accuracy for steering angle and speed. (Minimal code sketches of the pre-processing and model stages follow these listings.)

Item (Open Access): Vision-based autonomous UGV detection, tracking, and following for a UAV (AIAA, 2024-01-04)
Authors: Amil, Fatma G.; Sen, Muhammet; Kurt, Huseyin Burak; Beycimen, Semih; Millidere, Murat
Abstract: This study proposes a methodology for unmanned ground vehicle (UGV) navigation in off-road environments where GPS signals are not available. The Husky A200 at Cranfield University, United Kingdom, was used as the UGV in this research project. Because of the limited field of vision of UGVs, a UAV-UGV collaboration approach was adopted. The methodology involves five steps. The first step is divided into three phases: aerial images of the UGV are captured from the UAV; the UGV is detected and tracked using computer vision techniques; and the relative pose (position and heading) between the UAV and UGV is estimated continuously from the visual data. In the second step, the UAV maintains a fixed pose (position and heading) relative to the UGV. The third step involves capturing aerial images from the UAV's mounted camera and transmitting them to the ground station in real time to build a global traversability map that classifies terrain features by traversability. In the fourth step, additional sensors such as LiDAR, radar, and IMU are used to refine the global traversability map. In the final step, the UGV navigates autonomously using the refined traversability map. This study focuses on the first two steps of the methodology; subsequent studies will address the remaining steps. (A minimal detection-and-tracking sketch follows these listings.)
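
The first item's pipeline applies image cropping and noise removal before training. Below is a minimal OpenCV sketch of those two steps; the crop region, filter size, and target resolution are illustrative assumptions, not the paper's settings, and the semantic segmentation stage is omitted because it would require a trained model.

```python
# Sketch of camera-frame pre-processing (cropping + noise removal).
# Crop bounds, blur kernel, and output size are assumed for illustration.
import cv2

def preprocess(frame):
    roi = frame[60:-25, :]                       # crop sky and vehicle hood (assumed region)
    denoised = cv2.GaussianBlur(roi, (3, 3), 0)  # light noise removal
    resized = cv2.resize(denoised, (200, 66))    # match the assumed network input size below
    return resized
```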
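The first item trains end-to-end models that map camera frames to navigation parameters. Here is a minimal PyTorch sketch in the style of the published Nvidia PilotNet architecture; the layer sizes follow that public layout, and the two-output head for steering angle and speed is an assumption matching the paper's prediction targets, not the authors' exact network.

```python
# PilotNet-style end-to-end model: cropped camera frame in,
# [steering_angle, speed] out. The two-output head is an assumption.
import torch
import torch.nn as nn

class PilotNetStyle(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ELU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ELU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ELU(),  # 64x1x18 feature map for a 66x200 input
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 2),                       # [steering_angle, speed]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PilotNetStyle()
frame = torch.randn(1, 3, 66, 200)  # one normalised, pre-processed camera frame
steering, speed = model(frame)[0]
```

Training such a model against logged human driving (mean-squared error on steering and speed) is the usual end-to-end recipe; the abstract does not state the authors' loss or optimiser, so those details are left out here.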
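The second item's first two steps detect and track the UGV in aerial frames. Below is a minimal OpenCV detect-then-track sketch of that loop; detect_ugv and the video source name are hypothetical placeholders for the paper's detector and UAV feed, and the pixel offset from the image centre stands in for the full relative-pose estimate, which the abstract does not detail.

```python
# Detect-then-track loop for following a UGV in aerial video.
# detect_ugv() is a hypothetical stand-in for a trained detector.
# TrackerCSRT requires an opencv-contrib-python build.
import cv2

def detect_ugv(frame):
    """Hypothetical detector: return an (x, y, w, h) box, or None if not found."""
    return None  # placeholder: run a trained object detector here

cap = cv2.VideoCapture("uav_feed.mp4")  # assumed aerial video source
tracker = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if tracker is None:
        box = detect_ugv(frame)          # (re)detect until the UGV is found
        if box is not None:
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, box)
    else:
        ok, box = tracker.update(frame)  # follow the UGV frame to frame
        if not ok:
            tracker = None               # track lost: fall back to detection
            continue
        x, y, w, h = box
        cx, cy = x + w / 2, y + h / 2
        h_img, w_img = frame.shape[:2]
        # Pixel offset of the UGV from the image centre; converting this to
        # a metric relative position would additionally need the camera
        # intrinsics and UAV altitude, which the abstract does not specify.
        dx, dy = cx - w_img / 2, cy - h_img / 2
```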