Browsing by Author "Liu, Ruifan"
Now showing 1 - 7 of 7
Item (Open Access): Decentralized mission planning for multiple unmanned aerial vehicles (Cranfield University, 2023-02)
Authors: Liu, Ruifan; Shin, Hyo-Sang; Tsourdos, Antonios
Abstract: The focus of this thesis is the mission planning challenge for multiple unmanned aerial vehicles (UAVs), with a particular emphasis on their stable operation in a stochastic and dynamic environment. Mission planning is a crucial module in automated multi-UAV systems, enabling efficient resource allocation, conflict resolution, and reliable operation. However, the distributed nature of the system, together with physical and environmental constraints, makes it challenging to develop effective mission planning algorithms. The thesis begins with a review of the taxonomy, frameworks, and techniques of multiple-UAV mission planning. It then identifies four critical research challenges: scalability, efficiency, adaptability and robustness, and energy management and renewable strategies. In response to these challenges, four objectives are defined with the overall aim of developing a generic decentralized mission planning paradigm for multi-UAV systems. The thesis subsequently concentrates on accomplishing these objectives, with notable contributions in the development of 1) a decentralized task coordination algorithm, 2) an efficient route planner that accounts for recharging, and 3) an energy-aware planning framework.
This research first proposes a decentralized auction-based coordination strategy for task-constrained multi-agent stochastic planning problems. By casting the problem as task-constrained Markov decision processes (MDPs), the task dependency arising from exclusivity constraints is decoupled from the multi-agent MDP (M-MDP) formulation and then resolved with an auction-based coordination method. For multi-agent stochastic planning problems, the proposed technique addresses the trade-off between computational tractability and solution quality: it is guaranteed to converge, achieves at least 50% optimality under the assumption of a submodular reward function, and greatly reduces computational complexity compared to multi-agent MDPs. Deep Auction is then proposed as an approximate variant of the auction-based coordination method, in which two neural-network approximators are introduced to facilitate scaled-up implementations. Theoretical analysis shows that the two proposed algorithms are more robust and less computationally complex than the state of the art. Finally, a case study of drone delivery with time windows is implemented for validation, and simulation results confirm the theoretical benefits of the proposed methods.
Second, an efficient route planner for individual UAVs that accounts for recharging services is proposed. Despite extensive research on decision-making algorithms, existing models have limitations in accurately representing real-world scenarios in terms of UAVs' physical restrictions and stochastic operating environments. To address this, a drone delivery problem with recharging (DDP-R) is formulated, characterized by directional edges and stochastic edge costs affected by winds. To solve DDP-Rs, a novel edge-enhanced attention model (AM-E) is proposed and trained via the REINFORCE algorithm to map the optimal policy. AM-E consists of a series of edge-enhanced dot-product attention layers that capture the heterogeneous relationships between nodes in DDP-Rs by incorporating adjacent edge information. Simulation results show that the edge enhancement achieves better results with a simpler architecture and fewer trainable parameters than other deep learning models. Extensive simulations demonstrate that the proposed DRL method outperforms state-of-the-art heuristics in solving the DDP-R, especially at large problem sizes, in both no-wind and windy scenarios.
Finally, the route planning algorithm is integrated into an online energy inference framework, the Energy-aware Planning Framework (EaPF), with the aim of optimizing solution quality while accounting for possible time-window violations and battery depletion. The framework comprises a statistical energy predictive model, a risk assessment module, and a route optimizer, which respectively model energy costs, estimate risks, and optimize a risk-sensitive objective. Concretely, a Mixture Density Network (MDN) is established to predict the distribution of future energy consumption, taking wind conditions into account; the MDN is trained on historical data and continuously updated as new data are collected. A risk-sensitive criterion is then formed from the MDN energy model to assess the risk of task lateness and battery depletion. To minimize this objective, the EaPF incorporates the proposed AM-E planner with a Model-based Multi-Sampling (MBMS) route construction strategy, further improving solution quality and planning robustness. In the context of drone deliveries, simulations validate the effectiveness of the MDN energy model and the EaPF: integrating the EaPF achieves an average cost reduction of 25% compared to the stand-alone DRL planner, implying a lower energy cost, a higher task accomplishment rate, and a smaller risk of battery depletion.
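As an illustrative aside to the thesis entry above: the abstract gives no implementation detail, but the auction-based coordination it describes can be pictured as agents repeatedly bidding their marginal gains for tasks, with each task awarded to the highest bidder. The sketch below is a minimal, generic version of that idea; the function names, data structures, and toy reward model are hypothetical and not taken from the thesis.

```python
# Illustrative sketch of a greedy auction round for decentralized task
# coordination. Names, data structures, and the reward model are
# hypothetical and not taken from the thesis.
from typing import Callable, Dict, List, Set


def auction_allocate(
    agents: List[str],
    tasks: List[str],
    marginal_gain: Callable[[str, str, Set[str]], float],
) -> Dict[str, Set[str]]:
    """Greedily assign each task to the agent with the highest marginal gain."""
    bundles: Dict[str, Set[str]] = {a: set() for a in agents}
    unassigned = set(tasks)
    while unassigned:
        # Each agent bids its marginal gain for every remaining task.
        bids = [
            (marginal_gain(a, t, bundles[a]), a, t)
            for a in agents
            for t in unassigned
        ]
        best_gain, winner, task = max(bids)
        if best_gain <= 0.0:          # no agent benefits from taking more tasks
            break
        bundles[winner].add(task)     # exclusivity: each task is assigned once
        unassigned.remove(task)
    return bundles


# Toy usage: gains decay with bundle size, a crude stand-in for submodularity.
if __name__ == "__main__":
    base = {("uav1", "t1"): 5.0, ("uav1", "t2"): 3.0,
            ("uav2", "t1"): 4.0, ("uav2", "t2"): 4.5}
    gain = lambda a, t, bundle: base[(a, t)] / (1 + len(bundle))
    print(auction_allocate(["uav1", "uav2"], ["t1", "t2"], gain))
```

The decaying toy gains only mimic the submodular-reward setting under which the abstract states its at-least-50% optimality guarantee; they are not the thesis's reward model.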
Item (Open Access): Decentralized task allocation for multiple UAVs with task execution uncertainties (IEEE, 2020-10-06)
Authors: Liu, Ruifan; Seo, Min-Guk; Yan, Binbin; Tsourdos, Antonios
Abstract: This work builds on a robust decentralized task allocation algorithm to address the multiple unmanned aerial vehicle (UAV) surveillance problem under task duration uncertainties. Because the existing robust task allocation algorithm is computationally intensive and has no optimality guarantee, this paper proposes a new robust task assignment formulation that reduces the computation of robust scores and provides a theoretical guarantee of optimality. In the proposed method, a Markov model is introduced to describe the impact of uncertain parameters on task rewards, and the expected score function is reformulated as the utility function of the states in the Markov model. By providing high-precision expected marginal gains for tasks, the resulting task assignment attains a better cumulative score than state-of-the-art robust algorithms. In addition, the algorithm is proven to converge and achieves an a priori optimality guarantee of at least 50%. Numerical simulations demonstrate the performance improvement of the proposed method compared with the basic CBBA, a robust extension of CBBA, and a cost-benefit greedy algorithm.
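As a loose illustration of the idea in the entry above (scoring tasks through a model of uncertain execution rather than a nominal duration), the sketch below computes an expected, time-discounted task score when the task duration follows a simple absorbing Markov (geometric) model. The reward shape, names, and parameters are hypothetical and not drawn from the paper.

```python
# Illustrative sketch only: expected, time-discounted task score under a
# simple absorbing Markov model of task duration. The state is the number of
# elapsed steps; at each step the task finishes with probability p_finish.
# The reward shape and all names are assumptions, not taken from the paper.
def expected_task_score(
    base_reward: float,
    p_finish: float,
    decay: float = 0.95,
    horizon: int = 100,
) -> float:
    """Sum over completion times k of P(finish at step k) * decayed reward."""
    score = 0.0
    p_not_finished = 1.0
    for k in range(1, horizon + 1):
        p_finish_at_k = p_not_finished * p_finish     # geometric duration model
        score += p_finish_at_k * base_reward * decay ** k
        p_not_finished *= 1.0 - p_finish
    return score


if __name__ == "__main__":
    # Expected marginal gain of one task whose duration is uncertain.
    print(round(expected_task_score(base_reward=10.0, p_finish=0.3), 3))
```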
Item (Open Access): Delivery route planning for unmanned aerial system in presence of recharging stations (AIAA, 2022-06-20)
Authors: Liu, Ruifan; Shin, Hyosang; Seo, Miuguk; Tsourdos, Antonios
Abstract: Existing variants of vehicle routing problems (VRPs) are incapable of describing real-world drone delivery scenarios in terms of drone physical restrictions, mission constraints, and stochastic operating environments. To that end, this paper proposes a specific drone delivery problem with recharging (DDP-R) characterized by directional edges and stochastic edge costs subject to wind conditions. To address it, the DDP-R is cast as a Markov decision process (MDP) over a graph, with the next node chosen according to a stochastic policy based on the evolving observation. An edge-enhanced attention model (AM-E) is then suggested to map the optimal policy via a deep reinforcement learning (DRL) approach. AM-E comprises a succession of edge-enhanced dot-product attention layers designed to capture the heterogeneous node relationships in DDP-Rs by incorporating adjacent edge information. Simulations show that the edge enhancement facilitates the training process, achieving superior performance with fewer trainable parameters and a simpler architecture in comparison with other deep learning models. Furthermore, a stochastic drone energy cost model accounting for winds is incorporated into the validation simulations, which provides practical insight into drone delivery problems. For both no-wind and windy cases, extensive simulations demonstrate that the proposed DRL method outperforms state-of-the-art heuristics for solving DDP-Rs, especially at large sizes.
Item (Open Access): Distributed optimal deployment on a circle for cooperative encirclement of autonomous mobile multi-agents (IEEE, 2020-03-23)
Authors: Yan, Pengpeng; Fan, Yonghua; Liu, Ruifan; Wang, Mingang
Abstract: This paper addresses a distributed encirclement-point deployment scheme for a group of autonomous mobile agents. Each agent can measure its own azimuth relative to the common target and can communicate with at least its two adjacent neighbors. Given its spatially cooperative character, the encirclement-point deployment problem is formulated as a coverage control problem on a circle. The measurement range of the azimuth sensor is taken into account in the problem formulation, which brings the model closer to real-world applications. Fully distributed control protocols are then put forward based on geometric principles, and their convergence is rigorously proved with algebraic methods. The proposed control protocols steer the mobile agents to distribute evenly on the circle so that the coverage cost function is minimized, while the agents' spatial order on the circle is preserved throughout the system's evolution. A noteworthy feature of the proposed protocols is that only the azimuths of a mobile agent and its two adjacent neighbors are needed to calculate that agent's control input, so the protocols can be easily implemented in general. Moreover, an adjustable feedback gain is introduced, which can be employed to improve the convergence rate effectively. Finally, numerical simulations are carried out to verify the effectiveness of the proposed distributed control protocols.
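For intuition about the even-spacing behaviour described in the entry above, here is a generic neighbour-only update rule on a circle; it is not the control protocol derived in the paper, and the gain k, names, and toy data are assumptions. Each agent nudges its azimuth toward the point that balances the angular gaps to its two adjacent neighbours.

```python
# Illustrative sketch of a neighbour-only even-spacing rule on a circle.
# Each agent uses only its own azimuth and its two adjacent neighbours'
# azimuths; k plays the role of an adjustable feedback gain. This is a
# generic rule for intuition, not the exact protocol from the paper.
import math


def step(azimuths, k=0.5):
    """One synchronous update; azimuths in radians, listed in cyclic order."""
    n = len(azimuths)
    new = []
    for i, theta in enumerate(azimuths):
        left = azimuths[(i - 1) % n]
        right = azimuths[(i + 1) % n]
        # Angular gaps to the two neighbours, wrapped into [0, 2*pi).
        gap_left = (theta - left) % (2 * math.pi)
        gap_right = (right - theta) % (2 * math.pi)
        # Move toward the point that equalises the two gaps; for this k the
        # move stays inside the two gaps, so the cyclic order is preserved.
        new.append((theta + 0.5 * k * (gap_right - gap_left)) % (2 * math.pi))
    return new


if __name__ == "__main__":
    agents = [0.1, 0.3, 0.5, 4.0]            # unevenly spread azimuths
    for _ in range(200):
        agents = step(agents)
    print([round(a, 3) for a in agents])      # gaps approach pi/2 (even spacing)
```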
Item (Open Access): Distributed target-encirclement guidance law for cooperative attack of multiple missiles (SAGE, 2020-06-15)
Authors: Yan, Pengpeng; Fan, Yonghua; Liu, Ruifan; Wang, Mingang
Abstract: The target-encirclement guidance problem for a many-to-one missile-target engagement scenario is studied, in which the missiles distribute evenly on a target-centered circle during homing guidance. The proposed distributed target-encirclement guidance law achieves simultaneous attack by multiple missiles from different line-of-sight directions. First, decentralized protocols for the desired line-of-sight angles are constructed based on the information of neighboring missiles. Second, a biased proportional navigation guidance law that can arbitrarily designate the impact angle is adopted from the literature; by combining it with a dynamic virtual-target strategy, the missiles can achieve an all-aspect attack on the target in an encirclement manner. Third, a consensus protocol for simultaneous attack is designed, which guarantees that all missiles' time-to-go estimates reach consensus asymptotically, and the convergence of the closed-loop system is rigorously proved via Lyapunov stability theory. Finally, numerical simulation results demonstrate the performance and feasibility of the proposed distributed target-encirclement guidance law in different engagement situations.
Item (Open Access): Edge-enhanced attentions for drone delivery in presence of winds and recharging stations (AIAA, 2023-01-31)
Authors: Liu, Ruifan; Shin, Hyosang; Tsourdos, Antonios
Abstract: Existing variants of vehicle routing problems have limited capability to describe real-world drone delivery scenarios in terms of drone physical restrictions, mission constraints, and stochastic operating environments. To that end, this paper proposes a specific drone delivery problem with recharging (DDP-R) characterized by directional edges and stochastic edge costs subject to wind conditions. To address it, the DDP-R is cast as a Markov decision process over a graph, with the next node chosen according to a stochastic policy based on the evolving observation. An edge-enhanced attention model (AM-E) is then suggested to map the optimal policy via the deep reinforcement learning (DRL) approach. The AM-E comprises a succession of edge-enhanced dot-product attention layers and is designed to capture the heterogeneous node relationships in DDP-Rs by incorporating adjacent edge information. Simulations show that the edge enhancement facilitates the training process, achieving superior performance with fewer trainable parameters and a simpler architecture in comparison with other deep learning models. Furthermore, a stochastic drone energy cost model that accounts for winds is incorporated into the validation simulations, providing practical insight into drone delivery problems. For both no-wind and windy cases, extensive simulations demonstrate that the proposed DRL method outperforms state-of-the-art heuristics for solving DDP-Rs, especially at large sizes.
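To make the phrase "edge-enhanced dot-product attention" in the entry above concrete, the sketch below shows one common way per-edge features (for example, wind-adjusted travel costs between nodes) can bias the attention logits of a single-head layer. The layer sizes and the exact way edges enter the logits are assumptions for illustration only, not the AM-E architecture from the paper.

```python
# Illustrative sketch of edge-enhanced dot-product attention: a standard
# single-head attention whose logits are biased by a learned projection of
# pairwise edge features. Sizes and the edge-injection scheme are assumptions.
import torch
import torch.nn as nn


class EdgeEnhancedAttention(nn.Module):
    def __init__(self, d_model: int, d_edge: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.edge_bias = nn.Linear(d_edge, 1)   # edge features -> scalar logit bias
        self.scale = d_model ** -0.5

    def forward(self, nodes: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, n, d_model); edges: (batch, n, n, d_edge)
        q, k, v = self.q(nodes), self.k(nodes), self.v(nodes)
        logits = torch.einsum("bid,bjd->bij", q, k) * self.scale
        logits = logits + self.edge_bias(edges).squeeze(-1)   # inject edge info
        attn = torch.softmax(logits, dim=-1)
        return attn @ v                          # (batch, n, d_model)


if __name__ == "__main__":
    layer = EdgeEnhancedAttention(d_model=16, d_edge=2)
    nodes = torch.randn(1, 5, 16)                # 5 delivery/recharging nodes
    edges = torch.randn(1, 5, 5, 2)              # pairwise edge features
    print(layer(nodes, edges).shape)             # torch.Size([1, 5, 16])
```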
Item (Open Access): Mission planning for a multiple-UAV patrol system in an obstructed airport environment (IEEE, 2023-11-10)
Authors: Liu, Ruifan; Shin, Hyo-sang; Tsourdos, Antonios
Abstract: This paper investigates the use of multiple unmanned aerial vehicles (UAVs) to carry out routine patrols of an airport to enhance its perimeter security. It focuses specifically on mission planning for the system, enabling efficient patrolling while accounting for local buildings and restricted airspace. The proposed methodology comprises three parts: 1) a vision-based set cover algorithm to construct the patrolling network, 2) an obstructed partitioning-based clustering algorithm for recharging station placement, and 3) a mixed-integer quadratic programming (MIQP) algorithm to plan UAV routes that minimize the maximum idle time across all patrol waypoints. The main contribution of this work is a comprehensive mission planning solution for UAVs persistently patrolling a complex environment characterized by blocked vision and restricted airspace. The proposed methodology is evaluated through intensive simulations in the context of the Cranfield Airport scenario.
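As a small illustration of the objective named in the entry above, the sketch below computes the maximum idle time over waypoints when each UAV repeatedly flies its own closed patrol loop, i.e. the quantity a min-max route planner would drive down. It is not the paper's MIQP formulation; the names, the assumption that each waypoint is visited once per loop by a single UAV, and the toy data are hypothetical.

```python
# Illustrative sketch only: the min-max patrol objective, not the MIQP itself.
# With each waypoint visited once per loop by one UAV, a waypoint's idle time
# equals that UAV's loop period, so the worst idle time is the longest period.
from typing import Dict, List, Tuple


def max_idle_time(
    routes: List[List[str]],
    travel_time: Dict[Tuple[str, str], float],
) -> float:
    """Worst-case revisit interval when each UAV loops over its route."""
    worst = 0.0
    for route in routes:
        # Loop period = total travel time around the closed patrol cycle.
        period = sum(
            travel_time[(route[i], route[(i + 1) % len(route)])]
            for i in range(len(route))
        )
        worst = max(worst, period)   # every waypoint on this loop waits one period
    return worst


if __name__ == "__main__":
    times = {("A", "B"): 4.0, ("B", "C"): 5.0, ("C", "A"): 6.0,
             ("D", "E"): 3.0, ("E", "D"): 3.0}
    print(max_idle_time([["A", "B", "C"], ["D", "E"]], times))   # 15.0
```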