CERES
Library Services

Browsing by Author "Wang, Jiang"

Now showing 1 - 2 of 2
  • Item (Open Access)
    Integral global sliding mode guidance for impact angle control
    (IEEE, 2018-10-18) He, Shaoming; Lin, Defu; Wang, Jiang
    This Correspondence proposes a new guidance law based on the integral sliding mode control (ISMC) technique for maneuvering-target interception with an impact angle constraint. A time-varying-function-weighted line-of-sight (LOS) error dynamics, representing the nominal guidance performance, is first introduced. The proposed guidance law is derived by utilizing ISMC to track the desired error dynamics, and its convergence is established via Lyapunov stability analysis. Simulations with extensive comparisons demonstrate the effectiveness of the proposed approach.
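The core idea in this abstract can be illustrated with a minimal sketch: an integral sliding surface s = e + k·∫e dt over a scalar tracking error, driven by a switching (reaching) law. This is a generic ISMC toy example on first-order error dynamics, not the paper's guidance law; all gains and the plant model are illustrative assumptions.

```python
import math

def simulate_ismc(e0=1.0, k=2.0, eta=3.0, dt=1e-3, steps=5000):
    """Drive a scalar tracking error e toward zero using an integral
    sliding surface s = e + k * integral(e) and the switching control
    u = -eta * sign(s), applied to the toy plant de/dt = u."""
    e, integral = e0, 0.0
    for _ in range(steps):
        integral += e * dt
        s = e + k * integral                # integral sliding surface
        u = -eta * math.copysign(1.0, s)    # switching (reaching) law
        e += u * dt                         # first-order error dynamics
    return e
```

Once the trajectory reaches the surface s = 0, the error obeys de/dt ≈ -k·e and decays exponentially, which is the mechanism Lyapunov arguments formalize in the paper; after 5 s of simulated time the residual error here is dominated by discretization chatter.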
  • Item (Open Access)
    Learning prediction-correction guidance for impact time control
    (Elsevier, 2021-10-28) Liu, Zichao; Wang, Jiang; He, Shaoming; Shin, Hyosang; Tsourdos, Antonios
    This paper investigates the impact-time-control problem and proposes a learning-based computational guidance algorithm to solve it. The algorithm follows a general prediction-correction concept: the exact time-to-go under proportional navigation guidance with realistic aerodynamic characteristics is estimated by a deep neural network, and a biased command that nullifies the impact time error is developed by utilizing emerging reinforcement learning techniques. To deal with insufficient training data, a transfer-ensemble learning approach is proposed to train the deep neural network. The deep neural network is embedded in the reinforcement learning block to resolve the sparse-reward issue observed in typical reinforcement learning formulations. Extensive numerical simulations are conducted to support the proposed algorithm.
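The prediction-correction loop described in this abstract can be sketched with stand-in components: a straight-line time-to-go estimate in place of the paper's deep network, and a proportional correction in place of the learned bias. The function names, signatures, and gain are hypothetical, chosen only to show the loop's structure.

```python
def time_to_go(range_m, closing_speed):
    """Stand-in predictor: straight-line time-to-go. The paper's deep
    neural network would replace this with a learned, aerodynamics-aware
    estimate under proportional navigation."""
    return range_m / closing_speed

def biased_command(a_pn, t_elapsed, range_m, closing_speed,
                   t_impact_desired, k_bias=0.5):
    """Prediction-correction step: predict the impact time, then add a
    corrective bias (proportional here; learned via RL in the paper) to
    the baseline proportional-navigation command a_pn."""
    t_go_hat = time_to_go(range_m, closing_speed)          # prediction
    impact_time_error = t_impact_desired - (t_elapsed + t_go_hat)
    return a_pn + k_bias * impact_time_error               # correction
```

With zero impact-time error the bias vanishes and the command reduces to pure proportional navigation; a positive error (impact predicted too early) adds a bias that lengthens the trajectory.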


Cranfield Campus
Cranfield, MK43 0AL
United Kingdom
T: +44 (0) 1234 750111

Cranfield University at Shrivenham
Shrivenham, SN6 8LA
United Kingdom

Email: researchsupport@cranfield.ac.uk (REF compliance and Open Access queries)

Cranfield University copyright © 2002-2025