Reinforcement learning optimized look-ahead energy management of a parallel hybrid electric vehicle

Date

2017-08-14

Journal Title

IEEE/ASME Transactions on Mechatronics

Publisher

IEEE

Type

Article

ISSN

1083-4435

Citation

Teng Liu, Xiaosong Hu, Shengbo Eben Li, and Dongpu Cao. Reinforcement learning optimized look-ahead energy management of a parallel hybrid electric vehicle. IEEE/ASME Transactions on Mechatronics, vol. 22, no. 4, pp. 1497-1507, August 2017.

Abstract

This paper presents a predictive energy management strategy for a parallel hybrid electric vehicle (HEV) based on velocity prediction and reinforcement learning (RL). The design procedure starts with establishing a control-oriented model of the parallel HEV and defining a cost function. Fuzzy encoding and nearest-neighbor approaches are proposed for velocity prediction, and a finite-state Markov chain is exploited to learn the transition probabilities of power demand. To determine the optimal control behaviors and the power distribution between the two energy sources, a novel RL-based energy management strategy is introduced. For comparison, the two velocity prediction methods are examined within the RL framework on the same realistic driving cycle. The look-ahead energy management strategy is contrasted with shortsighted and dynamic programming-based counterparts, and further validated by a hardware-in-the-loop test. The results demonstrate that the RL-optimized control is able to significantly reduce fuel consumption and computational time.
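
As a rough illustration of the two techniques named in the abstract, the Python sketch below (not taken from the paper) estimates a finite-state Markov chain transition matrix from a discretized power-demand sequence and applies a generic tabular Q-learning update for the engine/motor power split. The state and action discretizations, the cost signal, and all function names are illustrative assumptions, not the authors' implementation.

import numpy as np

# Finite-state Markov chain: estimate transition probabilities of power demand
# from an observed, discretized demand sequence. Discretizing demand into
# n_states bins is an assumed preprocessing step, not the paper's exact setup.
def estimate_transition_matrix(demand_states, n_states):
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(demand_states[:-1], demand_states[1:]):
        counts[s, s_next] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0      # avoid division by zero for unvisited states
    return counts / row_sums           # row-stochastic transition matrix

# Generic tabular Q-learning update for the power split between engine and motor.
# 'state' could encode power demand and battery state of charge, 'action' a
# discretized power-split ratio, and 'cost' the instantaneous fuel use plus
# penalty terms (all hypothetical choices for illustration).
def q_update(Q, state, action, cost, next_state, alpha=0.1, gamma=0.95):
    td_target = cost + gamma * Q[next_state].min()    # minimize cumulative cost
    Q[state, action] += alpha * (td_target - Q[state, action])

# Example usage with stand-in data (10 demand states, 5 power-split actions).
rng = np.random.default_rng(0)
demand_states = rng.integers(0, 10, size=1000)   # placeholder for a real driving cycle
P = estimate_transition_matrix(demand_states, 10)
Q = np.zeros((10, 5))
q_update(Q, state=3, action=1, cost=0.8, next_state=4)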

Keywords

Energy management, hybrid electric vehicle (HEV), Markov chain (MC), predictive control, reinforcement learning (RL)

Rights

Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
