Reinforcement learning optimized look-ahead energy management of a parallel hybrid electric vehicle

Date published

2017-08-14

Journal Title

IEEE/ASME Transactions on Mechatronics

Publisher

IEEE

Type

Article

ISSN

1083-4435

Citation

Teng Liu, Xiaosong Hu, Shengbo Eben Li, and Dongpu Cao. Reinforcement learning optimized look-ahead energy management of a parallel hybrid electric vehicle. IEEE/ASME Transactions on Mechatronics, Volume 22, Issue 4, August 2017, pp. 1497-1507.

Abstract

This paper presents a predictive energy management strategy for a parallel hybrid electric vehicle (HEV) based on velocity prediction and reinforcement learning (RL). The design procedure starts with modeling the parallel HEV as a systematic control-oriented model and defining a cost function. Fuzzy encoding and nearest-neighbor approaches are proposed to achieve velocity prediction, and a finite-state Markov chain is exploited to learn the transition probabilities of power demand. To determine the optimal control behaviors and the power distribution between the two energy sources, a novel RL-based energy management strategy is introduced. For comparison purposes, the two velocity prediction processes are examined by RL using the same realistic driving cycle. The look-ahead energy management strategy is contrasted with shortsighted and dynamic-programming-based counterparts, and further validated by a hardware-in-the-loop test. The results demonstrate that the RL-optimized control is able to significantly reduce fuel consumption and computational time.
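Two ingredients named in the abstract, learning a finite-state Markov chain of power demand and an RL-based (Q-learning style) power-split policy, can be illustrated with the short Python sketch below. This is not the authors' implementation: the state discretization, the five candidate engine-power fractions, the synthetic driving cycle, and the toy fuel/battery cost are illustrative assumptions; only the transition-counting estimator and the tabular Q-learning update mirror the techniques the abstract names.

import numpy as np

def learn_transition_matrix(power_demand, n_states=10):
    """Discretize a power-demand trace into n_states bins and estimate a
    row-stochastic transition probability matrix by counting transitions."""
    bins = np.linspace(power_demand.min(), power_demand.max(), n_states + 1)
    states = np.clip(np.digitize(power_demand, bins) - 1, 0, n_states - 1)
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    # Unvisited demand states fall back to a uniform distribution.
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, 1.0 / n_states),
                     where=row_sums > 0)

def q_learning_step(Q, s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic power-demand trace standing in for a realistic driving cycle.
    demand = np.abs(np.cumsum(rng.normal(size=1000)))
    P = learn_transition_matrix(demand, n_states=10)

    n_states, n_actions = 10, 5  # five candidate engine-power fractions (assumption)
    Q = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(5000):
        # Epsilon-greedy choice over the candidate power splits.
        a = int(Q[s].argmax()) if rng.random() > 0.1 else int(rng.integers(n_actions))
        s_next = int(rng.choice(n_states, p=P[s]))  # next demand state from the learned chain
        engine_frac = a / (n_actions - 1)
        p_dem = s_next + 1                          # demand level proxy
        fuel = engine_frac * p_dem                  # toy engine fuel-rate surrogate
        battery = ((1 - engine_frac) * p_dem) ** 2  # toy battery-stress penalty
        Q = q_learning_step(Q, s, a, -(fuel + 0.2 * battery), s_next)
        s = s_next
    print("Greedy engine-power fraction index per demand state:", Q.argmax(axis=1))

Fitting the transition matrix first lets the policy be trained against the learned chain rather than by exhaustive backward recursion over the whole cycle, which is consistent with the abstract's claim of reduced computational time relative to dynamic programming.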

Keywords

Energy management, hybrid electric vehicle (HEV), Markov chain (MC), predictive control, reinforcement learning (RL)

Rights

Attribution-NonCommercial 4.0 International
