A complementary learning approach for expertise transference of human-optimized controllers

Date published

2021-10-21

Journal Title

Neural Networks

Publisher

Elsevier

Type

Article

ISSN

0893-6080

Citation

Perrusquia A. (2022) A complementary learning approach for expertise transference of human-optimized controllers. Neural Networks, Volume 145, January 2022, pp. 33-41

Abstract

In this paper, a complementary learning scheme for experience transference in unknown continuous-time linear systems is proposed. The algorithm is inspired by the complementary learning properties exhibited by the hippocampus and neocortex learning systems via the striatum. The hippocampus is modelled as pattern-separated data of a human-optimized controller. The neocortex is modelled as a Q-reinforcement learning algorithm which improves the hippocampus control policy. The complementary learning (striatum) is designed as an inverse reinforcement learning algorithm which relates the hippocampus and neocortex learning models to seek and transfer the weights of the hidden expert's utility function. Convergence of the proposed approach is analysed using Lyapunov recursions. Simulations are given to verify the proposed approach.

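For intuition only, the sketch below is a minimal Python analogue of the Q-learning ingredient described in the abstract: a batch least-squares (LSPI-style) policy iteration that improves an initial expert-like gain from rollout data, standing in for the neocortex refining the hippocampus policy. It is not the paper's algorithm: the paper works in continuous time and recovers the expert's hidden utility weights through inverse reinforcement learning (the striatum stage), whereas here a discrete-time stand-in system is used and the system matrices A and B, the cost weights Qc and Rc, and the initial gain K are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Discrete-time linear system (unknown to the learner; used only to simulate data).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
n, m = 2, 1

# Stage-cost weights. In the paper these are the expert's hidden utility weights,
# recovered by the inverse-RL (striatum) stage; here they are simply assumed given.
Qc = np.diag([1.0, 0.5])
Rc = np.array([[0.1]])

def stage_cost(x, u):
    return float(x @ Qc @ x + u @ Rc @ u)

def features(x, u):
    # phi(x, u) such that phi(x, u) @ theta == z' H z for symmetric H, z = [x; u].
    z = np.concatenate([x, u])
    i, j = np.triu_indices(n + m)
    scale = np.where(i == j, 1.0, 2.0)          # off-diagonal terms appear twice
    return z[i] * z[j] * scale

def theta_to_H(theta):
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    return H + np.triu(H, 1).T                  # rebuild the symmetric Q-function kernel

# "Hippocampus": an initial stabilizing gain standing in for the human-tuned
# controller; its (noisy) rollouts provide the demonstration data.
K = np.array([[0.5, 1.0]])

def collect_data(K, episodes=20, horizon=60, expl=0.5):
    data = []
    for _ in range(episodes):
        x = rng.uniform(-2.0, 2.0, size=n)
        for _ in range(horizon):
            u = -K @ x + expl * rng.standard_normal(m)   # exploration noise
            x_next = A @ x + B @ u
            data.append((x, u, x_next))
            x = x_next
    return data

# "Neocortex": batch least-squares Q-learning (policy iteration) that improves K.
for it in range(8):
    data = collect_data(K)
    Phi, cost = [], []
    for x, u, x_next in data:
        u_next = -K @ x_next                     # action the current policy would take
        Phi.append(features(x, u) - features(x_next, u_next))
        cost.append(stage_cost(x, u))
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(cost), rcond=None)
    H = theta_to_H(theta)
    K = np.linalg.solve(H[n:, n:], H[n:, :n])    # greedy improvement: u = -Huu^{-1} Hux x
    print(f"iteration {it}: K = {K.ravel()}")

With these assumed values the printed gain should settle within a few iterations at the optimal LQR gain for Qc and Rc, which is the sense in which the learning stage "improves" the demonstrated controller.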

Keywords

Complementary learning, Hippocampus and neocortex learning systems, Q-learning, Inverse reinforcement learning, Batch least squares, Gradient-descent rule

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International
