Title:
Dual REPS: a generalization of relative entropy policy search exploiting bad experiences
Author:
Colomé Figueras, Adrià; Torras, Carme
Other authors:
Institut de Robòtica i Informàtica Industrial; Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
Copyright notice:
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract:
Policy search (PS) algorithms are widely used for their simplicity and effectiveness in finding solutions for robotic problems. However, most current PS algorithms derive policies by statistically fitting the data from the best experiments only. This means that experiments yielding poor performance are usually discarded or given too little influence on the policy update. In this paper, we propose a generalization of the relative entropy policy search (REPS) algorithm that takes bad experiences into consideration when computing a policy. The proposed approach, named dual REPS (DREPS) following the philosophical interpretation of the duality between good and bad, finds clusters of experimental data yielding poor behavior and adds them to the optimization problem as a repulsive constraint. Thus, considering that there is a duality between good and bad data samples, both are taken into account in the stochastic search for a policy. Additionally, a cluster with the best samples may be included as an attractor to enforce faster convergence to a single optimal solution in multimodal problems. We first tested our proposed approach in a simulated reinforcement learning setting and found that DREPS considerably speeds up the learning process, especially during the early optimization steps and in cases where other approaches get trapped between several alternative maxima. Further experiments, in which a real robot had to learn a task with a multimodal reward function, confirm the advantages of our proposed approach with respect to REPS.
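
The abstract only sketches the idea; the paper itself formulates the bad-sample clusters as repulsive constraints inside the REPS optimization. As a rough, illustrative sketch of the flavor (not the authors' formulation), the Python code below computes standard episodic REPS weights via the REPS dual (epsilon being the usual KL bound) and then heuristically down-weights samples near the centroid of the worst-performing ones before refitting a Gaussian search distribution. The function names, the bad_frac and repulsion parameters, and the centroid-based "bad cluster" are assumptions introduced here for illustration only.

import numpy as np
from scipy.optimize import minimize


def reps_weights(returns, epsilon=0.5):
    # Solve the standard episodic REPS dual for the temperature eta and
    # return normalized sample weights w_i proportional to exp(R_i / eta).
    R = returns - returns.max()  # shift rewards for numerical stability

    def dual(log_eta):
        eta = np.exp(log_eta[0])  # optimize over log(eta) so that eta > 0
        return eta * epsilon + eta * np.log(np.mean(np.exp(R / eta)))

    res = minimize(dual, x0=[0.0], method="Nelder-Mead")
    eta = np.exp(res.x[0])
    w = np.exp(R / eta)
    return w / w.sum()


def dreps_like_update(thetas, returns, epsilon=0.5, bad_frac=0.2, repulsion=0.5):
    # REPS-style weighted Gaussian refit, heuristically pushed away from the
    # region occupied by the worst-performing samples (illustrative stand-in
    # for the paper's cluster-based repulsive constraints).
    w = reps_weights(returns, epsilon)

    # Illustrative "bad cluster": centroid of the worst bad_frac of the samples.
    n_bad = max(1, int(bad_frac * len(returns)))
    bad_center = thetas[np.argsort(returns)[:n_bad]].mean(axis=0)

    # Soft repulsion: down-weight samples close to the bad centroid.
    d2 = np.sum((thetas - bad_center) ** 2, axis=1)
    w = w * (1.0 - repulsion * np.exp(-d2 / (d2.mean() + 1e-12)))
    w = np.clip(w, 1e-12, None)
    w /= w.sum()

    # Weighted maximum-likelihood fit of the new Gaussian search distribution.
    mu = w @ thetas
    diff = thetas - mu
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(thetas.shape[1])
    return mu, cov


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thetas = rng.normal(size=(200, 3))              # sampled policy parameters
    returns = -np.sum((thetas - 1.0) ** 2, axis=1)  # toy quadratic reward peaked at 1
    mu, cov = dreps_like_update(thetas, returns)
    print("new search-distribution mean:", mu)

In the paper, repulsion and attraction enter as additional constraints of the policy-search optimization rather than as an ad hoc reweighting; the sketch above is only meant to convey how low-reward samples can influence the update instead of being discarded.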
Peer review status:
Peer Reviewed
Subject(s):
- Àrees temàtiques de la UPC::Informàtica::Robòtica
- Low-reward data reuse
- policy search (PS)
- reinforcement learning (RL)
- relative entropy policy search (REPS)
- Classificació INSPEC::Cybernetics::Artificial intelligence::Learning (artificial intelligence)
Rights:
Attribution-NonCommercial-NoDerivs 3.0 Spain
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
Document type:
Article - Submitted version
Published by:
Institute of Electrical and Electronics Engineers (IEEE)