
Model-based contextual policy search for data-efficient generalization of robot skills

File: aij2015.pdf (Accepted version, 5.21 MB, Adobe PDF)
Title: Model-based contextual policy search for data-efficient generalization of robot skills
Authors: Kupcsik, A
Deisenroth, MP
Peters, J
Loh, AP
Vadakkepat, P
Neumann, G
Item Type: Journal Article
Abstract: In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learning such upper-level policies is policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the number of robot experiments; however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies, as they rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that is able to generalize lower-level controllers and is data-efficient. Our approach is based on learned probabilistic forward models and information-theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high-quality policies.
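The abstract describes a hierarchical setup: a lower-level controller, parametrized by w, executes the movement, while an upper-level policy maps the observed context s to controller parameters w. The following is a minimal sketch of that outer learning loop, not the authors' code: it assumes a linear-Gaussian upper-level policy and an episodic REPS-style exponential reweighting with a fixed temperature eta (the paper instead obtains the temperature from a dual optimization), and it omits the learned probabilistic forward model that gives the method its data efficiency. The toy reward, dimensions, and constants are illustrative assumptions only.

# Minimal sketch of hierarchical contextual policy search (illustrative, not the authors' code).
# Upper-level policy: pi(w | s) = N(w | a + A s, Sigma) over lower-level controller parameters w.
import numpy as np

rng = np.random.default_rng(0)
dim_s, dim_w = 2, 3              # context and controller-parameter dimensions (assumed)

# Upper-level policy parameters: mean offset a, context gain A, covariance Sigma.
a = np.zeros(dim_w)
A = np.zeros((dim_w, dim_s))
Sigma = np.eye(dim_w)

def reward(s, w):
    # Stand-in reward: the "good" controller parameters depend on the context.
    target = np.array([1.0, -0.5, 0.3]) + 0.8 * s.mean()
    return -np.sum((w - target) ** 2)

eta = 1.0                        # fixed temperature (hypothetical choice; REPS solves a dual for this)
for it in range(50):
    # Sample contexts, draw controller parameters from the upper-level policy, and
    # evaluate each episode (on the real system, or on a learned forward model).
    S = rng.uniform(-1.0, 1.0, size=(100, dim_s))
    W = np.array([rng.multivariate_normal(a + A @ s, Sigma) for s in S])
    R = np.array([reward(s, w) for s, w in zip(S, W)])

    # Exponential reweighting of episodes (softmax of returns).
    d = np.exp((R - R.max()) / eta)
    d /= d.sum()

    # Weighted maximum-likelihood fit of the new linear-Gaussian upper-level policy.
    X = np.hstack([np.ones((len(S), 1)), S])          # features [1, s]
    D = np.diag(d)
    theta = np.linalg.solve(X.T @ D @ X + 1e-6 * np.eye(dim_s + 1), X.T @ D @ W)
    a, A = theta[0], theta[1:].T
    resid = W - X @ theta
    Sigma = resid.T @ D @ resid + 1e-6 * np.eye(dim_w)

print("learned context gain A:\n", A)

Running the sketch, the context gain A converges toward the linear context dependence baked into the toy reward; in the paper's setting the same weighted update is driven by returns predicted from learned probabilistic forward models rather than by direct robot rollouts.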
Issue Date: 1-Jan-2014
URI: http://hdl.handle.net/10044/1/19434
DOI: http://dx.doi.org/10.1016/j.artint.2014.11.005
ISSN: 0004-3702
Journal / Book Title: Artificial Intelligence
Copyright Statement: © 2014 Elsevier Ltd. All rights reserved. NOTICE: this is the author’s version of a work that was accepted for publication in Artificial Intelligence. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in ARTIFICIAL INTELLIGENCE, (2014) DOI: 10.1016/j.artint.2014.11.005
Appears in Collections:Computing



