DeepMind papers at ICML 2017 (part two)

DARLA: Improving Zero-Shot Transfer in Reinforcement Learning

Authors: Irina Higgins*, Arka Pal*, Andrei Rusu, Loic Matthey, Chris Burgess, Alexander Pritzel, Matt Botvinick, Charles Blundell, Alexander Lerchner

Modern deep reinforcement learning agents rely on large quantities of data to learn how to act. In some scenarios, such as robotics, obtaining a lot of training data may be infeasible. Hence such agents are often trained on a related task where data is easy to obtain (e.g. simulation) with the hope that the learnt knowledge will generalise to the task of interest (e.g. reality). We propose DARLA, a DisentAngled Representation Learning Agent, that exploits its interpretable and structured vision to learn how to act in a way that is robust to various novel changes in its environment – including a simulation to reality transfer scenario in robotics. We show that DARLA significantly outperforms all baselines, and that its performance is crucially dependent on the quality of its vision.
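The abstract does not spell out how the disentangled vision is learnt, but the published DARLA paper builds it with a β-VAE, whose objective weights the KL term by a factor β > 1 to pressure the latent posterior toward independent factors. As a minimal, hedged sketch (the function name, squared-error reconstruction term, and diagonal-Gaussian posterior are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Illustrative per-example beta-VAE objective: reconstruction error
    plus a beta-weighted KL divergence from the diagonal-Gaussian
    posterior q(z|x) = N(mu, diag(exp(log_var))) to the unit-Gaussian
    prior N(0, I). beta > 1 encourages disentangled latent factors."""
    recon = np.sum((x - x_recon) ** 2)  # assumed squared-error reconstruction
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl
```

A perfect reconstruction with a posterior matching the prior scores zero; any deviation of the posterior from N(0, I) is penalised β times more heavily than in a standard VAE, which is the pressure towards disentanglement that the agent's policy is then trained on top of.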

For further details and related work, please see the paper.

Check it out at ICML:

Monday 07 August, 16:42-17:00 @ C4.5 (Talk)

Monday 07 August, 18:30-22:00 @ Gallery #123 (Poster)


Source: https://deepmind.com/blog/article/deepmind-papers-icml-2017-part-two
