Model-based Reinforcement Learning with Structured Representation
Recent advances in deep reinforcement learning (RL) have enabled remarkable breakthroughs on a wide variety of problems in which computer systems learn through interaction with their environment with little or no human intervention. A prominent example is DeepMind's AlphaGo agent, which taught itself to play Go at a superhuman level. Deep RL has also been applied to robotics, where robotic arms manipulate and transport objects to desired final states, guided by raw images captured by onboard cameras. However, most existing research focuses on interaction with a limited set of rigid objects, whereas robotic systems deployed in the real world must manipulate a wide range of objects with different material and inertial properties. To close this gap between theory and practice, we propose a reinforcement learning model that infers the material and dynamical properties of objects and uses that information to facilitate decision making. We hope that such an architecture will enable the agent to robustly manipulate a wide variety of materials and objects.
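One way such an architecture could be organized is with a property encoder that maps raw observations to a latent vector of inferred material/dynamical properties, which the policy then conditions on alongside the observation itself. The sketch below is purely illustrative: the network sizes, the `encoder`/`policy` split, and the use of randomly initialized MLPs (standing in for trained networks) are all assumptions, not the proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Weights for a simple MLP; random init stands in for a trained network."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

OBS_DIM, PROP_DIM, ACT_DIM = 16, 4, 3  # hypothetical dimensions

# Property encoder: maps a raw observation to a latent vector intended to
# capture material/dynamical properties (e.g. mass, friction, stiffness).
encoder = mlp([OBS_DIM, 32, PROP_DIM])

# Policy conditions on both the observation and the inferred properties.
policy = mlp([OBS_DIM + PROP_DIM, 32, ACT_DIM])

def act(obs):
    props = forward(encoder, obs)  # inferred object properties
    return forward(policy, np.concatenate([obs, props]))

obs = rng.normal(size=OBS_DIM)
action = act(obs)
print(action.shape)  # (3,)
```

In this factorization, the encoder can in principle be trained with an auxiliary objective (e.g. predicting object dynamics) while the policy is trained with RL, so that property inference and decision making are decoupled but share one forward pass at action time.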