Curiosity-Driven Learning for Physically Grounded Autonomous Agents
Abstract
The human ability to solve complex manipulation tasks rests on a flexible, generalizable understanding of intuitive physics, learned largely through curiosity-driven self-play during infancy. We aim to replicate such interactive learning in artificial agents so that they achieve the same flexibility and generalizability when solving complex manipulation tasks. To that end, we introduce a general framework for learning intuitive physics through curiosity-driven self-play. Within this framework, we demonstrate that object-centric representations greatly improve intuitive physics predictions and support stochastic predictions of complex physical scenes that model uncertainty, and we then show that object-centric physics prediction models can be trained within the presented curiosity-driven framework. Finally, we apply our findings to drive the exploration of robotic systems, advancing the generalizability of manipulation policies for complex pick-and-place tasks, and to measure and model human intuitive physics on a wide variety of visual stimuli depicting complex physical scenarios.