Can we extrapolate? Deep learning for equation identification

Georg Martius
Max Planck Institute

Thursday 21 February 2019
15:15 - 16:15
Room 3.36
Earl Mountbatten Building


In classical machine learning, regression is treated as a black-box process of identifying a suitable function, without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal, as it allows us to understand and generalize results. I will present a novel type of function-learning network, called the equation learner (EQL), that can learn analytical expressions from data. In addition to interpolating, it is also able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient-based training. Thanks to sparsity regularization, concise, interpretable expressions can be obtained; often the true underlying source expression is identified. The system can be used to learn and control robotic systems with high data efficiency.
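To make the idea concrete, here is a minimal sketch of one EQL-style layer, based only on the description above: a linear projection followed by a fixed bank of analytic base functions (here identity, sine, cosine, and a pairwise product unit), with an L1 penalty that encourages sparse, hence interpretable, expressions. All function names and the exact choice of base functions are illustrative assumptions, not the published architecture.

```python
import numpy as np

def eql_layer(x, W, b):
    """One illustrative EQL-style layer.

    x : (n_samples, n_in) inputs
    W : (n_in, 5) weights; columns feed identity, sin, cos,
        and the two inputs of one product unit (an assumed layout)
    b : (5,) biases
    Returns (n_samples, 4) features: [z0, sin(z1), cos(z2), z3*z4].
    """
    z = x @ W + b                      # linear stage
    ident = z[:, 0:1]                  # identity unit
    sine = np.sin(z[:, 1:2])           # sine unit
    cosine = np.cos(z[:, 2:3])         # cosine unit
    prod = z[:, 3:4] * z[:, 4:5]       # multiplication unit
    return np.hstack([ident, sine, cosine, prod])

def l1_penalty(params, lam=1e-3):
    """Sparsity regularizer added to the training loss."""
    return lam * sum(np.abs(p).sum() for p in params)

# Example: with W routing 2*x into the sine unit, the layer can
# represent sin(2x) exactly, so gradient training plus the L1 term
# can in principle recover that symbolic form.
x = np.array([[0.5]])
W = np.zeros((1, 5)); W[0, 1] = 2.0
b = np.zeros(5)
out = eql_layer(x, W, b)
print(out.shape)          # (1, 4)
print(out[0, 1])          # sin(1.0)
```

After training, weights driven to zero by the L1 term are pruned, and reading off the surviving connections yields a closed-form expression rather than a black-box predictor.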


Georg Martius leads a research group on Autonomous Learning at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He was a postdoctoral fellow at IST Austria in the groups of Christoph Lampert and Gašper Tkačik, after a postdoc at the Max Planck Institute for Mathematics in the Sciences in Leipzig. He pursues research in autonomous learning, that is, how an embodied agent can determine what to learn, how to learn, and how to judge its learning success. He uses information theory and dynamical systems theory to formulate generic intrinsic motivations that lead to coherent behavioural exploration, much like playful behaviour. He also works on machine learning methods particularly suitable for internal models and hierarchical reinforcement learning. More details can be found on his website.

Host: Katrin Solveig Lohan