A path integral approach to stochastic optimal control

Posted in Science on October 13, 2008


Many problems in machine learning use a probabilistic description. Examples are pattern recognition methods and graphical models. As a consequence of this uniform description, one can apply generic approximation methods such as mean field theory and sampling methods. Another important class of machine learning problems is reinforcement learning, also known as optimal control. Here, too, a probabilistic description is used, but so far no efficient mean field approximations have been obtained. In this presentation, I consider linear-quadratic control of an arbitrary dynamical system and show that for this class of stochastic control problems the non-linear Hamilton-Jacobi-Bellman equation can be transformed into a linear equation. The transformation is similar to the one used to relate the Schrödinger equation to the Hamilton-Jacobi formalism. The computation can be performed efficiently by means of a forward diffusion process, which can be computed by stochastic integration or described by a path integral. For this path integral, it is expected that a variational mean field approximation can be derived.
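To make the forward-diffusion idea concrete, here is a minimal sketch (not taken from the lecture) of how the linearized value function can be estimated by Monte Carlo sampling. It assumes simple 1-D dynamics dx = f(x) dt + dW with noise variance nu, a running state cost V(x), an end cost phi(x), and a temperature-like parameter lam relating noise and control cost; all function names and parameters are illustrative.

```python
import numpy as np

def psi_estimate(x0, t0, T, f, V, phi, nu, lam, dt=0.01, n_samples=1000):
    """Estimate psi(x0, t0) = E[ exp(-(1/lam) * (phi(x_T) + integral of V dt)) ]
    over the uncontrolled diffusion started at x0.  Under the log transform,
    the optimal cost-to-go is then J(x0, t0) = -lam * log(psi)."""
    n_steps = int((T - t0) / dt)
    x = np.full(n_samples, float(x0))
    path_cost = np.zeros(n_samples)
    for _ in range(n_steps):
        path_cost += V(x) * dt
        # Euler-Maruyama step of the uncontrolled (zero-control) dynamics.
        x = x + f(x) * dt + np.sqrt(nu * dt) * np.random.randn(n_samples)
    path_cost += phi(x)
    return np.mean(np.exp(-path_cost / lam))

if __name__ == "__main__":
    # Illustrative example: linear drift, no running state cost, quadratic end cost.
    lam = 0.1
    psi = psi_estimate(x0=1.0, t0=0.0, T=1.0,
                       f=lambda x: -x,
                       V=lambda x: np.zeros_like(x),
                       phi=lambda x: 0.5 * x**2,
                       nu=0.1, lam=lam)
    J = -lam * np.log(psi)  # estimated optimal cost-to-go at (x0, t0)
    print(J)
```

The point of the sketch is only that the linear equation for psi admits a Feynman-Kac-style solution: one samples paths of the uncontrolled dynamics forward in time and averages an exponentiated path cost, which is exactly the "stochastic integration of a forward diffusion" referred to in the abstract.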

Author: Bert Kappen, Radboud University Nijmegen

Watch Video

Tags: Science, Lectures, Computer Science, Machine Learning, VideoLectures.Net