Crossover random fields: A practical framework for learning and inference with graphical models and applications to computer vision and image processing problems
Graphical models, such as Markov random fields, are a powerful methodology for modeling probability distributions over large numbers of variables. In principle, these models offer a natural approach to learning and inference for many computer vision problems, such as stereo, denoising, segmentation, and image labeling. However, graphical models face severe computational problems when applied to images, because the uncertainty structure is a grid rather than a one-dimensional tree or chain.
In this talk, I will discuss a practical and efficient framework for joint learning and inference in situations where a normal graphical model would be intractable. This framework is based on two basic ideas:
- Iteratively using a series of tractable models.
- New loss functions that measure only univariate accuracy.
That is, the problem is attacked through a sequence of models, each of which is tractable. The motivating example is an image: the first model is defined over scanlines, while the next model is defined over columns, "crossing over" the first model. The results of each model can be computed efficiently by dynamic programming and are passed to the next layer. During learning, the parameters of the entire "stack" of models are simultaneously fit to give maximally accurate univariate marginal distributions.
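To make the "crossing over" idea concrete, here is a minimal sketch (not the speaker's actual implementation, and the function names and the binary-label, shared-pairwise setup are assumptions for illustration). Each row of a grid is treated as a chain whose exact marginals are computed by forward-backward dynamic programming; those row marginals then serve as the unary potentials for a second pass of chains over the columns, so evidence propagates across the grid even though every individual model is tractable:

```python
def chain_marginals(unary, pairwise):
    """Exact univariate marginals of a binary chain MRF via forward-backward.

    unary:    list of [phi_i(0), phi_i(1)] unnormalized potentials
    pairwise: 2x2 compatibility table shared by all edges
    """
    n = len(unary)
    # Forward messages: alpha[i][x] sums over all settings of x_0..x_{i-1}.
    alpha = [[1.0, 1.0] for _ in range(n)]
    for i in range(1, n):
        for x in range(2):
            alpha[i][x] = sum(alpha[i - 1][y] * unary[i - 1][y] * pairwise[y][x]
                              for y in range(2))
    # Backward messages: beta[i][x] sums over all settings of x_{i+1}..x_{n-1}.
    beta = [[1.0, 1.0] for _ in range(n)]
    for i in range(n - 2, -1, -1):
        for x in range(2):
            beta[i][x] = sum(pairwise[x][y] * unary[i + 1][y] * beta[i + 1][y]
                             for y in range(2))
    marginals = []
    for i in range(n):
        m = [alpha[i][x] * unary[i][x] * beta[i][x] for x in range(2)]
        z = sum(m)
        marginals.append([v / z for v in m])
    return marginals

def crossover_pass(grid_unary, pairwise):
    """One 'crossover' sweep: chains over rows, then chains over columns.

    The row-pass marginals become the unary potentials seen by the
    column pass, so each pass is exact dynamic programming on a chain,
    yet information flows across the whole grid.
    """
    rows, cols = len(grid_unary), len(grid_unary[0])
    row_marg = [chain_marginals(grid_unary[r], pairwise) for r in range(rows)]
    out = [[None] * cols for _ in range(rows)]
    for c in range(cols):
        col_marg = chain_marginals([row_marg[r][c] for r in range(rows)], pairwise)
        for r in range(rows):
            out[r][c] = col_marg[r]
    return out

# Toy 3x3 grid: a smoothing pairwise term and one confident observation
# at the top-left corner; the two passes spread its influence everywhere.
smooth = [[2.0, 1.0], [1.0, 2.0]]
grid = [[[1.0, 1.0] for _ in range(3)] for _ in range(3)]
grid[0][0] = [5.0, 1.0]  # strong evidence for label 0 at (0, 0)
marg = crossover_pass(grid, smooth)
```

In the talk's framework the stack is trained jointly, so the later chains learn how to use the earlier chains' marginals; the sketch above fixes the potentials by hand purely to show the flow of computation.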
This talk will include experimental results on several problems, including automatic labeling of outdoor scenes.
Speaker: Justin Domke
Google Tech Talks September 9, 2008
Justin Domke is pursuing a Ph.D. at the University of Maryland. Before coming to Maryland, he received B.S. degrees in Physics and Computer Science from Washington University in St. Louis. His research interests are efficient learning and inference with graphical models and their applications to computer vision and image processing problems.